Bio. I am a final-year PhD student in the Image and Video Computing group of the Computer Science Department at Boston University. I work with Professor Kate Saenko on Effective, Robust, Applicable (ERA) transfer learning algorithms, developing and implementing learning frameworks that transfer knowledge learned in one application domain to another. During my undergraduate studies at Peking University, I was fortunate to work with Professor Yuxin Peng on object recognition on mobile systems. Over the course of my PhD I squeezed in two internships at ICSI, Berkeley, working with Dr. Stella Yu on aerial image recognition and Professor Trevor Darrell on natural language object retrieval. I was a teaching fellow for Machine Learning (CS542) in Fall 2018. I have published several papers in top-tier conferences and have actively served as a PC member/reviewer for top-tier conferences and journals; a short summary can be found in my Curriculum Vitae and on my Google Scholar profile.

I collected two large-scale datasets to facilitate transfer learning research: VisDA (28K images) and DomainNet (0.6 million images). Together with Kate Saenko and other colleagues, I launched three Visual Domain Adaptation (VisDA) Challenges, in 2017, 2018, and 2019, based on these two datasets. It has been an enjoyable experience to see the many cool domain adaptation ideas that have come out of these challenges.

Summer 2017: Research Intern @ Neurala Inc. Generative Domain Adaptation Model.
2016-now: PhD Student @ Boston University. Large-Scale Deep Domain Adaptation for Object Recognition.
Summer 2016: Research Intern @ ICSI Lab, UC Berkeley. Natural Language Object Retrieval. Adviser: Trevor Darrell
Summer 2015: Research Intern @ ICSI Lab, UC Berkeley. Large-Scale Deep Learning for Aerial Image Recognition. Adviser: Stella Yu
2014-2016: PhD Student @ UMass Lowell. Syn2Real Deep Domain Adaptation for Object Recognition. Adviser: Kate Saenko
2012-2013: ICST Lab, Peking University. Content-Based Image Retrieval on Mobile Devices. Adviser: Yuxin Peng
2009-2013: Peking University. Majored in Computer Science.

March 2020: I am serving as a program committee member for the DIRA workshop in conjunction with CVPR 2020.
March 2020: Our VisDA 2020 challenge, in conjunction with the ECCV 2020 TASK-CV workshop, is online now!
Dec 2019: Our paper Federated Adversarial Domain Adaptation is accepted by ICLR 2020.
Oct 2019: We will organize a TASK-CV workshop in conjunction with the VisDA challenge at ICCV 2019 in Seoul, South Korea.
Sep 2019: Our VisDA 2019 challenge has ended. Congratulations to the challenge winners, JD AI Research and Lunit Inc.!
Aug 2019: We have released the ground-truth labels for VisDA 2017; check the Git Repo for details.
July 2019: Our paper Moment Matching for Multi-Source Domain Adaptation has been accepted as an oral paper at ICCV 2019.
June 2019: I presented our paper Domain Agnostic Learning With Disentangled Representations at ICML 2019, Long Beach, CA. (video)
April 2019: Our paper Domain Agnostic Learning with Disentangled Representations has been accepted as a long oral paper at ICML 2019.


Federated Adversarial Domain Adaptation
We present a principled approach to the problem of federated domain adaptation, which aims to align the representations learned among the different nodes with the data distribution of the target node. Our approach extends adversarial adaptation techniques to the constraints of the federated setting. We devise a dynamic attention mechanism and leverage feature disentanglement to enhance knowledge transfer.
X. Peng, Z. Huang, Y. Zhu, K. Saenko
ICLR 2020 (Poster)
Moment Matching for Multi-Source Domain Adaptation
We collect and annotate the largest UDA dataset to date, called DomainNet, which contains six domains and about 0.6 million images distributed among 345 categories. In addition, we propose a new deep learning approach, Moment Matching for Multi-Source Domain Adaptation (M3SDA), which aims to transfer knowledge learned from multiple labeled source domains to an unlabeled target domain by dynamically aligning moments of their feature distributions. Finally, we provide new theoretical insights specific to moment matching approaches in both single- and multiple-source domain adaptation.
X. Peng, Q. Bai, X. Xia, Z. Huang, K. Saenko, B. Wang
ICCV 2019 (Oral)
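The core moment-matching idea can be illustrated with a small, self-contained sketch. This is not the paper's implementation: the function names and the simple first/second-moment distance are illustrative assumptions. The discrepancy between two feature batches is measured via their moments, and the multi-source loss also aligns the source domains with each other, not just each source with the target.

```python
import numpy as np

def moment_discrepancy(feats_a, feats_b):
    """Distance between the first two moments of two feature batches.

    A simplified stand-in for pairwise moment alignment: smaller values
    mean the two feature distributions are more closely matched.
    """
    mean_gap = np.linalg.norm(feats_a.mean(axis=0) - feats_b.mean(axis=0))
    # Second (uncentered) moment: E[x x^T], estimated per batch.
    second_a = feats_a.T @ feats_a / len(feats_a)
    second_b = feats_b.T @ feats_b / len(feats_b)
    return mean_gap + np.linalg.norm(second_a - second_b)

def multi_source_moment_loss(source_feats, target_feats):
    """Sum moment discrepancies between the target and each source, plus
    between every pair of sources (the multi-source part)."""
    loss = sum(moment_discrepancy(s, target_feats) for s in source_feats)
    for i in range(len(source_feats)):
        for j in range(i + 1, len(source_feats)):
            loss += moment_discrepancy(source_feats[i], source_feats[j])
    return loss
```

In training, this loss would be computed on mini-batch features from each domain and minimized jointly with the classification loss on the labeled sources.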
Domain Agnostic Learning with Disentangled Representations
We propose the task of Domain-Agnostic Learning (DAL): How to transfer knowledge from a labeled source domain to unlabeled data from arbitrary target domains? To tackle this problem, we devise a novel Deep Adversarial Disentangled Autoencoder (DADA) capable of disentangling domain-specific features from class identity. We demonstrate experimentally that when the target domain labels are unknown, DADA leads to state-of-the-art performance on several image classification datasets.
Xingchao Peng, Zijun Huang, Ximeng Sun, Kate Saenko
ICML 2019 (Long Oral)
Adapting control policies from simulation to reality using a pairwise loss
This paper proposes an approach to domain transfer based on a pairwise loss function that helps transfer control policies learned in simulation onto a real robot. We explore the idea in the context of a 'category level' manipulation task where a control policy is learned that enables a robot to perform a mating task involving novel objects. We explore the case where depth images are used as the main form of sensor input. Our experimental results demonstrate that the proposed method consistently outperforms baseline methods that train only in simulation or that combine real and simulated data in a naive way.
Ulrich Viereck, Xingchao Peng, Kate Saenko, Robert Platt
ISER 2018
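The pairwise loss idea is simple enough to sketch in a few lines. This is an illustrative simplification, not the paper's code; the function name and the mean-squared form are assumptions. Features computed from paired simulated and real observations are pulled together, so a control policy trained on simulated features also behaves sensibly on real inputs.

```python
import numpy as np

def pairwise_feature_loss(sim_feats, real_feats):
    """Mean squared distance between features of paired simulated and
    real observations (e.g., depth images of the same scene).

    Minimizing this encourages the network to embed corresponding
    sim/real pairs at the same point in feature space.
    """
    assert sim_feats.shape == real_feats.shape, "loss needs matched pairs"
    diffs = sim_feats - real_feats
    return float(np.mean(np.sum(diffs ** 2, axis=1)))
```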
Synthetic to Real Adaptation with Generative Correlation Alignment Networks
In this work, we propose a Deep Generative Correlation Alignment Network (DGCAN) to synthesize images. DGCAN leverages a shape loss and a low-level statistic matching loss to minimize the domain discrepancy between synthetic and real images in deep feature space. Experimentally, we show that training off-the-shelf classifiers on the newly generated data can significantly boost performance when testing on real image domains (the PASCAL VOC 2007 benchmark and the Office dataset).
Xingchao Peng, Kate Saenko
WACV 2018
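The low-level statistic matching term is in the spirit of correlation alignment (CORAL). A minimal sketch of such a covariance-matching distance (illustrative, not DGCAN's exact loss) looks like:

```python
import numpy as np

def coral_distance(feats_syn, feats_real):
    """Squared Frobenius distance between the feature covariances of a
    synthetic batch and a real batch, with the usual CORAL-style
    1/(4 d^2) normalization. No labels are required, so it can be
    minimized on unlabeled real images.
    """
    def cov(x):
        centered = x - x.mean(axis=0, keepdims=True)
        return centered.T @ centered / (len(x) - 1)
    d = feats_syn.shape[1]
    gap = cov(feats_syn) - cov(feats_real)
    return float(np.sum(gap ** 2) / (4 * d * d))
```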
Combining Texture and Shape Cues for Object Recognition with Minimal Supervision
We propose a two-stream deep learning framework that models shape and texture cues separately: one stream learns visual texture cues from image search data, and the other learns rich shape information from 3D CAD models. Our method outperforms previous web image-based models, 3D CAD model-based approaches, and weakly supervised learning baselines.
Xingchao Peng, Kate Saenko
ACCV 2016
Learning Deep Object Detectors from 3D Models
We propose an effective deep learning approach that transfers fine-grained knowledge gained from high-resolution training data to the coarse, low-resolution test scenario. Such fine-to-coarse knowledge transfer has many real-world applications, such as identifying objects in surveillance photos or satellite images, where the image resolution at test time is very low but plenty of high-resolution photos of similar objects are available.
Xingchao Peng, Baochen Sun, Karim Ali, Kate Saenko
ICCV 2015

The template is borrowed from Andrej Karpathy.