Xu Luo

I am currently a Ph.D. student at the University of Electronic Science and Technology of China (UESTC).

I am interested in understanding how visual representations behave in the face of out-of-distribution tasks with limited labeled data, and in developing new algorithms that enable rapid model adaptation. This direction connects several fields, including visual representation learning, few-shot learning, meta-learning, model robustness, and transfer learning.

Email  /  CV  /  Google Scholar  /  Github

News

  • [2023/07] One paper was accepted to ICCV'23.
  • [2023/04] One paper was accepted to ICML'23.
  • [2022/09] One paper was accepted to NeurIPS'22.
  • [2022/05] One paper was accepted to ICML'22.
  • [2021/09] One paper was accepted to NeurIPS'21.

Publications
    Less is More: On the Feature Redundancy of Pretrained Models When Transferring to Few-shot Tasks
    Xu Luo, Difan Zou, Lianli Gao, Zenglin Xu, Jingkuan Song
    arXiv, 2023
    [PDF]

    Uncovering and analyzing an extreme feature redundancy phenomenon of pretrained vision models when transferring to few-shot tasks.

    DETA: Denoised Task Adaptation for Few-shot Learning
    Ji Zhang, Lianli Gao, Xu Luo, Hengtao Shen, Jingkuan Song
    ICCV, 2023
    [PDF] [Code]

    Proposing DETA, a framework that addresses potential data and label noise in downstream few-shot transfer tasks.

    A Closer Look at Few-shot Classification Again
    Xu Luo*, Hao Wu*, Ji Zhang, Lianli Gao, Jing Xu, Jingkuan Song
    ICML, 2023
    [PDF] [Code]

    Empirically demonstrating that training and adaptation algorithms in few-shot classification can be disentangled, and analyzing each phase separately, which leads to several important observations.

    Alleviating the Sample Selection Bias in Few-shot Learning by Removing Projection to the Centroid
    Jing Xu, Xu Luo, Xinglin Pan, Yanan Li, Wenjie Pei, Zenglin Xu
    NeurIPS, 2022   (Spotlight)
    [PDF] [Code]

    Revealing a strong bias caused by the centroid of features in each few-shot learning task. A simple method rectifies this bias by removing the feature dimension along the direction of the task centroid.

    Channel Importance Matters in Few-Shot Image Classification
    Xu Luo, Jing Xu, Zenglin Xu
    ICML, 2022
    [PDF] [Code]

    Revealing and analyzing the channel bias problem, which we find critical in few-shot learning, via a simple channel-wise feature transformation applied only at test time.

    Rectifying the Shortcut Learning of Background for Few-Shot Learning
    Xu Luo, Longhui Wei, Liangjian Wen, Jinrong Yang, Lingxi Xie, Zenglin Xu, Qi Tian
    NeurIPS, 2021
    [PDF] [Code]

    Identifying image background as shortcut knowledge that does not generalize beyond training categories in few-shot learning. A novel framework, COSOC, is designed to tackle this problem.

    Boosting Few-Shot Classification with View-Learnable Contrastive Learning
    Xu Luo, Yuxuan Chen, Liangjian Wen, Lili Pan, Zenglin Xu
    ICME, 2021
    [PDF] [Code]

    Applying contrastive learning to few-shot learning, with views generated in a learning-to-learn fashion.

    Academic service


    Conference reviewer

  • NeurIPS 2023
  • ICML 2022, 2024
  • ICLR 2024
  • CVPR 2023, 2024
  • ICCV 2023
  • ECCV 2022, 2024
  • CoLLAs 2023, 2024
  • AAAI 2023

    Journal reviewer

  • IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)
  • IEEE Transactions on Image Processing (TIP)

    This well-designed template is borrowed from this guy.