Bio
I am an Assistant Professor (Principal Investigator, Ph.D. supervisor) in the Department of Statistics and Data Science at SUSTech (Shenzhen, China). I obtained my Ph.D. degree from the School of Computer Science and Engineering, Nanyang Technological University, supervised by Prof. Bo An.
During my Ph.D., I was fortunate to work as a visiting scholar in the group of Prof. Sharon Yixuan Li at the University of Wisconsin-Madison in 2022.
Previously, I spent a wonderful year as a research assistant in the Institute for Interdisciplinary Information Sciences at Tsinghua University.
Prior to that, I received my B.E. in Software Engineering from Huazhong University of Science and Technology in 2016.
My research interests lie in reliable machine learning (uncertainty estimation) and its applications in data optimization and privacy.
Generally, we expect deep learning models to produce precise estimates of their predictive uncertainty, in the form of probabilities (confidence scores) or conformal prediction sets (see the sketch below).
In addition, my research is closely related to data-centric machine learning and foundation models, including data quality and efficiency. We are also interested in the statistical theory of data selection, which provides theoretical principles to guide data optimization in machine learning workflows.
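To make the idea of conformal prediction sets concrete, here is a minimal sketch of split conformal prediction for classification, written in plain NumPy rather than TorchCP; the function and variable names (split_conformal_sets, calib_probs, alpha, etc.) are illustrative placeholders, not part of any library API.

import numpy as np

def split_conformal_sets(calib_probs, calib_labels, test_probs, alpha=0.1):
    """Build prediction sets targeting (1 - alpha) marginal coverage (illustrative sketch)."""
    n = len(calib_labels)
    # Nonconformity score: one minus the softmax probability of the true label.
    scores = 1.0 - calib_probs[np.arange(n), calib_labels]
    # Finite-sample corrected quantile of the calibration scores.
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    qhat = np.quantile(scores, q_level, method="higher")
    # Each test example's set keeps every class whose score stays within the threshold.
    return [np.where(1.0 - probs <= qhat)[0] for probs in test_probs]

The score 1 - p_y used here is only the simplest choice; tighter prediction sets typically require refined scores, such as the label-ranking-based score studied in our ICML 2024 paper listed below.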
We are always actively looking for postdocs, Ph.D. students, and RAs/interns to join our research.
News
September 2024
Two papers were accepted by NeurIPS 2024. Congratulations to Hongfu Gao!
May 2024
I am accepting MPhil and Ph.D. applications (Fall 2025). I am always looking for highly motivated research interns, RAs, and postdocs to join our research (refer to this page).
May 2024
Four papers were accepted by ICML 2024.
March 2024
I will be serving as an Area Chair for NeurIPS 2024.
January 2024
Three papers were accepted by ICLR 2024 (two Spotlights).
December 2023
We released TorchCP, a Python toolbox for conformal prediction research.
May 2022
Two papers were accepted by ICML 2022 (acceptance rate: 21.9%).
Working papers
C-Adapter: Adapting Deep Classifiers for Efficient Conformal Prediction Sets
Kangdao Liu, Hao Zeng, Jianguo Huang, Huiping Zhuang, Chi-Man Vong, Hongxin Wei *
Fine-tuning can Help Detect Pretraining Data from Large Language Models
Hengxiang Zhang, Songxin Zhang, Bingyi Jing, Hongxin Wei *
Defending Membership Inference Attacks via Privacy-aware Sparsity Tuning
Qiang Hu, Hengxiang Zhang, Hongxin Wei *
Understanding and Mitigating Miscalibration in Prompt Tuning for Vision-Language Models
Shuoyuan Wang, Yixuan Li, Hongxin Wei *
TorchCP: A Library for Conformal Prediction based on PyTorch
Hongxin Wei, Jianguo Huang
Does Confidence Calibration Help Conformal Prediction?
Huajun Xi, Jianguo Huang, Lei Feng, Hongxin Wei *
Exploring Learning Complexity for Downstream Data Pruning
Wenyu Jiang, Zhenlong Liu, Zejian Xie, Songxin Zhang, Bingyi Jing, Hongxin Wei *
MetaInfoNet: Learning Task-Guided Information for Sample Reweighting
Hongxin Wei, Lei Feng, Rundong Wang, Bo An
Selected Publications
On the Noise Robustness of In-Context Learning for Text Generation
NeurIPS 2024
Hongfu Gao, Feipeng Zhang, Wenyu Jiang, Jun Shu, Feng Zheng, Hongxin Wei*
GACL: Exemplar-Free Generalized Analytic Continual Learning
NeurIPS 2024
Huiping Zhuang, Yizhu Chen, Di Fang, Run He, Kai Tong, Hongxin Wei, Ziqian Zeng, Cen Chen
Open-Vocabulary Calibration for Vision-Language Models
ICML 2024
Shuoyuan Wang, Jindong Wang, Guoqing Wang, Bob Zhang, Kaiyang Zhou, Hongxin Wei *
Conformal Prediction for Deep Classifier via Label Ranking
ICML 2024
Jianguo Huang, Huajun Xi, Linjun Zhang, Huaxiu Yao, Yue Qiu, Hongxin Wei *
Mitigating Privacy Risk in Membership Inference by Convex-Concave Loss
ICML 2024
Zhenlong Liu, Lei Feng, Huiping Zhuang, Xiaofeng Cao, Hongxin Wei *
Towards Minimal Coreset Size under Model Performance Constraints
ICML 2024 (Spotlight)
Xiaobo Xia, Jiale Liu, Shaokun Zhang, Qingyun Wu, Hongxin Wei, Tongliang Liu
CroSel: Cross Selection of Confident Pseudo Labels for Partial-Label Learning
CVPR 2024 (Oral)
Shiyu Tian, Hongxin Wei, Yiqun Wang, Lei Feng
DOS: Diverse Outlier Sampling for Out-of-Distribution Detection
ICLR 2024
Wenyu Jiang, Hao Cheng, MingCai Chen, Chongjun Wang, Hongxin Wei *
Understanding and Mitigating the Label Noise in Pre-training on Downstream Tasks
ICLR 2024 (Spotlight)
Hao Chen, Jindong Wang, Ankit Shah, Ran Tao, Hongxin Wei, Xing Xie, Masashi Sugiyama, Bhiksha Raj
Consistent Multi-Class Classification from Multiple Unlabeled Datasets
ICLR 2024 (Spotlight)
Zixi Wei, Senlin Shu, Yuzhou Cao, Hongxin Wei, Bo An, Lei Feng
Optimization-Free Test-Time Adaptation for Cross-Person Activity Recognition
IMWUT/UbiComp 2024
Shuoyuan Wang, Jindong Wang, Huajun Xi, Bob Zhang, Lei Zhang, Hongxin Wei
On the Importance of Feature Separability in Predicting Out-Of-Distribution Error
NeurIPS 2023
Renchunzi Xie, Hongxin Wei *, Lei Feng, Yuzhou Cao, Bo An
In Defense of Softmax Parametrization for Calibrated and Consistent Learning to Defer
NeurIPS 2023
Yuzhou Cao, Hussein Mozannar, Lei Feng, Hongxin Wei, Bo An
Regression with Cost-based Rejection
NeurIPS 2023
Xin Cheng, Yuzhou Cao, Haobo Wang, Hongxin Wei, Bo An, Lei Feng
Mitigating Memorization of Noisy Labels by Clipping the Model Prediction
ICML 2023
Hongxin Wei, Huiping Zhuang, Renchunzi Xie, Lei Feng, Gang Niu, Bo An, Yixuan Li
A Generalized Unbiased Risk Estimator for Learning with Augmented Classes
AAAI 2023
Senlin Shu, Shuo He, Haobo Wang, Hongxin Wei, Tao Xiang, Lei Feng
Can Adversarial Training Be Manipulated By Non-Robust Features?
NeurIPS 2022
Analytic Class-Incremental Learning with Absolute Memorization and Privacy Protection
NeurIPS 2022
Huiping Zhuang, Zhenyu Weng, Hongxin Wei, Renchunzi Xie, Kar-Ann Toh, Zhiping Lin
Mitigating Neural Network Overconfidence with Logit Normalization
ICML 2022
Open-Sampling: Exploring Out-of-Distribution Data for Re-balancing Long-tailed Datasets
ICML 2022
Deep Learning from Multiple Noisy Annotators as A Union
IEEE Transactions on Neural Networks and Learning Systems (TNNLS)
Hongxin Wei, Renchunzi Xie, Lei Feng, Bo An
GearNet: Stepwise Dual Learning for Weakly Supervised Domain Adaptation
AAAI 2022
Renchunzi Xie, Hongxin Wei *, Lei Feng, Bo An
Open-set Label Noise Can Improve Robustness Against Inherent Label Noise
NeurIPS 2021
Hongxin Wei, Lue Tao, Renchunzi Xie, Bo An
Multiple-Instance Learning from Similar and Dissimilar Bags
SIGKDD 2021
Lei Feng, Senlin Shu, Yuzhou Cao, Lue Tao, Hongxin Wei, Tao Xiang, Bo An, Gang Niu
Commission Fee is not Enough: A Hierarchical Reinforced Framework for Portfolio Management
AAAI 2021
Rundong Wang †, Hongxin Wei †, Bo An, Zhouyan Feng, Jun Yao
Embedding-Augmented Generalized Matrix Factorization for Recommendation with Implicit Feedback
IEEE Intelligent Systems (IEEE-IS)
Lei Feng, Hongxin Wei *, Qingyu Guo, Zhuoyi Lin, Bo An
Combating Noisy Labels by Agreement: A Joint Training Method with Co-Regularization
CVPR 2020
Hongxin Wei, Lei Feng, Xiangyu Chen, Bo An
Research Group
Hao Zeng
Postdoc (co-supervised with Prof. Bingyi Jing)
Ph.D. degree from Xiamen University
Shuoyuan Wang
Ph.D. student
Master's degree from the University of Macau
Hengxiang Zhang
Ph.D. student
Master's degree from UESTC
Zhenlong Liu
Ph.D. student
Bachelor's degree from SUSTech
Beier Luo
Master student
Bachelor's degree from SUSTech
Cong Ding
Master student
Bachelor's degree from SUSTech
Wenyu Jiang
Research Intern
Ph.D. student at Nanjing University
Kangdao Liu
Research Intern
Ph.D. student at University of Macau
Hongfu Gao
Research Intern
Master's student at XJTU
Jianqing Song
Research Intern
Ph.D. student at Nanjing University
Zhile Xu
Research Intern
Master's student at the University of Edinburgh
Ziyuan Wang
Research Intern (Remote)
Master's student at Chalmers University of Technology
Qiqi Tao
Research Intern
Master's degree from NUS, Singapore
Mi Zhou
Research Intern
Master's student at Fudan University
Xuanning Zhou
Undergraduate student at HITSZ
Huajun Xi
Undergraduate student at SUSTech
Zicheng Xie
Undergraduate student at SUSTech
Yunxin Huang
Undergraduate student at SUSTech
Qiang Hu
Undergraduate student at SUSTech
Past Members
Jianguo Huang
Research Intern (2023-2024)
Now a Ph.D. student at NTU, Singapore