Mingjia (Samuel Jayden) Shi’s Homepage.

This is the academic homepage (CV) of Mingjia (Samuel Jayden) Shi, 石明佳 in Chinese. An ENFP who shouldn’t say too much, but especially wants to. If the one in the sidebar looks too casual, I have a formal version. Please focus more on the content below than on this intro! Feel free to contact me for collaboration, discussion, or just to say hi!

👨‍🎓 Biography

  • My Ph.D. career has begun in the VAST Lab at the University of Virginia, with the greatest advisor in all of mankind, Jundong Li.
  • Internship at HoumoAI (ended July 2025), researching topics related to real-world and industrial applications (e.g., resource-preserving AI), which brought me new challenges.
  • My 2nd year (as of 2024) as an intern student in the NUS HPC-Lab, where I enjoy the challenges and interesting topics (e.g., resource preserving and efficiency).
  • From Sep. 2021 to June 2024, I completed my master’s degree at Sichuan University, majoring in Artificial Intelligence (Data Science), right where I had completed my four-year bachelor’s degree before, supervised by Prof. Jiancheng Lv. My main research areas at Sichuan University were federated learning (i.e., decentralized data analysis and large systems).

🎉 Latest News

  • [Aug. 25] My Ph.D. career in computer engineering has begun at UVa! N.A. proposals, papers, and all other academic collaborations are welcome.
Older news (before 2025)
[Jan. 25] My Fall 2025 PhD and gap-year internship are decided. Interesting collaborations are still welcome.
[Dec. 24] Waiting for Fall 2025 PhD decisions and working on gap-year projects.
[Aug. 24] Actively applying for a Fall 2025 PhD! If you are interested in a student familiar with theoretical analysis and generative models, with extensive industry experience as well, feel free to mail!

👣 Current Career

Settled down at the University of Virginia, Charlottesville. Charlottesville is a beautiful town with a great natural and academic environment. I am now exploring new research directions for my Ph.D. career. Proposal collaborations are all welcome.

🔍 Research Interests

My research focuses on Resource Preserving and …

  • 🖐️ Works in hand. My work consists mainly of theoretical analyses and corresponding methods for resource preserving, with multi-modal learning and efficiency as current trends. Apart from the collaborative works begun before my enrollment at UVa, which remain in progress, all future works and collaborations are in the U.S. or other non-sensitive countries.
  • 🎓 Overall background. A knowledge and research background in math + CS: system/control/information theories, deep learning theories, and optimization and generalization analyses.
  • 🌟 More interests. I have a continuing interest in technical research as well as basic science research. Physics and other science disciplines are always beautiful.

📄 Selected Publications

📅 2025:

  1. Arxiv 2025 Drag-and-Drop LLMs: Zero-Shot Prompt-to-Weights. Z. Liang, D. Tang, Y. Zhou, X. Zhao, M. Shi, W. Zhao, Z. Li, P. Wang, K. Schürholt, D. Borth, M. M. Bronstein, Y. You, Z. Wang, K. Wang (paper)
  2. Arxiv 2025 DD-ranking: Rethinking the evaluation of dataset distillation. Z. Li, X. Zhong, S. Khaki, Z. Liang, Y. Zhou, M. Shi, Z. Wang, X. Zhao, W. Zhao, Z. Qin, M. Wu, P. Zhou, H. Wang, D. J. Zhang, J. Liu, S. Wang, D. Liu, L. Zhang, G. Li, K. Wang, Z. Zhu, Z. Ma, J. T. Zhou, J. Lv, Y. Jin, P. Wang, K. Zhang, L. Lyu, Y. Huang, Z. Akata, Z. Deng, X. Wu, G. Cazenavette, Y. Shang, J. Cui, J. Gu, Q. Zheng, H. Ye, S. Wang, X. Wang, Y. Yan, A. Yao, M. Z. Shou, T. Chen, H. Bilen, B. Mirzasoleiman, M. Kellis, K. N. Plataniotis, Z. Wang, B. Zhao, Y. You, K. Wang (paper)
  3. Arxiv 2025 REPA Works Until It Does not: Early-Stopped, Holistic Alignment Supercharges Diffusion Training. Z. Wang, W. Zhao, Y. Zhou, Z. Li, Z. Liang, M. Shi, X. Zhao, P. Zhou, K. Zhang, Z. Wang, K. Wang, Y. You (paper)
  4. Arxiv 2025 Make Optimization Once and for All with Fine-grained Guidance. M. Shi, R. Lin, X. Chen, Y. Zhou, Z. Ding, P. Li, T. Wang, K. Wang, Z. Wang, J. Zhang, T. Chen (paper)
  5. CVPR 2025 A Closer Look at Time Steps is Worthy of Triple Speed-Up for Diffusion Model Training. K. Wang*, M. Shi*, Y. Zhou, Z. Li, Z. Yuan, Y. Shang, X. Peng, H. Zhang, Y. You (paper, code)
  6. CVPR 2025 Ferret: An Efficient Online Continual Learning Framework under Varying Memory Constraints. Y. Zhou, Y. Tian, J. Lv, M. Shi, Y. Li, Q. Ye, S. Zhang, J. Lv (paper)
  7. TNNLS E-3SFC: Communication-Efficient Federated Learning With Double-Way Features Synthesizing. Y. Zhou, Y. Tian, M. Shi, Y. Li, Y. Sun, Q. Ye, J. Lv (paper)
  8. ACL 2025 Findings GSQ-Tuning: Group-Shared Exponents Integer in Fully Quantized Training for LLMs On-Device Fine-tuning. S. Zhou, S. Wang, Z. Yuan*, M. Shi, Y. Shang, D. Yang (paper)

📅 Early Selected:

Released Pre-Print

  1. Arxiv 2024 Faster Vision Mamba is Rebuilt in Minutes via Merged Token Re-training. M. Shi*, Y. Zhou*, R. Yu, Z. Li, Z. Liang, X. Zhao, X. Peng, T Rajpurohit, R. Vedantam, W. Zhao, K. Wang, Y. You. (paper, code, page)
  2. Arxiv 2024 Tackling Feature-Classifier Mismatch in Federated Learning via Prompt-Driven Feature Transformation. X. Wu, J. Niu, X. Liu, M. Shi, G. Zhu, S. Tang (paper)

Conference

  1. ICASSP 2024 Federated CINN Clustering for Accurate Clustered Federated Learning. Y. Zhou, M. Shi, Y. Tian, Y. Li, Q. Ye, J. Lv (paper)
  2. NeurIPS 2023 PRIOR: Personalized Prior for Reactivating the Information Overlooked in Federated Learning. M. Shi, Y. Zhou, K. Wang, H. Zhang, S. Huang, Q. Ye, J. Lv (paper, code)
  3. ICONIP 2023 Unconstrained Feature Model and Its General Geometric Patterns in Federated Learning: Local Subspace Minority Collapse. M. Shi, Y. Zhou, Q. Ye, J. Lv (paper)
  4. ICCV 2023 Communication-efficient Federated Learning with Single-Step Synthetic Features Compressor for Faster Convergence. Y. Zhou, M. Shi, Y. Li, Y. Sun, Q. Ye, J. Lv (paper)

Journal

  1. InfoSci DeFTA: A Plug-and-Play Peer-to-Peer Decentralized Federated Learning Framework. Y. Zhou, M. Shi, Y. Tian, Q. Ye, J. Lv (paper)
  2. Trans.ETCI DLB: a dynamic load balance strategy for distributed training of deep neural networks. Q. Ye, Y. Zhou, M. Shi, Y. Sun, J. Lv (paper)
  3. JoSc FLSGD: free local SGD with parallel synchronization. Q. Ye, Y. Zhou, M. Shi, J. Lv (paper)

Please see the full list on Google Scholar.

🛎 Services

Reviewer: NeurIPS, ICLR, ICML, CVPR, ICCV, MM, ICPP, ICASSP, ICONIP; TNNLS, TCSVT, TPDS, TKDE, Neurocomputing, and others.

🎈 Hobbies

📷 Photography, ⛺ Travel, 🎵 Music, 🏸 Badminton, ⚽ Football (Soccer), ♟️ Chess and other Table-top Games, 🎱 Billiards, 🎮 Games, 🔬 Studying interesting topics.