Drag-and-Drop LLMs: Zero-Shot Prompt-to-Weights. arXiv. Z. Liang, D. Tang, Y. Zhou, X. Zhao, M. Shi, W. Zhao, Z. Li, P. Wang, K. Schürholt, D. Borth, M. M. Bronstein, Y. You, Z. Wang, K. Wang
A Closer Look at Time Steps is Worthy of Triple Speed-Up for Diffusion Model Training. CVPR 2025. K. Wang*, M. Shi*, Y. Zhou*, Z. Yuan, Y. Shang, X. Peng, H. Zhang, Y. You
Ferret: An Efficient Online Continual Learning Framework under Varying Memory Constraints. CVPR 2025. Y. Zhou, Y. Tian, J. Lv, M. Shi, Y. Li, Q. Ye, S. Zhang, J. Lv
REPA Works Until It Doesn't: Early-Stopped, Holistic Alignment Supercharges Diffusion Training. arXiv. Z. Wang, W. Zhao, Y. Zhou, Z. Li, Z. Liang, M. Shi, X. Zhao, P. Zhou, K. Zhang, Z. Wang, K. Wang, Y. You
DD-Ranking: Rethinking the Evaluation of Dataset Distillation. arXiv. Z. Li, X. Zhong, S. Khaki, Z. Liang, Y. Zhou, M. Shi, Z. Wang, X. Zhao, W. Zhao, Z. Qin, M. Wu, P. Zhou, H. Wang, D. J. Zhang, J. Liu, S. Wang, D. Liu, L. Zhang, G. Li, K. Wang, Z. Zhu, Z. Ma, J. T. Zhou, J. Lv, Y. Jin, P. Wang, K. Zhang, L. Lyu, Y. Huang, Z. Akata, Z. Deng, X. Wu, G. Cazenavette, Y. Shang, J. Cui, J. Gu, Q. Zheng, H. Ye, S. Wang, X. Wang, Y. Yan, A. Yao, M. Z. Shou, T. Chen, H. Bilen, B. Mirzasoleiman, M. Kellis, K. N. Plataniotis, Z. Wang, B. Zhao, Y. You, K. Wang
Make Optimization Once and for All with Fine-grained Guidance. arXiv. M. Shi, R. Lin, X. Chen, Y. Zhou, Z. Ding, P. Li, T. Wang, K. Wang, Z. Wang, J. Zhang, T. Chen
GSQ-Tuning: Group-Shared Exponents Integer in Fully Quantized Training for LLMs On-Device Fine-tuning. arXiv. S. Zhou*, S. Wang*, Z. Yuan*, M. Shi, Y. Shang, D. Yang
Faster Vision Mamba is Rebuilt in Minutes via Merged Token Re-training. arXiv. M. Shi*, Y. Zhou*, R. Yu, Z. Li, Z. Liang, X. Zhao, X. Peng, T. Rajpurohit, R. Vedantam, W. Zhao, K. Wang, Y. You
Tackling Feature-Classifier Mismatch in Federated Learning via Prompt-Driven Feature Transformation. arXiv. X. Wu, J. Niu, X. Liu, M. Shi, G. Zhu, S. Tang
Federated CINN Clustering for Accurate Clustered Federated Learning. ICASSP 2024. Y. Zhou, M. Shi, Y. Tian, Y. Li, Q. Ye, J. Lv
DeFTA: A Plug-and-Play Peer-to-Peer Decentralized Federated Learning Framework. Information Sciences. Y. Zhou, M. Shi, Y. Tian, Q. Ye, J. Lv
PRIOR: Personalized Prior for Reactivating the Information Overlooked in Federated Learning. NeurIPS 2023. M. Shi, Y. Zhou, K. Wang, H. Zhang, S. Huang, Q. Ye, J. Lv
Unconstrained Feature Model and Its General Geometric Patterns in Federated Learning: Local Subspace Minority Collapse. ICONIP 2023. M. Shi, Y. Zhou, Q. Ye, J. Lv
Communication-efficient Federated Learning with Single-Step Synthetic Features Compressor for Faster Convergence. ICCV 2023. Y. Zhou, M. Shi, Y. Li, Y. Sun, Q. Ye, J. Lv
DLB: A Dynamic Load Balance Strategy for Distributed Training of Deep Neural Networks. Trans. ETCI. Q. Ye, Y. Zhou, M. Shi, Y. Sun, J. Lv
FLSGD: Free Local SGD with Parallel Synchronization. Journal of Supercomputing. Q. Ye, Y. Zhou, M. Shi, J. Lv