Code for the NeurIPS 2025 paper "Learning Multi-Source and Robust Representations for Continual Learning".
Continual learning models must strike a delicate balance between plasticity (learning new tasks effectively) and stability (retaining previous knowledge). Although many recent methods utilize pre-trained backbones to improve stability, they largely rely on a single backbone, limiting adaptiveness and representation richness.
LMSRR introduces a multi-source, dynamically optimized representation framework, combining multiple heterogeneous pre-trained models with a novel set of optimization strategies, yielding robust and adaptive features for continual learning.
LMSRR contains three major components:
- Dynamic multi-scale fusion (MSIDF): interacts multi-source features across scales and learns task-relevant feature selection via attention modules.
- Adaptive multi-level optimization (MLRO): dynamically refines backbone layers, improving plasticity while preserving critical representations.
- Layerwise adaptive regularization (ARO): learns a switch vector that controls layerwise updating, avoiding over-regularization and improving new-task learning.
Together, these form a unified optimization framework offering a strong trade-off between stability and plasticity.
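The fusion idea above can be illustrated with a minimal NumPy sketch: features from several heterogeneous backbones (assumed already projected to a common dimension) are combined by attention weights that score each source's task relevance. The function names and shapes here are illustrative assumptions, not the repository's API; the actual implementation is in `models/lmsrr.py`.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_multi_source(feats, scores):
    """Attention-weighted fusion of per-source feature vectors.

    feats:  (num_sources, dim) features from heterogeneous backbones,
            assumed already projected to a shared dimension.
    scores: (num_sources,) task-relevance logits, e.g. produced by a
            small attention module over the features (hypothetical here).
    """
    w = softmax(scores)                  # (num_sources,) attention weights
    return (w[:, None] * feats).sum(0)   # (dim,) fused representation

# toy example: two sources with 4-dim features
feats = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0]])
scores = np.array([2.0, 0.0])            # source 0 deemed more task-relevant
fused = fuse_multi_source(feats, scores)
```

In this toy case the fused vector leans toward source 0, since its relevance logit is higher; the real module learns such scores per task rather than fixing them by hand.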
Key features:
- Multi-source representation learning via coordinated pre-trained backbones.
- Dynamic multi-scale fusion (MSIDF) capturing cross-source semantic complementarities.
- Adaptive multi-level optimization (MLRO) improving plasticity.
- Layerwise adaptive regularization (ARO) preventing catastrophic forgetting.
- State-of-the-art performance across standard continual learning benchmarks.
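The layerwise switch vector behind ARO can be sketched as a gated regularizer: a sigmoid gate per layer decides how strongly that layer is anchored to its previous-task weights. This is a minimal NumPy sketch assuming an L2 anchor and a learned per-layer gate; the names are hypothetical and do not mirror the repository's code.

```python
import numpy as np

def gated_reg_penalty(params, old_params, gate_logits):
    """Layerwise regularization controlled by a learned switch vector.

    params, old_params: lists of per-layer weight arrays (current values
    vs. a snapshot from the previous task).
    gate_logits: (num_layers,) logits; sigmoid(gate_logits) lies in (0, 1).
    Layers with a gate near 1 are held close to their old weights
    (stability); layers with a gate near 0 stay free to adapt
    (plasticity), which avoids over-regularizing every layer.
    """
    gates = 1.0 / (1.0 + np.exp(-gate_logits))   # per-layer switch in (0, 1)
    return sum(g * np.sum((p - q) ** 2)
               for g, p, q in zip(gates, params, old_params))

# toy example: two layers, only the first strongly anchored
params     = [np.array([1.0, 1.0]), np.array([2.0, 2.0])]
old_params = [np.array([0.0, 0.0]), np.array([0.0, 0.0])]
penalty = gated_reg_penalty(params, old_params, np.array([10.0, -10.0]))
```

Here the second layer's drift contributes almost nothing to the penalty, so it can specialize on the new task while the first layer stays anchored.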
conda create -n LMSRR4CL python=3.10
conda activate LMSRR4CL
pip install -r requirements.txt

cd LMSRR
bash command/lmsrr_cifar10.sh
bash command/lmsrr_cifar100.sh

LMSRR/
├── backbone/ # Pre-trained backbone models
│ ├── lmsrr.py # LMSRR backbone implementation
│ └── ...
├── command/ # Training scripts
├── datasets/ # Dataset loaders
│ └── ...
├── models/ # Method implementations
│ ├── lmsrr.py # LMSRR method implementation
│ └── <baseline>.py # ER, DER++, etc.
├── utils/ # Helper tools
│ └── ...
├── main.py # Main training entry
├── requirements.txt
└── README.md

If you find this repository helpful, please cite our paper:
@inproceedings{ye2025lmsrr,
  title={Learning Multi-Source and Robust Representations for Continual Learning},
  author={Ye, Fei and Zhong, Yongcheng and Liu, Qihe and Bors, Adrian G. and Hu, Rongyao and others},
  booktitle={Proceedings of the 39th Conference on Neural Information Processing Systems (NeurIPS)},
  year={2025}
}
This project is built upon the excellent continual learning framework Mammoth. We sincerely thank the authors for open-sourcing their work.
