# Bridging Pretraining–Downstream Task Misalignment in EEG Foundation Models via Test-Time Training
NeuroTTT is a novel framework that bridges the gap between pretrained EEG foundation models and downstream tasks through Test-Time Training (TTT). Our approach addresses the fundamental challenge of domain misalignment in EEG foundation models by introducing:
- Domain-specific self-supervised fine-tuning that augments foundation models with task-relevant objectives
- Test-time training for individual unlabeled test samples during inference
- Prediction entropy minimization (Tent) for continual model calibration
The framework integrates multiple state-of-the-art components:
- CBraMod: A Criss-Cross Brain Foundation Model for EEG decoding
- Tent: Fully test-time adaptation by entropy minimization
- Multiple pretext tasks: Band filtering, temporal ordering, channel masking, and more
Installation | Quick Start | Datasets | Training | Test-Time Adaptation | Documentation | Citation
If you use NeuroTTT in your research, please cite:
```bibtex
@misc{wang2025neurotttbridgingpretrainingdownstreamtask,
      title={NeuroTTT: Bridging Pretraining-Downstream Task Misalignment in EEG Foundation Models via Test-Time Training},
      author={Suli Wang and Yangshen Deng and Zhenghua Bao and Xinyu Zhan and Yiqun Duan},
      year={2025},
      eprint={2509.26301},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2509.26301},
}
```

Key features:

- Foundation Model Integration: Built on CBraMod, a state-of-the-art EEG foundation model
- Test-Time Training: Self-supervised adaptation during inference without labeled data
- Multiple Pretext Tasks: Band filtering, temporal ordering, channel masking, phase prediction
- Comprehensive Dataset Support: Multiple EEG datasets across three domains
- Efficient Implementation: Optimized for both research and practical applications
- Flexible Configuration: YAML-based configuration and uv-based environment setup for easy experimentation
Prerequisites:

- Python 3.10+
- CUDA-compatible GPU (recommended)
- uv package manager
- Install the uv package manager:

  ```bash
  curl -LsSf https://astral.sh/uv/install.sh | sh
  ```

- Clone the repository:

  ```bash
  git clone https://github.com/wsl2000/NeuroTTT.git
  cd NeuroTTT
  ```

- Initialize the environment:

  ```bash
  bash init_env.sh
  ```

- Activate the environment:

  ```bash
  source .venv/bin/activate
  ```

Set the dataset and weight paths used by the commands below (point `DATASET_PROCESSED_ROOT` at your preprocessed data directory):

```bash
DATASET_PROCESSED_SPEECH="${DATASET_PROCESSED_ROOT}BCIC2020-3/"
PRETRAINED_WEIGHTS="./CBraMod/pretrained_weights/pretrained_weights.pth"
MODEL_WEIGHTS_ROOT="./model_weights/BCIC2020-3/"
SPLIT_CONFIG="./configs/speech_dataset_default.yaml"
```
```bash
# Fine-tune on BCIC2020-3 dataset
python CBraMod/finetune_main.py \
  --epochs 50 \
  --cuda 0 \
  --seed 8888 \
  --batch_size 64 \
  --lr 5e-4 \
  --split_config ${SPLIT_CONFIG} \
  --multi_lr 0 \
  --weight_decay 5e-2 \
  --dropout 0.1 \
  --downstream_dataset BCIC2020-3 \
  --datasets_dir ${DATASET_PROCESSED_SPEECH} \
  --num_of_classes 5 \
  --model_dir ${MODEL_WEIGHTS_ROOT} \
  --use_pretrained_weights True \
  --foundation_dir ${PRETRAINED_WEIGHTS} \
  --classifier all_patch_reps \
  --pretext none
```

NeuroTTT provides built-in support for three EEG datasets; you can extend it to additional datasets, and CBraMod's original dataset support is retained for convenience.

Built-in datasets:
- BCIC2020-3: Imagined Speech Classification (5 classes)
- BCIC-IV-2a: Motor imagery classification (4 classes)
- MentalArithmetic: Mental workload (binary classification)
Retained from CBraMod:

- CHB-MIT: Seizure detection (binary classification)
- TUAB: Abnormal EEG detection (binary classification)
- TUEV: EEG evaluation (multi-class)
- ISRUC: Sleep stage classification (5 stages)
- SEED-V: Emotion recognition (5 emotions)
- SEED-VIG: Vigilance estimation (regression)
- FACED: Emotion recognition (multi-class)
- Mumtaz2016: Depression detection (binary)
- Stress Dataset: Stress level classification
Fine-tune with self-supervised pretext tasks:

```bash
# With band filtering pretext task
python CBraMod/finetune_main.py \
  --downstream_dataset MentalArithmetic \
  --pretext band \
  --pretext_weight_band 0.1 \
  --epochs 20 \
  --lr 1e-4

# With multiple pretext tasks
python CBraMod/finetune_main.py \
  --downstream_dataset BCIC-IV-2a \
  --pretext all \
  --pretext_weight_band 0.2 \
  --pretext_weight_temporal 0.6 \
  --epochs 20
```

Available pretext tasks:

- `band`: Frequency band filtering and reconstruction
- `temporal`: Temporal order prediction
- `channel`: Channel masking and reconstruction
- `phase`: Phase prediction tasks
- `reverse`: Reverse sequence prediction
- `all`: Combination of multiple pretext tasks, weighted via `--pretext_weight_*` (see the sketch below)
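For intuition, the `--pretext_weight_*` flags scale each auxiliary loss before it is added to the supervised objective. Below is a minimal sketch of that weighting; the function and dictionary names are illustrative, not the repository's actual API:

```python
import torch

def combined_finetune_loss(cls_loss: torch.Tensor,
                           pretext_losses: dict[str, torch.Tensor],
                           pretext_weights: dict[str, float]) -> torch.Tensor:
    """Weighted sum of the supervised loss and the active pretext losses,
    mirroring the --pretext_weight_* flags (names are hypothetical)."""
    total = cls_loss
    for name, loss in pretext_losses.items():
        total = total + pretext_weights.get(name, 0.0) * loss
    return total

# Example: L = L_cls + 0.2 * L_band + 0.6 * L_temporal
loss = combined_finetune_loss(
    cls_loss=torch.tensor(1.0),
    pretext_losses={"band": torch.tensor(0.5), "temporal": torch.tensor(0.3)},
    pretext_weights={"band": 0.2, "temporal": 0.6},
)
```

In this formulation, a weight of 0 simply disables the corresponding pretext task.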
Adapt the model to individual test samples using self-supervised objectives:
```bash
python CBraMod/ttt_main.py \
  --downstream_dataset BCIC2020-3 \
  --model_path ./model_weights/BCIC2020-3/best_model.pth \
  --ttt_lr 1e-4 \
  --ttt_steps 5 \
  --pretext band \
  --split_config ./configs/speech_dataset_default.yaml
```
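Conceptually, test-time training takes a few gradient steps on the self-supervised pretext loss for each unlabeled test sample before predicting (cf. `--ttt_lr` and `--ttt_steps`). The following is a minimal PyTorch sketch assuming a generic model and a pretext-loss callable; all names are illustrative rather than the repository's actual API:

```python
import copy
import torch

def test_time_train(model, sample, pretext_loss_fn, lr=1e-4, steps=5):
    """Adapt a copy of the model to a single unlabeled test sample by
    minimizing a self-supervised pretext loss, then predict with it."""
    adapted = copy.deepcopy(model)                   # keep the source model intact
    optimizer = torch.optim.Adam(adapted.parameters(), lr=lr)
    adapted.train()
    for _ in range(steps):                           # cf. --ttt_steps
        optimizer.zero_grad()
        loss = pretext_loss_fn(adapted, sample)      # e.g. band reconstruction
        loss.backward()
        optimizer.step()
    adapted.eval()
    with torch.no_grad():
        return adapted(sample)                       # prediction after adaptation
```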
Apply Tent for continual adaptation during inference:

```bash
python CBraMod/tta_main.py \
  --test_time_method tent \
  --downstream_dataset MentalArithmetic \
  --model_path ./model_weights/MentalArithmetic/best_model.pth \
  --lr 1e-3 \
  --steps 1 \
  --episodic
```

Available test-time methods:

- `source`: No adaptation (baseline)
- `tent`: Entropy minimization adaptation
- `norm`: Batch normalization statistics update
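For reference, Tent's objective is the Shannon entropy of the model's own predictions, minimized over the unlabeled test stream while typically updating only the affine (scale/shift) parameters of normalization layers; with `--episodic`, the model is reset to its source state for each batch. A minimal sketch of that entropy loss (illustrative, not the vendored `tent/tent.py`):

```python
import torch
import torch.nn.functional as F

def prediction_entropy(logits: torch.Tensor) -> torch.Tensor:
    """Mean Shannon entropy of the softmax predictions, the quantity
    Tent minimizes during fully test-time adaptation."""
    log_probs = F.log_softmax(logits, dim=-1)
    return -(log_probs.exp() * log_probs).sum(dim=-1).mean()
```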
```
NeuroTTT/
├── CBraMod/                   # CBraMod foundation model
│   ├── models/                # Model architectures
│   ├── datasets/              # Dataset loaders
│   ├── preprocessing/         # Data preprocessing scripts
│   ├── pretrained_weights/    # Pretrained model weights
│   ├── finetune_main.py       # Fine-tuning script
│   ├── ttt_main.py            # Test-time training script
│   └── tta_main.py            # Test-time adaptation script
├── tent/                      # Tent implementation
│   ├── tent.py                # Core Tent algorithm
│   ├── norm.py                # Normalization methods
│   └── cifar10c.py            # Example usage
├── configs/                   # Configuration files
│   ├── speech_dataset_default.yaml
│   └── ...
├── figure/                    # Figures and diagrams
├── init_env.sh                # Environment setup script
└── README.md                  # This file
```
Configure dataset splits using YAML files in the configs/ directory:
```yaml
# Example: speech_dataset_default.yaml
trial_range: [0, 399]
subject_range: [1, 16]
test_split_by: trial
val_split_by: trial
test_split: [350, 399]
val_split: [300, 349]
```
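To make the split semantics concrete, here is a hypothetical illustration of how these ranges could be resolved into trial indices, assuming inclusive `[lo, hi]` bounds; the repository's actual split loader may differ:

```python
import yaml  # PyYAML

with open("configs/speech_dataset_default.yaml") as f:
    cfg = yaml.safe_load(f)

# Inclusive [lo, hi] ranges (an assumption for this sketch).
test_trials = set(range(cfg["test_split"][0], cfg["test_split"][1] + 1))  # 350..399
val_trials = set(range(cfg["val_split"][0], cfg["val_split"][1] + 1))     # 300..349
train_trials = [t for t in range(cfg["trial_range"][0], cfg["trial_range"][1] + 1)
                if t not in test_trials and t not in val_trials]          # remainder
```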
Key parameters for model configuration:

- `classifier`: `all_patch_reps`, `all_patch_reps_onelayer`, `avgpooling_patch_reps`
- `dropout`: Dropout rate (default: 0.1)
- `pretext`: Pretext task selection
- `pretext_weight_*`: Weights for the individual pretext tasks
- CBraMod: Original implementation by wjq-learning
- Tent: Original implementation from the ICLR 2021 paper "Tent: Fully Test-Time Adaptation by Entropy Minimization"
- All dataset providers and the EEG research community
