University of Science and Technology of China · Harbin Institute of Technology, Shenzhen
- [04/2026] Formatted README.
- [07/2023] Paper accepted at ACM SIGIR 2023, Taipei.
- [07/2023] Code released.
This is the official PyTorch implementation of LightGT, a Light Graph Transformer for Multimedia Recommendation.
```
.
├── image/                    # Framework and model figures
│   ├── figure1.png
│   └── figure2.png
├── main.py                   # Training and evaluation entry point
├── model.py                  # LightGT model definition
├── transformer.py            # Transformer module
├── dataloader.py             # Data loading utilities
├── Parser.py                 # Argument parser
├── sparsity_group_test.py    # Sparsity group evaluation
└── README.md
```
```shell
git clone https://github.com/iLearn-Lab/SIGIR23-LightGT.git
cd SIGIR23-LightGT
```

The code has been tested under Python 3.8.15. Required packages:
- PyTorch == 1.7.0
- NumPy == 1.23.4
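The pinned versions above can be installed with pip. This is only a sketch: wheel availability for PyTorch 1.7.0 depends on your platform and CUDA setup, and the repository itself is the authoritative source for its dependencies.

```shell
# Install the tested versions (CPU wheels; adjust for your CUDA version).
pip install torch==1.7.0 numpy==1.23.4
```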
The full versions of the Kwai, TikTok, and MovieLens recommendation datasets are available from their respective providers. Due to copyright restrictions, we cannot release them directly.
| Dataset | #Interactions | #Users | #Items | Visual (dim) | Acoustic (dim) | Textual (dim) |
|---|---|---|---|---|---|---|
| Movielens | 1,239,508 | 55,485 | 5,986 | 2,048 | 128 | 100 |
| Tiktok | 726,065 | 36,656 | 76,085 | 128 | 128 | 128 |
| Kwai | 1,664,305 | 22,611 | 329,510 | 2,048 | - | 100 |
MMGCN provides corresponding toy datasets that can be used for research.
Data format:
- `train.npy` — Training file. Each line is a user with positive interactions: (userID, itemID)
- `val.npy` — Validation file. Each line is a user with positive interactions: (userID, itemID)
- `test.npy` — Test file. Each line is a user with positive interactions: (userID, itemID)
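As a minimal sketch of working with this format, the snippet below builds a tiny (userID, itemID) array in memory and reads it back the way a `train.npy` file would be loaded. The array contents are illustrative only, not taken from the real datasets.

```python
import io
import numpy as np

# Toy interaction array in the assumed (userID, itemID) format.
interactions = np.array([[0, 10], [0, 42], [1, 7]])

# Round-trip through np.save/np.load, as one would with train.npy.
buf = io.BytesIO()
np.save(buf, interactions)
buf.seek(0)
loaded = np.load(buf, allow_pickle=True)

users = np.unique(loaded[:, 0])                     # distinct user IDs
items_of_user0 = loaded[loaded[:, 0] == 0][:, 1]    # positive items of user 0
print(users.tolist())           # [0, 1]
print(items_of_user0.tolist())  # [10, 42]
```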
- Movielens dataset

  ```shell
  python main.py --l_r=1e-2 --weight_decay=1e-2 --src_len=50 --score_weight=0.05 --nhead=1 --transformer_layers=4 --batch_size=2048 --lightgcn_layers=4 --dataset=movielens
  ```

- Tiktok dataset

  ```shell
  python main.py --l_r=1e-2 --weight_decay=1e-2 --src_len=50 --score_weight=0.05 --nhead=1 --transformer_layers=4 --batch_size=2048 --lightgcn_layers=4 --dataset=tiktok
  ```

- Kwai dataset

  ```shell
  python main.py --l_r=1e-2 --weight_decay=1e-2 --src_len=50 --score_weight=0.05 --nhead=1 --transformer_layers=4 --batch_size=2048 --lightgcn_layers=4 --dataset=kwai
  ```
If you find this work useful for your research, please kindly cite our paper:
```bibtex
@inproceedings{wei2023lightgt,
  title = {{LightGT}: A Light Graph Transformer for Multimedia Recommendation},
author = {Wei, Yinwei and
Liu, Wenqi and
Liu, Fan and
Wang, Xiang and
Nie, Liqiang and
Chua, Tat-Seng},
booktitle = {Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval},
pages = {1508--1517},
year = {2023}
}
```

This work is developed based on MMGCN and LightGCN. We thank the authors for their open-source contributions.

