
LightGT: A Light Graph Transformer for Multimedia Recommendation

National University of Singapore · Shandong University · University of Science and Technology of China · Harbin Institute of Technology, Shenzhen

Updates

  • [04/2026] Formatted README.
  • [07/2023] Paper accepted at ACM SIGIR 2023, Taipei.
  • [07/2023] Code released.

Introduction

This is the official PyTorch implementation of LightGT, a Light Graph Transformer for Multimedia Recommendation.


Project Structure

.
├── image/                 # Framework and model figures
│   ├── figure1.png
│   └── figure2.png
├── main.py                # Training and evaluation entry point
├── model.py               # LightGT model definition
├── transformer.py         # Transformer module
├── dataloader.py          # Data loading utilities
├── Parser.py              # Argument parser
├── sparsity_group_test.py # Sparsity group evaluation
└── README.md

Installation

1. Clone the repository

git clone https://github.com/iLearn-Lab/SIGIR23-LightGT.git
cd SIGIR23-LightGT

2. Install dependencies

The code has been tested under Python 3.8.15. Required packages:

  • PyTorch == 1.7.0
  • NumPy == 1.23.4
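
A minimal environment-setup sketch, assuming conda and pip are available (the environment name `lightgt` is our choice, not from the repository):

```shell
# Create an isolated environment with the tested Python version
conda create -n lightgt python=3.8.15 -y
conda activate lightgt

# Install the tested package versions
pip install torch==1.7.0 numpy==1.23.4
```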

Dataset

The full versions of the Kwai, TikTok, and MovieLens recommendation datasets are available from their respective providers. Due to copyright restrictions, we cannot release them directly.

| Dataset   | #Interactions | #Users | #Items  | Visual (dim) | Acoustic (dim) | Textual (dim) |
|-----------|---------------|--------|---------|--------------|----------------|---------------|
| MovieLens | 1,239,508     | 55,485 | 5,986   | 2,048        | 128            | 100           |
| TikTok    | 726,065       | 36,656 | 76,085  | 128          | 128            | 128           |
| Kwai      | 1,664,305     | 22,611 | 329,510 | 2,048        | -              | 100           |

The MMGCN repository provides corresponding toy datasets that can be used for research.

Data format:

  • train.npy — Training set: an array of positive (userID, itemID) interaction pairs
  • val.npy — Validation set: an array of positive (userID, itemID) interaction pairs
  • test.npy — Test set: an array of positive (userID, itemID) interaction pairs
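
A minimal sketch of reading this format, assuming each `.npy` file stores an integer array of shape `(num_interactions, 2)` (the file name and the per-user grouping below are illustrative, not taken from `dataloader.py`):

```python
import numpy as np

# Create a tiny illustrative file in the assumed (userID, itemID) layout
pairs = np.array([[0, 10], [0, 42], [1, 7]])
np.save("train.npy", pairs)

# Load the interaction pairs and group positive items by user
train = np.load("train.npy", allow_pickle=True)
user_pos = {}
for user, item in train:
    user_pos.setdefault(int(user), []).append(int(item))

print(user_pos)  # {0: [10, 42], 1: [7]}
```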

Usage

Training & Evaluation

  • Movielens dataset

    python main.py --l_r=1e-2 --weight_decay=1e-2 --src_len=50 --score_weight=0.05 --nhead=1 --transformer_layers=4 --batch_size=2048 --lightgcn_layers=4 --dataset=movielens
  • Tiktok dataset

    python main.py --l_r=1e-2 --weight_decay=1e-2 --src_len=50 --score_weight=0.05 --nhead=1 --transformer_layers=4 --batch_size=2048 --lightgcn_layers=4 --dataset=tiktok
  • Kwai dataset

    python main.py --l_r=1e-2 --weight_decay=1e-2 --src_len=50 --score_weight=0.05 --nhead=1 --transformer_layers=4 --batch_size=2048 --lightgcn_layers=4 --dataset=kwai

Citation

If you find this work useful for your research, please kindly cite our paper:

@inproceedings{wei2023lightgt,
  title      = {{LightGT}: A Light Graph Transformer for Multimedia Recommendation},
  author     = {Wei, Yinwei and
                Liu, Wenqi and
                Liu, Fan and
                Wang, Xiang and
                Nie, Liqiang and
                Chua, Tat-Seng},
  booktitle  = {Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval},
  pages      = {1508--1517},
  year       = {2023}
}

Acknowledgement

This work is developed based on MMGCN and LightGCN. We thank the authors for their open-source contributions.
