<sup>3</sup>School of Computer Science, University of Birmingham
This repository contains frameworks for pre-processing, training, and evaluating full-face or multi-region (face, left eye, and right eye) gaze estimation models on three datasets: ETH-XGaze, MPIIFaceGaze, and Gaze360. The frameworks support flexible loading of single-face or multi-region input, and serve both to reproduce our results and to benchmark new models.
The README files in the dataset-specific submodules contain step-by-step tutorials to help you set up, train, and evaluate our existing models as well as your own new models.
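For orientation, here is a minimal, hypothetical sketch of what a multi-region model's interface looks like: a face crop plus left- and right-eye crops go in, and a 2D gaze direction (pitch, yaw) comes out. The class name, backbones, and layer sizes below are illustrative assumptions, not the repository's actual implementation.

```python
# Hypothetical multi-region gaze model sketch (not the repository's actual classes):
# face and both eye crops are encoded separately and fused to regress (pitch, yaw).
import torch
import torch.nn as nn
from torchvision import models


class MultiRegionGazeNet(nn.Module):
    def __init__(self):
        super().__init__()
        # One ResNet-18 backbone per region; the final fc layer is replaced by
        # Identity so each backbone outputs a 512-d feature vector.
        self.face_net = models.resnet18(weights=None)
        self.face_net.fc = nn.Identity()
        self.left_eye_net = models.resnet18(weights=None)
        self.left_eye_net.fc = nn.Identity()
        self.right_eye_net = models.resnet18(weights=None)
        self.right_eye_net.fc = nn.Identity()
        # Fuse the three region features and regress pitch/yaw in radians.
        self.head = nn.Sequential(nn.Linear(512 * 3, 256), nn.ReLU(), nn.Linear(256, 2))

    def forward(self, face, left_eye, right_eye):
        feat = torch.cat(
            [self.face_net(face), self.left_eye_net(left_eye), self.right_eye_net(right_eye)],
            dim=1,
        )
        return self.head(feat)


# Example forward pass with dummy batches (face 224x224, eye crops 64x96 here).
model = MultiRegionGazeNet()
gaze = model(torch.randn(2, 3, 224, 224), torch.randn(2, 3, 64, 96), torch.randn(2, 3, 64, 96))
print(gaze.shape)  # torch.Size([2, 2]) -> (pitch, yaw) per sample
```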
- Paper arXiv page
- To prepare the normalized data for the ETH-XGaze, MPIIFaceGaze, and Gaze360 datasets, please refer to our data normalization repository (a minimal sketch of reading the normalized files follows this list).
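The normalized data produced by that pipeline is typically stored as HDF5 files; the file name and key names below (`face_patch`, `face_gaze`) are assumptions that may differ per dataset, so treat this as a sketch for inspecting the files rather than the repository's exact loading code.

```python
# Hypothetical sketch of reading one normalized HDF5 file (requires h5py;
# the file name and the 'face_patch'/'face_gaze' keys are assumptions).
import h5py

with h5py.File("subject0000.h5", "r") as f:
    print(list(f.keys()))                 # inspect which datasets the file actually contains
    face_patches = f["face_patch"][:10]   # normalized face crops, e.g. (N, H, W, 3)
    gaze_labels = f["face_gaze"][:10]     # gaze directions, e.g. (pitch, yaw) per sample
    print(face_patches.shape, gaze_labels.shape)
```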
Pre-trained models for each dataset can be found in the corresponding sub-folders (ETH-XGaze, MPIIFaceGaze, Gaze360).
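As a rough guide to using a downloaded checkpoint, the sketch below loads weights into the hypothetical MultiRegionGazeNet from the earlier sketch; the file name and the `model_state` key are assumptions, since the actual checkpoints may be plain state dicts or use different keys.

```python
# Hypothetical checkpoint-loading sketch (file name and 'model_state' key are assumptions;
# MultiRegionGazeNet refers to the illustrative class sketched earlier in this README).
import torch

checkpoint = torch.load("ethxgaze_multi_region.pth", map_location="cpu")
# Fall back to treating the whole file as a plain state dict if no wrapper key is present.
state_dict = checkpoint.get("model_state", checkpoint)

model = MultiRegionGazeNet()
model.load_state_dict(state_dict)
model.eval()  # switch to inference mode before evaluation
```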
@misc{wang2023investigation,
  title={Investigation of Architectures and Receptive Fields for Appearance-based Gaze Estimation},
  author={Yunhan Wang and Xiangwei Shi and Shalini De Mello and Hyung Jin Chang and Xucong Zhang},
  year={2023},
  eprint={2308.09593},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}