

L2CS-Net

The official PyTorch implementation of L2CS-Net for gaze estimation and tracking.

Installation

Install the package with the following:

pip install git+https://github.com/edavalosanaya/L2CS-Net.git@main

Or, you can clone the repo and install it locally (add -e for an editable install):

pip install [-e] .

Now you should be able to import the package with the following command:

$ python
>>> import l2cs

Usage

Detect faces and predict gaze from a webcam

import pathlib

import cv2
import torch

from l2cs import Pipeline, render

CWD = pathlib.Path.cwd()

gaze_pipeline = Pipeline(
    weights=CWD / 'models' / 'L2CSNet_gaze360.pkl',
    arch='ResNet50',
    device=torch.device('cpu')  # or torch.device('cuda') for GPU
)

cap = cv2.VideoCapture(0)  # webcam index
_, frame = cap.read()

# Process frame and visualize
results = gaze_pipeline.step(frame)
frame = render(frame, results)
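
For continuous tracking, the same two calls can be wrapped in a capture loop. This is a minimal sketch that reuses gaze_pipeline and render from the snippet above, assumes webcam index 0, and adds no handling for frames without a detected face; press q to quit.

import cv2

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = gaze_pipeline.step(frame)   # face detection + gaze estimation
    frame = render(frame, results)        # draw gaze arrows on the frame
    cv2.imshow('L2CS-Net gaze', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()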

Demo

  • Download the pre-trained models from here and store them in models/.
  • Run:
 python demo.py \
 --snapshot models/L2CSNet_gaze360.pkl \
 --gpu 0 \
 --cam 0
This runs the webcam demo with the L2CSNet_gaze360.pkl pretrained model on GPU 0, reading frames from camera 0.

Community Contributions

MPIIGaze

We provide code for training and testing on the MPIIGaze dataset with leave-one-person-out evaluation.

Prepare datasets

  • Download the MPIIFaceGaze dataset from here.
  • Apply the data preprocessing from here.
  • Store the preprocessed dataset in datasets/MPIIFaceGaze (a sketch of the expected layout follows).
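
The exact folder structure depends on the preprocessing tool linked above; the dataset loader in this repo expects per-person image folders and label files, roughly along these lines (an assumption to verify against datasets.py):

datasets/MPIIFaceGaze/
├── Image/
│   ├── p00/
│   ├── p01/
│   └── ...
└── Label/
    ├── p00.label
    ├── p01.label
    └── ...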

Train

 python train.py \
 --dataset mpiigaze \
 --snapshot output/snapshots \
 --gpu 0 \
 --num_epochs 50 \
 --batch_size 16 \
 --lr 0.00001 \
 --alpha 1
This performs leave-one-person-out training automatically and stores the models in output/snapshots. --alpha weights the regression part of the combined loss.
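
For context, L2CS-Net predicts each gaze angle (pitch and yaw) as a distribution over angle bins and combines a classification loss with a regression loss on the expected angle; --alpha scales the regression term. The sketch below illustrates that combination for one angle; the bin count, bin width, and angle range are illustrative assumptions, so check train.py and datasets.py for the exact values used per dataset.

import torch
import torch.nn as nn

# Illustrative binning (an assumption): 28 bins of 3 degrees covering roughly [-42, 42].
NUM_BINS, BIN_WIDTH, ANGLE_OFFSET = 28, 3.0, 42.0
idx_tensor = torch.arange(NUM_BINS, dtype=torch.float32)

ce = nn.CrossEntropyLoss()
mse = nn.MSELoss()

def combined_gaze_loss(logits, bin_labels, cont_labels, alpha=1.0):
    """Cross-entropy on the binned angle plus an alpha-weighted MSE on the
    continuous angle recovered as the softmax expectation over bin centers."""
    cls_loss = ce(logits, bin_labels)
    probs = torch.softmax(logits, dim=1)
    pred_angle = torch.sum(probs * idx_tensor, dim=1) * BIN_WIDTH - ANGLE_OFFSET
    reg_loss = mse(pred_angle, cont_labels)
    return cls_loss + alpha * reg_loss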

Test

 python test.py \
 --dataset mpiigaze \
 --snapshot output/snapshots/snapshot_folder \
 --evalpath evaluation/L2CS-mpiigaze \
 --gpu 0

This performs leave-one-person-out testing automatically and stores the results in evaluation/L2CS-mpiigaze.

To get the average leave-one-person-out accuracy use:

 python leave_one_out_eval.py \
 --evalpath evaluation/L2CS-mpiigaze \
 --respath evaluation/L2CS-mpiigaze

This reads the per-person results from the evaluation path and writes the averaged leave-one-person-out gaze accuracy to evaluation/L2CS-mpiigaze.
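
Conceptually, the averaging step boils down to picking the best epoch for each held-out person and taking the mean across persons. The sketch below only illustrates that idea, not the script's actual parsing logic: the per-fold file names and format are hypothetical, so rely on leave_one_out_eval.py for the real implementation.

from pathlib import Path

evalpath = Path('evaluation/L2CS-mpiigaze')

best_errors = []
for fold_file in sorted(evalpath.glob('*.log')):   # hypothetical per-person logs
    # hypothetical format: one angular error (degrees) per epoch, last token on each line
    errors = [float(line.split()[-1]) for line in fold_file.read_text().splitlines() if line.strip()]
    best_errors.append(min(errors))                 # best epoch for this held-out person

print(f'Mean leave-one-person-out error: {sum(best_errors) / len(best_errors):.2f} deg')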

Gaze360

We provide code for training and testing on the Gaze360 dataset with a train-val-test split.

Prepare datasets

  • Download the Gaze360 dataset from here.

  • Apply the data preprocessing from here.

  • Store the preprocessed dataset in datasets/Gaze360.

Train

 python train.py \
 --dataset gaze360 \
 --snapshot output/snapshots \
 --gpu 0 \
 --num_epochs 50 \
 --batch_size 16 \
 --lr 0.00001 \
 --alpha 1

This trains on Gaze360 and stores the models in output/snapshots.

Test

 python test.py \
 --dataset gaze360 \
 --snapshot output/snapshots/snapshot_folder \
 --evalpath evaluation/L2CS-gaze360 \
 --gpu 0

This evaluates the snapshots in snapshot_folder and stores the results in evaluation/L2CS-gaze360. The reported metric is the mean angular error between predicted and ground-truth gaze directions.
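
For reference, the angular error is computed by converting each predicted (pitch, yaw) pair into a 3D gaze vector and measuring the angle to the ground-truth vector. The sketch below shows one common convention for that conversion; the sign conventions in this repo's utils may differ, so treat it as an illustration rather than the exact code.

import numpy as np

def gaze_to_vector(pitch, yaw):
    """Convert pitch/yaw (radians) into a 3D unit gaze vector (one common convention)."""
    return np.array([
        -np.cos(pitch) * np.sin(yaw),
        -np.sin(pitch),
        -np.cos(pitch) * np.cos(yaw),
    ])

def angular_error_deg(pred, gt):
    """Angle in degrees between predicted and ground-truth gaze vectors."""
    cos = np.dot(pred, gt) / (np.linalg.norm(pred) * np.linalg.norm(gt))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))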