Learning Precise Affordances from Egocentric Videos for Robotic Manipulation (ICCV 2025)

Overview

  • Code for affordance extraction from egocentric videos is in the ego2aff folder
  • Code for affordance model learning is in the affordance-learning folder
  • Data: Data_for_Aff-Grasp
  • Model and log: Model_for_Aff-Grasp

Citation

@inproceedings{li2025affgrasp,
  title     = {Learning Precise Affordances from Egocentric Videos for Robotic Manipulation},
  author    = {Li, Gen and Tsagkas, Nikolaos and Song, Jifei and Mon-Williams, Ruaridh and Vijayakumar, Sethu and Shao, Kun and Sevilla-Lara, Laura},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision},
  year      = {2025},
}

Acknowledgement

Part of the code is derived from hand_object_detector, hoi-forecast, GroundedSAM, and ViT-Adapter. Thanks for their great work!
