
Conversation

@Flova (Collaborator) commented Apr 12, 2023

Proposed changes

  • Tune hyperparameters
  • Scale the learning rate based on the mini-batch size
  • Use linear instead of step-wise learning-rate decay
  • Use SGD
  • Only log once per optimizer step
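
For illustration, a minimal sketch of how the changes above could fit together in PyTorch. The base learning rate, reference batch size, accumulation count, and epoch count are hypothetical placeholders, not values from this PR, and the model/data are stand-ins:

```python
import torch

# Hypothetical values for illustration, not taken from this PR.
BASE_LR = 0.01            # learning rate tuned for a reference batch size
REFERENCE_BATCH = 64
batch_size = 16
accumulate = 4            # mini-batches accumulated per optimizer step
epochs = 100

model = torch.nn.Linear(10, 2)  # stand-in for the actual detection model
data = torch.utils.data.TensorDataset(torch.randn(256, 10), torch.randn(256, 2))
loader = torch.utils.data.DataLoader(data, batch_size=batch_size)

# Scale the learning rate linearly with the effective mini-batch size.
lr = BASE_LR * (batch_size * accumulate) / REFERENCE_BATCH

# Plain SGD with momentum instead of an adaptive optimizer.
optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)

# Linear decay from the full rate down to zero over all epochs,
# replacing a step-wise (StepLR-style) schedule.
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lr_lambda=lambda e: 1.0 - e / epochs
)

for epoch in range(epochs):
    for i, (x, y) in enumerate(loader):
        loss = torch.nn.functional.mse_loss(model(x), y)
        (loss / accumulate).backward()
        if (i + 1) % accumulate == 0:
            optimizer.step()
            optimizer.zero_grad()
            # Log once per optimizer step, not once per mini-batch.
            print(f"epoch {epoch} loss {loss.item():.4f}")
    scheduler.step()
```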

Related issues

Necessary checks

  • Update poetry package version semantically
  • Write documentation
  • Create issues for future work
  • Test on your machine

@Flova changed the title from "Tune coco parameters" to "Tune for COCO" on Apr 12, 2023

@J-LINC commented May 2, 2023

Hi, regarding the "linear instead of step-wise decay" item you mentioned: I have reviewed your code and found only one learning-rate decay strategy, which adjusts the rate step-wise. How can I perform linear decay or other decay methods?
As for using SGD, I think you are right for most training runs; I also succeeded when using SGD to train on BDD100K.
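
For reference, a minimal sketch of what such a swap could look like in PyTorch; the step size, gamma, and iteration count below are hypothetical, and the two schedulers are alternatives (attach only one to the optimizer):

```python
import torch

model = torch.nn.Linear(10, 2)  # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Step-wise decay: multiply the rate by `gamma` every `step_size` epochs.
step_scheduler = torch.optim.lr_scheduler.StepLR(
    optimizer, step_size=30, gamma=0.1
)

# Linear decay instead: the rate falls from its initial value to zero
# over `total_iters` calls to step(). Requires PyTorch >= 1.10.
linear_scheduler = torch.optim.lr_scheduler.LinearLR(
    optimizer, start_factor=1.0, end_factor=0.0, total_iters=100
)
```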

