@lkhphuc (Contributor) commented Aug 15, 2022

I discovered optax.set_to_zero() from this thread.

Comparing it against the original optax.scale(0.0) on a ViT H/16 with some heads, peak GPU memory usage (measured with XLA_PYTHON_CLIENT_PREALLOCATE=false) is:

  • Full trainable: 18GiB
  • optax.scale(0.0) (current): 9.8GiB
  • optax.set_to_zero (PR): 5.6GiB
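
For context, here is a minimal optax sketch of the difference (the parameter tree, labels, and optax.multi_transform wiring below are illustrative assumptions, not the repository's actual optimizer-building code): optax.scale(0.0) multiplies the incoming gradients by zero, while optax.set_to_zero() replaces the updates with zeros without looking at the gradients at all.

  import jax
  import jax.numpy as jnp
  import optax

  # Hypothetical parameter tree: a frozen backbone ("ViT_0") and a trainable head.
  params = {
      "ViT_0": {"kernel": jnp.ones((4, 4))},
      "head": {"kernel": jnp.ones((4, 2))},
  }
  # Labels with the same tree structure as the parameters.
  labels = {
      "ViT_0": {"kernel": "frozen"},
      "head": {"kernel": "trainable"},
  }

  def make_tx(frozen_tx):
    return optax.multi_transform(
        {"trainable": optax.adam(1e-3), "frozen": frozen_tx}, labels)

  tx_old = make_tx(optax.scale(0.0))     # current: gradients are computed, then zeroed
  tx_new = make_tx(optax.set_to_zero())  # PR: zero updates, gradients are ignored

  grads = jax.tree_util.tree_map(jnp.ones_like, params)
  for tx in (tx_old, tx_new):
    state = tx.init(params)
    updates, _ = tx.update(grads, state, params)
    assert not jnp.any(updates["ViT_0"]["kernel"])  # frozen subtree gets zero updates

Both variants produce identical zero updates for the frozen parameters; the difference shows up only in peak memory and speed.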

The frozen weights were set in the config like this (for both the current code and the PR change):

  config.schedule = [
    (".*ViT_0/.*", None),
    (".*", dict(warmup_steps=2500))
  ]
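
Read as an illustration only (a simplified re-implementation of the rule lookup, not big_vision's actual config handling), the first matching regex wins and a None schedule marks the parameter group as frozen:

  import re

  # Hypothetical rule lookup mirroring the config above: first match wins,
  # and a None schedule means the parameter is frozen.
  schedule = [
      (".*ViT_0/.*", None),
      (".*", dict(warmup_steps=2500)),
  ]

  def label_for(param_name):
    for pattern, sched in schedule:
      if re.fullmatch(pattern, param_name):
        return "frozen" if sched is None else "trainable"
    return "trainable"

  assert label_for("ViT_0/encoder/kernel") == "frozen"
  assert label_for("head/kernel") == "trainable"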

Theoretically, memory usage should be the same after jitting, so I'm not sure whether this is a GPU-specific bug in jax or not.

@akolesnikoff (Contributor) left a comment


Thanks, great catch! I independently confirmed that this change results in reduced memory requirements (on both TPUs and GPUs), as well as faster execution.

@akolesnikoff merged commit 1c6f5aa into google-research:main on Aug 16, 2022