fix: fix the deprecated usage of tuple slicing, wandb's start_method, and transformers's dtype #462
Conversation
Pull Request Overview
This pull request addresses deprecated API usage and improves tensor slicing robustness. The changes replace the deprecated `torch_dtype` parameter with `dtype` in HuggingFace model loading functions, update tensor slicing to use explicit tuples for improved clarity and correctness, and remove an explicit `wandb.Settings` configuration that is no longer necessary.
Key changes:
- Standardized all HuggingFace model loading calls to use `dtype` instead of `torch_dtype`
- Updated tensor slicing operations in Ulysses parallel processing utilities to use tuple indexing
- Simplified wandb initialization by removing the explicit `start_method="fork"` setting
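The `dtype` rename in the first bullet can be illustrated with a small sketch. The helper below is hypothetical (it is not part of this PR or of transformers); it shows the idea behind the migration: call sites that still pass the deprecated `torch_dtype` keyword can simply forward it under the new name before calling `from_pretrained` or `from_config`.

```python
# Hypothetical helper (not from this PR) illustrating the keyword migration:
# transformers deprecated `torch_dtype` in favor of `dtype`, so old call
# sites can be adapted by renaming the keyword before the loading call.
def migrate_dtype_kwarg(kwargs: dict) -> dict:
    """Return a copy of kwargs with deprecated `torch_dtype` renamed to `dtype`."""
    kwargs = dict(kwargs)  # avoid mutating the caller's dict
    if "torch_dtype" in kwargs and "dtype" not in kwargs:
        kwargs["dtype"] = kwargs.pop("torch_dtype")
    return kwargs

print(migrate_dtype_kwarg({"torch_dtype": "bfloat16", "trust_remote_code": True}))
# → {'trust_remote_code': True, 'dtype': 'bfloat16'}
```

The copy-then-rename shape keeps the helper side-effect free, so it can wrap any existing kwargs dict without surprising the caller.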
Reviewed Changes
Copilot reviewed 5 out of 5 changed files in this pull request and generated no comments.
| File | Description |
|---|---|
| `examples/docs/debug/cmp_rollout.py` | Updated model loading to use the `dtype` parameter |
| `evaluation/model_utils.py` | Changed `AutoModelForCausalLM` loading to use the `dtype` parameter |
| `areal/utils/ulysses.py` | Converted list-based tensor slicing to tuple-based indexing |
| `areal/utils/stats_logger.py` | Removed explicit wandb `start_method` configuration |
| `areal/engine/base_hf_engine.py` | Migrated all model loading functions from `torch_dtype` to `dtype` |
Summary of Changes (Gemini Code Assist)
This pull request focuses on enhancing the robustness and consistency of model loading and tensor manipulation within the codebase. It addresses potential issues arising from deprecated argument usage in HuggingFace Transformers and improves tensor indexing practices. Additionally, it streamlines the Weights & Biases logging configuration, contributing to a cleaner and more up-to-date system architecture.
Code Review
This pull request correctly addresses several deprecated API usages, replacing `torch_dtype` with `dtype` and fixing tensor slicing to use tuples. These are valuable maintenance improvements. I've included a couple of suggestions to further enhance code quality: one is a refactoring to reduce code duplication in `areal/engine/base_hf_engine.py`, and another is to use explicit keyword arguments for better readability in `examples/docs/debug/cmp_rollout.py`. Overall, the changes are solid.
Force-pushed from c6d3639 to 81a168a (`fix: … and transformers's dtype`)
garrett4wade left a comment:
LGTM!
This pull request updates several model-loading and tensor manipulation functions across the codebase to improve consistency and correctness. The main change is standardizing the use of the `dtype` argument instead of `torch_dtype` when loading models, which aligns with recent library updates and prevents potential type mismatches. Additionally, tensor slicing operations are made more robust by ensuring slices are always passed as tuples.

Model loading and initialization updates
- Replaced the deprecated `torch_dtype` argument with `dtype` in all calls to HuggingFace model loading functions (`from_pretrained` and `from_config`) in `areal/engine/base_hf_engine.py`, `evaluation/model_utils.py`, and `examples/docs/debug/cmp_rollout.py`, ensuring compatibility with the latest library conventions. [1] [2] [3] [4] (See huggingface/transformers#39782: use `dtype` instead of `torch_dtype` everywhere.)

Tensor slicing improvements
- Updated tensor slicing in `areal/utils/ulysses.py` to always pass slices as tuples, preventing indexing errors and improving code clarity. [1] [2]
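The tuple-indexing change can be demonstrated with NumPy, whose slice-indexing semantics mirror PyTorch's here; indexing a multi-dimensional array with a tuple of slices narrows each axis, whereas a list is reserved for fancy indexing (and passing slices inside a list is deprecated). The array shape and the sliced axis below are illustrative only, not taken from `ulysses.py`.

```python
import numpy as np

# Illustrative only: build one slice per dimension, narrow a single axis,
# and pass the result as a tuple - which is the behavior this PR makes explicit.
a = np.arange(24).reshape(2, 3, 4)

idx = [slice(None)] * a.ndim  # start with full slices on every axis
idx[1] = slice(0, 2)          # keep only the first two entries of axis 1
sub = a[tuple(idx)]           # index with a tuple, not a list

print(sub.shape)  # → (2, 2, 4)
```

Building the index as a list is convenient for mutation; the key point is converting it with `tuple(...)` before indexing, so each element is interpreted as a per-axis slice.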
Logging configuration
- Removed the explicit `wandb.Settings(start_method="fork")` argument in the `init` function of `areal/utils/stats_logger.py`, simplifying the logging configuration. (See wandb/wandb#9837 on the `start_method` setting.)
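The logging change amounts to dropping one keyword from the `wandb.init` call. The helper below is a hypothetical sketch (it does not import wandb, and `build_wandb_init_kwargs` is not a function from this repository); it only shows the shape of the init arguments after the change, with the removed entry noted in a comment.

```python
# Hypothetical sketch (not the actual stats_logger code): after this PR,
# no `settings=wandb.Settings(start_method="fork")` entry is passed, so
# wandb falls back to its default process start method.
def build_wandb_init_kwargs(project: str, run_name: str) -> dict:
    return {
        "project": project,
        "name": run_name,
        # Before this PR the dict also carried:
        #   "settings": wandb.Settings(start_method="fork")
        # That entry is now omitted.
    }

kwargs = build_wandb_init_kwargs("areal", "debug-run")
print("settings" in kwargs)  # → False
```

Passing plain keyword arguments and letting the library choose its defaults is the simplification the PR describes: the explicit override only existed to work around behavior that newer wandb versions no longer require.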