Conversation

@Zzhiter (Contributor) commented Jan 19, 2025

See: #1555
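
For context: the fix merged here is titled "fix: flash_attn_detection_error" in the commit log below. A minimal sketch of the defensive pattern such a fix usually amounts to — the helper name and structure are illustrative, not Unsloth's actual code:

```python
import importlib.util

def has_flash_attn() -> bool:
    # find_spec returns None when the package is absent, so probing this way
    # avoids the ImportError/AttributeError paths a bare import can hit.
    try:
        return importlib.util.find_spec("flash_attn") is not None
    except (ImportError, ValueError):
        # find_spec itself can raise for broken or partially installed packages.
        return False
```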

@danielhanchen changed the base branch from main to nightly on January 20, 2025, 09:24
@danielhanchen merged commit cdb3259 into unslothai:nightly on Jan 20, 2025
danielhanchen added a commit that referenced this pull request Jan 20, 2025
* use exact model name

* Update save.py

* Update _utils.py

* Update _utils.py

* Update _utils.py

* Update _utils.py

* print

* Update _utils.py

* Update _utils.py

* Update llama.py

* Update _utils.py

* Update vision.py

* Update _utils.py

* Update _utils.py

* Update _utils.py

* Update _utils.py

* Update _utils.py

* Update _utils.py

* Update _utils.py

* Update _utils.py

* Update loader.py

* accurate_accumulation

* Update loader.py

* Update loader.py

* Update _utils.py

* Update loader.py

* Update loader.py

* Update loader.py

* Update loader.py

* Update pyproject.toml

* Update __init__.py

* Update pyproject.toml

* Update __init__.py

* Update __init__.py

* Fix Triton heuristics

triton-lang/triton#5224
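
For readers unfamiliar with the decorator being fixed, here is a generic `triton.heuristics` usage sketch (unrelated to the specific Unsloth patch): each heuristic receives the kernel's argument dict and computes a constexpr meta-parameter.

```python
import triton
import triton.language as tl

@triton.heuristics({"BLOCK_SIZE": lambda args: triton.next_power_of_2(args["n"])})
@triton.jit
def copy_kernel(src_ptr, dst_ptr, n, BLOCK_SIZE: tl.constexpr):
    # Single-block copy; BLOCK_SIZE was chosen by the heuristic above.
    offsets = tl.arange(0, BLOCK_SIZE)
    mask = offsets < n
    tl.store(dst_ptr + offsets, tl.load(src_ptr + offsets, mask=mask), mask=mask)
```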

* Update __init__.py

* Update __init__.py

* Update __init__.py

* Update __init__.py

* Xformers

* Update loader.py

* Update loader.py

* Rewind

* Update _utils.py

* Update _utils.py

* requires grad

* Update loader.py

* Update _utils.py

* Update loader.py

* changing model to base_model if peft model is already used

* Improve debugging experience (#1512)

* Create CONTRIBUTING.md (#1472)

Creating contributing guidelines

* Update CONTRIBUTING.md

improved sentence

* Improve logging control in `unsloth_compile_transformers` by conditionally redirecting stdout based on UNSLOTH_DISABLE_LOGGER environment variable

---------

Co-authored-by: Michael Han <[email protected]>
Co-authored-by: Nino Risteski <[email protected]>
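
The stdout-redirection pattern that commit describes, as a minimal sketch (the exact semantics of UNSLOTH_DISABLE_LOGGER are an assumption here; this is not the library's actual implementation):

```python
import contextlib
import io
import os

@contextlib.contextmanager
def compile_log_guard():
    # Assumption: when UNSLOTH_DISABLE_LOGGER is unset or "0", compile-time
    # chatter is swallowed; setting it to "1" keeps stdout untouched.
    if os.environ.get("UNSLOTH_DISABLE_LOGGER", "0") == "1":
        yield
    else:
        with contextlib.redirect_stdout(io.StringIO()):
            yield
```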

* Update loader.py

* Update llama.py

* Update llama.py

* Revert "Update llama.py"

This reverts commit b7ddf96.

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Auto change is_bfloat16_supported
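
The `is_bfloat16_supported` check being auto-changed presumably reduces to the standard torch capability probe; a sketch of that probe (an assumption, not the patched code):

```python
import torch

def is_bfloat16_supported() -> bool:
    # bf16 needs both a CUDA device and hardware/runtime support (Ampere+).
    return torch.cuda.is_available() and torch.cuda.is_bf16_supported()
```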

* Update llama.py

* Force data-type

* Update llama.py

* All attention refactor fix (#1491)

* change initialization of n_heads, n_kv_heads, hidden_size in llama.py

* do the same for cohere, mistral, gemma2, granite

* do the same for flexattention, cohere, mistral, granite

* Update llama.py

* Update llama.py

* Update granite to work with latest post_patch methods (#1502)

* Update granite to work with latest post_patch methods

* Pass position_embeddings for granite even if transformers<4.47

* Update llama.py

---------

Co-authored-by: Daniel Han <[email protected]>

* Minor fixes for granite models (#1503)

* Update granite.py

Grab residual multiplier directly from layer

* Update llama.py

Version should read >= 4.47.1 as that is the version requiring the changes

* Update granite.py

* Update llama.py

---------

Co-authored-by: Daniel Han <[email protected]>

* support modelscope models and datasets (#1481)

* support modelscope

* change modelscope args

* remove useless import

* remove useless import

* fix

* wip

* fix

* remove useless code

* add readme

* add some comments

* change print to raise error

* update comment

* Update loader.py

---------

Co-authored-by: Daniel Han <[email protected]>
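
A hedged sketch of what ModelScope support typically looks like in a loader. The UNSLOTH_USE_MODELSCOPE variable and helper name are assumptions for illustration; `snapshot_download` is ModelScope's real mirror of the Hugging Face hub API.

```python
import os

def resolve_model_source(model_name: str) -> str:
    if os.environ.get("UNSLOTH_USE_MODELSCOPE", "0") == "1":
        try:
            from modelscope import snapshot_download
        except ImportError as e:
            # Matches the "change print to raise error" entry above: fail
            # loudly instead of printing when the dependency is missing.
            raise ImportError("Install modelscope to load ModelScope models.") from e
        return snapshot_download(model_name)  # returns a local weights directory
    return model_name  # otherwise let Hugging Face resolve the repo id
```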

* Merge branch 'main' into nightly

* Phi 4

* Update llama.py

* Torch.Cuda Is Available Condition and Warning (#1545)

* check for torch.cuda and triton if available
on my machine (Mac M3), CUDA was not available

* Update pyproject.toml

* Update __init__.py

---------

Co-authored-by: Daniel Han <[email protected]>
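
A minimal sketch of the guard described by #1545, assuming its intent (warn rather than crash on machines without CUDA, e.g. Apple Silicon); the function name is illustrative:

```python
import warnings

import torch

def accelerator_ready() -> bool:
    if not torch.cuda.is_available():
        warnings.warn("CUDA is unavailable; GPU-only paths are disabled.")
        return False
    try:
        import triton  # noqa: F401  (only useful alongside CUDA)
    except ImportError:
        warnings.warn("Triton is not installed; compiled kernels are disabled.")
        return False
    return True
```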

* Update mistral.py

* Update mistral.py

* Update _utils.py

* Update _utils.py

* Update _utils.py

* Update _utils.py

* Update _utils.py

* Fix

* Bug fixes

* Update mapper.py

* Add dropout to granite to match HF's implementation (#1557)

Signed-off-by: datta0 <[email protected]>
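
What "add dropout to match HF's implementation" usually means in practice, as a generic sketch (the block below is a hypothetical illustration, not granite's actual forward):

```python
import torch
import torch.nn as nn

class ResidualWithDropout(nn.Module):
    # HF-style transformer blocks apply dropout to the sublayer output
    # before the residual add; omitting it changes training-time behaviour.
    def __init__(self, hidden_size: int, p: float = 0.1):
        super().__init__()
        self.proj = nn.Linear(hidden_size, hidden_size)
        self.dropout = nn.Dropout(p)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.dropout(self.proj(x))
```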

* Update llama.py

* Update llama.py

* Bug fixes

* fix: flash_attn_detection_error (#1556)

* fix: flash_attn_detection_error

* Update _utils.py

---------

Co-authored-by: Daniel Han <[email protected]>

---------

Signed-off-by: datta0 <[email protected]>
Co-authored-by: Itsuro Tajima <[email protected]>
Co-authored-by: Muhammad Osama <[email protected]>
Co-authored-by: Edd <[email protected]>
Co-authored-by: Michael Han <[email protected]>
Co-authored-by: Nino Risteski <[email protected]>
Co-authored-by: Kareem <[email protected]>
Co-authored-by: Datta Nimmaturi <[email protected]>
Co-authored-by: Z <[email protected]>
Co-authored-by: tastelikefeet <[email protected]>
Co-authored-by: AminWhat <[email protected]>
Co-authored-by: Zhe Zhang <[email protected]>
danielhanchen added a commit that referenced this pull request Jan 31, 2025
(The squashed commit message repeats the Jan 20 changelog above, adds the single entry below, and carries the same sign-off and co-author trailer.)

* Update mapper.py
danielhanchen added a commit that referenced this pull request Feb 6, 2025
(The squashed commit message repeats the Jan 31 changelog above, then continues:)

* Update gemma.py

* Update gemma.py

* Update gemma.py

* Update gemma.py

* dim fix

* Update _utils.py

* Torch 2.6 support

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Faster inference?

* Update llama.py

* Update llama.py

* Update utils.py

* Update llama.py

* Update llama.py

* Update utils.py

* Update utils.py

* Update utils.py

* Update utils.py

* Update utils.py

* Update utils.py

* Update utils.py

* Update utils.py

* Update utils.py

* Update utils.py

* Update utils.py

* Update utils.py

* Update utils.py

* Update mapper.py

* Fast Inference via vLLM
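
For reference, the generic vLLM entry point that "Fast Inference via vLLM" builds on (plain vLLM API, not Unsloth's wrapper; the model id is illustrative):

```python
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")
params = SamplingParams(temperature=0.8, max_tokens=64)
outputs = llm.generate(["The capital of France is"], params)
print(outputs[0].outputs[0].text)
```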

* Update llama.py

* Update llama.py

* Update utils.py

* Create rl.py

* PatchRL

* Update rl.py

* Update rl.py

* Update rl.py

* PatchRLStatistics

* Update rl.py

* Update rl.py

* Update rl.py

* Update utils.py

* Update utils.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* RL metrics

* Update rl.py

* RL metrics

* Update __init__.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update chat_templates.py

* Update mapper.py

* Fp8 cache
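
The "Fp8 cache" entry likely refers to an fp8 KV cache for the vLLM backend; a sketch using vLLM's documented argument (model id again illustrative):

```python
from vllm import LLM

# kv_cache_dtype="fp8" roughly halves KV-cache memory versus fp16,
# at some accuracy cost.
llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct", kv_cache_dtype="fp8")
```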

* Update llama.py

* Update llama.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update __init__.py

* Update loader.py

(Same sign-off and co-author trailer as the Jan 20 commit above.)
danielhanchen added a commit that referenced this pull request Feb 6, 2025
(The squashed commit message repeats the previous changelog in full, then adds the entries below, with the same sign-off and co-author trailer.)

* Update rl.py

* Update rl.py

* Update _utils.py
danielhanchen added a commit that referenced this pull request Feb 13, 2025
(The squashed commit message repeats the "Phi 4" through "Update _utils.py" span of the changelog above, then continues:)

* Update tokenizer_utils.py

* Update tokenizer_utils.py

* Better TRL handling

* Update rl.py

* Update tokenizer_utils.py

* Auto patching

* Update tokenizer_utils.py

* Update tokenizer_utils.py

* Update tokenizer_utils.py

* Update rl.py

* Update tokenizer_utils.py

* Update rl.py

* Update tokenizer_utils.py

* Update tokenizer_utils.py

* Update tokenizer_utils.py

* Update tokenizer_utils.py

* Update tokenizer_utils.py

* Update tokenizer_utils.py

* Update tokenizer_utils.py

* Update tokenizer_utils.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update tokenizer_utils.py

* Update rl.py

* Update rl.py

* Update rl.py

* max seq length

* Update rl.py

* Update rl.py

* Patching

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* NEFTune
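
NEFTune, the published technique this bullet names, adds uniform noise to embedding outputs during training; a self-contained sketch of the core idea (not necessarily Unsloth's patch):

```python
import torch

def neftune(embeddings: torch.Tensor, alpha: float = 5.0) -> torch.Tensor:
    # Noise is scaled by alpha / sqrt(seq_len * hidden_dim), per the paper.
    seq_len, dim = embeddings.shape[-2], embeddings.shape[-1]
    scale = alpha / (seq_len * dim) ** 0.5
    return embeddings + torch.empty_like(embeddings).uniform_(-scale, scale)
```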

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Extra replacements

* Update rl_replacements.py

* Update rl.py

* extra RL replacements

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update llama.py

* Update rl_replacements.py

* Update _utils.py

* Update loader_utils.py

* Update rl.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* autocast
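
The "autocast" bullet refers to torch's mixed-precision context manager; a minimal self-contained example of the pattern (requires a CUDA device):

```python
import torch
import torch.nn as nn

model = nn.Linear(8, 8).cuda()
x = torch.randn(2, 8, device="cuda")
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
    y = model(x)  # matmuls run in bf16 inside the context
print(y.dtype)  # torch.bfloat16
```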

* Update rl_replacements.py

* Update llama.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update llama.py

* Update rl_replacements.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update rl_replacements.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update pyproject.toml

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update llama.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update llama.py

* Update _utils.py

---------

Signed-off-by: datta0 <[email protected]>
Co-authored-by: AminWhat <[email protected]>
Co-authored-by: Datta Nimmaturi <[email protected]>
Co-authored-by: Zhe Zhang <[email protected]>
danielhanchen added a commit that referenced this pull request Feb 14, 2025
(The squashed commit message repeats the tail of the changelog above, from "Bug fixes" through "Update _utils.py", then continues:)

* Update llama.py

* Update _utils.py

* Update rl_replacements.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update rl_replacements.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

---------

Co-authored-by: Zhe Zhang <[email protected]>