
PR Description

Fixes #3376

This PR resolves an AttributeError that occurs when loading Qwen3 models with LoRA adapters using FastLanguageModel.from_pretrained(). The error was caused by unsafe attribute access during model compilation.
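For reference, a call of this shape triggered the failure (the model name and arguments below are illustrative, based on the linked issue, and are not part of this PR):

```python
from unsloth import FastLanguageModel

# Illustrative repro: loading a Qwen3 model with a LoRA adapter in one step.
# The exact model name and arguments are assumptions, not taken from this PR.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen3-4B-Instruct-2507-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)
```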

Problem

When compiling certain model types (particularly Qwen3), the code attempted to access attributes that do not exist on some transformers submodules, specifically:

AttributeError: module 'transformers.models.bit.modeling_bit' has no attribute 'Linear'

This prevented users from loading models with LoRA adapters directly, forcing them into a two-stage workflow of loading the base model first and then attaching the adapter.
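A minimal sketch of the guarded-access pattern behind the fix, assuming the compilation code resolves layer classes off transformers submodules by name (the helper below is hypothetical and illustrates the idea, not this PR's actual code):

```python
import importlib

def resolve_layer_class(module_path: str, attr_name: str):
    """Return a module attribute if it exists, else None, instead of raising.
    Hypothetical helper showing the guarded-access pattern."""
    module = importlib.import_module(module_path)
    return getattr(module, attr_name, None)

# Before: direct access crashed on modules that never define `Linear`, e.g.
#   transformers.models.bit.modeling_bit.Linear  -> AttributeError

# After: a missing attribute is skipped rather than aborting compilation.
linear_cls = resolve_layer_class("transformers.models.bit.modeling_bit", "Linear")
if linear_cls is not None:
    ...  # patch / compile against the resolved class
```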
