
Qwen3


@danielhanchen released this 02 May 16:13

Qwen3 support + bug fixes

Please update Unsloth via pip install --upgrade --force-reinstall unsloth unsloth_zoo

Qwen3 notebook: https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen3_(14B)-Reasoning-Conversational.ipynb
GRPO with Qwen3 notebook: https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen3_(4B)-GRPO.ipynb

There are also many bug fixes in this release!

The 30B MoE (Qwen3-30B-A3B) is also fine-tunable in Unsloth:

from unsloth import FastModel
import torch
model, tokenizer = FastModel.from_pretrained(
    model_name = "unsloth/Qwen3-30B-A3B",
    max_seq_length = 2048, # Choose any for long context!
    load_in_4bit = True,  # 4 bit quantization to reduce memory
    load_in_8bit = False, # [NEW!] A bit more accurate, uses 2x memory
    full_finetuning = False, # [NEW!] We have full finetuning now!
    # token = "hf_...", # use one if using gated models
)
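
After loading, LoRA adapters can be attached before training. The sketch below is illustrative only and follows the usual pattern from Unsloth's notebooks; the rank, target modules, and other values are example settings, not release-specific recommendations.

# Minimal LoRA sketch (illustrative values; see the linked notebooks for the
# exact settings used there). Assumes the model from the snippet above.
model = FastModel.get_peft_model(
    model,
    r = 16, # LoRA rank; higher = more trainable parameters
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    lora_alpha = 16,
    lora_dropout = 0, # 0 is optimized in Unsloth
    bias = "none",    # "none" is optimized in Unsloth
    use_gradient_checkpointing = "unsloth", # reduces VRAM for long context
    random_state = 3407,
)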

What's Changed

New Contributors

Full Changelog: 2025-03...May-2025