Open-Source Whisper.net
.NET bindings for OpenAI's Whisper, made possible by whisper.cpp
To install Whisper.net with all the available runtimes, run the following command in the Package Manager Console:
```
PM> Install-Package Whisper.net.AllRuntimes
```
Or add a package reference in your .csproj file:
```xml
<PackageReference Include="Whisper.net.AllRuntimes" Version="1.8.1" />
```
Whisper.net is the main package that contains the core functionality but does not include any runtimes. Whisper.net.AllRuntimes includes all available runtimes for Whisper.net.
Runtimes can also be installed individually and combined as needed. For example, to install only the CPU runtime, add the following package references:
```xml
<PackageReference Include="Whisper.net" Version="1.8.1" />
<PackageReference Include="Whisper.net.Runtime" Version="1.8.1" />
</PackageReference>
```
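Equivalently, the packages can be added with the dotnet CLI (run inside your project directory; versions as above):

```shell
dotnet add package Whisper.net --version 1.8.1
dotnet add package Whisper.net.Runtime --version 1.8.1
```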
We also provide a custom-built GPT inside ChatGPT that can answer questions based on this code base, previous issues, and releases. Available here.
Please try asking it before opening a new question here, as it can often help you much faster.
Whisper.net comes with multiple runtimes to support different platforms and hardware acceleration. Below are the available runtimes:
Whisper.net.Runtime is the default runtime. It uses the CPU for inference, is available on all platforms, and does not require any additional dependencies.
- Simple usage example
- Simple usage example (without Async processing)
- NAudio integration for mp3
- NAudio integration for resampled wav
- Simple channel diarization
- Blazor example
- Windows: Microsoft Visual C++ Redistributable for at least Visual Studio 2022 (x64) Download Link
- Windows 11 or Windows Server 2022 (or newer) is required
- Linux: libstdc++6, glibc 2.31
- macOS: TBD
- For x86/x64 platforms, the CPU must support the AVX, AVX2, FMA, and F16C instruction sets. If your CPU does not support them, use the Whisper.net.Runtime.NoAvx runtime instead.
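On Linux x86/x64, one quick way to check for these instruction sets is to inspect the CPU flags the kernel reports (a sketch using `/proc/cpuinfo`; the flag names are as exposed by the Linux kernel):

```shell
# Print which of the required instruction-set flags the CPU advertises.
# If this prints nothing, use Whisper.net.Runtime.NoAvx instead.
grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | grep -E -x 'avx|avx2|fma|f16c' | sort -u
```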
- Windows x86, x64, ARM64
- Linux x64, ARM64, ARM
- macOS x64, ARM64 (Apple Silicon)
- Android
- iOS
- MacCatalyst
- tvOS
- WebAssembly
Whisper.net.Runtime.NoAvx is the runtime for CPUs that do not support AVX instructions.
- Windows: Microsoft Visual C++ Redistributable for at least Visual Studio 2022 (x64) Download Link
- Windows 11 or Windows Server 2022 (or newer) is required
- Linux: libstdc++6, glibc 2.31
- macOS: TBD
- Windows x86, x64, ARM64
- Linux x64, ARM64, ARM
Whisper.net.Runtime.Cuda contains the native whisper.cpp library with NVidia CUDA support enabled.
- Everything from the Whisper.net.Runtime prerequisites
- NVidia GPU with CUDA support
- CUDA Toolkit (>= 13.0.1)
- Windows x64
- Linux x64
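To confirm the CUDA stack is visible on your machine before enabling this runtime, the standard diagnostic commands shipped with the CUDA Toolkit and driver can be used (these are environment-dependent and only work where CUDA is installed):

```shell
nvcc --version   # reports the installed CUDA Toolkit version
nvidia-smi       # reports the driver version and visible GPUs
```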
Whisper.net.Runtime.CoreML contains the native whisper.cpp library with Apple CoreML support enabled.
- macOS x64, ARM64 (Apple Silicon)
- iOS
- MacCatalyst
Whisper.net.Runtime.OpenVino contains the native whisper.cpp library with Intel OpenVINO support enabled.
- Everything from the Whisper.net.Runtime prerequisites
- OpenVino Toolkit (>= 2024.4)
- Windows x64
- Linux x64
Whisper.net.Runtime.Vulkan contains the native whisper.cpp library with Vulkan support enabled.
- Everything from the Whisper.net.Runtime prerequisites
- Vulkan Toolkit (>= 1.4.321.1)
- Windows x64
You can install and use multiple runtimes in the same project. The runtime will be automatically selected based on the platform you are running the application on and the availability of the native runtime.
The following order of priority will be used by default:
1. Whisper.net.Runtime.Cuda (NVidia devices with all drivers installed)
2. Whisper.net.Runtime.Vulkan (Windows x64 with Vulkan installed)
3. Whisper.net.Runtime.CoreML (Apple devices)
4. Whisper.net.Runtime.OpenVino (Intel devices)
5. Whisper.net.Runtime (CPU inference)
6. Whisper.net.Runtime.NoAvx (CPU inference without AVX support)
To change the order or force a specific runtime, set the RuntimeLibraryOrder on the RuntimeOptions:
```csharp
RuntimeOptions.RuntimeLibraryOrder =
[
    RuntimeLibrary.CoreML,
    RuntimeLibrary.OpenVino,
    RuntimeLibrary.Cuda,
    RuntimeLibrary.Cpu
];
```

```mermaid
graph LR
A[Your App / Tests] --> B["Whisper.net (managed)"]
B --> C[NativeLibraryLoader]
C --> D{Probe runtimes in priority}
D -->|Cuda available| R1[Whisper.net.Runtime.Cuda]
D -->|Vulkan available| R2[Whisper.net.Runtime.Vulkan]
D -->|CoreML available| R3[Whisper.net.Runtime.CoreML]
D -->|OpenVINO available| R4[Whisper.net.Runtime.OpenVino]
D -->|"CPU (AVX)"| R5[Whisper.net.Runtime]
D -->|"CPU (No AVX)"| R6[Whisper.net.Runtime.NoAvx]
R1 --> E[whisper.cpp + ggml + CUDA]
R2 --> E
R3 --> E
R4 --> E
R5 --> E
R6 --> E
E --> F[(OS / Drivers / Hardware)]
```
Notes
- The loader selects the first compatible runtime it can find, based on the default priority or your overridden RuntimeOptions.RuntimeLibraryOrder.
- The native libraries can come from any source as long as they are compatible and placed in the expected runtimes folder layout (see "Building The Runtime").
- Whisper.net can run with any compatible compilation of the native whisper.cpp libraries; the package Whisper.net.Runtime is just one of the possible builds we publish.
- You may build your own native binaries (CPU, CUDA, CoreML, OpenVINO, Vulkan, NoAvx) and use them with Whisper.net as long as their files are arranged under ./runtimes in the same layout as our NuGet packages. The NativeLibraryLoader will probe them at runtime.
- For reproducible builds, you can use the attached GitHub workflows as references or entry points to produce artifacts: .github/workflows/ (e.g., dotnet.yml, dotnet-noavx.yml, dotnet-maui.yml). These workflows compile and package native libraries across platforms and can be adapted for your needs.
Whisper.net follows semantic versioning.
Starting from version 1.8.0, Whisper.net does not follow the same versioning scheme as whisper.cpp, which creates releases based on specific commits in their master branch (e.g., b2254, b2255).
To track the whisper.cpp version used in a specific Whisper.net release, you can check the whisper.cpp submodule. The commit hash for the tag associated with the release will indicate the corresponding whisper.cpp version.
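For example, the pinned whisper.cpp commit for a checkout can be read from the submodule state (a sketch assuming a recursive clone of the repository; the submodule path `whisper.cpp` is as described above):

```shell
git clone --recursive https://github.com/sandrohanea/whisper.net.git
cd whisper.net
# Prints the commit hash of whisper.cpp that this Whisper.net checkout is pinned to.
git submodule status whisper.cpp
```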
Whisper.net uses Ggml models to perform speech recognition and translation. You can find more about Ggml models here.
For easier integration, Whisper.net provides a Downloader using Hugging Face.
```csharp
var modelName = "ggml-base.bin";
if (!File.Exists(modelName))
{
    using var modelStream = await WhisperGgmlDownloader.Default.GetGgmlModelAsync(GgmlType.Base);
    using var fileWriter = File.OpenWrite(modelName);
    await modelStream.CopyToAsync(fileWriter);
}
```

- HF_TOKEN
- Optional. If set, Whisper.net will add an Authorization header when downloading models from Hugging Face to avoid rate limiting.
- Example:
  - Bash: `export HF_TOKEN=hf_xxx`
  - PowerShell: `$env:HF_TOKEN = "hf_xxx"`
```csharp
using var whisperFactory = WhisperFactory.FromPath("ggml-base.bin");

using var processor = whisperFactory.CreateBuilder()
    .WithLanguage("auto")
    .Build();

using var fileStream = File.OpenRead(wavFileName);

await foreach (var result in processor.ProcessAsync(fileStream))
{
    Console.WriteLine($"{result.Start}->{result.End}: {result.Text}");
}
```

You can find the documentation and code samples here.
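If you prefer not to use async enumeration, segments can also be consumed through a callback with synchronous processing. This is a minimal sketch assuming the `WithSegmentEventHandler` builder method and the synchronous `Process(Stream)` overload; see the "Simple usage example (without Async processing)" sample for the maintained version:

```csharp
using System;
using System.IO;
using Whisper.net;

using var whisperFactory = WhisperFactory.FromPath("ggml-base.bin");

using var processor = whisperFactory.CreateBuilder()
    .WithLanguage("auto")
    // Invoked once per recognized segment during Process().
    .WithSegmentEventHandler(segment =>
        Console.WriteLine($"{segment.Start}->{segment.End}: {segment.Text}"))
    .Build();

using var fileStream = File.OpenRead("sample.wav");
processor.Process(fileStream); // blocks until the whole stream is processed
```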
- Development environment setup notes are available in DEVELOPMENT.md.
For instructions on running the test suites locally (including required .NET SDKs and optional environment variables like HF_TOKEN), see tests/README.md.
- Offline/local alternative: You can run tests fully locally without network by pre-downloading all ggml models required by tests and pointing tests to them via WHISPER_TEST_MODEL_PATH.
- MAUI tests use the Dotnet XHarness CLI to drive emulators/simulators. Docs: https://github.com/dotnet/xharness
- Native runtimes: By default, tests use the locally built native binaries; see "Building The Runtime" in DEVELOPMENT.md and ensure the output matches the expected runtimes layout.
MIT License. See LICENSE for details.