From 7a3c5bcba9a3c95f0d1c6f08f8b860f5d0ed6257 Mon Sep 17 00:00:00 2001
From: "Artyom V. Peredery"
Date: Thu, 19 Feb 2026 19:09:06 +0500
Subject: [PATCH 1/2] docs: add Homebrew installation to README

---
 README.md | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/README.md b/README.md
index 238a0c958..ce822e12b 100644
--- a/README.md
+++ b/README.md
@@ -40,6 +40,7 @@ length of 512 tokens:
   - [Distributed Tracing](#distributed-tracing)
   - [gRPC](#grpc)
 - [Local Install](#local-install)
+  - [Apple Silicon (Homebrew)](#apple-silicon-homebrew)
 - [Docker Build](#docker-build)
   - [Apple M1/M2 Arm](#apple-m1m2-arm64-architectures)
 - [Examples](#examples)
@@ -492,6 +493,22 @@ grpcurl -d '{"inputs": "What is Deep Learning"}' -plaintext 0.0.0.0:8080 tei.v1.
 
 ## Local install
 
+### Apple Silicon (Homebrew)
+
+On Apple Silicon (M1/M2/M3/M4), you can install a prebuilt binary via Homebrew:
+
+```shell
+brew install text-embeddings-inference
+```
+
+Then launch Text Embeddings Inference with Metal acceleration:
+
+```shell
+model=Qwen/Qwen3-Embedding-0.6B
+
+text-embeddings-router --model-id $model --port 8080
+```
+
 ### CPU
 
 You can also opt to install `text-embeddings-inference` locally.

From 1a5e8461f9563f0df2f6eac94596deabe15dc894 Mon Sep 17 00:00:00 2001
From: "Artyom V. Peredery"
Date: Sat, 21 Feb 2026 21:05:00 +0500
Subject: [PATCH 2/2] docs: add Homebrew to local_metal.md

---
 docs/source/en/local_metal.md | 21 ++++++++++++++++++++-
 1 file changed, 20 insertions(+), 1 deletion(-)

diff --git a/docs/source/en/local_metal.md b/docs/source/en/local_metal.md
index b9e110b26..ce7316c53 100644
--- a/docs/source/en/local_metal.md
+++ b/docs/source/en/local_metal.md
@@ -17,7 +17,26 @@ rendered properly in your Markdown viewer.
 
 # Using TEI locally with Metal
 
 You can install `text-embeddings-inference` locally to run it on your own Mac with Metal support.
-Here are the step-by-step instructions for installation:
+
+## Homebrew (Apple Silicon)
+
+On Apple Silicon (M1/M2/M3/M4), you can install a prebuilt binary via Homebrew:
+
+```shell
+brew install text-embeddings-inference
+```
+
+Then launch Text Embeddings Inference:
+
+```shell
+model=Qwen/Qwen3-Embedding-0.6B
+
+text-embeddings-router --model-id $model --port 8080
+```
+
+## Build from source
+
+Alternatively, you can build from source. Here are the step-by-step instructions:
 
 ## Step 1: Install Rust
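As a sanity check after applying either patch (not part of the patches themselves): once `text-embeddings-router` is listening on port 8080, the install can be exercised through TEI's `/embed` REST endpoint, mirroring the quick-tour request already in the README:

```shell
# Assumes text-embeddings-router is already running on 127.0.0.1:8080
# (see the launch commands added by the patches above).
curl 127.0.0.1:8080/embed \
    -X POST \
    -d '{"inputs":"What is Deep Learning?"}' \
    -H 'Content-Type: application/json'
```

A successful response is a JSON array with one embedding vector per input, confirming that the Homebrew-installed binary serves requests.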