diff --git a/README.md b/README.md
index 238a0c958..ce822e12b 100644
--- a/README.md
+++ b/README.md
@@ -40,6 +40,7 @@ length of 512 tokens:
 - [Distributed Tracing](#distributed-tracing)
 - [gRPC](#grpc)
 - [Local Install](#local-install)
+  - [Apple Silicon (Homebrew)](#apple-silicon-homebrew)
 - [Docker Build](#docker-build)
 - [Apple M1/M2 Arm](#apple-m1m2-arm64-architectures)
 - [Examples](#examples)
@@ -492,6 +493,22 @@ grpcurl -d '{"inputs": "What is Deep Learning"}' -plaintext 0.0.0.0:8080 tei.v1.
 
 ## Local install
 
+### Apple Silicon (Homebrew)
+
+On Apple Silicon (M1/M2/M3/M4), you can install a prebuilt binary via Homebrew:
+
+```shell
+brew install text-embeddings-inference
+```
+
+Then launch Text Embeddings Inference with Metal acceleration:
+
+```shell
+model=Qwen/Qwen3-Embedding-0.6B
+
+text-embeddings-router --model-id $model --port 8080
+```
+
 ### CPU
 
 You can also opt to install `text-embeddings-inference` locally.
diff --git a/docs/source/en/local_metal.md b/docs/source/en/local_metal.md
index b9e110b26..ce7316c53 100644
--- a/docs/source/en/local_metal.md
+++ b/docs/source/en/local_metal.md
@@ -17,7 +17,26 @@ rendered properly in your Markdown viewer.
 # Using TEI locally with Metal
 
 You can install `text-embeddings-inference` locally to run it on your own Mac with Metal support.
-Here are the step-by-step instructions for installation:
+
+## Homebrew (Apple Silicon)
+
+On Apple Silicon (M1/M2/M3/M4), you can install a prebuilt binary via Homebrew:
+
+```shell
+brew install text-embeddings-inference
+```
+
+Then launch Text Embeddings Inference:
+
+```shell
+model=Qwen/Qwen3-Embedding-0.6B
+
+text-embeddings-router --model-id $model --port 8080
+```
+
+## Build from source
+
+Alternatively, you can build from source. Here are the step-by-step instructions:
 
 ## Step 1: Install Rust
 
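
The launch snippets added above can be sanity-checked once the router reports it is listening: TEI exposes a REST `/embed` endpoint, and a request shaped like the one below should return a JSON array containing one embedding vector per input. A minimal sketch, assuming the server from the snippet above is running on port 8080 (the port passed via `--port`):

```shell
# Assumes text-embeddings-router is already serving on 127.0.0.1:8080.
curl 127.0.0.1:8080/embed \
    -X POST \
    -d '{"inputs": "What is Deep Learning?"}' \
    -H 'Content-Type: application/json'
```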