From 8f8a72ea36583e0a051ebc27688af2ab07b97508 Mon Sep 17 00:00:00 2001
From: Konosuke Sakai
Date: Wed, 1 Jan 2025 00:53:42 +0900
Subject: [PATCH] docs: replace Core ML with OpenVINO

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 079c73b7c04..c268fd0cd2b 100644
--- a/README.md
+++ b/README.md
@@ -293,7 +293,7 @@ This can result in significant speedup in encoder performance. Here are the inst
 The first time run on an OpenVINO device is slow, since the OpenVINO framework will compile the IR
 (Intermediate Representation) model to a device-specific 'blob'. This device-specific blob will get
 cached for the next run.
 
-For more information about the Core ML implementation please refer to PR [#1037](https://github.com/ggerganov/whisper.cpp/pull/1037).
+For more information about the OpenVINO implementation please refer to PR [#1037](https://github.com/ggerganov/whisper.cpp/pull/1037).
 
 ## NVIDIA GPU support