Google has announced that, starting in 2026/2027, developers of all apps installed on certified Android devices
will be required to submit personal identity details directly to Google.
Since the developers of this app do not agree to this requirement, the app will no longer
work on certified Android devices after that time.
Send a coffee to [email protected] or via this link (with fees)
Whisper+ is an input method editor (IME) that provides voice recognition based on the Whisper engine. It works both as a standalone application and as an integrated IME that can be activated, e.g., via the microphone button in HeliBoard. As a standalone app, Whisper+ can also translate any supported language to English.
Besides providing an IME, Whisper+ can also be selected as the system-wide voice input (RecognitionService), and it handles speech recognition requests sent via intent (RecognizerIntent.ACTION_RECOGNIZE_SPEECH).
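For app developers, here is a minimal sketch of how such an intent request could look on the calling side, using the standard Android API; the activity and constant names below are purely illustrative:

```java
import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import android.speech.RecognizerIntent;
import java.util.ArrayList;

public class VoiceInputDemoActivity extends Activity {
    private static final int REQUEST_SPEECH = 1; // illustrative request code

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Fire the standard speech recognition intent; it is handled by an
        // installed recognizer such as Whisper+.
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        startActivityForResult(intent, REQUEST_SPEECH);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        if (requestCode == REQUEST_SPEECH && resultCode == RESULT_OK && data != null) {
            // The recognizer returns one or more candidate transcriptions.
            ArrayList<String> results =
                    data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
            if (results != null && !results.isEmpty()) {
                String recognizedText = results.get(0); // top candidate
            }
        }
    }
}
```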
When you launch Whisper+ for the first time, you need to download a Whisper model from Hugging Face and install it. Once the model is installed, voice recognition works entirely offline, which protects your privacy.
Please note that, for use as voice input (not as IME), there is a separate settings activity, accessible from Android settings (System > Languages > Speech > Voice Input). There you can activate the app as voice input and then tap its settings button.
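For reference, the same settings screen can also be opened from code via the standard settings intent; the helper class below is only an illustrative sketch:

```java
import android.content.Context;
import android.content.Intent;
import android.provider.Settings;

// Opens the system "Voice input" settings screen, where Whisper+ can be
// selected as the active RecognitionService.
public final class VoiceInputSettingsHelper {
    static void openVoiceInputSettings(Context context) {
        Intent intent = new Intent(Settings.ACTION_VOICE_INPUT_SETTINGS);
        // Needed when starting the activity from a non-Activity context.
        intent.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
        context.startActivity(intent);
    }
}
```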
To get the most out of Whisper+, follow these simple tips:
- Press and hold the button while speaking or use automatic mode where available
- Pause briefly before starting to speak
- Speak clearly, loudly, and at a moderate pace
- Note that each recording is limited to 30 seconds
By following these guidelines, you'll enjoy accurate and efficient voice recognition with Whisper+.
This work is licensed under the GPLv3 license, © woheller69
- This app is based on whisperIME, which is published under MIT license
- It uses code and the Whisper ONNX models from RTranslator
- It uses code from Whisper-Android project, published under MIT license
- It uses OpenAI Whisper published under MIT license. Details on Whisper are found here.
- It uses Android VAD, which is published under MIT license
- It uses Opencc4j for Chinese conversions, published under Apache-2.0 license
- At first start you need to download the Whisper model from Hugging Face, which is published under MIT license



