Home
DeepShell was initially inspired by the lack of support for Ollama in Shell_GPT. Over time, it evolved into something more than just a command-line productivity tool.
With the ability to self-host models such as DeepSeek-R1, we are taking a significant step toward privacy-oriented AI assistants.
In the default chat mode, users can interact with an LLM. DeepShell supports reading files and folders with simple commands like:
```
open /file_path
open this folder
```
If a folder is opened, DeepShell will read every file it can and pass their contents to the LLM; the text after "and" in the prompt is appended as the user's request.
Prompt:
```
open LICENSE and translate it to Chinese
```
DeepShell will process:
```
File content: <LICENSE content>, User prompt: translate it to Chinese
```
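As a rough illustration of that split, here is a minimal sketch with hypothetical names; DeepShell's actual parser may differ:
```python
# Hypothetical sketch: "open <path> and <request>" is divided at " and ".
def split_open_prompt(prompt: str) -> tuple[str, str]:
    body = prompt.removeprefix("open ").strip()
    path, _, request = body.partition(" and ")
    return path.strip(), request.strip()

path, request = split_open_prompt("open LICENSE and translate it to Chinese")
# path == "LICENSE", request == "translate it to Chinese"
```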
- DeepShell uses embeddings to manage history and maintain context awareness. If a file or folder is opened, a link to it is stored for the duration of the session. Make sure prompts mention specific file names for accurate retrieval (see the sketch below).
- By default, "thoughts" are filtered out, causing a slight delay in rendering (only relevant for DeepSeek models).
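A minimal sketch of what embedding-based history retrieval can look like, using the ollama Python client and the embedding model recommended later on this page; DeepShell's actual bookkeeping may differ:
```python
import ollama  # pip install ollama

# Embed a text with the recommended embedding model.
def embed(text: str) -> list[float]:
    return ollama.embeddings(model="nomic-embed-text:latest", prompt=text)["embedding"]

# Pick the history entry most similar to the query by cosine similarity.
def most_relevant(history: list[str], query: str) -> str:
    def cosine(a: list[float], b: list[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(x * x for x in b) ** 0.5
        return dot / (na * nb)
    q = embed(query)
    return max(history, key=lambda entry: cosine(embed(entry), q))
```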
One can switch the mode for a single prompt while staying in default mode, in order to chat about the produced output.
In default mode, type:
```
@shell last 20 messages from kernel log
```
to trigger the shell generator and execute the command.
Or use the bypass to execute a command directly:
```
!sudo dmesg | tail -n 20
```
After that, one can ask the chatbot about the output.
Additionally, one can use the @code override to generate code snippets without leaving default mode (see the sketch after this list). The @code override:
- Generates code based on the user’s prompt.
- Displays only the code, filtering out everything else.
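A minimal sketch of how these prefixes could be dispatched; function and mode names are hypothetical, not DeepShell's actual routing:
```python
# Hypothetical routing of the prompt prefixes described above.
def dispatch(prompt: str) -> tuple[str, str]:
    if prompt.startswith("!"):
        return "execute", prompt[1:]             # bypass: run directly
    if prompt.startswith("@shell "):
        return "shell", prompt[len("@shell "):]  # generate a shell command
    if prompt.startswith("@code "):
        return "code", prompt[len("@code "):]    # code-only answer
    return "chat", prompt                        # default chat mode

assert dispatch("!sudo dmesg | tail -n 20") == ("execute", "sudo dmesg | tail -n 20")
```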
Shell mode has:
- Command Generator – Suggests shell commands based on the prompt.
- Output Analyzer – Processes and explains terminal output.
User Prompt:
```
update system packages
```
Process:
- The Shell Generator creates the command and places it in the input field for user validation.
- After pressing Enter, the command executes in the terminal.
- If it starts with `sudo`, the program requests a password, storing it only for the session.
- On exit, the password is overwritten with random garbage (using Python's `secrets` module).
- After execution, the user is asked whether to display the output.
- If displayed, the Output Analyzer processes it before rendering.
This cycle repeats until the user quits.
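A minimal sketch of the scrubbing idea, assuming the password is held in a mutable buffer; names are hypothetical and the real code may differ:
```python
import secrets

def scrub(buf: bytearray) -> None:
    # Overwrite every byte of the cached sudo password with random garbage
    # before the reference is dropped, so it never lingers in memory.
    for i in range(len(buf)):
        buf[i] = secrets.randbits(8)

password = bytearray(b"hunter2")
scrub(password)
```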
Users can bypass the command generator by starting a prompt with !.
Example:
```
!sudo ufw status
```
In chat mode, users can also ask questions about command results.
DeepShell relies on Textual for the user interface.
- Copying Output: Hold the Shift key.
- Exiting: Type `exit` or press Ctrl + C.
DeepShell supports piping both input and output.
However, when piping data in, TUI input is disabled.
To work around this, execute commands within chat mode using the ! prefix.
Example:
```
!cat file.txt
```
Then, ask follow-up questions about the result.
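The reason is that piped input replaces the terminal on stdin, so the TUI has no terminal to read keystrokes from. A minimal illustration of the check (not DeepShell's actual startup code):
```python
import sys

# If stdin is a pipe rather than a terminal, interactive input is impossible:
# the TUI would have nothing to read keystrokes from.
if sys.stdin.isatty():
    print("interactive terminal: TUI input available")
else:
    piped = sys.stdin.read()  # consume the piped data instead
    print(f"received {len(piped)} piped characters; TUI input disabled")
```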
Make sure Ollama is installed:
```
curl -fsSL https://ollama.com/install.sh | sh
```
If Ollama is running on the same machine as DeepShell, the models from config/settings.py will be installed automatically.
If not planning to use a vision model, set `VISION_MODEL = ""` (an empty string) to avoid downloading one.
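For orientation, here is a hypothetical slice of config/settings.py. Only DEFAULT_HOST, VISION_MODEL, and PROCESS_IMAGES are named on this page; the other variables and all values are illustrative assumptions:
```python
# config/settings.py -- illustrative sketch; check the repository for the
# real variable names and defaults.
DEFAULT_HOST = "http://localhost:11434"      # where Ollama listens
DEFAULT_MODEL = "deepseek-r1:8b"             # main chat model (assumed name)
EMBEDDING_MODEL = "nomic-embed-text:latest"  # history retrieval (assumed name)
VISION_MODEL = ""                            # empty string: skip the download
PROCESS_IMAGES = False                       # set True to enable image reading
```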
If Ollama is running on a different host:
- Change the `DEFAULT_HOST` variable in `config/settings.py`.
- Make sure the host is properly configured and accessible. Refer to the Ollama documentation.
- Then install all the unique models listed in `config/settings.py` manually:
```
ollama pull deepseek-r1:8b
...
ollama pull nomic-embed-text:latest
ollama serve
```
DeepSeek performs best when using different models for different tasks.
Keep in mind that if thinking is disabled and a reasoning model is running, there will be a delay in the response, as DeepShell will wait for the closing `</think>` tag before rendering the actual response.
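A simplified sketch of that filtering; it assumes the tags arrive as whole tokens in the stream, which real tokenizers don't guarantee:
```python
# Drop everything between <think> and </think> from a streamed reply.
# Rendering only starts once </think> has passed, hence the delay.
def visible_tokens(stream):
    thinking = False
    for token in stream:
        if token == "<think>":
            thinking = True
        elif token == "</think>":
            thinking = False
        elif not thinking:
            yield token

print("".join(visible_tokens(["<think>", "step 1...", "</think>", "Hello!"])))
# -> Hello!
```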
However, one is not restricted to DeepSeek models, as long as the model is capable of producing the required output.
- For SHELL, we suggest non-reasoning models such as `deepseek-coder-v2:16b` or `granite3.2:8b`.
- For EMBEDDING, we recommend `nomic-embed-text:latest`.
- For HELPER, we need something light that is capable of producing JSON output, for example `deepseek-r1:1.5b` or `granite3.2:2b`.
- The SYSTEM model will be used for function calling (currently in development). Our recommendation: `granite3.2:8b`.
- Optionally, DeepShell can read images (set `PROCESS_IMAGES = True` in `settings.py`). The recommended models are `minicpm-v:8b` or `llava:13b`.
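As a sketch of what driving different models per task looks like with the ollama Python client; the task-to-model mapping below just mirrors the suggestions above and is not DeepShell's actual configuration:
```python
import ollama

# Illustrative task-to-model mapping, mirroring the recommendations above.
MODELS = {
    "shell": "granite3.2:8b",
    "helper": "granite3.2:2b",
    "embedding": "nomic-embed-text:latest",
}

reply = ollama.chat(
    model=MODELS["shell"],
    messages=[{"role": "user", "content": "update system packages"}],
)
print(reply["message"]["content"])
```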
DeepShell is in early development, but we strive to keep each commit stable.
- Code contributions, suggestions, and bug reports are welcome.
- Core functionality will always be free and open-source under the GPL-3.0 License.