An intelligent desktop application that assists security professionals with penetration testing by leveraging Large Language Models (LLMs) to interpret high-level security goals and suggest specific commands for execution.
The application features a dark-themed, professional interface with:
- Command Input Area: Enter high-level security goals at the bottom
- Output Display: Real-time command output and LLM reasoning in the main window
- Menu Bar: Access to session management, export, and settings
- Status Messages: Helpful tips and keyboard shortcuts displayed at startup
## ⚠️ Warning

This tool executes real security testing commands on your system. Use with extreme caution!
- Only use on systems you own or have explicit permission to test
- Carefully review every command before approving execution
- This tool is for authorized security testing and educational purposes only
- Misuse of this tool may be illegal in your jurisdiction
## ✨ Features

- AI-Powered Command Suggestions: Uses LLMs to break down high-level security goals into specific executable commands
- Web Search Integration: Can query the web for information about security tools and techniques
- User Approval Required: All commands require explicit user approval before execution
- Real-time Output: Streams command output in real-time for immediate feedback
- Conversation Context: Maintains conversation history for context-aware suggestions
- Session Management: Save and load conversation sessions for later review or continuation
- Export Functionality: Export conversations to text files for documentation
- Dark Theme UI: Professional hacker-themed interface built with PyQt6
- Keyboard Shortcuts: Efficient workflow with keyboard shortcuts (Ctrl+S to save, Ctrl+L to load, etc.)
- Configuration Management: Easy setup with .env file configuration
## 🏗️ Architecture

The application consists of four main components:
- Orchestrator: Manages LLM interactions and decision-making
- Toolbelt: Executes approved shell commands with safety measures
- Worker: Handles background tasks to keep UI responsive
- MainWindow: PyQt6 GUI for user interaction and command approval
### Workflow

```
User Input → LLM Query → Decision (Command / Search)
                 │
                 ├─ Search → Web Search → LLM Query → Command Suggestion
                 │                                           │
                 └─ Command ────────────┬────────────────────┘
                                        ↓
                             User Approval Required
                                        ↓
                            Execute Command → Display Output
```
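In code, the loop might look roughly like the following. This is a minimal sketch under assumed names (`ask_llm`, `handle` are hypothetical); the real Orchestrator and Toolbelt classes live in the source tree and will differ in detail:

```python
# Illustrative sketch of the decision loop; names and structure are
# assumptions, not the actual Orchestrator/Toolbelt implementation.
import subprocess

def ask_llm(goal: str) -> dict:
    """Placeholder for a call to the configured LLM endpoint.
    The real orchestrator parses a structured reply from the model."""
    return {
        "action": "command",                      # or "search"
        "value": "nmap -sV -p 80,443 localhost",  # suggested command
        "reasoning": "Scanning common web ports with version detection.",
    }

def handle(goal: str) -> None:
    decision = ask_llm(goal)
    if decision["action"] == "search":
        # Search results would be appended to the context and the LLM
        # queried again until it proposes a concrete command.
        decision = ask_llm(goal + " [with search results]")
    print("LLM reasoning:", decision["reasoning"])
    # Approval gate: nothing runs without explicit user consent.
    if input(f"Execute `{decision['value']}`? [y/N] ").strip().lower() == "y":
        subprocess.run(decision["value"], shell=True)

if __name__ == "__main__":
    handle("Scan localhost for open web ports")
```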
## 📋 Requirements

- Python 3.8 or higher
- Operating System: Linux, macOS, or Windows
- LLM Server: Local or remote OpenAI-compatible API endpoint
  - Examples: LM Studio, Ollama, OpenAI API, or similar
- API Keys:
  - Tavily API key (for web search functionality) - get one at tavily.com
  - Or use the included Tavily Clone setup (requires Node.js)
## 🚀 Installation

1. Clone the repository:

   ```bash
   git clone https://github.com/GizzZmo/Ai-pentester.git
   cd Ai-pentester
   ```

2. Install Python dependencies:

   ```bash
   pip install -r requirements.txt
   ```

3. Configure API endpoints. Copy the example configuration file and edit it:

   ```bash
   cp .env.example .env
   ```

   Then edit `.env` and set your configuration:

   ```env
   TAVILY_API_KEY=your_tavily_api_key_here
   LLM_API_URL=http://localhost:1234/v1/chat/completions
   LLM_API_KEY=not-needed-for-local
   ```

4. Start your LLM server (if using a local model):
   - For LM Studio: start the server and load your preferred model
   - For Ollama: run `ollama serve` and load a model
   - For OpenAI: use `https://api.openai.com/v1/chat/completions` as the URL

5. Run the application:

   ```bash
   python main.py
   ```
### Alternative: Tavily Clone Setup

If you prefer to run without a Tavily API key, you can use the included Tavily Clone server:

1. Run the installation script:

   ```bash
   chmod +x install.sh
   ./install.sh
   ```

2. Configure the backend:
   - Open `tavily_clone_project/backend/server.js`
   - Add your SerpApi API key (get one at serpapi.com)

3. Start the Tavily Clone server (Terminal 1):

   ```bash
   cd tavily_clone_project/backend
   node server.js
   ```

   Note: the first run will download AI models (this may take several minutes).

4. Start the frontend (Terminal 2):

   ```bash
   cd tavily_clone_project/frontend
   npm start
   ```

5. Run the main application (Terminal 3):

   ```bash
   python main_test.py
   ```
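Once the backend is up, you can sanity-check the endpoint from Python. This is a hedged sketch: the JSON body below assumes a simple contract with a `query` field, so check `server.js` for the actual one.

```python
# Quick sanity check of the local Tavily Clone endpoint.
# The request body is an assumption; consult server.js for the real contract.
import requests

resp = requests.post(
    "http://localhost:3001/api/search",
    json={"query": "subdomain enumeration tools"},
    timeout=30,
)
print(resp.status_code)
print(resp.text[:500])  # first 500 characters of the response
```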
## 📖 Usage

1. Start the application: run `python main.py`

2. Enter your security goal: type a high-level objective in the input box, such as:
   - "Scan localhost for open web ports"
   - "Check for SQL injection vulnerabilities on example.com"
   - "Enumerate subdomains of target.com"

3. Review the LLM's reasoning: the AI will explain its decision-making process

4. Approve or deny the command:
   - ✅ Execute: runs the command and displays output
   - ❌ Deny: asks the LLM for an alternative approach

5. View results: command output appears in real time in the log display (see the streaming sketch after this list)

6. Continue the conversation: the LLM maintains context for follow-up actions
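Real-time streaming like this is typically done by reading the child process line by line. A minimal sketch of the idea (not the app's exact Toolbelt code):

```python
# Line-by-line output streaming; the app's Toolbelt/Worker pair applies the
# same idea inside a background thread so the PyQt6 UI stays responsive.
import subprocess

def stream_command(cmd):
    """Run cmd (a list of arguments) and print its output as it arrives."""
    proc = subprocess.Popen(
        cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True
    )
    for line in proc.stdout:
        print(line, end="")  # in the GUI, this would append to the log widget
    return proc.wait()

# "-c 3" is the Linux/macOS ping flag; Windows uses "-n 3".
stream_command(["ping", "-c", "3", "127.0.0.1"])
```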
### Keyboard Shortcuts

- Ctrl+Enter: Submit input
- Ctrl+S: Save session to file
- Ctrl+L: Load session from file
- Ctrl+E: Export conversation to text
- Ctrl+K: Clear conversation
- Ctrl+C: Copy log to clipboard
- Ctrl+Q: Quit application
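In PyQt6, shortcuts like these are usually bound with `QShortcut` or a menu `QAction`. A self-contained demo, illustrative rather than the app's actual wiring:

```python
# Standalone PyQt6 shortcut demo; illustrative, not the app's actual code.
from PyQt6.QtGui import QKeySequence, QShortcut
from PyQt6.QtWidgets import QApplication, QMainWindow

app = QApplication([])
win = QMainWindow()

# Bind Ctrl+S the way a "Save Session" shortcut would be bound.
save = QShortcut(QKeySequence("Ctrl+S"), win)
save.activated.connect(lambda: print("save session"))

quit_sc = QShortcut(QKeySequence("Ctrl+Q"), win)
quit_sc.activated.connect(app.quit)

win.show()
app.exec()
```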
### Session Management

Save a Session:
- Click `File > Save Session` or press `Ctrl+S`
- Choose a location and filename (automatically timestamped)
- The session includes conversation history and executed commands

Load a Session:
- Click `File > Load Session` or press `Ctrl+L`
- Select a previously saved session file
- The conversation history will be restored

Export a Conversation:
- Click `File > Export Conversation` or press `Ctrl+E`
- Save as a text file for documentation or review
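A session file is plausibly just serialized conversation history. A hedged save/load sketch, assuming a JSON format (the app's real schema may differ):

```python
# Hedged save/load sketch assuming sessions are JSON; the application's
# actual session schema may differ.
import json
from datetime import datetime

def save_session(history, path=None):
    path = path or f"session_{datetime.now():%Y%m%d_%H%M%S}.json"
    with open(path, "w", encoding="utf-8") as f:
        json.dump({"history": history}, f, indent=2)
    return path

def load_session(path):
    with open(path, encoding="utf-8") as f:
        return json.load(f)["history"]

history = [
    {"role": "user", "content": "Scan localhost for open web ports"},
    {"role": "assistant", "content": "I will run an Nmap scan..."},
]
print(load_session(save_session(history)))
```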
### Examples

**Port Scanning Example:**

```
User: "Scan localhost for open web ports"
LLM: "I will run an Nmap scan with service version detection..."
Command: nmap -sV -p 80,443,8000-8100 localhost
[User approves]
Output: [Nmap scan results...]
```

**Information Gathering Example:**

```
User: "Find subdomains for example.com"
LLM: "I need to search for the best subdomain enumeration tools"
[Performs web search]
LLM: "I will use Subfinder for subdomain enumeration..."
Command: subfinder -d example.com
```
## ⚙️ Configuration

The application uses a `.env` file for configuration. This is the recommended approach:

1. Copy `.env.example` to `.env`:

   ```bash
   cp .env.example .env
   ```

2. Edit `.env` with your settings:

   ```env
   TAVILY_API_KEY=your_tavily_api_key_here
   LLM_API_URL=http://localhost:1234/v1/chat/completions
   LLM_API_KEY=not-needed-for-local
   ```
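At startup, values like these are typically read with `python-dotenv`; a minimal sketch:

```python
# Minimal sketch of reading the .env values with python-dotenv
# (pip install python-dotenv).
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current directory

LLM_API_URL = os.getenv("LLM_API_URL", "http://localhost:1234/v1/chat/completions")
LLM_API_KEY = os.getenv("LLM_API_KEY", "not-needed-for-local")
TAVILY_API_KEY = os.getenv("TAVILY_API_KEY")

print(LLM_API_URL)
```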
### LLM Configuration

The application works with any OpenAI-compatible API. Configure it in the `.env` file:

```env
LLM_API_URL=your_endpoint_here
LLM_API_KEY=your_key_here  # May not be needed for local models
```

Popular LLM Options:
- LM Studio (local): `http://localhost:1234/v1/chat/completions`
- Ollama (local): `http://localhost:11434/v1/chat/completions`
- OpenAI (cloud): `https://api.openai.com/v1/chat/completions`
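All three endpoints speak the same chat-completions protocol, so one request shape works everywhere. A minimal sketch with `requests`:

```python
# Minimal OpenAI-compatible chat-completions request; works against
# LM Studio, Ollama, or OpenAI with the appropriate URL and key.
import requests

url = "http://localhost:1234/v1/chat/completions"  # LLM_API_URL from .env
headers = {"Authorization": "Bearer not-needed-for-local"}
payload = {
    "model": "local-model",  # LM Studio ignores this; OpenAI needs a real name
    "messages": [
        {"role": "user", "content": "Suggest an nmap command for localhost web ports"}
    ],
}

resp = requests.post(url, json=payload, headers=headers, timeout=120)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```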
Recommended Models:
- WhiteRabbitNeo (security-focused)
- GPT-4 or GPT-3.5-turbo (general purpose)
- Llama 2 70B or higher (local option)
- Mixtral 8x7B (good balance of performance and resource usage)
### Web Search Configuration

Using the Tavily API:

```env
TAVILY_API_KEY=tvly-xxxxx  # Get from tavily.com
```

Using the Tavily Clone:

```env
TAVILY_CLONE_URL=http://localhost:3001/api/search
```
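The search call itself is then a simple POST. A minimal sketch against the Tavily API (swap in `TAVILY_CLONE_URL` to use the clone, whose request body may differ):

```python
# Minimal Tavily search request; for the local clone, replace the URL with
# TAVILY_CLONE_URL (its exact request body may differ from Tavily's).
import os
import requests

resp = requests.post(
    "https://api.tavily.com/search",
    json={"api_key": os.environ["TAVILY_API_KEY"], "query": "subfinder usage"},
    timeout=30,
)
resp.raise_for_status()
for r in resp.json().get("results", []):
    print(r.get("title"), "-", r.get("url"))
```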
## 🎨 UI Customization
The application uses a dark hacker theme by default. To customize, edit the `dark_theme_stylesheet` variable in the source file.
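For reference, a Qt stylesheet of this flavor looks roughly like the following. This is an illustrative snippet, not the app's exact `dark_theme_stylesheet`:

```python
# Illustrative dark-theme stylesheet; the app's dark_theme_stylesheet
# variable will differ in detail.
dark_theme_stylesheet = """
QMainWindow, QWidget { background-color: #121212; color: #00ff7f; }
QTextEdit { background-color: #1e1e1e; border: 1px solid #2e2e2e; }
QPushButton { background-color: #202020; padding: 6px; }
QPushButton:hover { background-color: #2e2e2e; }
"""

# Applied application-wide with: app.setStyleSheet(dark_theme_stylesheet)
```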
## 🔧 Troubleshooting
### Common Issues
**Problem**: "Could not connect to LLM"
- **Solution**: Ensure your LLM server is running and the URL is correct
- Check firewall settings
- Verify the model is loaded in your LLM server
**Problem**: "Tavily API error"
- **Solution**: Check that your API key is valid
- Ensure you have API credits remaining
- Try using the Tavily Clone alternative
**Problem**: "Command not found" errors
- **Solution**: Install the required security tools (nmap, etc.)
- Ensure tools are in your system PATH
- On Windows, use WSL or install Windows versions of tools
**Problem**: Application freezes
- **Solution**: The LLM might be taking too long to respond
- Try a faster model or increase timeout in the code
- Check your internet connection if using remote API
### Debug Mode
For detailed debugging, check the console output where you launched the application. All API calls and responses are logged.
## 🔒 Security Best Practices
1. **Review All Commands**: Never blindly approve commands without understanding them
2. **Isolated Environment**: Test in a VM or isolated network when possible
3. **Legal Authorization**: Only test systems you own or have permission to test
4. **Keep Updated**: Regularly update security tools and the application
5. **Audit Logs**: Review the conversation history and command outputs
6. **API Key Security**: Never commit API keys to version control
7. **Network Security**: Be cautious when using cloud-based LLM APIs with sensitive data
## 📚 Additional Documentation
- [ARCHITECTURE.md](ARCHITECTURE.md) - Detailed system design
- [ROADMAP.md](ROADMAP.md) - Future development plans
- [CONTRIBUTING.md](CONTRIBUTING.md) - Contribution guidelines
- [WORKFLOWS.md](.github/WORKFLOWS.md) - GitHub Actions workflow documentation
## 🤖 CI/CD & Automation
This project uses GitHub Actions for comprehensive automation:
- **Continuous Integration**: Automated testing across Python 3.8-3.12 on Linux, macOS, and Windows
- **Build & Release**: Automated builds with PyInstaller and distribution package generation
- **UI Testing**: Automated screenshot capture for visual regression testing
- **Asset Management**: Dependency auditing, license compliance, and automated backups
- **Security Scanning**: Automated vulnerability scanning with safety and bandit
For detailed information about workflows, artifacts, and screenshots, see [.github/WORKFLOWS.md](.github/WORKFLOWS.md).
### Artifacts Available
The workflow system generates various artifacts:
- **Build Artifacts**: Platform-specific executables and distribution packages
- **Screenshots**: UI captures from all supported platforms (Linux, macOS, Windows)
  - Screenshots are automatically captured during CI runs
  - View the latest screenshots in the [Actions tab](https://github.com/GizzZmo/Ai-pentester/actions) under workflow artifacts
  - Stored in the repository's `/screenshots` directory
- **Test Reports**: Coverage reports and test results
- **Security Reports**: Vulnerability scans and dependency audits
- **Asset Inventories**: Complete project asset catalogs
- **Documentation Bundles**: Comprehensive documentation packages
Access artifacts from the [Actions tab](https://github.com/GizzZmo/Ai-pentester/actions) after workflow runs.
## 🤝 Contributing
Contributions are welcome! Please see [CONTRIBUTING.md](CONTRIBUTING.md) for details on:
- Code style guidelines
- Pull request process
- Bug reporting
- Feature requests
## 📜 License
This project is licensed under the GNU General Public License v3.0 - see the [LICENSE](LICENSE) file for details.
## ⚖️ Legal Disclaimer
This tool is provided for educational and authorized security testing purposes only. Users are responsible for complying with all applicable laws and regulations. The authors and contributors are not responsible for any misuse or damage caused by this tool.
## 🙏 Acknowledgments
- PyQt6 for the excellent GUI framework
- The open-source security tools community
- LLM providers for making AI accessible
## 📞 Support
- **Issues**: [GitHub Issues](https://github.com/GizzZmo/Ai-pentester/issues)
- **Discussions**: [GitHub Discussions](https://github.com/GizzZmo/Ai-pentester/discussions)
---
**Made with ❤️ for the cybersecurity community**
