# AI Penetration Testing Assistant

License: GPL v3 Python 3.8+ CI Status Build Status UI Tests

An intelligent desktop application that assists security professionals with penetration testing by leveraging Large Language Models (LLMs) to interpret high-level security goals and suggest specific commands for execution.

## 📸 Screenshots

### Main Application Interface

*AI Pentester Main Window (screenshot)*

The application features a dark-themed, professional interface with:

- **Command Input Area**: Enter high-level security goals at the bottom
- **Output Display**: Real-time command output and LLM reasoning in the main window
- **Menu Bar**: Access to session management, export, and settings
- **Status Messages**: Helpful tips and keyboard shortcuts displayed at startup

## ⚠️ Security Warning

This tool executes real security testing commands on your system. **Use with extreme caution!**

- Only use on systems you own or have explicit permission to test
- Carefully review every command before approving execution
- This tool is for authorized security testing and educational purposes only
- Misuse of this tool may be illegal in your jurisdiction

## 🌟 Features

- **AI-Powered Command Suggestions**: Uses LLMs to break down high-level security goals into specific executable commands
- **Web Search Integration**: Can query the web for information about security tools and techniques
- **User Approval Required**: All commands require explicit user approval before execution
- **Real-Time Output**: Streams command output in real time for immediate feedback
- **Conversation Context**: Maintains conversation history for context-aware suggestions
- **Session Management**: Save and load conversation sessions for later review or continuation
- **Export Functionality**: Export conversations to text files for documentation
- **Dark Theme UI**: Professional hacker-themed interface built with PyQt6
- **Keyboard Shortcuts**: Efficient workflow with shortcuts (`Ctrl+S` to save, `Ctrl+L` to load, etc.)
- **Configuration Management**: Easy setup with `.env` file configuration

## 🏗️ Architecture

The application consists of four main components:

1. **Orchestrator**: Manages LLM interactions and decision-making
2. **Toolbelt**: Executes approved shell commands with safety measures
3. **Worker**: Handles background tasks to keep the UI responsive
4. **MainWindow**: PyQt6 GUI for user interaction and command approval
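The split between the Toolbelt and the Worker exists so that command output can stream back to the UI without blocking it. A minimal, standalone sketch of that streaming pattern (the function name and callback shape are illustrative, not the project's actual classes):

```python
import subprocess
import sys

def run_streaming(argv, on_line):
    """Run a command and invoke on_line for each line of output as it arrives.
    A sketch of a Toolbelt-style executor; the real interface may differ."""
    proc = subprocess.Popen(
        argv,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,  # merge stderr so the UI sees everything
        text=True,
    )
    for line in proc.stdout:       # yields lines as the child produces them
        on_line(line.rstrip("\n"))
    return proc.wait()             # exit code

# Example: collect lines from a trivial child process
lines = []
run_streaming([sys.executable, "-c", "print('scan complete')"], lines.append)
```

In the real application the `on_line` callback would forward each line to the GUI thread (e.g. via a Qt signal) rather than appending to a list.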

### Workflow

```text
User Input → LLM Query → Decision (Command/Search)
    ↓                            ↓
    └─→ Search? → Web Search → LLM Query → Command Suggestion
                                                ↓
                                        User Approval Required
                                                ↓
                                        Execute Command → Display Output
```
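The decision loop above can be sketched in plain Python. Every callable here (`llm_decide`, `web_search`, `ask_approval`, `execute`) is a hypothetical stand-in for the real Orchestrator and Toolbelt interfaces:

```python
def handle_turn(llm_decide, web_search, ask_approval, execute, goal):
    """One turn of the workflow: query the LLM, optionally search the web,
    then require explicit user approval before executing anything.
    All callables are illustrative stand-ins, not the project's actual API."""
    decision = llm_decide(goal)
    if decision["action"] == "search":
        # The LLM asked for more context first; search, then re-query.
        results = web_search(decision["query"])
        decision = llm_decide(goal, context=results)
    command = decision["command"]
    if not ask_approval(command):
        return None  # denied: the real app asks the LLM for an alternative
    return execute(command)
```

The key property is that `execute` is unreachable without `ask_approval` returning true, which mirrors the "User Approval Required" gate in the diagram.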

## 📋 Prerequisites

- Python 3.8 or higher
- **Operating System**: Linux, macOS, or Windows
- **LLM Server**: Local or remote OpenAI-compatible API endpoint
  - Examples: LM Studio, Ollama, OpenAI API, or similar
- **API Keys**:
  - Tavily API key (for web search functionality): get one at [tavily.com](https://tavily.com)
  - Or use the included Tavily Clone setup (requires Node.js)

## 🚀 Installation

### Option 1: Basic Installation (Using Tavily API)

1. Clone the repository:

   ```bash
   git clone https://github.com/GizzZmo/Ai-pentester.git
   cd Ai-pentester
   ```

2. Install Python dependencies:

   ```bash
   pip install -r requirements.txt
   ```

3. Configure API endpoints. Copy the example configuration file:

   ```bash
   cp .env.example .env
   ```

   Edit `.env` and set your configuration:

   ```env
   TAVILY_API_KEY=your_tavily_api_key_here
   LLM_API_URL=http://localhost:1234/v1/chat/completions
   LLM_API_KEY=not-needed-for-local
   ```

4. Start your LLM server (if using a local model):

   - For LM Studio: start the server and load your preferred model
   - For Ollama: run `ollama serve` and load a model
   - For OpenAI: use `https://api.openai.com/v1/chat/completions` as the URL

5. Run the application:

   ```bash
   python main.py
   ```

### Option 2: Using Tavily Clone (No API Key Required)

If you prefer to run without a Tavily API key, you can use the included Tavily Clone server:

1. Run the installation script:

   ```bash
   chmod +x install.sh
   ./install.sh
   ```

2. Configure the backend:

   - Open `tavily_clone_project/backend/server.js`
   - Add your SerpApi API key (get one at [serpapi.com](https://serpapi.com))

3. Start the Tavily Clone server (Terminal 1):

   ```bash
   cd tavily_clone_project/backend
   node server.js
   ```

   Note: the first run downloads AI models and may take several minutes.

4. Start the frontend (Terminal 2):

   ```bash
   cd tavily_clone_project/frontend
   npm start
   ```

5. Run the main application (Terminal 3):

   ```bash
   python main_test.py
   ```

## 📖 Usage

### Basic Workflow

1. **Start the application**: run `python main.py`
2. **Enter your security goal**: type a high-level objective in the input box, such as:
   - "Scan localhost for open web ports"
   - "Check for SQL injection vulnerabilities on example.com"
   - "Enumerate subdomains of target.com"
3. **Review LLM reasoning**: the AI explains its decision-making process
4. **Approve or deny the command**:
   - **Execute**: runs the command and displays output
   - **Deny**: asks the LLM for an alternative approach
5. **View results**: command output appears in real time in the log display
6. **Continue the conversation**: the LLM maintains context for follow-up actions

### Keyboard Shortcuts

- `Ctrl+Enter`: Submit input
- `Ctrl+S`: Save session to file
- `Ctrl+L`: Load session from file
- `Ctrl+E`: Export conversation to text
- `Ctrl+K`: Clear conversation
- `Ctrl+C`: Copy log to clipboard
- `Ctrl+Q`: Quit application

### Session Management

**Save a Session:**

- Click *File > Save Session* or press `Ctrl+S`
- Choose a location and filename (automatically timestamped)
- The session includes conversation history and executed commands

**Load a Session:**

- Click *File > Load Session* or press `Ctrl+L`
- Select a previously saved session file
- The conversation history will be restored

**Export a Conversation:**

- Click *File > Export Conversation* or press `Ctrl+E`
- Save as a text file for documentation or review
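Under the hood, a saved session is just serialized conversation history. A sketch of the save/load round trip, assuming a JSON on-disk format (the project's actual format and filename scheme may differ):

```python
import json
import time
from pathlib import Path

def save_session(history, directory="."):
    """Write conversation history to a timestamped JSON file (Ctrl+S).
    The filename pattern and schema here are illustrative assumptions."""
    name = time.strftime("session_%Y%m%d_%H%M%S.json")
    path = Path(directory) / name
    path.write_text(json.dumps({"history": history}, indent=2))
    return path

def load_session(path):
    """Restore a previously saved session (Ctrl+L)."""
    return json.loads(Path(path).read_text())["history"]
```

Keeping the format as plain JSON means an exported session can also be inspected or diffed with ordinary text tools.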

### Example Sessions

**Port Scanning:**

```text
User: "Scan localhost for open web ports"
LLM: "I will run an Nmap scan with service version detection..."
Command: nmap -sV -p 80,443,8000-8100 localhost
[User approves]
Output: [Nmap scan results...]
```

**Information Gathering:**

```text
User: "Find subdomains for example.com"
LLM: "I need to search for the best subdomain enumeration tools"
[Performs web search]
LLM: "I will use Subfinder for subdomain enumeration..."
Command: subfinder -d example.com
```

## 🛠️ Configuration

### Environment-Based Configuration

The application reads its configuration from a `.env` file. This is the recommended approach:

1. Copy `.env.example` to `.env`:

   ```bash
   cp .env.example .env
   ```

2. Edit `.env` with your settings:

   ```env
   TAVILY_API_KEY=your_tavily_api_key_here
   LLM_API_URL=http://localhost:1234/v1/chat/completions
   LLM_API_KEY=not-needed-for-local
   ```
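The application presumably loads these values with a library such as python-dotenv; this stdlib-only sketch shows the `KEY=VALUE` format the file uses (the precedence rule, environment over file, is an assumption):

```python
import os

def load_env(path=".env"):
    """Minimal .env loader: KEY=VALUE lines, blank lines and '#' comments
    ignored. Illustrative only; the project likely uses python-dotenv."""
    values = {}
    try:
        with open(path) as fh:
            for raw in fh:
                line = raw.strip()
                if not line or line.startswith("#") or "=" not in line:
                    continue
                key, _, value = line.partition("=")
                values[key.strip()] = value.strip()
    except FileNotFoundError:
        pass  # missing file just means no overrides
    # Assume real environment variables take precedence over the file
    return {**values, **{k: os.environ[k] for k in values if k in os.environ}}
```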

### LLM Configuration

The application works with any OpenAI-compatible API. Configure it in the `.env` file:

```env
LLM_API_URL=your_endpoint_here
LLM_API_KEY=your_key_here  # May not be needed for local models
```

**Popular LLM options:**

- LM Studio (local): `http://localhost:1234/v1/chat/completions`
- Ollama (local): `http://localhost:11434/v1/chat/completions`
- OpenAI (cloud): `https://api.openai.com/v1/chat/completions`
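All of these endpoints accept the same request shape. A standard-library sketch of building and sending such a request (the model name is illustrative; many local servers ignore it and the key alike):

```python
import json
import urllib.request

def build_chat_request(api_url, api_key, messages, model="local-model"):
    """Build a POST request for any OpenAI-compatible chat endpoint."""
    payload = {"model": model, "messages": messages}
    return urllib.request.Request(
        api_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

def chat_completion(req, timeout=60):
    """Send the request and return the assistant's reply text."""
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Because the request shape is identical everywhere, switching between LM Studio, Ollama, and OpenAI is purely a matter of changing `LLM_API_URL` (and, for the cloud, supplying a real key).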

**Recommended models:**

- WhiteRabbitNeo (security-focused)
- GPT-4 or GPT-3.5-turbo (general purpose)
- Llama 2 70B or larger (local option)
- Mixtral 8x7B (good balance of performance and resource usage)

### Search Configuration

**Using the Tavily API:**

```env
TAVILY_API_KEY=tvly-xxxxx  # Get from tavily.com
```

**Using the Tavily Clone:**

```env
TAVILY_CLONE_URL=http://localhost:3001/api/search
```
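One way the application might select between the two backends at runtime, assuming the clone URL takes precedence when set and that the hosted endpoint is Tavily's `search` API (check the source for the actual rule):

```python
import os

def search_endpoint():
    """Pick the web-search backend from the environment: the local Tavily
    Clone if TAVILY_CLONE_URL is set, otherwise the hosted Tavily API,
    which requires TAVILY_API_KEY. The precedence and the hosted URL are
    assumptions, not confirmed project behaviour."""
    clone = os.getenv("TAVILY_CLONE_URL")
    if clone:
        return clone, None  # the clone needs no Tavily key
    return "https://api.tavily.com/search", os.getenv("TAVILY_API_KEY")
```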

## 🎨 UI Customization

The application uses a dark hacker theme by default. To customize, edit the `dark_theme_stylesheet` variable in the source file.

## 🔧 Troubleshooting

### Common Issues

**Problem**: "Could not connect to LLM"
- **Solution**: Ensure your LLM server is running and the URL is correct
- Check firewall settings
- Verify the model is loaded in your LLM server

**Problem**: "Tavily API error"
- **Solution**: Verify that your API key is valid
- Ensure you have API credits remaining
- Try the Tavily Clone alternative instead

**Problem**: "Command not found" errors
- **Solution**: Install the required security tools (nmap, etc.)
- Ensure tools are in your system PATH
- On Windows, use WSL or install Windows versions of tools

**Problem**: Application freezes
- **Solution**: The LLM might be taking too long to respond
- Try a faster model or increase timeout in the code
- Check your internet connection if using remote API

### Debug Mode

For detailed debugging, check the console output where you launched the application. All API calls and responses are logged.

## 🔒 Security Best Practices

1. **Review All Commands**: Never blindly approve commands without understanding them
2. **Isolated Environment**: Test in a VM or isolated network when possible
3. **Legal Authorization**: Only test systems you own or have permission to test
4. **Keep Updated**: Regularly update security tools and the application
5. **Audit Logs**: Review the conversation history and command outputs
6. **API Key Security**: Never commit API keys to version control
7. **Network Security**: Be cautious when using cloud-based LLM APIs with sensitive data

## 📚 Additional Documentation

- [ARCHITECTURE.md](ARCHITECTURE.md) - Detailed system design
- [ROADMAP.md](ROADMAP.md) - Future development plans
- [CONTRIBUTING.md](CONTRIBUTING.md) - Contribution guidelines
- [WORKFLOWS.md](.github/WORKFLOWS.md) - GitHub Actions workflow documentation

## 🤖 CI/CD & Automation

This project uses GitHub Actions for comprehensive automation:

- **Continuous Integration**: Automated testing across Python 3.8-3.12 on Linux, macOS, and Windows
- **Build & Release**: Automated builds with PyInstaller and distribution package generation
- **UI Testing**: Automated screenshot capture for visual regression testing
- **Asset Management**: Dependency auditing, license compliance, and automated backups
- **Security Scanning**: Automated vulnerability scanning with safety and bandit

For detailed information about workflows, artifacts, and screenshots, see [.github/WORKFLOWS.md](.github/WORKFLOWS.md).

### Artifacts Available

The workflow system generates various artifacts:
- **Build Artifacts**: Platform-specific executables and distribution packages
- **Screenshots**: UI captures from all supported platforms (Linux, macOS, Windows)
  - Screenshots are automatically captured during CI runs
  - View the latest screenshots in the [Actions tab](https://github.com/GizzZmo/Ai-pentester/actions) under workflow artifacts
  - Stored in the repository's `/screenshots` directory
- **Test Reports**: Coverage reports and test results
- **Security Reports**: Vulnerability scans and dependency audits
- **Asset Inventories**: Complete project asset catalogs
- **Documentation Bundles**: Comprehensive documentation packages

Access artifacts from the [Actions tab](https://github.com/GizzZmo/Ai-pentester/actions) after workflow runs.

## 🤝 Contributing

Contributions are welcome! Please see [CONTRIBUTING.md](CONTRIBUTING.md) for details on:
- Code style guidelines
- Pull request process
- Bug reporting
- Feature requests

## 📜 License

This project is licensed under the GNU General Public License v3.0 - see the [LICENSE](LICENSE) file for details.

## ⚖️ Legal Disclaimer

This tool is provided for educational and authorized security testing purposes only. Users are responsible for complying with all applicable laws and regulations. The authors and contributors are not responsible for any misuse or damage caused by this tool.

## 🙏 Acknowledgments

- PyQt6 for the excellent GUI framework
- The open-source security tools community
- LLM providers for making AI accessible

## 📞 Support

- **Issues**: [GitHub Issues](https://github.com/GizzZmo/Ai-pentester/issues)
- **Discussions**: [GitHub Discussions](https://github.com/GizzZmo/Ai-pentester/discussions)

---

**Made with ❤️ for the cybersecurity community**
