Ethically hack any website
with the power of AI.
Open source. One command. Runs on any AI model.
Paste this in your terminal. It clones Mantis, installs the right files into your current project, and asks once which AI harness you want (Claude Code, OpenCode, or both).
curl -fsSL https://mantishack.com/install.sh | bash
Then open your AI tool in that folder (claude or opencode) and type /mantis target.example.com.
Paste this into Claude Code, OpenCode, Cursor, or any agent with shell access:
Install Mantis here: curl -fsSL https://mantishack.com/install.sh | bash
Want the agent to remember how to install Mantis next time? Drop the
install-mantis skill into ~/.claude/skills/install-mantis/SKILL.md — one curl away.
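If you'd rather set the skill up by hand, the shape is roughly this. The path comes from the line above; the SKILL.md body below is an illustrative sketch, not the official file, and the frontmatter fields follow the usual Claude Code skill convention:

```shell
# Create the install-mantis skill by hand. The SKILL.md content here is
# illustrative; the official skill file may differ.
mkdir -p "$HOME/.claude/skills/install-mantis"
cat > "$HOME/.claude/skills/install-mantis/SKILL.md" <<'EOF'
---
name: install-mantis
description: Install Mantis into the current project
---
Run this in the project root:

    curl -fsSL https://mantishack.com/install.sh | bash
EOF
```

After that, any Claude Code session can pick the skill up without you re-explaining the install step.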
You give Mantis a website. It explores the site, tries known attack patterns, double-checks every finding, and hands you a submission-ready report. You don't need to know hacking, prompts, or which model to use.
One command: /mantis example.com. No config files, no prompt engineering, no security knowledge required.
Mantis maps the website, picks the most promising spots, and runs the same techniques professional bug-bounty hunters use.
Every finding is re-checked three times to filter out false alarms. You get a polished report you can paste straight into HackerOne, Bugcrowd, or Intigriti.
From one command to a finished report. No babysitting, no manual prompting.
Cover more programs with less manual grind. Mantis handles recon and the first pass — you focus on high-leverage finds.
Run it against your own apps in staging. Find issues before someone else does. Plug it into CI to catch regressions.
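For the CI idea, here is one way a gate could look. The report path and format are assumptions for illustration (the docs don't specify them); the sketch treats a scan as leaving a markdown report with confirmed findings as bullets under a "## Findings" heading:

```shell
#!/bin/sh
# Sketch of a CI gate around a Mantis run. Assumes (not confirmed by the
# docs) that a scan leaves a markdown report with confirmed findings
# listed as "- " bullets under a "## Findings" heading.
fail_on_findings() {
  report="$1"
  # Count bullet lines inside the Findings section only.
  count=$(awk '/^## Findings/{f=1;next} /^## /{f=0} f&&/^- /{n++} END{print n+0}' "$report")
  if [ "$count" -gt 0 ]; then
    echo "Mantis reported $count finding(s); failing the build." >&2
    return 1
  fi
  echo "No confirmed findings."
}

# Example: a report with one finding trips the gate.
printf '## Findings\n- IDOR on an orders endpoint\n' > /tmp/mantis-report.md
fail_on_findings /tmp/mantis-report.md || echo "gate tripped"
```

Wire `fail_on_findings` after the scan step and a regression fails the pipeline instead of shipping.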
Ever wondered how secure your own site is? Run it on your own domain — Mantis writes plain-language explanations.
The pipeline mirrors a praying-mantis hunt: watch quietly, plan the strike, attack, secure the catch.
Map every subdomain, page, API, and login. Understand the target before touching it.
Pick which bugs are most likely. Choose techniques based on the site's tech stack.
Run attacks in parallel. Re-check every hit three times so you only see real bugs.
Score severity, write reproduction steps, draft a fix. Paste into the bug-bounty platform.
AI models hallucinate. Mantis is engineered around that — every finding has to survive three skeptical re-checks before it reaches you.
Skeptical, balanced, and fresh-eyes reviewers re-run every PoC. False positives get filtered out automatically.
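The filtering effect is easy to see in miniature. This is a conceptual sketch only (Mantis's real reviewers are AI agents with distinct personas, not a retry loop): a finding survives only if its proof-of-concept reproduces on three separate attempts.

```shell
#!/bin/sh
# Conceptual sketch of the triple re-check: a finding counts only if its
# proof-of-concept command succeeds on all three attempts. Illustrative
# only; Mantis's actual re-checks are done by reviewer agents.
confirm_finding() {
  poc_cmd="$1"
  passes=0
  for attempt in 1 2 3; do
    sh -c "$poc_cmd" >/dev/null 2>&1 && passes=$((passes + 1))
  done
  [ "$passes" -eq 3 ]   # all three re-checks must pass
}

confirm_finding "true"  && echo "reproducible: report it"
confirm_finding "false" || echo "flaky: filtered out"
```

A flaky PoC fails at least one attempt and never reaches the report.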
Seven clearly defined phases. Each writes its progress to disk, so you can pause, resume, or inspect at any time.
Multiple hunters work different parts of the site simultaneously. Hours of one-at-a-time testing finish in minutes.
Built-in playbooks for Cloudflare, AWS WAF, GraphQL, JWT, OAuth, Firebase, Next.js, WordPress. Mantis picks the right tricks for the stack it sees.
A scope guard refuses to touch anything outside the target you specify. Self-defense rules mean a malicious site can't trick Mantis into running attacker code.
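The core of a scope check fits in a few lines. This is a conceptual sketch, not Mantis's actual guard: a host is in scope only if it equals the declared target or is one of its subdomains, which also blocks lookalike domains.

```shell
#!/bin/sh
# Conceptual sketch of a scope guard; the real enforcement lives inside
# Mantis. A host is in scope only if it is the target itself or a
# subdomain of it.
in_scope() {
  target="$1"; host="$2"
  [ "$host" = "$target" ] && return 0
  case "$host" in
    *."$target") return 0 ;;   # any subdomain of the target
  esac
  return 1
}

in_scope example.com api.example.com && echo "allowed"
in_scope example.com evil-example.com || echo "blocked"   # lookalike, no dot boundary
```

Note that evil-example.com is rejected: the suffix match requires a literal dot before the target, so lookalike registrations don't slip through.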
Claude, GPT-5, Gemini, DeepSeek, Llama, or local models on your laptop. Swap models with one line in a config file.
Mantis isn't a new app to learn. It plugs into the AI coding tools you already have.
Use OpenAI, Google, OpenRouter, or local models. Pick a different model for each step if you want.
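As a rough picture of per-step model selection, something like the following. The file name, key names, and model labels here are invented for illustration; the model picker docs define the real schema.

```shell
# Hypothetical per-phase model config, written out as a heredoc. Format
# and names are illustrative only; see the model picker docs for the
# real schema.
cat > mantis-models.example <<'EOF'
recon:  gemini-flash    # cheap, high-volume site mapping
plan:   claude-opus     # deep reasoning about attack paths
attack: gpt-5           # PoC generation
report: claude-haiku    # fast write-up
EOF
```

The point is the shape: cheap, fast models for high-volume phases, stronger models where reasoning matters.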
Model picker →
Any tool that supports MCP can use Mantis. Point it at the server and it just works.
Adapter docs →
For the curious: here's the full system on one diagram. You don't need to understand this to use Mantis.
Do I need to be a security expert to use this?
No. Mantis writes the reports in plain language and explains what each bug means and how to fix it. If you can run one terminal command, you can use it.
Is this legal?
Only run Mantis against websites you own, or against bug-bounty programs that have explicitly invited testing (HackerOne, Bugcrowd, Intigriti, etc.). Running it on someone else's site without permission is illegal in most countries. Mantis includes a scope guard, but you are responsible for what you point it at.
How much does it cost?
Mantis itself is free and open source. You'll pay the AI provider for the model you use (Claude, OpenAI, Gemini, etc.) — typically a few cents to a few dollars per scan. Use local Ollama models and the AI cost is zero.
Which model should I use?
For best results, Claude Opus or GPT-5. For cheaper runs, Claude Haiku or Gemini Flash work well. The model picker docs have specific recommendations per phase.
How is Mantis different from a normal vulnerability scanner?
Scanners like Nuclei or Burp run a fixed list of checks. Mantis uses an AI model to reason about each specific website — picking attack paths based on what tech the site uses, chaining bugs together, and writing the report with full context. Think "automated junior pentester", not "rule-based scanner".
Will it find every bug?
No — no tool will. Mantis catches the most common high-impact classes: broken access control, authentication issues, business logic flaws, known framework bugs. Subtle bugs that require deep domain knowledge still need a human. Mantis is the first pass, not the last word.
How do I know the findings are real?
Each finding is re-tested three times by separate skeptical reviewers before it reaches you. False positives still happen, but they're rare. Every report includes a step-by-step reproduction you can verify yourself in 30 seconds.