Mantis

Ethically hack any website with the power of AI.

Open source. One command. Runs on any AI model.

Get started → View on GitHub
Hacks completed
Real bug-bounty hunts run with Mantis — and counting.
Install in one command

One line. That's the whole install.

Paste this in your terminal. It clones Mantis, installs the right files into your current project, and asks once which AI harness you want (Claude Code, OpenCode, or both).

curl -fsSL https://mantishack.com/install.sh | bash

Then open your AI tool in that folder (claude or opencode) and type /mantis target.example.com.

Or let an AI do it for you

Paste this into Claude Code, OpenCode, Cursor, or any agent with shell access:

Install Mantis here: curl -fsSL https://mantishack.com/install.sh | bash

Want the agent to remember how to install Mantis next time? Drop the install-mantis skill into ~/.claude/skills/install-mantis/SKILL.md — one curl away.

In plain English

What is Mantis?

You give Mantis a website. It explores the site, tries known attack patterns, double-checks every finding, and hands you a submission-ready report. You don't need to know hacking, prompts, or which model to use.

01

You point it at a site

One command: /mantis example.com. No config files, no prompt-engineering, no security knowledge required.

02

It hunts for real bugs

Mantis maps the website, picks the most promising spots, and runs the same techniques professional bug-bounty hunters use.

03

It writes the report

Every finding is re-checked three times to filter out false alarms. You get a polished report you can paste straight into HackerOne, Bugcrowd, or Intigriti.
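The triple re-check in step 03 can be illustrated with a toy sketch. This is not Mantis's actual code, just the confirmation rule in miniature: a candidate finding survives only if every independent re-check reproduces it.

```python
# Toy illustration of the triple-check idea (not the real Mantis implementation).

def confirm(finding, rechecks):
    """A finding counts only if every independent re-check reproduces it."""
    return all(check(finding) for check in rechecks)

def triage(candidates, rechecks):
    """Split candidate findings into confirmed bugs and false alarms."""
    confirmed = [f for f in candidates if confirm(f, rechecks)]
    rejected = [f for f in candidates if not confirm(f, rechecks)]
    return confirmed, rejected

# Three independent re-checkers (trivial stand-ins for illustration).
rechecks = [lambda f: "false-alarm" not in f for _ in range(3)]

confirmed, rejected = triage(
    ["idor-on-/api/users", "auth-bypass-on-login", "false-alarm-csrf"],
    rechecks,
)
print(confirmed)  # the two reproducible findings
print(rejected)   # the one that failed a re-check
```

Mantis's real re-checkers are full AI reviewers with different prompts (skeptical, balanced, fresh-eyes), but the gating rule is the same: unanimous confirmation, or the finding is dropped.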

A typical run

What it actually looks like.

From one command to a finished report. No babysitting, no manual prompting.

You
/mantis target.example.com
Mantis
Mapping the site… 14 subdomains, 47 API endpoints, 3 login forms. Picking the most promising surfaces.
Mantis
Spawning 6 hunters in parallel. Testing auth, access control, business logic, and known framework bugs.
Mantis
Found 3 candidates. Re-running each three times to confirm they're real.
Mantis
2 confirmed, 1 rejected as a false positive. Writing reports with reproduction steps, impact, and severity.
You get
Two markdown reports ready to submit. CVSS, reproduction, screenshots, and a suggested fix for each.
Is this for me?

Who Mantis is built for.

Bug-bounty hunters

Cover more programs with less manual grind. Mantis handles recon and the first pass — you focus on high-leverage finds.

Security engineers

Run it against your own apps in staging. Find issues before someone else does. Plug it into CI to catch regressions.

Curious developers

Ever wondered how secure your own site is? Run it on your own domain — Mantis writes plain-language explanations.

Only test sites you own or are authorized to test. Mantis includes a scope guard, but you are responsible for what you point it at. Stick to your own systems or programs with explicit permission (HackerOne, Bugcrowd, etc.).
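The "plug it into CI" idea from the security-engineers card can be sketched as a scheduled job. This is a hypothetical GitHub Actions workflow, not an official Mantis integration: it assumes the runner already has Claude Code installed, that Claude Code's headless print mode (claude -p) can run the /mantis command, and that Mantis writes confirmed findings into a reports/ directory (check your install for the real path).

```yaml
# Hypothetical CI sketch -- adjust paths, secrets, and schedule to your setup.
name: mantis-staging-scan
on:
  schedule:
    - cron: "0 3 * * 1"   # every Monday at 03:00 UTC
jobs:
  scan:
    runs-on: ubuntu-latest  # assumes Claude Code is already on the runner
    steps:
      - name: Install Mantis
        run: curl -fsSL https://mantishack.com/install.sh | bash
      - name: Hunt the staging site
        run: claude -p "/mantis staging.example.com"
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
      - name: Fail the build if anything was confirmed
        run: |
          if ls reports/*.md >/dev/null 2>&1; then
            echo "Mantis confirmed findings:" && ls reports/*.md
            exit 1
          fi
```

Point it only at your own staging environment, per the scope rules above.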
How it hunts

Four phases. Like a real mantis.

The pipeline mirrors a praying-mantis hunt: watch quietly, plan the strike, attack, secure the catch.

Phase 01 · Stalk

Discover

Map every subdomain, page, API, and login. Understand the target before touching it.

Phase 02 · Wait

Reason

Pick which bugs are most likely. Choose techniques based on the site's tech stack.

Phase 03 · Strike

Test

Run attacks in parallel. Re-check every hit three times so you only see real bugs.

Phase 04 · Hold

Learn

Score severity, write reproduction steps, draft a fix. Paste into the bug-bounty platform.

Why it works

Built so you can trust the output.

AI models hallucinate. Mantis is engineered around that — every finding has to survive three skeptical re-checks before it reaches you.

Triple-checked findings

Skeptical, balanced, and fresh-eyes reviewers re-run every PoC. False positives get filtered out automatically.

Step-by-step pipeline

Seven clearly defined steps. Each one writes its progress to disk, so you can pause, resume, or inspect a run at any time.

Runs in parallel

Multiple hunters work different parts of the site simultaneously. Hours of one-at-a-time testing finish in minutes.

Knows the modern web

Built-in playbooks for Cloudflare, AWS WAF, GraphQL, JWT, OAuth, Firebase, Next.js, WordPress. Mantis picks the right tricks for the stack it sees.

Safe by default

A scope guard refuses to touch anything outside the target you specify. Self-defense rules mean a malicious site can't trick Mantis into running attacker code.

Works with any AI

Claude, GPT-5, Gemini, DeepSeek, Llama, or local models on your laptop. Swap models with one line in a config file.
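As an example of that one-line swap: with OpenCode, the active model is set in opencode.json. A sketch (the model ID below is illustrative; use whichever your provider exposes):

```json
{
  "$schema": "https://opencode.ai/config.json",
  "model": "anthropic/claude-opus-4"
}
```

Changing "model" to a local ID such as "ollama/llama3.1" (with a matching local provider configured) is the whole swap.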

Use your favourite tools

Drop it into the editor you already use.

Mantis isn't a new app to learn. It plugs into the AI coding tools you already have.

Claude Code

The best-tested setup. Type /mantis target.com and watch it work.

Quickstart →

OpenCode

Use OpenAI, Google, OpenRouter, or local models. Pick a different model for each step if you want.

Model picker →

Cursor, Aider, Cline, anything MCP

Any tool that supports MCP can drive Mantis. Point it at the Mantis server and you're up and running.

Adapter docs →
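For MCP-based editors such as Cursor, hooking Mantis up is one JSON entry. This sketch uses Cursor's .cursor/mcp.json format; the mantis-mcp command is a placeholder, so substitute the entry point the adapter docs give you:

```json
{
  "mcpServers": {
    "mantis": {
      "command": "npx",
      "args": ["-y", "mantis-mcp"]
    }
  }
}
```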
The whole picture

How the pieces fit together.

For the curious: here's the full system on one diagram. You don't need to understand this to use Mantis.

Mantis system architecture
Common questions

FAQ

Do I need to know hacking?

No. Mantis writes the reports in plain language and explains what each bug means and how to fix it. If you can run one terminal command, you can use it.

Is this legal?

Only run Mantis against websites you own, or against bug-bounty programs that have explicitly invited testing (HackerOne, Bugcrowd, Intigriti, etc.). Running it on someone else's site without permission is illegal in most countries. Mantis includes a scope guard, but you are responsible for what you point it at.

What does it cost?

Mantis itself is free and open source. You'll pay the AI provider for the model you use (Claude, OpenAI, Gemini, etc.) — typically a few cents to a few dollars per scan. Use local Ollama models and the AI cost is zero.

Which AI model should I pick?

For best results, Claude Opus or GPT-5. For cheaper runs, Claude Haiku or Gemini Flash work well. The model picker docs have specific recommendations per phase.

How is this different from a regular vulnerability scanner?

Scanners like Nuclei or Burp run a fixed list of checks. Mantis uses an AI model to reason about each specific website: picking attack paths based on the tech the site uses, chaining bugs together, and writing the report with full context. Think "automated junior pentester", not "rule-based scanner".

Will it find every bug?

No — no tool will. Mantis catches the most common high-impact classes: broken access control, authentication issues, business logic flaws, known framework bugs. Subtle bugs that require deep domain knowledge still need a human. Mantis is the first pass, not the last word.

Can I trust the output?

Each finding is re-tested three times by separate skeptical reviewers before it reaches you. False positives still happen, but they're rare. Every report includes a step-by-step reproduction you can verify yourself in 30 seconds.