Use any AI model
Mantis is provider-agnostic. The MCP server is plain JSON-RPC. The agent prompts are plain markdown. Pick any model on any harness.
How it works
Three things separate the model from the framework:
- The MCP server (`mcp/server.js`) is pure Node, zero dependencies, no LLM API calls inside it.
- The agent prompts in `.claude/agents/*.md` are markdown files. Any agent runner can load them as system prompts.
- The harness (Claude Code, OpenCode, Aider, etc.) decides which LLM provider to call. Mantis ships configs for the main ones.
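Because the server speaks plain JSON-RPC, any client can talk to it without an SDK. The sketch below builds the three newline-delimited messages a generic MCP client sends before its first tool call; the method names come from the MCP spec, and the `clientInfo` values are illustrative, not anything Mantis requires.

```javascript
// Sketch of the wire format a generic MCP client exchanges with mcp/server.js.
// Method names follow the MCP spec; the client name/version here are made up.
function jsonRpcRequest(id, method, params) {
  return JSON.stringify({ jsonrpc: "2.0", id, method, params });
}

// An MCP session opens with a handshake before any tool call:
const handshake = [
  jsonRpcRequest(1, "initialize", {
    protocolVersion: "2025-06-18",
    capabilities: {},
    clientInfo: { name: "any-harness", version: "0.0.0" },
  }),
  // Notifications carry no id and expect no response.
  JSON.stringify({ jsonrpc: "2.0", method: "notifications/initialized" }),
  jsonRpcRequest(2, "tools/list", {}),
];

// Each line is one newline-delimited JSON-RPC message on the server's stdin.
console.log(handshake.join("\n"));
```

Pipe those three lines into `node mcp/server.js` over stdio and the `tools/list` response enumerates the typed tools every harness sees.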
OpenCode (recommended for non-Anthropic)
OpenCode is a provider-agnostic agent runner. Mantis ships a ready-to-go `opencode.json` at the repo root that registers the MCP server and defines all 12 named agents with per-agent model preferences.
```shell
# 1. Install OpenCode
curl -fsSL https://opencode.ai/install | bash

# 2. Install Mantis with the OpenCode harness
git clone https://github.com/deonmenezes/bountyhunter.git mantis
cd mantis
./install.sh /path/to/your/project --harness=opencode

# 3. Set whichever provider keys you want
export ANTHROPIC_API_KEY=...
export OPENAI_API_KEY=...
export GOOGLE_API_KEY=...
export OPENROUTER_API_KEY=...

# 4. Run
cd /path/to/your/project
opencode

# inside OpenCode:
@mantis-orchestrator target.com
```
Swap models per agent
Open `opencode.json` at the repo root. Every agent has its own `model` line.
```json
{
  "model": "openai/gpt-5",
  "agent": {
    "hunter-agent": { "model": "openai/gpt-5" },
    "brutalist-verifier": { "model": "anthropic/claude-opus-4-5" },
    "balanced-verifier": { "model": "anthropic/claude-opus-4-5" },
    "final-verifier": { "model": "google/gemini-2.5-pro" },
    "grader": { "model": "openai/gpt-5-mini" },
    "triage-agent": { "model": "groq/llama-3.3-8b-versatile" }
  }
}
```
Mixed providers are encouraged. Use your strongest model for the three verifiers and the chain-builder (that's where the evidence-not-alerts contract is enforced). Downgrade the recon and report-writing roles freely.
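As one illustration of that split, a budget-leaning override might keep frontier models on verification and drop the cheap roles down a tier. The model IDs below are examples of what providers expose, not values Mantis requires:

```json
{
  "agent": {
    "brutalist-verifier": { "model": "anthropic/claude-opus-4-5" },
    "balanced-verifier": { "model": "anthropic/claude-opus-4-5" },
    "chain-builder": { "model": "anthropic/claude-opus-4-5" },
    "recon-agent": { "model": "google/gemini-2.5-flash" },
    "report-writer": { "model": "openai/gpt-5-mini" }
  }
}
```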
Cross-provider model matrix
Recommended picks per role across the main providers. Bold is the project default.
| Role | Anthropic | OpenAI | Google | Open-weight |
|---|---|---|---|---|
| mantis-orchestrator | Opus 4.5 | GPT-5 | Gemini 2.5 Pro | DeepSeek-V3 R1 |
| recon-agent | Sonnet 4.6 | GPT-5 mini | Gemini 2.5 Flash | Llama 3.3 70B |
| triage-agent | Haiku 4.5 | GPT-5 nano | Gemini 2.5 Flash-Lite | Llama 3.3 8B |
| hunter-agent | Opus 4.5 | GPT-5 | Gemini 2.5 Pro | DeepSeek-V3 R1 |
| chain-builder | Opus 4.5 | o3 / GPT-5 | Gemini 2.5 Pro | DeepSeek-V3 R1 |
| brutalist-verifier | Opus 4.5 | GPT-5 | Gemini 2.5 Pro | DeepSeek-V3 R1 |
| balanced-verifier | Opus 4.5 | GPT-5 | Gemini 2.5 Pro | DeepSeek-V3 R1 |
| final-verifier | Opus 4.5 | GPT-5 | Gemini 2.5 Pro | DeepSeek-V3 R1 |
| grader | Sonnet 4.6 | GPT-5 mini | Gemini 2.5 Flash | Llama 3.3 70B |
| report-writer | Sonnet 4.6 | GPT-5 mini | Gemini 2.5 Flash | Llama 3.3 70B |
| patch-writer | Sonnet 4.6 | GPT-5 mini | Gemini 2.5 Flash | Qwen3 Coder 480B |
| disclosure-sender | Sonnet 4.6 | GPT-5 mini | Gemini 2.5 Flash | Llama 3.3 70B |
Other harnesses
Mantis also runs on chat-driven harnesses (Aider, Cline) and any MCP client (Cursor, Continue, Goose, custom runners). You lose the parallel-wave dispatch, but the typed MCP tools all work identically.
| Harness | How to invoke | FSM driving |
|---|---|---|
| Claude Code | `/mantis target.com` | Automatic (parallel waves) |
| OpenCode | `@mantis-orchestrator target.com` | Automatic (sequential sub-agents) |
| Aider | Paste orchestrator prompt | Manual |
| Cline (VS Code) | Paste orchestrator prompt | Manual |
| Cursor / Continue / Goose / raw MCP | Register the MCP server | DIY (or use prompt) |
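For the raw-MCP clients, registration is usually a few lines of config. The fragment below uses the `mcpServers` convention shared by several MCP clients; the exact file name and schema vary per client, and the path is a placeholder for wherever you cloned the repo:

```json
{
  "mcpServers": {
    "mantis": {
      "command": "node",
      "args": ["/path/to/mantis/mcp/server.js"]
    }
  }
}
```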
The full per-harness install guides live in `adapters/` in the repo.