Use any AI model

Mantis is provider-agnostic. The MCP server is plain JSON-RPC. The agent prompts are plain markdown. Pick any model on any harness.

How it works

Three things separate the model from the framework:

  1. The MCP server (mcp/server.js) is pure Node, zero dependencies, no LLM API calls inside it.
  2. The agent prompts in .claude/agents/*.md are markdown files. Any agent runner can load them as system prompts.
  3. The harness (Claude Code, OpenCode, Aider, etc.) is what decides which LLM provider to call. Mantis ships configs for the main ones.
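Because the server is plain JSON-RPC over stdio, you can poke it with nothing but a shell pipe. A sketch (the `tools/list` method name comes from the MCP spec, not the Mantis source, and a real client sends an `initialize` handshake first; run from the Mantis repo root):

```shell
# Hand the server a raw JSON-RPC 2.0 request -- no harness, no LLM involved.
REQ='{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}'
printf '%s\n' "$REQ" | node mcp/server.js
```

This is also a quick smoke test that the server starts without any provider keys set, since no LLM call happens inside it.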

OpenCode (recommended for non-Anthropic providers)

OpenCode is a provider-agnostic agent runner. Mantis ships a ready-to-go opencode.json at the repo root that registers the MCP server and defines all 12 named agents with per-agent model preferences.

# 1. Install OpenCode
curl -fsSL https://opencode.ai/install | bash

# 2. Install Mantis with the OpenCode harness
git clone https://github.com/deonmenezes/bountyhunter.git mantis
cd mantis
./install.sh /path/to/your/project --harness=opencode

# 3. Set whichever provider keys you want
export ANTHROPIC_API_KEY=...
export OPENAI_API_KEY=...
export GOOGLE_API_KEY=...
export OPENROUTER_API_KEY=...

# 4. Run
cd /path/to/your/project
opencode
# inside OpenCode:
@mantis-orchestrator target.com
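Before step 4 you can sanity-check the exports from step 3; a small sketch (variable names as in step 3 -- you only need the keys for the providers your opencode.json actually references):

```shell
# Warn about any provider key that is unset or empty.
for key in ANTHROPIC_API_KEY OPENAI_API_KEY GOOGLE_API_KEY OPENROUTER_API_KEY; do
  [ -n "$(printenv "$key")" ] || echo "warning: $key is not set" >&2
done
```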

Swap models per agent

Open opencode.json at the repo root. Every agent has its own model: line.

{
  "model": "openai/gpt-5",
  "agent": {
    "hunter-agent":        { "model": "openai/gpt-5" },
    "brutalist-verifier":  { "model": "anthropic/claude-opus-4-5" },
    "balanced-verifier":   { "model": "anthropic/claude-opus-4-5" },
    "final-verifier":      { "model": "google/gemini-2.5-pro" },
    "grader":              { "model": "openai/gpt-5-mini" },
    "triage-agent":        { "model": "groq/llama-3.3-8b-versatile" }
  }
}

Mixed providers are encouraged. Use your strongest model for the three verifiers and the chain-builder (that's where the evidence-not-alerts contract is enforced). Downgrade the recon and report-writing roles freely.
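Single-agent swaps can also be scripted. A sketch using jq (the agent name and model slug are just the ones from the example config above; jq must be installed):

```shell
# Point hunter-agent at a different provider without opening an editor.
# Assumes opencode.json is in the current directory.
jq '.agent."hunter-agent".model = "google/gemini-2.5-pro"' opencode.json \
  > opencode.json.tmp && mv opencode.json.tmp opencode.json
```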

Cross-provider model matrix

Recommended picks per role across the main providers; the shipped defaults live in opencode.json at the repo root.

| Role | Anthropic | OpenAI | Google | Open-weight |
| --- | --- | --- | --- | --- |
| mantis-orchestrator | Opus 4.5 | GPT-5 | Gemini 2.5 Pro | DeepSeek-V3 R1 |
| recon-agent | Sonnet 4.6 | GPT-5 mini | Gemini 2.5 Flash | Llama 3.3 70B |
| triage-agent | Haiku 4.5 | GPT-5 nano | Gemini 2.5 Flash-Lite | Llama 3.3 8B |
| hunter-agent | Opus 4.5 | GPT-5 | Gemini 2.5 Pro | DeepSeek-V3 R1 |
| chain-builder | Opus 4.5 | o3 / GPT-5 | Gemini 2.5 Pro | DeepSeek-V3 R1 |
| brutalist-verifier | Opus 4.5 | GPT-5 | Gemini 2.5 Pro | DeepSeek-V3 R1 |
| balanced-verifier | Opus 4.5 | GPT-5 | Gemini 2.5 Pro | DeepSeek-V3 R1 |
| final-verifier | Opus 4.5 | GPT-5 | Gemini 2.5 Pro | DeepSeek-V3 R1 |
| grader | Sonnet 4.6 | GPT-5 mini | Gemini 2.5 Flash | Llama 3.3 70B |
| report-writer | Sonnet 4.6 | GPT-5 mini | Gemini 2.5 Flash | Llama 3.3 70B |
| patch-writer | Sonnet 4.6 | GPT-5 mini | Gemini 2.5 Flash | Qwen3 Coder 480B |
| disclosure-sender | Sonnet 4.6 | GPT-5 mini | Gemini 2.5 Flash | Llama 3.3 70B |
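Any column of the matrix maps directly onto opencode.json. For example, the first four rows of the Google column as a fragment (the model slugs are assumed provider IDs -- check them against your provider's model list before committing them):

```json
{
  "agent": {
    "mantis-orchestrator": { "model": "google/gemini-2.5-pro" },
    "recon-agent":         { "model": "google/gemini-2.5-flash" },
    "triage-agent":        { "model": "google/gemini-2.5-flash-lite" },
    "hunter-agent":        { "model": "google/gemini-2.5-pro" }
  }
}
```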

Other harnesses

Mantis also runs on chat-driven harnesses (Aider, Cline) and any MCP client (Cursor, Continue, Goose, custom runners). You lose the parallel-wave dispatch, but the typed MCP tools all work identically.

| Harness | How to invoke | FSM driving |
| --- | --- | --- |
| Claude Code | /mantis target.com | Automatic (parallel waves) |
| OpenCode | @mantis-orchestrator target.com | Automatic (sequential sub-agents) |
| Aider | Paste orchestrator prompt | Manual |
| Cline (VS Code) | Paste orchestrator prompt | Manual |
| Cursor / Continue / Goose / raw MCP | Register the MCP server | DIY (or use prompt) |
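For the last row, "register the MCP server" usually means adding an entry to the client's MCP config file. The shape below follows the common mcpServers convention; the exact file name and location vary by client, and the path is a placeholder for wherever you cloned Mantis:

```json
{
  "mcpServers": {
    "mantis": {
      "command": "node",
      "args": ["/path/to/mantis/mcp/server.js"]
    }
  }
}
```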
Note

The full per-harness install guides live in adapters/ in the repo.