Open Source · Research-Grade

Deep Research, Forged in Debate.

Collaborative agents that cite papers and generate production-grade reports in seconds.

$ claude moltforge init my-research-project

Paste into Claude Code, OpenClaw, or any agentic environment

The Agent Loop

Debate → Cite → Forge

A structured pipeline where competing agents converge on truth through evidence.

Step 1

Debate

Multiple agents argue competing hypotheses with structured rebuttals

Step 2

Cite

Claims are grounded in papers from Semantic Scholar and other verified sources

Step 3

Forge

Consensus findings are compiled into structured, publishable reports
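A minimal sketch of how the three stages might compose, assuming a hypothetical agent interface with a .rebut() method and caller-supplied cite and forge helpers; this is illustrative only, not moltforge's actual API:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Claim:
    text: str
    confidence: float = 0.5
    citations: list[str] = field(default_factory=list)  # e.g. Semantic Scholar paper IDs

def run_pipeline(
    question: str,
    agents: list,                        # hypothetical agents exposing a .rebut() method
    cite: Callable[[str], list[str]],    # claim text -> supporting paper IDs
    forge: Callable[[str, list], str],   # question + consensus claims -> Markdown report
    rounds: int = 2,
) -> str:
    """Illustrative Debate -> Cite -> Forge loop."""
    claims: list[Claim] = []
    # Step 1 (Debate): each agent proposes new claims and rebuts the existing ones.
    for _ in range(rounds):
        for agent in agents:
            claims = agent.rebut(question, claims)
    # Step 2 (Cite): ground each surviving claim in published papers.
    for claim in claims:
        claim.citations = cite(claim.text)
    # Step 3 (Forge): compile consensus claims into a structured report.
    consensus = [c for c in claims if c.confidence >= 0.7]
    return forge(question, consensus)
```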

Capabilities

Built for Rigorous Research

Every feature is designed around the principle that good research requires adversarial verification.

Multi-Agent Consensus

Agents with distinct perspectives debate, challenge, and synthesize findings before producing a final answer.
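One toy way to picture the synthesis step: score how much of the agent roster backs each surviving position after the final round. The scoring rule and agent names here are illustrative, not how moltforge necessarily weighs consensus:

```python
from collections import defaultdict

def consensus_scores(positions: dict[str, str]) -> dict[str, float]:
    """Share of agents endorsing each position after the final round (toy scoring rule)."""
    votes: dict[str, int] = defaultdict(int)
    for position in positions.values():
        votes[position] += 1
    total = len(positions) or 1
    return {position: count / total for position, count in votes.items()}

# Three agents, two surviving positions: the majority view carries 2/3 of the weight.
scores = consensus_scores({
    "advocate": "the effect replicates in large samples",
    "skeptic": "the effect replicates in large samples",
    "methodologist": "the effect does not replicate",
})
```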

Semantic Scholar Integration

Every claim is backed by real papers retrieved from the Semantic Scholar API, with proper citation formatting.
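For reference, the public Semantic Scholar Graph API exposes a paper-search endpoint. A minimal sketch of grounding a claim against it; the helper names are illustrative, not moltforge's actual interface:

```python
import requests

def find_papers(query: str, limit: int = 5) -> list[dict]:
    """Search the public Semantic Scholar Graph API for candidate citations."""
    resp = requests.get(
        "https://api.semanticscholar.org/graph/v1/paper/search",
        params={"query": query, "limit": limit,
                "fields": "title,year,authors,externalIds,url"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("data", [])

def to_citation(paper: dict) -> str:
    """Render one search hit as an inline Markdown citation."""
    return f"[{paper['title']} ({paper.get('year', 'n.d.')})]({paper.get('url', '')})"
```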

Markdown-Native Outputs

Reports are generated as structured Markdown with tables, headings, and inline citations, ready for publication.
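A sketch of what the forge step's output could look like, reusing the hypothetical Claim objects from the pipeline sketch above; the report layout shown is an assumption, not the tool's fixed template:

```python
def forge_report(question: str, claims: list) -> str:
    """Render consensus claims as a structured Markdown report (illustrative layout)."""
    lines = [
        f"# {question}",
        "",
        "## Findings",
        "",
        "| Claim | Confidence | Sources |",
        "| --- | --- | --- |",
    ]
    for claim in claims:
        sources = ", ".join(claim.citations) or "(uncited)"
        lines.append(f"| {claim.text} | {claim.confidence:.2f} | {sources} |")
    return "\n".join(lines)
```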

Agentic-Environment Ready

Designed to run inside Claude Code, OpenClaw, and other agentic coding environments with zero configuration.

Evidence Provenance

Full audit trail of which agent cited what, when, and how confidence scores evolved during debate rounds.
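One way such an audit-trail entry could be modeled, assuming hypothetical field names rather than moltforge's actual schema:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ProvenanceEvent:
    """One audit-trail entry: which agent cited what, when, and at what confidence."""
    agent: str           # e.g. "skeptic"
    claim: str           # the claim being supported or challenged
    paper_id: str        # identifier of the cited paper
    debate_round: int    # round in which the citation was introduced
    confidence: float    # the agent's confidence in the claim at that moment
    timestamp: datetime

event = ProvenanceEvent(
    agent="skeptic",
    claim="the effect shrinks in preregistered replications",
    paper_id="<semantic-scholar-paper-id>",  # placeholder, not a real reference
    debate_round=2,
    confidence=0.64,
    timestamp=datetime.now(timezone.utc),
)
```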

Fast Iteration

Configurable debate depth: from quick 2-round sprints to exhaustive multi-round deep dives.
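The depth setting might look something like the following; the parameter names and values are illustrative assumptions, not moltforge's documented configuration:

```python
# Quick sprint: two debate rounds with a minimal agent roster.
sprint = {"rounds": 2, "agents": ["advocate", "skeptic"], "papers_per_claim": 3}

# Deep dive: more rounds and perspectives, with a wider citation search.
deep_dive = {
    "rounds": 6,
    "agents": ["advocate", "skeptic", "methodologist"],
    "papers_per_claim": 10,
}
```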