Paper Research Agent - Autonomous Multi-Agent Research System
by changer-changer · published 2026-03-22

$ claw add gh:changer-changer/changer-changer-paper-research-agent

---
name: paper-research-agent
description: |
  Autonomous multi-agent paper research system. When the user wants to research a topic, find related papers,
  or analyze academic literature, use this skill to orchestrate the full pipeline: intelligent search →
  PDF download → parallel agent analysis → comprehensive report generation.
  Triggers on: "research papers on X", "find related literature", "analyze papers",
  "调研论文" (research papers), "查找相关文献" (find related literature), "分析论文" (analyze papers), "帮我调研XXX领域" (help me research the XXX field)
---
# Paper Research Agent - Autonomous Multi-Agent Research System
## When to Use

Use this skill when the user wants to:

- Research a topic and survey the academic literature around it
- Find papers related to a query or a specific paper
- Analyze academic papers in depth and receive an integrated report
## Core Workflow

The system autonomously executes the full research pipeline:

User Query → Research Probe → PDF Download → Parallel Agent Analysis → Integrated Report

1. **Phase 1: Research Probe** (Automated)
2. **Phase 2: PDF Download** (Automated)
3. **Phase 3: Parallel Agent Analysis** (Automated - Key)
4. **Phase 4: Report Integration** (Automated)
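The four phases above can be sketched as a sequential driver. This is a minimal illustration, not the shipped implementation (which lives in `scripts/research_pipeline.py`); every function here is a hypothetical stub standing in for the real phase logic:

```python
from typing import List

def probe(query: str) -> List[str]:
    # Phase 1: Research Probe - return candidate paper IDs (stubbed)
    return [f"{query}-paper-{i}" for i in range(3)]

def download(ids: List[str]) -> List[str]:
    # Phase 2: PDF Download - fetch a PDF for each candidate (stubbed)
    return [f"{pid}.pdf" for pid in ids]

def analyze(pdf: str) -> str:
    # Phase 3: one sub-agent produces a 6-section analysis (stubbed)
    return f"analysis of {pdf}"

def integrate(analyses: List[str]) -> str:
    # Phase 4: merge all per-paper analyses into one survey
    return "\n".join(analyses)

def run_pipeline(query: str, max_papers: int = 10) -> str:
    """Sequential sketch of the four automated phases."""
    candidates = probe(query)
    papers = download(candidates[:max_papers])
    analyses = [analyze(p) for p in papers]
    return integrate(analyses)
```

In the real system Phase 3 runs the analyses in parallel sub-agents rather than a plain loop; the sketch keeps only the data flow.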
## Agent Analysis Requirements

Each sub-agent MUST generate a 6-section report following the detailed standards in:

**`references/analysis_standards.md`**

Sub-agents MUST read this reference file before starting analysis to understand the required structure and quality bar.
### Summary of 6 Required Sections

1. **Section 1: Research Background**
2. **Section 2: Research Problem**
3. **Section 3: Core Innovation**
4. **Section 4: Experimental Design**
5. **Section 5: Key Insights**
6. **Section 6: Future Work**
**For full details, sub-section hints, and quality standards - READ `references/analysis_standards.md`**
## Quality Enforcement

Agents MUST follow the quality standards defined in `references/analysis_standards.md`.
## Usage

### Agent Execution (When User Requests Research)

**Trigger phrases**: "research papers on X", "find related literature", "analyze papers" (and the Chinese equivalents listed in the frontmatter)

**Agent Action**:
**Step 1: Execute main pipeline**

```python
import os
import subprocess

# Expand ~ explicitly; subprocess does not perform shell expansion on arguments
script = os.path.expanduser(
    "~/.openclaw/workspace/skills/paper-research-agent/scripts/research_pipeline.py"
)
result = subprocess.run([
    "python3",
    script,
    "--query", "{user_topic}",
    "--mode", "vertical",
    "--max-papers", "10",
    "--output", "./research_{topic}",
], capture_output=True, text=True)
print(result.stdout)
```

**Step 2: Read generated agent tasks**
```python
import json

with open("./research_{topic}/_agent_tasks.json") as f:
    tasks = json.load(f)
```

**Step 3: Spawn parallel sub-agents for analysis (CRITICAL)**
```python
# Spawn one sub-agent per paper, all in parallel
for task_info in tasks:
    sessions_spawn(
        agentId="main",
        mode="run",
        runtime="subagent",
        task=task_info['task'],
        timeoutSeconds=600,  # 10 minutes per paper
    )
```

**Important**: Launch as many agents in parallel as possible for speed.
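In plain Python terms, that parallel launch can be approximated with a thread pool. This is a sketch only: `sessions_spawn` below is a local stand-in stub for the actual OpenClaw tool call, defined here just so the pattern is runnable:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def sessions_spawn(agentId, mode, runtime, task, timeoutSeconds):
    # Stand-in stub for the real OpenClaw tool call
    return {"task": task, "status": "spawned"}

def spawn_all(tasks):
    """Spawn one sub-agent per paper task, all in parallel."""
    results = []
    with ThreadPoolExecutor(max_workers=len(tasks) or 1) as pool:
        futures = [
            pool.submit(sessions_spawn, "main", "run", "subagent",
                        t["task"], 600)
            for t in tasks
        ]
        # Collect results as each spawn call returns
        for fut in as_completed(futures):
            results.append(fut.result())
    return results
```

One pool worker per task gives maximal parallelism for typical batch sizes (around 10 papers); for much larger batches a fixed cap would be safer.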
**Step 4: After all agents complete, integrate results**

```python
# Collect all analysis reports
# Generate integrated survey
# Present to user
```

## Output Structure
```
research_output/
├── _research_summary.json              # Research metadata
├── probe/
│   ├── _probe_results.json             # Search results
│   └── _probe_report.md                # Human-readable probe report
├── papers/
│   ├── {title}-{arxiv_id}.pdf          # Downloaded PDFs
│   └── ...
├── analysis/
│   ├── {title}-{arxiv_id}_analysis.md  # 6-section agent reports
│   └── ...
└── _integrated_survey.md               # Final integrated survey
```

## Key Scripts
## Report Format Standards
Each sub-agent analysis report MUST follow this exact 6-section structure:
```markdown
# 📄 {Paper Title}

> **ArXiv ID**: {id}
> **Authors**: {authors}
> **Published**: {date}

---

## Section 1: Research Background
- Domain context
- Key prior works (3-5 papers with citations)
- Technical state at publication time
- Citations: [Section X.Y]

## Section 2: Research Problem
- SPECIFIC problem being solved
- SPECIFIC limitations of existing methods (quote the original)
- Core assumptions
- Citations: [Section X.Y, "exact quote"]

## Section 3: Core Innovation
- Method/system architecture (detailed)
- Technical details (network structure, dimensions)
- Key formulas in LaTeX: $...$
- Comparison table:

  | Aspect | Prior Work | This Paper | Advantage |
  |--------|------------|------------|-----------|

- What is genuinely new

## Section 4: Experimental Design
- Dataset: name, size, characteristics
- Baseline methods: specific names
- Metrics: formulas, units
- Results table (REAL data):

  | Method   | Metric1 | Metric2 |
  |----------|---------|---------|
  | This     | X.XX    | X.XX    |
  | Baseline | X.XX    | X.XX    |

- Ablation study results

## Section 5: Key Insights
- Core findings from experiments
- What works/doesn't work
- Design choices and their impact
- Practical recommendations

## Section 6: Future Work
- Limitations acknowledged by the authors
- Unsolved problems
- Future directions (3+)

---
*Analysis by Paper Research Agent*
*Date: {timestamp}*
```

**Quality Requirements**: follow the standards in `references/analysis_standards.md`.
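A quick way to enforce this contract mechanically is a structural check over each generated report. The helper below is a hedged sketch (not part of the shipped scripts) that flags any of the six required section headers that are missing:

```python
import re
from typing import List

REQUIRED_SECTIONS = [
    "Research Background", "Research Problem", "Core Innovation",
    "Experimental Design", "Key Insights", "Future Work",
]

def missing_sections(report_md: str) -> List[str]:
    """Return the required section titles absent from a report."""
    missing = []
    for i, title in enumerate(REQUIRED_SECTIONS, start=1):
        # Headers are expected to look like "## Section 3: Core Innovation"
        pattern = rf"^## Section {i}: {re.escape(title)}\s*$"
        if not re.search(pattern, report_md, flags=re.MULTILINE):
            missing.append(title)
    return missing
```

An orchestrator could re-dispatch a paper to a fresh sub-agent whenever this check returns a non-empty list.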
## Error Handling
If paper download fails:
If agent analysis fails:
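The recovery behavior is not spelled out above. A common pattern for failed downloads is retry with exponential backoff, skipping the paper if all attempts fail so the rest of the pipeline continues; the sketch below is purely illustrative, and `fetch_pdf` is a hypothetical downloader passed in by the caller:

```python
import time
from typing import Callable, Optional

def download_with_retry(fetch_pdf: Callable[[str], bytes], url: str,
                        attempts: int = 3,
                        base_delay: float = 1.0) -> Optional[bytes]:
    """Try a download up to `attempts` times; return None to skip the paper."""
    for attempt in range(attempts):
        try:
            return fetch_pdf(url)
        except OSError:
            if attempt < attempts - 1:
                # Exponential backoff before the next attempt
                time.sleep(base_delay * (2 ** attempt))
    return None  # caller skips this paper and continues the pipeline
```

The same shape applies to failed agent analyses: retry the sub-agent once or twice, then exclude that paper from the integrated survey rather than aborting the run.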
## Best Practices
1. **For deep research**: Use `--mode vertical` (searches 4 levels deep)
2. **For exploration**: Use `--mode iterative` (progressive discovery)
3. **For specific paper**: Use `--mode horizontal` (find related work)
4. **Parallel agents**: System auto-spawns optimal number based on paper count
5. **Quality check**: Always verify a few random citations manually
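All three modes map to the same CLI with a different `--mode` flag, so a small helper can assemble the invocation. This is a sketch based only on the flags shown in Step 1; the relative script path is an assumption:

```python
from typing import List

VALID_MODES = ("vertical", "iterative", "horizontal")

def build_command(query: str, mode: str, max_papers: int = 10,
                  output: str = "./research_output") -> List[str]:
    """Assemble the research_pipeline.py invocation for a given mode."""
    if mode not in VALID_MODES:
        raise ValueError(f"mode must be one of {VALID_MODES}")
    return [
        "python3", "scripts/research_pipeline.py",
        "--query", query,
        "--mode", mode,
        "--max-papers", str(max_papers),
        "--output", output,
    ]
```

The resulting list can be passed straight to `subprocess.run(...)` as in Step 1.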
## Example Session

**User**: "帮我调研扩散策略在机器人操作中的应用" ("Help me research applications of diffusion policies in robotic manipulation")

**Agent**:

1. Executes research probe with query "扩散策略 机器人操作" ("diffusion policy, robot manipulation")
2. Finds 30 related papers across 4 levels
3. Downloads PDFs for top 10 papers
4. Spawns 10 sub-agents in parallel
5. Each agent analyzes one paper with 6-section format
6. Collects all analyses
7. Generates integrated survey with comparison tables
8. Presents final report to user
**Output**: Complete research package with all papers analyzed and integrated survey.
## Dependencies

Required Python packages (auto-installed):

## Notes