Grant Mock Reviewer
by aipoch-ai · published 2026-04-01

$ claw add gh:aipoch-ai/aipoch-ai-grant-mock-reviewer

---
name: grant-mock-reviewer
description: Simulates NIH study section peer review for grant proposals. Triggers
when user wants mock review, critique, or evaluation of a grant proposal before
submission. Generates structured critique using official NIH scoring rubric (1-9
scale), identifies weaknesses, provides actionable revision recommendations, and
produces a comprehensive review summary similar to actual NIH Summary Statement.
version: 1.0.0
category: Grant
tags: []
author: AIPOCH
license: MIT
status: Draft
risk_level: Medium
skill_type: Tool/Script
owner: AIPOCH
reviewer: ''
last_updated: '2026-02-06'
---
# Grant Mock Reviewer
A simulated NIH study section reviewer that provides structured, rigorous critique of grant proposals using the official NIH scoring criteria and methodology.
## Capabilities
1. **NIH Scoring Rubric Application**: Official 1-9 scale scoring across all 5 criteria
2. **Weakness Identification**: Systematic detection of common proposal flaws
3. **Critique Generation**: Structured written critiques for each review criterion
4. **Summary Statement**: Complete mock Summary Statement output
5. **Revision Guidance**: Prioritized, actionable recommendations for improvement
## Usage

### Command Line

```bash
# Full mock review with Summary Statement
python3 scripts/main.py --input proposal.pdf --format pdf --output review.md

# Review Specific Aims only
python3 scripts/main.py --input aims.pdf --section aims --output aims_review.md

# Targeted review (specific criterion focus)
python3 scripts/main.py --input proposal.pdf --focus approach --output approach_critique.md

# Generate NIH-style scores only
python3 scripts/main.py --input proposal.pdf --scores-only --output scores.json

# Compare before/after revision
python3 scripts/main.py --original original.pdf --revised revised.pdf --compare
```
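The `--scores-only` mode writes scores as JSON. The schema is not documented here, but assuming one key per criterion plus the overall impact score (field names below are hypothetical), a minimal consumer might look like this:

```python
import json

# Hypothetical schema for scores.json from --scores-only;
# actual field names may differ.
scores = json.loads("""{
    "overall_impact": 3,
    "significance": 2,
    "investigator": 3,
    "innovation": 4,
    "approach": 3,
    "environment": 2
}""")

# Flag criteria scoring worse (higher) than the overall impact score
weak_spots = [k for k, v in scores.items()
              if k != "overall_impact" and v > scores["overall_impact"]]
print(weak_spots)  # criteria dragging the review down
```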
### As Library

```python
from scripts.main import GrantMockReviewer

reviewer = GrantMockReviewer()
result = reviewer.review(
    proposal_text=proposal_content,
    grant_type="R01",
    section="full",
)
print(result.summary_statement)
print(result.scores)
```

## Parameters
| Parameter | Type | Default | Required | Description |
|-----------|------|---------|----------|-------------|
| `--input` | string | - | Yes | Path to proposal file (PDF, DOCX, TXT, MD) |
| `--format` | string | auto | No | Input file format (pdf, docx, txt, md) |
| `--section` | string | full | No | Section to review (full, aims, significance, innovation, approach) |
| `--grant-type` | string | R01 | No | Grant mechanism (R01, R21, R03, K99, F32) |
| `--focus` | string | - | No | Focus on specific criterion (significance, investigator, innovation, approach, environment) |
| `--scores-only` | flag | false | No | Output scores only (JSON) |
| `--output`, `-o` | string | stdout | No | Output file path |
| `--original` | string | - | No | Original proposal for comparison |
| `--revised` | string | - | No | Revised proposal for comparison |
| `--compare` | flag | false | No | Enable comparison mode |
## NIH Scoring System

### Overall Impact Score (1-9)

The single most important score, reflecting the likelihood that the project will exert a sustained, powerful influence on its research field.
| Score | Descriptor | Likelihood of Funding |
|-------|------------|----------------------|
| 1 | Exceptional | Very High |
| 2 | Outstanding | High |
| 3 | Excellent | Good |
| 4 | Very Good | Moderate |
| 5 | Good | Low-Moderate |
| 6 | Satisfactory | Low |
| 7 | Fair | Very Low |
| 8 | Marginal | Unlikely |
| 9 | Poor | Not Fundable |
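The descriptor column of the table above can be captured as a simple lookup, e.g. for labeling scores in generated reports (a sketch for illustration, not part of the tool's API):

```python
# NIH Overall Impact descriptors, per the 1-9 scale above
DESCRIPTORS = {
    1: "Exceptional", 2: "Outstanding", 3: "Excellent",
    4: "Very Good", 5: "Good", 6: "Satisfactory",
    7: "Fair", 8: "Marginal", 9: "Poor",
}

def describe(score: int) -> str:
    """Return the NIH descriptor for a 1-9 impact score."""
    if score not in DESCRIPTORS:
        raise ValueError(f"NIH scores run 1-9, got {score}")
    return DESCRIPTORS[score]

print(describe(2))  # Outstanding
```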
### Individual Criteria (1-9 each)
1. **Significance**: Does the project address an important problem? Will scientific knowledge be advanced?
2. **Investigator(s)**: Are the PIs well-suited? Adequate experience and training?
3. **Innovation**: Does it challenge current paradigms? Novel concepts, approaches, methods?
4. **Approach**: Sound research design? Appropriate methods? Adequate controls? Address pitfalls?
5. **Environment**: Adequate institutional support? Scientific environment conducive to success?
### Score Interpretation

## Review Output Format
### 1. Score Summary

```
Overall Impact: [Score] - [Descriptor]

Criterion Scores:
- Significance: [Score]
- Investigator(s): [Score]
- Innovation: [Score]
- Approach: [Score]
- Environment: [Score]
```

### 2. Strengths
Bullet-point list of major strengths by criterion
### 3. Weaknesses

Bullet-point list of major weaknesses by criterion

### 4. Detailed Critique

Paragraph-form critique for each criterion, following NIH style

### 5. Summary Statement

Complete narrative synthesis of the review

### 6. Revision Recommendations

Prioritized, actionable suggestions for improvement
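Taken together, the six output sections above suggest a result shape like the following. This is a hypothetical sketch of the object returned by `reviewer.review()`; the actual attribute names in `scripts/main.py` may differ:

```python
from dataclasses import dataclass, field

@dataclass
class ReviewResult:
    """Hypothetical container mirroring the review output format above."""
    overall_impact: int                              # 1-9 Overall Impact score
    scores: dict                                     # criterion name -> 1-9 score
    strengths: dict = field(default_factory=dict)    # criterion -> bullet list
    weaknesses: dict = field(default_factory=dict)   # criterion -> bullet list
    critiques: dict = field(default_factory=dict)    # criterion -> paragraph
    summary_statement: str = ""
    recommendations: list = field(default_factory=list)  # prioritized revisions

result = ReviewResult(overall_impact=3, scores={"significance": 2})
print(result.overall_impact)  # 3
```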
## Common Weaknesses Detected

### Significance

### Investigator

### Innovation

### Approach

### Environment
## Technical Difficulty
**High** - Requires deep understanding of NIH peer review processes, ability to apply standardized scoring rubrics consistently, and generation of clinically/scientifically accurate critique across diverse research domains.
**Review Required**: Human verification recommended before deployment in production settings.
## References

## Best Practices for Users
1. **Provide Complete Proposals**: The tool works best with full Research Strategy sections
2. **Include Preliminary Data**: Approach critique depends on feasibility evidence
3. **Review Multiple Times**: Use iteratively as you revise
4. **Compare Versions**: Track improvement between drafts
5. **Consider Multiple Perspectives**: Supplement with human reviewer feedback
## Limitations
1. Cannot access external literature to verify claims
2. May not capture domain-specific methodological nuances
3. Scoring is simulated and may not match actual study section scores
4. Best used as preparatory tool, not replacement for human review
## Version
1.0.0 - Initial release with NIH R01/R21/R03 support
## Risk Assessment
| Risk Indicator | Assessment | Level |
|----------------|------------|-------|
| Code Execution | Python/R scripts executed locally | Medium |
| Network Access | No external API calls | Low |
| File System Access | Read input files, write output files | Medium |
| Instruction Tampering | Standard prompt guidelines | Low |
| Data Exposure | Output files saved to workspace | Low |
## Security Checklist

## Prerequisites

```bash
# Python dependencies
pip install -r requirements.txt
```

## Evaluation Criteria
### Success Metrics

### Test Cases
1. **Basic Functionality**: Standard input → Expected output
2. **Edge Case**: Invalid input → Graceful error handling
3. **Performance**: Large dataset → Acceptable processing time
## Lifecycle Status
- Performance optimization
- Additional feature support