MARL Enhance — Brain Upgrade for Your Agent
name: marl-middleware
by cutechicken99 · published 2026-03-22
```
$ claw add gh:cutechicken99/cutechicken99-marl-middleware
```

---
name: marl-middleware
description: Multi-stage multi-agent reasoning middleware that reduces LLM hallucination by 70%+. 9 specialized emergence engines for invention, creative, pharma, genomics, chemistry, ecology, law, recipe, and document generation.
tags: [reasoning, hallucination, multi-agent, metacognition, emergence, middleware]
metadata:
  clawdbot:
    config:
      requiredEnv: []
  example: |
    llm:
      baseURL: "http://localhost:8080/v1"
      model: "gpt-5.4::create"
---
# MARL Enhance — Brain Upgrade for Your Agent
**The 3rd approach after fine-tuning & RAG.** MARL restructures how LLMs reason at runtime — not their weights. One line to integrate, 70%+ hallucination reduction, 9 domain-specific emergence engines.
[PyPI](https://pypi.org/project/marl-middleware/) · [GitHub](https://github.com/Vidraft/MARL) · [HuggingFace Space](https://huggingface.co/spaces/VIDraft/MARL) · [FINAL Bench Leaderboard](https://huggingface.co/spaces/FINAL-Bench/Leaderboard)
## What It Does
**Before MARL:** your agent calls the LLM once → gets an answer (might hallucinate).

**After MARL:** your agent calls MARL → MARL runs a multi-stage expert pipeline → hypothesis, solving, auditing, adversarial verification, synthesis → returns a deeply verified answer.
```
Your Agent → MARL → Multi-stage Pipeline → Any LLM → Verified Answer
```

**Results:** 70%+ hallucination reduction · 94.8% of improvement from self-correction · Verified on FINAL Bench (HuggingFace Global Top 5 dataset).
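The stage names above (hypothesis → solving → auditing → adversarial verification → synthesis) can be pictured as a chain of LLM calls. The sketch below is purely illustrative of that pipeline shape; MARL's actual engines are proprietary, and the function and prompt format here are our own invention:

```python
# Illustrative sketch of a multi-stage verification pipeline.
# Stage names come from the MARL description; everything else
# (function names, prompt format) is a hypothetical placeholder.

def run_pipeline(question, call_llm):
    """Chain the five MARL-style stages over one question."""
    stages = ["hypothesis", "solving", "auditing",
              "adversarial verification", "synthesis"]
    answer = question
    for stage in stages:
        # Each stage re-prompts the LLM with the prior stage's output,
        # so later stages can audit and correct earlier ones.
        answer = call_llm(f"[{stage}] {answer}")
    return answer

# A dummy "LLM" that echoes its prompt, just to show the data flow.
result = run_pipeline("Why is the sky blue?", lambda prompt: prompt)
```

The point is that the final answer has passed through every stage, which is where the self-correction gains come from.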
## Setup

### Option A: Docker (Recommended — all platforms)

```
docker run -p 8080:8080 vidraft/marl
```

### Option B: pip (Linux x86_64)

```
pip install marl-middleware
python -m marl serve --port 8080
```

### Option C: HuggingFace Space (No install — try instantly)

Use `https://huggingface.co/spaces/VIDraft/MARL` directly in your browser.
## Connect to OpenClaw

Set your `config.json`:

```json
{
  "llm": {
    "baseURL": "http://localhost:8080/v1",
    "model": "gpt-5.4"
  }
}
```

That's it. Every LLM call now passes through MARL's multi-stage reasoning pipeline.
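Because MARL exposes an OpenAI-compatible endpoint, any client that speaks the OpenAI chat format can talk to it directly, not just OpenClaw. A minimal sketch of the request shape, assuming the standard OpenAI path `/v1/chat/completions` under the `baseURL` above (the helper name is ours):

```python
import json

def build_marl_request(model, user_message):
    """Build an OpenAI-format chat request body for MARL.

    The shape follows the OpenAI Chat Completions convention,
    which MARL accepts on its /v1 endpoint.
    """
    return {
        "model": model,  # e.g. "gpt-5.4" or "gpt-5.4::pharma"
        "messages": [{"role": "user", "content": user_message}],
    }

body = build_marl_request("gpt-5.4", "Analyze this complex question")
payload = json.dumps(body)
# POST `payload` to http://localhost:8080/v1/chat/completions
# once the MARL server from the Setup section is running.
```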
## 9 Emergence Modes
Switch modes by appending `::mode` to any model name:
| model value | Engine | What it does |
|-------------|--------|-------------|
| `gpt-5.4` | 🔬 Insight | Default — fact-check, strategy, deep analysis |
| `gpt-5.4::invent` | 🔧 Invent | Patent-level invention via TRIZ + bio-inspired + contradiction resolution |
| `gpt-5.4::create` | ✨ Create | Cliché inversion, paradox, genre fusion, sensory collision |
| `gpt-5.4::recipe` | 🍳 Recipe | Culinary emergence with taste chemistry validation |
| `gpt-5.4::pharma` | 💊 Pharma | Drug repositioning, mechanism crossing, multi-target design |
| `gpt-5.4::genomics` | 🧬 Genomics | Pathway crosstalk, synthetic lethality, phenotype bridging |
| `gpt-5.4::chemistry` | 🧪 Chemistry | Contradictory properties, biomimicry, waste-to-value |
| `gpt-5.4::ecology` | 🌍 Ecology | Conservation transfer, threat inversion, service stacking |
| `gpt-5.4::law` | ⚖️ Law | Cross-jurisdiction transplant, tech-law collision resolution |
| `gpt-5.4::document` | 📄 Document | Metacognitive report and document generation |
Replace `gpt-5.4` with any model — Claude, Gemini, DeepSeek, Llama, etc.
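The `::mode` suffix is plain string composition, so it is easy to build or parse programmatically when switching engines at runtime. A small helper sketch, assuming the mode names from the table above (the function names are ours, not part of MARL):

```python
# The nine mode names come from the table above.
VALID_MODES = {"invent", "create", "recipe", "pharma", "genomics",
               "chemistry", "ecology", "law", "document"}

def with_mode(model, mode=None):
    """Append an emergence mode to a base model name."""
    if mode is None:
        return model  # bare name -> default Insight engine
    if mode not in VALID_MODES:
        raise ValueError(f"unknown mode: {mode}")
    return f"{model}::{mode}"

def split_mode(model_value):
    """Split 'model::mode' back into (model, mode or None)."""
    base, sep, mode = model_value.partition("::")
    return base, (mode if sep else None)
```

This mirrors how the examples below compose values like `gpt-5.4::pharma` and `claude-sonnet::create`.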
### Example: Switch to Pharma mode

```json
{
  "llm": {
    "baseURL": "http://localhost:8080/v1",
    "model": "gpt-5.4::pharma"
  }
}
```

Then chat: *"Find drug repositioning candidates for Alzheimer's using immune checkpoint mechanisms"*
### Example: Creative ideation

```json
{
  "llm": {
    "model": "claude-sonnet::create"
  }
}
```

Then chat: *"Generate 10 movie loglines that have never existed before"*
## How It Works

```
┌─ OpenClaw ─────────────────────────────────────┐
│ "Analyze this complex question"                │
└──────────────┬─────────────────────────────────┘
               │ HTTP (OpenAI API format)
               ▼
┌─ MARL Middleware ──────────────────────────────┐
│ Multi-stage Multi-agent Reasoning Pipeline     │
│ 9 Emergence Engines · 70%+ Hallucination ↓     │
└──────────────┬─────────────────────────────────┘
               │ API calls to your chosen LLM
               ▼
┌─ Any LLM ──────────────────────────────────────┐
│ GPT-5.4 · Claude · Gemini · DeepSeek · Llama   │
└────────────────────────────────────────────────┘
```

MARL works with **every LLM** that supports the OpenAI API format. It runs locally on your machine — your data never leaves your infrastructure.
## About
Built by **VIDRAFT** (Seoul AI Hub). MARL's core engine is delivered as compiled binaries to protect proprietary technology. Interface code is open for integration.
Apache 2.0 · Contact: arxivgpt@gmail.com