MemoryAI – A Brain for Your AI Agent 🧠
name: memoryai
by ch270035 · published 2026-03-22
```
$ claw add gh:ch270035/ch270035-memoryai
```

---
name: memoryai
description: Persistent long-term memory for AI agents. Store, recall, reason, and seamlessly switch sessions with zero context loss.
version: 1.0.0
metadata: {"openclaw": {"emoji": "🧠", "requires": {"bins": ["python3"], "env": ["HM_API_KEY", "HM_ENDPOINT", "OPENCLAW_DIR", "WORKSPACE"]}, "primaryEnv": "HM_API_KEY"}}
---
# MemoryAI – A Brain for Your AI Agent 🧠
> **Every AI session starts from zero. Yours doesn't have to.**
You spend 2 hours explaining your codebase, your preferences, your architecture decisions. The session ends. Tomorrow? Your AI has amnesia. You start over. Again.
**MemoryAI fixes this permanently.** It gives your AI agent a real long-term memory: one that remembers what you built, what you decided, what you prefer, and why. Not for hours. For **weeks, months, even years.**
See it in action:
> **Monday:** "Our API uses /v1/ prefix, TypeScript with 2-space tabs, and we deploy via GitHub Actions."
>
> **3 weeks later:** You say "write a new endpoint." Your AI already knows the prefix, the style, the pipeline. Zero repetition. It just *remembers.*
Your AI's memory works like yours:
The more your AI uses a memory, the sharper it stays. Unused memories age into cold storage, but a single recall brings them right back. Just like the human brain.
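The aging behavior described above can be pictured with a toy scoring rule. This is purely illustrative: `memory_score`, the half-life, and the tier thresholds are assumptions for exposition, not the skill's published formula.

```python
# Illustrative sketch of recall-strengthened, time-decayed memory scoring.
# All names and constants here are assumptions, not MemoryAI internals.
def memory_score(recall_count: int, last_recall_ts: float,
                 now: float, half_life_days: float = 7.0) -> float:
    """More recalls raise the score; time since the last recall decays it."""
    age_days = (now - last_recall_ts) / 86400.0
    decay = 0.5 ** (age_days / half_life_days)  # exponential aging
    return (1.0 + recall_count) * decay

def tier(score: float) -> str:
    """Map a score onto the hot/warm/cold tiers from the text."""
    if score >= 2.0:
        return "hot"
    if score >= 0.5:
        return "warm"
    return "cold"
```

Under this sketch, a memory recalled a few times today scores `hot`, while one untouched for several half-lives drifts to `cold`; a single recall resets its age, pulling it straight back.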
**Zero dependencies.** Pure Python stdlib. Every line of source code is readable, auditable, and yours to inspect.
What's new:
## Setup

1. Get an API key from https://memoryai.dev (free tier available)
2. Edit `{baseDir}/config.json`:

   ```json
   {
     "endpoint": "https://memoryai.dev",
     "api_key": "hm_sk_your_key_here"
   }
   ```

   Or set env vars: `HM_ENDPOINT` and `HM_API_KEY`.

3. Test: `python {baseDir}/scripts/client.py stats`
## Commands

### Store memory

```
python {baseDir}/scripts/client.py store -c "data to remember" -t "tag1,tag2" -p hot
```

Priority: `hot` (important, frequent recall) | `warm` (default) | `cold` (archive)
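For scripting, the documented flags can be wrapped in a small argv builder. This is a sketch: `CLIENT` must be expanded to your actual `{baseDir}`, and only the `-c`/`-t`/`-p` flags shown above are assumed to exist.

```python
import subprocess  # used when you actually run the command

CLIENT = "{baseDir}/scripts/client.py"  # expand {baseDir} for your install

def store_cmd(content: str, tags: list[str], priority: str = "warm") -> list[str]:
    """Build the argv for a `store` call; pass the result to subprocess.run."""
    if priority not in ("hot", "warm", "cold"):
        raise ValueError(f"unknown priority: {priority}")
    return ["python", CLIENT, "store",
            "-c", content, "-t", ",".join(tags), "-p", priority]

# Example (not executed here):
# subprocess.run(store_cmd("API uses /v1/ prefix", ["decisions"], "hot"))
```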
Optional parameters:
### Recall memory

```
python {baseDir}/scripts/client.py recall -q "what was discussed?" -d deep
```

Depth controls how hard the brain tries to remember:
### Stats

```
python {baseDir}/scripts/client.py stats
```

### Compact (consolidate memories)
```
python {baseDir}/scripts/client.py compact -c "session transcript or context" -t "task description"
```

Like the brain consolidating memories during sleep, this distills a long session into its key memories.
### Restore context

```
python {baseDir}/scripts/client.py restore -t "what I was working on"
```

Wake up with full context for your current task.
### Check context health

```
python {baseDir}/scripts/client.py check
```

Returns `urgency`: `low` | `medium` | `high` | `critical`
### Reflect (auto-reflection)

```
python {baseDir}/scripts/client.py reflect --hours 24 --max-insights 5
```

Finds recurring patterns in recent memories and creates insight chunks.
## Session Handoff (v0.5.0)

Zero-gap session switching: when the context window fills up, seamlessly transition to a new session without losing any context.
How it works:
1. Old session sends its conversation to your configured MemoryAI endpoint (HTTPS)
2. New session retrieves the old conversation + related long-term memories from the same endpoint
3. Once the new session has enough context, the old session is archived into long-term storage
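The three steps above can be driven from Python via the documented subcommands. A sketch only: the `run` parameter is injectable for testing, error handling is omitted, and `{baseDir}` must be expanded to your actual skill directory.

```python
import json
import subprocess

CLIENT = "{baseDir}/scripts/client.py"  # expand {baseDir} for your install

def run_handoff(conversation: list[dict], run=subprocess.run) -> list[list[str]]:
    """Execute handoff-start -> handoff-restore -> handoff-complete in order.

    Returns the argv of each step so callers can log what was invoked.
    """
    steps = []
    start = ["python", CLIENT, "handoff-start"]
    run(start, input=json.dumps(conversation), text=True, check=True)  # step 1
    steps.append(start)
    restore = ["python", CLIENT, "handoff-restore"]
    run(restore, capture_output=True, text=True, check=True)           # step 2
    steps.append(restore)
    complete = ["python", CLIENT, "handoff-complete"]
    run(complete, check=True)                                          # step 3
    steps.append(complete)
    return steps
```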
### Start handoff (save old session)

```
python {baseDir}/scripts/client.py handoff-start -c '[{"role":"user","content":"..."},{"role":"assistant","content":"..."}]'
```

Or pipe from stdin:

```
echo '[ ... conversation ... ]' | python {baseDir}/scripts/client.py handoff-start
```

### Restore handoff (new session reads old session)
```
python {baseDir}/scripts/client.py handoff-restore
```

Returns the old session conversation + related MemoryAI memories.
Options:
### Complete handoff (archive old session)

```
python {baseDir}/scripts/client.py handoff-complete
```

Archives the old session conversation into long-term MemoryAI storage and marks the handoff as done.
### Check handoff status

```
python {baseDir}/scripts/client.py handoff-status
```

Returns whether there's an active handoff and its state.
## Context Guard (Optional)
Background job to automatically monitor and compact memory when context gets too large.
**To enable, ask the user for permission first, then create a cron job:**

```
Run context check: python {baseDir}/scripts/client.py check
If urgency is 'high' or 'critical': run python {baseDir}/scripts/client.py compact -c "auto context guard backup" -t "context-guard"
Reply NO_REPLY.
```

**Important:** Always ask the user before creating the cron job.
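The guard's decision reduces to a small predicate over the `check` output. The urgency levels come from the `check` command documented above; that `check` emits a JSON object with an `urgency` key is an assumption for this sketch.

```python
def should_compact(check_result: dict) -> bool:
    """Compact only when the context guard reports high or critical urgency."""
    return check_result.get("urgency") in ("high", "critical")
```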
## When to Use
| Trigger | Action | Priority | Tags |
|---------|--------|----------|------|
| Task completed | store outcome | hot | projects |
| User says "remember this" | store content | hot | people |
| Research/search done | store findings | warm | research |
| Technical decision | store decision | hot | decisions |
| Error/lesson learned | store lesson | hot | errors |
| Need past context | recall first | – | – |
| End of session | compact important context | – | – |
| Start of session | restore context for task | – | – |
| Context too high | handoff-start + handoff-restore | – | – |
| New session warm enough | handoff-complete | – | – |
## Memory Lifecycle
```
Store → Active (hot) → Aging (warm) → Archive (cold)
          ↑                               |
          └──── recalled = strengthened ──┘
```

## Rules
## Platform Compatibility
All core memory features work on any platform that can run Python 3.10+:
| Feature | All Platforms | OpenClaw Only |
|---------|:---:|:---:|
| Store / Recall / Stats | ✅ | ❌ |
| Compact / Restore / Check | ✅ | ❌ |
| Session Handoff (manual) | ✅ | ❌ |
| Reflect | ✅ | ❌ |
| Context Guard (auto monitoring) | ❌ | ✅ |
| Auto session switch | ❌ | ✅ |
**On IDE platforms** (Cursor, VS Code, Claude Code, Windsurf, Antigravity):
**On OpenClaw:**
## Data & Privacy
**What this skill reads locally:**
**What this skill sends externally:**
**Privacy controls:**