Body Emotion Sensor
name: body-emotion-sensor
by askkumptenchen · published 2026-04-01
```shell
claw add gh:askkumptenchen/askkumptenchen-body-emotion-sensor
```

---
name: body-emotion-sensor
description: Give an agent a persistent body-emotion state system that converts structured AnalysisInput JSON into runtime prompt tags and workspace state updates. Use when the agent needs emotional continuity, session bootstrap payloads, AnalysisInput processing, or reply-shaping fields such as TURN_CHANGE_TAGS, BODY_TAG, and BASELINE_PERSONA.
metadata: {"openclaw":{"os":["win32","linux","darwin"]}}
---
# Body Emotion Sensor
Give your AI agent a stable body-emotion state that persists across sessions and turns.
Use this skill to route requests to the local package docs, explain the runtime contract honestly, and operate the installed `bes` CLI only when the local environment is actually ready.
## What this skill brings to your Agent
## Entry behavior
When this skill is used, the agent should:
## Safety boundary
This entry file should stay within a narrow and transparent scope:
## Local state and persistence
Be explicit about where state is stored:
If the user asks about privacy, explain that the package writes local JSON state files in these locations, and do not describe any remote storage unless it has been verified separately.
## Local document index
Use these local files as the primary reference:
## How to route requests
Choose the next local document based on the user's request:
1. If the user wants a quick overview, read `README.md`.
2. If the user asks how installation or the CLI works, read `README.md` and `pyproject.toml`.
3. If the user asks where state is stored or whether the skill is safe, read `src/body_emotion/workspace.py`, `src/body_emotion/store.py`, and `src/body_emotion/locale_config.py`.
4. If the user asks how OpenClaw integration should work, read the relevant file under `prompts/`.
5. If the user asks what a command actually does, inspect `src/body_emotion/commands.py`.
## Missing-resource rule
If the expected local repository files are not available in the current workspace, do not improvise the full setup flow from memory. Instead:
## Install and readiness rule
If the user wants to actually enable runtime use:
1. First check whether `bes` is already available in the current environment.
2. If it is not available, explain that Body Emotion Sensor requires installing the published Python package before the CLI exists.
3. Ask for approval before any install command.
4. If the user approves installation, run:
   ```shell
   pip install body-emotion-sensor
   ```
5. After installation, prefer:
   ```shell
   bes help
   ```
6. If the user's language is Chinese, the agent may suggest or run:
   ```shell
   bes language zh
   ```
7. Readiness should be confirmed with:
   ```shell
   bes check-init --workspace <W> --agent-id <ID> --name "<NAME>"
   ```
Only treat the skill as available when the returned JSON contains `"ready": true`.
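The readiness gate above can be sketched in Python. This is a minimal sketch, assuming only what the steps document: that `bes` must exist on `PATH` and that `bes check-init` prints JSON containing a `"ready"` key; any other output shape is an assumption.

```python
import json
import shutil


def bes_available() -> bool:
    """Check whether the `bes` CLI exists on PATH before running anything."""
    return shutil.which("bes") is not None


def parse_readiness(check_init_output: str) -> bool:
    """Parse `bes check-init` output.

    The environment counts as ready only when the returned JSON
    contains "ready": true; anything else (including non-JSON
    output) means not ready.
    """
    try:
        payload = json.loads(check_init_output)
    except json.JSONDecodeError:
        return False
    return payload.get("ready") is True
```

An agent would call `bes_available()` before step 1 and feed the stdout of the `check-init` command into `parse_readiness()` before proceeding to the runtime flow.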
## Runtime rules once available
When the local environment is ready, use the following runtime flow.
### New session
At the start of a new session, before the first reply, run:
```shell
bes bootstrap --workspace <W> --agent-id <ID> --name "<NAME>"
```
Use the returned fields as the session-start prompt payload:
### Before every reply
Before every reply, do these steps in order:
1. Read the built-in analysis prompt:
   ```shell
   bes prompt analysis-input
   ```
2. Use that prompt with the upstream model to produce `<analysis-input.json>`.
3. Run:
   ```shell
   bes run --workspace <W> --agent-id <ID> --name "<NAME>" --input <analysis-input.json>
   ```
4. Use the returned top-level fields in the reply layer:
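The per-reply step 3 can be sketched as a small Python wrapper. The flag names come straight from the command above; the assumption that `bes run` prints a single JSON object on stdout, and the helper names themselves, are hypothetical.

```python
import json
import subprocess


def build_run_command(workspace: str, agent_id: str,
                      name: str, input_path: str) -> list:
    """Assemble the documented `bes run` invocation for one turn."""
    return [
        "bes", "run",
        "--workspace", workspace,
        "--agent-id", agent_id,
        "--name", name,
        "--input", input_path,
    ]


def run_turn(workspace: str, agent_id: str,
             name: str, input_path: str) -> dict:
    """Run one turn and return the top-level JSON fields.

    Assumes the CLI prints one JSON object on stdout; the exact
    reply-shaping field names depend on the installed package.
    """
    cmd = build_run_command(workspace, agent_id, name, input_path)
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return json.loads(result.stdout)
```

The returned dict is what the reply layer would consume in step 4.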
## Important rules
## Examples
Minimal command reference:
```shell
bes help
bes language zh
bes check-init --workspace <W> --agent-id <ID> --name "<NAME>"
bes bootstrap --workspace <W> --agent-id <ID> --name "<NAME>"
bes prompt analysis-input
bes run --workspace <W> --agent-id <ID> --name "<NAME>" --input <analysis-input.json>
```