Robotics VLA Skill
name: robotics-vla
by arden2010 · published 2026-04-01
$ claw add gh:arden2010/arden2010-robotics-vla

---
name: robotics-vla
description: >
  Expert guidance for Vision-Language-Action (VLA) robot foundation models — covering architecture design, training pipelines, data strategy, deployment, and evaluation. Use when (1) designing or implementing a generalist robot policy (VLA model), (2) setting up pre-training or fine-tuning pipelines for robot manipulation, (3) choosing action representations (flow matching vs. diffusion vs. autoregressive), (4) structuring multi-embodiment robot datasets, (5) evaluating dexterous manipulation tasks, (6) implementing action chunking or high-level policy decomposition. Based on the pi0 architecture (Physical Intelligence, 2024).
---
# Robotics VLA Skill
Expert guidance for building generalist robot policies using Vision-Language-Action (VLA) flow models, based on the π0 architecture.
## Core Architecture
**π0 model = VLM backbone + action expert + flow matching**
| Component | Detail |
|---|---|
| VLM backbone | PaliGemma (3B) — provides visual + language understanding |
| Action expert | Separate transformer weights (~300M) for robot state + actions |
| Total params | ~3.3B |
| Action output | Chunks of H=50 actions; 50 Hz or 20 Hz control |
| Inference speed | ~73ms on RTX 4090 |
See `references/architecture.md` for full technical details (attention masks, flow matching math, MoE design).
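The chunked-inference pattern implied by the table can be sketched as follows. `policy` is a stand-in for the real model (a real implementation would run the PaliGemma backbone + action expert), and the 7-DoF action dimension is a hypothetical example:

```python
import numpy as np

H = 50  # actions per predicted chunk (from the table above)

def policy(observation):
    """Stand-in for the VLA model; returns an (H, action_dim) chunk."""
    return np.zeros((H, 7))  # 7-DoF action space, hypothetical

def run_chunked(get_obs, send_action, n_chunks=4, execute_horizon=H):
    """Open-loop execution within each chunk, re-querying the policy every
    `execute_horizon` steps. Chunking amortizes the ~73 ms forward pass
    across many control steps, which is what makes 50 Hz control feasible."""
    for _ in range(n_chunks):
        chunk = policy(get_obs())
        for action in chunk[:execute_horizon]:
            send_action(action)  # executed at the robot's control rate
```

Executing fewer than H actions before re-planning (`execute_horizon < H`) trades open-loop smoothness for faster reaction to disturbances.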
## Training Pipeline
**Two-phase approach (mirrors LLM training):**
1. **Pre-training** → broad physical capabilities + recovery behaviors across many tasks/robots
2. **Fine-tuning** → fluent, task-specific execution on target task
Key rule: combining both phases outperforms either alone. Pre-training gives robustness; fine-tuning gives precision.
See `references/training.md` for data mixture ratios, loss functions, and fine-tuning dataset sizing.
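A minimal sketch of the two-phase data mixing, with entirely hypothetical dataset names and weights (the real ratios live in `references/training.md`). Keeping a small replay of pre-training data during fine-tuning is one common way to preserve the robustness the first phase buys:

```python
import random

# Hypothetical mixtures — placeholders, not pi0's actual ratios.
PRETRAIN_MIX = {"open_x": 0.6, "internal_multi_robot": 0.4}
FINETUNE_MIX = {"target_task": 0.8, "pretrain_replay": 0.2}

def sample_dataset(mix, rng=random):
    """Draw one dataset name according to the mixture weights."""
    names, weights = zip(*mix.items())
    return rng.choices(names, weights=weights, k=1)[0]
```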
## Action Representation
**Use flow matching, not autoregressive discretization.** Flow matching generates continuous action chunks in a handful of denoising steps, which preserves precision and keeps inference fast enough for high-frequency control; autoregressive token discretization both quantizes the actions and decodes too slowly for 50 Hz chunks.
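For concreteness, here is a generic conditional flow matching sketch (linear interpolation path, Euler integration) — not the exact π0 formulation, whose details are in `references/architecture.md`:

```python
import numpy as np

def flow_matching_target(x0, x1, tau):
    """Linear path from noise x0 to action chunk x1 at time tau in [0, 1],
    plus the constant target velocity the network is regressed against."""
    x_tau = (1.0 - tau) * x0 + tau * x1
    v_target = x1 - x0
    return x_tau, v_target

def euler_sample(velocity_fn, x0, steps=10):
    """Integrate a learned velocity field from noise (tau=0) to an action
    chunk (tau=1) with fixed-step Euler integration."""
    x = x0
    dt = 1.0 / steps
    for i in range(steps):
        x = x + dt * velocity_fn(x, i * dt)
    return x
```

Training minimizes the MSE between `velocity_fn(x_tau, tau)` (conditioned on the observation) and `v_target`; at deployment only `euler_sample` runs.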
## Multi-Embodiment Support
A single model handles 7+ robot configurations through a shared state/action space. See `references/embodiments.md` for robot platform specs and action space details.
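One common multi-embodiment trick is zero-padding every robot's action vector to a shared maximum dimension so a single output head serves all platforms. A sketch with a hypothetical maximum (check `references/embodiments.md` for the actual dimensions):

```python
import numpy as np

MAX_ACTION_DIM = 24  # hypothetical cross-embodiment maximum

def pad_action(action, max_dim=MAX_ACTION_DIM):
    """Zero-pad a robot-specific action vector into the shared action space."""
    out = np.zeros(max_dim, dtype=action.dtype)
    out[: action.shape[0]] = action
    return out

def unpad_action(padded, dim):
    """Recover the robot-specific slice; trailing zeros are ignored."""
    return padded[:dim]
```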
## High-Level Policy Integration
For long-horizon tasks, use a two-tier approach: a high-level policy decomposes the task into intermediate language commands, and the low-level VLA policy executes each command. This is analogous to SayCan; intermediate language commands significantly boost performance vs. flat task descriptions.
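The two-tier loop can be sketched as follows; `plan_next` and `execute` are hypothetical stand-ins for the high-level planner and the low-level VLA policy:

```python
def run_two_tier(task, plan_next, execute, max_subtasks=10):
    """Two-tier control: `plan_next` maps the task to the next intermediate
    language command (or None when done); `execute` runs the low-level VLA
    policy to completion on that one command."""
    for _ in range(max_subtasks):
        subtask = plan_next(task)
        if subtask is None:
            return True   # planner declared the task complete
        execute(subtask)
    return False          # hit the subtask budget without finishing
```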
## Related & Complementary Research (2025)
π0 has been extended and complemented by several key works; see `references/related-work.md` for the full landscape.
## Evaluation Checklist
When evaluating a robot manipulation policy, run repeated trials per task and report success rates (or partial-credit scores) rather than single rollouts; manipulation evals are too noisy for one-shot comparisons.
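Since trial budgets are small, bare success percentages overstate precision. A sketch of a binomial success rate with a normal-approximation 95% confidence interval:

```python
import math

def success_rate_ci(successes, trials, z=1.96):
    """Success rate with a normal-approximation CI, clipped to [0, 1].
    With few trials the interval is wide — report it alongside the rate."""
    p = successes / trials
    half = z * math.sqrt(p * (1.0 - p) / trials)
    return p, max(0.0, p - half), min(1.0, p + half)
```

For very small trial counts a Wilson or exact binomial interval is a better choice than this normal approximation.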