---
name: ai-adoption-readiness
description: >
  Assess organizational readiness for AI adoption across 6 dimensions: culture, data maturity,
  tech stack, leadership buy-in, skills/talent, and process maturity. Generates a scored readiness
  report with gap analysis and a prioritized action plan. Use before building a change management
  plan to understand where an organization actually stands. Built by AfrexAI.
metadata:
  version: 1.0.0
  author: AfrexAI
  tags: [ai-adoption, readiness-assessment, digital-transformation, enterprise, strategy]
---

Published 2026-04-01 by afrexai-cto. Install with `claw add gh:afrexai-cto/afrexai-cto-afrexai-ai-adoption-readiness`.
# AI Adoption Readiness Assessment
Score how prepared an organization is to adopt AI agents and automation. Identifies gaps before they become failed implementations. Pairs with the `change-management-plan` skill — run this first, then feed results into the change plan.
## When to Use

## How to Use
The user describes their organization. The agent conducts the assessment.
## Input Format

Organization: [Company name, size, industry]
AI Initiative: [What they want to do with AI]
Department/Scope: [Which teams are involved]
Current Tools: [Existing tech stack, any AI tools already in use]
Budget Range: [Approximate budget for AI initiatives]
Timeline Pressure: [When do they need this working?]
Known Blockers: [Anything they already know is a problem]

If the user provides partial info, ask for the missing critical fields (Organization, AI Initiative, and Scope at minimum). Infer reasonable defaults for the rest.
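As a sketch of how an agent might handle partial input, the template above can be parsed field by field and checked against the critical fields. The `parse_intake` helper and the sample text are illustrative, not part of the skill spec:

```python
# Parse the free-form "Field: value" intake template into a dict and
# flag missing critical fields. Field names match the template above.
REQUIRED = {"Organization", "AI Initiative", "Department/Scope"}

def parse_intake(text: str):
    fields = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            if value.strip():
                fields[key.strip()] = value.strip()
    missing = {f for f in REQUIRED if f not in fields}
    return fields, missing

sample = """Organization: Acme Logistics, 450 staff, freight
AI Initiative: Automate customer support triage
Current Tools: Zendesk, Postgres"""

fields, missing = parse_intake(sample)
print(missing)  # {'Department/Scope'}
```

Anything not in `missing` can proceed with inferred defaults; anything in it triggers a follow-up question.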
## Assessment Framework

### Scoring System

Each dimension is scored from 1 (lowest) to 5 (highest).

**Overall Readiness** = weighted average of all 6 dimensions.

### Readiness Thresholds

| Overall Score | Band   | Meaning                               |
|---------------|--------|---------------------------------------|
| 4.0+          | Green  | Ready to accelerate                   |
| 3.0–3.9       | Yellow | Proceed, remediating gaps in parallel |
| 2.0–2.9       | Orange | Prepare before rollout                |
| < 2.0         | Red    | Build foundations first               |
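The weighted-average arithmetic can be sketched as follows. The weights are the ones defined for the six dimensions below; the sample scores and helper names are illustrative:

```python
# Weights per dimension (must sum to 1.0), matching the sections below.
WEIGHTS = {
    "Culture & Mindset": 0.20,
    "Data Maturity": 0.20,
    "Technical Infrastructure": 0.15,
    "Leadership & Sponsorship": 0.20,
    "Skills & Talent": 0.15,
    "Process Maturity": 0.10,
}

def overall(scores: dict) -> float:
    """Weighted average of the six 1-5 dimension scores."""
    return round(sum(scores[d] * w for d, w in WEIGHTS.items()), 2)

def band(score: float) -> str:
    """Map an overall score to its readiness band."""
    if score >= 4.0:
        return "Green"
    if score >= 3.0:
        return "Yellow"
    if score >= 2.0:
        return "Orange"
    return "Red"

scores = {"Culture & Mindset": 3, "Data Maturity": 2,
          "Technical Infrastructure": 4, "Leadership & Sponsorship": 3,
          "Skills & Talent": 2, "Process Maturity": 3}
print(overall(scores), band(overall(scores)))  # 2.8 Orange
```

Note how the heavier weights on culture, data, and leadership mean a weak score there drags the overall result down more than a weak tech stack does.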
---
## Dimension 1: Culture & Mindset (Weight: 20%)

Assess openness to change, experimentation, and technology adoption.

### Questions to Evaluate

### Scoring Criteria
| Score | Description |
|-------|-------------|
| 1 | Strong resistance to change. "We've always done it this way." Fear-based culture. |
| 2 | Passive resistance. Leadership wants change but teams don't. No experimentation culture. |
| 3 | Mixed — some teams innovate, others resist. No consistent change approach. |
| 4 | Generally open to change. Past tech adoptions went OK. Some experimentation happening. |
| 5 | Innovation culture. Teams actively seek better tools. Failure is treated as learning. |
### Red Flags

---

## Dimension 2: Data Maturity (Weight: 20%)

Assess data quality, accessibility, and governance — AI is only as good as its data.

### Questions to Evaluate

### Scoring Criteria
| Score | Description |
|-------|-------------|
| 1 | Data lives in spreadsheets and email. No standards. No governance. |
| 2 | Some databases exist but siloed. Manual data entry. No quality checks. |
| 3 | Central data store exists. Some governance. Quality is inconsistent. |
| 4 | Clean, accessible data. Governance in place. Teams use data for decisions. |
| 5 | Data platform with automated quality checks. Real-time access. Strong governance. |
### Red Flags

---

## Dimension 3: Technical Infrastructure (Weight: 15%)

Assess whether the tech stack can support AI tools and integrations.

### Questions to Evaluate

### Scoring Criteria
| Score | Description |
|-------|-------------|
| 1 | Legacy systems, no APIs, manual deployments. On-prem only. |
| 2 | Mix of legacy and modern. Some APIs. Basic cloud usage. |
| 3 | Mostly modern stack. APIs for major systems. Cloud infrastructure. |
| 4 | Cloud-native. API-first architecture. CI/CD. Security controls in place. |
| 5 | Modern platform with integration layer. Infrastructure as code. Zero-trust security. |
### Red Flags

---

## Dimension 4: Leadership & Sponsorship (Weight: 20%)

Assess executive commitment; AI adoption efforts without committed leadership backing routinely fail.

### Questions to Evaluate

### Scoring Criteria
| Score | Description |
|-------|-------------|
| 1 | No executive sponsor. AI is a curiosity, not a strategy. |
| 2 | Interested executive but no budget or authority allocated. |
| 3 | Sponsor exists with some budget. AI tied to vague "efficiency" goals. |
| 4 | Strong sponsor. Clear business case. Budget allocated. Willing to iterate. |
| 5 | C-suite aligned. AI is strategic priority. Multi-year commitment. Success metrics defined. |
### Red Flags

---

## Dimension 5: Skills & Talent (Weight: 15%)

Assess whether the team can use, manage, and maintain AI tools.

### Questions to Evaluate

### Scoring Criteria
| Score | Description |
|-------|-------------|
| 1 | No technical talent. Team can barely use current tools. |
| 2 | Some tech-savvy individuals but no AI knowledge. No training plan. |
| 3 | General technical competence. 1-2 people with AI awareness. Training possible. |
| 4 | Technical team capable of managing integrations. AI training underway. |
| 5 | In-house AI expertise. Team can evaluate, customize, and maintain AI tools. |
### Red Flags

---

## Dimension 6: Process Maturity (Weight: 10%)

Assess whether processes are documented and consistent enough for AI to augment.

### Questions to Evaluate

### Scoring Criteria
| Score | Description |
|-------|-------------|
| 1 | No documentation. Tribal knowledge. Inconsistent execution. |
| 2 | Some processes documented but outdated. Inconsistent across teams. |
| 3 | Key processes documented. Some KPIs tracked. Mostly consistent. |
| 4 | Well-documented processes with metrics. Clear candidates for AI. |
| 5 | Process excellence. Documented, measured, optimized. Ready for intelligent automation. |
### Red Flags

---

## Output: Readiness Report

Generate the full report in this structure:

### 1. Executive Summary

### 2. Dimension Scorecard

For each of the 6 dimensions:

### 3. Gap Analysis

### 4. Readiness Roadmap

Phased action plan based on overall score:

- **If Red (< 2.0):** 6-month foundation phase
- **If Orange (2.0–2.9):** 3-month preparation phase
- **If Yellow (3.0–3.9):** Parallel track
- **If Green (4.0+):** Accelerate

### 5. Quick Wins

3-5 actions that can start this week with no budget and minimal effort. These build momentum.

### 6. Risk Register

Top 5 risks to AI adoption success, each with:

### 7. Next Steps
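An agent can emit this report skeleton programmatically before filling in each section. The section titles come from the structure above; the `report_skeleton` helper and sample organization name are illustrative:

```python
# Emit the seven-section report skeleton as Markdown headings.
SECTIONS = [
    "Executive Summary", "Dimension Scorecard", "Gap Analysis",
    "Readiness Roadmap", "Quick Wins", "Risk Register", "Next Steps",
]

def report_skeleton(org: str) -> str:
    lines = [f"# AI Readiness Report: {org}", ""]
    for i, title in enumerate(SECTIONS, 1):
        lines += [f"## {i}. {title}", ""]
    return "\n".join(lines)

print(report_skeleton("Acme Logistics").splitlines()[2])  # ## 1. Executive Summary
```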
---
## Integration with Other Skills
This skill is designed to work in a pipeline:
1. **AI Adoption Readiness** (this skill) → Assess current state
2. **Compliance Readiness** → Check regulatory alignment
3. **Change Management Plan** → Build the rollout playbook
4. **Vendor Risk Assessment** → Evaluate AI vendor options
5. **Incident Response Plan** → Prepare for AI failures
6. **SLA Monitor** → Set up reliability guarantees
Recommend the next skill based on assessment results.
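One way to make that recommendation concrete is to route on the overall readiness band. The skill names come from the pipeline above; the band-to-skill mapping is an illustrative assumption based on the roadmap phases, not something the skill itself defines:

```python
# Hypothetical routing: pick the next skill in the pipeline from the band.
def next_skill(band: str) -> str:
    if band in ("Red", "Orange"):
        # Large gaps: build the rollout playbook before anything else.
        return "change-management-plan"
    if band == "Yellow":
        # Pilot while checking regulatory alignment in parallel.
        return "compliance-readiness"
    # Green: ready to start evaluating vendors.
    return "vendor-risk-assessment"

print(next_skill("Orange"))  # change-management-plan
```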
---
## Tips for the Agent