EEAT Content Quality Audit
name: eeat-content-quality-audit
by 281862066-a11y · published 2026-04-01
$ claw add gh:281862066-a11y/281862066-a11y-eeat-content-quality-audit

---
name: eeat-content-quality-audit
description: Systematic content quality audit based on 80 CORE-EEAT standards, evaluating content's GEO (Generative Engine Optimization) and SEO (Search Engine Optimization) potential. Features 8-dimension scoring, weighted total calculation, veto item detection, and priority improvement recommendations. Applicable for pre-publication checks, competitive analysis, and AI citation potential assessment.
---
# EEAT Content Quality Audit
> This skill is based on the CORE-EEAT Content Benchmark, which provides 80 standardized content quality audit criteria.
## Skill Overview
This skill evaluates content quality through 80 standardized criteria across 8 core dimensions. It generates comprehensive audit reports including item-level scores, dimension scores, system scores (GEO/SEO), content-type weighted total scores, veto item detection, and priority action plans.
## Applicable Scenarios
Use this skill when users request:
- Pre-publication content quality checks
- Competitive content analysis
- Assessment of a page's AI citation (GEO) potential
## Core Capabilities
This skill can:
1. **Complete 80-Item Audit**: Score each CORE-EEAT item as Pass/Partial/Fail
2. **Dimension Scoring**: Calculate scores for all 8 dimensions (0-100 points each)
3. **System Scoring**: Calculate GEO score (CORE) and SEO score (EEAT)
4. **Weighted Total Score**: Calculate final score based on content-type specific weights
5. **Veto Item Detection**: Flag critical credibility violations (T04, C01, R10)
6. **Priority Ranking**: Identify top 5 improvement recommendations by impact
7. **Action Plan**: Generate specific, actionable improvement steps
## Content Types
This skill supports the following content types, each with its own dimension weight profile:
- Product Review
- How-To Guide
- Comparison Review
- Landing Page
- Blog Post
- FAQ
- Alternative Recommendation
- Best Recommendation
- User Review
## Usage
### Basic Audit
- "Please audit the quality of the following content: [Content text or URL]"
- "Perform content quality audit on [URL]"
### Specify Content Type
- "Audit this content as a product review: [Content]"
- "Score this tutorial based on 80 criteria: [Content]"
### Comparison Audit
- "Audit the differences between my content and a competitor's: [Your content] vs [Competitor content]"
## Data Input Requirements
**Manual Data Input** (currently recommended):
Ask users to provide:
1. Content text, URL, or file path
2. Content type (if cannot auto-detect): Product Review, How-To Guide, Comparison Review, Landing Page, Blog Post, FAQ, Alternative Recommendation, Best Recommendation, User Review
3. Optional: Competitor content for comparative assessment
**Note**: Explicitly mark in the output which items cannot be fully evaluated due to lack of access (e.g., backlink data, Schema markup, site-level signals).
## Execution Steps
When a user requests a content quality audit, follow these steps:
### Step 1: Audit Preparation
### Audit Preparation
**Content**: [Title or URL]
**Content Type**: [Auto-detected or user-specified]
**Dimension Weights**: [Load from content type weight table]
#### Veto Item Check (Emergency Brake)
| Veto Item | Status | Action |
|-----------|--------|--------|
| T04: Disclosure Statement | ✅ Pass / ⚠️ Triggered | [If triggered: "Immediately add disclosure banner at top of page"] |
| C01: Intent Alignment | ✅ Pass / ⚠️ Triggered | [If triggered: "Rewrite title and first paragraph"] |
| R10: Content Consistency | ✅ Pass / ⚠️ Triggered | [If triggered: "Verify all data before publication"] |

If any veto item is triggered, mark it prominently at the top of the report and recommend immediate action before continuing with the full audit.
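A minimal sketch of this veto gate, assuming item verdicts arrive as a dict of item ID to Pass/Partial/Fail and that a Fail on a veto item counts as triggered; the IDs and remediation messages come from the table above.

```python
# Sketch of the veto-item gate (emergency brake). The `scores` shape
# (item ID -> "Pass" / "Partial" / "Fail") and the Fail-means-triggered
# rule are assumptions; IDs and actions come from the table above.
VETO_ITEMS = {
    "T04": "Immediately add disclosure banner at top of page",
    "C01": "Rewrite title and first paragraph",
    "R10": "Verify all data before publication",
}

def check_vetoes(scores: dict[str, str]) -> list[str]:
    """Return a remediation action for every triggered veto item."""
    return [f"{item}: {action}" for item, action in VETO_ITEMS.items()
            if scores.get(item) == "Fail"]

triggered = check_vetoes({"T04": "Fail", "C01": "Pass", "R10": "Partial"})
if triggered:
    print("⚠️ Veto triggered; fix before the full audit:", *triggered, sep="\n  ")
```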
### Step 2: CORE Audit (40 Items)
Evaluate each item according to standards in references/core-eeat-benchmark.md.
Score each item:
### C — Contextual Clarity
| ID | Check Item | Score | Notes |
|----|------------|-------|-------|
| C01 | Intent Alignment | Pass/Partial/Fail | [Specific observation] |
| C02 | Direct Answer | Pass/Partial/Fail | [Specific observation] |
| ... | ... | ... | ... |
| C10 | Semantic Closure | Pass/Partial/Fail | [Specific observation] |
**C Dimension Score**: [X]/100

Evaluate **O** (Organization), **R** (Referenceability), and **E** (Exclusivity) in the same table format, 10 items per dimension.
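The following sketch shows how a dimension score can be derived from 10 item verdicts, assuming a Pass = 10 / Partial = 5 / Fail = 0 point mapping (an assumption; the example in Step 4 uses finer-grained 0-10 item scores such as A05 = 8):

```python
# Sketch of a dimension score from ten item verdicts. The Pass=10 /
# Partial=5 / Fail=0 mapping is an assumption, not a benchmark rule.
POINTS = {"Pass": 10, "Partial": 5, "Fail": 0}

def dimension_score(verdicts: list[str]) -> float:
    """Dimension Score = (sum of item points) / (items x 10) x 100."""
    points = [POINTS[v] for v in verdicts]
    return sum(points) / (len(points) * 10) * 100

# e.g. C01-C10 with 7 Pass, 2 Partial, 1 Fail -> 80.0
print(dimension_score(["Pass"] * 7 + ["Partial"] * 2 + ["Fail"]))
```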
### Step 3: EEAT Audit (40 Items)
### Exp — Experience
| ID | Check Item | Score | Notes |
|----|------------|-------|-------|
| Exp01 | First-Person Narrative | Pass/Partial/Fail | [Specific observation] |
| ... | ... | ... | ... |
**Exp Dimension Score**: [X]/100

Evaluate **Ept** (Expertise), **A** (Authority), and **T** (Trust) in the same table format, 10 items per dimension.
For the detailed 80-item ID lookup table and instructions on handling site-level items, see references/item-reference.md.
### Step 4: Scoring and Reporting
Calculate scores and generate final report:
## CORE-EEAT Audit Report
### Overview
- **Content**: [Title]
- **Content Type**: [Type]
- **Audit Date**: [Date]
- **Total Score**: [Score]/100 ([Rating])
- **GEO Score**: [Score]/100 | **SEO Score**: [Score]/100
- **Veto Item Status**: ✅ No triggers / ⚠️ [Item] triggered
### Dimension Scores
| Dimension | Score | Rating | Weight | Weighted Score |
|-----------|-------|--------|--------|----------------|
| C — Contextual Clarity | [X]/100 | [Rating] | [X]% | [X] |
| O — Organization | [X]/100 | [Rating] | [X]% | [X] |
| R — Referenceability | [X]/100 | [Rating] | [X]% | [X] |
| E — Exclusivity | [X]/100 | [Rating] | [X]% | [X] |
| Exp — Experience | [X]/100 | [Rating] | [X]% | [X] |
| Ept — Expertise | [X]/100 | [Rating] | [X]% | [X] |
| A — Authority | [X]/100 | [Rating] | [X]% | [X] |
| T — Trust | [X]/100 | [Rating] | [X]% | [X] |
| **Weighted Total Score** | | | | **[X]/100** |
**Score Calculation Formulas**:
- GEO Score = (C + O + R + E) / 4
- SEO Score = (Exp + Ept + A + T) / 4
- Weighted Total Score = Σ (Dimension Score × Content Type Weight)
**Rating Standards**: 90-100 Excellent | 75-89 Good | 60-74 Fair | 40-59 Poor | 0-39 Very Poor
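A worked sketch of these formulas; the dimension scores and weight profile below are hypothetical inputs, not the skill's actual content-type weight table:

```python
# Worked sketch of the scoring formulas. Dimension scores and the weight
# profile are hypothetical inputs (weights must sum to 1.0), not the
# skill's real content-type weight table.
dims = {"C": 85, "O": 78, "R": 70, "E": 92, "Exp": 88, "Ept": 75, "A": 60, "T": 80}
weights = {"C": 0.15, "O": 0.10, "R": 0.10, "E": 0.20,
           "Exp": 0.20, "Ept": 0.10, "A": 0.05, "T": 0.10}

geo = sum(dims[d] for d in ("C", "O", "R", "E")) / 4        # GEO = (C+O+R+E)/4
seo = sum(dims[d] for d in ("Exp", "Ept", "A", "T")) / 4    # SEO = (Exp+Ept+A+T)/4
weighted = sum(score * weights[d] for d, score in dims.items())

def rating(score: float) -> str:
    """Map a 0-100 score onto the rating bands above."""
    for floor, label in ((90, "Excellent"), (75, "Good"), (60, "Fair"), (40, "Poor")):
        if score >= floor:
            return label
    return "Very Poor"

print(f"GEO {geo:.2f} | SEO {seo:.2f} | Weighted {weighted:.2f} ({rating(weighted)})")
# -> GEO 81.25 | SEO 75.75 | Weighted 82.05 (Good)
```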
### Unavailable Item Handling
When an item cannot be evaluated (e.g., A01 backlink profile requires site-level data that is inaccessible):
1. Mark the item as "N/A" and note the reason
2. Exclude N/A items from dimension score calculation
3. Dimension Score = (Sum of scored items) / (Number of scored items × 10) × 100
4. If a dimension has >50% items as N/A, mark that dimension as "Insufficient Data" and exclude from weighted total score
5. Recalculate weighted total score using only dimensions with sufficient data, renormalizing weights to total 100%
**Example**: Authority dimension has 8 N/A items and 2 scored items (A05=8, A07=5):
- Dimension Score = (8+5) / (2 × 10) × 100 = 65
- But 8 of 10 items are N/A (>50%), so mark the dimension as "Insufficient Data (Authority)"
- Exclude A dimension from weighted total; redistribute its weight proportionally to remaining dimensions
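A sketch of the five N/A rules above, using per-item scores of 0-10 with None marking N/A; the weight profile in the usage example is an illustrative assumption:

```python
# Sketch of the N/A rules: per-item scores are 0-10, None marks N/A.
# Dimensions with more than 50% N/A are dropped ("Insufficient Data")
# and their weight is redistributed proportionally (rules 3-5 above).
def dim_score(items: list[float | None]) -> float | None:
    scored = [s for s in items if s is not None]
    if len(scored) < len(items) / 2:                # rule 4: >50% N/A
        return None
    return sum(scored) / (len(scored) * 10) * 100   # rule 3

def weighted_total(scores: dict[str, float | None],
                   weights: dict[str, float]) -> float:
    usable = {d: s for d, s in scores.items() if s is not None}
    w_sum = sum(weights[d] for d in usable)         # rule 5: renormalize
    return sum(s * weights[d] / w_sum for d, s in usable.items())

# The Authority example: eight N/A items plus A05=8 and A07=5.
print(dim_score([None] * 8 + [8, 5]))               # -> None (Insufficient Data)
# A's weight is redistributed; T and Exp renormalize from 0.8 to 1.0.
print(weighted_total({"A": None, "T": 80, "Exp": 90},
                     {"A": 0.2, "T": 0.4, "Exp": 0.4}))   # -> 85.0
```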
### Item-Level Scores
#### CORE — Content Body (40 Items)
| ID | Check Item | Score | Notes |
|----|------------|-------|-------|
| C01 | Intent Alignment | [Pass/Partial/Fail] | [Observation] |
| C02 | Direct Answer | [Pass/Partial/Fail] | [Observation] |
| ... | ... | ... | ... |
#### EEAT — Source Credibility (40 Items)
| ID | Check Item | Score | Notes |
|----|------------|-------|-------|
| Exp01 | First-Person Narrative | [Pass/Partial/Fail] | [Observation] |
| ... | ... | ... | ... |
### Top 5 Priority Improvements
Sorted by impact (dimension weight × points lost), from highest to lowest:
1. **[ID] [Name]** — [Specific improvement suggestion]
- Current Status: [Fail/Partial] | Potential Gain: [X] weighted points
- Action: [Specific steps]
2. **[ID] [Name]** — [Specific improvement suggestion]
- Current Status: [Fail/Partial] | Potential Gain: [X] weighted points
- Action: [Specific steps]
3–5. [Same format]
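A sketch of the ranking rule; the findings below are hypothetical, and the points-lost values assume a Fail loses 10 item points and a Partial loses 5:

```python
# Sketch of the impact ranking. Findings are hypothetical; points_lost
# assumes a Fail loses 10 item points and a Partial loses 5. One item is
# 1/10 of its dimension, so recovered item points translate one-to-one
# into dimension points, which the dimension weight scales into the total.
findings = [
    {"id": "E03", "name": "Original Data", "dim_weight": 0.20, "points_lost": 10},
    {"id": "T04", "name": "Disclosure Statement", "dim_weight": 0.10, "points_lost": 10},
    {"id": "C02", "name": "Direct Answer", "dim_weight": 0.15, "points_lost": 5},
]

def impact(f: dict) -> float:
    return f["dim_weight"] * f["points_lost"]   # weight x points lost

for i, f in enumerate(sorted(findings, key=impact, reverse=True)[:5], 1):
    print(f"{i}. {f['id']} {f['name']}: gain ≈ {impact(f):.2f} weighted points")
```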
### Action Plan
#### Quick Wins (Less than 30 minutes each)
- [ ] [Action 1]
- [ ] [Action 2]
#### Medium Investment (1-2 hours)
- [ ] [Action 3]
- [ ] [Action 4]
#### Strategic (Requires Planning)
- [ ] [Action 5]
- [ ] [Action 6]
### Recommended Next Steps
- Complete content rewrite: Rewrite with CORE-EEAT constraints
- GEO optimization: Optimize for failed GEO-First items
- Content refresh: Focus on weak dimensions
- Technical fixes: Check site-level issues

## Validation Checkpoints
### Input Validation
### Output Validation
## Keys to Success
1. **Start with Veto Items** — T04, C01, and R10 are hard veto items; if any one fails, it overrides the overall evaluation regardless of total score
2. **Focus on High-Weight Dimensions** — Different content types prioritize different dimensions
3. **GEO-First Items are Critical for AI Visibility** — If goal is AI citation, prioritize items marked with GEO 🎯
4. **Some EEAT Items Require Site-Level Data** — Don't penalize content for things only observable at site level (backlinks, brand recognition)
5. **Use Weighted Scores, Not Just Raw Averages** — for a product review, strong exclusivity matters more than strong authority because Exclusivity carries a higher weight for that content type
6. **Re-Audit After Improvements** — Run again to verify score improvements and catch regressions
## Terminology
- **CORE** (Content Quality): Contextual Clarity, Organization, Referenceability, and Exclusivity; these four dimensions drive the GEO score
- **EEAT** (Source Credibility): Experience, Expertise, Authority, and Trust; these four dimensions drive the SEO score
- **GEO** (Generative Engine Optimization): optimizing content for citation by generative AI engines, as distinct from classic SEO
## Reference Documents
- references/core-eeat-benchmark.md: standards for all 80 CORE-EEAT audit items
- references/item-reference.md: 80-item ID lookup table and site-level item handling instructions
## More tools from the same signal band
- Order food/drinks (点餐) on an Android device paired as an OpenClaw node. Uses in-app menu and cart; add goods, view cart, submit order (demo, no real payment).
- Sign plugins, rotate agent credentials without losing identity, and publicly attest to plugin behavior with verifiable claims and authenticated transfers.
- The philosophical layer for AI agents. Maps behavior to Spinoza's 48 affects, calculates persistence scores, and generates geometric self-reports. Give your...