Capability 2 — Recommendations + Execution

The fix path, then the fix.

AIVZ is a Command Center, not a measurement tool. Every scan produces a prioritized fix path — and where the adapter is Live, AIVZ runs the fix itself. Where deeper content work is needed, AIVZ delegates execution via MCP to Claude, GPT, or Gemini.

Stack-ordered prioritization · Native execution on 6 surfaces · Delegated execution via MCP · Confidence labels on every recommendation
Run a Free Scan · See the execution matrix
What you see after the scan

Every fix surfaced with the context to act on it.

A scan output without a fix path is a problem report. AIVZ surfaces fixes alongside the score — every factor that scored below threshold appears in the recommendation queue with the context needed to execute the work or delegate it.

Factor identification

Which of the 93 factors triggered the recommendation.

Confidence label

Established / Strongly Inferred / Indirect-Correlated / Emerging-Experimental.

Stack layer

Layer 1 / 2 / 3 — drives the prioritization order.

Current factor score

The 0–100 score for that specific factor at scan time.

Score-impact estimate

How much the composite score will move when this factor is fixed.

Recommendation text

What to do, in plain language. No vague directional advice.

Direct-execute button

Where the surface adapter is Live (one-click fix).

Generate-instructions

For fixes requiring manual implementation or content work.

Delegate-to-LLM

For content work that should go to Claude / GPT / Gemini via MCP.
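The fields above can be read as one record per recommendation. A minimal sketch in Python, assuming hypothetical field and enum names (AIVZ's actual schema is not published here):

```python
from dataclasses import dataclass
from enum import Enum

class Confidence(Enum):
    ESTABLISHED = "Established"
    STRONGLY_INFERRED = "Strongly Inferred"
    INDIRECT_CORRELATED = "Indirect-Correlated"
    EMERGING_EXPERIMENTAL = "Emerging-Experimental"

class Path(Enum):
    DIRECT_EXECUTE = "direct-execute"                # Live adapter present
    GENERATE_INSTRUCTIONS = "generate-instructions"  # manual implementation
    DELEGATE_TO_LLM = "delegate-to-llm"              # content work via MCP

@dataclass
class Recommendation:
    factor_id: str          # which of the 93 factors triggered it
    confidence: Confidence  # evidence label
    stack_layer: int        # 1 = Access, 2 = Understanding, 3 = Extractability
    factor_score: int       # 0-100 score for the factor at scan time
    score_impact: float     # estimated composite-score lift when fixed
    text: str               # plain-language recommendation
    path: Path              # implementation route

rec = Recommendation(
    factor_id="faq-block-coverage",
    confidence=Confidence.ESTABLISHED,
    stack_layer=3,
    factor_score=41,
    score_impact=4.5,
    text="Add an FAQ block answering the highest-volume questions on this page.",
    path=Path.DELEGATE_TO_LLM,
)
```

Every field maps to one of the nine elements listed above; nothing in the record is free-form except the recommendation text itself.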

What you won't see. AIVZ does not surface vague directional advice ("improve content quality"; "add more structure"). Every recommendation traces to a specific factor with a specific scoring dimension and a specific implementation path. If a recommendation can't be made concrete, it doesn't surface. This is the discipline that separates the recommendation surface from generic AEO consulting output.

Why the order matters

Bottom-up. In dependency order. Always.

The AI Visibility Stack establishes that Layer 1 (Access) precedes Layer 2 (Understanding) precedes Layer 3 (Extractability). The recommendation queue enforces this order — Layer 1 fixes appear above Layer 2 fixes above Layer 3 fixes, regardless of score-impact magnitude.

First

 Layer 1 — Access

Fix crawl access, bot policies, server response, render mode. Without L1, nothing downstream gets read.

Then

 Layer 2 — Understanding

Schema markup, structure, entity signals, schema accuracy, author E-E-A-T. Once read, content needs to parse cleanly.

Last

 Layer 3 — Extractability

FAQ, summary, content richness, content quality, speakable. Once understood, answers need extraction.

Why this is non-negotiable

A Layer 3 fix on a site with broken Layer 1 is wasted work. If AI bots can't crawl the page, beautifully formatted answer blocks don't get cited because they're not getting read. The score-impact of the L3 fix is real — but only after L1 is fixed.

Reordering by score-impact alone would generate fix queues that look maximally productive but produce minimal actual citation lift.

How priority conflicts surface

When score-impact and Stack layer disagree (a high-impact L3 fix and a low-impact L1 fix), the L1 fix takes priority by Stack discipline. AIVZ surfaces this explicitly: the L3 fix is visible but marked "deferred until Layer 1 fixes complete." Practitioners can see the full picture; the order enforces the methodology.
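The ordering rule above reduces to a two-key sort plus a deferral check. A minimal sketch, assuming an illustrative recommendation shape (`layer`, `impact`, `factor` are placeholder field names, not AIVZ's actual schema):

```python
def prioritize(recs):
    """Stack dependency order: Layer 1 before Layer 2 before Layer 3;
    within a layer, descending score-impact estimate."""
    return sorted(recs, key=lambda r: (r["layer"], -r["impact"]))

def deferred(recs):
    """Any fix above the lowest open layer is deferred until that
    layer's fixes complete."""
    lowest = min(r["layer"] for r in recs)
    return [r for r in recs if r["layer"] > lowest]

queue = [
    {"factor": "summary-block", "layer": 3, "impact": 6.0},
    {"factor": "robots-bot-policy", "layer": 1, "impact": 1.5},
    {"factor": "schema-accuracy", "layer": 2, "impact": 3.2},
]

ordered = prioritize(queue)
# robots-bot-policy sorts first despite the smallest impact estimate
```

Note the layer key dominates the impact key: a 1.5-point Layer 1 fix outranks a 6.0-point Layer 3 fix, which is exactly the conflict case described above.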

The Stack methodology behind this prioritization
The "We Do It" matrix

Where the adapter is Live, AIVZ runs the fix.

The execution capability surfaces through platform-specific adapters. Adapter coverage varies by platform; AIVZ discloses status honestly. Live adapters carry full execution claims; Beta and Roadmap adapters disclose their status visibly.

Platform · Status · What AIVZ executes natively
WordPress Live Schema markup (Organization, Article, FAQPage, Speakable, Person, HowTo, Product); meta descriptions; FAQ blocks; summary blocks; llms.txt manifests; robots.txt updates; sitemap optimization; structured data validation; AEO score widgets; content rewrites for L3 factors; per-page recommendations panel in editor.
Shopify Beta Product schema; collection schema; FAQ blocks for product detail pages; meta description optimization; structured data validation. (Subset of WordPress capability; deeper coverage on roadmap.)
Wix Beta Schema markup; meta descriptions; structured content blocks via the Wix Studio integration. (Subset.)
Webflow Beta Schema markup via embedded JSON-LD; FAQ blocks via component substitution; meta description optimization. (Subset.)
BigCommerce Beta Product/category schema; meta description optimization; FAQ blocks for product pages. (Subset.)
Headless / Custom Beta Via MCP server + CLI integration; bring-your-own-stack execution path. Output is structured-data updates, content recommendations, and orchestration calls — applied by the consuming application.
Squarespace Roadmap Native adapter planned; not yet shipped. Recommendations available; native execution not yet operational.
Marketplaces Documented Marketplace-listing optimization framework documented; native adapter build pending. Recommendations surface; execution requires manual implementation.

The "We Do It" claim grammar is inviolable canon. AIVZ does not claim execution capability on a platform where the adapter is Beta — it discloses Beta status explicitly and surfaces the actual capability scope. This protects the credibility of every other claim AIVZ makes. When AIVZ says "Live" on WordPress, that means full execution; when AIVZ says "Beta" on Shopify, the disclosure is part of the claim.
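The claim grammar implies a gating rule: direct-execute surfaces only where the adapter is Live, and everything else falls back to instructions or delegation. A sketch under stated assumptions (the status map is an illustrative snapshot of the matrix above; function and key names are hypothetical):

```python
ADAPTER_STATUS = {  # illustrative snapshot of the matrix above
    "wordpress": "live",
    "shopify": "beta",
    "squarespace": "roadmap",
}

def execution_path(platform, needs_content_work=False):
    """Map adapter status to the implementation path a fix receives."""
    if needs_content_work:
        return "delegate-to-llm"           # content work routes via MCP
    status = ADAPTER_STATUS.get(platform, "none")
    if status == "live":
        return "direct-execute"            # full execution claim
    # Beta, Roadmap, or no adapter: disclose status, hand over instructions
    return "generate-instructions"
```

The fallback branch is the point: an unknown or non-Live platform never silently receives an execution claim.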

Full platform integration directory
Beyond native scope

For longer content, AIVZ delegates.

Native execution covers structural fixes — schema, FAQ blocks, meta descriptions, summary blocks. Some AEO work requires deeper content production: long-form rewrites, brand-voice adjustments, complex content reorganization. AIVZ delegates this work via MCP server integration to your LLM of choice — Claude, GPT, or Gemini.

01

Identify

AIVZ identifies a content-rewrite or brand-voice fix triggered by a specific factor failure.

02

Package

The AEO factor, the original content, target output structure, brand voice constraints, style guide constraints.

03

Delegate via MCP

The package goes to your configured LLM (Claude / GPT / Gemini) via the MCP protocol.

04

Execute

The LLM produces the rewrite or content based on the constraints you supplied.

05

Verify

AIVZ validates the LLM output against the original AEO factor that triggered the work.

06

Approve

If the output passes validation, it's surfaced for user approval. If not, it loops back.
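The six stages above form a loop between delegation and verification. A minimal sketch, assuming toy stand-ins for the MCP client and the factor validator (names and the retry limit are illustrative, not AIVZ's implementation):

```python
def delegate_fix(package, llm_call, validate, max_attempts=3):
    """Delegation loop: send the packaged fix to the configured LLM,
    validate the output against the triggering factor, and loop back
    on failure. Returns a draft awaiting user approval, or None."""
    for _ in range(max_attempts):
        draft = llm_call(package)            # MCP call to Claude / GPT / Gemini
        if validate(draft, package["factor"]):
            return draft                     # surfaced for user approval
        package["previous_attempt"] = draft  # feed the failure back in
    return None                              # escalate after repeated failures

# Toy stand-ins for illustration only:
package = {"factor": "summary-block", "content": "draft text", "voice": "concise"}
fake_llm = lambda p: {"text": "Summary: draft text", "factor": p["factor"]}
fake_validate = lambda draft, factor: draft["factor"] == factor

draft = delegate_fix(package, fake_llm, fake_validate)
```

The key design point from the stages above: validation gates approval, so an LLM output never reaches the user without being checked against the factor that triggered the work.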

What gets delegated
  • Long-form content rewrites for L3 (Extractability) improvements
  • Brand-voice adjustments where deterministic suggestions need editorial polish
  • Content reorganization where the current structure has compounded issues
  • Per-platform content variants (voice-optimized vs. text-optimized)
Why MCP, not embedded LLM
  • User control over the model — your team picks the LLM based on existing licensing, security review, and quality preference
  • Brand-voice fidelity — your context lives with your LLM, not with AIVZ
  • Cost transparency — LLM tokens billed by your model provider, not by AIVZ
Discipline boundaries

Three things AIVZ won't generate. Even on request.

Capable doesn't mean unbounded. AIVZ explicitly refuses three categories of content work — even when the underlying scan suggests they'd technically improve scoring on some factor. These aren't capability gaps; they're discipline boundaries.

01

Social media post generation

AEO is structural-content optimization for AI citation. Social posts (LinkedIn, Twitter/X, Instagram, Facebook) are an adjacent content category with different optimization mechanics, different platform behaviors, and different brand-voice requirements. AIVZ doesn't cross into social-content work.

If you need social content with AEO awareness, dedicated social-content tools (Buffer, Hootsuite, Sprout Social, Later) are the right surface. AIVZ stays in its lane.

02

Fabricated statistics

A scan might note Statistics with Sources is failing — but the fix is to add real, attributed statistics, not to generate plausible-sounding numbers. AIVZ refuses to generate statistics, percentages, or quantitative claims even when delegated execution would technically produce them.

AI systems are increasingly able to detect fabricated stats. Even when fabricated stats temporarily improve a factor score, they degrade long-term citation likelihood (and credibility) once detected.

03

Pure ad copy

Marketing copy where the goal is sales conversion (paid ad headlines, landing-page sales prose, email-sequence pitch copy) is a distinct discipline. AEO factor optimization affects citation behavior; ad copy optimization affects conversion behavior. The two don't cleanly overlap.

If you need ad copy with AEO awareness in the underlying landing page, the AEO work happens on the landing page. The ad copy stays with copywriters.

Scan to execution

What the workflow actually looks like.

The end-to-end workflow from scan trigger to fix verification runs through six stages. Most users execute the workflow per-page; agencies and enterprises run it across batches of pages or entire domains.

01

Scan triggered

User initiates scan via dashboard, scheduled scan, API call, MCP request, or CLI invocation. Scan runs against a single URL or batched URLs.

02

Measurement complete

Scanner produces composite AI Visibility Score, three-layer breakdown, all 93 factor scores, six per-platform readiness scores, and (at Agency tier+) Authority Rank. Confidence labels attached.

03

Recommendations generated

Failed factors trigger recommendations. Each includes factor identity, confidence label, Stack layer, score-impact estimate, recommendation text, and implementation path. Queue sorted in Stack dependency order.

04

Fix execution

Live adapter present → direct-execute. Beta adapter or manual fix → generate-instructions. Content work → delegate-to-LLM via MCP.

05

Verification

AIVZ re-scans the affected URL(s) to verify the score-impact estimate matches actual outcome. Confirmed fixes move to the completed queue.

06

Citation tracking (Pro tier+)

Citation Event Monitoring continues running across the AI platforms. As citation events are detected on the fixed URLs, they surface in the dashboard.
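Stage 5 can be sketched as a comparison of actual factor movement against the score-impact estimates attached at stage 3. A minimal illustration (the tolerance value and function shape are assumptions, not AIVZ's actual verification logic):

```python
def verify(before, after, estimates, tolerance=0.5):
    """Compare re-scan deltas against score-impact estimates.
    Factors that moved as predicted are confirmed; shortfalls are flagged."""
    confirmed, flagged = [], []
    for factor, estimated in estimates.items():
        actual = after[factor] - before[factor]
        (confirmed if actual >= estimated - tolerance else flagged).append(factor)
    return confirmed, flagged

before = {"faq-block": 40, "schema-accuracy": 70}
after = {"faq-block": 52, "schema-accuracy": 71}
estimates = {"faq-block": 10.0, "schema-accuracy": 3.0}

confirmed, flagged = verify(before, after, estimates)
# faq-block confirmed (+12 vs +10 estimated); schema-accuracy flagged (+1 vs +3)
```

Confirmed fixes move to the completed queue; flagged fixes surface for re-work rather than silently counting as done.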

Score impact, before and after
Before fixes: 75 · AI Extractable
After fixes applied: 93 · AI Authority
+18 composite · 5 fixes shipped · 42 minutes
What's available where

Recommendations + execution scope by tier.

Capability · Free · Pro · Agency · Enterprise
Recommendations queue (per scan) · High-impact subset · Full · Full · Full
Stack-ordered prioritization
Score-impact estimates
Native execution: WordPress
Native execution: Beta surfaces
Delegated execution via MCP
Brand-voice prompt configuration
Auto-execute scheduled fixes
Verification re-scans
Bulk fix execution (cross-page)
Custom adapter development
Full tier pricing
Ready when you are

Run a scan. See the recommendations. Execute the fixes.

Top fixes prioritized in Stack order. Direct-execute buttons where the WordPress adapter is Live. Generate-instructions buttons for everything else.

Enter your domain
Free · No signup · Results in 60 seconds

Or see the white-label capability.
