# Crazy Egg Product Proof Audit Plan v1

Date: 2026-05-10

## Purpose

Define exactly what product proof is needed to make the new Crazy Egg positioning shippable.

Current positioning:

> **Find what stops visitors from converting. Know what to test next.**

Current safe subhead:

> Crazy Egg helps your team turn heatmaps, recordings, surveys, and analytics into guided recommendations and conversion test ideas.

Current strong subhead:

> Crazy Egg turns heatmaps, recordings, surveys, and analytics into guided tasks, audience insights, and conversion tests your team can launch.

This audit answers one question:

> Can the product prove the strong version today, or should public launch use the safe version?

---

## Executive recommendation

Run the audit against real product UI before shipping the stronger homepage.

Until the audit passes, use the safe subhead.

The homepage can still be strategically strong without overclaiming. The danger is not conservatism. The danger is promising a connected workflow on the page, then showing disconnected tools in the product.

---

## Pass/fail standard

### Strong positioning can ship if product proof shows:

1. Behavior evidence that reveals a conversion problem.
2. A Task or recommendation connected to that evidence.
3. An Audience or visitor group showing who is affected.
4. An A/B test idea, setup, or result connected to the recommendation.
5. A clear learning loop after the test.

If 1-3 pass but 4-5 are weak, use safe homepage copy.

If 1-4 pass, use strong homepage copy.

If all 5 pass, the roadmap can start moving toward the future category-leader message.
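As a sketch, the pass/fail standard above can be expressed as a simple decision rule. The function name and the fallback for missing core evidence are illustrative additions, not part of the plan:

```python
# Sketch of the pass/fail standard as a decision rule.
# Inputs are booleans for the five proof points, in order:
# 1 behavior evidence, 2 task/recommendation, 3 audience,
# 4 A/B test connection, 5 learning loop.

def homepage_copy(proofs: list[bool]) -> str:
    """Map the five audit results to the copy tier named in this plan."""
    p1, p2, p3, p4, p5 = proofs
    if all(proofs):
        return "strong copy; roadmap moves toward category-leader message"
    if p1 and p2 and p3 and p4:
        return "strong homepage copy"
    if p1 and p2 and p3:
        return "safe homepage copy"
    # Not specified in the plan: if core evidence fails, stay safe
    # and treat the gaps as product work.
    return "safe homepage copy; fix product proof first"
```

The point of writing it down this way is that the tiers are ordered: each stronger claim requires every weaker proof point to hold first.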

---

# Audit Checklist

## 1. Behavior evidence proof

### Question

Can Crazy Egg clearly show where visitors get stuck before conversion?

### Evidence to collect

- Heatmap screenshot.
- Scrollmap screenshot.
- Recording screenshot or clip.
- Survey response screenshot.
- Analytics/conversion goal screenshot.

### Best proof example

> Mobile visitors reach pricing details but rarely scroll to the trial CTA.

### Pass criteria

- Evidence is easy to understand in less than 10 seconds.
- Evidence is connected to a conversion goal or conversion moment.
- Evidence shows a specific page problem, not just generic activity.

### Fail signals

- Screenshot looks like a generic heatmap with no interpretation.
- No conversion context.
- Viewer has to infer the issue without help.

### Homepage implication

If pass:

> See where visitors get stuck.

If weak:

> See how visitors interact with your pages.

---

## 2. Task / recommendation proof

### Question

Can Crazy Egg turn behavior evidence into a next step?

### Evidence to collect

- Task dashboard screenshot.
- Task detail screenshot.
- Recommendation screenshot.
- Any UI linking source evidence to suggested action.

### Best proof example

> Task: Test a sticky mobile CTA on pricing.
>
> Source: Scrollmap shows most mobile visitors never reach the current CTA.

### Pass criteria

- Task or recommendation has a source.
- Task or recommendation explains the finding.
- Task or recommendation suggests an action.
- User can understand why this next step exists.

### Fail signals

- Tasks are mostly onboarding/setup.
- Recommendations are generic.
- No link to evidence.
- No suggested test idea.

### Homepage implication

If pass:

> Get guided Tasks from real visitor behavior.

If weak:

> Turn behavior evidence into recommended next steps.

If fail:

> Use behavior evidence to prioritize what to review next.

---

## 3. Audience proof

### Question

Can Crazy Egg show which visitor group is affected by a finding?

### Evidence to collect

- Audience builder screenshot.
- Saved Audience screenshot.
- Filtered heatmap/recording screenshot.
- Segment comparison screenshot.
- Targeting setup screenshot if available.

### Best proof example

> Mobile visitors from paid search who viewed pricing but did not start a trial.

### Pass criteria

- Audience can be described in plain English.
- Audience is tied to behavior, source, device, path, conversion activity, survey response, or similar.
- Audience can be used for analysis, survey targeting, test targeting, or personalization.

### Fail signals

- Audience is just a technical filter.
- No clear use after creating the segment.
- No connection to test or recommendation.

### Homepage implication

If pass:

> See which visitor groups are affected.

If strong:

> Use Audiences to target surveys and A/B tests to the right visitors.

If weak:

> Filter behavior by visitor group.

---

## 4. A/B testing proof

### Question

Can Crazy Egg help users validate the next test idea?

### Evidence to collect

- A/B test setup screenshot.
- Control/variant screenshot.
- Goal setup screenshot.
- Audience/traffic allocation screenshot if available.
- Results screenshot.

### Best proof example

> Test: Sticky CTA vs current pricing page for mobile visitors.

### Pass criteria

- User can see control and variant.
- User can connect test to a goal.
- User can see status/results/learning.
- Test feels approachable for marketers.

### Fail signals

- A/B testing looks disconnected from behavior insights.
- Test setup feels technical or buried.
- No results/learning view.
- No connection to recommendation.

### Homepage implication

If pass:

> Use A/B testing to learn what works.

If connected to recommendations:

> Turn recommendations into conversion tests.

If weak:

> Validate your best ideas with A/B testing.

---

## 5. AI proof

### Question

Can AI be shown as evidence-grounded instead of generic?

### Evidence to collect

- AI summary screenshot.
- AI recommendation screenshot.
- AI test idea screenshot.
- AI variation draft screenshot if available.
- Source evidence citations in AI output.

### Best proof example

> Based on heatmap and scrollmap data, mobile visitors rarely reach the CTA. Test a sticky CTA or move the CTA higher on the page.

### Pass criteria

- AI output references source evidence.
- Recommendation is specific to the page and visitor behavior.
- Output is editable or actionable.
- Output does not look like generic CRO advice.

### Fail signals

- AI sounds like ChatGPT without context.
- No source evidence.
- No tie to visitor behavior.
- Overconfident language like "this will fix conversion."

### Homepage implication

If pass:

> AI helps surface test ideas from real visitor behavior.

If very strong:

> AI helps draft page variations grounded in real visitor behavior.

If weak:

Do not mention AI in the hero.

---

## 6. Workflow proof

### Question

Does the product feel like one workflow or a collection of tools?

### Evidence to collect

- End-to-end product path.
- Navigation screenshots.
- Links between evidence, Tasks, Audiences, recommendations, and tests.
- Any dashboard or project view that connects steps.

### Best proof path

1. Open page report.
2. See conversion friction.
3. Open recommendation or Task.
4. Identify affected Audience.
5. Create or plan test.
6. Review result or next step.

### Pass criteria

- User can move naturally from evidence to next action.
- The workflow does not require a spreadsheet.
- The product language matches the marketing language.
- The demo can be explained in under 90 seconds.

### Fail signals

- User must jump between disconnected product areas.
- No object connects evidence to action.
- Tasks, Audiences, and A/B testing feel like separate tabs with no strategic relationship.

### Homepage implication

If pass:

> Crazy Egg turns visitor behavior into guided tasks, audience insights, and conversion tests.

If weak:

> Crazy Egg helps your team turn visitor behavior into guided recommendations and test ideas.

---

# Screenshot Shot List

## Must capture

1. Behavior evidence with interpretation.
2. Task or recommendation with source evidence.
3. Audience or visitor group.
4. A/B test setup or result.
5. End-to-end workflow composite.

## Nice to capture

6. Survey response connected to recommendation.
7. Analytics/conversion goal connected to behavior.
8. AI recommendation with evidence citation.
9. Pricing/plan proof for homepage or pricing page claims.
10. Team/collaboration proof if relevant.

---

# Recommended demo story

Use one page and one conversion problem throughout.

## Demo page

A pricing, signup, product detail, or landing page with meaningful conversion intent.

## Story

### 1. The problem

> This page gets traffic, but visitors are not starting the trial.

### 2. Behavior evidence

> Heatmap and scrollmap show mobile visitors engage with plan details but rarely reach the trial CTA.

### 3. Recommendation

> Crazy Egg recommends testing a sticky mobile CTA or moving the CTA higher on the page.

### 4. Audience

> The issue is strongest for mobile visitors from paid search.

### 5. Test

> Create an A/B test for sticky CTA vs current page.

### 6. Learning

> Review which version improves trial starts, then use the result to decide the next task.

## Why this story works

It makes the product loop concrete:

> evidence -> recommendation -> audience -> test -> learning

It also supports the positioning without overclaiming:

> know what to test next

---

# Product proof scorecard

Score each area 0-3.

## 0 = absent

Feature or proof is missing.

## 1 = present but weak

Feature exists, but does not support the positioning clearly.

## 2 = usable

Feature supports the positioning with some explanation.

## 3 = strong

Feature proves the positioning quickly and visually.

| Area | Score | Notes | Homepage implication |
|---|---:|---|---|
| Behavior evidence | TBD | TBD | TBD |
| Task/recommendation | TBD | TBD | TBD |
| Audience proof | TBD | TBD | TBD |
| A/B testing proof | TBD | TBD | TBD |
| AI proof | TBD | TBD | TBD |
| Workflow continuity | TBD | TBD | TBD |
| Pricing proof | TBD | TBD | TBD |

## Scoring interpretation

- 18-21: Strong positioning can ship.
- 13-17: Use safe homepage copy, with strong product-page sections where proof exists.
- 8-12: Keep the positioning internal; public copy should be softer.
- 0-7: Product does not yet support the new category story.

---

# What to do with audit results

## If proof is strong

Use homepage Variant B:

> Crazy Egg turns heatmaps, recordings, surveys, and analytics into guided tasks, audience insights, and conversion tests your team can launch.

Design the hero around the full workflow.

## If proof is mixed

Use homepage Variant A:

> Crazy Egg helps your team turn heatmaps, recordings, surveys, and analytics into guided recommendations and conversion test ideas.

Show the stronger claims lower on the page only where screenshots support them.

## If proof is weak

Keep the main headline:

> Find what stops visitors from converting. Know what to test next.

But soften the body copy:

> Crazy Egg helps teams understand visitor behavior and prioritize what to test next.

Then make product roadmap fixes before relaunching the stronger story.

---

## Final recommendation

Do the product proof audit before shipping the strong homepage.

The positioning is right. The only remaining risk is proof quality.

The page should not say the product is a connected behavior-to-experiment workflow unless the product experience, screenshots, and demo story make that obvious.

If the proof is there, go strong.

If not, ship the safe version now and use the audit to guide the product and design work needed to earn the stronger message.
