# Crazy Egg Real Buyer Validation Plan v1

Date: 2026-05-10

## Purpose

Validate the exact homepage headline and subhead with real or high-fidelity target buyers before locking the new Crazy Egg positioning.

Current recommendation to test:

> **Find what stops visitors from converting. Know what to test next.**

Subhead:

> **Crazy Egg turns heatmaps, recordings, surveys, and analytics into guided tasks, audience insights, and conversion tests your team can launch.**

---

## What we need to learn

1. Do buyers understand what Crazy Egg does in 5 seconds?
2. Do they understand why this is different from Clarity, Hotjar, GA4, VWO, Optimizely, and AI page builders?
3. Does the phrase "know what to test next" create enough urgency?
4. Does the subhead feel credible or too crowded?
5. Does "conversion tests" sound accessible or too advanced?
6. Which buyer segment responds strongest?
7. What proof do buyers need before believing the promise?

---

## Candidate messages

### A - Recommended v3

Headline:

> Find what stops visitors from converting. Know what to test next.

Subhead:

> Crazy Egg turns heatmaps, recordings, surveys, and analytics into guided tasks, audience insights, and conversion tests your team can launch.

### B - Punchier

Headline:

> Find what stops visitors from converting. Test what fixes it.

Subhead:

> Crazy Egg shows where visitors get stuck, guides your next task, and helps you turn real behavior into conversion tests.

### C - Growth/agency

Headline:

> Turn visitor behavior into conversion tests you can launch.

Subhead:

> Use clicks, scrolls, recordings, surveys, and analytics to find friction, understand who it affects, and launch better page tests.

### D - Heritage bridge

Headline:

> Turn clicks, scrolls, and recordings into tested improvements.

Subhead:

> Crazy Egg helps you turn visitor behavior into guided tasks, test ideas, and A/B tests for the pages you already have.

### E - Tasks-forward

Headline:

> See what is costing conversions. Get guided tasks to fix it.

Subhead:

> Crazy Egg turns website behavior into step-by-step tasks that help your team find friction, prioritize improvements, and test what works.

---

## Target respondents

Minimum n = 20; aim for n = 40.

### Primary segments

- 5 Growth / demand gen leaders
- 5 Ecommerce marketers
- 5 Agency / CRO consultants
- 5 Founder / operator / SMB owners

### Secondary segments if possible

- 3 Product managers
- 3 UX researchers
- 3 Web designers/developers
- 3 Enterprise analytics / experimentation leads

---

## Test format

### Step 1: Five-second comprehension

Show headline + subhead for 5 seconds.

Ask:

1. What do you think this product does?
2. Who do you think it is for?
3. What problem does it solve?
4. What would you expect to happen after signing up?

Score:

- 0 = wrong interpretation
- 1 = partially correct
- 2 = correct enough
- 3 = clear and specific

### Step 2: Differentiation

Ask:

1. How is this different from GA4 or Microsoft Clarity?
2. How is this different from Hotjar?
3. How is this different from VWO or Optimizely?
4. How is this different from ChatGPT or an AI page builder?

Score:

- 0 = no differentiation
- 1 = vague difference
- 2 = clear difference
- 3 = strong, memorable difference

### Step 3: Relevance and urgency

Ask:

1. When would you need this?
2. What pain does this speak to?
3. How urgent does this feel?
4. Would this make you more likely to try Crazy Egg?

Score each 1-10.

### Step 4: Credibility

Ask:

1. What sounds believable?
2. What sounds overpromised?
3. What proof would you need?
4. Which phrase makes you skeptical?

Mark objections:

- AI skepticism
- A/B testing credibility
- free tools objection
- unclear implementation
- unclear audience/targeting
- too broad
- too advanced
- not differentiated

### Step 5: Forced choice

Show all five variants in randomized order.

Ask:

1. Which is clearest?
2. Which is most compelling?
3. Which feels most different?
4. Which would you click?
5. Which would you forward to your team?
6. Which sounds least believable?

---

## Decision criteria

A headline wins if:

1. 70%+ can explain the product accurately after 5 seconds.
2. 60%+ can name a meaningful difference from free analytics or Hotjar.
3. Average urgency score is 7+ among primary buyers.
4. Skepticism is fixable with proof, not fatal.
5. It performs across at least 3 primary segments.

If no headline clears this bar, take the variant with the strongest comprehension scores and rewrite it for sharper differentiation.
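Once scores are in, the five criteria above can be checked mechanically. The sketch below assumes one record per respondent with the fields shown in the comment; the record shape and segment labels are illustrative, not a prescribed data format. Thresholds mirror the criteria list.

```python
# Check whether a headline variant clears the decision bar.
# Assumed respondent record (illustrative):
#   {"segment": "growth", "comprehension": 0-3, "differentiation": 0-3,
#    "urgency": 1-10, "fatal_objection": bool}

PRIMARY_SEGMENTS = {"growth", "ecommerce", "agency", "founder"}

def clears_bar(respondents):
    primary = [r for r in respondents if r["segment"] in PRIMARY_SEGMENTS]
    if not primary:
        return False

    # 1. 70%+ explain the product accurately (comprehension score 2 or 3).
    comp_rate = sum(r["comprehension"] >= 2 for r in primary) / len(primary)

    # 2. 60%+ name a meaningful difference (differentiation score 2 or 3).
    diff_rate = sum(r["differentiation"] >= 2 for r in primary) / len(primary)

    # 3. Average urgency score 7+ among primary buyers.
    avg_urgency = sum(r["urgency"] for r in primary) / len(primary)

    # 4. Skepticism must be fixable with proof, not fatal.
    fatal = any(r["fatal_objection"] for r in primary)

    # 5. Performs in at least 3 primary segments (avg urgency 7+ per segment).
    strong_segments = sum(
        1 for seg in PRIMARY_SEGMENTS
        if (scores := [r["urgency"] for r in primary if r["segment"] == seg])
        and sum(scores) / len(scores) >= 7
    )

    return (comp_rate >= 0.70 and diff_rate >= 0.60
            and avg_urgency >= 7 and not fatal and strong_segments >= 3)
```

Running this per variant after each round makes "lock / revise / retest" a data call rather than a judgment call.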

---

## Expected outcomes

### If A wins

Lock v3 homepage direction.

### If B wins

Use punchier headline, but legal/product review must approve "fixes it."

### If C wins

Shift homepage toward growth/agency buyer and make experimentation more explicit.

### If D wins

Use heritage bridge to transition old Crazy Egg perception into new workflow.

### If E wins

Make Tasks the hero and use "conversion tests" as the mechanism below.

---

## What to collect verbatim

Capture exact buyer language around:

- conversion loss
- not knowing what to change
- too many tools
- free analytics limits
- AI trust or distrust
- testing anxiety
- audience/segment language
- page change implementation
- reporting/buy-in

This language should feed directly into homepage copy.

---

## Output template

Save results as:

`06-message-tests/real-buyer-validation-results-YYYY-MM-DD.md`

Include:

- participant segment summary
- comprehension scores
- differentiation scores
- urgency scores
- forced choice winners
- objections by segment
- proof requirements
- recommended copy changes
- decision: lock / revise / retest
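A minimal skeleton for that results file, with sections matching the list above (exact headings are a suggestion):

```markdown
# Real Buyer Validation Results - YYYY-MM-DD

## Participant segment summary
## Comprehension scores
## Differentiation scores
## Urgency scores
## Forced choice winners
## Objections by segment
## Proof requirements
## Recommended copy changes
## Decision: lock / revise / retest
```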
