
Crazy Egg Product Readiness Review v1

Date: 2026-05-10

Purpose

This review pressure-tests the current Crazy Egg positioning against product truth.

The goal is not to weaken the strategy. The goal is to make sure the public message is ambitious, believable, and provable.

Current recommended homepage:

Find what stops visitors from converting. Know what to test next.

Current strongest subhead:

Crazy Egg turns heatmaps, recordings, surveys, and analytics into guided tasks, audience insights, and conversion tests your team can launch.

Internal positioning:

Crazy Egg is the behavior-to-experiment workflow for marketers optimizing existing website pages.

Pricing/comparison positioning:

The conversion workflow between free analytics and enterprise experimentation.


Executive verdict

The positioning is strategically right.

But the launch copy must be tiered by proof.

The safest public promise is:

Find what stops visitors from converting. Know what to test next.

The strongest public promise is:

Crazy Egg turns visitor behavior into guided tasks, audience insights, and conversion tests your team can launch.

The difference between safe and strong is not copy. It is product evidence.

If the homepage can show a believable flow from behavior evidence to Task to Audience to A/B test, the stronger subhead is usable.

If that flow is not visible yet, use the safer subhead:

Crazy Egg helps your team turn heatmaps, recordings, surveys, and analytics into guided recommendations and conversion test ideas.

Do not publicly lead with AI drafting, personalization, or "test the fix" until the product proof is unmistakable.


The strategic tension

Crazy Egg needs to escape the old category:

heatmaps and recordings

But it cannot leap so far ahead of product proof that skeptical buyers feel the message is inflated.

The right move is a controlled category expansion:

behavior analytics -> behavior-led conversion workflow -> behavior-to-experiment operating system

That means the homepage can imply the future, but it must prove the present.


Readiness categories

Green - safe now

Claims that are likely defensible if the feature exists in the current product and can be shown with a screenshot or short demo.

Yellow - safe with proof or softer language

Claims that are strategically useful but require visible support. Use softer language if proof is thin.

Red - do not use publicly yet

Claims that may be true directionally but create trust debt unless the product experience is shipped, obvious, and repeatable.


Claim readiness matrix

| Claim | Readiness | Risk | Safer public version | Proof required |
| --- | --- | --- | --- | --- |
| Find what stops visitors from converting | Yellow | Can imply definitive causality | Find where visitors get stuck before they convert | Heatmap, recording, survey, or analytics tied to a conversion moment |
| Know what to test next | Green/Yellow | Needs recommendation proof | Get evidence-backed ideas for what to test next | Task or recommendation screenshot |
| Turns heatmaps, recordings, surveys, and analytics into guided tasks | Yellow | Requires Tasks to be connected to evidence, not just a separate checklist | Use heatmaps, recordings, surveys, and analytics to guide your next task | Task with linked source evidence |
| Audience insights | Yellow | Audiences may sound abstract or like personalization jargon | Understand which visitors are affected | Audience/filter UI, segment summary, targeting example |
| Conversion tests your team can launch | Yellow/Red | Implies a smooth handoff from insight to test | Conversion test ideas your team can launch | A/B test creation flow tied to recommendation |
| AI recommendations grounded in behavior | Yellow | Trust risk if AI output looks generic | AI helps surface test ideas from real visitor behavior | Recommendation with cited behavior evidence |
| AI drafts page fixes | Red unless shipped | Sounds like an AI page builder and overpromises creation quality | AI helps draft test ideas grounded in behavior | Editable variation draft with source evidence |
| Behavior-to-experiment workflow | Yellow | Needs connected flow, not just feature list | A guided workflow from visitor behavior to test ideas | End-to-end product demo |
| Tasks guides the work | Yellow | Needs Tasks to be useful after onboarding | Tasks guides setup, analysis, and next actions | Tasks dashboard and task detail view |
| Audiences shows who it affects | Yellow | Needs cross-feature audience reuse | See which visitor groups are affected | Audience summary connected to evidence |
| A/B testing helps learn what works | Green/Yellow | Avoid implying guaranteed lift or statistical rigor beyond product support | Use A/B testing to learn what works before permanent changes | Goal, variant, traffic, and results UI |
| The conversion workflow between free analytics and enterprise experimentation | Green | Strategic category claim needs explanation | The practical workflow between free analytics and enterprise experimentation | Comparison page and workflow visual |
| No overages / unlimited domains / unlimited team members | Unknown | Pricing trust risk | Only use verified plan language | Current pricing confirmation |

The copy decision tree

Question 1: Can the product show a Task created from or clearly tied to behavior evidence?

If yes, use:

guided tasks

If no, use:

guided recommendations


Question 2: Can the product show who is affected by a finding?

If yes, use:

audience insights

If no, use:

visitor segments

or:

which visitors are affected


Question 3: Can the product turn a recommendation into an A/B test flow?

If yes, use:

conversion tests your team can launch

If no, use:

conversion test ideas

or:

ideas your team can validate with A/B testing


Question 4: Can AI produce credible, editable page/test variations from evidence?

If yes, use:

AI helps draft page variations grounded in real visitor behavior

If no, use:

AI helps surface recommendations from real visitor behavior


Question 5: Can pricing claims be verified from current plans?

If yes, use the specific pricing proof.

If no, avoid unverified plan claims such as no overages, unlimited domains, or unlimited team members.
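The five questions above are effectively a lookup from product proof to copy. A minimal sketch of that decision tree, assuming four boolean capability flags (the flag and function names are illustrative, not product API):

```python
# Illustrative sketch: map product-proof answers (Questions 1-4) to the
# safer or stronger phrasing. All names are hypothetical.

def pick_copy(evidence_backed_tasks: bool,
              audience_visibility: bool,
              test_handoff: bool,
              ai_drafts_variants: bool) -> dict:
    """Return the launch-safe phrase for each claim given current proof."""
    return {
        "tasks": "guided tasks" if evidence_backed_tasks
                 else "guided recommendations",
        "audiences": "audience insights" if audience_visibility
                     else "which visitors are affected",
        "tests": "conversion tests your team can launch" if test_handoff
                 else "conversion test ideas",
        "ai": ("AI helps draft page variations grounded in real visitor behavior"
               if ai_drafts_variants
               else "AI helps surface recommendations from real visitor behavior"),
    }

# With no proof shipped yet, every claim falls back to the safer phrasing.
print(pick_copy(False, False, False, False)["tasks"])
```

The point of the sketch: copy tier is a function of shipped proof, never of ambition.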


Tier 1: safest public launch

Use when the product proof is incomplete or the connected workflow is not visually obvious yet.

Hero

Find what stops visitors from converting. Know what to test next.

Subhead

Crazy Egg helps your team turn heatmaps, recordings, surveys, and analytics into guided recommendations and conversion test ideas.

Workflow line

See where visitors get stuck. Understand who is affected. Get the next recommended task. Use A/B testing to learn what works.

Why this works

Risk

Slightly less sharp than the strongest version.

Use if


Tier 2: strong public launch

Use when Tasks, Audiences, and A/B testing can be shown as a connected workflow.

Hero

Find what stops visitors from converting. Know what to test next.

Subhead

Crazy Egg turns heatmaps, recordings, surveys, and analytics into guided tasks, audience insights, and conversion tests your team can launch.

Workflow line

Find the friction. Understand who it affects. Get the next task. Launch the test. Learn what works.

Why this works

Required proof

Risk

If the product demo looks disconnected, buyers will feel the message is ahead of the product.


Tier 3: future category leadership

Use only when AI drafting and personalization are production-ready and strong.

Hero

Find what stops visitors from converting. Test what fixes it.

Subhead

Crazy Egg uses real visitor behavior to guide your next task, draft page variations, target the right audience, and test what improves conversion.

Workflow line

See the friction. Draft the better version. Target the right visitors. Test and learn.

Why this works

Required proof

Risk

High if used too early. This version can sound like magic unless the demo is excellent.


Product proof checklist

Before shipping the full positioning, the homepage, product page, or demo should include the following proof.

Must-have proof

1. Behavior evidence

Show one clear conversion issue from a heatmap, recording, survey, or analytics view.

Best example:

Mobile visitors are not reaching the pricing CTA.

Why it matters:

This anchors the whole story in real behavior, not generic optimization advice.


2. Guided Task

Show a Task that recommends the next action from the finding.

Minimum task anatomy:

Best example:

Create a mobile CTA test for visitors who do not reach the pricing section.

Why it matters:

This proves Tasks are not generic checklists.


3. Recommendation card

Show a recommendation that connects evidence to action.

Minimum recommendation anatomy:

Best example:

Pricing page visitors from paid search click plan details but rarely reach the trial CTA. Test a sticky CTA or shorter plan comparison for this audience.

Why it matters:

This is the actual bridge between passive analytics and experimentation.


4. A/B test flow

Show either test creation or results tied to a recommendation.

Minimum proof:

Best example:

Test sticky mobile CTA vs current pricing page for mobile paid-search visitors.

Why it matters:

This makes "know what to test next" feel real.


5. Audience example

Show which visitor group is affected.

Best example:

Mobile visitors from paid search who viewed pricing but did not start a trial.

Why it matters:

Audiences becomes understandable. It is not abstract segmentation. It is who needs a different test or experience.


6. Workflow visual

Show the whole loop:

Evidence -> Task -> Audience -> Test -> Result

Why it matters:

The product is being positioned as workflow, not feature collection. The visual must prove that.


7. Case/demo narrative

Use one simple end-to-end story:

  1. Page has traffic but low conversion.
  2. Crazy Egg finds where visitors get stuck.
  3. Task recommends next action.
  4. Audience shows who is affected.
  5. Team launches a test.
  6. Team learns what worked.

Why it matters:

This is easier to believe than a feature grid.


8. Pricing proof

Verify before using:

Why it matters:

Pricing claims are trust claims. Do not imply plan value that is not real.


Future proof

Use only when shipped and demo-ready:


Product gap analysis

Gap 1: evidence-to-task traceability

The positioning requires Tasks to feel causally connected to website evidence.

If Tasks are generic, the strategy weakens.

Minimum viable product proof

A Task card should show:

Example task

Task: Test a sticky mobile CTA on the pricing page.
Source: Scrollmap shows most mobile visitors do not reach the trial CTA.
Why it matters: Visitors may be interested but never see the conversion path.
Next step: Create an A/B test with a sticky trial CTA for mobile visitors.

Product implication

Tasks should not be positioned as project management. They are the guidance layer.
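One way to picture the evidence-backed Task card from the example above is as a small record with a hard requirement on its source field. A hypothetical sketch, not Crazy Egg's actual data model:

```python
from dataclasses import dataclass

@dataclass
class TaskCard:
    """Hypothetical evidence-backed Task card; field names are illustrative."""
    task: str            # the recommended action
    source: str          # the behavior evidence that triggered it
    why_it_matters: str  # the conversion risk in plain language
    next_step: str       # the concrete follow-up, ideally a test

    def is_evidence_backed(self) -> bool:
        # A Task with no source reads as a generic checklist item.
        return bool(self.source.strip())

card = TaskCard(
    task="Test a sticky mobile CTA on the pricing page.",
    source="Scrollmap shows most mobile visitors do not reach the trial CTA.",
    why_it_matters="Visitors may be interested but never see the conversion path.",
    next_step="Create an A/B test with a sticky trial CTA for mobile visitors.",
)
print(card.is_evidence_backed())
```

The design choice the sketch encodes: the source field is what separates a guidance layer from project management.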


Gap 2: recommendation-to-test handoff

The promise is not only "see the issue." It is "know what to test next."

If users must manually recreate the test from scratch, the promise is weaker.

Minimum viable product proof

A recommendation should include a clear next action:

Copy if handoff exists

Turn recommendations into conversion tests.

Copy if handoff does not exist

Get conversion test ideas from real visitor behavior.


Gap 3: Audience clarity

Audiences is strategically important, but the word alone does not sell the value.

Buyers care less about segmentation and more about not treating every visitor the same.

Minimum viable product proof

Show an Audience in plain English:

Mobile visitors from paid search who reached pricing but did not start a trial.

Better product language

Use:

who is affected

Before introducing the label:

Audiences

Copy pattern

See which visitors are affected, then save them as an Audience for surveys, analysis, or tests.


Gap 4: AI trust

AI is useful only if it is visibly grounded in behavior evidence.

The homepage should not lead with AI because generic AI is already crowded and low-trust.

Minimum viable product proof

AI output should show source evidence:

Copy if AI summarizes only

AI summarizes behavior and surfaces recommendations.

Copy if AI recommends tests

AI helps turn visitor behavior into test ideas.

Copy if AI drafts variants

AI helps draft page variations grounded in real visitor behavior.

Avoid

AI knows what to fix.


Gap 5: A/B testing credibility

Crazy Egg does not need to out-enterprise VWO or Optimizely. But it must look credible enough for practical conversion tests.

Minimum viable product proof

A/B test UI should show:

Strong copy

Learn what works before making permanent changes.

Avoid unless rigor supports it

Prove the lift.


Feature-by-feature copy guardrails

Tasks

If Tasks are mostly onboarding/setup

Avoid:

Tasks guides your full optimization workflow.

Use:

Guided setup and recommendations help you get to useful findings faster.

If Tasks include analysis and recommendations

Use:

Tasks guides your team from findings to next actions.

If Tasks can create or trigger tests

Use:

Tasks guides your team from findings to launched tests.


Audiences

If Audiences are mostly filters

Avoid:

Target the right audience with every test.

Use:

Filter behavior by visitor group to understand who is affected.

If Audiences support targeting in surveys or tests

Use:

Use Audiences to target surveys and A/B tests to the right visitors.

If Audiences support personalization

Use:

Use Audiences to test and personalize the right experience for the right visitors.


AI

If AI only summarizes

Avoid:

AI drafts page fixes.

Use:

AI summarizes behavior and surfaces recommendations.

If AI recommends tests

Use:

AI helps turn visitor behavior into test ideas.

If AI drafts variants

Use:

AI helps draft page variations grounded in real visitor behavior.


A/B Testing

If A/B Testing is standalone

Avoid:

Turn recommendations into tests.

Use:

Use A/B testing to validate your best ideas.

If A/B Testing connects to recommendations

Use:

Turn recommendations into conversion tests.


Personalization

If personalization is roadmap

Avoid public homepage emphasis.

Use internally:

Audience-aware optimization.

If personalization is live but lightweight

Use:

Test different experiences for different visitor groups.

If personalization is live and robust

Use:

Personalize page experiences based on real visitor behavior and test what improves conversion.


Launch-safe homepage package

Use this if there is uncertainty about product proof.

Hero

Find what stops visitors from converting. Know what to test next.

Subhead

Crazy Egg helps your team turn heatmaps, recordings, surveys, and analytics into guided recommendations and conversion test ideas.

Proof bullets

Workflow section

From visitor friction to your next test idea

  1. See where visitors get stuck.
  2. Review the evidence.
  3. Understand who is affected.
  4. Get a recommended next step.
  5. Validate your best idea with A/B testing.

Comparison line

Free analytics shows what happened. Enterprise tools are heavy. Crazy Egg helps your team decide what to test next.


Strong homepage package

Use this if Tasks, Audiences, and A/B testing are demonstrably connected.

Hero

Find what stops visitors from converting. Know what to test next.

Subhead

Crazy Egg turns heatmaps, recordings, surveys, and analytics into guided tasks, audience insights, and conversion tests your team can launch.

Proof bullets

Workflow section

From visitor friction to launched test

  1. Measure behavior.
  2. Find the friction.
  3. Understand who it affects.
  4. Get the next task.
  5. Launch the test.
  6. Learn what works.

Comparison line

Crazy Egg fills the missing middle between free analytics and enterprise experimentation.


Future category-leader package

Use only once AI variation drafting and personalization are production-ready.

Hero

Find what stops visitors from converting. Test what fixes it.

Subhead

Crazy Egg uses real visitor behavior to guide your next task, draft page variations, target the right audience, and test what improves conversion.

Proof bullets

Workflow section

From behavior evidence to tested page improvements

  1. Detect the friction.
  2. Identify the audience.
  3. Draft the variation.
  4. Target the experience.
  5. Test the change.
  6. Learn and repeat.

Product roadmap implications

If Crazy Egg chooses this positioning, roadmap priority should shift toward connective workflows, not isolated feature expansion.

The product must make the middle visible:

behavior evidence -> recommendation -> task -> audience -> test -> learning

Priority 1: evidence-backed Tasks

Tasks should show why they exist.

Recommended fields:

Why it matters:

This turns Tasks from checklist into guidance layer.


Priority 2: recommendation cards

Recommendations should become the core object in the product.

Recommended fields:

Why it matters:

This is the object that differentiates Crazy Egg from Clarity and Hotjar.


Priority 3: create test from recommendation

This is the most important workflow bridge.

Minimum viable action:

Create A/B test from this recommendation

Fallback action:

Save as test idea

Why it matters:

Without this, Crazy Egg still risks feeling like an insights product instead of an optimization workflow.


Priority 4: Audience handoff

Audiences should appear naturally in recommendations and test setup.

Example:

This issue affects mobile visitors from paid search. Create Audience?

Why it matters:

Audiences becomes a conversion tool, not a segmentation feature.


Priority 5: learning loop

After a test, Crazy Egg should suggest the next action.

Example:

Variant improved CTA clicks but not trial starts. Next task: review recordings from visitors who clicked CTA but abandoned signup.

Why it matters:

This completes the operating system loop.


Sales and CS implications

The sales story should not be:

We have heatmaps, recordings, surveys, analytics, audiences, AI, and A/B testing.

That is a feature list.

The sales story should be:

Your team already has traffic and data. The hard part is deciding what to do next. Crazy Egg turns visitor behavior into the next conversion test.

Discovery questions

Use these to qualify fit:

  1. Which pages get traffic but underperform?
  2. How do you decide what to test today?
  3. What tools do you use to understand visitor behavior?
  4. Where do test ideas come from?
  5. Who decides which changes get made?
  6. How often do insights become tests?
  7. What happens after a test ends?
  8. Do you treat all visitors the same, or segment by source, device, intent, or behavior?

Best-fit buyers

Poor-fit buyers


Comparison implications

Against Clarity

Do not compete on free heatmaps.

Position against the missing next step:

Clarity shows behavior for free. Crazy Egg helps your team decide what to test next.

Against Hotjar

Do not compete on recordings alone.

Position against workflow:

Hotjar helps you understand behavior. Crazy Egg connects behavior to guided tasks and conversion tests.

Against VWO and Optimizely

Do not pretend Crazy Egg is more enterprise.

Position against accessibility:

VWO and Optimizely are built for mature experimentation programs. Crazy Egg is the simpler path from visitor behavior to your next website conversion test.

Against AI tools

Do not lead with AI.

Position against evidence:

Generic AI starts from prompts. Crazy Egg starts from what visitors actually do on your site.


Final recommendation

Use this as the default public message unless product proof supports the stronger version:

Default headline

Find what stops visitors from converting. Know what to test next.

Default subhead

Crazy Egg helps your team turn heatmaps, recordings, surveys, and analytics into guided recommendations and conversion test ideas.

Stronger subhead if proof is ready

Crazy Egg turns heatmaps, recordings, surveys, and analytics into guided tasks, audience insights, and conversion tests your team can launch.

Internal strategic spine

Crazy Egg is the behavior-to-experiment workflow for marketers optimizing existing website pages.

Product loop

Guide -> Measure -> Segment -> Recommend -> Draft -> Target -> Test -> Learn

Public guardrail

Do not say:

test the fix

Until AI variation drafting and the recommendation-to-test handoff are shipped and demo-ready.

Use instead:

know what to test next

That phrase is the strategic sweet spot: ambitious enough to escape heatmaps, safe enough to earn trust, and flexible enough to carry Tasks, Audiences, AI, personalization, and A/B testing as the product matures.