Analytics, Growth, & Iteration
Measure what compounds: funnels, cohorts, experimentation, and retention — without mistaking motion for progress.
Explainer
Launch is Day 0 of the honesty loop. Acquisition can paper over a retention problem; dashboards can look green while the cohort curve decays. Growth PMs obsess over identifying the bottleneck stage, validating that experiments move durable behavior rather than novelty clicks, and protecting guardrail metrics from silent harm. Analytics is how you falsify optimistic narratives.
From vanity metrics to decision metrics
Traffic, downloads, impressions, seat count, raw signups—these inflate egos faster than they inform decisions. Prefer metrics tied to repeatable user value creation: activation completeness, habitual usage, monetization by meaningful segment, cohort retention flattening.
- For every KPI, specify the decision it changes this week.
- Segment everything that matters — aggregate metrics hide betrayal in a single segment (see the sketch after this list).
- Treat absolute numbers as hypotheses until trends and cohorts corroborate them.
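A minimal pandas sketch of the segmentation point, with invented numbers and a hypothetical schema: the aggregate dips mildly while one segment collapses.

```python
import pandas as pd

# Invented numbers, hypothetical schema: weekly retention per segment.
events = pd.DataFrame({
    "segment":      ["smb", "smb", "enterprise", "enterprise"],
    "week":         [1, 2, 1, 2],
    "active_users": [900, 880, 100, 40],
    "cohort_size":  [1000, 1000, 120, 120],
})

# Aggregate view: a mild dip that looks like noise...
agg = events.groupby("week")[["active_users", "cohort_size"]].sum()
print(agg["active_users"] / agg["cohort_size"])   # 0.89 -> 0.82

# ...segmented view: enterprise retention has collapsed (0.83 -> 0.33).
events["retention"] = events["active_users"] / events["cohort_size"]
print(events.pivot_table(index="week", columns="segment", values="retention"))
```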
AARRR: find the leaky stage before you polish the landing page
Dave McClure's pirate funnel (Acquisition → Activation → Retention → Referral → Revenue) is a mnemonic, not magic. Growth work starts by identifying where the funnel drops fastest relative to a benchmark—not where it feels emotionally painful.
Cohort analysis & survivorship humility
A cohort freezes users who share a meaningful start trait (signup week, plan tier, channel). Comparing cohorts separates launch hype from habitual use. Survivorship bias arises when you retroactively analyze only retained users: such analyses flatter onboarding changes while hiding drop-off cliffs.
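A compact pandas sketch of the curve computation, assuming a hypothetical event file with user_id, signup_week, and active_week columns:

```python
import pandas as pd

# Hypothetical schema: one row per (user, week-with-activity) pair.
events = pd.read_csv("activity.csv", parse_dates=["signup_week", "active_week"])
events["weeks_since_start"] = (
    (events["active_week"] - events["signup_week"]).dt.days // 7
)

# Cohort membership is frozen at signup, which sidesteps the survivorship
# trap of selecting on users who happen to still be active today.
cohort_sizes = events.groupby("signup_week")["user_id"].nunique()
active = events.groupby(["signup_week", "weeks_since_start"])["user_id"].nunique()
retention = active.div(cohort_sizes, level="signup_week").unstack()

# Rows are cohorts, columns are weeks since start; healthy rows flatten.
print(retention.round(2))
```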
Experimentation maturity
A/B tests mislead when novelty effects swamp the signal, ratios are tortured post-hoc, sample sizes chase significance instead of commercially meaningful lifts, guardrails are skipped, or runtime is cut short the moment statistical significance pings. The simulation below makes that last failure concrete.
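A quick A/A Monte Carlo (no true effect; all parameters invented) demonstrates the peeking trap: testing after every batch and stopping at the first significant result inflates the false-positive rate far beyond the nominal 5%.

```python
import numpy as np
from scipy import stats

# A/A simulation: both arms share the same conversion rate, so any "win"
# is a false positive. We peek after every batch and stop at p < 0.05.
rng = np.random.default_rng(0)
runs, peeks, batch, p_base, alpha = 1_000, 20, 500, 0.10, 0.05

false_positives = 0
for _ in range(runs):
    a = rng.binomial(1, p_base, peeks * batch).cumsum()
    b = rng.binomial(1, p_base, peeks * batch).cumsum()
    for k in range(1, peeks + 1):
        n = k * batch
        pooled = (a[n - 1] + b[n - 1]) / (2 * n)
        se = np.sqrt(2 * pooled * (1 - pooled) / n)
        z = (a[n - 1] - b[n - 1]) / n / se   # two-proportion z at this peek
        if 2 * stats.norm.sf(abs(z)) < alpha:
            false_positives += 1
            break

# With one pre-committed look this hovers near 5%; with 20 peeks it is
# several times larger.
print(f"false-positive rate with peeking: {false_positives / runs:.1%}")
```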
Growth systems
Funnels diagnose; loops compound. Acquisition loops (paid, viral, SEO), retention loops (habit, network effects), and monetization bridges should be modeled explicitly—even as rough diagrams—because they change what you prioritize when the bottleneck moves.
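As a toy illustration of why loops compound (parameters invented, not benchmarks), model one acquisition loop's amplification factor:

```python
# Toy viral-loop model: each cohort of new users sends invites; a fraction
# convert and seed the next cycle. k is the amplification per cycle.
invites_per_user = 2.0
invite_conversion = 0.12
k = invites_per_user * invite_conversion

seed, cycles = 1_000, 8
new_users, total = float(seed), float(seed)
for cycle in range(1, cycles + 1):
    new_users *= k                          # each cycle seeds the next
    total += new_users
    print(f"cycle {cycle}: +{new_users:,.0f} users (cumulative {total:,.0f})")

# Even k < 1 matters: the loop multiplies every acquired user by roughly
# 1 / (1 - k) over its lifetime (~1.32x here); k >= 1 compounds unboundedly.
```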
Framework atlas
Reference cards for each method in this mission
Each card covers when to deploy the method, misuse patterns, sequencing guidance, and (where relevant) shorthand formulas.
Funnel diagnostics · AARRR (Pirate Metrics) · Dave McClure
Five macro stages spanning how users arrive, activate, retain, amplify, and pay. Use them to localize the weakest transition before prescribing tactics.
When to use
- Growth triage workshops.
- Explaining bottleneck thinking to executives.
When not to
- Discrete enterprise pipeline stages unrelated to self-serve activation funnels.
- When acquisition is sales-led and never touches a product surface.
How to apply
- Define stage boundary events with timestamps.
- Measure conversion ratios between neighbouring stages weekly.
- Benchmark against analogous products or historic internal cohorts.
- Ship experiments only at the current limiting stage.
- Advance focus when ratios normalize.
Engagement diagnostics · Cohort retention curve
Plot % of cohort still active versus days/weeks since start. Healthy products often plateau; leaky products decay toward zero smoothly.
When to use
- Evaluating onboarding or activation changes.
- Comparing seasonal launches.
When not to
- One-off enterprise pilots with n<20 — noise dominates signal.
- When you cannot define a consistent activation event across users.
How to apply
- Pick a meaningful recurring action (Weekly Active, habitual session).
- Align cohort boundaries (signup week vs first value event).
- Overlay before/after curves on one chart rather than comparing single-day retention KPIs.
Learning velocity · Experiment planning canvas
Hypothesis statement, metric, minimum detectable lift, statistical power assumptions, segmentation plan, rollout + kill thresholds—before instrumentation.
When to use
- Growth teams juggling multiple concurrent tests.
When not to
- Qualitative discovery where quant tests would be premature.
- When legal/compliance forbids holding out a control cell.
How to apply
- Write falsifiable hypothesis statements.
- Lock primary + guardrail metrics.
- Pre-commit runtime or sample size; leave no room for optional peeking tweaks.
- Peek-and-stop inflates false positives dramatically (see the A/A simulation earlier); a sizing sketch follows.
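To make the pre-commitment step concrete, here is a standard per-arm sample-size calculation for a two-sided, two-proportion test under the normal approximation; the function name and defaults are this sketch's, not a library's.

```python
from math import ceil, sqrt
from scipy.stats import norm

def sample_size_per_arm(p_base: float, mde_rel: float,
                        alpha: float = 0.05, power: float = 0.80) -> int:
    """Per-arm n for a two-sided two-proportion test (normal approximation).

    p_base:  baseline conversion rate
    mde_rel: minimum detectable lift, relative (e.g. 0.10 = +10%)
    """
    p_var = p_base * (1 + mde_rel)
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    p_bar = (p_base + p_var) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p_base * (1 - p_base) + p_var * (1 - p_var))) ** 2
    return ceil(num / (p_var - p_base) ** 2)

# Example: 4% baseline, +10% relative lift. Commit to this n before launch.
print(sample_size_per_arm(0.04, 0.10))
```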
Product Psychology
Cognitive biases that distort product decisions
Metric cherry-picking
Choosing the slice of data that validates the rollout while ignoring regressions elsewhere.
Product Risk
Regressions flagged by guardrail metrics ship silently while the team celebrates an irrelevant north-star blip.
Research Countermove
Mandatory guardrail dashboards and pre-registration of slices + metrics before launch.
Novelty effect blindness
Short-term uplift from UI freshness mistaken for sustained behavior change.
Product Risk
Roadmaps reorder around cosmetic wins that evaporate.
Research Countermove
Hold out cells; extend bake time; cohort users by exposure count.
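Both countermoves reduce to pre-commitment. A minimal illustration of what a pre-registration record might contain (all field names hypothetical):

```python
# Illustrative pre-registration record. Locking metrics, slices, runtime,
# and holdouts before launch turns cherry-picking and early stopping into
# visible deviations from the record.
PREREGISTRATION = {
    "experiment": "onboarding_checklist_v2",
    "primary_metric": "week1_activation_rate",
    "guardrails": ["support_tickets_per_user", "p95_load_time_ms", "churn_30d"],
    "slices": ["plan_tier", "acquisition_channel", "platform"],
    "minimum_detectable_lift": 0.05,        # relative
    "runtime_days": 21,                     # pre-committed; no peek-and-stop
    "holdout_cell": 0.05,                   # bake time against novelty effects
    "kill_thresholds": {"churn_30d_relative_increase": 0.02},
}
```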
Interactive lab
These instruments implement the textbook formulas loosely. Use them to stress-test judgments and compare frameworks on the same backlog, then document evidence and decisions.
Funnel bottleneck finder (AARRR)
Enter absolute counts across your AARRR funnel. Rates are naive ratios—use cohort definitions aligned to your product semantics.
Stage conversions
- Top-of-funnel reach (sessions) → Account created / signup: 8.40% (8,400 / 100,000). Tightest transition: inspect this constraint first.
- Account created / signup → Hit activation milestone: 38.10% (3,200 / 8,400)
- Hit activation milestone → Return within window (cohort habitual): 56.25% (1,800 / 3,200)
- Return within window (cohort habitual) → Successful referral fired: 12.22% (220 / 1,800)
- Successful referral fired → Converted revenue events: 409.09% (900 / 220). A ratio above 100% means these stages are not strictly sequential (paying users need not refer first), so treat it as a diagnostic, not a conversion rate (see the sketch below).
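A minimal sketch of the naive ratio math behind this lab, using the same counts. Ratios above 1 are excluded from the bottleneck flag because, as noted above, they signal non-nested stages rather than conversion.

```python
# Same counts as the lab above; "naive" because each rate divides adjacent
# stage totals even where stages are not strictly sequential.
funnel = [
    ("sessions", 100_000),
    ("signup", 8_400),
    ("activation_milestone", 3_200),
    ("habitual_return", 1_800),
    ("referral", 220),
    ("revenue_events", 900),
]

rates = [
    (f"{prev} -> {name}", count / prev_count)
    for (prev, prev_count), (name, count) in zip(funnel, funnel[1:])
]
for label, rate in rates:
    print(f"{label}: {rate:.2%}")

# Flag the tightest genuinely sequential transition (ratios > 1 excluded).
label, rate = min((r for r in rates if r[1] <= 1), key=lambda r: r[1])
print("inspect first:", label)   # sessions -> signup at 8.40%
```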
Resources / Case Studies
Curated reading for this mission
Reforge / Brian Balfour
Deep dives on retention, loops, funnel math, channels, growth models, PMF checkpoints.
The canonical written growth syllabus that many PM influencers reference implicitly.
Amplitude
North Star + metric taxonomy frameworks, onboarding analytics narratives, taxonomy cookbooks.
Closes the gap between 'we track events' and 'we steer strategy with behavioural truth'.
Elena Verna (Substack RSS)
RSS feed for Elena’s operating notes bridging growth, experimentation, SaaS benchmarks.
Up-to-date operator perspective on pragmatic metrics & org design—not theory-only academia.
Casey Winters
RSS essays on marketplace growth loops, onboarding quality, experimentation culture.
Connects pirate metrics narratives to nuanced network effects realities.
Operator-written essays on monetization loops, onboarding, experimentation programs, hiring growth PMs.
Benchmark thinking for experimentation velocity beyond generic funnel charts.
Product Coalition
Medium publication RSS aggregating pragmatic PM narratives (strategy, experimentation, stakeholder craft).
Broad surface area perspectives beyond a single author's lens.
Evan Miller
Readable explanation of variance, statistical power calculators, pitfalls of naive significance.
Stops debates where people quote p-values without sample planning literacy.
Ron Kohavi et al.
Industrial-scale A/B learnings across Microsoft/Bing pedigree—novelty bias, surrogate metrics.
The reference when your org grows past spreadsheets into experiment platforms.