The Messy Monetization Test

  • price testing analysis

  • revenue optimization

  • influencing leadership decisions

The pricing strategy that drives our business today came from your analysis during one of the messiest testing periods we’ve ever run.
— Client Stakeholder

I was brought in to analyze conversion rates across pricing experiments for a health tech startup's first revenue model, but discovered that sequential testing, mid-stream product changes, and incomplete churn data made clean analysis nearly impossible.

My role was to extract actionable insights from imperfect data, model tradeoffs between price points despite methodological limitations, and deliver pricing recommendations that leadership could implement with confidence.

The Problem

The startup had operated as a free product for over two years before launching its first paid subscription model. Six months into monetization, leadership wanted to understand users' willingness to pay and optimize pricing across three subscription tiers: monthly, quarterly, and annual. I was tasked with analyzing conversion rates across 12 different price plans tested over 13 weeks to calculate customer acquisition costs and lifetime value.

The challenge was that nothing about this testing was clean. Every price change happened sequentially rather than through A/B tests (partly due to low volume and partly due to technical constraints), which meant isolating causality was nearly impossible. The onboarding flow changed twice during the testing period. Free trials were eliminated midway through. Each change introduced new variables that confounded the data. Churn data was incomplete because users hadn't stayed long enough to observe full retention cycles, forcing me to project lifetime value with partial information. Leadership acknowledged the situation was messy but needed to move forward, operating under the assumption that users acquired week-over-week were roughly comparable and that we could extract enough signal to make informed decisions.

The Solution

I segmented cohorts by entry date and subscription plan, treating each pricing period as a mini-experiment. I triangulated partial churn data by analyzing both cohort-based retention curves and average subscription lengths, then projected 12-month lifetime value (LTV) from observed behavior and reasonable assumptions grounded in the six months of data we had collected before these tests. Where product changes like the free trial elimination showed no significant impact on conversion, I treated cohorts as comparable to maximize sample size.

I modeled tradeoffs between price, conversion rate, and retention across the 12 plans. The analysis revealed that $7.99/month had the highest conversion (~6%) but poor LTV, while $14.99/month offered the best balance of conversion and retention among the top performers. I recommended $14.99/month, $29.99/quarter, and $89.99/year. Leadership implemented the pricing structure, which remained in place for about a year before the company changed its business model entirely.
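To make the tradeoff concrete, here is a minimal sketch of this kind of price/conversion/LTV comparison. The only figure taken from the actual analysis is the ~6% conversion at $7.99/month; every other number is hypothetical and purely illustrative.

```python
# Sketch of the price / conversion / LTV tradeoff model.
# All inputs besides the ~6% conversion at $7.99 are hypothetical.

plans = [
    # (plan, monthly price, observed conversion, projected 12-mo LTV)
    ("$7.99/mo",  7.99,  0.060, 28.0),   # high conversion, weak retention
    ("$14.99/mo", 14.99, 0.045, 95.0),   # balanced conversion and retention
    ("$19.99/mo", 19.99, 0.030, 110.0),  # strong LTV, too few conversions
]

for name, price, conv, ltv in plans:
    # Expected 12-month revenue per visitor reaching the paywall:
    # conversion rate x projected lifetime value.
    rev_per_visitor = conv * ltv
    print(f"{name}: {rev_per_visitor:.2f} expected revenue per visitor")
```

The useful output isn't any single number but the ranking: a plan can win on raw conversion and still lose on expected revenue per visitor once retention is priced in.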

My Approach

I knew from the start this would never be a textbook analysis, but I also knew that leadership needed to make a decision, and that making no decision would be worse than making an imperfect one. My focus was on being transparent about limitations while extracting the best signal possible from messy data. The incomplete churn data was the trickiest challenge, as users hadn't been subscribed long enough to observe natural retention cycles. I made the pragmatic choice to combine early retention signals with projections, ensuring my methodology was consistent across all price points so that relative comparisons remained valid even if absolute figures were uncertain. When presenting my recommendation, I walked leadership through my thinking, explained the tradeoffs, and framed the decision as a strategic choice about what mattered most to the business at this stage. I knew that all we needed was clarity about the next step, and that we would continue to observe, experiment, iterate, and pivot after this decision.
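As a sketch of how partial churn data can be extended consistently, the function below fits a constant month-over-month survival rate to whatever portion of the retention curve has been observed and extrapolates it to a 12-month horizon. The retention curves here are hypothetical; the point is that applying the identical procedure to every plan keeps relative comparisons valid even when absolute LTV figures are uncertain.

```python
import numpy as np

def project_ltv(monthly_price, retained_by_month, horizon=12):
    """Project LTV by fitting a constant monthly survival rate to the
    observed retention curve, then extrapolating to `horizon` months.

    retained_by_month: fraction of a cohort still subscribed at the
    end of each observed month, e.g. [1.0, 0.7, 0.55, 0.48].
    """
    r = np.asarray(retained_by_month, dtype=float)
    # Average month-over-month survival across the observed window.
    survival = (r[1:] / r[:-1]).mean()
    # Extend the observed curve with the fitted survival rate.
    projected = list(r)
    while len(projected) < horizon:
        projected.append(projected[-1] * survival)
    # Revenue = price collected for each month a subscriber remains.
    return monthly_price * sum(projected[:horizon])

# Hypothetical partial curves; the same fit is applied to every plan.
print(project_ltv(7.99,  [1.0, 0.55, 0.35, 0.25]))  # fast churn
print(project_ltv(14.99, [1.0, 0.80, 0.70, 0.64]))  # slower churn
```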

Core Skills Leveraged

  • This project required operating in conditions where traditional analytical best practices simply weren't possible. There was no control group, no randomization, no ability to isolate variables, just a series of sequential changes happening in real time as the business evolved. I recognized that perfect data was never coming and that the cost of inaction was higher than the risk of imperfect analysis, so I made deliberate choices about how to handle the messiness. When the onboarding flow changed or free trials were eliminated, I checked whether those shifts meaningfully affected conversion rates; where they didn't, I made the pragmatic call to pool data across those periods to increase sample size and improve the reliability of my estimates (a check like the one sketched after this list). When churn data was incomplete because users were too new to observe full retention cycles, I combined what I could observe with reasonable projections, making sure my methodology was consistent so that relative comparisons between price points remained valid. Throughout the analysis, I operated with a clear understanding of what I knew, what I didn't know, and what I could reasonably infer. I didn't let ambiguity paralyze decision-making. I found ways to extract signal from noise, made transparent assumptions, and delivered recommendations that leadership could act on despite the uncertainty.

  • When I saw that $7.99/month had the highest conversion rate, I could have simply recommended that price and called it a day. But I looked beyond the surface metric and recognized that conversion without retention would undermine long-term revenue. That's why I modeled the tradeoffs between acquisition and lifetime value, ultimately recommending $14.99/month despite its slightly lower conversion. This was a business judgment: I was prioritizing sustainable revenue over short-term signup rates.

    I also made judgment calls about how to communicate my findings. I was transparent about constraints while framing my recommendation as the best-supported path forward given the available evidence. I did so because I understood both the business context (pricing feeds the projections that support operational forecasting and fundraising) and what leadership needed to move forward with confidence.

  • One of the hardest parts of this project was explaining messy analysis to non-technical leadership in a way that built confidence rather than confusion. When I presented my findings to the CEO and CPO, I wanted to avoid burying them in caveats or overwhelming them with methodological detail. I walked them through my thinking in a clear, structured way. I started by acknowledging the limitations upfront to establish credibility and ensure we were on the same page about how to interpret the results. Then I explained how I'd handled each challenge: segmenting cohorts, pooling data from comparable periods, projecting LTV from early retention signals. I made sure they understood not just what I did, but why those choices were reasonable given the constraints. I framed $7.99/month as the "high-volume, low-retention" option and $14.99/month as the "balanced growth" option, giving them a clear decision framework rather than expecting them to interpret raw data. I gave leadership what they needed: a clear recommendation with transparent reasoning, so they could make an informed decision and move forward without second-guessing whether they had enough information.
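Referenced from the first bullet above: a sketch of the kind of significance check that can justify pooling cohorts across a product change such as the free-trial removal. The conversion counts are hypothetical; a two-proportion z-test (here via statsmodels) is one standard way to compare conversion rates before and after a change.

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical cohorts on either side of the free-trial removal:
# (converted, total users who reached the paywall)
before = (48, 1_050)
after = (44, 980)

# Two-proportion z-test on conversion rates before vs. after the change.
stat, pval = proportions_ztest(
    count=[before[0], after[0]],
    nobs=[before[1], after[1]],
)

# If the change shows no detectable effect on conversion, pooling the
# two cohorts is a defensible way to buy back sample size.
if pval > 0.05:
    print(f"p = {pval:.2f}: no detectable shift; pool the cohorts")
else:
    print(f"p = {pval:.2f}: conversion shifted; keep the cohorts separate")
```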
