The Analytical Engine:
operational leadership
entrepreneurial execution
ambiguity navigation
“You took initiative on problems I didn’t even know we had. I appreciate how you push back and guide me towards better ways to think about the business.”
I was brought in to answer basic business questions for a four-person health tech startup.
My role was to build a scalable, self-serve analytics infrastructure that freed leadership and operational teams to make data-informed decisions in real time, rather than waiting on ad hoc queries that consumed precious engineering time.
The Problem
When I started working with the startup in 2021, there was no formal analytics function. The CEO relied entirely on the company's single engineer to pull metrics—whether for weekly standups, monthly all-hands, or strategic decision-making. The resulting numbers were technically accurate but often misaligned with business definitions or built on inconsistent filtering criteria, making comparisons from one report to the next unreliable. There were no documented standard definitions for the metrics of interest, and no transparency into how numbers were calculated. The process was slow, non-repeatable, and created a bottleneck: the engineer couldn't focus on product development, and leadership couldn't move quickly on decisions. The Community team had no visibility into user behavior without going through this same request process. As the company grew from four employees to a seed-stage team of six employees, four contractors, and three interns, this ad hoc approach became unsustainable.
The Solution
I requested read-only access to the transactional database and began answering business questions directly using SQL. Since no documentation existed, I created a data dictionary by reverse-engineering table relationships and documenting business logic as I encountered it. After about a year of manual query work, I identified Metabase as a tool that could enable self-serve reporting and secured a $1,000 annual budget by demonstrating to the Chief Product Officer how it would free up both my time and engineering resources. I built interactive dashboards starting in late 2023, beginning with the Community team's most frequent requests—user search, activity summaries, and behavioral tracking. By January 2024, these dashboards were in active use for weekly standups and monthly reporting.
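To give a concrete sense of the reverse-engineering step, the sketch below shows one way table relationships can be pulled from the database's own metadata to seed a data dictionary. It assumes a PostgreSQL transactional database and uses illustrative aliases; the startup's actual schema is not reproduced here.

    -- Minimal sketch: list foreign-key relationships from the database's
    -- own metadata as a starting point for a data dictionary.
    -- Assumes PostgreSQL; the column aliases are illustrative.
    SELECT
        tc.table_name    AS child_table,
        kcu.column_name  AS child_column,
        ccu.table_name   AS parent_table,
        ccu.column_name  AS parent_column
    FROM information_schema.table_constraints tc
    JOIN information_schema.key_column_usage kcu
      ON kcu.constraint_name = tc.constraint_name
     AND kcu.table_schema    = tc.table_schema
    JOIN information_schema.constraint_column_usage ccu
      ON ccu.constraint_name = tc.constraint_name
     AND ccu.table_schema    = tc.table_schema
    WHERE tc.constraint_type = 'FOREIGN KEY'
    ORDER BY child_table, child_column;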
Throughout 2024, I developed reusable analytical layers—transformed tables with business rules pre-applied—so that future queries no longer required redundant filtering logic. This created consistency in how metrics were measured, dramatically sped up the process of updating business definitions, and ensured robustness: when business rules changed, I could update them once in the analytical layer rather than hunting down and modifying dozens of individual queries. The system enabled the CEO to make pricing decisions based on my analyses, supported partner reporting that strengthened external relationships, and allowed operational teams to access the data they needed without depending on engineering.
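To illustrate what an analytical layer looks like in practice, here is a simplified sketch of the idea: a view with the business rules applied once, so downstream dashboards and queries never re-implement the filters. The table, column, and rule names are hypothetical stand-ins, not the startup's actual schema.

    -- Simplified sketch of an analytical-layer view (hypothetical names).
    -- Business rules live here, in one place, instead of in every query.
    CREATE OR REPLACE VIEW analytics.active_members AS
    SELECT
        u.id         AS user_id,
        u.created_at AS joined_at,
        s.plan_name,
        s.started_at AS subscription_started_at
    FROM users u
    JOIN subscriptions s
      ON s.user_id = u.id
    WHERE u.is_test_account = FALSE   -- exclude internal/test accounts
      AND u.deleted_at IS NULL        -- exclude deactivated accounts
      AND s.status = 'active';        -- the agreed definition of "active"

Dashboards and ad hoc queries then select from the view, so when a business rule changes, the definition is edited once and every report picks it up automatically.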
My Approach
My first priority was understanding what data we actually had—and what we didn't. I needed to know exactly what we could measure, what we couldn't, and where I might be able to construct approximate proxies for actions we weren't capturing. This meant systematically reverse-engineering the entire database schema on my own, occasionally asking the engineer clarifying questions when I hit ambiguities, but largely working independently to map out table relationships and document business logic. I leveraged this understanding to participate more confidently in conversations about metrics, set realistic expectations, and guide the CEO to rethink how things should be measured. Sometimes that meant pushing back on requests that weren't feasible or wouldn't answer the question she was really asking. I made a habit of clarifying why she wanted certain metrics because understanding the decision she was trying to make helped me find better ways to support her, even if it wasn't exactly what she'd originally asked for.
Once I had a solid grasp of the data landscape, I held an analytics summit with the CEO and Chief Product Officer. We spent dedicated time mapping out all the parts of the product we cared to measure—conversions, funnels, user behaviors—and jointly defined metrics: what was included, what wasn't, how calculations should be done. This collaborative session was critical because it aligned us on definitions and surfaced gaps in our data capture. In a few instances, the CPO added instrumentation work to the engineering roadmap so we could actually track what we needed. This wasn't just me dictating definitions—it was a joint effort to build a shared understanding of how we'd measure success.
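As an illustration of the kind of definition the summit produced, the sketch below computes a hypothetical signup-to-activation funnel with the agreed inclusion rules written directly into the query; the tables, event names, and the funnel itself are purely illustrative.

    -- Purely illustrative funnel metric (hypothetical tables and events):
    -- monthly signups, how many became active, and the activation rate.
    SELECT
        date_trunc('month', u.created_at) AS cohort_month,
        COUNT(*) AS signups,
        COUNT(a.user_id) AS activated,
        ROUND(COUNT(a.user_id)::numeric / NULLIF(COUNT(*), 0), 2) AS activation_rate
    FROM users u
    LEFT JOIN (
        SELECT user_id, MIN(occurred_at) AS first_activity_at
        FROM activity_events
        GROUP BY user_id
    ) a ON a.user_id = u.id
    WHERE u.is_test_account = FALSE   -- agreed exclusion: internal/test accounts
    GROUP BY 1
    ORDER BY 1;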
When I proposed Metabase, I knew budget would be a concern for a pre-seed startup. I framed the investment not as a nice-to-have analytics tool, but as a way to unlock leverage: it would reduce my manual workload, eliminate engineering's reporting burden, and create transparency so teams could self-serve rather than waiting for answers. The CPO became my champion in securing approval, as I needed an internal advocate who understood both the technical value and the operational cost of the status quo. With the tool in place, I focused on adoption, not perfection. I started with the Community team's highest-volume needs and created dashboards that answered the questions they asked every single week. This created immediate value and built momentum. As the team saw what was possible, they came back with new requests, and I continued building incrementally.
The analytical layers emerged from frustration. I was writing increasingly complicated queries full of CTEs just to produce the views the team needed, and I realized this approach wouldn't scale — not for me, and certainly not for anyone else who might need to query the data in the future. I reached out to engineering friends at more mature companies and asked how they solved this problem. That's when I learned about analytical layers: transformed tables that apply business rules in one place. I adapted that concept for this startup context, building reusable views that centralized filtering logic and business definitions. This meant that when business rules changed, I could update them once in the analytical layer rather than tracking down every affected query. It was a strategic architecture choice: I was trading upfront complexity for long-term consistency, scalability, and maintainability.
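The payoff is easiest to see side by side. Using the same hypothetical names as the view sketched earlier, a report that once carried its own CTEs for filtering collapses into a one-line query against the analytical layer.

    -- Before: each report re-implemented the same filtering logic in CTEs.
    WITH real_users AS (
        SELECT id
        FROM users
        WHERE is_test_account = FALSE
          AND deleted_at IS NULL
    ),
    active AS (
        SELECT s.user_id
        FROM subscriptions s
        JOIN real_users r ON r.id = s.user_id
        WHERE s.status = 'active'
    )
    SELECT COUNT(*) AS active_members FROM active;

    -- After: the business rules live once in the analytical layer.
    SELECT COUNT(*) AS active_members FROM analytics.active_members;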
Core Skills Leveraged
-
It was never far from my mind that whatever solution I came up with needed to both adapt to changes and scale as the startup grew. The Community team had been downloading “server reports” containing the full activity history and repeatedly reformatting them by hand in Excel to get answers; this was clearly inefficient and unsustainable. Having me manually run queries to answer everyone's questions was also unsustainable. I saw that Metabase would be a scalable solution that would streamline everyone's data-related requests by cutting down on unnecessary manual work. When introducing it to the broader team, I drove adoption by focusing on the highest-frequency pain points, so that users would immediately see the value of the tool and switch to it because it made their lives easier.
I embarked on building analytical layers because I saw that this approach to managing the data would ensure metric consistency as the business evolved, product features changed, and new team members joined. I had already experienced many business rule and business model changes firsthand: subscription models had been tested, user definitions had been refined, product launches had altered the data structure. I wanted to build a system that could absorb those changes without breaking downstream queries or requiring wholesale rewrites. I was already struggling to keep track of which queries a given change would affect, or which ones did and didn't include specific business rules. I knew we needed better data infrastructure sooner or later, and I wanted to lay the foundation with a solution that would let us keep adapting as the business grew.
-
With no formal authority in this organization, my influence came from credibility, strategic framing, and relationship-building. By studying the data schema independently and producing defensible metrics, I established myself as someone leadership could trust when they needed answers. I also positioned myself as a thought partner on measurement by engaging the CEO in conversations about the questions behind her requests. When she asked for metrics that weren't feasible or wouldn't actually answer her underlying question, I pushed back and reframed the conversation around the decision she was trying to make.
To secure budget for Metabase, I knew having an internal champion who understood both the technical value and the operational cost of the status quo would strengthen my case. I invested time in demonstrating the tool's potential to the CPO, who then advocated on my behalf. Throughout this project, I was constantly translating technical concepts into business value, building coalitions, and creating visibility into the work I was doing so that stakeholders could see the impact and trust the system I was building.
-
This entire endeavor was born out of pursuing the next most valuable step: I identified the problem, designed the solution, learned how I could improve upon it, and ultimately built an infrastructure that democratized data access. I was never asked to play a role in analytics; I simply saw how frustrating it was to ask the sole engineer for metrics, and how much time was wasted in each metric-related discussion clarifying filters and exceptions. I requested database access because I wanted to support the CEO’s decision-making with more transparent metrics.
Once we had adopted Metabase for self-serve dashboards, I continued to check in with the Community team so I could keep improving their experience and eliminating manual work for them as the business evolved. When I noticed that I was repeatedly writing very complex (but sometimes similar) queries just to produce different views, and that I had to hunt down broken queries after each product change, I reached out to my network to learn how mature companies handled this problem. I then undertook the painful process of rebuilding all our existing queries on top of the new analytical layers, because it was the right thing to do to ensure the scalability and sustainability of the business's analytics.