18 Months. Never Shipped. Here's What That Taught Me About Product Management.

Somaditya Roy

Why the shipping decision is harder than building the product — and the three traps that stopped me.
Eighteen months. A mobile app. A freelance UI/UX designer. Google and YouTube ad campaigns. A Flutter frontend, a Node.js/Express.js backend, a Firebase layer. Thirty beta testers across A/B and closed testing cycles. Iteration after iteration — on the onboarding flow, on the core loop, on the notification logic, on things that probably didn't matter and things that definitely did.
The app never launched. Not because the market wasn't there. Not because the code didn't work. Because I could never decide it was good enough to ship.
If you've spent any time in product circles, you already know what the standard post-mortem looks like: "I should have shipped sooner." It's the canonical failure confession. And it's probably right. But that framing misses something — something I didn't understand until I started mapping what I'd actually done for those 18 months against the work product managers get paid to do.
The failure was real. The training was also real. This is what I learned.
The "Ship Fast" Orthodoxy Is Answering the Wrong Question
The dominant mantra in product development is speed. Ship early. Ship often. Get it in front of users. The data behind this instinct is real: 42% of startup failures trace back to "no market need" — not bad code, not poor execution, but building for a problem that wasn't large enough or real enough to generate demand (CB Insights). Ship faster, and you find out sooner.
But "ship fast" is answering one specific question: Do I have a market for this? That's the discovery question. Answer it early, with prototypes and experiments, before committing months of engineering time. That's sound advice.
The problem is that founders apply it to a completely different question: Is my built product ready to launch publicly? That's the shipping decision. And the ship-fast orthodoxy has almost nothing useful to say about it.
I did the discovery work. Thirty beta testers. Google and YouTube ad experiments. A designer. Real feedback loops running for months. The evidence that a market existed was there — imperfect, but present. My failure was not in skipping discovery. It was in getting stuck on the shipping decision: I could not decide the built product was good enough to ship.
That's where the three traps live.
The Three Traps
After eighteen months, I can name the traps precisely. They're not unique to me — they're patterns that show up consistently in solo founder post-mortems, and they each have a specific mechanism.
The Infrastructure Trap
This one feels like discipline. You're setting up CI/CD pipelines, configuring monorepos, implementing proper release automation. You're being rigorous. You're thinking about scale.
What you're actually doing is avoiding the one thing that would tell you whether any of it matters: putting the core value proposition in front of a real user.
Production-grade infrastructure for zero users is not engineering rigour. It's a psychological shield. The work is real and the output is tangible, so it feels like progress. But it's optimising the delivery layer of a product whose value hasn't been tested. Every hour spent there is an hour not spent answering the only question that matters in the early stage: does this solve a problem people care about enough to change their behaviour?
The fix is blunt: no infrastructure investment before the core loop has been validated with low-fidelity prototypes. Not code. Prototypes. Figma flows, paper sketches, wizard-of-oz demos. Build the thing that answers the value question for the cost of an afternoon, not three months of backend work.
The Validation Trap
Closely related to the first, but distinct. The Validation Trap is using production code as a learning tool when a prototype would do the job faster and cheaper.
Marty Cagan separates discovery from delivery for a reason: discovery is about answering four risk questions — is it valuable, is it usable, is it feasible, is it viable — before you commit engineering resources. When you're a solo founder who is also the engineer, the boundary between discovery and delivery collapses. You answer the "is it feasible?" question by building it. You answer the "is it valuable?" question by releasing it to beta testers. The problem is that you've now spent months answering questions you could have tested in a week.
The fix: treat every product question as a testable hypothesis, and choose the lowest-fidelity test that could falsify it. If a Figma prototype with a dummy backend can answer the question, build that. Code is expensive. Prototypes are cheap. Use the right tool.
The Consensus Trap
This is the one I fell into hardest. Thirty beta testers. A/B testing. Closed testing cycles. Real engagement data. And I kept waiting — for the signal that said yes, this is ready, ship it.
The signal never came.
What I interpreted as "the product isn't ready" was actually "the product isn't generating strong enough signal to overcome my fear of launching." Sparse or lukewarm beta feedback doesn't mean the product needs more iteration. It usually means one of two things: the product isn't solving a problem with enough urgency, or you're testing with the wrong users. Either diagnosis requires a strategic decision — pivot, reframe, or stop. Neither requires another six weeks of iteration.
The fix: define your shipping threshold before you start beta testing, not after. What engagement level, retention signal, or qualitative feedback pattern constitutes a green light? Set the criteria when you're thinking clearly. Don't leave it open-ended, because open-ended thresholds will always find reasons to wait.
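One way to make that pre-commitment concrete is to write the criteria down as data before the beta starts, then evaluate them mechanically. A minimal sketch — every metric name and threshold below is a hypothetical example, not a universal benchmark:

```typescript
// Pre-registered shipping criteria, written down BEFORE beta testing starts.
// Metric names and thresholds are hypothetical examples for one product.
const SHIP_CRITERIA: Record<string, number> = {
  day7Retention: 0.20,       // >= 20% of beta users active a week after install
  coreLoopCompletion: 0.50,  // >= 50% complete the core loop at least once
  weeklyActiveSessions: 3.0, // mean sessions per active user per week
};

// Compare observed beta metrics against the pre-registered thresholds.
// Ship only if every criterion is met; otherwise return the list of
// criteria that fell short, forcing a named diagnosis instead of an
// open-ended "iterate more".
function shippingDecision(
  metrics: Record<string, number>
): { ship: boolean; failures: string[] } {
  const failures = Object.entries(SHIP_CRITERIA)
    .filter(([name, threshold]) => (metrics[name] ?? 0) < threshold)
    .map(([name]) => name);
  return { ship: failures.length === 0, failures };
}

// Example: a lukewarm beta result.
const result = shippingDecision({
  day7Retention: 0.22,
  coreLoopCompletion: 0.41,
  weeklyActiveSessions: 3.5,
});
console.log(result.ship, result.failures); // false [ 'coreLoopCompletion' ]
```

The point isn't the code — it's that the thresholds exist in writing before the first beta metric comes in, so "not ready" has to mean a specific criterion failed, not a feeling.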
What the Build Actually Proved
Here's where the framing shifts.
Patrick Campbell, founder of ProfitWell, has argued that the Lean Startup and MVP movements "did more harm than good" — specifically because they encourage founders to ship products that are too underbaked to test the actual hypothesis. The result: founders conclude there's no market demand when the real problem is the product was too poor to generate a usable signal. They kill good ideas with bad MVPs.
That argument is worth sitting with. Because it implies something uncomfortable: shipping fast is not the same as learning fast. A product that's too rough to generate honest signal doesn't give you data — it gives you noise. And making decisions from noise is worse than not shipping at all.
An 18-month build, run with discipline, teaches things a quick-ship cycle doesn't. It teaches you how to write a functional spec when no one is reviewing it. How to run and interpret beta tests when the feedback is ambiguous. How to make a scope decision when all options feel wrong. How to navigate the shipping decision itself — the one that requires synthesising value risk, usability risk, market timing, and personal conviction simultaneously.
K. Anders Ericsson's research on deliberate practice is relevant here: expertise isn't accumulated through repetition alone, but through focused effort on the components of a skill that are hardest to develop. The shipping decision is one of the hardest PM skills to build. Most title-holders develop it by shipping and iterating. Some develop it by getting close to shipping, repeatedly, and learning precisely why they couldn't pull the trigger.
Both paths are valid. Only one produces a founder who can name their traps.
The Failure Is the Credential — If You Can Name It
Not shipping MoneTask was a failure. I'm not recasting it as a success.
But the 18 months of work — the discovery cycles, the beta test design, the scope decisions, the infrastructure choices (good and bad), and ultimately the analysis of why the product never launched — is a complete product development education. It covers more of the PM job description than most shipped portfolios do.
The question isn't whether you shipped. The question is whether you can articulate what you built, what you learned, and what you'd do differently. That's the interview answer. That's the credential.
If you've got an unshipped product in your history, don't bury it. Build the retrospective. Name the traps. The work you did is more legible than you think.
*I'm currently building Catalyst — applying everything MoneTask taught me. If you want to follow the build, check out my newsletter.*