
The decision that made you $1M will cost you $5M at your next stage

Himanshu Sharma
Updated on: March 17, 2026


Most operational disasters don't happen because someone made a bad decision. They happen because someone kept a good decision running two or three years past its expiry date.

The Google Sheet your ops manager built at $2M revenue was a good decision. Fast, free, and everyone understood it. The shared Airtable base you set up when you hit 20 orders a day was a good decision. The WhatsApp group for field agents made total sense when you had six technicians. These aren't signs of bad management. They're signs of a business that was moving fast and making sensible calls with the information it had.

The problem isn't the decisions. It's what happens when a business outgrows them, and the people who built those systems refuse to acknowledge it.

The trap is psychological, not operational

Most operations managers are aware that their systems are under pressure. The Monday morning reconciliation that used to take 20 minutes now takes two hours. The new hire who should've been productive after a month is still getting handholding in month three. The field manager messages the warehouse for stock levels seven times before noon because she has no direct access to the inventory system.

They know. They're just not doing anything about it.

The person who built the system is often the business owner or a senior ops person who spent a lot of time setting it up. They know it better than anyone. They can fix it when it breaks and explain it to new staff.

But a system that works for the one person who built it is not the same as a system that works for a 35-person team.

What they're actually defending isn't an operational tool. It's the evidence that an earlier version of themselves made the right call. Admitting the system is broken feels like admitting the decision was wrong. It wasn't wrong. It just has an expiry date.

I had a call last year with the founder of a freight-forwarding company with 28 people and revenue around $9M. They'd been tracking shipments, customer accounts, and driver assignments across three spreadsheets and via WhatsApp and email. It had worked perfectly fine at $3M and 10 people.

At $9M and 28 people, it was held together by one operations coordinator, who was the only person who understood how the three spreadsheets connected. When I asked the founder whether the system was working, he said yes. When I asked how long it would take to get a complete picture of one customer's open shipments, he paused for several seconds before saying, "About an hour, maybe more."

What actually breaks, and when

Spreadsheets break when more than 4 or 5 people regularly update them. Concurrent edits overwrite each other, there's no record of who changed what or when, and complex formulas are opaque to everyone except the person who wrote them. At that point the spreadsheet stops being useful.

Spreadsheets are meant for analysis, not for use as a database.

Off-the-shelf SaaS tools break differently. If your process has three non-standard steps (and most businesses do), you either bend your operations to fit the software or you build workarounds. Those workarounds add up. After 18 months, the workarounds are 40% of how your team actually gets work done.

We've written in more detail about when that tipping point arrives, if you're facing the build-vs-buy decision.

Zapier and Make automations break when volume increases or when one platform updates its API. When something fails, nobody knows which tool in the chain caused it.

And after seven years of doing this work, the most reliable signal I've found is onboarding time: if a new hire takes three weeks to learn a process that should take three days, the process is the problem, not the hire.

The cost isn't what you think it is

I'm not going to run the full math here, because we've already done that. The short version is that the cost is almost always higher than it looks, and almost always larger than the cost of fixing it would have been.

What I want to focus on is why the cost accelerates, specifically at later stages.

At $3M revenue with 12 people, a broken process costs you 15 hours a week across the team. At $12M with 40 people, it's 60-70 hours a week, because more people are touching it and you've added new workarounds on top of old ones. Per person, that's the drag growing from about 1.25 hours a week to between 1.5 and 1.75. The cost didn't scale linearly with your revenue. It scaled faster.

The freight-forwarding founder I mentioned earlier: when we actually mapped the time cost across his team, it came to 31 hours per week. Because it was spread across seven people, nobody thought of it as a big deal.

At a conservative cost of around $40/hour, that was roughly $64,000 a year spent sitting inside three spreadsheets and a WhatsApp group. The custom operations tool we built cost $22,000 and took eight weeks. He got payback in four months.
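
If you want to sanity-check those figures, or run them against your own numbers, the arithmetic fits in a few lines. Here's a minimal back-of-envelope sketch in Python; the three inputs at the top are the figures from this example, and the only assumption beyond them is a 52-week year:

```python
# Back-of-envelope payback calculation for replacing a manual process.
# The inputs below are from the freight-forwarding example in this post;
# swap in your own team's numbers.

HOURS_LOST_PER_WEEK = 31   # hours the team loses to the broken system
HOURLY_COST = 40           # conservative loaded cost per hour, USD
BUILD_COST = 22_000        # one-off cost of the replacement tool, USD

weekly_cost = HOURS_LOST_PER_WEEK * HOURLY_COST
annual_cost = weekly_cost * 52  # assumes a 52-week year

# Weeks until the build pays for itself out of recovered time
payback_weeks = BUILD_COST / weekly_cost

print(f"Weekly cost of the broken system: ${weekly_cost:,.0f}")
print(f"Annual cost:                      ${annual_cost:,.0f}")
print(f"Payback period:                   {payback_weeks:.1f} weeks "
      f"(~{payback_weeks / 4.33:.1f} months)")
```

With these inputs it prints about $1,240 a week, roughly $64,000 a year, and a payback just under 18 weeks, which is where the four-month figure comes from.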

He'd been running those systems for three years.

What frustrates me about this pattern, and I see it on almost every intro call, is that the founder almost always knew something was wrong. They just kept telling themselves they'd fix it when things calmed down. Things don't calm down. Revenue goes up, the team grows, the broken system gets more broken, and the cost of fixing it compounds as the complexity increases.

The competitor problem nobody talks about

There is a version of this story that ends without a single dramatic failure. Just a slow, quiet degradation of margin as a competitor who built proper systems two years earlier starts taking your customers.

They can quote faster because their pricing tool pulls live inventory and cost data. You're quoting from a spreadsheet last updated on Thursday. They can onboard a new client in two days because their process is documented and automated. Your onboarding takes a week because three of the steps require manual intervention from your ops manager. Their team can handle 30% more volume without adding headcount. You can't.

None of this is dramatic. It's just slow. And it's almost impossible to trace back to the decision to keep running on systems built for a smaller business, because the connection between "we didn't fix our tools at $5M" and "we lost market share at $15M" doesn't feel obvious.

The companies that avoid this aren't better at operations strategy. They're just honest earlier. They look at what they built at $3M and ask, genuinely, whether it can carry them to $15M, before they're already at $12M and the answer is plainly no.

What to do if you're reading this and recognising your business

Start by taking a good look at where all the knowledge is stored. If the person who built your system packed up and left tomorrow, what would fall apart? If the answer is a lot, then your setup is more unstable than you realise.

The second step is being willing to separate "this decision was smart when I made it" from "this decision is still the right one." Both can be true. The first one doesn't require you to stick with the second one.

Building something custom doesn't have to mean a six-month project or a six-figure budget. We've written about what it actually costs, and the number that surprises most people is how much of their manual-process cost they recover within the first year. The projects we build through our WeWeb and Bubble development typically range from $8K to $50K, go live in 6 to 10 weeks, and are built to be maintained by your team.

If you want to pressure-test whether your current setup can actually carry you to your next revenue stage, book a free 30-minute call. We'll tell you honestly if you don't need anything. That happens more than you'd think.
