Tim Lewis on how to recover from terrible tech decisions

Here's how to survive and recover from poor tech calls with insights from Tim Lewis, Planer Sport's Chief Technology Officer and Co-Founder of Leeds Rust Meetup.

Tim leads digital innovation for products used by over 25 million sports fans each month, and with over 25 years’ experience in the sports technology sector, he knows a thing or two about terrible tech decisions.

We all probably have a story or two about having to live with terrible tech decisions, and maybe we've made the odd one over the years as well (this is a safe space, no judgement here!). But what actually constitutes a terrible tech decision? Defining that should help us avoid making them in the future.

I'm more interested in the how and the why than the what.

1. Most tech failures don’t start with bad people or lazy work.

They begin with good intentions, limited time, and high-pressure environments.

No engineer wakes up planning to make poor architectural decisions, but when you’re battling deadlines, resource constraints, and shifting product goals, fear runs rampant and entrenchment becomes rife. In many places, teams are already operating above capacity just keeping the lights on. In these conditions, mistakes are going to happen.

If you want to future-proof your tech stack, start by understanding the conditions that cause cracks, not just the cracks themselves.

Problem: Your team ships an MVP with hardcoded logic to meet a tight launch deadline. A year later, the business has scaled, but the brittle code is breaking under every new feature.

Solution: You make tech debt visible from day one and keep a running log, setting aside 10–20% of sprint time for gradual refactoring.

2. Some tech debt is okay. Pretending it doesn't exist kills.

Obviously, allowing monumental amounts of tech debt to build up is not a good idea. However, not all tech debt is bad — some is even strategic. It can and will become dangerous when ignored.

You must be brave enough to name it, document it, and carve out the time to chip away at it. Otherwise, you’re quietly shooting yourself in the foot. Don’t let short-term velocity gains derail long-term scalability.

Problem: Your product manager reassures stakeholders that everything is ticking along nicely, but the backend team are 'duct-taping' endpoints to avoid breaking legacy APIs.

Solution: You schedule regular 'debt check-ins' where team members surface issues without fear.

3. If no one knows how it works, it’s dangerous.

Legacy systems that only one person understands, that no one else knows how to fix, and that therefore no one is allowed to touch, are not technical assets but operational liabilities.

Fear-led decision paralysis around these systems is often more damaging than the systems themselves. Document, decouple, and reduce single points of failure.

Uncertainty is the enemy of innovation.

Problem: Your 10-year-old payments system built in an outdated PHP version is still live; the only dev who understands it left two years ago.

Solution: You run an architecture audit and prioritise re-documenting and replatforming the riskiest systems, starting with revenue-critical ones.

4. Fixing tech when moving fast is like changing a tyre at full speed.

Allow me a sporting analogy, if you will: trying to change a hugely complex production system whilst running your business-as-usual processes at full speed is exactly like trying to do a tyre change on an F1 car whilst it's still racing.

There is a good reason why F1 teams perform pit stops — ultimately you finish faster than you would if you did not.

Change is difficult. Sometimes you have to slow down to go faster.

Problem: Your marketing campaign goes live, but the CMS crashes daily due to unscalable database queries no one had time to optimise.

Solution: You plan scheduled downtime, even if it means pushing a release back.

5. Working at constant load means nobody can fix 'stuff.'

Give your team visible goals and celebrate even the small victories, or you risk burning them out. If your engineers are permanently operating at 100%, nothing gets improved, only maintained.

Technical recovery requires psychological safety and breathing space. So celebrate those small fixes, and remember that replacing the duct tape with durable solutions is a real route to sustainable progress.

Problem: Your team are constantly fixing bugs and dealing with outages, meaning there’s no time left to improve or upgrade anything.

Solution: You block out regular time in your team schedule for technical improvements like updates, automation, and internal tools, not just feature delivery.

6. When data runs out, your gut takes over.

No dataset is perfect.

Eventually, you’ll need to trust your gut, which, let’s be honest, is often just your experience speaking. That being said, gut instincts should be validated where possible.

Make reversible decisions. Build systems that flex and fail softly, not catastrophically.

Problem: You don’t have enough data to make a confident decision, but senior leadership still wants action by the end of the month.

Solution: You ship a smaller, low-risk version of the change first, track what happens, and make sure the change can be rolled back if needed.
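To make that idea concrete, here's a minimal sketch (in Rust, purely for illustration) of one common way to make a change reversible: putting the new behaviour behind a simple feature flag, so if the experiment under-delivers you flip the flag rather than roll back a release. The flag name, the checkout functions, and the discount logic are all hypothetical, not anything from a real stack.

```rust
use std::env;

// Hypothetical flag check: a real system would typically use a config
// store or a flag service rather than an environment variable.
fn new_checkout_enabled() -> bool {
    env::var("NEW_CHECKOUT").map(|v| v == "on").unwrap_or(false)
}

// Existing, trusted behaviour stays in place as the fallback.
fn legacy_checkout(order_total: f64) -> f64 {
    order_total
}

// The smaller, low-risk version of the change being trialled.
fn new_checkout(order_total: f64) -> f64 {
    order_total * 0.95 // e.g. a discount experiment
}

// The flag makes the decision reversible: switch it off and you're back
// on the legacy path, with no rollback deployment needed.
fn checkout(order_total: f64) -> f64 {
    if new_checkout_enabled() {
        new_checkout(order_total)
    } else {
        legacy_checkout(order_total)
    }
}

fn main() {
    println!("Total charged: {:.2}", checkout(100.0));
}
```

The design choice here is that the old path is never deleted until the new one has earned its keep, which is what makes the decision cheap to reverse.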

7. Chase the ‘shiny new thing,’ but be honest.

New technologies are exciting and often promising, but beware of the hype surrounding them. Whether it’s Rust, Scala, or some bleeding-edge AI software, always ask yourself: What if we’re wrong?

Bake in exits, build small first, and make it safe to change your mind. You never know when you'll have to break the emergency glass.

Problem: A CTO pushes for a new AI toolset before the team is trained. Productivity stalls, and integration fails.

Solution: You run a proof of concept with clear success/fail criteria and commit to walking away if it under-delivers.

8. Bad decisions are inevitable. Staying stuck is not.

To echo the beginning of this guide, we all probably have a story or two about having to live with terrible tech decisions. The truth is every experienced engineering leader has made decisions they regret.

The difference between sinking under such decisions and recovering from them is the presence of escape routes. Future-facing strategies aren’t about perfection but about the ability to adapt.

In short, terrible tech decisions aren’t the end, but how you respond to them might be.

We are experts at helping businesses recover from terrible tech decisions. Get in touch or fill out the form below to have a chat about how we can help your business.
