We all probably have a story or two about living with terrible tech decisions, and maybe we've made the odd one over the years as well (this is a safe space, no judgement here). But what actually constitutes a terrible tech decision? Defining that should help us avoid making them in the future.
I'm more interested in the how and why rather than the what.
1. Most tech failures don’t start with bad people or lazy work.
They begin with good intentions, limited time, and high-pressure environments.
No engineer wakes up planning to make poor architectural decisions, but when you're battling deadlines, resource constraints, and shifting product goals, fear runs rampant and entrenchment sets in. In many organisations, teams are already operating above capacity just keeping the lights on. In these conditions, mistakes are going to happen.
If you want to future-proof your tech stack, start by understanding the conditions that cause cracks, not just the cracks themselves.
Problem: Your team ships an MVP with hardcoded logic to meet a tight launch deadline. A year later, the business has scaled, but the brittle code is breaking under every new feature.
Solution: You make tech debt visible from day one and keep a running log, setting aside 10–20% of sprint time for gradual refactoring.
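What does a 'running log' look like in practice? Here's a minimal sketch, assuming you keep debt entries in a small script or module alongside the repo; the fields, names, and example entry are purely illustrative, not a prescribed format:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class DebtEntry:
    """One known shortcut, recorded on the day it is taken."""
    title: str                    # e.g. "Hardcoded pricing rules in order service"
    reason: str                   # why the shortcut was accepted
    impact: str                   # what breaks or slows down if it is never paid back
    owner: str                    # who will champion the fix
    logged: date = field(default_factory=date.today)
    target: Optional[str] = None  # when the team intends to chip away at it

# Example entry for the hardcoded MVP logic described above
debt_log = [
    DebtEntry(
        title="Hardcoded pricing rules in order service",
        reason="MVP launch deadline; proper rules engine deferred",
        impact="Every pricing change needs a code release",
        owner="Payments team",
        target="Next quarter, within the 10-20% refactoring budget",
    ),
]

for entry in debt_log:
    print(f"[{entry.logged}] {entry.title} (owner: {entry.owner})")
```

The format matters far less than the habit: the debt is named, owned, and visible to whoever plans the next sprint.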

2. Some tech debt is okay. Pretending it doesn't exist kills.
Obviously, allowing monumental amounts of tech debt to build up is not a good idea. However, not all tech debt is bad — some is even strategic. It can and will become dangerous when ignored.
You must be brave enough to name it, document it, and carve out the time to chip away at it. Otherwise, you're quietly shooting yourself in the foot. Don't let short-term velocity gains derail long-term scalability.
Problem: Your product manager reassures stakeholders that everything is ticking along nicely, but the backend team are 'duct-taping' endpoints to avoid breaking legacy APIs.
Solution: You schedule regular 'debt check-ins' where team members surface issues without fear.
3. If no one knows how it works, it’s dangerous.
Legacy systems that only one person understands, that nobody else knows how to fix, and that no one is therefore allowed to touch are not technical assets but operational liabilities.
Fear-led decision paralysis around these systems is often more damaging than the systems themselves. Document, decouple, and reduce single points of failure.
Uncertainty is the enemy of innovation.
Problem: Your 10-year-old payments system built in an outdated PHP version is still live; the only dev who understands it left two years ago.
Solution: You run an architecture audit and prioritise re-documenting and replatforming the riskiest systems, starting with revenue-critical ones.
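If it helps to make 'riskiest first' concrete, a rough sketch of how an audit might rank systems is below; the scoring weights and example systems are invented for illustration, not a formal methodology:

```python
from dataclasses import dataclass

@dataclass
class System:
    name: str
    revenue_critical: bool   # does money stop moving if this breaks?
    documented: bool         # is there usable, current documentation?
    bus_factor: int          # how many people can safely change it today?

    def risk_score(self) -> int:
        """Crude heuristic: higher score = audit and re-document first."""
        score = 3 if self.revenue_critical else 0
        score += 2 if not self.documented else 0
        score += 2 if self.bus_factor <= 1 else 0
        return score

inventory = [
    System("Payments (legacy PHP)", revenue_critical=True, documented=False, bus_factor=0),
    System("Marketing CMS", revenue_critical=False, documented=True, bus_factor=3),
    System("Order API", revenue_critical=True, documented=True, bus_factor=2),
]

for s in sorted(inventory, key=lambda s: s.risk_score(), reverse=True):
    print(f"score {s.risk_score()}: {s.name}")
```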

4. Fixing tech when moving fast is like changing a tyre at full speed.
Allow me a sporting analogy if you would: trying to change a hugely complex production system whilst going full speed in your business-as-usual processes is exactly like trying to do a tyre change on an F1 car whilst it's still racing.
There is a good reason why F1 teams perform pit stops — ultimately you finish faster than you would if you did not.
Change is difficult. Sometimes you have to slow down to go faster.
Problem: Your marketing campaign goes live, but the CMS crashes daily due to unscalable database queries no one had time to optimise.
Solution: You plan scheduled downtime, even if it means pushing a release back.
5. Working at constant load means nobody can fix 'stuff.'
Give your team visible goals and celebrate even the small victories, or risk burning them out. If your engineers are permanently operating at 100%, nothing gets improved, only maintained.
Technical recovery requires psychological safety and breathing space. So celebrate those small fixes, and remember that replacing the duct tape with durable solutions is a real route to sustainable progress.
Problem: Your team are constantly fixing bugs and dealing with outages, meaning there’s no time left to improve or upgrade anything.
Solution: You block out regular time in your team schedule for technical improvements like updates, automation, and internal tools, not just feature delivery.

6. When data runs out, your gut takes over.
No dataset is perfect.
Eventually, you'll need to trust your gut, which, let's be honest, is often just your experience speaking. That being said, gut instincts should be validated where possible.
Make reversible decisions. Build systems that flex and fail softly, not catastrophically.
Problem: You don’t have enough data to make a confident decision, but senior leadership still wants action by the end of the month.
Solution: You ship a smaller, low-risk version of the change first, track what happens, and make sure it can be rolled back if needed.
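One practical way to keep a change reversible is to put it behind a flag, so switching back is a config change rather than a redeploy. A minimal sketch, assuming a hypothetical `use_new_pricing` flag read from a local flags.json file (all names here are made up for illustration):

```python
import json
import os

def load_flags(path: str = "flags.json") -> dict:
    """Read feature flags from a config file; treat a missing file as 'everything off'."""
    if not os.path.exists(path):
        return {}
    with open(path) as f:
        return json.load(f)

def legacy_pricing(order_total: float) -> float:
    return order_total * 1.20    # the known-good path

def new_pricing(order_total: float) -> float:
    return order_total * 1.18    # the small, low-risk change being trialled

def calculate_price(order_total: float, flags: dict) -> float:
    """Route between old and new behaviour; rolling back is a one-line config change."""
    if flags.get("use_new_pricing", False):
        return new_pricing(order_total)
    return legacy_pricing(order_total)

if __name__ == "__main__":
    flags = load_flags()
    print(calculate_price(100.0, flags))
```

Pair the flag with whatever metrics you already collect, so you can see the smaller change's effect before committing to the full one.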
7. Chase the ‘shiny new thing,’ but be honest.
New technologies are exciting and often promising, but beware of the hype surrounding them. Whether it’s Rust, Scala, or some bleeding-edge AI software, always ask yourself: What if we’re wrong?
Bake in exits, build small first, and make it safe to change your mind. You never know when you'll have to break the emergency glass.
Problem: A CTO pushes for a new AI toolset before the team is trained. Productivity stalls, and integration fails.
Solution: You run a proof of concept with clear success/fail criteria and commit to walking away if it under-delivers.
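It helps to write the walk-away criteria down before the proof of concept starts, so stopping is a pre-agreed outcome rather than an argument. A minimal sketch, with thresholds and metric names invented purely for illustration:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Criterion:
    name: str
    target: float
    measured: Optional[float] = None   # filled in when the PoC ends

    def passed(self) -> bool:
        return self.measured is not None and self.measured >= self.target

# Agreed with stakeholders before the PoC begins (numbers are illustrative)
criteria = [
    Criterion("Tickets correctly auto-triaged (%)", target=80.0),
    Criterion("Team comfortable with the tool after two weeks (%)", target=70.0),
]

# Recorded at the end of the PoC
criteria[0].measured = 62.0
criteria[1].measured = 75.0

if all(c.passed() for c in criteria):
    print("Adopt: all criteria met.")
else:
    unmet = [c.name for c in criteria if not c.passed()]
    print(f"Walk away as agreed. Unmet criteria: {unmet}")
```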

8. Bad decisions are inevitable; staying stuck is not.
To echo the beginning of this guide, we all probably have a story or two about having to live with terrible tech decisions. The truth is that every experienced engineering leader has made decisions they regret.
The difference between sinking and recovering from such decisions is the presence of escape routes. Future-facing strategies aren't about perfection but about the ability to adapt.
In short, terrible tech decisions aren’t the end, but how you respond to them might be.
We are experts at helping businesses recover from terrible tech decisions. Get in touch or fill out the form below to have a chat about how we can help your business.