
Performance is a design decision, not an optimization step

Why speed is determined long before audits and tooling enter the picture.

It is a familiar pattern in software development: build the features first, ensure the functionality works, and then, right before launch, schedule a “performance sprint” to make it fast.

We treat speed as a layer of polish, something to be applied after the structural work is done. But this approach rests on a flawed assumption: that performance is something you can add. In reality, performance is largely a result of what you choose not to include.

Once a system is architected around heavy dependencies and complex client-side logic, no amount of code splitting or image compression can truly fix it. You cannot optimize a fundamental architectural mismatch.

Speed is Architecture

Performance is not a metric to be chased; it is a constraint to be designed around.

When we view performance as a design decision, we acknowledge that every choice—from the hosting infrastructure to the UI framework—incurs a cost. The goal is not to get a perfect Lighthouse score, but to build a system that respects the user’s time and resources by default.

A site that loads instantly is not just “optimized.” It is respectful. It signals that the engineering team values efficiency and stability over developer convenience.

The Illusion of Bandwidth

We often build on fast machines, on stable fiber connections, in controlled environments. We forget that the public internet is hostile. Mobile networks fluctuate, latency spikes, and devices age.

If your design relies on optimal conditions to feel fast, it is fragile. A resilient system is designed to perform well even when conditions are poor. This is where the decision-making happens:

1. Budgets before builds

Before writing a line of code, we should establish explicit constraints. How much JavaScript is reasonable? What is the maximum acceptable Time to Interactive? Design decisions must fit within these budgets. If a feature pushes the bundle size over the limit, the solution isn’t to lazy-load the bloat; it’s to rethink the feature or the implementation.
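A budget is only a decision if something enforces it. As a minimal sketch, a CI step can total the built assets and fail the pipeline when they exceed the agreed limits. The directory name (`dist`), the file types checked, and the kilobyte limits here are illustrative assumptions, not prescriptions:

```python
# check_budget.py - fail the build when shipped assets exceed the budget.
# The "dist" directory and the limits below are illustrative assumptions.
import sys
from pathlib import Path

# Budgets in kilobytes, agreed on before the build exists.
BUDGETS_KB = {
    ".js": 150,   # total JavaScript sent to the client
    ".css": 50,   # total stylesheet weight
}

def total_kb(dist: Path, suffix: str) -> float:
    """Sum the size of every file under dist with the given suffix."""
    return sum(f.stat().st_size for f in dist.rglob(f"*{suffix}")) / 1024

def check(dist: Path) -> bool:
    """Print each category against its budget; return False on any overrun."""
    ok = True
    for suffix, limit in BUDGETS_KB.items():
        size = total_kb(dist, suffix)
        status = "OK" if size <= limit else "OVER BUDGET"
        print(f"{suffix}: {size:.1f} KB / {limit} KB ({status})")
        ok = ok and size <= limit
    return ok

if __name__ == "__main__":
    # Nonzero exit makes the CI job fail, so the budget is a hard gate.
    sys.exit(0 if check(Path("dist")) else 1)
```

The point of running this before launch day, on every commit, is that an overrun surfaces as a design conversation while the feature is still cheap to rethink.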

2. The cost of abstraction

Modern tooling offers incredible power, but often at the cost of abstraction layers that end up in the user’s browser. While developer experience is important, it should never come at the expense of user experience. We prefer solutions that ship less code, leaning on the platform’s native capabilities (HTML and CSS) whenever possible.

3. Subtract, don’t add

The fastest request is the one that is never made. The fastest code is the code that doesn’t exist. Instead of asking “what tool can make this faster?”, we ask “what can we remove without losing value?” Simplification is the most powerful optimization tool available.

Stability over Speed

Ultimately, we are not aiming for raw speed alone, but for consistency. A system that is predictably responsive builds trust. One that requires a loading spinner for basic interactions erodes it.

Long-term stability requires us to accept that we cannot control the client environment. We can only control what we send to it. By making conservative, thoughtful decisions about our architecture, we ensure that the system remains performant for years, not just on launch day.

Performance is not a step in the process. It is the process.