Business language often assumes the world is stable, measurable, and controllable. In complex contexts, words like maximize, minimize, and optimize quietly overclaim what can be known or achieved. A more honest approach treats decisions as hypotheses, focuses on direction rather than endpoints, and keeps learning open as situations change.
The language we reach for too easily
In business articles and marketing materials, certain words appear with striking regularity: maximize, minimize, optimize. They are used confidently and rarely questioned.
How do you maximize return on investment?
How do you minimize staff turnover?
How do you optimize team performance?
The problem is not the intent behind these questions. Wanting better outcomes is reasonable. The problem is the language itself. It assumes that the world is sufficiently stable, measurable, and controllable for these ideas to make sense.
In a complex world, that assumption does not hold.
What complexity allows us to know
In a complex environment, how would you ever know that you have maximized, minimized, or optimized anything? You cannot. There is no clear endpoint, no known ceiling, no agreed optimum waiting to be discovered.
All you can ever observe is movement. Something increased or decreased. Perhaps noticeably, perhaps marginally. And even that judgment depends on where you are standing, when you are looking, and what you choose to pay attention to.
This is why so much business language quietly overclaims.
“Simple strategies to maximize profit” would be more honest as “Simple strategies that may increase profit”.
“The best ways to minimize cost overruns” would become “Some possible ways to reduce cost overruns”.
“How to optimize employee performance” might be rephrased as “Some potential ways to improve employee performance”.
These versions sound less decisive, but they are more truthful. They leave room for uncertainty, context, and learning. They acknowledge that outcomes emerge rather than being engineered.
Optimizing machines, not organizations
Optimization works when the system is bounded, repeatable, and largely predictable. You can optimize a machine. You can tune a process under controlled conditions. You can often make sensible trade-offs and say, for now, this configuration performs better than that one.
Organizations are different. They are living, social systems. Relationships, histories, power, emotion, incentives, and local meaning shape them. What improves performance in one context may degrade it in another. What reduces turnover this year may store up fragility for the next.
Treating organizations as if they can be optimized in the same way as machinery flattens this reality. It encourages overconfidence and discourages curiosity. It privileges neat answers over careful judgment.
Decisions as hypotheses, not solutions
In complex situations, there is no reliable way of knowing in advance whether a decision is the best one, or even whether it is a good one. Conditions change, time passes, and action itself alters the situation. The value of a decision can only ever be evaluated retrospectively, and even then only partially and provisionally.
Seen this way, decisions are not final answers to be optimized. They are provisional moves in an unfolding situation. Each decision expresses assumptions about how the system might respond, assumptions that can only be explored through action.
Decisions are therefore better treated as hypotheses to be tested, reflected on, and revised, rather than as solutions to be perfected.
This also exposes the limits of optimization language. If decisions are hypotheses, there is nothing meaningful to optimize in advance. There are only directions to explore, signals to pay attention to, and consequences to reflect on later, including the unintended ones that inevitably arise.
Living with direction rather than certainty
A more grounded approach in complex environments is to think in terms of direction rather than destination. Not “Have we optimized?”, but “What seems to be improving, and at what cost?” Not “Is this the best decision?”, but “What are we learning, and what might we need to adjust?”
This way of thinking is less comforting. It offers fewer guarantees and weaker claims. But it is also more honest.
In complex situations, progress rarely comes from finding the optimum. It stems from staying attentive, remaining open to challenge, and being willing to revise our thinking as the world responds to our actions.
As Dave Snowden's Cynefin framework makes clear, different kinds of situations call for very different ways of acting.
In clear situations, where cause and effect are obvious and repeatable, best practice makes sense. There is a known right way of doing things. Standard procedures, checklists, and rules work well because the conditions are stable and predictable.
In complicated situations, cause and effect still exist, but they are not obvious. Expertise and analysis are required. Here, Snowden prefers the term good practice. There may be several valid answers, depending on the context, and judgment matters more than rules.
In complex situations, the logic changes completely. Cause and effect can only be understood after the fact. Outcomes emerge from interactions rather than from plans. In this domain, best practice does not just fail; it becomes dangerous. Applying predefined solutions suppresses learning and creates a false sense of control.
Instead of best or better practice, complex environments require emergent practice. We act, observe what happens, and adjust. Small, safe-to-fail experiments replace grand plans. Learning comes from feedback, not from compliance.
This mirrors the problem with optimization language. Just as there is no meaningful way to optimize a complex system in advance, there is no meaningful way to define best practices in advance. Practice, like outcomes, emerges through engagement with the situation itself.
In complex contexts, the question is not “What is the best practice?” but “What are we learning, and how might we need to adapt?”
In complex situations, we should choose our words more carefully and notice what they imply. We should frame decisions as experiments, watch what changes over time, and adjust as we learn. Rather than chasing perfect answers, we should stay attentive to direction, consequences, and the signals the system gives back.
