In the world of complex decision-making, organisations often rely on two distinct tools. On one hand, there is Simulation (‘What happens if…?’), allowing us to model uncertainty and test scenarios. On the other, there is Optimisation (‘What is the best choice?’), allowing us to find the ideal solution within constraints.
Separately, they are powerful. But when integrated, they unlock a new level of capability—moving from simple decision support to intelligent, autonomous systems.
At Decision Lab, we don’t believe there is one single best method for this integration. The ideal approach depends entirely on the business problem at hand. Below, we explore the three primary patterns we use to drive value for clients like Migros, FedEx, and Nestlé, and how these models contribute to building truly antifragile organisations.
Three Patterns of Integration
We generally view the integration of simulation and optimisation as a spectrum, moving from tactical decision support to full autonomy.
1. Optimisation within Simulation (Complex Decision Support)
In this pattern, a simulation models a large-scale system, such as a warehouse. When a complex, real-time decision is required, the simulation pauses to call a dedicated optimisation algorithm.
How it works: The algorithm solves the specific sub-problem, and the simulation continues, testing how that “optimal” decision performs under real-world uncertainty (like worker delays).
Case Study: For Migros, we utilised this method. Their warehouse simulation calls an optimisation algorithm to determine the most efficient trolley-picking route every time a new order arrives. This allows us to test the routing logic’s real-world impact on the system’s total throughput.
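The shape of this pattern can be sketched in a few lines of Python. The nearest-neighbour routing heuristic, the one-dimensional aisle positions, and the delay model below are illustrative stand-ins invented for this example, not the actual Migros algorithm:

```python
import random

def plan_route(start, pick_locations):
    """Stand-in for the routing optimiser: a nearest-neighbour heuristic
    over 1-D aisle positions (purely illustrative)."""
    route, current, remaining = [], start, list(pick_locations)
    while remaining:
        nxt = min(remaining, key=lambda loc: abs(loc - current))
        remaining.remove(nxt)
        route.append(nxt)
        current = nxt
    return route

def simulate_order(pick_locations, delay_prob=0.2, seed=None):
    """The simulation pauses to call the optimiser, then executes the
    'optimal' route under stochastic worker delays."""
    rng = random.Random(seed)
    route = plan_route(0, pick_locations)   # optimisation within simulation
    time, position = 0.0, 0
    for stop in route:
        time += abs(stop - position)        # deterministic travel time
        if rng.random() < delay_prob:       # random worker delay
            time += rng.uniform(1, 5)
        position = stop
    return route, time

route, total_time = simulate_order([8, 3, 12, 5], seed=42)
```

The key point is the division of labour: the optimiser answers the narrow routing question, while the surrounding simulation measures how that answer performs once uncertainty is layered on top.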
2. Optimisation controls Simulation (Strategic Design)
Here, the roles are reversed. An external optimisation wrapper searches for the best strategic solution, such as a factory layout or supply chain network.
How it works: For every solution the optimiser proposes, it uses the simulation as a high-fidelity “evaluation function” to test performance against stochastic conditions.
Case Study: For DataForm Lab, an optimisation model proposed various wind farm layouts. Our simulation then tested each layout against uncertain wind and wave conditions to calculate true energy output. The optimiser used this feedback to find the next, better solution.
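A minimal sketch of this pattern, with the wind-farm physics reduced to a toy formula: the candidate "layouts" are just turbine spacings, and the wake model and numbers are invented for the example. One real technique does appear here, though: common random numbers, where every candidate is evaluated against identical sampled scenarios so that sampling noise cannot distort the comparison.

```python
import random

def simulate_energy(spacing, rng, n_scenarios=200):
    """High-fidelity evaluation function stand-in: mean energy proxy for a
    layout (turbine spacing in metres) across sampled wind conditions.
    Toy model: wider spacing reduces wake losses."""
    total = 0.0
    for _ in range(n_scenarios):
        wind = max(rng.gauss(9.0, 2.0), 0.0)        # sampled wind speed
        wake_loss = 0.3 / (1.0 + spacing / 100.0)   # invented wake model
        total += wind ** 3 * (1.0 - wake_loss)      # power scales with v^3
    return total / n_scenarios

def optimise_layout(candidates, seed=0):
    """Optimiser loop: each proposed layout is scored by the simulation."""
    def score(spacing):
        # common random numbers: every candidate faces the same scenarios
        return simulate_energy(spacing, random.Random(seed))
    return max(candidates, key=score)

best = optimise_layout([100, 300, 500, 700])
```

Here the optimiser is an exhaustive search over four candidates; in practice the outer loop would be a metaheuristic or gradient-free optimiser proposing new candidates from the simulation's feedback.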
3. Simulation trains Optimisation (The Autonomous Future)
This is where we enter the realm of the Digital Twin and Reinforcement Learning (RL). The simulation acts as a high-speed, risk-free training environment.
How it works: A machine learning agent interacts with the simulation millions of times, learning an ‘optimal policy’ for making autonomous decisions.
Case Study: For FedEx, we built a simulator for their linehaul operations. An AI agent was trained inside this simulator to learn the optimal policy on when to “cancel, delay, or add” linehauls based on uncertain package volumes, dramatically improving efficiency.
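To make the training loop concrete, here is a minimal tabular Q-learning sketch against a toy "linehaul" simulator. The state is a demand level, the actions mirror the cancel/keep/add decision, and all costs and probabilities are hypothetical numbers chosen for the example, not FedEx figures:

```python
import random
from collections import defaultdict

ACTIONS = ["cancel", "keep", "add"]

def step(demand, action, rng):
    """Toy simulator: pay 4 per truck run, 10 per unit of unmet demand.
    All numbers are hypothetical."""
    trucks = {"cancel": 0, "keep": 1, "add": 2}[action]
    reward = -4.0 * trucks - 10.0 * max(demand - trucks, 0)
    return rng.choice([0, 1, 2]), reward       # uncertain next-day volume

def train(episodes=5000, alpha=0.1, gamma=0.9, eps=0.1, seed=1):
    """The agent interacts with the simulator thousands of times,
    learning a policy mapping demand level to a linehaul decision."""
    rng = random.Random(seed)
    q = defaultdict(float)
    demand = 1
    for _ in range(episodes):
        if rng.random() < eps:                 # explore
            action = rng.choice(ACTIONS)
        else:                                  # exploit current estimate
            action = max(ACTIONS, key=lambda a: q[(demand, a)])
        nxt, reward = step(demand, action, rng)
        target = reward + gamma * max(q[(nxt, a)] for a in ACTIONS)
        q[(demand, action)] += alpha * (target - q[(demand, action)])
        demand = nxt
    return {d: max(ACTIONS, key=lambda a: q[(d, a)]) for d in (0, 1, 2)}

policy = train()
```

Because the simulator is risk-free and fast, the agent can make millions of such trial decisions; the learned policy is then reviewed by experts before anything touches the real network.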
Building Antifragility: Beyond Resilience
Why go through the effort of building these combined models? It isn’t just about efficiency; it is about survival and growth.
A key philosophy at Decision Lab is Antifragility. While resilient systems merely withstand shock, antifragile systems improve because of it. By integrating simulation and optimisation, we create a “Digital Twin” that acts as a long-term asset.
We use these models to perform rigorous Sensitivity Analysis—identifying which inputs drive outcomes—and to stress-test operations against millions of potential scenarios. This allows organisations to design supply chains and operations that are prepared not just for the average day, but for uncertainty and volatility.
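As a toy illustration of one-at-a-time sensitivity analysis, the sketch below perturbs each input of a small model by ±10% and ranks inputs by how much the output swings. The cycle-time model and its numbers are invented for the example:

```python
def cycle_time(pick_speed, delay, order_size):
    """Toy model of order cycle time in minutes (illustrative only)."""
    return order_size / pick_speed + delay

base = {"pick_speed": 2.0, "delay": 1.0, "order_size": 10.0}

def sensitivity(model, base, pct=0.1):
    """One-at-a-time analysis: perturb each input by +/-pct around the
    base case and record the resulting swing in the output."""
    swings = {}
    for k in base:
        lo = dict(base); lo[k] = base[k] * (1 - pct)
        hi = dict(base); hi[k] = base[k] * (1 + pct)
        swings[k] = abs(model(**hi) - model(**lo))
    return swings

swings = sensitivity(cycle_time, base)
# inputs ranked from most to least influential
ranked = sorted(swings, key=swings.get, reverse=True)
```

Rankings like this tell us which inputs deserve careful data collection and which can safely be held at nominal values, shrinking the scenario space before the expensive simulation runs begin.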
- Read more: Explore our thoughts on Engineering the Antifragile Pharma Supply Chain.
- Industry Insight: See how we align with the Gartner Experiment-Driven Supply Chain Planning approach.
Navigating the Challenges
Every powerful methodology has trade-offs. We mitigate these through a rigorous Verification & Validation (V&V) process.
- Computational Cost: Evaluating thousands of simulation runs is intensive. We use sensitivity analysis to identify key variables early, reducing the search space and focusing computational effort where it matters.
- Data Dependency: “Garbage in, garbage out” applies doubly here. We don’t just use average values; we statistically analyse historical data to find correct probability distributions (e.g., ensuring orders follow specific peak-and-trough patterns, not just flat averages).
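The difference between a flat average and a fitted pattern can be shown with a small sketch. The hourly order counts below are invented, and Poisson arrivals are just one plausible modelling choice; the point is that a per-hour rate preserves the daily peak that a single average flattens away:

```python
import math
import random
from statistics import mean

# Hypothetical hourly order counts over three days (rows = days)
history = [
    [3, 5, 14, 16, 6, 4],
    [2, 6, 15, 18, 5, 3],
    [4, 4, 13, 17, 7, 4],
]

# Flat-average model: a single rate for the whole day
flat_rate = mean(c for day in history for c in day)

# Per-hour model: one rate per hour, preserving the peak-and-trough shape
hourly_rates = [mean(day[h] for day in history) for h in range(6)]

def sample_day(rates, rng):
    """Draw a synthetic day of orders, one Poisson sample per hour
    (Knuth's method, stdlib only)."""
    def poisson(lam):
        limit, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                return k
            k += 1
    return [poisson(r) for r in rates]

rng = random.Random(0)
flat_day = sample_day([flat_rate] * 6, rng)     # misses the midday peak
peaked_day = sample_day(hourly_rates, rng)      # reproduces it
```

A simulation fed with `flat_day`-style inputs will understate congestion at the peak and overstate it in the troughs, which is exactly the "garbage in" failure mode this step guards against.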
The ‘Human-in-the-Loop’
Ultimately, we build these models with you, not just for you.
Our methodology relies on a Human-in-the-Loop (HITL) framework. Whether we are helping Gousto compress development time for routing logic from months to days, or helping Nestlé optimise warehouse slotting, the goal is the same: to present insights that empower expert judgment, not replace it.
Ready to build your Digital Twin?
To ensure a successful project, we look for four preliminary conditions:
- A clear, quantifiable business problem.
- Access to operational and historical data.
- Dedicated engagement from your Subject Matter Experts (SMEs).
- Clearly defined system boundaries.
If you are ready to move beyond simple guesswork and start engineering an antifragile operation, Contact Decision Lab today.
