Tag: simulation

  • Decision Intelligence in Route Optimisation


    A 6-Week PoC with FedEx European Linehaul

    Executive Summary

    Decision Intelligence moves an organisation beyond the fixed-plan trap toward proactive, automated resilience in route optimisation. By evaluating the strategic trade-offs between explainable Stochastic Programming and scalable Reinforcement Learning, we proved that move-level agility is the key to maintaining flow in a high-uncertainty environment.

    Key Takeaways:

    • Beyond Rigid Scheduling: Shifting from historical templates to dynamic, operational-time decision-making to maximise capacity utilisation.
    • The Technical Showdown: Comparing the audit trails of Stochastic Programming against the autonomous adaptability of Reinforcement Learning.
    • Predictive Simulation: Utilising a road-based “digital sandbox” to test courses of action and mitigate risks before committing resources.
    • Tangible ROI: Delivering financial returns by improving linehaul utilisation and significantly reducing the need for costly ad-hoc transport.

    The Challenge: The Friction of Fixed Planning

    In the high-stakes corridors of European logistics, fixed plans are often the first casualty of reality. For an Operations Director managing a distribution network across the UK and EU, the daily friction is visceral. You are constantly forced to ask: “Should I delay this trailer, so it leaves full, or stick to the schedule? Do I need to commission an expensive ad-hoc truck to cover this surge, or will the bottleneck clear itself?”

    When package volumes fluctuate unpredictably at major hubs, static schedules become more than just an inconvenience—they become a drain on margins and a threat to service levels. At Decision Lab, we operate under a foundational truth: the success of an organisation is nothing but the sum of all its decisions. To help global leaders move beyond the fixed-plan trap, we conducted a six-week Proof of Concept (PoC) with FedEx. This project tackled real-world complexity head-on, proving that Decision Intelligence is the key to transitioning from reactive firefighting to proactive, automated resilience.

    Static Schedules are the Enemy of Efficiency

    The core challenge identified within FedEx’s European network was the inherent limitation of pre-defined linehaul schedules. These schedules were designed for averages, whereas logistics are often defined by exceptions. When incoming and outgoing package volumes at European hubs diverged from the forecast, rigid plans could not adapt.

    A dynamic approach, powered by operational-time decision-making, is the only way to maintain flow in a volatile environment. By rerouting assets and scheduling departures based on real-time parcel traffic rather than historical templates, an organisation can achieve step-change improvements in capacity utilisation.

    Our expertise in AI, ML, simulation, and mathematical optimisation helps organisations cut through complexities in strategic, tactical and operational processes.

    The Solution: Bridging the Gap with Decision Intelligence

    The choice between technical approaches is rarely straightforward; in this case it was a strategic balancing act between Explainability and Scalability. During our PoC, we evaluated two competing methodologies: explainable Stochastic Programming and scalable Reinforcement Learning (RL).

    | Feature | Stochastic Programming | Reinforcement Learning (RL) |
    | --- | --- | --- |
    | Primary Strength | Fast solving speed; mathematically explainable and provable. | Reacts to high uncertainty using World Models and Graph Neural Networks. |
    | Logic Basis | Locates the best strategy to optimise expected outcomes over uncertainty. | Uses a dynamics model to predict the optimum next action. |
    | Adaptability | Multi-objective handling: uniquely suited for balancing cost vs. customer service levels. | Observation-size invariant: handles environments with variable data lengths and network nodes. |
    | Strategic Risk | Consulting intensive: very sensitive to human-built heuristics, which are expensive and time-consuming to develop. | Compute intensive: requires significant hardware resources for training the World Model. |

    While Stochastic Programming offers a clear audit trail for every decision, RL provides the adaptability required for massive, interconnected networks. The right choice depends on whether your organisation prioritises a provably optimal solution or a highly performant, best-effort solution that can autonomously learn the shifting dynamics of global markets.
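    To make the stochastic-programming mindset concrete, here is a minimal, purely illustrative sketch in plain Python: instead of planning for the average day, each candidate decision is scored against a set of demand scenarios and the option with the best expected outcome is chosen. All capacities, costs, and scenario volumes below are hypothetical, and a real model would use a solver rather than enumeration.

```python
import statistics

# Hypothetical, equally likely demand scenarios -- all figures illustrative.
TRAILER_CAPACITY = 100
AD_HOC_COST = 500          # cost of an extra truck for overflow parcels
DELAY_COST_PER_SLOT = 60   # service-level penalty per slot of delay

# SCENARIOS[s][t] = cumulative parcels available if we depart at slot t
SCENARIOS = [
    [60, 85, 120],   # surge scenario
    [40, 55, 70],    # quiet scenario
    [55, 75, 95],    # typical scenario
]

def expected_cost(depart_slot: int) -> float:
    """Expected cost of committing to one departure slot across scenarios."""
    costs = []
    for arrivals in SCENARIOS:
        loaded = min(arrivals[depart_slot], TRAILER_CAPACITY)
        overflow = max(arrivals[depart_slot] - TRAILER_CAPACITY, 0)
        wasted_capacity = TRAILER_CAPACITY - loaded
        costs.append(
            wasted_capacity * 2                 # cost of shipping air
            + (AD_HOC_COST if overflow else 0)  # emergency transport
            + depart_slot * DELAY_COST_PER_SLOT
        )
    return statistics.mean(costs)

# Optimise the expected outcome over uncertainty, not the average scenario.
best_slot = min(range(3), key=expected_cost)
print(best_slot, expected_cost(best_slot))
```

    Note how the decision hedges across all three scenarios at once: the late slot looks attractive in the surge scenario alone, but its expected cost across the full scenario set rules it out.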

    Don’t Just Predict—Simulate the Impact

    One of the most powerful tools developed for FedEx was a road-based, hub-to-hub package movement simulator. This provides a digital sandbox where controllers can explore alternative COAs (Courses of Action) before committing resources.

    Our completely data-driven deployment method allows us to build these simulations without the months of manual coding traditionally required. By accessing relevant operational and transport data directly, we can simulate supply chain environments to predict the ripple effects of a delay or reroute.

    This tool predicts the impact of different actions, helping to mitigate risks and optimise routes.
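    The shape of such a sandbox can be sketched in a few lines of Python. This toy simulator (not the FedEx system; every parameter is invented) lets a controller compare two courses of action, departing on a fixed four-hour schedule versus holding each trailer until it is full, against the same uncertain arrival stream:

```python
import random

def simulate(policy: str, seed: int = 7, hours: int = 24,
             capacity: int = 100, mean_arrivals: int = 30) -> dict:
    """Toy hub-to-hub simulator: parcels arrive each hour and a trailer
    departs per the chosen course of action. All figures are illustrative."""
    rng = random.Random(seed)
    waiting = departures = shipped = 0
    for hour in range(hours):
        waiting += rng.randint(0, 2 * mean_arrivals)  # uncertain volume
        depart = (
            waiting >= capacity if policy == "hold_until_full"
            else hour % 4 == 3                        # fixed 4-hour schedule
        )
        if depart:
            load = min(waiting, capacity)
            waiting -= load
            shipped += load
            departures += 1
    return {"departures": departures, "shipped": shipped,
            "avg_load": shipped / departures if departures else 0.0}

for coa in ("fixed_schedule", "hold_until_full"):
    print(coa, simulate(coa))
```

    Running both courses of action against identical demand makes the trade-off explicit: holding guarantees full trailers at the cost of waiting time, which is exactly the dilemma in the Operations Director’s question above.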

    Data Maturity is the Ultimate Competitive Moat

    For large-scale firms with a £200M+ turnover, the transactional backbone—usually an ERP or MRP system like SAP or Oracle—is necessary but insufficient. To achieve true antifragility, you must layer Decision Intelligence over these systems.

    Antifragility is the ability to not just survive volatility, but to actually improve because of it. By utilising a World Model within an RL framework, the system treats every fluctuation in package volume as a learning opportunity, refining its dynamics model to better anticipate future shocks. This requires three layers of data maturity:

    • Strategic Level: Long-term high-level routes, fleet capacity, and cost-per-mile data.
    • Operational Level: Real-time visibility into items currently loaded or waiting at the depot.
    • Historical Level: Deep archives of how volumes fluctuated in similar time slots in the past.

    The Result: Antifragility and Bottom-Line Returns

    In the C-suite, the value of AI is measured by the bottom line. The FedEx project was not an academic exercise; it was focused on delivering the financial returns demanded by an industry with tight margins. The PoC demonstrated that an autonomous planning agent directly impacts:

    • Improved Linehaul Utilisation: Driving higher Overall Equipment Effectiveness (OEE) across the fleet.
    • Reduced Rescheduling: Eliminating the administrative friction and cost of mid-stream plan changes.
    • Minimised Ad-hoc Linehauls: Directly de-risking Operational Expenditure (OPEX) and informing more accurate Capital Expenditure (CAPEX) by reducing the need for emergency transport.

    Conclusion: Toward the Global Digital Twin

    The ultimate evolution of this journey is an advanced road-based package movement digital twin. By connecting multiple hubs in real-time, organisations can create a living model of their entire network that learns, adapts, and optimises itself.

    What is the sum of your organisation’s decisions? How many of your current logistics choices are being left to a fixed plan that no longer fits your reality? In a world of increasing volatility, the goal is no longer just to have a plan—it is to have a system that provides decision clarity and reliable value.

    Transform your logistics operations today. Reach out directly via our contact page, or connect with us on LinkedIn to start a conversation about de-risking your future.


  • Why LLMs Aren’t Enough: Engineering Antifragile Operations with Composable Decision Intelligence


    As we navigate the technological landscape of 2026, Generative AI has undoubtedly transformed the way we interact with information. Chatbots and Large Language Models (LLMs) have proliferated across enterprise software, streamlining communication and automating basic workflows. However, for operations and supply chain leaders in complex, capital-intensive industries like FMCG, Automotive, and Retail manufacturing, a stark reality is emerging: LLMs are not a silver bullet.

    While language models excel at processing text, they cannot single-handedly optimise a global supply chain network, nor can they provide the quantitative assurance needed to de-risk a £50m factory expansion. When dealing with physical realities, extreme market volatility, and fragmented legacy systems, text prediction is insufficient.

    By 2026, 75% of Global 500 companies will apply decision intelligence practices

    Gartner

    The definitive competitive edge in 2026 belongs to those looking beyond Generative AI toward Decision AI. It belongs to organisations thoughtfully advancing their tech stack and building on established capabilities to embrace the architecture of a Composable Decision Intelligence Platform (DIP).

    The Industrial Reality: High Stakes and High Volatility

    Traditional planning systems struggle to keep pace with today’s agile, accelerated business environment and the hazards that come with it. Supply chain and operations leaders are caught in a crossfire of overlapping challenges, two of the most critical being:

    • Demand & Supply Volatility: SKU proliferation, shifting consumer behaviours, and frequent supply chain disruptions are breaking static planning models. The inability of legacy systems to cope with this extreme volatility inevitably results in poor service levels, excess inventory, and spiralling costs.
    • High-Stakes CAPEX Uncertainty: Securing funding for major capital investments—whether a new automated line, a facility expansion, or rationalising a post-merger manufacturing network—requires robust, data-driven justification. Without quantitative assurance, it is incredibly difficult to de-risk these investments and guarantee ROI.

    Solving these multi-dimensional problems requires more than just analysing past data; it requires a platform capable of simulating the future and discovering the optimal path forward.

    The Path to Implementation: A Composable Architecture

    At Decision Lab, we deliver Decision Intelligence to help leaders master this uncertainty. We achieve this not through a rigid, black-box AI model, but by building a Composable Decision Intelligence Platform based on responsible AI TRiSM principles.

    Composability is the principle that enables businesses to be agile. Rather than relying on a single vendor’s inflexible suite, a composable DIP orchestrates best-in-class, modular capabilities that ingest data from fragmented ERP, MES, and WMS systems. This creates a unified, dynamic view—an AI Simulation Twin.

    The Strategic Advantage of the AI Simulation Twin

    Instead of waiting years for a fully instrumented, hardware-dependent Digital Twin, leading organisations are accelerating their time-to-value by deploying an AI Simulation Twin.

    Traditional Digital Twin programmes often stall in pilot purgatory due to immense IoT integration challenges, prohibitive hardware costs, and fragmented legacy data pipelines. A Simulation Twin, while still ingesting real data, fundamentally bypasses these immediate infrastructure hurdles. It delivers the core predictive and prescriptive advantages now—providing a high-fidelity virtual environment to solve urgent CAPEX and operational bottlenecks—while your physical IoT maturity can be developed as a separate, parallel track. This decoupling ensures you realise ROI in months, rather than years, before moving into the four pillars of the platform:

    Infographic of the four pillars of a composable decision intelligence platform.

    1. The Cognitive Engine: Autonomous AI Agents

    Agentic AI serves as the reasoning layer of the platform. These agents can interpret complex scenarios, model market volatility, and process multi-tiered supply chain dynamics, translating raw data into actionable context.

    2. The Virtual Sandbox: Simulation

    To understand a complex physical network, you must be able to interrogate it and test it. Practically, that means replicating it. We use simulation to build a high-fidelity digital twin environment, employing appropriate technologies, such as AnyLogic’s multi-method capabilities. A simulation maps constraints, machines, and distribution nodes, providing the holistic view necessary to test what-if scenarios safely. It answers critical CAPEX questions before money is spent.
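    As a hedged illustration of the “test before you spend” idea (stdlib Python with invented rates, not an AnyLogic model), the sketch below asks a classic CAPEX question: would a second production line actually raise throughput, or is the plant arrival-limited?

```python
import random

def simulate_throughput(n_lines: int, hours: int = 5000, seed: int = 42) -> float:
    """Toy what-if sandbox: jobs queue for `n_lines` identical lines, each
    completing one job per hour; arrivals are uncertain. Rates are invented."""
    rng = random.Random(seed)
    queue = completed = 0
    for _ in range(hours):
        queue += rng.choice([0, 1, 2, 3])   # ~1.5 arrivals/hour on average
        served = min(queue, n_lines)        # each line finishes 1 job/hour
        queue -= served
        completed += served
    return completed / hours

# Answer the CAPEX question virtually, before any money is spent:
print("1 line:", simulate_throughput(1), "jobs/h")
print("2 lines:", simulate_throughput(2), "jobs/h")
```

    Here the single line saturates at roughly one job per hour while demand averages about 1.5, so the simulation shows the second line paying for itself; with different arrival rates it would just as clearly show the expansion being wasted.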

    3. The Mathematical Engine: Optimisation

    Where simulation shows you what could happen, optimisation dictates what should happen. For us, that means employing mathematical optimisation, such as Gurobi’s world-class mathematical solver, to cut through millions of potential permutations. It discovers the mathematically optimal production schedules and inventory policies, maximising throughput and service levels while minimising cost. The key is timeliness: an answer is no good if it arrives after the decision had to be made. Gurobi’s solving speed is key here (see Gurobi’s white paper on solver speed).
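    A toy example of the principle, in plain Python. A real deployment would hand a model like this to a solver such as Gurobi; here the instance is deliberately small enough to enumerate every production sequence. Job durations and priority weights are hypothetical.

```python
from itertools import permutations

# Illustrative single-line scheduling toy: choose the production sequence
# minimising total weighted completion time. All figures are invented.
jobs = {"A": (3, 10), "B": (1, 8), "C": (2, 2)}  # name: (hours, priority weight)

def weighted_completion(order) -> int:
    """Sum of each job's finish time multiplied by its priority weight."""
    t = cost = 0
    for name in order:
        hours, weight = jobs[name]
        t += hours                  # job finishes at cumulative time t
        cost += weight * t
    return cost

# Brute-force search over all 3! sequences; a solver does this at scale.
best = min(permutations(jobs), key=weighted_completion)
print(best, weighted_completion(best))
```

    The enumerated answer matches the classic weighted-shortest-processing-time intuition: high-priority, short jobs go first. At realistic scale the permutation count explodes, which is precisely why a dedicated solver is needed.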

    4. The Continuous Learning Loop: Reinforcement Learning

    This is where the platform moves from a passive analytical tool to an active operational asset. By applying Reinforcement Learning, specifically leveraging AgileRL, a platform can learn from real-time feedback. It continually experiments within the simulation, discovering new strategies to navigate supply shocks or demand spikes as they happen.
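    The learning loop can be illustrated with tabular Q-learning in stdlib Python (a production system would use a library such as AgileRL with a far richer state space). In this sketch the agent restocks a depot inside a toy simulator with volatile demand; every figure below is hypothetical.

```python
import random

rng = random.Random(0)
ACTIONS = [0, 5, 10]                        # units to order each step
Q = {(s, a): 0.0 for s in range(4) for a in ACTIONS}

def bucket(stock: int) -> int:
    """Discretise stock level into 4 coarse states."""
    return min(stock // 5, 3)

stock = 10
for step in range(20000):
    s = bucket(stock)
    # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
    if rng.random() < 0.1:
        a = rng.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=lambda x: Q[(s, x)])
    demand = rng.choice([0, 3, 6, 9])       # volatile demand from the simulator
    sold = min(stock + a, demand)
    stock = min(stock + a - sold, 19)
    reward = 4 * sold - a - stock           # margin minus order and holding cost
    s2 = bucket(stock)
    # Standard Q-learning update from simulated feedback.
    Q[(s, a)] += 0.1 * (reward + 0.9 * max(Q[(s2, x)] for x in ACTIONS) - Q[(s, a)])

policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(4)}
print(policy)
```

    The point is the feedback loop, not the toy numbers: every simulated fluctuation updates the value estimates, so the learned policy keeps adapting as conditions shift.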

    Engineering the Antifragile Supply Chain

    The ultimate goal of implementing a Composable Decision Intelligence Platform is to shift operations from a state of fragility to one of Antifragility.

    A robust system merely survives a shock. An antifragile operational system improves when exposed to volatility. When a sudden supply chain disruption occurs, the reinforcement learning algorithms immediately assess the new reality within the simulation, trigger the optimisation engine to recalculate the best path, and deploy autonomous agents to orchestrate a self-adapting response. Relying on singular AI models or monolithic ERPs to solve complex physical problems is being consigned to the past.

    For leaders navigating constant disruption, true agility requires an adaptable, composable ecosystem. By implementing a Decision Intelligence Platform, you gain the foresight not just to predict the future, but to engineer your position—a compelling competitive advantage now and for the future.

    To find out more, check out our case studies or contact us!

  • The Power Couple of Decision Science: Integrating Simulation and Optimisation


    In the world of complex decision-making, organisations often rely on two distinct tools. On one hand, there is Simulation (‘What happens if…?’), allowing us to model uncertainty and test scenarios. On the other, there is Optimisation (‘What is the best choice?’), allowing us to find the ideal solution within constraints.

    Separately, they are powerful. But when integrated, they unlock a new level of capability—moving from simple decision support to intelligent, autonomous systems.

    At Decision Lab, we don’t believe there is one single best method for this integration. The ideal approach depends entirely on the business problem at hand. Below, we explore the three primary patterns we use to drive value for clients like Migros, FedEx, and Nestlé, and how these models contribute to building truly antifragile organisations.


    Three Patterns of Integration

    We generally view the integration of simulation and optimisation across a spectrum, moving from tactical support to full autonomy.

    1. Optimisation within Simulation (Complex Decision Support)

    In this pattern, the simulation runs a large-scale system, such as a warehouse. When a complex, real-time decision is required, the simulation pauses to call a dedicated optimisation algorithm.

    How it works: The algorithm solves the specific sub-problem, and the simulation continues, testing how that “optimal” decision performs under real-world uncertainty (like worker delays).

    Case Study: For Migros, we utilised this method. Their warehouse simulation calls an optimisation algorithm to determine the most efficient trolley-picking route every time a new order arrives. This allows us to test the routing logic’s real-world impact on the system’s total throughput. See the full case study and accompanying video.
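    The pattern can be sketched as follows (an illustrative toy, not the Migros system): each simulated order triggers a call to an embedded routing heuristic, here a simple nearest-neighbour tour, and the simulation accumulates the travel that routing decision implies.

```python
import math
import random

def route_length(pick_locations, start=(0.0, 0.0)) -> float:
    """The embedded 'optimisation' call: a nearest-neighbour tour over
    one order's pick locations, returning to the dispatch point."""
    remaining, pos, dist = list(pick_locations), start, 0.0
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(pos, p))
        dist += math.dist(pos, nxt)
        remaining.remove(nxt)
        pos = nxt
    return dist + math.dist(pos, start)

# The outer simulation: orders arrive with random pick locations, and the
# simulation pauses to ask the optimiser for a route each time.
rng = random.Random(3)
total = 0.0
for _ in range(100):                         # 100 simulated orders
    order = [(rng.uniform(0, 50), rng.uniform(0, 30)) for _ in range(6)]
    total += route_length(order)
print(f"simulated travel: {total:.0f} distance units")
```

    Swapping the nearest-neighbour heuristic for a stronger routing algorithm and re-running the simulation is exactly how the real-world impact of a routing change on total throughput gets tested.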

    2. Optimisation controls Simulation (Strategic Design)

    Here, the roles are reversed. An external optimisation wrapper searches for the best strategic solution, such as a factory layout or supply chain network.

    How it works: For every solution the optimiser proposes, it uses the simulation as a high-fidelity “evaluation function” to test performance against stochastic conditions.

    Case Study: For DataForm Lab, an optimisation model proposed various wind farm layouts. Our simulation then tested each layout against uncertain wind and wave conditions to calculate true energy output. The optimiser used this feedback to find the next, better solution.
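    A minimal sketch of this reversed pattern (stdlib Python, invented numbers): the outer optimiser proposes candidate designs, here a single buffer size, and a stochastic simulation acts as the evaluation function scoring each one.

```python
import random

def simulate_design(buffer_size: int, seed: int = 0, days: int = 2000) -> float:
    """Noisy evaluation function: average daily profit for a given buffer,
    simulated against uncertain demand. All rates are illustrative."""
    rng = random.Random(seed)
    profit = 0.0
    for _ in range(days):
        demand = rng.randint(0, 12)
        sold = min(demand, buffer_size)
        profit += 5 * sold - 1.5 * buffer_size   # revenue minus holding cost
    return profit / days

# The optimiser loop: propose a design, evaluate it in the simulation, keep
# the best. Grid search here; a real wrapper might use Bayesian optimisation
# or a metaheuristic to choose the next candidate.
best = max(range(0, 13), key=simulate_design)
print(best, round(simulate_design(best), 2))
```

    Because the evaluation is stochastic, the optimiser is searching for the design that performs best against uncertainty, not against a single deterministic forecast.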

    3. Simulation trains Optimisation (The Autonomous Future)

    This is where we enter the realm of the Digital Twin and Reinforcement Learning (RL). The simulation acts as a high-speed, risk-free training environment.

    How it works: A machine learning agent interacts with the simulation millions of times, learning an ‘optimal policy’ for making autonomous decisions.

    Case Study: For FedEx, we built a simulator for their linehaul operations. An AI agent was trained inside this simulator to learn the optimal policy on when to “cancel, delay, or add” linehauls based on uncertain package volumes, dramatically improving efficiency.
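    As a hedged sketch of this pattern (not the actual FedEx agent), the crude policy search below stands in for the RL training loop: candidate “cancel / keep / add” thresholds are scored over thousands of simulated departures, and the best-performing policy is kept. Volumes, costs, and thresholds are all invented.

```python
import random

def rollout(cancel_below: int, add_above: int, seed: int) -> float:
    """Score one simulated planning period under a threshold policy."""
    rng = random.Random(seed)
    reward = 0.0
    for _ in range(200):                 # 200 scheduled departures
        volume = rng.randint(0, 150)     # uncertain package volume
        if volume < cancel_below:
            reward -= 0.2 * volume       # cancelled linehaul: parcels wait
        elif volume > add_above:
            reward += volume - 40        # add an extra linehaul at cost 40
        else:
            reward += min(volume, 100)   # scheduled trailer, capacity 100
    return reward

def score(policy) -> float:
    """Evaluate a candidate policy across many random scenarios."""
    return sum(rollout(*policy, seed=s) for s in range(30))

# The 'training': search the policy space inside the risk-free simulator.
candidates = [(c, a) for c in range(0, 60, 10) for a in range(100, 151, 10)]
best_policy = max(candidates, key=score)
print("learned thresholds:", best_policy)
```

    The simulator is doing the heavy lifting: millions of cheap, risk-free interactions replace costly real-world trial and error, which is what makes training an autonomous agent feasible at all.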


    Building Antifragility: Beyond Resilience

    Why go through the effort of building these combined models? It isn’t just about efficiency; it is about survival and growth.

    A key philosophy at Decision Lab is Antifragility. While resilient systems merely withstand shock, antifragile systems improve because of it. By integrating simulation and optimisation, we create a “Digital Twin” that acts as a long-term asset.

    We use these models to perform rigorous Sensitivity Analysis—identifying which inputs drive outcomes—and to stress-test operations against millions of potential scenarios. This allows organisations to design supply chains and operations that are prepared not just for the average day, but for uncertainty and volatility.


    Navigating the Challenges

    Every powerful methodology has trade-offs. We mitigate these through a rigorous Verification & Validation (V&V) process.

    • Computational Cost: Evaluating thousands of simulation runs is intensive. We use sensitivity analysis to identify key variables early, reducing the search space and focusing computational effort where it matters.
    • Data Dependency: “Garbage in, garbage out” applies doubly here. We don’t just use average values; we statistically analyse historical data to find correct probability distributions (e.g., ensuring orders follow specific peak-and-trough patterns, not just flat averages).
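    A quick illustration of why distribution shape matters (invented figures): two hourly arrival profiles with the identical mean of 12 orders per hour produce very different backlogs at a station that can pick 12 orders per hour. Planning on the flat average would miss the peak entirely.

```python
# Peak-and-trough arrivals vs a flat profile with the same mean (~12/hour).
PEAKY = [4, 4, 6, 20, 24, 20, 8, 6, 4, 8, 12, 28]   # mean = 12
FLAT = [12] * 12                                     # same mean, no shape

def max_backlog(arrivals_per_hour, capacity: int = 12) -> int:
    """Worst queue left behind by a station picking `capacity` orders/hour."""
    backlog = worst = 0
    for arrivals in arrivals_per_hour:
        backlog = max(backlog + arrivals - capacity, 0)
        worst = max(worst, backlog)
    return worst

print("flat profile:", max_backlog(FLAT))    # averages suggest no problem
print("peaky profile:", max_backlog(PEAKY))  # the real shape builds a queue
```

    The flat profile never queues at all, while the peaky one with the same mean builds a 28-order backlog, which is exactly why we fit proper distributions rather than feed averages into the models.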

    The ‘Human-in-the-Loop’

    Ultimately, we build these models with you, not just for you.

    Our methodology relies on a Human-in-the-Loop (HITL) framework. Whether we are helping Gousto compress development time for routing logic from months to days, or helping Nestlé optimise warehouse slotting, the goal is the same: to present insights that empower expert judgment, not replace it.

    Ready to build your Digital Twin?

    To ensure a successful project, we look for four preliminary conditions:

    1. A clear, quantifiable business problem.
    2. Access to operational and historical data.
    3. Dedicated engagement from your Subject Matter Experts (SMEs).
    4. Clearly defined system boundaries.

    If you are ready to move beyond simple guesswork and start engineering an antifragile operation, contact Decision Lab today.