
GUEST CHAT with Giovanni Giorgio, Senior Digital Engineer at GSK.

In the fourth interview with an External Guest we meet Giovanni Giorgio, a Senior Digital Engineer at GSK. Giovanni is a Chemical Engineer with 15 years of experience in the pharmaceutical industry (R&D and Global Manufacturing), with a strong background in API chemical and process engineering and intensive, diversified experience acquired across different business units. More recently, Giovanni has developed an interest in applying advanced modelling techniques to solve complex business problems.

In our Chat, Giovanni explains his current role, talks about the obstacles and challenges to technology adoption, and tells us about his belief in an old but very relevant and solid methodology – “decision support systems”. This message headlines Giovanni’s LinkedIn profile, which, together with the background image on his LinkedIn page, gives an insight into his strong ethos of building tools that aid decision-making (NOT replace it)!

You’ve been working at GSK for over 15 years. Nowadays that is a very long career in the same place.

Although it may seem that way, in reality my role and responsibilities have changed quite frequently. Especially in the last 5-6 years, I have been doing different things almost every year, but the leitmotif has stayed the same – modelling and simulation. We started the relationship with Decision Lab about 4 years ago.

Please tell us about your work. What is it that you do?

My main role is Modelling and Digital Lead in Front End Engineering and Design (FEED). It’s the group that operates within Global Capital Projects, which looks after the big investments for the supply chain – new equipment and new facilities, mainly in pharma. FEED looks after the initial stages of design and the business case. Usually, all projects go through three key phases: business analysis, feasibility and concept selection.

I lead the modelling and analytics activities, which usually play a big role in the projects, as they help us understand critical information: how big is the demand for a certain product going to be, and hence what size should the facility be? What kind of equipment do we use for production, and what is the production cycle going to be like? So we need to model lots of factors and options (we call them scenarios) in a highly sophisticated network of dependencies, and their impact on the investments the company has to make (usually in the range of £20m to £500m). We look 5 to 20 years into the future. We also use modelling and simulation to understand how to optimise such investments by phasing them while still minimising the risks.
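To make the scenario idea concrete, here is a minimal, entirely illustrative sketch (all capacities, costs and demand numbers are invented, not GSK figures): size a facility by evaluating a few capacity options against many randomly sampled demand futures.

```python
import random

random.seed(1)

# Hypothetical sizing options: annual capacity (units) -> capital cost (GBP m)
CAPACITY_OPTIONS = {
    "small": (100, 50),
    "medium": (200, 90),
    "large": (400, 160),
}

def demand_scenario(years=10):
    """One random demand trajectory: a growth rate drawn per scenario."""
    growth = random.uniform(0.02, 0.15)
    demand, path = 80.0, []
    for _ in range(years):
        demand *= 1 + growth
        path.append(demand)
    return path

def evaluate(option, scenarios, margin=0.5):
    """Average net value: margin on units actually sold, minus capital cost."""
    capacity, capex = CAPACITY_OPTIONS[option]
    values = []
    for path in scenarios:
        sold = sum(min(capacity, d) for d in path)  # capacity caps sales
        values.append(sold * margin - capex)
    return sum(values) / len(values)

scenarios = [demand_scenario() for _ in range(1000)]
for option in CAPACITY_OPTIONS:
    print(option, round(evaluate(option, scenarios), 1))
```

A real model has far richer dependencies (equipment choices, production cycles, investment phasing), but the pattern is the same: compare options across a distribution of futures rather than a single forecast.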

For example, last year we worked with Decision Lab on a project that was already in the feasibility stage. The modelling activity focused on building a more accurate operational model. It was for a facility in Italy, and it was a very successful story. We needed to decide how many machines to buy and which machine technology, and also on their operational regime, because the type of machine dictates how it can be used operationally. It was a very complicated project, but a great team effort, because we needed to involve specialist teams to consult on particular aspects of the data input. Just the fact that the model condensed the knowledge and requirements of circa 20 active contributors from different disciplines says a lot about the value we gained in terms of engagement, knowledge transfer and decision support. At the moment, we are entering the last phase and it looks very promising for the build to start next year.

What changes have you seen that made an impact?

We worked with Decision Lab on a simulation model of GSK’s internal R&D pipeline, which tracks the passage of assets – potential medicines – through the different clinical stages, with some making it through to the next stage and some being terminated. It simulates the number of assets in each stage in each year, incorporating the randomness that is present in reality. It allows us to assess not just the capacity needed (for both commercial and clinical manufacturing) but also to understand the business case for investments.

Traditionally, when you needed to introduce a new manufacturing technology, you’d find an asset that looked like it would deliver value, and the estimated value of that asset would justify the investment. The issue is that if the asset does not make it through the clinical stages and get licensed – especially if it fails at a late stage – you’ve lost your justification, and your new technology has to wait for the next one. Potentially very expensive.

The R&D pipeline simulation model was able to show us the percentage of assets coming through the portfolio that would be amenable to this new technology. It wouldn’t matter if specific assets failed, because the model provides an overall distribution, and this could be used to work out the overall benefit over the next 10 or 15 years. It was a real game changer, because it removed the uncertainty around building a business case for a new technology: the case is made against the overall portfolio rather than linked to a particular asset. It provides a mathematically valid proposition in terms of the value you can derive from the investment.
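The portfolio logic can be sketched in a few lines of Monte Carlo (all success probabilities, asset counts and the “amenable” share below are made up for illustration; the real model is far more detailed):

```python
import random

random.seed(0)

# Hypothetical per-phase success probabilities and portfolio parameters
PHASE_SUCCESS = {"Phase I": 0.6, "Phase II": 0.35, "Phase III": 0.65}
P_AMENABLE = 0.3   # share of assets suited to the new technology
N_ASSETS = 40      # assets entering the pipeline
N_RUNS = 5000      # Monte Carlo runs

def simulate_launches():
    """Count launched assets amenable to the technology in one run."""
    launches = 0
    for _ in range(N_ASSETS):
        # An asset launches only if it survives every clinical phase
        if all(random.random() < p for p in PHASE_SUCCESS.values()):
            if random.random() < P_AMENABLE:
                launches += 1
    return launches

results = [simulate_launches() for _ in range(N_RUNS)]
mean = sum(results) / N_RUNS
print(f"mean amenable launches: {mean:.2f}")
print(f"P(at least 2): {sum(r >= 2 for r in results) / N_RUNS:.2f}")
```

No single asset matters here: the business case rests on the distribution of launches across the whole portfolio, which is exactly the shift described above.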

Now there is interest in using this tool as part of the long-term decision-making for investments across the whole global network.

It sounds as though you have had success deploying the models that have been developed. Is it easy to convince others to start using them?

Often we have to prove the value and negotiate sponsorship with the internal stakeholders who hold the keys. The first thing I always say is: I don’t develop the model to tell you what to do. It’s a model that should aid the decision. I am a fan of decision support systems, a relatively old methodology that is about helping the decision-makers, not replacing them. Decisions are very complex matters. Sometimes they are based on assumptions, or on simplifications of something very complex behind them. Often a decision-maker will decide based on their experience or gut feel, which comes from years of experience. And not all of this can be included in a model. My objective has always been not to create prescriptive models.

The other obstacle is data literacy. The system complexity, the richness of the inputs and the various parameters a model considers can scare people off. As humans, we tend to predict the future by looking at the past; we tend to see the future as a linear progression of past events. There is a famous, very old but very good example that demonstrates this – the ‘Great Horse Manure Crisis of 1894’. The newspapers predicted that very soon every street would be buried under many feet of manure. There was a big meeting in New York to plan what to do. And of course, soon after, cars took over, and motorised trams and buses appeared on the streets, replacing the horse-drawn ones.

Another hurdle is the black box factor. In Excel, most of the time you can see what the model is doing, and that’s an advantage. A black box means we can’t immediately (or visually) explain how it produces an output, which makes it hard for users to trust and accept.

So what do you think can help to combat these obstacles and get easier adoption or buy-in?

The first thing is to present options (scenarios) rather than a single solution. This approach is far more engaging, as people start to question how you reached the different options (what the pros and cons are), what the key differences in assumptions are, and so on. This helps get across some of the complexity of a model.

The second is to visualise the data (input and output) in a compelling and intuitive way.

Thirdly (and here we have had some success) is to let them play with the model and the data, so they can start to understand and appreciate the complexity. But in order to do that, I think it is very important to develop a great UI and simplify the UX (which unfortunately requires resources and effort).

Another thing I am trying is to develop an early and simple way to visualise the model. Building a 2D or 3D layout early on, showing how the operations flow and how they change depending on different inputs, should make it easier to engage with SMEs and decision-makers who are usually not familiar with modelling techniques and data science.

Since we use AnyLogic quite a bit, the good news is that the AnyLogic version 9 release, due at the end of this year, should provide a new feature that interfaces with our building information models (BIM), which are essentially intelligent 3D asset models. This should simplify getting visualisations and information from those intelligent models.

What do you think is the future for modelling and analytics for informing manufacturing in the pharmaceutical industry?

I think the focus should be on two applications.

The first is using machine learning to support the simulation approach. Developing the complex logic rules for a simulation is challenging; a better way could be to use machine learning to learn the rules from data. As well as making the model development process easier, this could have the advantage of much faster run times once trained. This is important for optimisation, where you need many thousands of Monte Carlo simulation runs, each of which could take ten minutes or more to simulate a year. A trained ML model could run in seconds and so greatly improve the optimisation.
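The speed-up idea is essentially a surrogate model. A toy sketch (the “slow” simulator, its parameters and the linear fit are all invented for illustration – real surrogates would be trained on real simulation outputs with richer models): sample an expensive stochastic simulation at a few design points, fit a cheap regression to those samples, then query the fitted model inside the optimisation loop.

```python
import random

random.seed(2)

def slow_sim(utilisation, n=2000):
    """'Expensive' Monte Carlo estimate of annual output at a utilisation level."""
    total = 0.0
    for _ in range(n):
        total += 100 * utilisation * random.uniform(0.9, 1.1)
    return total / n

# 1. Sample the slow simulator at a handful of design points.
xs = [i / 10 for i in range(1, 11)]
ys = [slow_sim(x) for x in xs]

# 2. Fit a least-squares line y ~ a*x + b as the surrogate.
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
    (x - mx) ** 2 for x in xs
)
b = my - a * mx

def surrogate(x):
    """Cheap approximation: answers instantly instead of re-simulating."""
    return a * x + b

# 3. Thousands of optimisation queries are now essentially free.
print(surrogate(0.75))
```

The same pattern scales up: replace the line fit with a neural network or gradient-boosted trees trained on simulation runs, and an optimiser can explore the design space in seconds rather than days.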

The second is using deep learning to do intelligent scheduling. Generally, GSK uses Excel and some more advanced tools that rely on mixed integer linear programming (MILP) optimisation – but these are very expensive and complex to build, are not very flexible and cannot model all the complexity you’d want to include. We’ve been discussing with Microsoft and Decision Lab the use of Microsoft’s Bonsai deep reinforcement learning platform for this. We are also discussing it internally in some areas of our business (e.g. Vaccines).

Planning for clinical manufacturing is a complex scheduling problem: for each asset you have to plan the manufacturing campaigns per year, which depend on whether the asset is pursued or cancelled, with the aim of making a target amount of drug by a required date. This is constrained by the available resources and by whether enough of the right equipment is available. It gets even harder when the target date moves forwards or backwards. So there is a huge amount of variability and uncertainty, and you want a plan that is flexible enough to handle this, plus a way to re-optimise when things do change. Currently this is done with traditional methodologies (Kanban boards, Excel, etc.), so to avoid risk and handle the complexity, contingencies are added at each step, which is inefficient and costly. We want to know: if we simulate this variability, can we train Bonsai to use its AI algorithms to provide a better solution – one that lets the plan meet its targets while reducing the contingency, and that can quickly update the plan as things change? If we do pursue this and it works, I think the approach could be scaled to many more areas within GSK. With our strategic relationship with Microsoft, it could become increasingly important.
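The contingency trade-off can be illustrated with a deliberately tiny Monte Carlo sketch (every number is hypothetical, and this greedy capacity check stands in for a real scheduler): assets need manufacturing slots before a deadline, slots per year are limited, and some assets get cancelled. Extra contingency slots raise the on-time probability but cost money, which is the inefficiency an RL scheduler would aim to reduce.

```python
import random

random.seed(3)

def run_plan(extra_slots, n_assets=8, slots_needed=4, years=3,
             base_slots_per_year=8, p_cancel=0.2, runs=2000):
    """Fraction of simulated futures in which every surviving asset
    gets all the manufacturing slots it needs before the deadline."""
    on_time = 0
    for _ in range(runs):
        # Each asset independently survives or is cancelled mid-pipeline
        survivors = [a for a in range(n_assets) if random.random() > p_cancel]
        demand = len(survivors) * slots_needed
        supply = (base_slots_per_year + extra_slots) * years
        if supply >= demand:
            on_time += 1
    return on_time / runs

for extra in (0, 2, 4):
    print(f"extra slots {extra}: on-time {run_plan(extra):.2f}")
```

The output shows the curve a planner faces: zero contingency misses targets often, while large contingency is reliable but pays for capacity that cancelled assets never use. A learned policy would try to sit on the efficient part of that curve and re-plan as cancellations and date changes arrive.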

Decision Lab