
Traditional bottleneck busting
Our clients often need to identify and remove bottlenecks in order to drive up their efficiency and profitability. They typically start with the classic approach of:
Look for a bottleneck
Make a change to the physical operation designed to alleviate the bottleneck
Repeat
Problems
At first glance, this looks like a great plan. However:
You don’t know what effect the change will have before you make it.
You will often be wrong about the effect you think the change will have. Even seemingly simple operations often have intricate interactions between processes.
You often can’t be sure what effect the change had after making it. Physical operations often have a lot of other variables that could be… varying… at the same time as you make your change. E.g. the operation performed better, but was that just because of the mix of work it happened to have on the day of the week you tried the change? Or because of the staff on duty? Or because you happened not to have any mechanical failures that day?
Removing one bottleneck just reveals the next, so you often only get a marginal gain.
It’s expensive to get wrong.
It’s a slow process.
A simple example
Say you have a conveyor system. There’s an infeed, some processing, and an outfeed.

You thought you had designed it to process 1000 items per hour, but it’s only processing 500. You investigate. It turns out that the infeed is mechanically capable of feeding only 500 items per hour.
You spend a bunch of time and money replacing the infeed with a faster machine.

However, you find that this only increases your output to 600 items per hour. On investigation, it turns out that the processing step is now limiting your throughput: the pickers adding items to the conveyor can only pick at 600 items per hour.

Was the cost worth the extra 100 items per hour? The business case you made for installing it assumed you would gain 500 items per hour… With only a fifth of the expected gain, the benefit accrues at a fifth of the expected rate, so what you thought would pay for itself after 2 years now only pays for itself after 10 years.
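As a quick sanity check on those numbers (assuming, purely for illustration, that the value gained scales linearly with the extra throughput):

```python
expected_gain = 500        # items/hour assumed in the business case
actual_gain = 100          # items/hour actually achieved
planned_payback_years = 2

# Value accrues at a fifth of the planned rate, so payback takes
# five times as long.
actual_payback_years = planned_payback_years * expected_gain / actual_gain
print(actual_payback_years)  # 10.0
```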
This is obviously a very simple case. Real operations are typically much more complex, involving lots of processes interacting in hard-to-predict ways.

Even just expanding the above example to multiple pickers, some in series, some in parallel, makes the final throughput hard to predict, as the sketch below illustrates.
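To make that concrete, here is a minimal sketch of how such a line could be modelled with the open-source SimPy discrete event simulation library. Everything in it (the rates, the exponential timings, the buffer sizes) is an illustrative assumption, not the method or data of any real operation:

```python
import random

import simpy  # open-source discrete event simulation library


def run_line(infeed_rate, stages, buffer_size=10, hours=200, seed=0):
    """Simulate a conveyor line and return measured items per hour.

    infeed_rate -- items per hour the infeed can mechanically deliver
    stages      -- picking stages in series, each a tuple of
                   (number of parallel pickers, items per hour per picker)

    All timings are exponential, an illustrative assumption; a real
    model would use distributions measured from the operation.
    """
    random.seed(seed)
    env = simpy.Environment()
    # Finite buffers between stages model limited conveyor space.
    buffers = [simpy.Store(env, capacity=buffer_size) for _ in stages]
    done = 0

    def infeed():
        while True:
            yield env.timeout(random.expovariate(infeed_rate))
            yield buffers[0].put(object())  # blocks while the belt is full

    def picker(stage, rate):
        nonlocal done
        while True:
            item = yield buffers[stage].get()
            yield env.timeout(random.expovariate(rate))
            if stage + 1 < len(buffers):
                yield buffers[stage + 1].put(item)
            else:
                done += 1

    env.process(infeed())
    for stage, (n_pickers, rate) in enumerate(stages):
        for _ in range(n_pickers):
            env.process(picker(stage, rate))

    env.run(until=hours)
    return done / hours


# The line from the example: a 500 items/hour infeed feeding one picking
# stage of two parallel pickers at 300 items/hour each.
print(run_line(infeed_rate=500, stages=[(2, 300)]))  # close to 500
```

Even in this toy model, the measured throughput depends on buffer sizes and timing variability in ways that are awkward to reason about on paper.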
An alternative approach: Bottleneck busting with simulation
Do the same thing, but make the changes in a simulated environment / digital twin, not in the real operation. Then, once the simulation has demonstrated the effect of a change, you can decide whether to apply it to the real operation. This is data-driven decision making.
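With a model like the hypothetical run_line() sketched above, the infeed upgrade becomes a cheap what-if experiment instead of a capital purchase:

```python
# Re-using run_line() from the sketch above: test the proposed infeed
# upgrade in the model before committing to any hardware.
print(run_line(infeed_rate=500, stages=[(2, 300)]))   # today: close to 500/hour
print(run_line(infeed_rate=1000, stages=[(2, 300)]))  # upgraded: only ~600/hour
# The second result shows the pickers, not the infeed, would cap the
# line, and it shows this before any money is spent.
```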
Advantages
Compared to experimenting in the real world, busting bottlenecks with simulation is:
Much cheaper.
Much faster.
Much easier to analyse.
Much more statistically rigorous. You can make sure that you are changing only the one thing you care about, and you can run thousands of simulated hours or days to gain confidence that the effect you are seeing is not just a random outlier, as the sketch below shows.
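For example, continuing the hypothetical sketch above, thirty independent replications with different random seeds turn a single observation into a confidence interval:

```python
import statistics

# Thirty independent replications of the upgraded line, each with its
# own random seed; run_line() is the sketch from earlier.
runs = [run_line(infeed_rate=1000, stages=[(2, 300)], hours=200, seed=s)
        for s in range(30)]
mean = statistics.mean(runs)
# Approximate 95% confidence interval half-width (normal approximation).
half = 1.96 * statistics.stdev(runs) / len(runs) ** 0.5
print(f"predicted throughput: {mean:.0f} ± {half:.0f} items/hour")
```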
Feel free to get in touch if you’d like to talk about how this works.