Pick the right process, and automate to the optimal

One of the most commonly cited advantages of BOTs is their consistency. BOTs do exactly what they are programmed or calibrated to do, and this provides consistency and repeatability, even eliminating unpredictability in how a process is performed. However, this reduction in unpredictability can be either good or bad, depending on how we interpret our process performance data and how we apply it.

There is no doubt that automation eliminates unpredictability, or variation. While humans may perform a task with a wide range of skill, BOTs are far more consistent: they perform the task the same way every time, unless an exception occurs. Because of this, modeling process performance on average cycle times alone can lead to suboptimal process choices and suboptimal performance expectations.

Operating metrics are often used to identify processes worth automating, and it is common to select target processes based on their average performance. Regardless of how many transactions a given process handles, it is typically evaluated by how long it takes, on average, to complete them. Whatever that number is, it becomes the standard of performance against which the BOT will be measured. On the surface, this is a perfectly rational approach to modeling processes.

But if we really want to understand process performance, we need to look at both the average and the standard deviation. The average tells us how long the task takes, while the standard deviation tells us how much unpredictability exists around that average. The lower the standard deviation, the more consistently the process performs. Eliminating variation in processes automated with BOTs matters because BOTs are notoriously inflexible. It is common practice to encode BOTs with stops or waits, where the BOT is instructed to pause for some length of time before it proceeds to its next step.
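To make this concrete, here is a minimal sketch (Python, with purely hypothetical cycle-time samples) of summarizing observed transaction data by both numbers:

```python
# Minimal sketch: summarize observed process cycle times by their average
# and standard deviation. The sample values are purely illustrative.
from statistics import mean, stdev

cycle_times_sec = [42, 38, 51, 45, 39, 62, 41, 44, 58, 40]  # hypothetical samples

avg = mean(cycle_times_sec)      # how long the task takes on average
spread = stdev(cycle_times_sec)  # how much unpredictability sits around that average

print(f"average: {avg:.1f}s, standard deviation: {spread:.1f}s")
```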

The challenge arises when those delays or pauses vary greatly. If the average wait time is three seconds, we might design the BOT to wait three seconds before proceeding. But because three seconds is only the average delay, roughly half of the time the actual delay will be longer, and the BOT will fail at its next step. Designing for the average doesn't work well, because it builds in failure for a large share of transactions. Instead, we would set the BOT to wait perhaps five, six, or even ten seconds, so that it has waited as long as necessary most, if not all, of the time.
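One way to size that wait, sketched below with a hypothetical delay log and a made-up helper name, is to derive it from a high percentile of the observed delays rather than from their average:

```python
# Hypothetical helper: size a BOT wait from a high percentile of logged
# delays instead of their average.
import math

def wait_for_percentile(observed_delays_sec, percentile=0.95):
    """Return a wait long enough to cover `percentile` of the observed delays."""
    ordered = sorted(observed_delays_sec)
    index = max(0, math.ceil(percentile * len(ordered)) - 1)
    return ordered[index]

delays = [2.1, 2.8, 3.0, 3.2, 2.9, 4.5, 3.1, 2.7, 5.8, 3.0]  # hypothetical log
print(wait_for_percentile(delays, 0.90))  # 4.5s, versus an average of ~3.3s
```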

To ensure the BOT never fails at its next step, we would have to look at the worst-case delay that has ever been experienced and design the BOT to wait that long. This is a terrible design approach, however, because the vast majority of the time the BOT is dwelling far longer than it needs to. If the rationale for using BOTs is that they're cheaper and faster, we have just killed a significant part of the value we hoped to gain.
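A rough back-of-the-envelope sketch of that cost, reusing the hypothetical delay log from above, adds up the idle time a fixed worst-case wait would impose on every transaction:

```python
# Rough sketch: the idle time a fixed worst-case wait imposes on every
# transaction. Delay values are hypothetical.
delays = [2.1, 2.8, 3.0, 3.2, 2.9, 4.5, 3.1, 2.7, 5.8, 3.0]

worst_case = max(delays)                 # the BOT always waits this long
idle = [worst_case - d for d in delays]  # time spent dwelling needlessly
print(f"wasted per transaction: {sum(idle) / len(idle):.1f}s "
      f"({sum(idle) / sum(delays):.0%} overhead)")
```

On this made-up sample, the BOT spends roughly three-quarters as much time idling as it does waiting out the real delays.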

Understanding the sources of variation that lead to these dwells or waits, and eliminating that variation, allows BOTs to operate much more effectively. When we remove variation from a process, we eliminate the outliers, the high tail and the low tail of the distribution. The result is that the process collapses down to the average performance produced by the one single way of operating that is programmed into the BOT. This sounds like exactly what we want from a BOT: elimination of variation and completely predictable performance. But when we collapse the distribution to the average, which average are we collapsing to?

As we discussed above, "average" performance is a midpoint: roughly half of our transactions perform better than average, and half perform worse. If we use this average as the target for our BOT's performance, we are discounting the better performance we already achieve in the better half of our transactions. This is important. When we design BOTs around average performance, we eliminate variability, but we also settle for far worse performance than we deliver when operating at our best.
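The hypothetical numbers below illustrate the size of that discount: the overall average can sit well above what the better half of transactions already achieves, and it is that better level a BOT could be designed to hit.

```python
# Illustration with hypothetical numbers: the overall average hides how well
# the better half of transactions already performs.
from statistics import mean

cycle_times_sec = sorted([42, 38, 51, 45, 39, 62, 41, 44, 58, 40])

overall_avg = mean(cycle_times_sec)                                  # the usual design target
best_half_avg = mean(cycle_times_sec[: len(cycle_times_sec) // 2])   # what "good" already looks like

print(f"overall average: {overall_avg:.1f}s, best-half average: {best_half_avg:.1f}s")
```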

This notion of designing around optimal, rather than average, is fundamental to achieving the greatest possible return from our BOTs. Whichever process or task we are considering for automation, understanding both average performance and best performance is key to making the best possible selection. Automating a process that has a high degree of variation will likely provide greater benefit than automating one with very little variation: high variation means a bigger gap between average and optimal, and it is the size of this gap that determines how much additional leverage the BOT could provide.
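One way to put that into practice when comparing candidates, sketched below with made-up process names and timings and using the 10th-percentile cycle time as a stand-in for "optimal", is to rank processes by the gap between their average and near-best performance:

```python
# Sketch: rank candidate processes by the gap between their average and
# near-best (10th-percentile) cycle times. Names and timings are illustrative.
from statistics import mean, quantiles

candidates = {
    "invoice_entry":  [42, 38, 51, 45, 39, 62, 41, 44, 58, 40],
    "address_update": [30, 31, 29, 30, 32, 31, 30, 29, 31, 30],
}

def automation_leverage(samples):
    """Gap between the average and an approximate 10th-percentile cycle time."""
    optimal = quantiles(samples, n=10)[0]
    return mean(samples) - optimal

for name, samples in sorted(candidates.items(),
                            key=lambda kv: automation_leverage(kv[1]),
                            reverse=True):
    print(f"{name}: leverage ~ {automation_leverage(samples):.1f}s")
```

The high-variation process surfaces first, which is exactly the kind of candidate where automating to the optimal, rather than the average, pays off most.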

Note that variation comes from a wide range of sources. It is important to understand those sources, the scope of their variation, and, where possible, their causes. These characteristics will help you assess whether a process is a good target for automation. Selecting strong targets, particularly when you are just starting with RPA, is absolutely critical to early success. Pick the right process and automate to the optimal, and you are far more likely to achieve compelling business value.
