Proof of Value or Assurance is not Proof of Sustainability

We keep reading, and business leaders keep claiming, that Robotic Process Automation (RPA) has made great strides in fulfilling its promise to the marketplace. RPA is positioned as a no-code end-user computing tool, owned and operated by the business functions themselves, that automates routine work tasks with quantifiable business results. Businesses now have a digital workforce in the form of powerful new end-user computing tools, positioned and priced attractively for operations and finance leaders. It has the power to reduce dependency on traditional automation delivered by IT, improve productivity and accuracy, reduce operational risk, and enable the redeployment of expensive human resources to tasks that add more value. But how close this claim is to reality needs to be re-examined...

We all know that the RPA maturity model used across the industry is designed around two key components: the first covers RPA strategy and RPA operations, and the second defines the levels of RPA maturity that can be used to assess comparative states of maturity across those elements. As RPA has matured over the last few years, the need to prove the concept of automation has faded away, replaced by the Proof of Value (POV) or Proof of Assurance (POA): the demonstration that we can achieve a better ROI if the BOTs work. This is an important shift in thinking, as it eliminates an overabundance of confusion tactics used to avoid the adoption of new technology in an organization.

Proof of Value (POV) or Proof of Assurance (POA) concepts are used to confirm that the benefits projected in the ROI models are achievable in actual use. This is a significant step beyond POCs, which only demonstrate that a process is automatable. With a POV or POA, industry leaders and business owners expect the projected value from the BOT to be validated. They should now stop pursuing POCs and start deploying POVs or POAs to battle the issue of financial failures.

The first key step in this direction is to model project costs, benefits, and timelines as close to reality as possible, in order to reduce the disillusionment that most RPA projects are experiencing. This does not mean that leaders across the industry have done nothing so far. Many have rolled out the above concept very smartly and are winning projects as well. However, most of them raise a common concern: despite winning projects and higher CSAT, their project win rate is falling dramatically. They continue to see less stellar results than they had hoped. They are doing better than their peers, but they still cannot achieve the results suggested by the improved ROI models.

Now the question comes: what is going wrong? Why, even with improved estimating accuracy, more realistic assumptions, and recognition of the critical importance of BOT yield (utilization times first-pass success rate), do organizations continue to miss the mark more often than not? As a techno-commercial person, the first thing that strikes my mind is this: is there something wrong with the RPA infrastructure that the majority of organizations have in place, or does it have something to do with solution design, development, testing, and deployment? Let's take a deep dive into the same...
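
The yield concept above can be sketched as a simple calculation. All function names and figures here are illustrative assumptions, not a standard ROI model: the point is only that a naive business case that ignores yield will overstate savings.

```python
# Illustrative sketch of the "BOT yield" idea mentioned above:
# effective yield = utilization rate x first-pass success rate.
# All names and figures are hypothetical examples, not a standard model.

def bot_yield(utilization: float, first_pass_success: float) -> float:
    """Fraction of theoretical capacity the BOT actually delivers."""
    return utilization * first_pass_success

def realized_annual_savings(tasks_per_year: int,
                            minutes_saved_per_task: float,
                            hourly_cost: float,
                            yield_rate: float) -> float:
    """Savings after discounting the projected volume by BOT yield."""
    hours_saved = tasks_per_year * minutes_saved_per_task / 60
    return hours_saved * hourly_cost * yield_rate

# A BOT scheduled 80% of the time with a 90% first-pass success rate
# delivers only 72% of the savings a naive ROI model would project.
y = bot_yield(0.80, 0.90)
print(f"Effective yield: {y:.0%}")
print(f"Realized savings: ${realized_annual_savings(50_000, 6.0, 30.0, y):,.0f}")
```

Even with these generous hypothetical numbers, more than a quarter of the projected benefit evaporates before any infrastructure or testing problems are considered.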

The first rule of effective systems testing is that the test must imitate production. The accuracy and reliability of the test must represent what the system will experience in actual use; otherwise the tests are not only a waste of time and money, but they can also convince leadership and project owners that all is well when in reality it is not. For this reason, most RPA development operations maintain at least three, and often four, copies of the environment that a system is being developed for: Development, Test, Production, and sometimes Staging.

RPA Environment

  • Development is just what it sounds like: the environment where developers and engineers write code, try it out, and see if it works. This environment is kept completely separate from production systems so that if a developer writes a bit of code that disastrously deletes all of the data in a database, only a set of development data is lost, not real production data.
  • Once code meets the developer’s expectations it is moved to the Test environment. This environment is separate from both development and production, so that production data is not affected by testing and so that developers’ ongoing work does not impact the tests. Best practice is for the test environment to be an exact copy of production, with the full set of data from production, so that system performance can be tested as well as functionality.
  • The Staging environment is then used for pre-production. It is where new code is placed just prior to being moved into production, with the ability to undo that move, or roll it back, if something goes wrong. Staging is the brief pause a person takes before bungee jumping off a bridge, checking the ropes and buckles one last time before committing!
  • Production is the actual environment where business is conducted, records are generated, and money, goods, and ideas change hands. This is the real deal, and because it is the real deal nobody ever wants to test new RPA code in production, because actual things happen when you do.
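
One practical way to enforce the separation described above is to make the BOT's configuration strictly per-environment, so a bot pointed at Development or Test can never reach production endpoints. This is a minimal sketch under assumed names: the endpoint URLs, the `BOT_ENV` variable, and the `live_data` flag are all hypothetical.

```python
# Hypothetical sketch: strict per-environment BOT configuration, so a bot
# running in Dev/Test/Staging can never touch production systems or data.
# Endpoint URLs, the BOT_ENV variable, and the live_data flag are assumptions.
import os

ENVIRONMENTS = {
    "dev":     {"erp_url": "https://erp-dev.example.internal",  "live_data": False},
    "test":    {"erp_url": "https://erp-test.example.internal", "live_data": False},
    "staging": {"erp_url": "https://erp-stg.example.internal",  "live_data": False},
    "prod":    {"erp_url": "https://erp.example.internal",      "live_data": True},
}

def load_config(env: str = "") -> dict:
    """Resolve BOT settings from an explicit name or the BOT_ENV variable."""
    env = env or os.environ.get("BOT_ENV", "dev")   # default to the safest tier
    if env not in ENVIRONMENTS:
        raise ValueError(f"Unknown environment: {env!r}")
    return ENVIRONMENTS[env]

cfg = load_config("test")
assert not cfg["live_data"]   # a test run must never see production data
print(cfg["erp_url"])
```

The design choice worth noting is that the default is the safest tier, and an unrecognized environment name fails loudly rather than silently falling back to production.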

Finally, the reality in the world of RPA is that in the majority of cases a full copy of production is not only missing for testing but often not even feasible. Most of the manual tasks that exist in organizations today connect to a mixture of internal and external systems for which integration is too difficult, expensive, or time-consuming.

Because of this complexity, and the fact that most companies interact with so many outside players, it is practically impossible to create a truly production-representative test environment. Few organizations have a full test copy of their internal systems, and even fewer are connected to their external partners' test environments, if such environments even exist. Because of this, only the very simplest BOTs, working on elementary tasks between relatively few systems, all internal to the organization, ever receive appropriate degrees of system testing prior to going live. Far more typically, a BOT will go through some basic testing and will then be put into production and monitored intensively for several days, weeks, or months, to try to make sure it doesn't do something terrible.

To address this unique situation and system complexity, the concepts of Proof of Value (POV) and Proof of Assurance (POA) were developed, and a distinction between a Proof of Value and a Proof of Sustainability (POS) was rolled out. Earlier, leadership took for granted that once a BOT passed its functional testing (POC), it would work in production and generate business benefits (POV). Later they realized that nothing could be counted on until the BOT was actually moved into production.

The challenge with proving the viability of a BOT is that unless an organization has a truly production-representative test environment in place, no one will ever know how the BOT will perform until it actually goes into production. Then, and only then, will leadership know whether it is truly viable. A proof of value can show that a BOT completes assigned tasks in half the time it takes humans, but until the BOT gets into the production environment, nobody will know whether it can process a thousand assigned tasks without issues or crashes. Making a Proof of Sustainability (POS) part of the RPA planning and budgeting process is critical to deploying successful BOTs. Without it, in the experience of solution architects and RPA developers, the prospect of success falls by at least a factor of ten, if not a hundred.
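
A POS exercise can be sketched as a harness that runs the BOT's task at volume and judges the sustained first-pass success rate, rather than trusting a handful of functional test passes. This is a minimal illustration: `run_task`, the simulated 2% failure rate, and the 95% threshold are hypothetical placeholders, not figures from any real deployment.

```python
# Minimal sketch of a proof-of-sustainability harness: execute the BOT's
# task many times and measure first-pass success over the whole run.
# run_task, the 2% failure rate, and the 95% threshold are assumptions.
import random

def run_task(task_id: int) -> bool:
    """Placeholder for one BOT task execution; True means first-pass success."""
    return random.random() > 0.02   # simulate a 2% per-task failure rate

def proof_of_sustainability(n_tasks: int = 1000, required_rate: float = 0.95) -> bool:
    """Run the task n_tasks times and check the sustained success rate."""
    successes = 0
    for task_id in range(n_tasks):
        try:
            if run_task(task_id):
                successes += 1
        except Exception:            # a crash counts as a hard failure
            pass
    rate = successes / n_tasks
    print(f"First-pass success over {n_tasks} tasks: {rate:.1%}")
    return rate >= required_rate

sustainable = proof_of_sustainability()
```

The same harness run in production against real task queues, with crashes and exceptions counted as failures, is what turns a one-off POV claim into evidence of sustainability.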
