Proof of Value or Assurance is not Proof of Sustainability
We keep reading and hearing business leaders claim that Robotic Process Automation (RPA) has made great strides in fulfilling its promise to the marketplace. RPA has been positioned as a no-code, end-user computing tool, owned and operated by the business functions themselves, that automates routine work tasks with quantifiable business results. Businesses now have a digital workforce in the form of powerful new end-user computing tools, positioned and priced attractively for operations and finance leaders. It promises to reduce dependency on traditional automation delivered by IT, improve productivity and accuracy, reduce operational risk, and free expensive human resources for tasks that add more value. But how true this claim is, and how close it is to reality, needs to be re-examined...
We all know that the RPA maturity model used across the industry is built around two key components: RPA strategy and operations on one side, and the levels of RPA maturity that can be used to assess comparative states of maturity across those elements on the other. As RPA matured over the last few years, the need to prove the concept of automation faded away and was replaced by the Proof of Value (POV) or Proof of Assurance (POA): evidence that we can achieve a better ROI if the BOTs actually work. This is an important shift in thinking, because it removes much of the confusion that is often used to delay the adoption of new technology in an organization.
Proof of Value (POV) and Proof of Assurance (POA) exercises are used to confirm that the benefits projected in the ROI models are achievable in actual use. This is a significant step beyond POCs, which only demonstrate that a process is automatable. With a POV or POA, industry leaders and business owners expect the projected value of the BOT to be validated. They should stop pursuing POCs and start deploying POVs or POAs to address the problem of financial failures.
The first key step in this direction is to focus on modelling project costs, benefits, and timelines as close to reality as possible, in order to reduce the disillusionment that most RPA projects are experiencing. This does not mean that leaders across the industry have done nothing so far. They have rolled out the concept smartly and are winning projects as well. However, most of them raise a common concern: despite winning projects and higher CSAT, their project win rate is falling dramatically. They continue to see less stellar results than they had hoped. They are doing much better than their peers, yet they still aren't able to achieve the results suggested by the improved ROI models.
So the question becomes: what is going wrong? Why, even with improved estimating accuracy, more realistic assumptions, and recognition of the critical importance of BOT yield (utilization multiplied by first-pass success rate), do organizations continue to miss the mark more often than not? As a techno-commercial person, the first thing that strikes me is to ask whether something is wrong with the RPA infrastructure that most organizations have in place, or whether it has to do with solution design, development, testing, and deployment. Let's take a deep dive.
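Before looking at the infrastructure side, it helps to make the yield term concrete. Below is a minimal sketch in Python, using purely hypothetical figures and function names (not a standard RPA calculation), of how BOT yield deflates the savings projected in an ROI model.

```python
# Illustrative sketch: how BOT yield deflates projected savings.
# All figures are hypothetical assumptions, not benchmarks.

def bot_yield(utilization: float, first_pass_success: float) -> float:
    """Yield = share of scheduled BOT time that produces usable output."""
    return utilization * first_pass_success

def effective_annual_savings(projected_savings: float,
                             utilization: float,
                             first_pass_success: float) -> float:
    """Scale the ROI-model savings by the yield actually achieved."""
    return projected_savings * bot_yield(utilization, first_pass_success)

if __name__ == "__main__":
    projected = 250_000.0  # savings promised in the ROI model (hypothetical)
    realistic = effective_annual_savings(
        projected,
        utilization=0.65,          # BOT busy 65% of its scheduled time
        first_pass_success=0.80,   # 80% of runs complete without rework
    )
    print(f"Projected: ${projected:,.0f}  Effective: ${realistic:,.0f}")
    # Projected: $250,000  Effective: $130,000 -- roughly half the promised value
```

Even with these fairly generous assumptions, the effective value is only about half of what the model promised, which is why sharper estimation alone does not close the gap.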
The first rule of effective systems testing is that the test must imitate production. The conditions under which the test runs must represent what the system will experience in actual use; otherwise the tests are not only a waste of time and money, they can also convince leadership and project owners that all is well when in reality it is not. For this reason, most RPA development operations maintain at least three, and often four, copies of the environment a system is being developed for: Development, Test, Production, and sometimes Staging.
RPA Environments
- Development is just what it sounds like: the environment where developers and engineers write code, try it out, and see if it works. It is kept completely separate from production systems, so that if a developer writes a bit of code that disastrously deletes all of the data in a database, only a set of development data is lost, not real production data.
- Once code meets the developer's expectations, it is moved to a Test environment for testing. This environment is separate from both development and production, again so that production data is not affected by testing and so that developers' ongoing work does not impact the tests. Best practice is for the test environment to be an exact copy of production, with the full set of data from production, so that system performance can be tested as well as functionality.
- The Staging environment is used for pre-production. It is where new code is placed just prior to being moved into production, with the ability to undo that move, or roll it back, if something goes wrong. Staging is the brief pause a person takes before bungee jumping off a bridge, checking the ropes and buckles one last time before committing!
- Production is the actual environment where business is conducted, records are generated, and money, goods, and ideas change hands. This is the real deal, and because it is the real deal nobody ever wants to test new RPA code in production, because real things happen when you do. A sketch of how this separation can be enforced in BOT configuration follows below.
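To illustrate one common way of enforcing this separation, here is a minimal sketch in Python. The environment names, URLs, and variable names are hypothetical and not tied to any specific RPA platform; the point is that the same BOT code runs everywhere, and only the wiring differs per environment.

```python
# Minimal sketch of environment-aware BOT configuration.
# Environment names, URLs, and variable names are hypothetical.
import os
from dataclasses import dataclass

@dataclass(frozen=True)
class BotConfig:
    name: str
    app_url: str        # target application the BOT drives
    data_source: str    # where the BOT reads its work queue
    allow_writes: bool  # hard guard: only Production may commit real records

ENVIRONMENTS = {
    "development": BotConfig("dev",     "https://app-dev.example.internal",  "dev_sample_data",  False),
    "test":        BotConfig("test",    "https://app-test.example.internal", "prod_data_copy",   False),
    "staging":     BotConfig("staging", "https://app-stg.example.internal",  "prod_data_copy",   False),
    "production":  BotConfig("prod",    "https://app.example.com",           "live_work_queue",  True),
}

def load_config() -> BotConfig:
    """Pick the configuration from an environment variable so the same
    BOT code runs unchanged in every environment."""
    env = os.getenv("RPA_ENV", "development").lower()
    try:
        return ENVIRONMENTS[env]
    except KeyError:
        raise SystemExit(f"Unknown RPA_ENV '{env}'; refusing to guess an environment.")

if __name__ == "__main__":
    cfg = load_config()
    print(f"Running '{cfg.name}' against {cfg.app_url} (writes allowed: {cfg.allow_writes})")
```

Keeping the BOT logic identical across Development, Test, Staging, and Production, and switching only the configuration, is what allows tests in the Test environment to genuinely imitate what the BOT will do in Production.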