Some of these costs are directly quantifiable, such as unnecessary installer revisits or meter replacements. Others, such as poor customer experiences and process inefficiencies, are harder to measure but are increasingly what separates success from failure in our fast-evolving sector.
As an example, I received a call the other day from an engineer on his way to install my new smart meter. Only he wasn’t due for another two days. It transpired (after 30 minutes of mucking me about) that my phone number had been connected with the wrong customer.
But was it just my telephone number that was incorrectly transcribed? And was it just my job, or all the jobs for that engineer? Or all the jobs for that operator that day? If one piece of data had crossed records, it’s almost certain others had too. What starts as an embarrassing phone call at one end of the spectrum can end as dozens of errors in the smart meter data at the other. Let’s consider what that scenario might mean.
From the operator’s perspective, the error could keep a team of back office staff busy for weeks trying to sort out rejected data flows relating to meter commissioning. During that time, the meter operator isn’t getting paid for the jobs, which are still considered Work In Progress (WIP). It’s not unusual for operators to have over £1 million sitting in WIP, including jobs dating back over a year. So that’s a cost of cashflow. Then there are the engineer revisits that are frequently required to resolve jobs.
For the supplier, customers may see their bills delayed by months and post negative comments far and wide on social media. So there is the cost of brand damage and defections.
The list goes on. The MAP may have additional problems tracing its assets and, if data errors are too common, add the potential for regulatory fines for the supplier and loss of contract for the meter operator.
And these kinds of errors are happening all the time.
It’s not a surprise that the industry has ended up here. Regulation split the industry into chunks, with each chunk responsible for a piece of the overall business process. Data silos developed within each one. Smaller businesses, unable to afford expensive IT systems, naturally turned to spreadsheets and CSV files (both notorious for data problems) and relied on the accumulated experience of their people, developing ever more complex practices (frequently passed on by word of mouth) to compensate. Big companies, needing to work with the smaller ones, were forced to use CSV as the lowest common denominator. The rest is history.
The implications for an entrenched industry attempting to evolve rapidly are profound. True innovation and scalability require automation. This isn’t just imagining processes with slightly leaner back offices; it’s imagining back office zero. It’s imagining a flexible workforce of independent engineers, empowered to pick jobs straight from the supplier with all the information they need to manage their own schedules. It’s imagining true open data for consumers, empowered to use multiple tariffs from multiple suppliers to separately run their heat pump, house battery and car charger.
But you cannot automate on top of bad data. As the old joke goes, a computer can make as many mistakes in a second as a thousand people working for a thousand years. Bad data in, bad data out. And even if you can spot and route bad data to exception queues for manual intervention, this fundamentally undermines automation, efficiency and operational scalability.
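To make that exception-queue pattern concrete, here is a minimal sketch in Python. It is purely illustrative: the field names, the validation rules and the batch shape are assumptions, not any particular industry data flow or CloudKB implementation.

```python
# Illustrative sketch: validate incoming meter records and route failures
# to an exception queue for manual intervention. Field names are hypothetical.
import re

def validate(record):
    """Return a list of problems; an empty list means the record is clean."""
    problems = []
    # A UK MPAN core is 13 digits; anything else is suspect.
    if not re.fullmatch(r"\d{13}", record.get("mpan", "")):
        problems.append("bad MPAN")
    if not record.get("meter_serial"):
        problems.append("missing meter serial")
    return problems

def route(records):
    """Split a batch into clean records and an exception queue."""
    clean, exceptions = [], []
    for r in records:
        problems = validate(r)
        if problems:
            exceptions.append((r, problems))  # held back for a human to fix
        else:
            clean.append(r)
    return clean, exceptions

batch = [
    {"mpan": "1234567890123", "meter_serial": "Z123"},
    {"mpan": "12345", "meter_serial": ""},  # a crossed or garbled record
]
clean, exceptions = route(batch)
print(len(clean), len(exceptions))  # prints: 1 1
```

The point of the sketch is the trade-off it exposes: every record that lands in `exceptions` is a record a person must touch, which is exactly the manual overhead that undermines automation at scale.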
Thankfully, it doesn’t have to be this way. The cloud provides us with a way out for those bold enough to embrace it.
Firstly, let’s dispel the idea that cloud simply means virtual servers. That paradigm is at least 10 years out of date. Over that time, cloud has evolved to mean a set of technologies, standards, platforms and practices that make it far more powerful and secure.
Let’s start with the API (Application Programming Interface). If the fallible CSV file can be argued to be the ancestor of the modern API, its great-grandchildren are now powering secure, real-time, mission-critical interactions across the world. Your phone isn’t using CSV when it synchronises your email, files and banking passwords across your devices. And APIs are no longer expensive to create: with the right cloud tools, robust APIs can be built without writing a single line of code.
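The difference between the two worlds can be shown in a few lines. In a positional CSV, a column swap is silently accepted; a named, typed API payload gives the receiver something to check. This is a hypothetical sketch (the fields and the phone number, from the UK’s reserved fictional range, are illustrative):

```python
# Why named API payloads beat positional CSV: the CSV swap goes unnoticed,
# while the JSON payload can be refused at the boundary.
import csv, io, json

# A positional CSV happily accepts swapped columns -- the error is silent.
row = next(csv.reader(io.StringIO("07700 900123,Jane Smith\n")))
name, phone = row  # columns were swapped at source; nothing complains
print(name)  # prints: 07700 900123

# A JSON API payload carries field names, so basic validation can refuse it.
def accept(payload):
    data = json.loads(payload)
    phone_ok = data.get("phone", "").replace(" ", "").isdigit()
    return phone_ok and bool(data.get("name"))

print(accept('{"name": "07700 900123", "phone": "Jane Smith"}'))  # prints: False
print(accept('{"name": "Jane Smith", "phone": "07700 900123"}'))  # prints: True
```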
Similarly, virtual servers have evolved into microservices and Kubernetes, which together provide a rapid way to build massively scalable business services that are easy to test and maintain. Combine this with APIs and you have a way to connect all those data silos rapidly, securely and in real time.
And not just your core systems: the same approach extends to the field service applications used by your engineers. Now you can stop inaccurate data from ever entering the system by connecting handheld devices such as the Zebra TC77 (already widely adopted by delivery companies for its robustness, reliability and fast barcode scanning) to perform real-time look-ups and validation.
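In outline, point-of-entry validation is simple: the scanned serial is checked against the job record before the install can be committed. A hypothetical sketch, with an in-memory job store standing in for a real back-end API:

```python
# Hypothetical point-of-entry check: a scanned meter barcode is accepted
# only if it matches the asset allocated to the engineer's job.
JOBS = {  # job id -> serial of the allocated asset (illustrative data)
    "JOB-001": "EML1411234567",
}

def confirm_scan(job_id, scanned_serial):
    """Decide whether a barcode scan should be committed against a job."""
    expected = JOBS.get(job_id)
    if expected is None:
        return "unknown job - refer to back office"
    if scanned_serial != expected:
        return "serial mismatch - wrong meter, do not install"
    return "ok"

print(confirm_scan("JOB-001", "EML1411234567"))  # prints: ok
print(confirm_scan("JOB-001", "EML1419999999"))  # a mismatch is caught on site
```

The design point is where the check happens: the mismatch is caught in the engineer’s hand, before the bad record exists, rather than weeks later in a rejected data flow.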
Finally, companies that understand these technologies and how they can be applied to the energy sector have already built the APIs and software platforms for others to plug into. At CloudKB, we use our uMESH platform to connect in real-time to dozens of industry systems, supporting automated business processes across the meter lifecycle for suppliers, meter operators and MAPs.
Naturally, no change is easy, but there has never been an easier or more critical time to make it. With the government continuing to force the pace of change, new digital companies have already shown what organisations built on cloud technology can achieve. The question is not really whether the industry will adopt cloud technologies as a new standard, but how much longer companies will wait before making the leap, and whether they will survive if they wait too long.
It’s time for the industry to get its ducks in a row. To find out more, visit CloudKB.