Dr David Scholefield, CISSP OPST, Chief Information Security Officer at Flexys Solutions

Machine learning and AI may be the 'flavour of the month', with the media talking excitedly about a new dawn of intelligent computers. Behind the hype, a quiet revolution is happening that will change all of our lives in ways we are only beginning to understand. With the new opportunities, however, come genuine risks and security challenges that need to be carefully managed.

What is machine learning?

Machine learning (ML) software works by training itself to find patterns in large quantities of data about a given subject area. Once trained, the software can recognise those patterns in new data it is shown.

This may not sound like much, but take the example of diagnosing cancer from MRI scans and it suddenly looks more interesting. An ML system can be shown thousands of MRI scans from patients with a specific type of cancer and thousands of scans from patients without cancer, and from these examples the program can work out how to make the correct diagnosis for any future MRI scan. A machine learning program may discover patterns that even the best specialists haven't seen.
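The idea of "learning patterns from labelled examples, then classifying new data" can be sketched in a few lines. This is a toy illustration only, not a real diagnostic pipeline: the feature values are invented stand-ins for measurements extracted from a scan, and the method (averaging each class and picking the nearest average) is about the simplest classifier there is.

```python
# Toy supervised learning: compute a per-class average ("centroid")
# from labelled training examples, then classify new data by
# whichever centroid it lies closest to.

def train(examples):
    """examples: list of (features, label) pairs. Returns label -> centroid."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Return the label whose centroid is nearest (squared distance)."""
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(features, centroids[label]))
    return min(centroids, key=dist)

# Invented two-feature examples: high values stand in for scan
# measurements associated with disease, low values for healthy tissue.
training_data = [
    ([0.9, 0.8], "cancer"), ([0.8, 0.9], "cancer"),
    ([0.1, 0.2], "healthy"), ([0.2, 0.1], "healthy"),
]
model = train(training_data)
print(predict(model, [0.85, 0.75]))  # → cancer
print(predict(model, [0.15, 0.15]))  # → healthy
```

Real systems use far richer models, but the shape is the same: training data in, learned pattern out, then predictions on unseen cases.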

In a commercial setting, including the utility sector, AI also has enormous potential. One example in debt collection is the identification of vulnerable customers who may need additional support, as well as of those who are likely to self-manage their outstanding debts. This intelligent segmentation means that resources can be concentrated on the customers who need them most, driving down costs and improving customer satisfaction.

But what are the risks?

Most security concerns around machine learning centre on privacy and autonomous decision making. Firstly, in order for AI systems to become expert in specific human behaviours, they must analyse a great deal of relevant personal information. There is a concern that this information may not be adequately protected from disclosure when the AI interacts with other systems.

Secondly, there is a concern that the system might make incorrect decisions that negatively affect people's lives, without the 'common sense' or experienced oversight of a human agent.

Although these concerns are legitimate, there are effective systems and controls that allow AI to be used in a way that protects privacy and guards against unmoderated AI decision making.

Protecting privacy
When AI learns about behaviour patterns from data concerning a large group of individuals, there is no need to include information that identifies any specific individual. For example, the name of a person isn’t relevant to the learning process: the AI is trying to learn about people in general, not any one specific person.

Responsible providers of ML applications will strictly anonymise any learning data so that the AI never sees any information that identifies any individual. In addition, once the AI has completed its learning, strong data separation controls ensure that the data it has used is securely stored away from the AI system itself and can never be seen or accessed by any third party. There is therefore no way to ‘trick’ the AI into revealing anything specific about the people the data was sourced from.
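The anonymisation step described above can be sketched as follows. This is a minimal illustration under assumed field names (the record structure and the `anonymise` helper are hypothetical, not Flexys's actual process): direct identifiers are dropped before the data reaches the model, and the record is keyed by a salted one-way hash so the original identity cannot be recovered from the training set. A production system would also need to consider re-identification via combinations of remaining fields.

```python
# Minimal sketch: strip direct identifiers from a customer record
# before it is used as learning data, replacing them with a salted
# one-way hash as an unlinkable record key. Field names are illustrative.

import hashlib

IDENTIFYING_FIELDS = {"name", "address", "email", "account_number"}

def anonymise(record, salt):
    # Salted SHA-256 of the account number: a stable pseudonym that
    # cannot be reversed to the original identifier without the salt.
    pseudonym = hashlib.sha256(
        (salt + record["account_number"]).encode()).hexdigest()[:12]
    # Keep only non-identifying attributes for the learning process.
    features = {k: v for k, v in record.items()
                if k not in IDENTIFYING_FIELDS}
    return {"id": pseudonym, **features}

customer = {
    "name": "A. Person",
    "account_number": "12345678",
    "email": "a.person@example.com",
    "balance_owed": 250.0,
    "missed_payments": 2,
}
print(anonymise(customer, salt="s3cret"))
```

The AI only ever sees the behavioural attributes (`balance_owed`, `missed_payments`), never the name, email, or account number, which is the point of the control described above.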

Autonomous decision making

Another area of potential concern is autonomous decision making, where AI might make incorrect decisions without any human oversight or intervention. With the new GDPR rules in force, this kind of machine learning is also subject to additional legal controls.

AI should only ever be used to support or augment existing human decision making. At no stage should AI systems make definitive or irreversible decisions that might adversely affect any person’s future. AI should empower its users to make more informed and effective decisions but not attempt to replace existing knowledge and experience.

The future is bright

With controls around privacy and autonomy, AI can support existing business functions by learning about customers and using that learning to improve future interactions with them. There’s a revolution coming, and businesses that adopt secure AI now will certainly have an advantage over those who are left behind!

To find out more about the areas covered in this article or for more details of our solutions and services contact us on 0117 428 5741 or email

This Expert View appears in the latest issue of Flex here, or download a pdf of Flex, Oct-Dec 2019 here
