AI creates new security risks for OT networks, warns NSA

The security of operational technology (OT) in critical infrastructure has been a recurring theme for years, but this week the US National Security Agency (NSA) and its global partners added a new concern to the mix: how the increasing use of AI in OT risks making things worse.

The scope of these concerns, and guidance for addressing them, is outlined in the Principles for the Secure Integration of Artificial Intelligence in Operational Technology, authored by the NSA in conjunction with the Australian Signals Directorate’s Australian Cyber Security Centre (ASD’s ACSC) and a global alliance of national security agencies.

While the use of AI in critical infrastructure OT is in its early days, the guidance reads like an attempt by the NSA and its partners to get ahead of the problem before misuse or misapplication becomes entrenched. Although drafted for OT admins, the guidelines mirror concerns that also apply to IT administration.

Currently, AI is being put to work in OT networks in the energy, water treatment, healthcare, and manufacturing sectors for the same reason it is being used elsewhere: to optimize and automate processes, thereby improving efficiency and uptime.

The worry is that organizations are jumping into a new and far from battle-hardened technology without assessing its limitations, echoing what has been happening in IT. Measuring risk against the industrial control systems (ICS) Purdue Model hierarchy, the guidelines enumerate worries such as adversarial prompt injection and data poisoning, data collection leading to reduced safety, and “AI drift” in which models become less accurate as new data diverges from training data.
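The "AI drift" concern can be made concrete with a toy monitoring check. The following is a minimal sketch in Python, not anything prescribed by the guidance: it assumes a hypothetical sensor feed, and the function names and the three-sigma threshold are illustrative choices.

```python
# Illustrative sketch: flag drift when live sensor data shifts away from
# the distribution the model was trained on. Names and thresholds are
# assumptions for demonstration, not from the NSA/ASD guidance.
from statistics import mean, stdev

def drift_score(train_window, live_window):
    """How many training standard deviations the live mean has moved
    from the training mean (a crude univariate drift signal)."""
    mu, sigma = mean(train_window), stdev(train_window)
    if sigma == 0:
        return float("inf") if mean(live_window) != mu else 0.0
    return abs(mean(live_window) - mu) / sigma

def is_drifting(train_window, live_window, threshold=3.0):
    """Flag drift when the shift exceeds `threshold` sigmas."""
    return drift_score(train_window, live_window) > threshold

# Hypothetical pump-pressure readings: stable vs. a sustained upward creep.
train = [50.0, 50.5, 49.8, 50.2, 49.9, 50.1]
live_ok = [50.3, 49.7, 50.0, 50.4]
live_bad = [55.1, 55.6, 54.9, 55.3]
```

In production, operators would track many signals and use distribution-level tests rather than a single mean shift, but the principle is the same: an OT model trained on yesterday's conditions needs an explicit check that today's data still resembles them.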

Also mentioned: AI can lack the explainability necessary to diagnose errors, there are difficulties meeting compliance requirements as AI rapidly evolves, and there’s a human de-skilling effect caused by a creeping over-dependence on AI. Likewise, AI alerts might lead to distraction and cognitive overload among employees.

Finally, the tendency of AI technologies such as chatbots and LLMs to hallucinate raises doubts about whether the technology is robust enough to be used in environments where safety is a priority. “AI may not be reliable enough to independently make critical decisions in industrial environments. As such, AI such as LLMs almost certainly should not be used to make safety decisions for OT environments,” said the authors.

This underlines an important difference between using AI in an OT setting and an IT one – OT networks are by nature safety-critical. Although many of the issues are the same, the margin for error is much smaller.

Struggling to unwind

“The guidance raises the right questions: what risks are we introducing, what value does AI truly bring, who is accountable for oversight, and how do we respond when the technology misbehaves?” commented Sam Maesschalck, an OT engineer with cybersecurity training platform Immersive Labs. “We’ve already seen what happens when operational demands outpace secure design. IT/OT convergence brought efficiency, but it also exposed OT networks in ways the industry is still struggling to unwind.”

According to Maesschalck, grafting AI systems onto OT infrastructure will fail if pre-existing issues aren’t addressed first. These include the inability of some OT devices to feed AI platforms the volumes of data they require, and incomplete asset inventories, which make problematic interactions harder to predict.

Among the guidelines’ recommendations are that organizations adopt CISA’s secure-by-design principles, and that they assess whether developing an AI-OT project in-house would give them more control over AI design and implementation in the long run.

“This kind of guidance is influential because operators are looking for clarity. Having government-backed principles to reference gives owners and engineers something concrete to point to when they push back on unsafe or rushed adoption. It also reinforces how essential education is,” said Maesschalck.

The guidelines arrive on the heels of last year’s NSA and ACSC report listing the steps organizations should take to secure OT in critical infrastructure. But neither document addresses continuing concerns that OT security still doesn’t get the budget it warrants.
