Almost every organization is using an AI platform these days. Large language models (LLMs) are being integrated into existing applications, bundled with new ones, tried out by employees, or selected for addition to workflows.
However, warn cybersecurity experts, those models have to be chosen with risk management in mind, like any other application, regardless of the hype or pressure from the CEO.
When a CISO chooses an AI model for their organization, significant policies, procedures, and technology controls are needed, as well as a risk analysis, said Joseph Steinberg, a US-based expert on cybersecurity, AI, and privacy.
“But,” he noted, “many organizations have invested very little in any of these three areas, instead opting to ignore the scale of the security issues that LLMs create.”
“One critical point for CISOs to consider is that regardless of the number of security breaches, there is the issue of data leaking via the prompts themselves,” he said. “AI users may inadvertently share private information via their prompts — and the people doing so may have no clue that they are doing so. For example, if 10 people coming from IP addresses belonging to a particular organization start asking an AI technical questions about a particular technology, or about how to implement certain types of features, the AI may learn that that organization is both using such technology and lacks advanced knowledge about it.”
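To make the prompt-leakage risk concrete, the following is a minimal sketch of the kind of pre-submission check an organization could run before a prompt leaves its network. The patterns and the scan_prompt helper are hypothetical illustrations, not controls prescribed by Steinberg or the article; real data-loss-prevention rules would be far broader and policy-driven.

```python
import re

# Hypothetical examples of data an organization might treat as sensitive in
# outbound prompts; production DLP rule sets would be much more extensive.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|AKIA)[A-Za-z0-9_-]{16,}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "internal_host": re.compile(r"\b[\w-]+\.corp\.example\.com\b"),
}


def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data categories detected in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]


prompt = "Why does sk_live_abc123def456ghi789 get rejected by billing.corp.example.com?"
findings = scan_prompt(prompt)
if findings:
    # A real deployment might block, redact, or route the prompt for review.
    print(f"Prompt held for review: contains {', '.join(findings)}")
```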
Research finds more security risks
Large language models may pose even more security risk than CSOs have considered, a study released this week suggests.
Researchers at Cybernews say five of the 10 AI providers they examined, using publicly available information, scored B or lower for risk. The remaining five, including Anthropic, Cohere, and Mistral, were rated low risk.
Two major players, OpenAI and 01.AI, received a D score, indicating high risk, while Inflection AI scored an F, a critical security risk.
In addition, five of the 10 providers had recorded data breaches, the researchers said. OpenAI allegedly suffered the most breaches, with 1,140 incidents, including a data leak just nine days before the analysis was conducted; Perplexity AI allegedly experienced a breach 13 days earlier, with 190 corporate credentials compromised.
A spokesperson for OpenAI gave this statement to CSO: “We value research into AI security, and we take the security and the privacy of our users seriously, are transparent on our security program progress, and regularly publish threat intelligence reports. Despite requesting the underlying report, we have not been given access and cannot evaluate its data or methodology. That said, we disagree with the claims made in the article.”
We also emailed Inflection AI for comment. No response had been received by press time.
Asked for comment on the Cybernews story, Robert T. Lee, chief of research at the SANS Institute, said in an email that “most LLMs can’t pass a basic security sniff test.” Referring to the Cybernews AI provider ratings, he said that “Ds and Fs, plus breaches every week, tell you everything you need to know — security’s not even on the roadmap.”
Advice to CSOs
Lee said that CSOs should consider the following before approving any LLM:
- Training data: figure out where the model got its info. Random web grabs expose your secrets;
- Prompt history: if your questions stick around on their servers, they’ll turn up in the next breach bulletin;
- Credentials: stolen API keys and weak passwords keep attackers fed. Push for MFA (multifactor authentication) and real-time alerts;
- Infrastructure: make sure TLS is tight, patches land without delay, and networks are sealed off. Half-baked configs get popped;
- Access controls: lock down roles, log every AI call, and stream logs into SIEM/DLP (see the gateway sketch after this list). Shadow AI is the silent assailant;
- Incident drills: insist on immediate breach notifications. Practice leaked-key and prompt-injection scenarios so you’re not flailing when it hits the fan.
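Several of these items, especially the access-control and logging advice, can be enforced by putting a thin gateway in front of whichever LLM API an organization uses. The sketch below is a hypothetical illustration of that pattern, not a tool named by Lee or SANS; the call_llm backend, the role table, and the log fields are placeholder assumptions.

```python
import json
import logging
import time
import uuid

# Structured, one-line JSON logs are easy to stream into a SIEM/DLP pipeline.
logger = logging.getLogger("llm_gateway")
logging.basicConfig(level=logging.INFO, format="%(message)s")

ALLOWED_ROLES = {"engineering", "support"}  # roles permitted to call the model


def call_llm(prompt: str) -> str:
    """Placeholder for the organization's actual model or API call."""
    return "model response"


def gateway(user: str, role: str, prompt: str) -> str:
    """Enforce role-based access and log every AI call before forwarding it."""
    request_id = str(uuid.uuid4())
    if role not in ALLOWED_ROLES:
        logger.info(json.dumps({"event": "llm_call_denied", "id": request_id,
                                "user": user, "role": role, "ts": time.time()}))
        raise PermissionError(f"role '{role}' is not allowed to call the LLM")

    response = call_llm(prompt)
    # Log call metadata rather than the full prompt, so secrets in prompts
    # are not copied into yet another data store.
    logger.info(json.dumps({"event": "llm_call", "id": request_id, "user": user,
                            "role": role, "prompt_chars": len(prompt),
                            "ts": time.time()}))
    return response


print(gateway("alice", "engineering", "Summarize this incident report."))
```

Logging only metadata keeps the audit trail SIEM-friendly without turning the log store into another copy of whatever sensitive data ends up in prompts.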
“Treat LLMs like they’re guarding your bank vault,” Lee said. “Forget the hype. Put them through the same brutal vetting you’d use on any mission-critical system. Do that, and you get AI’s upside without leaving the backdoor wide open.”