Is attacker laziness enabled by genAI shortcuts making them easier to catch?

OpenAI’s recent report detailing various defenses it has deployed to fight fraudsters, especially those leveraging its LLM to impersonate people on social media, has met with mixed reactions from experts.

One prominent analyst firm, Gartner, sees it as more of a PR stunt than evidence that OpenAI is delivering a cybersecurity differentiator.

“OpenAI’s current measures (for example, banning accounts, monitoring, collaborating) to help people defend against cyberattacks are reactive and very limited,” said Avivah Litan, a Gartner distinguished VP analyst currently focusing on AI strategies. “They don’t directly and fully address user needs, and come off as a PR marketing move to act like they do.”

Another analyst, though, saw something very positive in the report. 

‘Flips the usual script’

Jeremy Roberts, senior director of research at Info-Tech Research Group, said he thought the report was interesting because it illustrated how the nature of genAI made it so much easier to catch the crooks.

“OpenAI’s threat intel dump flips the usual script. Yes, it shows the expected laundry list of abuses, but the interesting headline is how often the attackers’ use of ChatGPT made them easier to catch,” Roberts said. “Because the threat actors kept plugging entire workflows into the model, OpenAI could see everything from brute force scripts to social media playbooks in near real-time, and tip off platforms or hosting providers before the campaigns broke out of Category 2 impact.”

That information can help enterprise CISOs in two ways, Roberts said. 

“First, large language model telemetry is becoming a bona fide threat intelligence feed. You’ll want a way to ingest hashes, domains, and TTPs [tactics, techniques, and procedures] that model providers surface,” Roberts said. “Second, AI misuse today is mostly efficiency gain, not capability gain. OpenAI found no evidence that its models gave nation state actors tools they couldn’t already script, just a speed boost that also widened their digital footprint. That means classic controls still work: monitor for script offloading, insist on human validation of resumé pipelines, and treat sudden spikes in polarizing social content as IO [Influence Operations] smoke signals.”

That might just make life easier for SOCs looking to detect and block these attacks.

“In short, AI driven offense is real but still somewhat clumsy, and transparency from model providers turns that clumsiness into a detection advantage,” Roberts said. “Security teams should press vendors for similar reporting and wire those indicators into their SOC before the next [genAI-fueled attack] shows up.”
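
For teams that want to act on that advice, the plumbing is simple. Below is a minimal, hypothetical Python sketch of what ingesting such an indicator feed could look like. The feed shape, field names, and output files are assumptions for illustration; OpenAI has not published a machine-readable indicator feed, and a real pipeline would push into a threat intelligence platform or SIEM rather than flat files.

```python
"""Toy ingestion of a model-provider indicator feed into SOC blocklists.

Hypothetical sketch: the feed structure, field names, and output files are
assumptions for illustration, not a published OpenAI format.
"""
import json
from pathlib import Path

# Example payload in the assumed shape: a report ID plus a list of typed
# indicators. The hash and domains are placeholders, not real IOCs.
SAMPLE_FEED = """
{
  "report": "provider-threat-intel-2025-06",
  "indicators": [
    {"type": "sha256", "value": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},
    {"type": "domain", "value": "fake-crosshair-updates.example"},
    {"type": "domain", "value": "cdn-telemetry.example"}
  ]
}
"""

def split_indicators(feed_text: str) -> dict[str, list[str]]:
    """Group indicator values by type so each can feed the right control."""
    feed = json.loads(feed_text)
    grouped: dict[str, list[str]] = {}
    for ioc in feed.get("indicators", []):
        grouped.setdefault(ioc["type"], []).append(ioc["value"])
    return grouped

def write_blocklists(grouped: dict[str, list[str]], out_dir: Path) -> None:
    """Emit one newline-delimited blocklist per indicator type, e.g. for a
    SIEM lookup table or DNS firewall import."""
    out_dir.mkdir(parents=True, exist_ok=True)
    for ioc_type, values in grouped.items():
        path = out_dir / f"{ioc_type}_blocklist.txt"
        path.write_text("\n".join(sorted(set(values))) + "\n")

if __name__ == "__main__":
    grouped = split_indicators(SAMPLE_FEED)
    write_blocklists(grouped, Path("blocklists"))
    print({ioc_type: len(values) for ioc_type, values in grouped.items()})
```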

Tactics of attackers

The OpenAI report, published in June, detailed a variety of defenses the company has deployed against fraudsters. One, for example, involved bogus job applications.

“We identified and banned ChatGPT accounts associated with what appeared to be multiple suspected deceptive employment campaigns. These threat actors used OpenAI’s models to develop materials supporting what may be fraudulent attempts to apply for IT, software engineering, and other remote jobs around the world,” the report said. “Although we cannot determine the locations or nationalities of the threat actors, their behaviors were consistent with activity publicly attributed to IT worker schemes connected to North Korea (DPRK). Some of the actors linked to these recent campaigns may have been employed as contractors by the core group of potential DPRK-linked threat actors to perform application tasks and operate hardware, including within the US.”

Another tactic involved a traditional cyberattack with malware.

“We banned a cluster of ChatGPT accounts that appeared to be operated by a Russian-speaking threat actor. This actor used our models to assist with developing and refining Windows malware, debugging code across multiple languages, and setting up their command-and-control infrastructure,” the report said. “The actor demonstrated knowledge of Windows internals and exhibited some operational security behaviors. Based on the operation’s focus on using a trojanized crosshair gaming tool and its stealthy tactics, we have dubbed it ScopeCreep.”  

Perhaps the most interesting part of the report dealt with details of the attackers’ tradecraft that CISO teams can watch for.

“This threat actor had a notable approach to operational security. They utilized temporary email addresses to sign up for ChatGPT accounts, limiting each ChatGPT account to one conversation about making one incremental improvement to their code. They then abandoned the original account and created a new one,” the report noted. “The actor distributed the ScopeCreep malware through a publicly available code repository that impersonated a legitimate and popular crosshair overlay tool (Crosshair-X) for video games.”

The report said that unsuspecting users who downloaded and ran the malicious version would have additional malicious files downloaded from attacker infrastructure and executed. Then the malware would initiate a multi-stage process to escalate privileges, establish stealthy persistence, notify the threat actor, and exfiltrate sensitive data while evading detection. “The threat actor utilized our model to assist in developing the malware iteratively, by continually requesting ChatGPT to implement further specific features,” OpenAI said.
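
That single-use, throwaway-account pattern is concrete enough to illustrate with a toy filter. The sketch below is purely hypothetical and is not OpenAI’s detection logic; the account fields and the disposable-domain list are assumptions, but the rule it encodes, an account on a disposable email domain holding exactly one short conversation, mirrors the tradecraft the report describes.

```python
"""Toy heuristic for the "one account, one conversation" pattern.

Hypothetical illustration only: the field names and the disposable-domain
list are assumptions, not OpenAI telemetry or detection rules.
"""
from dataclasses import dataclass

# Assumed set of throwaway email providers; a real deployment would use a
# maintained disposable-domain feed.
DISPOSABLE_DOMAINS = {"mailinator.com", "tempmail.example", "10minutemail.example"}

@dataclass
class Account:
    email: str
    conversation_count: int
    total_prompts: int

def looks_single_use(account: Account) -> bool:
    """Flag accounts registered on a disposable domain that hold exactly one
    short conversation before going quiet."""
    domain = account.email.rsplit("@", 1)[-1].lower()
    return (
        domain in DISPOSABLE_DOMAINS
        and account.conversation_count == 1
        and account.total_prompts <= 10
    )

if __name__ == "__main__":
    accounts = [
        Account("dev-stage-41@mailinator.com", 1, 4),  # fits the pattern
        Account("analyst@corp.example", 37, 900),      # ordinary sustained use
    ]
    for acct in accounts:
        print(acct.email, "->", "flag" if looks_single_use(acct) else "ok")
```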

Will Townsend, a VP and principal analyst with Moor Insights & Strategy, was more charitable than Gartner.

“It clearly demonstrates the depth that OpenAI is taking to secure models and mitigate poisoning that can lead to hallucinations and GPU workload disruption,” Townsend said.

Detection ‘easy to sidestep’

However, Gartner’s Litan detailed several of her concerns about the OpenAI report that colored her opinion of it.

“It is reactive and measures [attacks] after misuse is detected,” such as after malware is created, Litan said. She also saw the proposed defense techniques as “resource-intense monitoring that relies on heavy-handed human resources for detection. Not scalable.”

She also observed that, unsurprisingly, OpenAI “only focuses on OpenAI models and not other AI platforms or open source models.”

Litan said the techniques OpenAI described are relatively easy for attackers to sidestep. “There is a risk of attacker evasion [because] their reactive detection can’t keep up with fast evolving tactics,” she said.
