Code security in the AI era: Balancing speed and safety under new EU regulations

The rapid adoption of AI for code generation has been nothing short of astonishing, and it’s completely transforming how software development teams function. According to the 2024 Stack Overflow Developer Survey, 82% of developers now use AI tools to write code. Major tech companies now depend on AI to create code for a significant portion of their new software, with Alphabet’s CEO reporting on the company’s Q3 2024 earnings call that AI generates approximately 25% of Google’s new code. Given how rapidly AI has advanced since then, that share is likely even higher today.

But while AI can vastly increase efficiency and accelerate the pace of software development, the use of AI-generated code is creating serious security risks, all while new EU regulations are raising the stakes for code security. Companies are finding themselves caught between two competing imperatives: maintaining the rapid pace of development necessary to remain competitive while ensuring their code meets increasingly stringent security requirements.

The primary issue with AI-generated code is that the large language models (LLMs) powering coding assistants are trained on billions of lines of publicly available code that has never been screened for quality or security. Consequently, these models can replicate existing bugs and security vulnerabilities, and software built with their unvetted output inherits those flaws.

Though the quality of AI-generated code continues to improve, security analysts have identified several weaknesses that appear frequently. These include improper input validation, deserialization of untrusted data, operating system command injection, path traversal, unrestricted upload of dangerous file types, and insufficiently protected credentials (CWE-522).
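
To make one of these weaknesses concrete, the short Python sketch below shows a path traversal flaw of the kind that often turns up in generated file-handling code, alongside a safer variant; the directory, function, and file names are illustrative only, not drawn from any particular assistant’s output.

    from pathlib import Path

    UPLOAD_DIR = Path("/var/app/uploads")  # illustrative base directory

    def read_upload_vulnerable(filename: str) -> bytes:
        # Vulnerable: a request for "../../etc/passwd" escapes the upload
        # directory because the user-supplied name is joined without any checks.
        return (UPLOAD_DIR / filename).read_bytes()

    def read_upload_safe(filename: str) -> bytes:
        # Safer: resolve the final path and confirm it still lies inside
        # the intended directory before reading anything.
        target = (UPLOAD_DIR / filename).resolve()
        if not target.is_relative_to(UPLOAD_DIR.resolve()):
            raise ValueError("path traversal attempt blocked")
        return target.read_bytes()

The same resolve-then-verify habit generalizes to the other weaknesses on this list: validate untrusted input against what the program actually expects before acting on it.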

Black Duck CEO Jason Schmitt sees a parallel between the security issues raised by AI-generated code and those that surfaced in the early days of open source.

“The open-source movement unlocked faster time to market and rapid innovation,” Schmitt says, “because people could focus on the domain or expertise they have in the market and not spend time and resources building foundational elements like networking and infrastructure that they’re not good at. Generative AI provides the same advantages at a greater scale. However, the challenges are also similar, because just like open source did, AI is injecting a lot of new code that contains issues with copyright infringement, license issues, and security risks.”

The regulatory response: EU Cyber Resilience Act

European regulators have taken notice of these emerging risks. The EU Cyber Resilience Act is set to take full effect in December 2027, and it imposes comprehensive security requirements on manufacturers of any product that contains digital elements.

Specifically, the act mandates security considerations at every stage of the product lifecycle: planning, design, development, and maintenance. Companies must provide ongoing security updates by default, and customers must be given the option to opt out, not opt in. Products that are classified as critical will require a third-party security assessment before they can be sold in EU markets.

Non-compliance carries severe penalties: fines of up to €15 million or 2.5% of total worldwide annual turnover for the preceding financial year, whichever is higher. Those stakes underscore the urgency for organizations to implement robust security measures now.

“Software is becoming a regulated industry,” Schmitt says. “Software has become so pervasive in every organization — from companies to schools to governments — that the risk that poor quality or flawed security poses to society has become profound.”

Despite these security challenges and regulatory pressures, organizations cannot afford to slow down development. Market dynamics demand rapid release cycles, and AI has become a critical tool for accelerating development. Research from McKinsey highlights the productivity gains: AI tools enable developers to document code functionality twice as fast, write new code in nearly half the time, and refactor existing code one-third faster. In competitive markets, those who forgo the efficiencies of AI-assisted development risk missing crucial market windows and ceding advantage to more agile competitors.

The challenge organizations face is not choosing between speed and security but finding a way to achieve both simultaneously.

Threading the needle: Security without sacrificing speed

The solution lies in technology approaches that do not force compromises between the capabilities of AI and the requirements of modern, secure software development. Effective partners provide:

  • Comprehensive automated tools that integrate seamlessly into development pipelines, detecting vulnerabilities without disrupting workflows (a minimal pipeline-gate sketch follows this list).
  • AI-enabled security solutions that can match the pace and scale of AI-generated code, identifying patterns of vulnerability that might otherwise go undetected.
  • Scalable approaches that grow with development operations, ensuring security coverage doesn’t become a bottleneck as code generation accelerates.
  • Depth of experience in navigating security challenges across diverse industries and development methodologies.
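
As a sketch of the first point above, the following minimal CI gate reads a scanner’s findings report and fails the build only when high-severity issues appear, so routine scans stay out of developers’ way. The scan-report.json path and the field names are assumptions for illustration, not any particular vendor’s output format.

    import json
    import sys
    from pathlib import Path

    # Hypothetical findings file written by a security scanner earlier in the pipeline.
    REPORT_PATH = Path("scan-report.json")
    BLOCKING_SEVERITIES = {"critical", "high"}  # severities that should stop the build

    def main() -> int:
        findings = json.loads(REPORT_PATH.read_text())
        blocking = [f for f in findings
                    if f.get("severity", "").lower() in BLOCKING_SEVERITIES]
        for finding in blocking:
            print(f"{finding.get('rule', 'unknown')}: "
                  f"{finding.get('file', '?')} [{finding['severity']}]")
        if blocking:
            print(f"Build blocked: {len(blocking)} high-severity finding(s).")
            return 1  # a non-zero exit code fails the CI job
        print("No blocking findings; build can proceed.")
        return 0

    if __name__ == "__main__":
        sys.exit(main())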

As AI continues to transform software development, the organizations that thrive will be those that embrace both the speed of AI-generated code and the security measures necessary to protect it.

Black Duck cut its teeth providing security solutions that facilitated the safe and rapid adoption of open-source code, and it now provides a comprehensive suite of tools to secure software in the regulated, AI-powered world.

Learn more about how Black Duck can secure AI-generated code without sacrificing speed.
