This week, I attended the Gartner Security & Risk Management Summit, where the main topic centered on integrating Artificial Intelligence into cybersecurity programs without compromising security. These are some key points I took away from the summit.
The integration of Artificial Intelligence (AI) in cybersecurity represents both a revolutionary opportunity and a significant challenge for organizations worldwide. As highlighted at the recent Gartner Security & Risk Management Summit, while AI offers transformative potential for enhancing security operations, it also introduces unique risks that require careful management and strategic planning.
The Current State of AI in Cybersecurity
The cybersecurity landscape is experiencing a significant shift, with Generative AI (GenAI) emerging as a primary focus of recent investments. Major organizations, such as IBM, Microsoft, and Sony, have already demonstrated successful implementations of AI in their security operations, showcasing the practical value of these technologies in real-world scenarios.
AI’s Role in Enhancing Cybersecurity Resilience
AI has been utilized in cybersecurity for decades, but recent novel investments are primarily in Generative AI (GenAI). AI can significantly improve cyber resilience by boosting both efficiency and efficacy. Key areas where AI can enhance cybersecurity include:
- Planning and Monitoring: Predictive capabilities like CVE prioritization/remediation, AI-led tabletops/scenarios, secure coding, third-party risk assessment, behavioral detections, attack surface topology mapping, and intelligent exposure management.
- Acting: Automation in malware analysis, distributed security automation, polymorphic defenses, and AI-enabled indicators/metrics/signals.
- Operational Efficiency: Augmenting security operations, application security, incident response, and threat intelligence. For instance, by 2027, 25% of common Security Operations Center (SOC) tasks are predicted to become 50% more cost-efficient due to automation and hyperscaling.
Unique Risks and Challenges of AI in Cybersecurity
The integration of AI introduces several distinct risks that organizations must manage:
- Increased Resource Demands and Spending: Through 2025, GenAI is expected to lead to a surge in required cybersecurity resources, resulting in an incremental spend of more than 15% on application and data security.
- Magnified Technical Debt: AI not only exposes but also magnifies existing technical debt through its extensive data access and agency.
- Probabilistic and Unpredictable Nature: AI is probabilistic, not deterministic, meaning unpredictability is inherent to its value, making it difficult to control.
- New Attack Surfaces and Threats: AI introduces new attack surfaces and threats, including data loss, prompt injection, and model theft. Malicious actors can leverage GenAI for the development of deepfakes, malware, disinformation, and enhanced phishing attacks, thereby amplifying business risks such as sensitive data exposure, potential copyright violations, bias, hallucination, input/output violations, and brand damage. By 2028, 33% of enterprise software applications are expected to incorporate agentic AI, which introduces additional risks due to its autonomous nature and interaction patterns.
- Internal Violations: Through 2026, at least 80% of unauthorized AI transactions will be caused by internal policy violations (information oversharing, unacceptable use, misguided AI behavior) rather than malicious attacks.
- SOC Skill Erosion: By 2030, 75% of SOC teams may experience erosion in foundational security analysis skills due to overdependence on automation and AI. This can lead to a decrease in tacit knowledge, erosion of new skill sets, and a decline in critical thinking.
Strategies for Effective AI Integration and Risk Management
To effectively integrate AI into cybersecurity while managing its unique risks, organizations should focus on the following:
- Build a Robust AI Governance Framework:
- Leverage Existing Frameworks: Adopt and integrate guidance from established frameworks, including NIST AI RMF, MITRE ATLAS, OWASP Top 10 LLM, ISO 42001, and relevant federal and state regulations, such as the EU AI Act.
- Define Roles and Responsibilities: Establish effective AI governance by identifying dimension owners (organizational, societal, customer-facing, employee-facing AI dimensions), differentiating decision rights based on unique expertise, and ensuring governance decisions address key dilemmas.
- Security’s Role: Security leaders should take ownership of AI security governance and secure a seat at the AI committee, thereby raising awareness of risks that others may overlook. Do not attempt to own overall AI governance.
- Cross-Functional Collaboration: Collaborate with functional leaders and end-users when defining GenAI-related policies and standards.
- TRiSM (Trust, Risk and Security Management): Implement AI TRiSM as a “team sport” with a wide scope covering trustworthiness, fairness, reliability, robustness, efficacy, governance, legal, privacy, security, and transparency.
- Gain Comprehensive Visibility into Existing AI and Technical Debt:
- Minimum Viable Visibility: Understand crucial aspects such as where users are going, what they are asking, what data is being shared or referenced, what models are being leveraged, and how third parties are using AI.
- Secure Third-Party AI Consumption: Integrate questionnaires for third-party vendors, utilize Security Service Edge (SSE) to identify Large Language Model (LLM) traffic, and develop guidance for users regarding sanctioned and unsanctioned application use.
- Protect Enterprise AI Applications: Develop foundational security in cloud, data, and applications; focus on tactical deployment; secure AI application use; pilot advanced TRiSM tools; upskill security champions; require testing against adversarial prompts and prompt injections; and consider data security options when training and fine-tuning models.
- Establish Processes for Ongoing Maintenance, Monitoring, and Validation:
- AI Visibility and Traceability: Implement cataloging (discovery, inventory, licensing, risk scoring), automated documentation (bill of materials, model cards, regulatory reports), audit trails of state changes, mapping of AI integration with human/system processes, ownership of artifacts, data mapping, and validation of risks, regulations, and controls.
- Continuous Assurances and Evaluation: Conduct AI security testing (red teaming, scanning) for models, applications, and agents. Validate risk and trust controls, including bias, leakage, trust, and use-case alignment. Manage posture and ensure compliance reporting.
- AI TRiSM Runtime Functions: Deploy capabilities for model monitoring, anomalous activity detection, data protection, responsible AI filtering, runtime defense against malicious attacks, selective data obfuscation, and compliance enforcement.
- Adopt a Crawl-Walk-Run Approach: Gradually build capabilities for maintaining, monitoring, and validating AI systems, ensuring robust data integrity and security throughout their lifecycle.
- Address Skill Gaps and Foster a Future-Ready Workforce:
- Bridge Gaps: Gain internal experience, foster knowledge sharing, align with workforce development initiatives, develop flexible hiring strategies, and productize GenAI prompts.
- Develop a Talent Strategy: Create a talent strategy focused on future security skills and needs, leading to a significant increase in CISO effectiveness.
- Focus on New Skills: Prioritize learning prompt engineering, prompt injection, and understanding frameworks like OWASP Top 10 LLM.
- Maintain Human Oversight: Identify areas where human-led Security Operations Center (SOC) functions must persist and define human-in-the-loop requirements to counteract the erosion of foundational security analysis skills due to automation.
- Adopt a Strategic, Outcome-Driven Approach to AI Adoption:
- Support Business Demand Support the business demand for GenAI while balancing innovation with security.
- Cultivate Relationships: Cultivate relationships outside of IT, especially with functional leaders who play key roles in GenAI strategy development.
- Problem-Driven Adoption: Guide AI adoption by identifying the right problems to solve, rather than allowing AI to dictate use-case design. Focus on how AI can demonstrate value to solve specific problems and automate tasks, rather than just asking “What is the best AI?”.
- Agile Roadmaps and Experimentation: Favor tactical AI experiments to learn faster and adapt to compressed time horizons. Be an “AI Tinkerer” CISO, conducting multiple safer experiments and scaling, extending, or pausing based on value, rather than making a few big bets.
- Outcome-Driven Metrics (ODM): Define and use outcome-driven metrics for GenAI to guide defensible cybersecurity investment and evaluate AI security efforts. These metrics should encompass GenAI risk assessment, application security, quality assurance, third-party cybersecurity risk management, skills development, and data readiness.
- Prioritize Preemptive Cybersecurity:
- Cyber Deterrence: Integrate cyber deterrence and preemptive cybersecurity strategies to stay ahead of AI-based attacks, as these attacks are likely to outmaneuver human-led, reactive approaches. This involves altering adversary behavior before attacks occur, for example, by disrupting how attackers monetize exploits, imposing consequences, exposing techniques, and inflicting direct and indirect costs.
- Advanced Tools: Leverage tools like advanced cyber deception, automated moving target defense, predictive threat intelligence, automated exposure management, and advanced obfuscation.
The integration of AI in cybersecurity represents a critical evolution in how organizations protect their digital assets. By following a structured approach to implementation while maintaining awareness of potential risks, organizations can harness the power of AI to strengthen their security posture and build resilience against emerging threats.
The post The Future of Cybersecurity: Integrating AI While Managing New Risks appeared first on .