Key questions CISOs must ask before adopting AI-enabled cyber solutions

Key questions CISOs must ask before adopting AI-enabled cyber solutions

Adversaries are hijacking AI technology for their own purposes, generating deepfakes, creating clever phishing lures, and launching novel types of advanced attacks. They are also targeting AI systems with prompt injection attacks aimed at tricking models into revealing sensitive data. And users are leaking sensitive data through the unauthorized or careless use of AI.

CISOs who don’t respond to these threats with their own AI-powered defenses are putting their organizations at risk.

According to IBM’s 2025 Cost of a Data Breach report, based on a survey by the Ponemon Institute, organizations that extensively deployed AI across their enterprise cybersecurity defenses slashed the amount of time it took to recover from a breach by 80 days, lowering their average breach cost by $1.9 million. And 20% of organizations surveyed said they suffered a breach due to security incidents involving shadow AI. The additional breach cost associated with high levels of shadow AI was an estimated $670,000, according to the report.

Virtually every established security vendor and scores of startups are touting AI-powered security solutions. Incumbents are embedding AI into their existing toolsets. And startups are offering autonomous agents that address specific areas such as vulnerability assessments, email security, endpoint security, or cloud data security.

IDC analyst Craig Robinson says, “Vendors are rapidly embedding AI and generative AI into their incident response workflows to enhance speed, accuracy, and scalability.” Key applications include threat detection, triage, and anomaly detection; generative AI for automated report generation, timeline reconstruction, and executive summaries; natural language queries for log analysis and threat hunting; and AI agents for malware analysis, code interpretation, and adversary behavior prediction.

A survey of CISOs conducted by Splunk reveals that the top use cases for AI and gen AI security are threat detection, triaging alerts, querying security data, automating alert management and response, threat hunting, suggesting investigation steps, threat analysis, and processing phishing emails. Novel uses of AI for defense are rapidly evolving, including machine-learning generative adversarial networks. And agentic AI use cases for cybersecurity are already on the horizon.

For CISOs looking to enhance their security defenses with AI-powered tools, here are some key questions to ask prospective AI security vendors. But before CISOs engage with vendors, it’s important to get your ducks in a row.

What CISOs need to think about before talking to vendors

How does the use of AI at my organization expand our attack surface? It’s important to get a clear picture of how current and future AI implementations at your organization create new potential vulnerabilities.

Achieving that clarity will require asking a wide range of questions that span the organization and beyond. For example, are we deploying or planning to deploy clusters of GPU-based servers in the data center to run AI workloads? Will my current network detection and response tools be able to handle this surge in additional traffic? Are software developers writing new AI apps? How do I protect that development pipeline? How is AI being embedded in my organization’s supply chain? Are we currently building or planning to build our own LLMs on premises, in the cloud, or will we be using third-party LLMs? Everyday SaaS productivity tools incorporate AI into workflows; how do I protect that back-and-forth traffic, which might contain sensitive information?

According to the IBM-Ponemon breach report, 13% of surveyed organizations have experienced an attack on their AI models or applications. “That percentage is small, for now. We are likely to see many more in the coming 12 months, unless security leaders and their business counterparts recognize the risk and pivot to focus more intently on AI security,” says the report.

What is my risk tolerance, level of maturity, and regulatory environment? There’s no point in buying an agentic tool that the vendor claims to be capable of acting autonomously, if the culture of your shop is that these types of tools won’t be completely trusted or properly deployed. Your security practitioners might not get full value out of them or might not use them at all. If your organization is in a highly regulated industry, will it pass muster with the auditors if your logs and other telemetry data are sent to the cloud for processing?

What problem am I trying to solve? Before jumping into an AI security solution, CISOs need to clearly identify the highest priority risks. Are you worried about data leakage, ransomware, incident response, data privacy regulations, securing the application development pipeline, securing assets in the cloud? Or all the above? It’s important to align your most critical needs with the strengths of the vendor solution.

Platform or point product? The perennial point product vs. platform conundrum applies to AI security as well. If my organization has an incumbent security platform vendor and is satisfied with that vendor, does the incumbent offer sufficient AI security capabilities to meet my needs, currently and in the future? Or do I need to seek out point products to address specific gaps?

Questions to ask vendors about their AI security offerings

There are several areas where CISOs will want to focus their attention when considering AI-powered cyber solutions, including the following:

Shadow AI: Uncovering and addressing shadow AI throughout the organization is a key issue for security leaders today. But so too is ensuring that sanctioned AI-enabled solutions are not misused in similar ways.To protect against this, CISOs should ask: Does the vendor offer discovery capabilities to help identify shadow AI usage? What policies and procedures, education and training, identity management and access control, data leakage protection, does the vendor offer to enable employees to continue to use AI, while layering on security features?

Data protection: The superpower of AI security tools is their ability to ingest and process vast amounts of data in near real-time. But where does that data reside? On-prem, in the cloud, or both? Who is responsible for protecting LLMs and other data stores both at rest and in motion? If the vendor is using homegrown or third-party AI models, and providing conduits to the customer’s third-party AI models, how does the vendor protect those pipelines? How does the security team detect vulnerabilities or data leakage in a “black box” LLM? Who is responsible for protecting LLMs against prompt injection attacks or other types of model manipulation? Will my data be used to train the vendor’s models and, if so, how can I be sure that data is protected?

Metrics: Much of the initial hype surrounding AI has turned to disappointment because organizations are struggling to identify benefits from AI pilot projects. CISOs need to be able to provide measurable results for any AI security tool. That can include improved mean-time-to-discovery (MMTD) and mean-time-to-recovery (MTTR) in the event of a breach, a quantifiable reduction in the rate of false positives, improved productivity among SOC staffers, increased accuracy of anomaly detection and threat hunting activities. CISOs should ask vendors and advisors, What metrics will best reflect the value these AI capabilities will have, and can those be captured to help assess the efficacy of the AI capabilities and our use of them?  

Workforce: What kind of training does the vendor provide for the most efficient use of AI, generative AI, and most importantly, agentic AI. Will the AI tool be able to automate low-level tasks so that my SOC analysts can focus on higher-level activities? How does the vendor offering help me to address the skills gap? Are there models and best practices for reorganizing my workforce for the era of AI. Will the use of AI security tools help address overwork and burnout among my staff? Are there specific guidelines or best practices for how my security team should interact with the AI in a human-in-the-loop, copilot-style scenario?

Integration: How will the AI security tool integrate with my current security stack and my security processes and procedures? Most CISOs already have an overload of tools — EDR, XDR, SIEM, SOAR, CSPM, etc. What APIs and pre-built connections are provided to seamlessly integrate with my existing infrastructure? What types of agreements and alliances do you have with other vendors? How can I maintain a single dashboard? If a platform vendor has recently acquired an AI security tool, how well is that new capability integrated within the platform?

Regulation: How will your tool conform to the specific regulatory requirements in my industry with regard to data storage and data privacy. Do you keep up with changes to regulations?

Trust: How can I make sure that my security team trusts the decisions and recommendations that the AI systems make? In what ways can security practitioners go back and retrace how a model reached a conclusion?

Scalability: It’s to be expected that data stores will increase in volume. And an enterprise might pilot the tool at one location, with plans to roll it out globally. How can I be sure the cloud-based AI tools can scale to meet my needs? Can the system handle large traffic volumes without performance delays. Does the solution encompass endpoints, networks, cloud, SaaS?

Roadmap: What is your roadmap for updating the tool, delivering timely patches and providing new capabilities on a regular basis?

Model integrity: How do you address concerns about bias in your models. How do you ensure data accuracy and integrity? How do you assure that the models are constantly updated to reflect changing real-world conditions.

Vendor credibility: How long has this vendor been around? Do they have leaders with an industry pedigree? Do they have references that you can check? What is the financial viability of the company? For a startup, how much money have they raised? Are they generating revenue?

Cost: What are the licensing terms? What types of SLAs or other performance metrics are included with a subscription?

​The original article found on Key questions CISOs must ask before adopting AI-enabled cyber solutions | CSO Online Read More