How to establish an effective AI GRC framework

Enterprise use of artificial intelligence comes with a wide range of risks in areas such as cybersecurity, data privacy, bias and discrimination, ethics, and regulatory compliance. As such, organizations that create a governance, risk, and compliance (GRC) framework specifically for AI are best positioned to get the most value out of the technology while minimizing its risks and ensuring responsible and ethical use.   

Most companies have work to do in this area. A recent survey of 2,920 worldwide IT and business decision-makers conducted by Lenovo and research firm IDC found that only 24% of organizations have fully enforced enterprise AI GRC policies.

“If organizations don’t already have a GRC plan in place for AI, they should prioritize it,” says Jim Hundemer, CISO at enterprise software provider Kalderos.

Generative AI “is a ubiquitous resource available to employees across organizations today,” Hundemer says. “Organizations need to provide employees with guidance and training to help protect the organization against risks such as data leakage, exposing confidential or sensitive information to public AI learning models, and hallucinations, [when] a model’s prompt response is inaccurate or incorrect.”

Recent reports have shown that one in 12 employee generative AI prompts includes sensitive company data, and that organizations are no closer to containing shadow AI’s data risks despite providing employees with sanctioned AI options.

Organizations need to incorporate AI into their GRC framework — and associated policies and standards — and data is at the heart of it all, says Kristina Podnar, senior policy director at the Data and Trust Alliance, a consortium of business and IT executives at major companies aiming to promote the responsible use of data and AI.

“As AI systems become more pervasive and powerful, it becomes imperative for organizations to identify and respond to those risks,” Podnar says.

Because AI introduces risks that traditional GRC frameworks may not fully address, such as algorithmic bias and lack of transparency and accountability for AI-driven decisions, an AI GRC framework helps organizations proactively identify, assess, and mitigate these risks, says Heather Clauson Haughian, co-founding partner at CM Law, who focuses on AI technology, data privacy, and cybersecurity.

Original article: How to establish an effective AI GRC framework | CSO Online