
9 ways CISOs can combat AI hallucinations
AI hallucinations are a well-known problem, and when it comes to compliance work, these convincing but inaccurate outputs can cause real damage: poor risk assessments, incorrect policy guidance, or even inaccurate incident reports. Cybersecurity leaders say the real trouble starts when AI moves past writing summaries and begins making judgment calls. That's when it's asked to decide things such as whether security controls are doing their job, whether a company is meeting compliance standards, or whether an incident was handled the right way.

Here are nine ways CISOs can tackle the problem of AI hallucinations.

Keep humans in the loop for high-stakes decisions

Fred Kwong, vice president...