AI Risks: Safeguarding Your Organization From Emerging Cyber Threats

OCTOBER 1, 2024

While the benefits of artificial intelligence (AI) have frequently appeared in headlines, the risks associated with this new technology are becoming major concerns for organizations. According to a recent report from Arize AI, more than two-thirds (69.4%) of organizations that mentioned AI in their latest annual reports did so in the context of risk, a 473.5% increase from 2023.

The liabilities that AI creates for organizations are clearly on the rise, particularly in areas like data privacy, security, and legal compliance. These exposures include cybersecurity vulnerabilities, potential biases, ethical concerns, and professional and product liability hazards. Organizations can protect themselves by taking a proactive and multi-layered approach to risk mitigation and transfer. This includes adopting an AI governance framework, implementing AI risk awareness and training, and employing risk management strategies. Learn more about each of these areas below and how they can help safeguard your organization.

Follow these governance practices to promote ethical, transparent, and accountable use of AI technologies — thereby minimizing legal, operational, and reputational harm:

  • Clear use policies: Create and enforce policies around the ethical use of AI, ensuring that AI systems align with legal and ethical standards.
  • Bias mitigation: Implement techniques to detect and reduce bias in training data and model outputs to avoid discriminatory outcomes that can lead to legal risk and reputational damage (a minimal bias check is sketched after this list).
  • User consent: Obtain clear consent from users (customers, employees, or third parties) when using their data for AI-driven decisions, reducing potential liabilities from privacy violations.
  • Adherence to AI regulations: Regularly audit AI systems to ensure compliance with privacy, security, and anti-discrimination laws. These laws are evolving rapidly as governments and regulators address the risks associated with AI technologies. Major privacy laws include the General Data Protection Regulation (GDPR), which requires AI systems processing personal data to have a lawful basis for processing (e.g., consent). The California Consumer Privacy Act (CCPA) creates obligations for businesses, including provisions on automated decision-making.
  • Protective contractual provisions: When procuring AI services or products, ensure contracts include clear liability clauses, vendor responsibilities, and indemnity agreements to protect against potential damages caused by third-party AI tools. For more details on this crucial issue, read our article on protecting your organization from increasing third-party cyber exposure.
  • Transparency and explainability: Ensure that AI decisions can be explained in human terms, minimizing risks related to biased or unfair decision-making.
  • Expert validation: AI can improve research efficiency and enable deeper dives into data and information, but its output should not be accepted at face value. Cross-check results against traditional sources of subject-matter expertise.
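
To make the bias-mitigation practice above more concrete, the sketch below shows one simple fairness check: measuring the gap in positive-outcome rates across groups in AI-driven decisions. It is a minimal illustration only; the column names, sample data, and 10% tolerance are hypothetical assumptions, and demographic parity is just one of several competing fairness metrics that should be selected with input from legal counsel.

    # A minimal, illustrative demographic-parity check on AI decision data.
    # Column names, sample data, and the 10% tolerance are hypothetical
    # assumptions for demonstration, not requirements of any law or policy.
    import pandas as pd

    def demographic_parity_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
        """Largest difference in positive-outcome rates across groups."""
        rates = df.groupby(group_col)[outcome_col].mean()
        return float(rates.max() - rates.min())

    # Hypothetical AI-driven loan decisions, keyed by a protected attribute
    decisions = pd.DataFrame({
        "applicant_group": ["A", "A", "A", "B", "B", "B"],
        "approved":        [1,   1,   0,   1,   0,   0],
    })

    gap = demographic_parity_gap(decisions, "applicant_group", "approved")
    if gap > 0.10:  # organization-defined tolerance
        print(f"Potential disparate impact: approval-rate gap of {gap:.0%}")

A check like this is only a starting point: it can flag a disparity for review, but determining whether a gap is legally or ethically problematic requires human judgment.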

Implement these training and monitoring practices to help mitigate AI risks by equipping employees with the knowledge to identify and responsibly manage potential AI-related issues and vulnerabilities:

  • Employee training: Be clear regarding your organization’s allowed use of AI, and train employees regularly on the potential legal, ethical, and operational risks of using AI to prevent misuse or accidental exposure to liabilities.
  • AI risk monitoring: Establish ongoing monitoring processes to identify and mitigate emerging risks as AI technologies evolve, especially in areas like data security, employment law, and consumer protection (a minimal drift-monitoring sketch follows this list).
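
As one concrete example of ongoing monitoring, the sketch below tracks the population stability index (PSI), a common statistic for detecting drift between a model's scores at deployment and its recent production scores. The synthetic data, bin count, and 0.2 alert threshold are illustrative assumptions; a real monitoring program would also track security events, complaints, and downstream outcomes.

    # A minimal sketch of drift monitoring for an AI model's scores using
    # the population stability index (PSI). The synthetic data and the
    # 0.2 alert threshold are illustrative assumptions, not fixed standards.
    import numpy as np

    def population_stability_index(baseline, recent, bins=10):
        edges = np.histogram_bin_edges(baseline, bins=bins)
        edges[0], edges[-1] = -np.inf, np.inf      # capture out-of-range values
        base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
        rec_pct = np.histogram(recent, bins=edges)[0] / len(recent)
        base_pct = np.clip(base_pct, 1e-6, None)   # avoid log(0)
        rec_pct = np.clip(rec_pct, 1e-6, None)
        return float(np.sum((rec_pct - base_pct) * np.log(rec_pct / base_pct)))

    rng = np.random.default_rng(seed=0)
    baseline_scores = rng.normal(0.50, 0.10, 5_000)  # scores at deployment
    recent_scores = rng.normal(0.60, 0.10, 5_000)    # scores this month

    psi = population_stability_index(baseline_scores, recent_scores)
    if psi > 0.2:  # common rule of thumb for significant drift
        print(f"Drift alert: PSI = {psi:.2f}; review before relying on outputs")

An alert like this should trigger an escalation process that pairs technical review with legal and compliance input before the model's outputs continue to be relied upon.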

Leverage these practices and resources to proactively identify potential threats and secure financial protection against losses or liabilities arising from AI-related incidents:

  • Broad perspective: Understand how the use of (and reliance on) AI will impact your interactions with employees, clients, vendors, regulators, and shareholders.
  • AI-specific insurance coverage: Obtain specialized insurance to cover AI-related risks, such as errors in algorithms, data breaches, or regulatory fines.
  • Cyber liability insurance: Ensure that your organization’s cyber insurance policy covers AI risks, including data breaches or cybersecurity issues originating from AI systems.
  • Errors and omissions (E&O) liability insurance: Also known as professional liability insurance, this coverage protects against claims arising from the delivery of professional services. Confirm that the policy contains no limitations or exclusions for claims arising from the erroneous use of AI in delivering those services.
  • Product liability insurance: For companies creating AI products, this helps cover damages from malfunctions or misuse of AI-driven tools.
  • Cyber insurance market outlook: Despite the continued frequency and severity of cyber loss incidents, available capacity (i.e., insurers offering limits of cyber coverage) in the marketplace continues to sustain competitive rates. The rapid growth of privacy and security regulation at the global, national, and local levels, together with widening privacy exposures and the evolving considerations of AI, would seem to make a continued soft market untenable; for now, however, it persists.

For further information, reach out to your USI representative or email us at pcinquiries@usi.com.