Risk Management Brief: Ethics and Privacy Risks in AI

Artificial intelligence (AI) spent many years edging toward the mainstream. When AI-powered tools finally crossed over from sci-fi stories into everyday consciousness a decade ago, consumer and business technology users alike were generally enthusiastic.

Despite all their quirks, virtual assistants such as Amazon’s Alexa, Apple’s Siri, and Google Assistant are regarded as useful ‘helpers’. Mainstream users, at least, generally don’t see these AI-driven technologies as menacing threats.

But consumers are becoming more aware of – and increasingly vocal about – the pernicious use of AI behind the scenes to influence, steer, or otherwise shape their interactions with businesses.

The European Commission’s proposal for a European Union (EU) regulatory framework on AI acknowledges that “certain AI systems create risks we must address to avoid undesirable outcomes” and states: “The regulation ensures Europeans can trust what AI has to offer.”

Similarly, a press release accompanying the United Kingdom’s (UK) white paper on AI regulation declared: “As AI continues developing rapidly, questions have been raised about the future risks it could pose to people’s privacy, their human rights or their safety.”

The need for stricter rules for AI in the US was highlighted in an article published October 22, 2021, by Dr. Eric Lander and Dr. Alondra Nelson, then director and deputy director of the White House Office of Science and Technology Policy:

“In the United States, some of the failings of AI may be unintentional, but they are serious, and they disproportionately affect already marginalized individuals and communities.”

A year later, on October 4, 2022, the White House published the Blueprint for an AI Bill of Rights.

Major AI and privacy risks identified

The top risks businesses identify when AI intersects with privacy concerns were highlighted in the Privacy and AI Governance Report from the International Association of Privacy Professionals (IAPP), a global information privacy community headquartered in New Hampshire (note: the report download is available to IAPP members only):

Harmful bias – bias in AI resulting in harm to individuals and/or violation of their privacy rights, such as discriminatory decisions about housing, finance, education, and insurance (a minimal illustrative check follows these definitions)

Bad governance – lack of clear strategies to manage and mitigate risks from processing personal data in AI systems, or weak application of privacy principles, such as data minimization and specified purposes for collecting and managing personal information

Lack of legal clarity – businesses are struggling to keep up with the changing regulatory environment and can’t be sure they are implementing the right methods and rules to satisfy due diligence obligations in multiple jurisdictions.
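
To make the harmful-bias risk more concrete, here is a minimal Python sketch of the kind of check a team might run over automated decisions: it measures the gap in favorable-outcome rates between groups (a simple demographic-parity check). The DataFrame layout, the “group” and “approved” column names, and the toy data are hypothetical illustrations, not anything prescribed by the IAPP report.

    # A minimal demographic-parity check. Column names and data are
    # hypothetical; real audits would use the organization's own records.
    import pandas as pd

    def approval_rate_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
        """Return the largest difference in favorable-outcome rates between groups."""
        rates = df.groupby(group_col)[outcome_col].mean()
        return float(rates.max() - rates.min())

    decisions = pd.DataFrame({
        "group":    ["A", "A", "A", "B", "B", "B"],
        "approved": [1, 1, 0, 1, 0, 0],
    })

    gap = approval_rate_gap(decisions, "group", "approved")
    print(f"Approval-rate gap between groups: {gap:.2f}")  # 0.33 for this toy data

A single-number check like this is only a starting point; in practice, teams pair such metrics with domain review, since statistical parity alone can mask other harms.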

The report also identifies several related risks, including:

  • Lack of skills and resources in organizations to tackle new AI and privacy challenges, regulations, and governance
  • Failure to apply privacy best practices to the data used to train AI systems, including non-consensual use of personal data or secondary uses of data
  • Security risks posed by AI systems on a connected network, including insider threats, model exploitation, and data breaches.

IAPP’s Privacy and AI Governance Report recommends businesses stay on the right side of consumers – and any existing or upcoming AI regulations – by adopting key principles shared among recently published AI governance guidelines and proposed AI regulations in the EU, UK, and US.

The Report notes there is consensus across many jurisdictions on the following AI governance principles:

  • Privacy
  • Accountability
  • Fairness
  • Explainability
  • Robustness
  • Security
  • Human oversight.

AI experts call out privacy and human rights risks

Privacy and ethics experts have warned about the ethical issues of unshackled AI innovation for many years – though few expected 2023 to be the year AI experts would call time (or at least, a time out).

On March 22, 2023, just over a week after OpenAI released GPT-4, a major update to the model behind its popular ChatGPT chatbot, hundreds of AI experts – soon joined by tens of thousands of other signatories – signed an open letter calling for a pause on giant AI experiments.

Published by the Future of Life Institute, the letter calls on “all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4.” It warns that “AI systems with human-competitive intelligence can pose profound risks to society and humanity.”

The risks posed by AI achieving “smarter-than-human intelligence” are so extreme, according to Eliezer Yudkowsky, lead researcher at the Machine Intelligence Research Institute, that, as he argued in TIME, “Pausing AI Developments Isn’t Enough. We Need to Shut it All Down.”

He claimed that nightmare scenarios of AI going rogue and causing loss of human life are now technically possible, and that avoiding them would demand a level of precision and preparation no lab currently has: “Without that precision and preparation, the most likely outcome is AI that does not do what we want and does not care for us nor for sentient life in general.”

With the AI hype cycle visibly swinging from enthusiasm to fear among some experts and consumers, rule makers and the public are urging organizations to adopt safer and fairer practices for developing and using AI immediately – ahead of future legislative requirements.
