AI Governance and Regulation: 2023 Trends and Predictions

Data governance is one of the biggest challenges privacy professionals are facing in 2023.

Thanks to emerging technologies such as machine learning and artificial intelligence (AI), organizations increasingly collect and process far more data than they are ever likely to use.

These technologies also make it easier to manage data effectively – and can deliver major improvements in data governance.

Implemented well, AI solutions for data management help organizations extract more value from data with deeper analytics. In fact, these emerging technologies could break what the International Data Corporation (IDC) described as the “80/20 rule of data management”.

As the IDC explains: “The breakdown of time spent on data preparation versus data analytics is woefully lopsided; less than 20% of the time is spent analyzing data, while 82% of the time is spent collectively searching for, preparing, and governing the appropriate data.”

With the right rules and processes in place, AI can help businesses:

  • Safely collect the data needed to support business processes
  • Rapidly process data to provide useful insights – and recommended actions – to improve customer service, service delivery and ROI
  • Identify and manage risks associated with collecting personal data, and help address those risks to support regulatory compliance (a minimal risk-scanning sketch follows this list)
  • Enhance strategic decision making with up-to-date data on business performance.
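
For example, on the risk-identification point above, a first step many teams automate is scanning collected records for likely personal data. The sketch below is a minimal illustration, assuming simple regex patterns and hypothetical field names; production systems typically layer ML-based classifiers on top of rules like these.

```python
import re

# Minimal sketch of automated personal-data discovery.
# The patterns and field names are illustrative assumptions,
# not a complete or production-grade detection set.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\+?\d[\d\s().-]{7,}\d\b"),
}

def flag_personal_data(records):
    """Report which fields of which records appear to contain personal data."""
    report = []
    for index, record in enumerate(records):
        hits = {}
        for field, value in record.items():
            for kind, pattern in PII_PATTERNS.items():
                if isinstance(value, str) and pattern.search(value):
                    hits[field] = kind
        if hits:
            report.append({"record": index, "personal_data": hits})
    return report

if __name__ == "__main__":
    sample = [
        {"name": "A. Customer", "contact": "a.customer@example.com"},
        {"note": "Order shipped on time"},
    ]
    print(flag_personal_data(sample))
    # -> [{'record': 0, 'personal_data': {'contact': 'email'}}]
```

A report like this can feed directly into the compliance workflows discussed below: records flagged as containing personal data get stricter retention, access, and transfer rules.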

AI governance enters the mainstream in 2023

In mid-December 2022, TrustArc CEO Chris Babel hosted an industry panel to talk about privacy law trends in 2023 and issues surrounding AI governance and regulation. The panel of privacy industry experts included:

  • Caitlin Fennessy, VP & Chief Knowledge Officer, International Association of Privacy Professionals (IAPP)
  • Michael Lin, Chief Product Officer, TrustArc
  • Hilary Wandall, Chief Ethics and Compliance Officer, Dun & Bradstreet.

Below is a summary of the industry experts’ discussion on the challenges for businesses using machine learning, AI and other kinds of automation tools for collecting and processing customers’ personal data.

“AI governance is going to be one of the biggest issues that hits the desk of privacy professionals in 2023,” explains Fennessy. “In 2022, we watched organizations begin to think about how to do AI governance and privacy, and privacy professionals are increasingly some of the first called to the table.”

Wandall agrees, noting that updates to privacy regulations worldwide will help drive AI governance as a mainstream concern.

“Regulation is increasingly part of how people think about privacy responsibilities,” she says. “We’ve seen this already in the GDPR requirements related to automated decision making and we’ve seen several court decisions focused on organizations’ obligations with respect to automated decision making. I expect we’ll see a lot more developments in case law and enforcement actions.”

Regulators focusing on AI ethics and responsible AI and automation

As Fennessy noted, privacy experts are often the first at the table when organizations have concerns about AI. This isn’t surprising because many people’s negative experiences of AI have involved perceived or real privacy concerns.

Common issues include:

  • Personal information used to target people with problematic advertising and other content online
  • Information shared privately with trusted people being exposed to untrusted or unauthorized parties
  • Biases against people fitting specific profiles (e.g. gender, race, socio-economic status) in automated decision-making tools, such as those used for screening job applicants or identifying criminal suspects (a basic fairness check is sketched after this list)
  • Exploiting the vulnerabilities of a target group based on age or physical or mental disability
  • Profiling people’s behavior or personality traits based on their online activities, then exploiting vulnerabilities (e.g. emotional state) to cause them harm (e.g. problematic gambling).
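
The bias concern above is one that teams can test for directly. Below is a minimal sketch, not drawn from the panel, of one common fairness check: the demographic parity gap, i.e. the largest difference in approval rates between groups passing through an automated screening tool. The function names and toy outcomes are illustrative assumptions.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Approval rate per group, from (group, approved) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {group: approved[group] / totals[group] for group in totals}

def demographic_parity_gap(decisions):
    """Largest gap in approval rates between any two groups."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

if __name__ == "__main__":
    # Hypothetical screening outcomes: (applicant group, passed screening)
    outcomes = [
        ("group_a", True), ("group_a", True), ("group_a", False),
        ("group_b", True), ("group_b", False), ("group_b", False),
    ]
    print(selection_rates(outcomes))         # -> roughly {'group_a': 0.67, 'group_b': 0.33}
    print(demographic_parity_gap(outcomes))  # -> roughly 0.33
```

Demographic parity is only one of several fairness definitions, and a large gap is a signal to investigate rather than proof of discrimination, but checks like this give privacy and compliance teams a concrete starting point.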

Our panelists point out that regulators will increasingly focus on concerns about AI ethics in 2023, noting that several proposed laws and safeguard rules are under development or review.

Below is a summary of key initiatives aimed at ensuring responsible use of AI and regulatory compliance with existing or proposed privacy laws.

New state-based AI governance rules

Multiple jurisdictions in the United States will review and/or enact regulations governing how AI can collect and manage personal information. States currently working on new AI rules include Alabama, Colorado, Mississippi, Vermont and Washington.

Federal Trade Commission’s rules for commercial AI

The Federal Trade Commission (FTC) is investigating potential new rules for use of AI by commercial organizations and has warned it is concerned with “AI harms such as inaccuracy, bias, discrimination, and commercial surveillance creep”.

Federal initiatives to put safeguards on AI and support innovation

The National Institute of Standards and Technology (NIST) is developing an AI Risk Management Framework (AI RMF) “to better manage risks to individuals, organizations, and society associated with artificial intelligence. The AI RMF is intended … to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.”

Similarly, the newly established National Artificial Intelligence Initiative Office reports to the federal government on emerging AI trends and risks, including research and development on trustworthy AI.

The EU’s proposed new AI Act

The European Commission has proposed “harmonized rules on artificial intelligence” in the form of the Artificial Intelligence Act (AI Act). The European Parliament is expected to vote on the AI Act in March 2023, with enforcement possibly beginning in 2026.

Regulatory compliance a major concern for privacy professionals

As CEO of TrustArc, Babel hears from people at both ends of the privacy knowledge spectrum, from seasoned privacy experts to technology decision makers who don’t have the team or resources to help them stay on top of changes in privacy regulations.

“We hear from hundreds if not thousands of customers who are struggling to keep up with privacy regulation compliance,” he says. “They’re worried about new rules for cross-border data transfers. Some organizations are panicking about recent enforcement activities in California that went further than most people expected. And others need more help keeping up with different state laws.”

Lin reports the most common concern he hears from TrustArc customers is how they can understand their obligations in different jurisdictions.

“They’re concerned about issues with some of the technologies they’re using that are sending data to other regions. Data transfers that organizations don’t have control over are a hot topic for us, and we make sure our own technology and vendors are compliant.”
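
One lightweight way to make Lin’s concern concrete is a machine-readable inventory of data flows that can be queried for risky transfers. The sketch below is a minimal illustration; the DataFlow schema and its field names are hypothetical, not a TrustArc product feature or a regulatory standard.

```python
from dataclasses import dataclass

# Hypothetical schema for one entry in a data-flow inventory.
@dataclass
class DataFlow:
    dataset: str
    source_region: str
    destination_region: str
    purpose: str
    contains_personal_data: bool

def cross_border_transfers(flows):
    """Flag personal-data flows that leave their source region,
    the kind of transfer regulators scrutinize most closely."""
    return [
        f for f in flows
        if f.contains_personal_data and f.source_region != f.destination_region
    ]

if __name__ == "__main__":
    inventory = [
        DataFlow("crm_contacts", "EU", "US", "marketing analytics", True),
        DataFlow("server_metrics", "EU", "EU", "capacity planning", False),
    ]
    for flow in cross_border_transfers(inventory):
        print(f"Review transfer: {flow.dataset} "
              f"({flow.source_region} -> {flow.destination_region})")
```

Keeping such an inventory current is the hard part; technologies that quietly send data to other regions, as Lin describes, are exactly the entries most likely to be missing from it.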

Organizations need more privacy professionals

Babel recalls that, in the early days of the digital technology revolution in the 2000s, the privacy professional was typically someone in an organization’s legal team who had ‘privacy’ as part of their job.

“Privacy was something strapped onto the very end of some longer title,” he says. “Now, privacy professionals have evolved: they’re still typically legal oriented or compliance oriented, but we’re also seeing many more technologists, CISOs or data scientists with privacy a key part of their function.”

Wandall agrees that until recently, most professionals who had privacy as a function of their role were interested in it more from a legal or policy perspective, rather than a technology perspective.

“The competencies necessary to manage privacy well today are very different from those that were necessary previously,” she says. “For example, you now need to be willing to get in and understand the data. You need to understand what technology is doing with data: where it’s flowing, how it is being processed and what risks data might create.”

“We need to get future privacy professionals skilled up effectively,” adds Fennessy, listing several functions where privacy expertise is needed, including risk management, user design, business processes and product and technology design.

“We also need privacy professionals to understand each other’s competencies enough to have helpful conversations,” she says. “I think there is a huge talent deficit in privacy, particularly as so many countries around the world have passed laws that require appointing privacy professionals to manage compliance effectively.”

Babel suggests increased regulatory focus on privacy will motivate more companies to scale up their privacy teams and invest in more sophisticated technologies and other resources to help them improve their privacy stance.

“It takes time and effort to manage privacy operations well,” agrees Lin. “Knowledge of privacy risk – including the policies and processes to ensure compliance with all relevant privacy laws – needs to permeate organizations.”

Looking for help with AI governance and regulatory compliance?

TrustArc is your one-stop solution for responsible AI usage and compliance. Our AI privacy governance tools are industry-leading, designed to help organizations manage privacy compliance effortlessly. Streamline your processes, mitigate risks, and stay ahead with TrustArc.
