
Data Protection and Responsible Generative AI Use: A Comprehensive Guide

Casey Kuktelionis

In 2023, artificial intelligence (AI) crashed into organizations like a tidal wave. By the year’s end, ChatGPT had reached 100 million weekly active users, and Goldman Sachs strategists observed 36% of S&P 500 companies discussing AI on conference calls. Now you can’t open an email without a mention of AI. From the front lines to the boardroom, AI discussions are happening everywhere.

While AI isn’t new (think Siri or Alexa), new tools and use cases have accelerated recently. For example, AI is used heavily in creating superior customer experiences – 92% of businesses are driving growth with AI-driven personalization. Furthermore, the AI market is expected to grow more than 13x over the next decade.

Yet, despite the increasing value and potential of AI, consumers’ trust in organizations using AI is declining. The IAPP reports that 60% of consumers have already lost trust in organizations over their AI use.

Why is AI use causing a loss of trust in organizations?

Consumer concern stems from a lack of attention to responsible AI use. While AI is being touted by boards, not enough companies have established guidelines and training for its use.

Salesforce research shows that while 28% of workers use AI at work, 69% report they haven’t received or completed training on using generative AI safely. And 79% of workers say they don’t have clearly defined policies for using generative AI for work.

Workday’s latest global study agrees, with 4 in 5 employees saying their company has yet to share guidelines on responsible AI use.

Additionally, consumers are no strangers to the risks and downsides of AI use. Many have tested generative tools and come away disappointed. Whether you’ve watched generative AI fail to draw a convincing hand or to provide accurate information, you’re likely familiar with some of its limitations.

In fact, workplace AI use is already making headlines. For example, Samsung banned the use of ChatGPT after employees accidentally leaked confidential company information. Or consider this headline: “Most employees using AI tools for work aren’t telling their bosses.”

Lastly, concerns and legal considerations surrounding the collection, use, and storage of personal data continue. The use of large language models, like ChatGPT, is already in question. The New York Times recently filed a copyright infringement lawsuit against OpenAI, and other prominent authors have also followed suit.

AI use and business relationships

And it’s not just about consumers. As businesses adopt AI, third-party vendors and partners question AI use and data practices during vendor screening and risk management. Understanding and addressing these concerns is vital to building trust in the age of AI.

Ultimately, the goal for businesses is to balance innovation and trust. AI delivers positive business outcomes and efficiency when harnessed and used responsibly.

Still, many organizations are wrestling with this challenge. TrustArc’s 2023 Global Privacy Benchmarks Survey revealed that “artificial intelligence implications in privacy” ranked as the #1 global concern.

How mature is your AI risk management? Take the quiz.

Are organizations required to use AI responsibly?

Data protection laws such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) cover much of the world’s population. Comprehensive privacy laws aim to protect individuals’ privacy rights and regulate how organizations handle personal data, and some of these regulations already extend to AI use.

For example, the CCPA, as amended by the California Privacy Rights Act (CPRA), gives the California Privacy Protection Agency the authority to regulate automated decision-making technology (ADMT). And draft regulations are underway.

In Europe, Article 22 of the GDPR protects individuals from automated decision-making, including profiling. It prohibits subjecting individuals to decisions “based solely on automated processing,” meaning that in certain instances a human must be involved in decisions about individuals rather than leaving them entirely to technology. The UK GDPR has similar rules.

What’s more, lawmakers are trying to keep up with technological advances like AI. Privacy professionals must watch closely as various legislation is proposed and enacted. Some examples include:

  • EU AI Act (enforcement expected in 2025)
  • Canada’s Artificial Intelligence and Data Act (AIDA)

Bookmark the International Association of Privacy Professionals Global AI Law and Policy Tracker to stay up to date on global AI regulations. And review a summary of some key AI-focused regulations and governance frameworks around the world: AI Regulations: Prepare for More AI Rules on Privacy Rights, Data Protection, and Fairness.

The FTC is watching

In the United States, the FTC closely monitors AI companies and their practices. In early 2024, the FTC warned: “Model-as-a-service companies that fail to abide by their privacy commitments to their users and customers, may be liable under the laws enforced by the FTC.”

Later, the FTC announced it launched inquiries into five companies regarding their recent AI investments and partnerships. And on February 13, 2024, it reminded AI (and other) companies that quietly changing your terms of service could be unfair or deceptive.

What is responsible generative AI use?

The glitz of generative AI has caused some to forget that it’s just a new tool. And even though it changes how people work, the basics of data protection haven’t changed. What data is being collected, stored, and used? How is it being used? Can you control it? Is there a service provider agreement?

The data protection foundations of yesterday are still relevant today when considering AI use.

Data protection foundations

  • Transparency and Consent: Be transparent about how the organization collects, uses, and shares personal data. Obtain explicit consent from individuals before processing their data.
  • Data Minimization: It’s tempting to collect more data than necessary, but it’s often best to adopt a “less is more” approach. Collect only the data that is necessary for a specific purpose and limit the retention period to minimize the risk of unauthorized access or misuse (see the sketch after this list). Data minimization is a standard requirement in most privacy regulations.
  • Data Security: Implement robust security measures to protect personal data from unauthorized access, disclosure, alteration, or destruction. This includes encryption, access controls, and regular security audits. It’s about building a fortress that safeguards privacy.
  • Accountability: Understand, be responsible for, and be able to demonstrate compliance with data protection and security principles.
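
To make the data minimization and security principles more concrete, here is a minimal sketch of how an internal tool might scrub obvious personal identifiers from a prompt before it leaves the organization for an external generative AI service. The patterns, function names, and example text are hypothetical and intentionally simplistic; a real deployment would use a vetted PII-detection library and cover far more identifier types.

```python
import re

# Illustrative patterns only; a production redactor would rely on a vetted
# PII-detection library and a much broader set of identifiers.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def minimize_prompt(prompt: str) -> str:
    """Strip common personal identifiers from text before it is sent to an external AI service."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Draft a follow-up email to jane.doe@example.com, phone 555-123-4567."
    print(minimize_prompt(raw))
    # -> "Draft a follow-up email to [REDACTED EMAIL], phone [REDACTED PHONE]."
```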

Leading responsible generative AI use in your organization

There’s still much to learn about generative AI and privacy. As technology and regulations continue to evolve, so do privacy programs.

To start, encourage responsible AI use proactively by using a framework, developing employee guidelines, fostering a culture of privacy, and updating your third-party risk management process.

Adopt a privacy framework

Rather than getting lost in the alphabet soup of global privacy laws and regulations, a framework approach can operationalize your privacy program. One framework worth considering is TrustArc’s Nymity Privacy Management and Accountability Framework, linked below.

As a baseline, a framework will recommend updating policies and notices to include AI use: for instance, your acceptable use of information resources policy, your internal data privacy policy, and your data privacy notice (included at all points where personal data is collected).

Nymity Framework: Download the Nymity Privacy Management and Accountability Framework.

Nymity Research: Learn more about TrustArc’s Nymity Research.

Develop employee AI use guidelines

AI use in organizations looks like the Wild West right now. Employees are admittedly using unapproved AI tools at work. Now is the time to rein in the horses with some risk-based guidelines.

Based on your organization’s risk tolerance and the purpose of AI use in the workplace, develop employee guidelines for AI use. Include use cases, examples, and specific restrictions. What shouldn’t go into generative AI models?

At a minimum, most recommend that no personal data or sensitive organizational data be entered into public AI tools. If employees use other generative AI tools that come with a service agreement, determine how those tools will be assessed, approved, and implemented.
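
To illustrate what such guidelines might look like in practice, below is a minimal, hypothetical sketch of an AI acceptable-use policy encoded in a machine-readable form that internal tooling could check before a request reaches a generative AI tool. The tool names, data categories, and structure are illustrative assumptions, not a standard or a recommended template.

```python
from dataclasses import dataclass, field

@dataclass
class AIUsePolicy:
    """Hypothetical, simplified encoding of employee generative AI guidelines."""
    approved_tools: set[str] = field(
        default_factory=lambda: {"internal-llm", "vendor-llm-with-dpa"}
    )
    prohibited_data: set[str] = field(
        default_factory=lambda: {"personal_data", "customer_records", "source_code", "trade_secrets"}
    )

    def is_allowed(self, tool: str, data_categories: set[str]) -> bool:
        """A request is allowed only for approved tools carrying no restricted data categories."""
        return tool in self.approved_tools and not (data_categories & self.prohibited_data)

policy = AIUsePolicy()
print(policy.is_allowed("public-chatbot", {"marketing_copy"}))       # False: unapproved tool
print(policy.is_allowed("internal-llm", {"customer_records"}))       # False: restricted data
print(policy.is_allowed("vendor-llm-with-dpa", {"marketing_copy"}))  # True
```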

Continue to connect with privacy professionals to discuss how they manage AI data governance in their organizations. Because this is an evolving field, there’s much to learn from each other.

Train employees and foster a culture of privacy

Once employee guidelines for responsible AI use are established, it’s time to train your employees. To help your employees understand the importance of responsible AI use, start by establishing a common language.

Keeping employees informed is the best defense against the limitations of generative AI. Because the landscape is continuously changing, plan to do frequent training as you update the guidelines and responsible AI use cases.

Fostering a culture of privacy in your organization reduces risk, builds trust, and even helps with privacy regulation compliance!

Download the free Nymity Training & Awareness Checklist for Working with AI.

Update your third-party risk management processes and privacy risk assessments

If they haven’t already, your business partners and vendors will likely question how your organization is managing AI data governance. Likewise, you should update your third-party data privacy risk assessment processes to include AI governance.

What updates need to be made to assess external AI systems and vendors? How does this impact data flows and sharing with current and future partners and vendors? What defined roles and responsibilities of third parties have changed or need to be updated?

Conduct due diligence around the data privacy and security posture of all current and potential vendors and processors. Routinely reassess current vendors and partners with updated guidelines. To do so, leverage the Privacy Impact Assessments (PIAs) you already know. While traditional PIAs may not address AI challenges, they can be elevated to account for the specific characteristics and risks of AI.
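
As a sketch of how a traditional PIA might be elevated for AI, the snippet below adds a few AI-specific controls to an assessment checklist and scores vendors that are missing them. The control wording, weights, and escalation threshold are illustrative assumptions, not an established methodology.

```python
# Hypothetical AI-specific controls appended to a traditional PIA checklist.
# Weights reflect assumed relative importance for illustration only.
AI_PIA_CONTROLS = {
    "Vendor contractually agrees not to train models on our data": 3,
    "Human review is in place for automated decisions about individuals": 2,
    "Retention limits for prompts and outputs are documented": 2,
    "Vendor has completed an independent AI/privacy assessment": 1,
}

def ai_risk_score(controls_in_place: set[str]) -> int:
    """Sum the weights of missing controls; a higher score means deeper review is needed."""
    return sum(
        weight
        for control, weight in AI_PIA_CONTROLS.items()
        if control not in controls_in_place
    )

vendor_controls = {"Human review is in place for automated decisions about individuals"}
score = ai_risk_score(vendor_controls)
print(score)                                          # 6: missing controls worth 3 + 2 + 1
print("escalate" if score >= 4 else "standard review")  # escalate
```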

Also, consider how you will prove your responsible use of AI to your partners and vendors. For some AI adopters, the TRUSTe Responsible AI certification is the best way to demonstrate accountable AI use and transparent data practices.

Join the vanguard of responsible AI

Lead the charge in responsible AI adoption and data governance. Become a part of our community of AI adopters and position your organization as a trailblazer in privacy innovation and data protection.
