Report

Survey Series: AI Training, Transparency, and Trust

Organizations are moving quickly to govern how AI is trained and disclosed, but are consumer expectations keeping pace with enterprise confidence?

In this second installment of TrustArc’s survey research series, we compare fresh data from professionals and consumers across North America and Europe. While privacy and security teams report high levels of confidence in their safety controls and bias mitigation, the public remains skeptical.

Download this report to explore the “Trust Gap” and discover why transparency is a commercial differentiator, not a compliance checklist. From the divergence between US operational readiness and European policy focus to the impact of plain-language disclosures on brand loyalty, this report provides the benchmarks you need to align your AI governance with market reality.

Key takeaways include:
  • The Trust Gap: While 72% of professionals are confident in their ability to prevent data misuse, more than 40% of consumers remain extremely or very concerned about their data being used to train AI without consent.

  • Transparency as a Growth Lever: Over half (53%) of consumers say they are more likely to use a company’s services when data use is disclosed in plain language, evidence that clear consent pathways drive business value.

  • The Atlantic Divide: New data reveals a split between “operations-first” US organizations, which lead in readiness and documentation, and “policy-first” European stakeholders, who emphasize regulation but lag in visible choice mechanisms.

