New European Case Law Clarifies Bulk Collection Requirements by Governments


Those following the legal debate after the Schrems-II decision are well aware that one of the main arguments on the U.S. side is that the European Union should not only look at third countries’ surveillance practices, but also at its own. The typical response is that this is not possible, because national security is excluded from the competences of the EU and thus cannot be legislated at EU level. A series of new judgments from the Court of Justice of the European Union (CJEU), however, sheds new light on the matter.

The judgments, released on 6 October 2020, relate to four cases* criticising legislation that allows the national security agencies in the United Kingdom, Belgium and France to collect communications traffic data on the basis of an exception in the ePrivacy Directive from 2002. Following the terrorist attacks in Madrid and London in 2004 and 2005, the European Union created a general data retention scheme for telecommunications data, which has since been struck down by the CJEU for not complying with the fundamental rights to privacy and data protection. National laws creating similar schemes, whether based on the EU scheme or on a Member State’s own initiative, have likewise been annulled by the CJEU. In the current cases, the questions put to the Court included whether it is possible at all to collect telecommunications traffic data in bulk and, if so, under what conditions.

The judgment of the CJEU

Most importantly, the CJEU has confirmed in both judgments that the transmission of personal data from a communications service provider (i.e. a telecom or internet service provider) to a government authority, including to the national security services, is covered by data protection law. In this specific case, it is the ePrivacy Directive that applies, read in the light of the GDPR. Since a transmission constitutes a data processing operation, the Court explains, the communications service provider – the data controller – needs to comply with the requirements of the ePrivacy Directive and its national implementations. That includes the general aim of ePrivacy: to ensure the confidentiality of communications. According to the Court, it is not relevant here that national security is excluded from the remit of EU legislation, since national security is not the main reason the ePrivacy Directive exists.

National security can, however, be a good reason for limiting the confidentiality requirement of the ePrivacy Directive. According to the Court, this is possible as long as the essence of the fundamental rights to privacy and data protection, among others, continues to be respected. An unlimited and continuous collection of telecommunications data is not allowed, since it goes beyond what can be seen as strictly necessary in a democratic society and could have detrimental effects on how people live their lives: they may stop doing things for fear of being under constant surveillance, causing a chilling effect.

What would be allowed is a time-restricted collection of telecommunications data in case of a genuine and present or foreseeable grave threat to national security. In theory, the Court would allow data collection under these circumstances to be indiscriminate (i.e. covering everyone), but it makes clear that it prefers government authorities to put in place objective criteria that narrow the scope of collection, for example to a specific group of people or a specific geographical location. As to the time restrictions, the Court explains that the duration of the collection should be foreseeable and that regular reauthorisations – based on a renewed necessity check – should take place. For such collections of telecommunications data, governments should ensure the possibility of a judicial or administrative review with binding effect, especially with regard to the existence of the genuine and present or foreseeable grave threat to national security.

As long as the data collection is limited to registering the IP address at the source of a communication – without documenting the link between IP addresses – the Court provides more leeway, but still imposes a time restriction. The documentation of the personal information (name and address) of electronic communications users is even less restricted and can generally take place, since it does not meaningfully contribute to the chilling effect. These two data types may therefore also be processed for other purposes, such as the fight against serious crime.

Why is this relevant?

The judgments of the Court are mainly directed at the governments putting in place legislation on the collection and use of telecommunications data. So why are they relevant for companies?

First, this is the first time since the Schrems-II decision that the Court has assessed laws against its own threshold. Paragraph 65 of the Privacy International judgment states that “the requirement that any limitation on the exercise of fundamental rights must be provided for by law implies that the legal basis which permits the interference with those rights must itself define the scope of the limitation on the exercise of the right concerned”. In other words: if mass data collection is taking place, the same law should also provide the safeguards for individuals. In the Privacy International case, the Court held this criterion was not met, since there is no limitation to the data collection – not in time, not in location, nor in the group of people whose data are transmitted to the security services.

Secondly, both judgments show that the CJEU does not only criticise the legislation of the United States, but holds the EU Member States to the same standards. Unlimited data collection without access to binding judicial or administrative review is prohibited in the EU Member States as well, because it interferes with the fundamental rights to privacy and data protection beyond what can be seen as necessary in a democratic society. In addition, in these cases the Court has provided further clarity on the assessment criteria for government interference. It has made clear that in case of a serious and immediate threat to national security, for example a suspected imminent terrorist attack, much more is allowed when it comes to data processing than for regular law enforcement or other government interests. In short: the data collection should be necessary and proportionate, and be accompanied by safeguards to protect the rights and freedoms of individuals.

* The CJEU released two judgments: one in the case Privacy International v. Secretary of State for Foreign and Commonwealth Affairs and others (C-623/17), and one in the joined cases La Quadrature du Net and others v. Premier ministre and others (C-511/18 and C-512/18) and Ordre des barreaux francophones et germanophone and others v. Conseil des ministres (C-520/18).

Happy Anniversary, GDPR!


The EU General Data Protection Regulation (GDPR) celebrates its second anniversary this week. For many organisations, it may seem that the GDPR has become business as usual; one of many elements of their global compliance strategy. For many others, it remains a continuous struggle.

The two-year anniversary is an important milestone for the GDPR, since this is the moment the European Commission was supposed to present its first evaluation of the application of the Regulation. Unfortunately, the report has been delayed until the start of the summer. Some of the lessons learned are nevertheless crystal clear.

Overall, the GDPR has been a success

In preparatory analyses for the European Commission’s review, the EU Member States, the European Data Protection Board (EDPB – the assembly of all EU supervisory authorities) and even industry groups, like the Centre for Information Policy Leadership (CIPL), all agree: overall, the GDPR has been a success. Especially in the private sector, the Regulation has driven a big increase in awareness of privacy and data protection issues. Many organisations have implemented far-reaching privacy programs to ensure the personal data of their employees, business partners and customers is well protected. And judging by the total number of data breaches reported thus far, organisations are much more forthcoming in reporting a breach than they were in the past.

The ‘extraterritorial’ influence of the GDPR is also noticeable. Countries around the world have adopted legislation to bring their own privacy laws more in line with the GDPR, or are in the process of doing so. Think, for example, of Japan, where additional legal provisions and guidelines were adopted to ensure its privacy law could be declared adequate. A similar process is ongoing in South Korea. And in Brazil, the new omnibus privacy law, the LGPD, is clearly inspired by the GDPR, as is the draft Indian privacy bill currently before Parliament. That doesn’t mean these laws are exact copies of the GDPR: all countries have chosen to embed their laws in their own national legal traditions, but many of the newer concepts and compliance approaches introduced by the GDPR have been copied.

The GDPR has not achieved one of its main goals: full harmonisation

One of the main points of criticism of the GDPR is that it is a Regulation in name only. That requires a bit of explanation. Under EU law, there are two main legal instruments: Regulations, which have direct legal effect in all EU Member States and in principle do not require national implementing laws, and Directives, which are binding only as to the goal they aim to achieve. Directives always require implementing laws in all EU Member States. The GDPR is officially a Regulation, and many of its provisions indeed have direct effect and can be relied upon by organisations and individuals throughout Europe. However, on many details, like the use of special categories of personal data (including health data), additional national rules can be imposed, either to allow the processing of such data or to make it more difficult. The same goes for data used in an employment relationship and for research and statistical data. Also, the age at which minors can provide consent for online services varies from country to country, between 13 and 16 years. This means the original goal of “one single privacy rule for the whole of the European Union” has not been completely achieved. The core of the Regulation has been harmonised, but many important details have not.

What also hasn’t been fully harmonised is the approach supervisory authorities should take when enforcing the law. The GDPR provides the main elements of what an investigation should look like and how authorities should consult each other, but the process itself is run on the basis of national administrative law. These laws fall outside the scope of EU legislation, and thus are not harmonised.

Supervision and enforcement of the GDPR remains a struggle

More generally, the supervision and enforcement of the GDPR has not been an unequivocal success. Many had expected – and sometimes hoped – that data protection authorities would start imposing multimillion-euro fines from the moment the GDPR went into application. That has not been the case. Some high-profile complaints brought by civil society groups like NOYB (none of your business, led by the Austrian Max Schrems) and Privacy International are still awaiting a decision by the competent authorities. But that doesn’t mean the GDPR has not been enforced at all.

By the start of 2020, well over €115 million in fines had been imposed by the various data protection authorities. In addition, many authorities have taken other types of enforcement decisions, as allowed by the GDPR, from (public) warnings of non-compliance to the suspension of processing operations. Many data protection authorities also make clear that it sometimes suffices to have a phone call with a non-compliant organisation to explain the correct interpretation and/or application of the GDPR. This may not be the most visible form of enforcement, but it is a really effective one.

The main hurdle for data protection authorities is a lack of resourcing and funding. Two-thirds confirm they do not have sufficient resources to deal with all the complaints received from individuals, as well as with requests from companies for guidance and for approval of certifications and international transfer instruments. The Council and CIPL likewise conclude in their GDPR evaluation reports that the underfunding of data protection authorities is a risk to the effective implementation of the GDPR.

With only two years’ experience of working with the GDPR in practice, almost everyone agrees that it is too soon to start discussing any possible changes to the text of the Regulation. For now, Member States, supervisory authorities and industry seem content with more (detailed) guidance from the EDPB. At the same time, they note that the reform of data protection legislation in Europe is still not complete. The ePrivacy Regulation, which is to provide the specific rules for online data protection in line with the standards and principles of the GDPR, is still in the legislative process, with no agreement on a final text in sight. The hope is that the German presidency of the Council, from July onwards, will be able to make some progress on this file.

Leveraging GDPR ‘Legitimate Interests Processing’ for Data Science


Darren Abernethy, Senior Counsel TrustArc
Ravi Pather, VP Sales CryptoNumerics

The GDPR is not intended to be a compliance overhead for controllers and processors. It is intended to bring higher and more consistent standards and processes for the secure treatment of personal data, and fundamentally to protect the privacy rights of individuals. Nowhere is this more true than in emerging data science, analytics, AI and ML environments, where the sheer volume and variety of data sources creates a higher risk of identifying an individual’s personal and sensitive information.

The GDPR requires that personal data be collected for “specified, explicit and legitimate purposes,” and also that a data controller define a separate legal basis for each and every purpose for which, e.g., customer data is used. If a bank customer takes out a bank loan, the bank may only use the collected account and transactional data to manage that customer’s loan and fulfil its obligations under the loan agreement. This is colloquially referred to as the “primary purpose” for which the data is collected. If the bank then wants to re-use this data for any purpose incompatible with or beyond the scope of the primary purpose, this is referred to as a “secondary purpose” and requires a separate legal basis for each and every such secondary purpose.

For the avoidance of any doubt: if the bank wanted to use that customer’s data for profiling in a data science environment, then under the GDPR the bank is required to document a legal basis for each and every separate purpose for which it stores and processes this customer’s data. So, for example, ‘cross-sell and upsell’ is one purpose, while ‘customer segmentation’ is another, separate purpose. If relied upon as the lawful basis, consent must be freely given, specific, informed and unambiguous, and an additional condition, such as explicit consent, is required when processing special categories of personal data, as described in GDPR Article 9. Additionally, in this example, the loan division of the bank cannot share data with its credit card or mortgage divisions without the informed consent of the customer. This should not be confused with a further, separate legal basis available to the bank: processing necessary for compliance with a legal obligation to which the controller is subject (AML, fraud, risk, KYC, etc.).

The challenge arises when selecting a legal basis for secondary purpose processing in a data science environment, as a separate and specific legal basis is needed for each and every purpose.
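
To make the “one legal basis per purpose” requirement concrete, here is a minimal Python sketch of a purpose register. The class names, purposes and bank scenario are illustrative assumptions, not a real bank’s records or any TrustArc product API.

```python
# Hypothetical sketch of a "purpose register": each processing purpose is
# recorded with its own documented legal basis, mirroring the GDPR rule
# that a separate basis be defined per purpose.
from dataclasses import dataclass
from enum import Enum

class LegalBasis(Enum):
    CONSENT = "consent"                            # Art. 6(1)(a)
    CONTRACT = "contract"                          # Art. 6(1)(b)
    LEGAL_OBLIGATION = "legal_obligation"          # Art. 6(1)(c)
    LEGITIMATE_INTERESTS = "legitimate_interests"  # Art. 6(1)(f)

@dataclass
class ProcessingPurpose:
    name: str
    legal_basis: LegalBasis
    description: str

# The primary purpose and each secondary purpose get their own entry.
purpose_register = [
    ProcessingPurpose("loan_servicing", LegalBasis.CONTRACT,
                      "Manage and fulfil the customer's loan agreement"),
    ProcessingPurpose("aml_screening", LegalBasis.LEGAL_OBLIGATION,
                      "Anti-money-laundering checks required by law"),
    ProcessingPurpose("cross_sell_upsell", LegalBasis.LEGITIMATE_INTERESTS,
                      "Model likely interest in other bank products"),
    ProcessingPurpose("customer_segmentation", LegalBasis.LEGITIMATE_INTERESTS,
                      "Group customers for analytics"),
]

def basis_for(purpose_name: str) -> LegalBasis:
    """Look up the documented legal basis before any processing starts."""
    for p in purpose_register:
        if p.name == purpose_name:
            return p.legal_basis
    raise KeyError(f"No documented legal basis for purpose: {purpose_name}")

print(basis_for("customer_segmentation"))  # LegalBasis.LEGITIMATE_INTERESTS
```

The point of such a register is that a lookup failure is itself a compliance signal: if a purpose has no entry, processing for that purpose should not proceed.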

It quickly becomes impractical for the bank, and annoying to its customers, to attempt to obtain consent for each and every purpose in a data science use case. Evidence in any case shows very low levels of positive consent under this approach. Consent management under the GDPR is also tightening up: blackmail clauses and general, ambiguous consent clauses will no longer be deemed acceptable.

The GDPR offers controllers a more practical and flexible legal basis for exactly these scenarios, and encourages controllers to raise their standards of protecting customer privacy, especially in data science environments. Legitimate interests processing (LIP) is an often misunderstood legal basis under the GDPR. This is in part because reliance on LIP may entail the use of additional technical and organisational controls to mitigate the possible impact or risk of a given data processing operation on an individual. Depending on the processing involved, the sensitivity of the data and the intended purpose, traditional tactical data security solutions such as encryption and hashing may not go far enough to mitigate the risk to individuals for the LIP balancing test to come out in favour of the controller’s identified legitimate interest.

If approached correctly, GDPR LIP can provide a framework with defined technical and organisational controls to support controllers’ lawful use of customer data in data science, analytics, AI and ML applications. Without it, controllers may be more exposed to possible non-compliance with the GDPR and to the risk of legal action, as we are seeing in many high-profile privacy-related lawsuits.

Legitimate Interests Processing is the most flexible lawful basis for secondary purpose processing of customer data, especially in data science use cases. But you cannot assume it will always be the most appropriate. It is likely to be most appropriate where you use an individual’s data in ways they would reasonably expect and which have a minimal privacy impact, or where there is a compelling justification for the processing.

If you choose to rely on GDPR LIP, you are taking on extra responsibility not only for implementing, where needed, technical and organisational controls to support and defend LIP compliance, but also for demonstrating the ethical and proper use of your customers’ data while fully respecting and protecting their privacy rights and interests. This extra responsibility may require enterprise-class, fit-for-purpose systems and processes (not just paper-based processes). Automation-based privacy solutions such as CryptoNumerics CN-Protect, which offer a systems-based (Privacy by Design) risk assessment and scoring capability to detect the risk of re-identification, together with integrated privacy protection that retains the analytical value of the data for data science while protecting the identity and privacy of the data subject, are available today as examples of the technical and organisational controls that can support LIP.

Data controllers first need to perform the GDPR three-part test to validate LIP as a legal basis. You need to:

  • identify a legitimate interest;
  • show that the processing is necessary to achieve it; and
  • balance it against the individual’s interests, rights and freedoms.

The legitimate interests can be your own (as controller) or those of third parties. They can include commercial interests (marketing), individual interests (risk assessments) or broader societal benefits. The processing must be necessary: if you can reasonably achieve the same result in another, less intrusive way, legitimate interests will not apply. And you must balance your interests against the individual’s: if they would not reasonably expect the processing, or if it would cause unjustified harm, their interests are likely to override your legitimate interests. Conducting such assessments for accountability purposes is now easier than ever, for example with TrustArc’s Legitimate Interests Assessment (LIA) and Balancing Test, which identifies the benefits and risks of data processing, assigns numerical values to both sides of the scale, and uses conditional logic and back-end calculations to generate a full report on the use of legitimate interests at the business process level.
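
As a rough illustration of how the three-part test can be operationalised, here is a minimal Python sketch that encodes the purpose, necessity and balancing steps as a scored checklist. The fields, scoring scale and threshold are invented for illustration; this is not TrustArc’s actual LIA methodology.

```python
# Hypothetical sketch: the three-part legitimate interests test as a
# simple scored checklist. Scales and thresholds are illustrative only.
from dataclasses import dataclass

@dataclass
class LegitimateInterestsAssessment:
    purpose: str
    interest_identified: bool         # 1. purpose test: is there a legitimate interest?
    less_intrusive_alternative: bool  # 2. necessity test: could a gentler method work?
    benefit_score: int                # 0-10: weight of the controller's interest
    impact_score: int                 # 0-10: expected impact on individuals
    reasonably_expected: bool         # would individuals expect this processing?

    def passes(self) -> bool:
        if not self.interest_identified:
            return False  # fails the purpose test outright
        if self.less_intrusive_alternative:
            return False  # fails the necessity test: use the gentler method
        # 3. balancing test: unexpected processing tips the scale toward
        # the individual, so it must clear a higher bar.
        threshold = 0 if self.reasonably_expected else 3
        return self.benefit_score - self.impact_score > threshold

lia = LegitimateInterestsAssessment(
    purpose="customer_segmentation",
    interest_identified=True,
    less_intrusive_alternative=False,
    benefit_score=7,
    impact_score=3,
    reasonably_expected=True,
)
print(lia.passes())  # True: the documented outcome supports relying on LIP
```

Whatever the scoring model, the key accountability step is the same: record the inputs and the outcome, so the reasoning can be shown to a regulator later.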

What are the benefits of choosing legitimate interest processing?

Because this basis is particularly flexible, it may be applicable in a wide range of situations, such as data science applications. It can also give you more ongoing control over long-term processing than consent, which an individual can withdraw at any time. Remember, though, that you still have to manage marketing opt-outs independently of whatever legal basis you use to store and process customer data.

It also promotes a risk-based approach to data compliance, as you need to think about the impact of your processing on individuals, which can help you identify risks and take appropriate safeguards. This can also support your obligation to ensure “data protection by design,” by performing risk assessments for re-identification and demonstrating the privacy controls applied to balance privacy against the demand for retaining the analytical value of data in data science environments. This in turn contributes to your PIAs (Privacy Impact Assessments), which form part of your DPIA (Data Protection Impact Assessment) requirements and obligations.

LIP as a legal basis, if implemented correctly and supported by the correct organisational and technical controls, also provides a platform for data collaboration and data sharing. However, you may need to demonstrate that the data has been sufficiently de-identified, including by showing that re-identification risk assessments are performed not just on direct identifiers but on all indirect identifiers as well.
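
To illustrate what a re-identification risk assessment over indirect identifiers might look like, here is a minimal Python sketch in the spirit of k-anonymity. The column names, records and risk measure are assumptions for illustration; production tools apply far more sophisticated models.

```python
# Minimal sketch of a re-identification risk check over quasi-identifiers
# (indirect identifiers), in the spirit of k-anonymity.
from collections import Counter

records = [
    {"age_band": "30-39", "postcode_prefix": "SW1", "product": "loan"},
    {"age_band": "30-39", "postcode_prefix": "SW1", "product": "card"},
    {"age_band": "40-49", "postcode_prefix": "EC2", "product": "loan"},
]

QUASI_IDENTIFIERS = ("age_band", "postcode_prefix")

def smallest_group_size(rows, quasi_ids):
    """k in k-anonymity: size of the rarest quasi-identifier combination."""
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in rows)
    return min(groups.values())

k = smallest_group_size(records, QUASI_IDENTIFIERS)
risk = 1 / k  # worst-case probability of singling out one individual
print(f"k={k}, worst-case re-identification risk={risk:.0%}")
# Here k=1: the EC2 record is unique on its quasi-identifiers, so that row
# would need generalisation or suppression before the data is shared.
```

The same check should run over every combination of indirect identifiers that an attacker could plausibly link to an external dataset, not just the obvious direct identifiers.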

Using LIP as a legal basis for processing may help you avoid bombarding people with unnecessary and unwelcome consent requests, and can help avoid “consent fatigue.” It can also, if done properly, be an effective way of protecting the individual’s interests, especially when combined with clear privacy information and an upfront, continuing right to object to such processing. Lastly, LIP not only gives you a legal framework to perform data science; it also provides a platform for demonstrating the proper and ethical use of customer data, a topic and business objective of most boards of directors.

About the Authors  

Darren Abernethy is Senior Counsel at TrustArc in San Francisco.  Darren provides product and legal advice for the company’s portfolio of consent, advertising, marketing and consumer-facing technology solutions, and concentrates on CCPA, GDPR, cross-border data transfers, digital ad tech and EMEA data protection matters. 

Ravi Pather of CryptoNumerics has spent the last 15 years helping large enterprises address data compliance requirements such as GDPR, PIPEDA, HIPAA, PCI DSS, data residency, data privacy and, more recently, CCPA. He has a good working knowledge of helping large, global companies implement privacy compliance controls, particularly as they relate to more complex secondary purpose processing of customer data in data lake and warehouse environments.

TrustArc Platform Enhancements Automate Risk Management and Privacy Compliance for CCPA, GDPR


TrustArc reinforces its position as a leader in operationalizing privacy compliance at scale with multiple feature enhancements to the TrustArc Privacy Platform. The platform uniquely offers automated risk-management and privacy compliance workflows that integrate with existing business process systems, so that organizations can efficiently manage risk and meet the obligations of regulations around the globe, including the California Consumer Privacy Act (CCPA) and GDPR.

“Privacy expectations are growing. Global companies are embracing these heightened expectations by evaluating risk as it relates to global laws,” said Chris Babel, CEO, TrustArc. “But the myriad of privacy regulations make it challenging to conduct risk assessments and operationalize privacy. We’ve updated the TrustArc Privacy Platform with new feature enhancements to simplify how organizations scale privacy compliance and manage the risks associated with that process.”

Platform Features Enable Automated Privacy Compliance at Scale

The first of its kind, the Risk Profile powers an automated, comprehensive view of the risk that organizations incur as they operationalize privacy practices to meet the demands of global regulations.

Powered by the TrustArc Intelligence Engine, the Risk Profile automatically scores the inherent and residual risk of various business activities. Privacy managers and business unit leaders can now access the risk information they need, when they need it, and in the right context. Together with the Privacy Profile, the Risk Profile creates a holistic view of privacy programs across all aspects of a business. The Privacy Profile shows which laws apply to an organization and how to prioritize and manage compliance in a comprehensive way; the Risk Profile helps organizations understand their risk obligations as they relate to those different regulations.

  • Dashboard Widget: Using a simple scoring method, privacy managers and business leaders can determine how many risk factors are associated with any given business activity. With a high-level view and the ability to dive deep into risk factors, users get greater visibility into risk across their business — straight from the dashboard.
  • Risk Algorithm: The Risk Algorithm covers 40+ laws across the world. This intelligence helps companies identify high-risk business activities, determine the appropriate impact assessment, calculate risk at the business activity level, and immediately understand overall organizational risk (a generic sketch of inherent versus residual risk scoring follows this list).
  • Risk Evaluation Heat Map: Privacy leaders have full control to go deeper within any business activity to further investigate risk. With an easy-to-use heat map, users can indicate the perceived inherent risk of a particular business process. Ultimately, this evaluation measures the inherent risk that provides the baseline for automatically calculated residual risk.
  • Dynamically Generated Impact Assessment Reports: Privacy owners can now manage privacy programs with confidence that they have the right controls in place for risky systems and business activities. The risk algorithm streamlines users’ selection of an appropriate PIA. These assessments result in dynamic reports that can be used in executive meetings, audits, and other business reviews.
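
For readers unfamiliar with the inherent/residual distinction, the sketch below shows one common risk-management convention: inherent risk as likelihood times impact, reduced by control effectiveness to give residual risk. It is a generic illustration under assumed scales, not TrustArc’s proprietary Risk Algorithm.

```python
# Generic sketch of inherent vs. residual risk scoring for a business
# activity. All scales and numbers are illustrative assumptions.
def inherent_risk(likelihood: int, impact: int) -> int:
    """Score before controls: likelihood and impact each on a 1-5 scale."""
    return likelihood * impact  # 1 (negligible) .. 25 (critical)

def residual_risk(inherent: int, control_effectiveness: float) -> float:
    """Score after controls: effectiveness is the fraction of risk mitigated."""
    return inherent * (1.0 - control_effectiveness)

activity = "marketing_analytics"
inh = inherent_risk(likelihood=4, impact=3)          # 12: high inherent risk
res = residual_risk(inh, control_effectiveness=0.6)  # 4.8 after safeguards
print(f"{activity}: inherent={inh}, residual={res}")
```

The gap between the two scores is what an impact assessment documents: which controls were applied, and how much risk they actually removed.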

Data Inventory Hub enables companies to easily integrate with existing systems to identify and inventory data usage, create visual data flow maps, support DSAR / consumer rights requests, generate compliance reports, maintain audit trails, and much more. New features include:

  • Configurable data elements, processing purposes, and data subjects, which allow companies to streamline the creation of data inventories and business processes while improving accuracy by eliminating end user error.
  • API integrations, including integrations with leading data discovery providers to dramatically simplify the process of creating and maintaining a data inventory or business process by integrating with existing sources of internal data.
  • Additional upload options, to simplify the process of building a data inventory.

The Platform Dashboard provides a centralized, configurable, extensive view of privacy programs and actionable insights to inform privacy program management. The Dashboard provides an extensive library of privacy and risk management widgets to quickly monitor KPIs for a wide range of compliance requirements. New capabilities include:

  • Two new widgets to support individual rights/DSAR management by showing the number and type of opened and closed requests by location.
  • Another new widget to support consent management by providing information on consents over time in different locations and websites for different types of cookies.

Individual Rights Manager enables individuals to easily submit data subject access requests (DSARs) and companies to efficiently manage, evaluate and resolve requests within required timelines. New features include:

  • Fully configurable request intake form including form fields, branding and language translations, to align with each customer’s unique business requirements.
  • Additional identity verification options of individuals with strong authentication. This provides companies flexibility to comply with various regulations while eliminating risk of mistaken or fraudulent identities.
  • Integration to third-party data sources, including data discovery tools and internal systems to streamline and automate fulfilling DSARs.
  • Customizable communication templates, which allow companies to tailor communications such as emails and pop-up windows to their specific needs.

Cookie Consent Manager is a powerful, flexible, proven solution for cookie compliance. New features allow the Cookie Consent Manager to auto-detect whether a website user is based in California or Nevada and serve the appropriate consent banner, a highly useful feature for CCPA compliance.
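
Conceptually, region-based banner selection can be as simple as the hypothetical sketch below. The function name, region values and banner identifiers are invented for illustration and do not reflect how Cookie Consent Manager is actually implemented.

```python
# Illustrative sketch of serving a region-appropriate consent banner.
# A real deployment would rely on IP geolocation or a geo-aware CDN header.
EEA_COUNTRIES = {"Germany", "France", "Ireland", "Netherlands"}  # truncated list

def select_banner(region: str) -> str:
    """Map a detected region to the consent experience it requires."""
    if region in {"California", "Nevada"}:
        return "ccpa_notice"   # e.g. notice with a "Do Not Sell" option
    if region in EEA_COUNTRIES:
        return "gdpr_opt_in"   # prior opt-in consent for non-essential cookies
    return "generic_notice"   # default banner for other regions

print(select_banner("California"))  # -> ccpa_notice
```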

Learn more about TrustArc privacy solutions here.

 

New TrustArc Research Reports on Consumer Privacy Attitudes One Year Into GDPR Enforcement Era


May 25, 2019 marked the one-year anniversary of the EU General Data Protection Regulation enforcement deadline. Over the last twelve months, companies across the globe have been working diligently to achieve and maintain compliance with the regulation. The GDPR significantly increased the requirements on how businesses address individual consumer rights, and companies have been tasked with putting processes and systems in place to receive, escalate and accommodate consumer requests. Failure to comply with the GDPR can result in fines, loss of reputation, and expenses associated with responding to compliance investigations. During the IAPP Global Privacy Summit in DC earlier this month, Ireland’s Data Protection Commissioner Helen Dixon shared that over 6,000 complaints have been lodged since May 25, 2018, and that eighteen large-scale investigations are underway and will be reviewed by the European Data Protection Board this summer.

TrustArc has announced new findings from an online study conducted by Ipsos MORI, a global research and consulting firm, on behalf of TrustArc. The survey polled individuals aged 16-75 in the UK about a number of issues surrounding the GDPR one year since it went into effect on 25 May 2018.

A summary of the key findings follows.

TrustArc Research

  1. Trusting Companies With Personal Data Is Increasing

36% of respondents trust companies and organisations with their personal data more since the GDPR privacy regulation came into effect one year ago (rising to 44% among 16-24-year-olds and 41% among 25-34-year-olds). Only one-third or less of those aged 35-44, 45-54 and 55-75 report being more trusting. Women, at 38%, expressed more trust in companies and organisations than men, at 33%.

  2. Understanding GDPR Compliance Is Challenging

25% of respondents are confident they can tell if a company or organisation is GDPR compliant, versus 33% who are not. There were some differences based on geographic location, with respondents in Northern Ireland indicating confidence at 33%, significantly higher than those in Wales at 17%.

  3. Privacy Certifications Can Influence Behavior

57% of respondents would be more likely to use websites that have a certification mark or seal to demonstrate GDPR compliance, versus 9% who would not. There were notable differences based on geographic location; respondents in Scotland were significantly more likely to agree (67%) than respondents anywhere in England.

56% are more likely to do business with companies and organisations that have a certification mark or seal to demonstrate GDPR compliance, versus 8% who disagree. Agreement is higher among households with larger incomes (65%).

  4. Young Adults Are Most Positive About How Well GDPR Enforcement Has Worked

34% of respondents agree that the regulatory enforcement of the GDPR privacy regulation has worked well versus 14% who disagree this is the case. There were significant differences based on age with younger respondents more likely than older respondents to agree this regulation has worked well (46% of 16 to 24-year-olds agree compared with 28% of those aged 55-75).

  5. Respondents Are Exercising GDPR Privacy Rights

47% of respondents have exercised their GDPR privacy rights by sending one or more of eight requests to a website, company or organisation. When asked which of these eight rights respondents had exercised in the past 12 months, the results were:

  • Opt out of/unsubscribe from email marketing: 35%
  • Opt out of/do not consent to the installation of cookies: 23%
  • Restrict the use of my personal data: 13%
  • Erase my personal data: 10%
  • Correct my personal data: 6%
  • Request access to my personal data: 5%
  • Request the transfer of my personal data: 3%
  • Make a privacy complaint to a regulator: 3%

There were significant differences based on respondent gender, with 52% of females and 42% of males having exercised their rights in this respect. 43% of respondents claimed not to have exercised these privacy rights in the past 12 months. Exercising these rights increases progressively with age: 32% of 16-24-year-olds, 33% of 25-34-year-olds, 45% of 35-44-year-olds, 46% of 45-54-year-olds, and 52% of 55-75-year-olds.

For more information, download the research infographic at GDPR Consumer Research Infographic and a summary of the survey questions and responses at GDPR Consumer Research Survey.

Research Methodology

On behalf of TrustArc, Ipsos MORI interviewed 2,230 adults aged 16-75 across the United Kingdom between 15-17 May 2019. Interviews were carried out online on Ipsos MORI’s i:Omnibus. A quota sample of respondents was interviewed, with quotas set by age, gender and geographic region of residence. The final survey data were weighted to the known population of this audience at the analysis stage.

TrustArc can help with all phases of GDPR compliance – from building a plan to implementing processes and controls to demonstrating and managing ongoing compliance. Learn more here.

Privacy Shield Approaching Its 3 Year Anniversary in Operation


With data protection-related activity bustling around the world – from “Brexit” and GDPR enforcement to the approaching CCPA and exciting developments in the APAC region – it’s understandable to lose track of the EU-U.S. and Swiss-U.S. Privacy Shield Frameworks.

What follows are responses to the most frequent Privacy Shield inquiries TrustArc is hearing from our customers.

Is Privacy Shield Still Valid?

Yes – in fact, Privacy Shield is fast approaching its three-year anniversary on July 12th. Since its 2016 adoption, Privacy Shield has remained a sound, scalable and steady legal transfer mechanism for U.S. entities seeking to receive personal data from the EU and/or Switzerland, with two successive approvals from the European Commission’s annual review process.

What Happened with the Earlier EU Parliament Rumblings and the Successful Annual Reviews?

While the EU Parliament has indicated concerns with the Privacy Shield arrangement, the Parliament does not actually have the authority to determine the adequacy of the Privacy Shield program. This authority is reserved exclusively for the European Commission (EC).

In July of last year the EC’s Justice Commissioner stated that a Parliament-requested suspension was “not warranted,” and further indicated that Privacy Shield is of “vital importance” to commerce and has “vigorous data protection requirements.”

Moreover, in its December 2018 report to the European Parliament and Council, the EC concluded that “the United States continues to ensure an adequate level of protection for personal data transferred under the Privacy Shield,” while further noting the improvements to Privacy Shield’s functioning since its previous annual review, along with steps it will continue to monitor.

Did the GDPR Replace Privacy Shield?

No – rules on personal data transfers outside of the European Economic Area (EEA) are a key component of the GDPR, and Privacy Shield provides a way for U.S. organizations to address them: Privacy Shield represents the European Commission’s determination that participating U.S. organizations provide a level of data protection essentially equivalent to that of the EU.

Would Brexit Invalidate Privacy Shield?

No – with the deadline for the United Kingdom to exit the European Union having been extended to October 31st, EU law will remain applicable in the U.K. until such an exit takes place, with Privacy Shield continuing to apply to U.K. personal data as it always has.

In the event the U.K. does leave, two scenarios are possible for Privacy Shield participants, as the U.S. Department of Commerce has addressed in a set of FAQs. Either a “transition period” will be agreed upon by the U.K. and EU, during which EU data protection law (and Privacy Shield) will continue to apply; or, in the event of a “no-transition-period” immediate exit, Privacy Shield participants will need to update their privacy notice(s) to reference relying on Privacy Shield for transfers from the U.K. as well. Regardless of which scenario ultimately plays out, the status of the EU-U.S. Privacy Shield Framework will remain unchanged.

Lastly, where a participant has selected the EU Data Protection Authority panel for dispute resolution purposes, in the event of an exit the organization would instead have to cooperate with the U.K. ICO for U.K. residents’ complaints.

What Does It Mean that Standard Contractual Clauses Are Being Challenged in Court?

Pre-approved model or standard contractual clauses (SCCs), the existing versions of which pre-date the GDPR, are also recognized under the GDPR as a valid data transfer mechanism to non-EEA “third countries.” According to the U.K. ICO, the European Commission plans to update the existing SCCs for GDPR alignment, but until such amendment or replacement the existing SCCs remain in force and usable. However, the validity of the current SCCs as a transfer mechanism to the U.S. is being challenged before the European Court of Justice in a case brought by Austrian privacy advocate Maximilian Schrems.

The Court’s eventual conclusions on the questions before it could theoretically invalidate SCCs as an EU-to-U.S. data transfer mechanism, and could also impact the status of the Privacy Shield Framework.

Most critically, however, the Privacy Shield Framework itself was developed in direct response to the requirements outlined by the European Court of Justice in a previous case brought by Schrems, which invalidated the Safe Harbor program. Compliance with these new requirements was assessed and approved by the European Commission as a condition of its adequacy determination, which, as noted earlier, has been reaffirmed in two successive reviews by the Commission.

Are There Differences Between Privacy Shield and SCCs?

Yes — whereas standard contractual clauses (SCCs) are transaction-based and apply only as between the specific parties signing them, an organization’s Privacy Shield self-certification applies to the receipt of any EU/Swiss personal data flows. This can save time and cost for businesses (especially for SMEs and start-ups). Privacy Shield also affords individuals an independent recourse mechanism, which is beneficial for consumers, partners and employees.

In light of the above, Privacy Shield continues its status as a Commission-supported option for U.S. businesses seeking an established, cost-effective, scalable and agile means of protecting and receiving personal data from the EU and Switzerland.

For further information, including how your company can undertake a formal verification of its privacy program against the Privacy Shield Frameworks’ Principles, contact TrustArc today.

 
