How to Protect Our Privacy During the COVID-19 Crisis

By Damayanthi Jakubowski (Privacy Consultant and Owner of Privacy101.org)

COVID-19, the novel coronavirus disease, has shaken the world to its core. In a matter of weeks, health has become everyone’s priority, pushing many other essentials to the side. To help curtail the spread of the virus quickly, governments in several countries have turned to various mobile technologies. Perhaps because of the sense of urgency, privacy does not always seem to be at the forefront for everyone involved. However, especially in times of crisis, it is of the utmost importance not to abandon our civil liberties. Protecting our privacy should remain a focus for everyone.

Several governments around the world have started using individuals’ geolocation data, gathered from local telecommunications providers and from companies such as Google and Facebook, to identify how population groups move within a certain region. This data can provide essential information, especially for checking whether population groups are obeying a stay-at-home order. However, although an individual’s name, address, and other identifying information are generally stripped from the datasets provided by telecommunications providers, without additional privacy protections it has proven remarkably easy to re-identify individuals. (1)
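The toy sketch below illustrates why: a handful of frequently visited places already acts as a fingerprint. This is a simplified illustration only, not the method used in the cited research, and the names and coordinates are hypothetical.

```python
# Toy linkage attack: match externally known locations (e.g., someone's home
# and workplace) against "de-identified" location traces. Hypothetical data.
anonymized_traces = {
    "user_0412": [("52.52", "13.40"), ("52.51", "13.39"), ("52.52", "13.40")],
    "user_0988": [("48.86", "2.35"), ("48.85", "2.34"), ("48.86", "2.35")],
}

# Facts an attacker might already know about one person (illustrative values).
known_locations = {("52.52", "13.40"), ("52.51", "13.39")}

def reidentify(traces, known):
    """Return the pseudonyms whose trace contains every known location."""
    return [pid for pid, trace in traces.items() if known <= set(trace)]

print(reidentify(anonymized_traces, known_locations))  # ['user_0412']
```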

Governments soon started coming up with other innovative ideas to monitor COVID-19 and those affected by it. The Chinese government introduced a “health-check” app in which users enter their symptoms and which then displays a red, yellow, or green code. A red code indicates that the individual has a high chance of being infected with COVID-19 and will have to be quarantined for 14 days. Germany launched an app (Corona Datenspende) that automatically monitors certain symptoms linked to COVID-19. Once connected to a Fitbit or an Apple Watch, the app collects an individual’s pulse rate, ECG data, stress level, temperature, blood pressure, weight, height, gender, and age. It then provides this data, anonymized but tagged with the user’s zip code, to the German government.

Germany’s neighbor, the Netherlands, is in the process of creating two separate apps. One app will show whether the user has been near another individual infected with COVID-19, and through the other, the user will be able to easily connect with his or her doctor to receive COVID-19-related treatment information. Across the ocean, in the United States, Google and Apple decided to collaborate on exposure notification technology that alerts users when they have been near someone who later tests positive for COVID-19.

When implementing new mobile technologies to fight the COVID-19 pandemic, it’s essential that governments take into consideration their own privacy laws to help protect their citizens’ privacy. However, as of this writing, China does not have a privacy law but focuses on cybersecurity instead. While a cybersecurity law helps protect health data, it does not necessarily also protect the privacy of the individuals whose data are being collected. Those in Europe, on the other hand, benefit from the recent implementation of the General Data Protection Regulation (GDPR), which provides them with comprehensive and thorough data protections. In the U.S., while the Health Insurance Portability and Accountability Act (HIPAA) provides privacy protections to certain health data, there are some important exceptions to the HIPAA Privacy Rule which permit the limited sharing of protected health information for “the purpose of preventing or controlling disease, injury, or disability, including for public health surveillance, public health investigations, and public health interventions,” among other reasons. 45 CFR 164.512(b)(1)(i).

Whether or not a privacy law is in place, governments and organizations should ask themselves some important questions to ensure they are protecting individuals’ privacy at all times: What personal (health) data is critical to collect when fighting COVID-19? Will we use a central database to store the data, and who will have access to it? How long will we keep the data, and how exactly will we delete it? Is the use of mobile technologies really as secure as assumed? Will app use be mandatory, and if so, how will it be enforced? What happens if minority groups are disproportionately affected by COVID-19, and how could the app negatively influence relations between people?

Individuals, on the other hand, will have to make their own privacy decisions before deciding to download an app: If I have COVID-19 symptoms but do not actually have the virus, do I still want to be tracked? Could my health data ever be used against me? What control do I have over the data that is being collected? Can I withdraw consent or delete the data myself?

Previous crises have shown that drastic threats can lead to drastic privacy-violating measures, and COVID-19 certainly is a threat the world has never experienced before. However, individuals, governments and organizational leaders now also have a unique opportunity to create a world in which safety, security and individual privacy go hand-in-hand. Asking ourselves the right privacy questions, before using mobile technologies during this COVID-19 crisis, should be an important first step.


(1) Xu, F., Zhang, P., Tu, Z., Fu, X., Li, Y., & Jin, D. (2017). Trajectory recovery from ash: User privacy is NOT preserved in aggregated mobility data. Retrieved from https://arxiv.org/abs/1702.06270; Narayanan, A., & Shmatikov, V. (2019). Robust de-anonymization of large sparse datasets: A decade later. Retrieved from https://www.cs.princeton.edu/~arvindn/publications/de-anonymization-retrospective.pdf

Can You Legally do Analytics Under the GDPR?


by Gary LaFever, CEO of Anonos
Taking the “personal” out of Personal Data®

Many companies aren’t yet aware that they are doing, or soon will be doing, anything wrong by processing analytics or using historical databases under the GDPR. While many companies are understandably focused on conducting data inventories and data protection impact assessments, it is critical to note that inventories and assessments will not support the new legal bases required under the GDPR for processing data analytics or for using historical databases involving EU personal data.

An important aspect of the GDPR is the new requirement that “consent” must be specific and unambiguous to serve as a valid legal basis. For “consent” to serve as a lawful basis for processing personal data, it must be “freely given, specific, informed and an unambiguous indication of the data subject’s agreement to the processing of personal data relating to him or her.”[1] These requirements for specific and unambiguous consent are impossible to satisfy in the case of iterative data analytics, where successive analyses, correlations and computations cannot be described with the necessary specificity and lack of ambiguity at the time of consent. In addition, the GDPR has no “grandfather” provision allowing continued use of data collected under non-compliant consent prior to the effective date of the GDPR.

To lawfully process data analytics, and to legally use historical databases, containing EU personal data, new technical measures that support alternate (non-consent) GDPR-compliant legal bases are required. After May 25, 2018, companies that continue to rely on consent for analytics, AI and the use of historical databases involving EU personal data will be noncompliant with GDPR requirements and will therefore expose themselves, as well as their co-data controller and data processor partners,[2] to the risk of well-publicized fines of up to 4% of global turnover or 20 million euros, whichever is greater. The good news is that new technical requirements under the GDPR – Pseudonymisation and Data Protection by Default – help to satisfy alternate (non-consent) legal bases[3] for data analytics and the use of historical databases involving EU personal data.

GDPR-Compliant Pseudonymisation

The GDPR embraces a new risk-based approach to data protection and shifts the primary burden of risk for inadequate data protection from individual data subjects to corporate data controllers and processors. Prior to the GDPR, the burden of risk was borne principally by data subjects because of limited recourse against data controllers and the lack of direct liability for data processors.

The GDPR recognizes that static (persistent) purportedly “anonymous” identifiers used to “tokenize” or replace identifiers are ineffective in protecting privacy. Due to increases in the volume, variety and velocity of data, combined with advances in technology, static identifiers can be linked, or are readily linkable, through the Mosaic Effect,[4] leading to unauthorized re-identification of data subjects. Continued use of static identifiers by data controllers and processors inappropriately places the risk of unauthorized re-identification on data subjects. However, the GDPR encourages data controllers and processors to continue using personal data by implementing new technical measures to “Pseudonymise”[5] data and thereby reduce the risk of unauthorized re-identification. GDPR-compliant Pseudonymisation requires separating the information value of data from the means of linking the data to individuals. In contrast to static identifiers, which are subject to unauthorized relinking via the Mosaic Effect, dynamically changing pseudonymous identifiers can satisfy the requirement to separate the information value of personal data from the means of attributing the data back to individual data subjects.
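As a rough illustration of the difference (a minimal sketch under assumed keys and data; it is not Anonos’s technology or a prescribed GDPR mechanism), the snippet below contrasts a static token, which stays linkable across datasets, with dynamically assigned pseudonyms whose re-linking table is held separately:

```python
import hashlib
import hmac
import secrets

STATIC_KEY = b"example-static-key"  # hypothetical key, for illustration only

def static_token(identifier: str) -> str:
    """Static tokenization: the same person always gets the same token,
    so records stay linkable across datasets (Mosaic Effect risk)."""
    return hmac.new(STATIC_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

class DynamicPseudonymiser:
    """Sketch of dynamic Pseudonymisation: each use yields a fresh pseudonym,
    and the mapping back to the identifier is kept separately under
    controlled access (the 'additional information' of Article 4(5))."""

    def __init__(self):
        self._lookup = {}  # pseudonym -> identifier; stored apart in practice

    def pseudonymise(self, identifier: str) -> str:
        pseudonym = secrets.token_hex(8)      # new value on every call
        self._lookup[pseudonym] = identifier  # re-linking only via this table
        return pseudonym

    def reidentify(self, pseudonym: str) -> str:
        return self._lookup[pseudonym]        # authorized use only

print(static_token("alice@example.com") == static_token("alice@example.com"))  # True
dyn = DynamicPseudonymiser()
print(dyn.pseudonymise("alice@example.com") == dyn.pseudonymise("alice@example.com"))  # False
```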

Data Protection by Default

The GDPR imposes a new mandate to provide Data Protection by Default,[6] which goes further than perimeter-only protection and is much more than merely “privacy by design”; it is the most stringent implementation of privacy by design. Data Protection by Default requires that data protection be applied at the earliest opportunity (e.g., by dynamically Pseudonymising data) and that affirmative steps be taken before personal data can be used. This is in stark contrast to common practices prior to the GDPR, when the default was that data was available for use and affirmative steps had to be taken to protect it. Data Protection by Default requires granular, context-sensitive control over data in use, so that only the data proportionally necessary at any given time, and only as required to support each authorized use, is made available.
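A minimal sketch of what default-deny, purpose-based data release could look like in code (the field names, purposes and whitelist are hypothetical assumptions, not a compliance recipe):

```python
# Data Protection by Default, sketched: no field is released unless an
# authorized purpose has affirmatively whitelisted it.
ALLOWED_FIELDS = {
    "billing":   {"customer_id", "invoice_total"},
    "analytics": {"region", "age_band"},  # no direct identifiers by default
}

def release_for_purpose(record: dict, purpose: str) -> dict:
    """Return only the fields the stated purpose is allowed to see."""
    allowed = ALLOWED_FIELDS.get(purpose, set())  # unknown purpose -> nothing
    return {k: v for k, v in record.items() if k in allowed}

record = {
    "customer_id": "C-1001", "name": "Alice", "region": "DE",
    "age_band": "30-39", "invoice_total": 42.0,
}
print(release_for_purpose(record, "analytics"))  # {'region': 'DE', 'age_band': '30-39'}
print(release_for_purpose(record, "marketing"))  # {} -- protected by default
```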

GDPR Technical Requirements and Data Stewardship

Prior to the GDPR, risks associated with not fully comprehending broad grants of consent were borne by individual data subjects. Under the GDPR, broad consent no longer provides sufficient legal basis for data analytics or use of historical databases involving personal data. As a result, data controllers and processors must adopt new technical safeguards to satisfy an alternate legal basis. GDPR requirements may be satisfied by complying with new Pseudonymisation and Data Protection by Default requirements to help support alternate (non-consent) legal bases for analytics and use of historical databases.

Even in situations where a company is not required to comply with EU regulations, compliance with the GDPR’s Pseudonymisation and Data Protection by Default requirements is evidence of state-of-the-art data stewardship, thereby engendering maximum trust with customers.

[1] See Recital 32 and Article 4(11).

[2] See Articles 26 and 82.

[3] See Articles 6(1)(b)-(f).

[4] The “Mosaic Effect” occurs when a person is indirectly identifiable due to a phenomenon referred to by the Article 29 Working Party as “unique combinations”: notwithstanding the lack of identifiers that directly single out a particular person, the person is still “identifiable” because that information may be combined with other pieces of information (whether or not the latter is retained by the data controller), enabling the individual to be distinguished from others. See http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2007/wp136_en.pdf.

[5] See Article 4(5).

[6] See Article 25.

The GDPR is Coming: 3 Things for DPOs to Consider About Privacy Awareness


by Tom Pendergast

Many of the impacts of the EU’s wide-reaching General Data Protection Regulation (GDPR) are still being hemmed and hawed about, but one thing is clear: more Data Protection Officers will be needed.

The IAPP estimated last year that 28,000 new DPOs will be needed to oversee data handling for organizations subject to the GDPR. The mandatory DPO is one of many provisions within the GDPR going into effect in May 2018. (Check out our white paper here for a primer and some industry expert input.)

While the requirements for getting in compliance with the GDPR are many—see the full 88-page regulation here or take a look at a short version from the DPO Network Europe—there’s one important factor that we want to draw attention to:

The GDPR requires privacy awareness training, and it’s the DPO’s responsibility (see Article 39(1)(b)).

While the GDPR offers no real specifics on what privacy awareness training should entail, I’d like to provide some suggestions, based on our years of experience working with some of the most privacy-aware global companies. If you’ve been assigned the DPO duties in your organization, here are three things for you to consider as you begin your work (in the form of an open letter).

To the new Data Protection Officer

Congratulations on your new DPO position!

You’ll undoubtedly be hitting the ground running, so allow me to quickly get to my point. The GDPR in no uncertain terms requires privacy awareness training. With this obligation hanging over your head, you might be wondering how exactly to begin moving down the path of organization-wide privacy awareness.

The short answer: strive for a privacy-aware culture. Even nebulous corporate values like “privacy awareness” can be essential to the functioning of an organization if they are championed by executives, embedded in operational procedures, aligned to key business goals, measured regularly, and effectively communicated on a consistent basis to all employees. Such steps ensure that these values are proactive parts of corporate culture, embedded within the organization’s design, and accepted across the organization as the default mode of operation for all employees.

The onus is on you to champion privacy-based thinking as a vital part of organizational culture. Such a task is no small feat, but possible. How? You can start by following some of the best practices of America’s most risk-aware companies. Here are some ideas:

  1. Start at the Top

For better or worse, people look to leaders to set the tone for their organization. That’s why you, as your organization’s new DPO, and your executives must understand the importance of clear communication about privacy risks.

In our experience, too few people at this level understand or speak personally and directly about the impact this risk has on their lives and their organizations. We need to do a better job of educating leaders about the nature of risks, and get them to incorporate this understanding into the regular communications to their employees and citizens. Ensuring that privacy risks are understood at the executive level will also make it easier to make the case for comprehensive privacy awareness programs when there is a check to be signed.

Recommendation: Educate all executive-level personnel in privacy best practices and ensure they’re committed to giving privacy a regular place in communications both to their employees and to the public.

  2. But Make it for Everybody

Employees may look to leaders to set the tone, but they will not make substantive changes in behavior unless they can directly connect data privacy risks to their work and personal lives. That’s why it’s so critical that you reach people where they are:

  • Those handling financial information need to practice the skills involved in securing credit card data and all sources of financial data, just as nurses and healthcare professionals need to protect confidential health information
  • Managers and executives need to understand that their heightened access to information makes them targets
  • IT staff need special training, not just on their privileged access to data but also on the role they play as ambassadors in understanding and using information technology to protect information

No matter our age or our job, we all face privacy risks. But these risks take different forms, and what we need to know and do to protect ourselves differs across our roles. The way you educate must reflect those differences, or it will be irrelevant and ultimately ineffective.

Recommendation: Tailor all privacy-related training and communication to roles (whether they be job roles or phases in life) to ensure the information is relevant and actionable.

  3. Make it Engaging

If we ever expect privacy knowledge to become a foundational element in our culture, we need to take our cues from advertising, communications, and PR. (And not, I’m sorry to say, from conventional training practices). Look what Smokey the Bear did for preventing wildfires or what “Where’s the Beef?” did for hamburgers. As someone responsible for teaching privacy best practices (or at least researching and managing a training vendor), you need to think like an ad executive.

Simple slogans or interactive experiences, clearly and repeatedly delivered in fun and relevant ways, do far more to build awareness than the long, dry training courses that are so frequently hailed as the solution when it comes to data privacy. Companies that leverage highly visible, regular communications and activities focusing on key risks have the most success at building information protection into organizational culture. Is there a risk in using humor or games or shock tactics to communicate about data protection? Sure. Some people won’t get it or may be put off by a particular approach. But the risk of boring people is much greater. If people are bored, they’ll never learn.

Recommendation: Engage in a comprehensive campaign to get people talking about privacy best practices with features like games, phishing simulation, posters, and videos. The more varied ways you can present your message, the better.

My advice ultimately comes down to this: Employees need to see the benefits of identifying personal information; handling it appropriately; and reporting potential privacy incidents before they lead to data breaches.

It’s essential that you raise the transparency and visibility of efforts to promote information protection, as it’s critical to the development of a privacy-aware culture within your organization. This is your opportunity to make sure that everyone at your organization makes data privacy their responsibility, as it should be.

I wish you the best.

Sincerely,

Tom Pendergast

Tom Pendergast is the chief architect of MediaPro’s Adaptive Awareness Framework approach to plan, train, reinforce, and analyze workforce learning and awareness in the subjects of information security, privacy, and corporate compliance. Tom has a Ph.D. in American Studies from Purdue University and is the author or editor of 26 books and reference collections. Tom has devoted his entire career to content and curriculum design, first in print, as the founder of Full Circle Editorial, then in learning solutions with MediaPro.

Tom Pendergast will also be speaking at the upcoming TRUSTe Privacy Risk Summit on the “GDPR Readiness Half Time Report: Will Companies Make the Grade?” panel.

The Internet of Things and Connected Cars: Considering Privacy Issues and Minimizing Risk


The internet of things is the connection of a broad range of devices via IP addresses. It can range from our smart TVs and phones to our home security systems and thermostats … the list goes on. A popular prediction is that by 2020, the internet of things will comprise no fewer than 50 billion devices.

With this type of wide adoption, concerns over private data surface – how it is collected, how it is used, and how it may make your organization vulnerable to risk.

Connected cars, having IP addresses, are part of the internet of things. Unless anonymized, all data that comes from a car is potentially personal, frequently behavioral, sometimes social, and, now that cars include payment systems, sensitive, financial, and reputational as well. As just one example, a connected car could have access to a credit card number, where the data subject drove before and after a purchase, and all of a phone’s contacts. It may also deduce where the data subject lives and works, how they typically drive, and whether they are driving in a particularly erratic manner at a given moment.

Privacy and the Internet of Things: Understanding Risk

To paraphrase a recent TRUSTe Privacy Blog post, as internet of things technologies advance and companies have greater monetary incentives to process the data, privacy and transparency should be top considerations. The more connected devices there are, the greater the risk that they will be compromised. The FTC report “Internet of Things: Privacy & Security in a Connected World” indicates that fewer than 10,000 households together generate 150 million discrete data points every day.

Anticipating the need for increased vigilance in privacy protection, in late 2014 the Alliance of Automobile Manufacturers (representing almost all car manufacturers) developed and released a set of Consumer Privacy Protection Principles to be incorporated into car manufacturers’ privacy policies and statements.

Now, regulators are increasingly weighing in. When it comes to connected automobiles alone, privacy laws and enforcement actions are growing. In a keynote presentation at the 2016 Connected Cars conference, FTC Commissioner Terrell McSweeny stated that the Commission was watching to ensure that automobiles protect the security and privacy of consumers. France’s data protection authority, the CNIL, released a compliance package with guidelines for how to treat the personal data gathered by connected cars. These guidelines are intended to be consistent with the requirements of the EU General Data Protection Regulation (GDPR) when that law goes into effect next year.

IoT and Unauthorized Disclosure of Data: Incident or Breach?

Like any other privacy incident in which private, protected data is revealed without authorization, an incident involving an IoT device should be analyzed under all applicable breach notification laws and contractual obligations. When conducting a multi-factor risk assessment to determine if an incident meets a breach threshold, keep the following in mind:

  1. Understand the difference between an incident and a breach; it’s key to determining whether your incident requires notification. Making this determination means answering questions such as: How was the data stored? How was it transmitted? Were there adequate technical safeguards in place for both… how much risk should be attributed to the recipient? Were they authorized? How likely are they to misuse the data? Are there any administrative or contractual protections on that relationship? After the incident, were any mitigation measures taken, such as remotely wiping storage media, changing credentials, or other steps that could limit or remove further risk exposure? (A simple illustrative scoring sketch follows this list.)
  2. Proving consistency in your risk assessment process can help you pass an audit – or even avoid audit scrutiny altogether. Automation tools in incident response provide a consistent process for documenting and profiling the incident, scoring that incident against applicable laws, and generating incident-specific notification guidance and decision support.
  3. Track trends in incident categories and root causes. Learn from your incidents. Accurately identifying weaknesses in your systems, departments or processes can reduce the number of incidents and your organizational risk. Automation is key to ensuring proper analysis and risk mitigation.
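Below is a simple, purely illustrative sketch of how such a multi-factor assessment might be scored consistently; the factor names, weights, and threshold are assumptions made for illustration and do not reflect any particular law’s test:

```python
# Hypothetical multi-factor incident risk score (illustrative only).
RISK_FACTORS = {
    "recipient_unauthorized":      3,   # who received the data?
    "recipient_likely_to_misuse":  3,
    "data_encrypted_at_rest":     -2,   # adequate technical safeguards
    "data_encrypted_in_transit":  -1,
    "contractual_protections":    -1,   # administrative/contractual controls
    "remote_wipe_confirmed":      -2,   # post-incident mitigation
    "credentials_rotated":        -1,
}

def incident_risk_score(facts: dict) -> int:
    """Sum the weights of every factor that applies to this incident."""
    return sum(w for factor, w in RISK_FACTORS.items() if facts.get(factor))

def requires_notification(facts: dict, threshold: int = 2) -> bool:
    # The threshold is a placeholder; the real decision must follow each
    # applicable breach notification law and contractual obligation.
    return incident_risk_score(facts) >= threshold

incident = {"recipient_unauthorized": True, "data_encrypted_at_rest": True}
print(incident_risk_score(incident), requires_notification(incident))  # 1 False
```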

2017 Privacy Risk Summit Session

For more on the topic of Privacy and the Internet of Things, attendees of the upcoming Privacy Risk Summit are invited to join the session “What’s your Wallet? The Privacy and Security of In-Car Payment Systems” on June 6, 2017, from 10:30 to 11:30 AM. A panel that includes K&L Gates attorneys from the US and Europe, a client manufacturer of connected car technology, and myself will discuss the challenges of implementing the new standards imposed by the US Federal Trade Commission, as well as by the French, German and British data protection authorities. Panelists include:

  • Jill Phillips, Sr Attorney, Privacy & Security, Intel
  • Julia Jacobson, Partner (Boston Office), K&L Gates LLP
  • Claude-Etienne Armingaud, Partner (Paris Office), K&L Gates LLP
  • Alex Wall, Senior Counsel & Global Privacy Officer, RADAR, Inc.

 

 

How the Privacy Landscape is Creating In-Demand Jobs


By KimAnh Tran, Associate Legal Counsel, CIPP/US, Contributor

High-profile breaches seem to arise almost weekly across all industries and verticals, making privacy and security top of mind for organizations large and small. Fear has proven to be a strong motivator for many organizations, as an expensive remediation process, a regulatory audit, and a public relations disaster loom with any breach. Predictably, companies are reacting by trying to clean up their own privacy practices company-wide. This objective, though admirable, is not easily accomplished and typically requires the skills of experienced privacy professionals.

Privacy management as an industry is still relatively young, and consequently privacy veterans are few and far between. However, more and more job descriptions express a need for seasoned privacy professionals with experience in tracking and understanding privacy regulations and best practices, and in applying such knowledge across a variety of roles and functions.

Though official titles may vary, several roles and functions seem to be in demand in the privacy space. The qualifications for each may differ depending on company size, the company’s industry, and its need for privacy support. However, a CIPP certification through the International Association of Privacy Professionals (IAPP) may indicate a certain level of credibility and dedication to privacy in the eyes of a hiring manager.


EdTech Companies: Tips on Compliance with the Applicable Regulatory Framework (COPPA)

By Shreya Vora, Esq., CIPP/US


Educational technology is really taking off. Kids today use tablets and computers at school, along with learning apps and a bevy of other online tools. When building products for the education technology sector, all business owners need to consider privacy – from budding entrepreneurs to established companies to large multinational corporations. When your technology is aimed at kids, there are laws as well as best practices to follow in order to mitigate risk and ensure consumer trust.

Understanding the legal landscape within which your technology operates is essential to ensuring your company’s survival and success. Failure to comply can lead to hefty fines, loss of business, reputational damage, and a media nightmare. Understanding the laws and best practices in your industry will empower you to design and update your technology with children’s privacy issues in mind. It goes without saying that, given the speed of technological innovation, many of the applicable laws have necessitated (and continue to necessitate) reform to truly address the risks posed by education technology and the data gathered about children through such technology (e.g., what can be done with metadata, data retention policies, use of information for advertising purposes – the list goes on). That said, for those working in this space, there are some key regulations to keep in mind (though this is by no means a comprehensive list).

