With privacy, communication is critical (but it isn’t everything)

Analysis and chart of emotions expressed in social media about Windows 10 release, courtesy NetBase.

Every week, we’re seeing new stories in the news that highlight the uneasy state of privacy norms.

The announcement of Windows 10 came with swift backlash against “privacy nightmares,” Spotify’s new privacy policy sparked another wave of disbelief and outrage, and other stories, such as the one about how JFK airport may be pinging your phone to deliver more accurate wait times, are being reported with a mixture of breathlessness and unease.

Basically, according to the news, you have a choice between two extremes:

  • Everyone is tracking everything about you, and we’re hurtling toward 1984; or
  • Just calm down already, you paranoid Luddite

As you’d expect, the truth is somewhere in the middle. But where, exactly?

If you read past the first wave of reporting on Windows and Spotify, a new theme emerges: it’s not about what is being collected and why; it’s about how the data collection was communicated. Consider this comment from Whitson Gordon, published in Lifehacker:

Microsoft’s language on one or two settings is very vague, which means it’s hard to tell when it is and isn’t collecting data related to some settings. The “Getting to Know You” setting is particularly vague and problematic.

Now compare this comment to one made by Spotify CEO Daniel Ek in a blog post apologizing for the rollout of the new privacy policy:

We are in the middle of rolling out new terms and conditions and privacy policy and they’ve caused a lot of confusion about what kind of information we access and what we do with it. We apologize for that. We should have done a better job in communicating what these policies mean and how any information you choose to share will – and will not – be used.

Implicit in both statements is the idea that the issue is less what companies are doing than how they communicate about what they’re doing. And all of this came in response to popular backlash.

Interestingly, the story about JFK airport pinging your phone with beacons and pulling its MAC address (your phone’s unique identifier) did not garner nearly as much attention as the other stories, which is surprising given that this data collection is being done at TSA and border control checkpoints. [Wouldn’t you like to know whether the TSA and Border Control have access to that data? I bet a lot of people would.]

Certainly there is a tremendous opportunity for more active transparency (companies making a concerted effort to communicate) and clarity (making those communications effective) when it comes to data and how it is used, both within privacy policies and in the apps themselves. This would, as the Lifehacker and Fast Company articles assert, solve a lot of problems. For example:

  • Want to upload a photo to your profile?
    Then you have to grant the app access to your photos.
  • Want your ride-share service to pick you up where you actually are?
    Then you have to share your precise location with the app.
  • Want to use voice control on your apps?
    Then you need to let the app collect, record, and process your speech into something the machine can understand.

All of the above are rational trade-offs, assuming the data is used as advertised and isn’t stored, used for another undisclosed purpose, or shared with someone or something you didn’t intend.

But, as the saying goes, there’s the rub.

Our recent report, “The Trust Imperative,” uses a framework developed by the Information Accountability Foundation that identifies five principles critical to ethical data use. Here is the list:

[Chart: the five principles of ethical data use — beneficial, progressive, sustainable, respectful, and fair]

As you can see, there is a lot of ground to cover. Let’s run the JFK example through this filter.

  1. Is it beneficial? Yes, because collecting the MAC address helps the airport communicate more accurate wait times. (But, of course, benefit is in the eye of the beholder).
  2. Is it progressive? Is the minimum amount of data (MAC address only, not stored, no additional personal information) being collected? Arguably yes: the company that makes the technology, BlipTrack, says that only the MAC address is captured and that it is not stored.
  3. Is it sustainable? Harder to know. If, for the sake of argument, Blip Systems goes out of business and this service is discontinued, what happens?
  4. Is it respectful? I’d have to say no. No one let passengers know their phones were being pinged by beacons or that their MAC addresses were being collected, encrypted or not. You could make an argument that people should know not to make their phones discoverable in public places, but from what I can tell, there was also no signage that explained that this technology was being used.
  5. Is it fair? Unclear. If the only use of the MAC address is to communicate accurate wait times, probably. But if the data were to be used for any other purpose, commercial or legal, it could be a different story.
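Point 2 above turns on the claim that the MAC address is captured but never stored. As a purely illustrative sketch (this is my own assumption about how such a system *could* work, not BlipTrack’s actual implementation), a vendor could hash each MAC address with an in-memory salt, so two sightings of the same phone can be matched to compute a wait time without the raw identifier ever being written down:

```python
import hashlib
import secrets
from typing import Optional


class WaitTimeTracker:
    """Illustrative sketch: compute wait times without keeping raw MAC addresses.

    Each MAC is hashed with a random salt held only in memory, so sightings
    can be matched to each other, but the original identifier is never stored
    and hashes can't be linked back to devices once the salt is discarded.
    """

    def __init__(self):
        # Regenerating this salt (e.g., daily) breaks any long-term linkage.
        self._salt = secrets.token_bytes(16)
        self._first_seen = {}  # hashed ID -> timestamp of first sighting

    def _anonymize(self, mac: str) -> str:
        return hashlib.sha256(self._salt + mac.encode()).hexdigest()

    def record_sighting(self, mac: str, timestamp: float) -> Optional[float]:
        """Return the elapsed wait time if this device was seen before."""
        hashed = self._anonymize(mac)
        if hashed in self._first_seen:
            return timestamp - self._first_seen[hashed]
        self._first_seen[hashed] = timestamp
        return None


tracker = WaitTimeTracker()
tracker.record_sighting("aa:bb:cc:dd:ee:ff", 0.0)            # beacon at the queue entrance
wait = tracker.record_sighting("aa:bb:cc:dd:ee:ff", 540.0)   # beacon at the checkpoint
print(wait)  # 540.0 seconds in line, with no raw MAC ever persisted
```

Whether the real system does anything like this is exactly the kind of detail that active transparency would clear up.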

While this isn’t an exact science, it’s a useful filter for determining whether a new or existing data collection use case might have unwanted consequences. For more detail on this framework and its implications, please download “The Trust Imperative: A Framework for Ethical Data Use.”

About susanetlinger

Industry Analyst at Altimeter Group