Every week brings new stories highlighting the uneasy state of privacy norms.
Basically, according to the news, you have a choice between two extremes:
- Everyone is tracking everything about you, and we’re hurtling toward 1984; or
- Just calm down already, you paranoid Luddite.
As you’d expect, the truth is somewhere in the middle. But where, exactly?
If you read past the first wave of reporting on Windows and Spotify, a new theme emerges. It’s not about what is being collected and why; it’s about how the data collection was communicated. Consider this comment from Whitson Gordon, published in Lifehacker:
> Microsoft’s language on one or two settings is very vague, which means it’s hard to tell when it is and isn’t collecting data related to some settings. The “Getting to Know You” setting is particularly vague and problematic.
Implicit in these arguments is that the issue is less what companies are doing than how they communicate what they’re doing. And much of that communication comes only in response to popular backlash.
Interestingly, the story about JFK airport pinging your phone with beacons and pulling its MAC address (your phone’s unique hardware identifier) did not garner nearly as much attention as the other stories, especially given that this data collection is being done at TSA and border control checkpoints. [Wouldn’t you like to know whether the TSA and Border Control have access to that data? I bet a lot of people would.]
Certainly there is a tremendous opportunity for more active transparency (companies making a concerted effort to communicate) and clarity (how effective those communications are) when it comes to data and how it is used, both within privacy policies and in the apps themselves. This would, as the Lifehacker and Fast Company articles assert, solve a lot of problems. For example:
- Want to upload a photo to your profile? Then you have to grant the app access to your photos.
- Want your ride-share service to pick you up where you actually are? Then you have to share your precise location with the app.
- Want to use voice control in your apps? Then you need to let the app collect, record, and process your speech into something the machine can understand.
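These three trade-offs are, incidentally, exactly the kind of data use that mobile platforms now force apps to declare up front. As one illustration, iOS requires an app to pair each protected data source with a purpose string in its Info.plist, and shows that string to the user in the permission prompt. The keys below are real iOS keys; the wording of each string is hypothetical, sketching how an app could communicate its intent plainly:

```xml
<!-- Excerpt from a hypothetical app's Info.plist. iOS blocks access to
     photos, location, and the microphone unless the app declares why it
     wants each one; the user sees these strings in the permission dialog. -->
<key>NSPhotoLibraryUsageDescription</key>
<string>Used only to let you choose a profile photo.</string>

<key>NSLocationWhenInUseUsageDescription</key>
<string>Used to find your pickup spot while you request a ride.</string>

<key>NSMicrophoneUsageDescription</key>
<string>Used to capture voice commands. Audio is processed, not stored.</string>
```

The mechanism enforces disclosure, but not clarity: nothing stops an app from shipping a vague purpose string, which is the same communication gap the Windows and Spotify stories turned on.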
All of the above are rational trade-offs, assuming the data is used as advertised and isn’t stored, repurposed for something undisclosed, or shared with someone or something you didn’t intend.
But, as the saying goes, there’s the rub.
Our recent report, “The Trust Imperative,” uses a framework developed by the Information Accountability Foundation that identifies six principles critical to ethical data use, among them whether a given use is beneficial, progressive, sustainable, respectful, and fair. That is a lot of ground to cover. Let’s run the JFK example through this filter.
- Is it beneficial? Yes, because collecting the MAC address helps the airport communicate more accurate wait times. (But, of course, benefit is in the eye of the beholder.)
- Is it progressive? That is, is the minimum amount of data (the MAC address only, not stored, with no additional personal information) being collected? Arguably yes, because Blip Systems, the company that makes the BlipTrack technology, says that only the MAC address is captured and that it is not stored.
- Is it sustainable? Harder to know. If, for the sake of argument, Blip Systems goes out of business and this service is discontinued, what happens?
- Is it respectful? I’d have to say no. No one let passengers know their phones were being pinged by beacons or that their MAC addresses were being collected, encrypted or not. You could make an argument that people should know not to make their phones discoverable in public places, but from what I can tell, there was also no signage that explained that this technology was being used.
- Is it fair? Unclear. If the only use of the MAC address is to communicate accurate wait times, probably. But if the data were to be used for any other purpose, commercial or legal, it could be a different story.
While this isn’t an exact science, it’s a good filter for determining whether a new or existing data-collection use case might have unwanted consequences. For more detailed information on this framework and its implications, please download “The Trust Imperative: A Framework for Ethical Data Use.”