What’s the ROI of Trust? Results of the Edelman Trust Barometer 2015

Photo: purplejavatroll, CC 2.0

Each year, Edelman conducts research that culminates in the publication of its Trust Barometer (disclosure: I worked at Edelman when the first report was published). Fifteen years and counting is long enough that the study has captured meaningful shifts in patterns of trust: in governments, NGOs, business and business leaders. We’ve seen these shifts through the lens of many significant geopolitical events: the 1999 “Battle in Seattle,” the war in Iraq, and the Great Recession of 2007-2008, to name a few.

But this year’s results have troubling implications for the technology industry in particular. According to the report, “[F]or the first time since the end of the Great Recession, trust in business faltered.” Technology, while still the most trusted industry at 78 percent, experienced declines in many countries across multiple sectors. Trust in the consumer electronics sector fell in 74 percent of countries, trust in the telecommunications sector fell in 67 percent of countries, and, “in 70 percent of countries trust in technology in general sank.”

But most troubling of all were the findings about trust in innovation. Granted, this was the first time that Edelman has asked this particular set of questions, but it likely won’t be the last. And it sets a useful baseline against which to measure and better understand how the general population feels about the nature and pace of innovation over time.

The finding that 51 percent of respondents feel the pace of innovation is “too fast” is worth unpacking, especially as it varies substantially among industries. One of the findings I found most interesting: trust in a particular sector (healthcare or financial services, for example) does not track neatly with trust in that sector’s innovations. Consider financial services (trusted by 54 percent overall) compared to electronic payments (62 percent, up 8 percent), or Food & Beverage (a 67 percent overall level of trust, compared to 30 percent for GMO foods). Of course, these numbers change dramatically as one views them across regions, from developed to developing countries, for example.

So, even given high overall levels of trust in the technology industry, we cannot sit comfortably and assume there is a “trust dividend” we can collect on as we continue to work on everything from cloud computing to big data to artificial intelligence.

While we don’t have data that specifically links levels of trust in technology and innovation to Edward Snowden’s revelations about NSA surveillance methods, to recent corporate data breaches, or to disclosures about just how few pieces of credit card metadata it takes to identify an individual, we do have evidence as to which kinds of behaviors affect perceptions of trust.

Unsurprisingly, integrity and engagement are the winners.

From there, you don’t need a map to get to the value of integrating trust into everything we do as an industry (or, more accurately, as a collection of many interrelated industries). Here is how Edelman reported respondents’ actions based on levels of trust:

Source: Edelman

And this is where we have to turn to the inevitable question: what are the ingredients in the ROI of trust? Based on the implicit and explicit findings above, I’d propose the following list of metrics for starters; a rough sketch of how they might combine into a single index follows the list.

  • Propensity to buy/revenue opportunity
  • Brand reputation
  • Customer value (transaction to aggregate)
  • Shareholder value
  • Cost savings/improvement based on loyalty
  • Cost savings/improvement based on advocacy
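
To make the list concrete, here’s a minimal sketch of how these metrics might be rolled up into a single trust-ROI index. To be clear, everything in it is an assumption for illustration: the metric values, the weights and the normalization are mine, not Edelman’s.

```python
# Illustrative sketch: rolling hypothetical trust metrics into one index.
# All metric names, values and weights are assumptions for demonstration;
# none of them come from the Edelman Trust Barometer.

# Each metric is assumed to be normalized to a 0-1 scale before weighting.
TRUST_METRICS = {
    "propensity_to_buy": 0.72,   # e.g., share of customers likely to purchase
    "brand_reputation": 0.64,    # e.g., normalized reputation-survey score
    "customer_value": 0.58,      # e.g., indexed value, transaction to aggregate
    "shareholder_value": 0.61,   # e.g., indexed total shareholder return
    "loyalty_savings": 0.55,     # e.g., retention-driven cost savings, indexed
    "advocacy_savings": 0.49,    # e.g., referral-driven savings, indexed
}

# Hypothetical weights reflecting how much each metric matters to the business.
WEIGHTS = {
    "propensity_to_buy": 0.25,
    "brand_reputation": 0.20,
    "customer_value": 0.20,
    "shareholder_value": 0.15,
    "loyalty_savings": 0.10,
    "advocacy_savings": 0.10,
}

def trust_roi_index(metrics: dict, weights: dict) -> float:
    """Weighted average of normalized trust metrics; returns a 0-1 score."""
    total_weight = sum(weights.values())
    return sum(metrics[name] * weights[name] for name in weights) / total_weight

print(f"Trust ROI index: {trust_roi_index(TRUST_METRICS, WEIGHTS):.2f}")
```

The point of even a toy model like this is that it forces the hard conversation: which of these metrics do we actually measure today, and who owns the weights?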

The next step, of course, is to take those twin behaviors, integrity and engagement, and drill down so that we really understand what moves the needle one way or another. That will be a continuing topic for upcoming research and blog posts.


This week in #digitalethics: the useful versus creepy problem

Photo: Phillippe Teuwen, CC 2.0

Remember when we were twelve, and we thought the funniest thing ever was to play the game where you add the phrase “in bed” to every sentence? As the mom to a middle-schooler, I’m rapidly being re-introduced to this delightful genre of humor. Puberty is disruptive. Context is everything.

And as I continue to think about the events of the last week, I can’t help but add the phrase “in digital” to every story I read. It changes, well, not everything, but a lot.

In digital, data is less transactional, more ambient

An excellent article by Tom Goodwin in AdAge argues that connected devices (refrigerators, wearables, cars and, of course, mobile phones: what we call the Internet of Things) are driving a redefinition of data collection, from something that requires action (dialing up, in the olden days) to something that just…happens. And this data (what you eat, where you go, how much you move) is increasingly intimate.
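
The shift Goodwin describes is easy to see in code. Here’s a minimal sketch contrasting the two models: a transactional collector that captures a data point only when the user explicitly acts, and an ambient one that samples in the background simply because the device is on. The device and field names are invented for illustration.

```python
import random
import time

# Transactional model: a data point exists only because the user acted.
def on_user_submits_reading(value: float) -> dict:
    """Capture data at the moment of an explicit user action."""
    return {"source": "user_action", "value": value, "ts": time.time()}

# Ambient model: data accumulates because the device is simply switched on.
def ambient_sampler(samples: int = 5, interval_s: float = 0.2) -> list:
    """Sample a (hypothetical) sensor on a timer; no user action required."""
    readings = []
    for _ in range(samples):
        readings.append({
            "source": "background_sensor",
            "value": random.uniform(20.0, 25.0),  # stand-in for a real sensor
            "ts": time.time(),
        })
        time.sleep(interval_s)
    return readings

print(on_user_submits_reading(21.5))  # one deliberate, user-initiated point
print(len(ambient_sampler()), "points collected with no user action at all")
```

Notice that nothing in the ambient loop asks anyone anything; that is precisely what makes the data “just happen,” and what makes consent so slippery.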

Speaking of which, your television may be listening

You’ve seen the movies; this is classic thriller/horror-story fare. You would hope to be entitled to privacy in your own home (unless you are a spy, under investigation, the owner of a baby monitor, or Olivia Pope), but if you use voice commands, your TV may actually be listening to all your conversations. Samsung just added a supplemental disclosure to its Smart TV web page stating that the company:

… may collect and your device may capture voice commands and associated texts so that we can provide you with Voice Recognition features and evaluate and improve the features. Please be aware that if your spoken words include personal or other sensitive information, that information will be among the data captured and transmitted to a third party through your use of Voice Recognition.

In situations like these, is a disclosure on a website enough? As a consumer, I say nope. I only know about this because I happened to see a news story; otherwise I would have had no idea it was even possible. Is this mass collection of spoken data even necessary, or can the collection be turned on and off?
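
One structural answer to that last question is to gate collection behind an explicit, user-facing switch and drop audio entirely when it’s off. This is a hypothetical sketch of such a gate, not Samsung’s actual implementation; the class and method names are mine.

```python
class VoiceCapture:
    """Hypothetical voice-command pipeline with an explicit consent gate."""

    def __init__(self):
        self.recognition_enabled = False  # off by default: opt in, not opt out

    def set_voice_recognition(self, enabled: bool) -> None:
        """User-facing toggle, e.g., a switch in the TV's settings menu."""
        self.recognition_enabled = enabled

    def handle_audio(self, audio_chunk: bytes) -> None:
        if not self.recognition_enabled:
            return  # drop audio immediately; nothing is stored or transmitted
        self._transmit_for_recognition(audio_chunk)

    def _transmit_for_recognition(self, audio_chunk: bytes) -> None:
        # Placeholder for sending audio to a recognition service; in a
        # privacy-respecting design this step would be disclosed and auditable.
        print(f"transmitting {len(audio_chunk)} bytes for recognition")

tv = VoiceCapture()
tv.handle_audio(b"\x00" * 1024)   # silently dropped: no consent given
tv.set_voice_recognition(True)    # explicit user opt-in
tv.handle_audio(b"\x00" * 1024)   # now, and only now, transmitted
```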

Facebook will soon be able to identify you in any photo

An article in Science last week reported that, using its DeepFace technology, Facebook will soon be able to identify you in any photo. At the same time, a Facebook spokesperson says, “you will get an alert from Facebook telling you that you appear in the picture…You can then choose to blur out your face from the picture to protect your privacy.” This of course raises the question of when consent to be identified actually occurs: when you sign the Terms of Service, or when you are presented with the choice between allowing yourself to be tagged and blurring your face?
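
The consent-timing question is really a design decision: is the person asked before or after the identification is acted on? Here’s a hypothetical sketch of the alert-then-choose flow the spokesperson describes; the function and type names are my inventions, not Facebook’s API.

```python
from enum import Enum

class TagChoice(Enum):
    ALLOW_TAG = "allow"
    BLUR_FACE = "blur"

def handle_face_match(photo_id: str, matched_user: str, ask_user) -> str:
    """Hypothetical flow: notify the matched person before publishing a tag.

    `ask_user` is a callback that presents the choice and returns a TagChoice,
    so consent is gathered per photo rather than assumed from the Terms of
    Service signed long ago.
    """
    choice = ask_user(f"You appear in photo {photo_id}. Tag or blur?")
    if choice is TagChoice.BLUR_FACE:
        return f"{matched_user}'s face blurred in {photo_id}"
    return f"{matched_user} tagged in {photo_id}"

# Example: a user who prefers to stay unidentified.
print(handle_face_match("p123", "alice", lambda prompt: TagChoice.BLUR_FACE))
```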

Artificial intelligence either does or doesn’t signal the end of humanity as we know it

Dina Bass and Jack Clark of Bloomberg published a story last week on efforts to balance the discussion of what ambient data and artificial intelligence mean for the future of humanity. On the “AI is a potential menace” side of the debate: Elon Musk, Stephen Hawking and Bill Gates. Representing “Please don’t let AI be misunderstood”: Paul Allen, Jack Ma and researchers from MIT and Stanford.

Wherever one’s convictions may lie, this is a conversation that must become more explicit and specific. It requires a deeper understanding not only of the technologically possible (what machines can do today and will likely do in the future) but also of the ethical implications. One useful example harks back to the old “lifeboat ethics” question: what happens when a self-driving car is in a no-win situation and must either sacrifice itself and its passengers or hit a pedestrian or bicyclist? What would you do? Who decides?

Context is everything

For most people, there is no expectation that your TV is listening to conversations in your home. Your fridge is there to keep things cold, not to provide data on your eating habits. If you happen to be photographed while going about your daily business, there’s little chance (unless you’re a celebrity) that there will be consequences. Anonymity bestows a degree of privacy. But when data is ambient, anonymity is no longer possible.

Some people call this the “useful versus creepy” problem: in the digital age, does technology have to be creepy to be useful? Or does respecting privacy and building in appropriate controls make technology inherently less useful?

I think this is a false dichotomy.

Earl Warren, former Chief Justice of the United States, once said, “Law floats in a sea of ethics.” To that I’d add, “Ethics floats in a sea of technology.”

Jess Groopman and I are collecting use cases and working on frameworks to parse these issues and make these conversations more explicit. We welcome and will cite your contributions.


Big Boulder Initiative: What’s on the Social Data Ethics Agenda for 2015?

Today, #snowpocalypse2015 permitting, the board of directors of the Big Boulder Initiative is meeting in San Francisco to plan 2015 in more granular detail. As a member, I’m really proud of what we accomplished during the past year, but I recognize that there is a lot of ground left to cover. Here are some highlights of the past year, from a post last week by board director Chris Moody, VP of Data Strategy at Twitter:

  1. We established the first independently operated and self-sustaining 501(c)(6) nonprofit trade association dedicated to laying the foundation for the long-term success of the social data industry.
  2. We formed a board of directors composed of representatives from enterprises, startups and academia within the ecosystem, whose mission is to collectively address key challenges within the industry.
  3. We published a Code of Ethics and Standards in an effort to define a set of ethical values for the treatment of social data that will be used as a benchmark for companies and individuals associated with the social data industry around the world.
  4. Earlier this month, BBI held a half-day workshop in Boston, hosted by Fidelity Investments and focused on the ethics of social data.
  5. We added three new board members:
    • Justin DeGraaf, Global Media Insights Director at The Coca-Cola Company
    • Mark Josephson, CEO of Bitly
    • Farida Vis, Director of the Visual Social Media Lab and Faculty Research Fellow at the University of Sheffield
  6. Finally, Brandwatch, IBM, NetBase and Twitter have joined the Big Boulder Initiative as founding members. In recognition of their efforts, BBI has added the following board observers to the board of directors:
    • Will McInnes, CMO of Brandwatch
    • Jason Breed, Partner/Global Lead, Social Business at IBM
    • Pernille Bruun-Jensen, CMO of NetBase
    • Randy Almond, Head of Data Marketing at Twitter

Over the next several weeks and months, we’ll be holding events (details to come!) and publishing more about our activities. In the meantime, if you’d like to hear about membership or you have any questions about the Big Boulder Initiative overall, please contact:

Bre Zigich
Big Boulder Initiative Board Secretary
bre@twitter.com
720.212.2120


New Research: What Do We Do with All This Big Data?

In September, I had the opportunity to speak at TED@IBM in San Francisco about the implications of a data-rich world, and what we can do, as businesspeople, citizens and consumers, to use all that data to our best advantage.

Since then, I’ve had dozens of conversations (at conferences, in person, online, often serendipitously) about the two main themes of the talk: how do we extract real insight from data, and how do we do so in a way that actually retains and builds trust?

These are huge questions, and they deserve serious and ongoing investigation. This will be the core of my research agenda this year. I’ll be speaking with technology users, business leaders, entrepreneurs, lawyers, ethicists, scholars and technologists to better understand how they see these challenges and what they can tell us about how to extract insight from complex data at scale. We’ll look at emerging technologies, changing organizational dynamics, research methodologies and decision-making. We’ll look at the criteria needed to deliver capabilities such as predictive analytics, and how they affect tool requirements, culture and organizational design.

And I’ll be breaking down discussions of “ethics” (so easy to push aside in favor of more “concrete” issues) into actionable themes that we, as an industry, must address. Where we get our data, how we extract and enrich it, how we mix it with other data, how we use it and how we communicate about what we’re doing: all are open to scrutiny. As part of this research, I’ll be looking at existing case law, speaking with the legal community and working with colleagues at The Big Boulder Initiative, a group of academics, brand representatives and technologists who are passionate about advancing the useful and ethical use of social data.

This document is just a first step toward setting context for the many disruptions of ubiquitous and complex data, but it includes preliminary frameworks to help us examine these issues in more detail, and recommendations on what steps to take to use data strategically and ethically in a business context.

I hope it acts as a catalyst for further discussion, and I’ll be building on and deepening these findings throughout the year.

Please weigh in with questions and feedback. I’ll link to substantive posts, as always.


Digital Ethics: New Year’s Resolutions for 2015

Source: freeimages.com

In my last post, I discussed some themes for 2015, one of which was an imperative for us as an industry to get serious about digital ethics.

The year was filled with stories (some surprising, some alarming, some downright nuts) about the downstream consequences of decisions about how we deal with data. Consider the following:

  • Seeking to prevent suicides, “Samaritans Radar” raises privacy concerns. In October 2014, the BBC reported that the Samaritans had launched an app that would monitor words and phrases such as “hate myself” and “depressed” on Twitter, and would notify users if any of the people they follow appeared to be suicidal. (A minimal sketch of this kind of keyword monitoring appears after this list.) While the app was developed to help people reach out to those in need, privacy advocates expressed concern that the information could be used to target and profile individuals without their consent. According to a petition filed on Change.org, the Samaritans app was monitoring approximately 900,000 Twitter accounts as of late October. By November 7, the app had been suspended in response to public feedback.
  • Facebook’s “Emotional Contagion” experiment provokes outrage about its methodology. In June 2014, Facebook’s Adam Kramer published a study in the Proceedings of the National Academy of Sciences revealing that, in its authors’ words, “emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness.” In other words, seeing negative stories on Facebook can make you sad. The experiment provoked outrage over the perceived lack of informed consent, the ethical repercussions of such a study, concern over appropriate peer review, privacy implications, and the precedent such a study might set for other research using digital data.
  • Uber knows when and where (and possibly with whom) you’ve spent the night. In March 2012, Uber posted, and later deleted, a blog post entitled “Rides of Glory,” which revealed patterns, by city, of Uber rides after “brief overnight weekend stays,” also known as the passenger version of the “Walk of Shame.” Uber was later criticized for allegedly demonstrating its “God View” at an industry event, showing attendees the precise location of a particular journalist without his knowledge, while a December 1, 2014 post on Talking Points Memo told the story of a job applicant who was allegedly shown individuals’ live travel information during an interview.
  • A teenager becomes an Internet celebrity, and a target, in one day. Alex Lee, a 16-year-old Target bagger, became a meme (@AlexFromTarget) and a celebrity within hours, based on a photo taken of him at work without his knowledge. He was invited to appear on The Ellen Show, and was also reported to have received death threats on social media.
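
Part of what made the Samaritans Radar story so unsettling is how little code this kind of monitoring takes. The sketch below flags public posts that contain watchwords; the phrase list echoes the BBC report, but the data shapes and logic are invented for illustration and are not the Samaritans’ actual implementation.

```python
# Hypothetical keyword monitor over a stream of public posts. Note what is
# absent: the authors of the posts are never asked whether they consent.

WATCH_PHRASES = ("hate myself", "depressed")  # illustrative, per the BBC report

def flag_concerning_posts(posts: list) -> list:
    """Return posts whose text contains any watched phrase (case-insensitive)."""
    flagged = []
    for post in posts:
        text = post["text"].lower()
        if any(phrase in text for phrase in WATCH_PHRASES):
            flagged.append(post)
    return flagged

stream = [
    {"author": "@a", "text": "Beautiful morning for a run"},
    {"author": "@b", "text": "honestly so depressed lately"},
]
for post in flag_concerning_posts(stream):
    # In the real app, it was the *followers* who were notified, not the
    # flagged author, which is exactly what privacy advocates objected to.
    print(f"alert: {post['author']} may need support")
```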

What these stories have in common is that they center on the ways organizations collect, analyze, store, steward, aggregate and use data, both actively and passively, as well as how they communicate about their intentions and actions. I’ve had dozens of related conversations this year with folks in business and academia, including, of course, my fellow board members at the Big Boulder Initiative, about just how we develop an ethics for digital data. One of the recurring themes, and frustrations, is just how amorphous it all is.

Earl Warren, former Chief Justice of the United States, once said, “In civilized life, law floats in a sea of ethics.” So my New Year’s resolution is to begin a process of filtering that sea so we can better understand its component elements. I’ll be starting that process in a document we’ll be publishing in the first quarter, and then in more detail in ongoing research on digital ethics. In the meantime, I wish you all a happy, safe and restful new year!

This post was written as part of the Dell Insight Partners program, which provides news and analysis about the evolving world of tech. To learn more about tech news and analysis visit TechPageOne. Dell sponsored this article, but the opinions are my own and don’t necessarily represent Dell’s positions or strategies.



2015: The Year of Data Strategy

I’m not generally a fan of annual predictions; they always remind me of a carnival in which you’re encouraged to “pay no attention to the man behind the curtain”; you almost never win the giant teddy bear. So I apologize in advance if you were hoping to hear that your vacuum cleaner will soon become sentient, or that Google is planning to acquire Yosemite National Park and turn it into an incubator for middle-schoolers.

But I am thinking a lot these days about the impact of data: big, small, synchronous, asynchronous, structured and unstructured. I’m thinking about how we take signals from across the business, make sense of them and act upon them at scale.

I’m thinking about the challenges inherent in taking these vast rivers of human expression—what we affectionately call social networks—and analyzing them in a way that organizations can understand and from which they can extract value. And I’m thinking a lot about how we do this ethically, in a way that drives business value, builds relationships and honors both the implicit and explicit expectations of our customers, partners and audiences.

So, instead of predictions, here are the topics I expect will keep us up at night in 2015. At Altimeter Group, we’re using a Watch-Plan-Act model to lay out what we think are the most important themes and priorities for the year. All of mine fit squarely into the “Plan” category for now, but it is still extremely important to monitor these trends closely to see how public opinion, case law and technology innovation evolve.

Data strategy is business-critical

What should organizations focus on? Big Data, the Internet of Things, Social Business, Digital Transformation, Mobile First, or a mix? Or should they just sit on the sidelines for now? In my opinion, the common thread of all these trends is data. We operate in organizations in which we no longer control the flow of information, and we’re frequently not the first to know some of the most important things about our customers, our products and our brand. Silos and incompatible technologies make things so much harder.

This is the year to sit down and really think through how we will approach data as a critical business asset.

Organize for insight

Say the word “data,” and thoughts go to IT, to analysts, to people whose job it is to process and/or analyze it. With big data (and the Internet of Things), that horse has left the barn. We can no longer afford to make data the province of siloed teams who don’t talk to each other. Want to understand the customer journey? You’re looking at social, mobile, email, web, CRM, BI, market research, supply chain and, soon, sensor data, at a minimum.

Organizing for data intelligence should be a top priority in 2015. It will require an unprecedented level of collaboration between business and IT to ensure that business context makes its way into big data initiatives, that technical innovation inspires “the art of the possible” in business, and that all of it happens rationally and at scale.
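
As a concrete illustration of what that collaboration has to produce, here is a minimal sketch that merges events from hypothetical siloed systems into a single customer timeline. The source names, record shapes and shared customer ID are assumptions for demonstration, not a reference architecture.

```python
from operator import itemgetter

# Hypothetical events from siloed systems, each keyed by a shared customer ID;
# in practice, establishing that shared key is much of the work.
social  = [{"customer": "c42", "ts": 3, "event": "mentioned brand on Twitter"}]
email   = [{"customer": "c42", "ts": 1, "event": "opened promo email"}]
crm     = [{"customer": "c42", "ts": 2, "event": "called support"}]
sensors = [{"customer": "c42", "ts": 4, "event": "wearable logged store visit"}]

def customer_journey(customer_id: str, *sources) -> list:
    """Merge one customer's events across all sources, ordered in time."""
    events = [e for src in sources for e in src if e["customer"] == customer_id]
    return sorted(events, key=itemgetter("ts"))

for event in customer_journey("c42", social, email, crm, sensors):
    print(event["ts"], event["event"])
```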

Digital ethics is a mandate

In 2014, we saw many examples of what happens when gray areas collide: the Facebook “Emotional Contagion” experiment, the recent Uber revelations, the Samaritans “Radar” app. The fact is, we as an industry have not yet truly clarified our positions on who owns our digital data, how and when it can be used, what “informed consent” really looks like, what privacy means, and how, as organizations, we intend to keep our digital spaces safe.

I do anticipate an escalation of these issues in 2015, as “the law of unintended consequences” collides with our increasingly fluid use of data. Organizations should examine their risks related to digital ethics, including:

  • how they act: specifically, where they get their data, what their analytics methodology is, and how they store, steward, aggregate and use the data
  • how they communicate disclosures related to the above
  • potential impacts on customer privacy, security and safety

I can’t emphasize enough the importance of charting a strategy for digital ethics now. 

That’s it for now. It’s going to be a tumultuous year, so let’s start it with a clear head: strategy, organization, ethics.

I look forward to discussing all of this with you throughout the remainder of 2014 and into 2015.

This post was written as part of the Dell Insight Partners program, which provides news and analysis about the evolving world of tech. To learn more about tech news and analysis visit TechPageOne. Dell sponsored this article, but the opinions are my own and don’t necessarily represent Dell’s positions or strategies.

