Facebook’s announcement of its Open Graph initiative and release of a variety of easy-to-use Social plugins has set off another hot debate around privacy, with even MoveOn.org and NY Senator Chuck Schumer getting involved.
From a personal perspective, I’ve always understood the tradeoff that comes with using "free" services like Facebook, Google, Twitter, Glue, et al.: EVERYTHING I do is being tracked and recorded, to be mined in real-time and/or at some point in the future to "target" me with "relevant" messages, products and services.
Facebook is catching a lot of flak right now, perhaps most humorously on Twitter, where some people are announcing they’ve closed their Facebook accounts without the slightest hint of irony. As if their activity on Twitter—the people and brands they follow and engage with; the links they shorten and retweet; the pictures they post; the Foursquare check-ins—all of it tied to an email address, isn’t being tracked, recorded and mined for purposes that are mostly unknown right now, but likely as "controversial" as anything Facebook has ever announced.
I’ve always assumed this was the case, even way back in the days of CompuServe and AOL, both of which were PAID services, and we all know it to be true today in the illusory era of FREE. So in that light, Facebook’s enabling people to "Like" everything on the Internet and feeding that data back to publishers and marketers doesn’t surprise me at all. It’s actually pretty brilliant on their part, and I’m a bit surprised Google didn’t beat them to it.
What bothers me, though, is HOW they’re going about it, repeating their mistake with Beacon by choosing an opaque opt-out implementation instead of trusting in the value of the service they’re offering and allowing their users to opt in.
Beacon, which launched in late 2007, basically tracked the activity of Facebook members on certain partner sites and then posted an item in users’ news feeds when they purchased something.
Facebook, however, did not adequately communicate to members that this information was being collected. Users were irked when purchases made on other Web sites showed up on their news feeds ("John bought a diamond ring from XYZ"), prompting MoveOn.org to suggest that Facebook had ruined Christmas.
Facebook tweaked the service and apologized, but the snafu still led to a class-action lawsuit filed by disgruntled users.
Facebook shut down Beacon two years after it launched, and the lawsuit was finally settled last month, but they apparently didn’t learn anything from it.
Anyone who’s worked in email marketing has undoubtedly faced a similar dilemma when launching a new e-newsletter, especially when working with a large, unsegmented database. (Facebook, of course, has the kind of segmentation marketers dream of!) Do you take the shortcut and add the entire list, perhaps under the guise of a "trial subscription", allowing subscribers to opt out? Or do you take the time to build the list organically via an opt-in campaign?
The temptation to choose the faster opt-out route is hard to resist, and the rationalizations for why it’s legitimate are legion, but in most cases, the value of that list will depreciate faster than a brand new Yugo.
So much is made of "trust" these days and its value in engaging one-on-one with readers via social media, and yet that same sense of trust is rarely a consideration in other areas of marketing where readers are viewed as consumers and aggregated bits of data.
The opt-out norm in Facebook—and on many other sites—is not in the better interest of people; it’s in the better interest of companies…
Many of you are playing with Facebook’s data. Others of you wish to be. The Social APIs are exciting and there are so many possibilities. Facebook has one of the most fascinating sets of Big Data out there. It reveals traces of people’s behavior around the globe and provides the richest articulated social network out there. But you’re also playing with fire. Much of the data that is publicly accessible was not meant for you to be chomping away at. And distinguishing between what should be publicized and what shouldn’t be is impossible. People are engaging with Facebook as individuals and in small groups, but the aggregate of their behavior is removed from that context. People are delighted by the information about their Friends that provides better context, but completely unaware of how their behaviors shape what their Friends see. This creates challenging ethical questions that are not going to be easy to untangle.
We’re testing Facebook’s "Like" button, but Digital Book World doesn’t have its own Facebook page primarily because of the issue of context that Boyd notes. While I agree that it’s "a potential gold mine" for publishers, and some of my colleagues use it as a public platform, I’m still grappling with the ethical questions around digging into that gold mine and likely will be for a long time.
LinkedIn, on the other hand, has a clearly professional context, and we take full advantage of it because of that.
As technology continues to evolve at a rapid pace, putting heaping amounts of data on our plates and offering a variety of utensils to slice, dice and (ideally) analyze it, Boyd’s words of caution should be taken very seriously by every publisher who sees only opportunity in that gold mine, and not the potential dangers.
The principles of "permission marketing" haven’t changed just because technology has introduced shortcuts. If we’re truly serious about interacting and engaging with our readers, and not continuing to depend on third-party intermediaries for data, we need to think twice about how much we "like" what Facebook has to offer.