Tarleton Gillespie

Guardians of the Web

Air Date: April 21, 2018

Microsoft Research Principal Researcher and Cornell Professor of Communication Tarleton Gillespie discusses his book “Custodians of the Internet.”


HEFFNER: I’m Alexander Heffner, your host on The Open Mind. Zeynep Tufekci, contributing opinion writer for the New York Times, author of “Twitter and Tear Gas,” whom we recently hosted, praises the insights of our guest today and his forthcoming book, “Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media.” This timely and important book deftly reveals the factors that shape social media and thus our world. Clear-eyed and incisive, it’s a must-read for anyone interested in the influence of platforms, the forces that structure this influence, and crucially, how we move forward. The author is Tarleton Gillespie. He’s principal researcher at Microsoft Research New England, and an adjunct associate professor in the department of communication at Cornell University. Today, Gillespie and I will continue what has become a frequent topic of conversation on The Open Mind, the ever-perilous anti-social media complex and its learning curve, or downright recklessness, in failing to address abuse that has inflamed not only its platforms but society, Tarleton, in what I would say is a fatal or potentially fatal conflagration. Welcome.

GILLESPIE: Thank you, thank you for the invitation.

HEFFNER: Thanks for being here. In your book, you quote then-CEO Dick Costolo, February 2015: “We suck at dealing with abuse and trolls on the platform and we’ve sucked at it for years. It’s no secret, and the rest of the world talks about it every day. We’re going to start kicking these people off right and left and making sure that when they issue their ridiculous attacks, nobody hears them.” February 2015. Spring, 2018, present day.

GILLESPIE: Right.

HEFFNER: The problem has only exploded and Dick Costolo not so long after that statement was ousted from leadership of Twitter. Can you give us hope?

GILLESPIE: Maybe. Maybe. I think it’s a, that quote’s a really good reminder that as we have these discussions now, whether it’s about fake news, propaganda, trolling, terrorism, and it feels like the last year has sort of popped open this question, this question’s been around for a while. And as Costolo notes, they’ve been dealing with it for a while. Now, whether you think they were dealing with it well or not for a long time, I think it’s important to remember as we go through these series of controversies, whether it’s the contemporary one right now about fake news, whether it’s the harassment that he was talking about in 2015, whether it was images of breast-feeding, whether it was self-harm, we’ve had ten years of trying to imagine what it looks like to have a social media platform that in principle lives on this idea of having an open space for user participation, and yet also has to grapple with this question of what belongs there and what doesn’t. These are immensely difficult questions. The fact that it’s exploded into public consciousness is a good thing, so maybe that’s the hope.

HEFFNER: That’s a hope. When we had Biz Stone, one of the founders of Twitter, and I think I’ve relayed this story to our viewers before, I asked him, do you ever foresee a need to impose a civic consciousness on your users? And he said to me, what do you mean? Which is, come to think of it, kind of a typical Silicon Valley lack of social or moral compass. At the time I was not so definitive in thinking that was the reality. I said, to be a good Samaritan. And he said no. I don’t think you can force that on people. You have to make it feel like they’re winning. And I thought to myself that’s exactly why Twitter was the natural host of Donald Trump and his ad hominem and invective and all of that. Because there was no social compass or consciousness in the leadership from the top, but that story to me is recognizing Twitter’s failure, and broadly social media’s failure, to adopt a system that gives goodness, decency, currency.

GILLESPIE: It’s a great way to ask the question, do they have a civic consciousness, and I would flip the question around and say they did offer civic consciousness, it just wasn’t the one you were looking for. So, some of the social media platforms, including Twitter, were very proud of this idea that they were, what was the quote, the free speech wing of the free speech party. That is a civic consciousness, but it’s a civic consciousness that believed in a certain notion that came out of the early ideas of the web that if you make a space and you don’t guide what people say, that’s gonna produce either a kind of cornucopia, right, everything could be said or a very old idea about the marketplace of ideas, right? We can debate, we can say the good things and the bad things and somehow the good things will win out. That was the idea that animated some of those principles, right? Early ideas about virtual community, early ideas about the web. The problem was it didn’t foresee that as they built what they presented as an open space, an open space to talk, an open space to share, an open space to connect, that they were building a tool that was making it possible for us to tactically persuade each other. Sometimes that’s a great thing, right? I want to tell you something that matters to me. Sometimes as we can see it’s a really problematic thing. So now we have to have a different civic consciousness, but there always was one, and the pretense was that somehow there wasn’t one. That there was no guidance, it was free and open. I think that was always a misrepresentation.

HEFFNER: The operating philosophy from the outset was sunlight is the best disinfectant. Do they recognize now that when there is just an onslaught of fake information, misleading information, misinformation, disinformation, you can’t really handicap that? If 90 percent of what is disseminated is malevolent, how can the other 10 percent compete?

GILLESPIE: I think that’s a great question and I think that one of the things that’s really difficult is that it’s not quite a 90-10, right? We spend a lot of time on Facebook and Twitter and Instagram sharing perfectly legitimate content. So it becomes this really interesting challenge about, and it’s always been true, how do you create a platform that is going to be hands-off in the sense of it’s not guiding what you should say, but is somehow recognizing these kinds of currents inside that can be taken advantage of. I think these early platforms, when they did approach content moderation, and all of them were doing it from the beginning, the nature of the problem was only of a certain size. So if you imagine that you were gonna have a discussion space, and you knew that somebody was gonna try to turn it into a porn site, and your job was to get rid of the porn, and it was a clear violation of the rules, right, that was just a question of spotting it, it’s kind of a whack-a-mole problem. Now we’ve got a much different problem which says we’re not worried about actors who know perfectly well that they’re violating the rules and they’re just trying to, you know, be vandals. What we’re dealing with now is people trying to take advantage of the way the system works, insert information or participation that looks legitimate, this is the fake news headline, this is the thing that seems like a political opinion from a genuine actor when in fact it’s a bot, and it’s exactly the stuff that these systems are designed to circulate, right? They’re designed to take interesting, unexpected contributions and get them out there, make them go viral, get engagement, and so the very same things that should circulate, like a thoughtful political idea, the thing that looks just like it but is nefarious or propagandistic slips right into the system. That’s a much harder thing to discern. The other problem is when it’s a case of harassment or offense, there’s a victim, right? I was offended, I was attacked. And the platform can act on their behalf. We now have a problem where I may never see a fake news headline. I may never forward a fake news headline. I may never participate in it. But the fact that that fake news is circulating harms me. That’s a public harm, right? So now whether or not I had any role in it, whether or not I received that information or had any role in circulating it, there’s still a harm to me because it’s harming the democratic process, it’s harming the public discussion. Public harms are very difficult to deal with. They require governance.

HEFFNER: Well, that’s why the title of your book is brilliant, tragically, because they are custodians and not guardians. They’re not guardians of the internet. We need guardians of the internet.

GILLESPIE: Right. I, I like the word because I think that there was a very early interview I had with someone working at YouTube and they said we wish we could just sweep up and turn the lights off. So a custodian in the sense of like, there’s just a management process that needs to happen. There’s another definition of custodian like a legal custodian. It’s someone,

HEFFNER: And a reputation associated with being a custodian.

GILLESPIE: That’s right. And it’s someone’s job, not, I don’t mean legal custodian like of a child but a custodian that says we have to manage a space where difficult issues can be contested.

HEFFNER: Right.

GILLESPIE: There has to be a process where we grapple with those kinds of competing values. That kind of custodianship might be very different than we sweep up the mess,

HEFFNER: Right.

GILLESPIE: Which is how we’ve talked about moderation up till about a year ago,

HEFFNER: Right.

GILLESPIE: Up until 20…

HEFFNER: Well and you were saying to me that there is systemically something wrong with the current philosophy or hardware within the consciousness of the moderators today. Let’s focus on solutions. You point out that there are innocuous things that were flagged in the infancy of Facebook and Twitter, a Facebook user’s photo of her breastfeeding, for instance. We still see pretty chronically that they will flag things that are innocuous and not flag things that are so obviously a source of harassment or disinformation, and that’s still the problem.

GILLESPIE: Right. And we approach this, every time we get worked up, and I don’t mean that dismissively. Every time we get focused on a problem that we see on some of these platforms, in some ways the nature of that problem drives how we talk about it. So right now it’s why didn’t you do anything about Russian manipulation, fake news, terrorist content? You didn’t do enough. There are other points in our discussion where the worry has been you’ve done too much, right? You’ve been too conservative; you’ve taken things off, or a kind of inconsistency. So somehow we have to address this where we don’t just solve problem A with the shape of problem A. We have to think about moderation as a whole project, and platforms, you know, I acknowledge in the book this is a very hard thing to do, and it’s a problem that we faced with newspapers and television. The nature of the problem is different, but the question about whether it’s the commercial intermediary’s job to decide what should and shouldn’t be there, and if it is, how do they do it, and if it isn’t, you know, who does it instead? Those questions persist. We have to think about moderation in a way that’s gonna recognize that they’re gonna both have to identify the worst of the worst. That’s one job. The second is they’re gonna draw these distinctions between something that you might think is troubling, I might think is not troubling, someone else thinks is great, and they’re gonna be mediating that, and then this third category which I think has gotten very hard which is the stuff that looks exactly like the thing they most want to circulate. I think that’s the hardest one for them ‘cause it fits into their business model, right, a viral story, an exciting news bit, an enthusiastic voice.

HEFFNER: Problematically,

GILLESPIE: Yeah.

HEFFNER: Being anonymous, anonymity, is their business model. The way that they sold ads to people we couldn’t identify, later associated with intelligence operations, information warfare. So the commercial impetus is for them to sell ads irrespective of geographic origin, irrespective of the entity buying the ads, and no one seems to be challenging, within the shareholding community of these companies and the larger ecosystem, that there is something wrong with the hardware.

GILLESPIE: Right. I think that’s probably where, I mean we can talk about more moderators and more sensitive moderators and consulting, there are things that they could do, and many of the platforms are moving in that direction. But I think the structural problem is more pressing than the one you identify. One of the things that’s fascinating to me is that if you think about platforms as kind of starting with the ideal of the web and trying to be something better, right, all of that, the best of participation, the best of content, we’re gonna pull it all together and make it somewhere where it’s easy, easy to find, and exciting. They’ve spent ten years innovating in really powerful ways how to do that, how to get things in front of you, how to connect you to your friends. There was another aspect of what the web promised, and that was not just transparency but kind of radical data transparency. I can know the system, I can understand the participation, I can build structures such that I manage that. And they haven’t done much on that side. I think it would be incredible to imagine platforms saying okay, we’re gonna have advertising. That’s an important part of our business. But we know an immense amount about those advertisers. Where did they put that information? Who were they targeting? That could belong to users. I should be able to hover over an ad and find out exactly where they sent it, right? And I don’t just mean political ads, I mean all of them. Similarly, they rely on us to flag content, right, that’s most of how platforms find out where the violations are. That’s all data we’re providing for them, but that data never comes back to us. That could be made transparent. Not just yes, we’ll report on a yearly basis that we took down X numbers, but a radical commitment to that. We have data, we the platforms have data. That data belongs to us. We could have a new level of transparency about who advertises, who sends ads in what direction, who were they trying to target and make that all visible. That would still allow an advertising-based system, it would still allow all the sort of network effects the platforms benefit from, but it would take an early commitment of the web that has somehow been dropped.

HEFFNER: Abandoned.

GILLESPIE: Yeah.

HEFFNER: Well it’s interesting you mention that. I was gonna say we went from web 2.0, really in terms of our sophistication, moral character, and withdrew to web 1.0,

GILLESPIE: Hmm.

HEFFNER: And you’ve been working on this for a while so I wonder how you consider the fact that Facebook and Twitter gave a platform not only to bombastic rhetoric but to extremists. In the earlier incarnation of the web, there were designated safe spaces for extremists, but Netscape and Internet Explorer, to give props to your New England Microsoft, were not in any way promoting their agendas. So you took an earlier web that was actually much more transparent and you gave an open megaphone that was then monetized to extremists, and to intelligence services. Why can’t we go back to that 1990s web? We need to.

GILLESPIE: I mean going back is very hard. But I would point out that if we had, if somehow platforms had never developed, and the idea of the web, you put the information on the web and you have a space for it and search engines help people find it and browsers display it for you, if somehow we were still living with just the more sophisticated version of that, if the idea of a platform that coalesced that information never developed, we’d still be having a debate about, you know, how ISIS is recruiting or how, you know, hate speech is gathering, and it would be a question about the web, right? What about these dark pockets of activity? Sure, you know, it’s harder to get to it, it’s not showing up in your feed. We would have a different debate because the structure would be different. So every time we develop a medium and a medium starts to take on public importance the way the web did, the way social media did, these questions, fundamental questions about kind of public responsibility, harm versus benefit, openness versus oversight, these questions never go away. One issue is that we have to understand it, understand really how it works, not the way newspapers worked or broadcasting worked but this says something special about how this is organized. And then ask the question about who has taken up positions of power to shape that solution, right? And it was one thing when it was sort of, like, websites, and there were a million of ‘em. Now we do have a small number of organizations that are making immensely important decisions, and that does change the game.

HEFFNER: Ultimately, you know, it’s about these private companies supporting productive as opposed to counter-productive speech. Where, practically speaking,

GILLESPIE: Yeah.

HEFFNER: Can you make inroads?

GILLESPIE: I think when you look historically at the early days of previous media, these questions take a long time. And I’m sure at the time they felt really potent, they felt like problems unanswered. Part of the process of grappling with these really tricky questions is: how do I allow vibrant speech and also protect someone from terrorist recruiting? And we’ve envisioned a system that said we just put a priority on anyone can contribute, which is very different than how do you get on television, how do you get in the newspapers, right? So we made this radical commitment to nearly everyone can speak, it’s very easy, the barriers are very low, and in fact there are these platforms that help stuff circulate. We then have to ask the question again. We have to ask the question that says where are we gonna imagine the balance? What are we willing to accept, which might be we’re faced with, you know, offensive speech, or we’re not. It might be that there are environments in which you can find it, and environments where you shouldn’t have to. It may be tools where every user can kind of, you know, make their little filters and say what they want to see and not see. But I think what we’re seeing in the last year or two is that the offer the platforms made, which was we’ll be hands off, this stuff will come to you, and by the way we’ll moderate but we won’t, like, worry you about that, has proven unacceptable. There’s now a kind of implicit contract from the users and from the public that’s very different than the terms of service that they offered. We’re asking them to step in and play a stronger role. The thing I don’t want to be left with is that they should just take on that role but do it behind the scenes. I think that’s still a mistake, right?

HEFFNER: Right, yeah.

GILLESPIE: I think we’re grappling with the hardest questions, so it can’t be done on our behalf.

HEFFNER: Right, and that is the way it’s being done right now, unfortunately. And I can’t help but recognize the historical context, which is that Vladimir Putin, who our intelligence services have conclusively said was responsible for an operation to sow discord among Americans in their internet consumption, was on television recently blaming Jews, Russian Jews, for the disinformation. To be a prominent American Jew and to not respond to Putin’s statement, given the culpability that has been identified here, it’s mind-boggling to me. There is no innocence here, on the part of Putin certainly, or on the part of Zuckerberg. Own up to that. I’m sorry. For those of us who’ve grown up with digital technology as an analog to our daily lives, I couldn’t be more passionate about this subject, so.

GILLESPIE: Right, right.

HEFFNER: Forgive me.

GILLESPIE: No, I mean I think that the right question is the question of responsibility. And that responsibility, sometimes it’s a question about the platform itself, from the CEO down, sometimes it’s a question of responsibility of manipulative actors who are taking advantage of that information environment. Sometimes it’s a responsibility of all of us to recognize the system we’ve committed to. My focus has been on the platforms in part because I do think that the intentions have been good, but the recognition of the system that they’ve built and what it facilitates has been slow in coming.

HEFFNER: Right, and if, in the minutes we have remaining,

GILLESPIE: Yeah.

HEFFNER: We’ve focused in large measure on Twitter, then we got to Facebook. Let’s end with YouTube and Reddit specifically, which is one of the natural breeding grounds of conspiracy theories and some genuine information sharing and gathering. But Zeynep argues, and I want you to tell us about Reddit too, that the way YouTube is designed, even if what you’re watching is legitimate, you’re gonna be fed something illegitimate.

GILLESPIE: Right, right. The thing is, when I make these points about this being a long issue, we’ve always grappled with the kind of responsibility-for-moderation question. Every time we have a new medium that works a different way, and I mean like technically works a different way, economically works a different way, it’s kind of structured a different way, then the way the problem comes up is different. So the thing that Zeynep is pointing to, I think absolutely correctly, is that many of these platforms have designed algorithms that are eager to serve you up the next thing to watch, the next post to read, the next photo to look at. That is a fundamental part of their offer. And in a way when we say content moderation, we should mean not just deleting posts and kicking people off but also this kind of algorithmic selection process, which does moderate for me. I may never see your post, it was there, but if I didn’t see it, that was a subtle selection process. If those tools are built to privilege engagement, and that’s in some ways a code word, right? From the business side, are you on there, are you clicking, are you spending more time than ever? That’s interesting for advertisers and data collection. But we now recognize the kind of things that are engaging, right? Genuine news, important discussion, hilarious jokes, funny videos, but also clickbait, right, conspiracy theories, slight exaggeration. So I go to watch a video and it’s a legitimate piece of news, and it says based on the thing you were looking at, what might be the next thing you watch? What’s been exciting? What has been popular? What’s been watched a bunch of times? Those signals of popularity are picking up both things. They’re picking up something that people found genuinely interesting and informative, and they’re picking up stuff that kind of tickled that fancy. And if what’s powered on engagement produces recommendations, then we’re seeing the kind of strange side effects of that…

HEFFNER: And Reddit, quickly.

GILLESPIE: Reddit, I mean, so each of these platforms has a slightly different solution. Reddit handed a lot of moderation to the people who administer each group, and for some of those groups it’s been very effective. Those groups can set their own rules, they can set stricter rules than Reddit. The problem is that Reddit also has a popularity algorithm. So you’re in some discussion, you’ve got a nice set of community norms, everyone kind of agrees on what should be there and what shouldn’t, one of those posts hits the front page and a hundred new people show up and they’re not part of that community, they haven’t agreed upon that entrance, so that algorithm is feeding, instead of feeding strange material after a good one, it’s feeding people who haven’t embraced the norms into those communities. So the algorithm can cause a problem in both directions. And Reddit for a long time got to sort of wipe its hands and say well, the moderators are gonna handle it, which is great except a group that says we’re fine distributing celebrity nudes and we’re all okay with it, Reddit then had to say oh, we actually have to not tolerate that. That was a hard transition for them.

HEFFNER: Tarleton, I, thank you for your time. Let’s really hope that there are those custodians and guardians, custodians of the internet and let’s pray for some guardians too. Angels.

GILLESPIE: I’m with you. Thanks for the conversation.

HEFFNER: Thank you.

GILLESPIE: Yeah.

HEFFNER: And thanks to you in the audience. I hope you join us again next time for a thoughtful excursion into the world of ideas. Until then, keep an open mind. Please visit The Open Mind website at Thirteen.org/OpenMind to view this program online or to access over 1,500 other interviews. And do check us out on Twitter and Facebook @OpenMindTV for updates on future programming.