We the…Bots and Trolls
Air Date: August 5, 2017
HEFFNER: I’m Alexander Heffner, your host on The Open Mind. Of bots and trolls: these are the online predators who so penetrate and sway public perception today. At the outset, let’s establish definitions of these most omnipresent phenomena. According to Merriam-Webster, a troll is a person who intentionally antagonizes others online by posting “inflammatory, irrelevant, or offensive comments, or other disruptive content.” Now let me add that a troll doesn’t have to be a person, which we’ll come to in a second. A bot, according to the same Merriam-Webster definition, is a computer program that performs an automatic, repetitive task. Put simply, today bots are enormously amplifying the effect of trolls, be it harassment or bigotry or the overall ugliness of public discourse. To probe more deeply today is a leading next-generation expert: Oxford Internet Institute researcher Nick Monaco, author of the recently published working paper Computational Propaganda in Taiwan: Where Digital Democracy Meets Automated Autocracy, who has worked for Google and its think tank Jigsaw. The weaponization of bots during the presidential campaign of 2016 was overwhelming, but contrary to popular convention, bots have not calmed down since the election. And as Nick Monaco will testify, they continue to interfere in our democracy. Nick, it’s a pleasure to have you here.
MONACO: Happy to be here. Thanks for having me.
HEFFNER: We’ve had some exchange off camera, because I really wanted to learn as much as I could about this phenomenon before we did our show. So we sat in a Starbucks and I swept through some Twitter IDs, a number of handles, and I asked you quite simply: bot or not? Was this a bot or was this not? And you alerted me to the website that exists where you can actually put in the name of a handle or a user and it will spit out whether it thinks it’s a bot or not, which I think will be useful information for our audience. But the other thing that I asked you very directly is, you know, why do we have bots? And I wanted to begin there. What are bots really designed to do on social media?
MONACO: Sure. That’s a great question, and I think it’s a question that maybe not enough people are asking; it gets to the root of the problem. So I think a logical place to begin is at the beginning, with the word bot, obviously a back-formation of the word robot, which etymologically stems from a Czech word, robota, meaning forced labor or compulsory service. So already, etymologically, inherently, we have this idea of a robot or a bot being something that’s carrying out a human’s will. It’s an entity that’s performing a service, but not of its own mind. Bot has been used quite commonly in the software world for decades to talk about any computer program whatsoever. You can build a bot to solve a puzzle, or build a bot to figure out how much to tip on a bill. This is common parlance in the software world. More recently, as you highlighted in your intro, when you hear the word bot we’re talking mostly about social bots, which are software programs posing as humans on social media…
HEFFNER: Software programs posing as humans on social media.
MONACO: [LAUGHS] It’s what’s called a code-to-code connection. Basically, you either purchase software, or if you’re savvy enough to do it yourself, you write software to connect to, let’s say, Twitter’s API, though it could be any social media platform’s API. That’s an application programming interface, which is essentially the behind-the-scenes DNA of the platform. What that enables you to do is directly access the platform and perform tasks like tweeting or interacting with users through code. So this code-to-code connection is the key to how bots work. That’s the technical way they’re set up. And today I think I’m here mostly to talk about political bots, which are slowly…
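The code-to-code connection Monaco describes can be sketched in a few lines of Python. This is an illustrative stand-in, not real Twitter API code: PlatformClient and its post method are hypothetical names mimicking what an actual API wrapper library would expose, and run_bot shows that a bot is simply a program invoking those calls with no human at the keyboard.

```python
class PlatformClient:
    """Hypothetical stand-in for a social platform's API wrapper.
    A real client would make authenticated HTTP requests to the
    platform; this one just records the calls it receives."""

    def __init__(self):
        self.timeline = []

    def post(self, text):
        """Publish one message and return a minimal receipt."""
        self.timeline.append(text)
        return {"id": len(self.timeline), "text": text}


def run_bot(client, messages):
    """A minimal bot: walk a list of prepared messages and publish
    each one through the API, entirely through code."""
    return [client.post(m) for m in messages]


client = PlatformClient()
receipts = run_bot(client, ["hello world", "scheduled post #2"])
print(client.timeline)  # ['hello world', 'scheduled post #2']
```

A real bot would swap PlatformClient for a library that speaks the platform’s actual API, but the shape of the program, code calling code, is the same.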
HEFFNER: Which is your specialty.
MONACO: [LAUGHS] Which is my specialty, and which is probably the epicenter of the research that ComProp is interested in. ComProp is of course the Computational Propaganda project, a joint project between the University of Washington and the University of Oxford, with some funding from the European Research Council. We just released nine case studies, one of which, my Taiwan case study, you mentioned in your intro, looking at computational propaganda all around the world. And bots are a big part of that. So political bots are social bots: they’re bots on social media that are interacting with users, and they’re deployed to effect some sort of political goal. Usually it’s persuasion, but it could be heavy messaging, harassment, any number of things. So I think there are a few types of political bots that are worth mentioning. A lot of experts are working on taxonomies that classify, quite granularly, different types of political bots on social media.
HEFFNER: There are really factories of bot creation that are using clip art and other imagery to appear as if these are real accounts, right?
MONACO: Exactly. Or stealing profile pictures from other Twitter profiles or Facebook profiles, or stock photos from the internet, or even taking them themselves, um…
HEFFNER: Before you elaborate on the kinds of bots: who is doing that? Who is building them? What countries or bot makers are developing warehouses of these bots? Where are they coming from?
MONACO: So that’s a great question; I’m glad you asked it. The diversity of bots online matches the diversity of their makers, I would say. There is an emerging economy to buy these kinds of things, so you can buy bots en masse, to get fake followers or to do whatever you want, quite cheaply. Fiverr.com, which is ‘fiver’ spelled with two Rs, is one site where you can buy bots, but there are several sites on and off the dark web where you can buy them. So that’s one source. There are also governments that have troll centers. There was a good piece in The Guardian, I think in November of 2016, that listed probably 10 to 12 countries and the troll centers they have, which have different goals ranging from negative to positive. So some sources of bot creation are governmental, and some are privately contracted for governments. Ecuador is a notable case of that: there’s a troll center in Ecuador that was uncovered a few years ago. But really it can be anyone. It can be as simple as an individual who has a political message, or maybe just an artistic or creative message that they want to send out, making a bot, or making ten thousand bots if they have the energy and know-how to do it.
HEFFNER: The metadata and IP addresses suggest that during the 2016 campaign, and currently, many of those political bots were originating from former Soviet states or Russia. Is that documented, or is that not proven yet?
MONACO: Uh I’m unsure about that research. I actually haven’t looked into that specifically. That’s not,
HEFFNER: I mean, the other thing that was identified alongside the fake bots were fake news articles, where it would be abcnews.com.ru. There were instances identified of fake news stories that…
MONACO: Absolutely. Um. There was obviously the fake news factory in Macedonia that was going on. As far as Russia goes um, we talked a little bit the other day about uh, Macron and the French election.
MONACO: There was an attempted hack on Macron in March that was reported in April, and a cybersecurity firm determined that it was APT28, or Fancy Bear, which is the same Russian hacking collective that hacked the DNC. And then he was hacked, of course, later, infamously, on the Friday before the media blackout before the election. I mentioned this when we were talking the other day as well: there was an Intercept article mentioning that in this successful Macron Leaks campaign there were falsified documents, and the metadata for some of those documents showed that they had been edited on computers whose language settings used Cyrillic characters, of which obviously Russian is a possibility. So there are signs pointing in that direction, certainly.
HEFFNER: You were going to tell us and I’m sorry that I interrupted.
MONACO: No, no.
HEFFNER: The kinds of bots. So there are, according to this New York Times story, Hunter Bots, Honey Pot Bots, and Comfort Bots; that was the way they characterized them. And together they showed how robotics can expose racists. I mean, there are some bots that are designed to perform functions that are correcting, or corrective of, the problematic bots. And certainly there are bots designed so that you are engaging with someone you believe to be real but who is actually fake.
HEFFNER: So I don’t know if you use these same labels. Hunter bots, honey, honey…
MONACO: I, I don’t personally.
HEFFNER: … pot bots and comfort bots. [LAUGHS] But how do you differentiate among the bots?
MONACO: Yeah. This is the fun part about it being kind of the data wild west right now, is that everyone’s creating their own terms and no one knows what’s gonna become standardized so.
HEFFNER: And which do you use?
MONACO: Uh, so the ones I wanted to highlight today are amplifiers and dampeners; transparency and protest bots; intel-gathering bots; harassment bots; and then a fifth, kind of special case, which is more of a cybersecurity thing: DDoS bots and botnets. So amplifiers are probably what’s getting the most media coverage, and I think that’s proper. Amplifiers, quite logically, are just megaphoning messages. So if we’re talking about Twitter, it’s quite easy to take a hashtag, say MAGA, Make America Great Again, and create a bot that tweets it a thousand times a minute. Or create ten thousand bots that tweet it ten thousand times an hour, say. And quite quickly you can make that into a top-trending topic within America, or within the world, if you want. You’d probably have to be a little bit smarter [LAUGHS] about it than the way I’ve quite naively put it just now, because Twitter will catch that and probably stop it.
HEFFNER: With the amplifier bot, how is Twitter intervening when its bottom line is at stake? If you’re monetizing a campaign around a hashtag, how are they weeding out the ad buyers who are engaging bots?
MONACO: Yeah, I mean, Twitter’s policy on bots is kind of a gray area right now, I’d say, in general. Formally they state that they don’t approve of automation, but I think the reality is a lot more wishy-washy. We’re going to be focusing today, and already have focused, on a lot of negative bots, but there are good bots out there, what I like to call benign bots or creative bots. These are legion on Twitter. There are all kinds of artistic and creative bots that exist that I think a lot of people really love, and I think Twitter loves. You know, I’ve never heard of a case of Twitter knocking down a transparently automated artistic bot. They do intervene; I’ve seen them suspend verified Russian troll accounts, Russian bot accounts. But it’s a gray area when Twitter decides to intervene. It seems for now that they’re waiting for users to flag things so that they can look into them.
HEFFNER: When we had Jonathan Greenblatt here, who leads the Anti-Defamation League, there was a wildfire of harassment. And I think that Twitter has actually, to some extent, and you were agreeing with this, calmed that down: intervened in those instances, been more responsive to harassment claims, and accounts have been closed as a consequence. But when you think of amplifier bots, you know, I asked you very directly: if you can tell me that this account is fake or a bot, why is it still here? And your answer was very interesting.
MONACO: So there’s kind of an arms race between detection and design going on with bots. Around 2010, I’d say, most bots had the Twitter egg, which means they had no profile photo. Twitter has recently done away with the Twitter egg, which I find to be pretty lamentable. But yeah, they had no profile picture and no time or location info. They’d tweet, you know, 70 times a minute. They’d have mostly links in their tweets. They’d be pretty easy to spot. As time went on, bot makers figured out they’d get detected by, say, tweeting over a certain threshold within one minute or within some time frame, or for not having a profile photo. So they’d start adding a little more sophistication to their design. This iterates between detection and design, and we’re at the point now where well-designed bots fly under the radar quite well. If you have a network of 10,000 bots that are all doing small, subtle things to promote a message, it might be hard to figure out that they are bots. They’re not all tweeting hashtags; they might be retweeting content, they might be liking content, whatever the platform allows them to do. But even through automation it’s difficult to figure out if an account is a bot or not. With the Twitter API, as I was telling you earlier, there is more granular data on users; you can figure out just a little bit more info than you could by eyeballing it yourself. So you can use that data to try to classify the account through machine learning, which is what Bot or Not does, the website that you mentioned earlier, which also has an API that you can implement to try to classify users on your own. But this is a big problem for tech companies and for individuals: how do you classify a user as human or bot? Detection’s difficult. And if you do classify it as a bot, how can you be 100 percent sure that you’re right?
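The feature-based classification Monaco describes, the approach behind tools like Bot or Not, can be sketched with a toy scorer. A real system trains a machine learning model on labeled accounts; the feature names and hand-picked weights below are illustrative assumptions, not any actual model.

```python
def extract_features(account):
    """Turn raw account data into numeric signals in [0, 1]."""
    return {
        "no_photo": 0.0 if account.get("has_photo") else 1.0,
        "high_rate": 1.0 if account.get("tweets_per_hour", 0) > 50 else 0.0,
        "link_heavy": account.get("link_fraction", 0.0),  # share of tweets that are bare links
        "young_account": 1.0 if account.get("age_days", 10_000) < 30 else 0.0,
    }

# Hand-picked illustrative weights; a trained classifier would learn these.
WEIGHTS = {"no_photo": 0.30, "high_rate": 0.35, "link_heavy": 0.20, "young_account": 0.15}

def bot_score(account):
    """Weighted sum in [0, 1]; higher means more bot-like."""
    feats = extract_features(account)
    return sum(WEIGHTS[k] * v for k, v in feats.items())

obvious_bot = {"has_photo": False, "tweets_per_hour": 70, "link_fraction": 0.9, "age_days": 5}
likely_human = {"has_photo": True, "tweets_per_hour": 2, "link_fraction": 0.1, "age_days": 2000}
print(bot_score(obvious_bot), bot_score(likely_human))  # roughly 0.98 vs 0.02
```

The uncertainty Monaco raises lives in the middle of that scale: a score near 0.98 is easy to act on, but a subtle bot might score 0.55, which is exactly why no one can be 100 percent sure.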
HEFFNER: Well, one way would be to look at some of these accounts that are impersonations, that have their location as Florida. And then you plug it in, and I was asking you if there was any way we can tell, based on the IP address, that this account is not generated in Florida.
MONACO: Yeah, so sometimes there are tell-tale signs. If you get an account in Florida that’s tweeting during, you know, Russian working times, the Russian 9-to-5, and consistently doing that, that might be a good clue. There’s stuff like that, for sure.
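The working-hours heuristic is easy to make concrete. The sketch below is an illustrative check, not a production detector: it takes the UTC hours of an account’s posts and flags accounts that post almost exclusively during a Moscow business day (Moscow is UTC+3, so 9-to-5 local is roughly 06:00 to 14:00 UTC). The function names and threshold are assumptions for illustration.

```python
def fraction_in_window(utc_hours, start, end):
    """Fraction of posts whose UTC hour falls in [start, end)."""
    if not utc_hours:
        return 0.0
    hits = sum(1 for h in utc_hours if start <= h < end)
    return hits / len(utc_hours)

def moscow_workday_flag(utc_hours, threshold=0.9):
    """Flag an account that posts at least `threshold` of the time
    inside a Moscow 9-to-5 (06:00-14:00 UTC)."""
    return fraction_in_window(utc_hours, 6, 14) >= threshold

# An "account in Florida" whose every post lands in Moscow office hours:
suspect = [6, 7, 7, 9, 10, 11, 12, 13, 13, 13]
print(moscow_workday_flag(suspect))  # True
```

On its own this proves nothing, night-shift Floridians exist, but combined with other signals it is exactly the kind of tell-tale sign Monaco mentions.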
HEFFNER: These days when you Google a term or a person, you’ll see Wikipedia usually as the second or third search result, and then the latest tweets that included that word. So to the extent that there’s a likelihood you might click on a tweet, let’s say you go for Twitter instead of Wikipedia, and that tweet is from a bot, then you may be getting some kind of misinformation, disinformation.
MONACO: Absolutely. The World Economic Forum in 2014 categorized the rapid spread of disinformation as one of the top ten perils to society worldwide, and I think that’s kind of what we’re dealing with now. Bots are a part of that; they’re not the whole story, but they’re part of it. And I think that’s part of what you’re alluding to in your question. So a big question with bots, with disinformation, is: what’s new about this problem? Disinformation isn’t new. Trying to intervene in elections, or in foreign countries’ governments, isn’t new, and we’re no stranger to that, Iran and Chile being two examples of the US intervening in the past century. So what’s new with this phenomenon? I think there are three things. One is individual empowerment. In the ComProp case studies that just got released, there was a worldwide executive summary that mentions that bots democratize propaganda. Now individuals have the power to megaphone a message to a truly global audience; global access is possible in a way that it wasn’t before. The second is speed. Now we can spread information as fast as a photon can travel through a fiber-optic cable. And the third thing, which I kind of alluded to already, is scale. Facebook recently released a report called Facebook and Information Operations, which is all about misinformation on Facebook, and it offers kind of a new lexicon for talking about fake news, because the term is so loaded and misused right now. One of the things it points out is that it’s now possible to reach a global audience with a message, if you have something you want to say and it resonates enough. And that’s historically unprecedented; that’s never been the case. So I think those are the three things we’re dealing with that we’ve never dealt with before, and that make bots very urgent to democracy and to figuring out this problem.
HEFFNER: How can your research inform answers to how we address the problem of bots, beyond what Twitter and Facebook are doing now?
MONACO: Sure. Well, I think spreading awareness is not the sexiest answer, but I think [LAUGHS] it’s one of the most effective ones. So, number one, spreading awareness about the problem, getting people aware of it, is something that can help combat it on a day-to-day basis. If people know misinformation’s out there, they’re more likely to be a little more skeptical about what they see online.
HEFFNER: We were also commenting that the French election, which you allude to, did not sustain the same damage that the US campaign did. We were looking at the model of Brexit and the US, where there was much more time to wage that campaign, while in France, which is unique compared to the US and the UK, there is that blackout period, which seemed to be enormously…
MONACO: Which is terrifying. [LAUGHS] Yeah.
HEFFNER: Terrifying, but it could have been helpful to Macron, in the press not reporting any of the WikiLeaks findings, whether they were genuine or fake emails or personal details. The fact that the journalists in France had to use discretion, and there was some discretion, to me, that was an answer that we don’t really experience now with our elections. The third prong that you mentioned, scale: how can you deal with scale, other than having a blackout period where, you know, at a certain point you have to trust the integrity of information and rely on that? How could we at all bring that French system to bear on the flood of misinformation that happens on a regular basis here?
MONACO: Yeah, I do think that, again, spreading awareness is a big part of it. And maybe shortening the election cycle; a lot of people have suggested that from the primary all the way to the presidential election is just too much time. People get worn out and they stop paying attention. So I think that might be a factor as well. I think France got through it a bit more successfully for other reasons, too. Having lived in France for a year, I think France has a very politically active culture in a way that hadn’t been the case in the US before. I think things have changed now with the election of Trump: people are talking about politics every day, everywhere they go, and are politically engaged in a way that I haven’t seen in my lifetime. So there’s that. I also think that game theory came up in the French election. You know, France had the convenience of sitting out a few rounds and watching [LAUGHS] this process play out, during Brexit, during the US presidential election. So by the time it happened in France, I think they were pretty savvy to what was going on. And again, they attempted to hack Macron, and it was announced before Macron Leaks happened; so the quote-unquote successful attempt was actually the second hacking attempt on Macron. So I think those tell-tale signs, paying attention, and game theory were all kind of key to the French keeping their cool and…
HEFFNER: And finally, Nick, as you look at the scale of the interconnectedness that we experience as a society, grounded in this technology, beyond the awareness factor, what is your hope that these companies will create an incentive to develop a culture of technological stewardship that we have not seen yet? Google and Facebook and Twitter taking ownership of the problems more fully: what would that look like, if there were some radical reinvention of the way they engage with users?
MONACO: Yeah, so this is part of what I hope my research and other researchers’ work will contribute to. In terms of concrete solutions, I read an article the other day that Facebook is hoping to use AI to root out terrorism on its network. Specifically, how it’s going to do that is to use machine learning to classify users into, you know, terrorist or not. With the most black-and-white cases, it will be able to say immediately, we’re going to block this user and kick them off the network. But the majority of cases will have to go to a human review team, so that they can analyze them with expertise and go from there. I think the bot problem has to be the same thing, because at the end of the day this isn’t just a computational problem. This is a social problem; this is a very human problem. And I think the solution will accordingly have to be both computational and social in nature. So a similar system, where maybe we use machine learning to analyze the network, because it’s too vast for a human to dig through, and get flagged accounts both from users and from the machine learning systems, but send a subset of those accounts on to a human team to be analyzed: I think that’s one solution.
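The hybrid pipeline Monaco proposes, machine scoring plus human review, can be sketched as a simple triage function. The thresholds and account names below are illustrative assumptions; a real system would tune them against labeled data.

```python
def triage(accounts, block_at=0.95, review_at=0.6):
    """Route (name, bot_score) pairs into three buckets:
    auto-block the black-and-white cases, send the ambiguous
    middle to human analysts, and leave the rest alone."""
    blocked, review, cleared = [], [], []
    for name, score in accounts:
        if score >= block_at:
            blocked.append(name)   # clear-cut: act immediately
        elif score >= review_at:
            review.append(name)    # ambiguous: needs human judgment
        else:
            cleared.append(name)
    return blocked, review, cleared

accounts = [("@spam_net_0042", 0.99), ("@maybe_bot", 0.70), ("@regular_user", 0.10)]
print(triage(accounts))  # (['@spam_net_0042'], ['@maybe_bot'], ['@regular_user'])
```

The computational half scales to millions of accounts; the social half, the review queue, is where the human judgment Monaco insists on comes in.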
HEFFNER: Nick. Thank you for… weeding this out with us today.
MONACO: Yeah of course.
HEFFNER: There are few people who have the kind of insight that you do, and we really appreciate you being here.
MONACO: ‘Course. Appreciate it.
HEFFNER: And thanks to you in the audience. I hope you join us again next time for a thoughtful excursion into the world of ideas. Until then, keep an open mind. Please visit The Open Mind website at Thirteen.org/Openmind to view this program online or to access over 1,500 other interviews. And do check us out on Twitter and Facebook @OpenmindTV for updates on future programming.