Julia Angwin

Confronting Algorithms of Bigotry

Air Date: June 23, 2018

Award-winning investigative reporter Julia Angwin talks about her new journalism venture exposing technologies that harm society.


HEFFNER: I’m Alexander Heffner, your host on The Open Mind. My guest today is leading a new investigative effort exploring technology and algorithms. Julia Angwin most recently was a senior investigative reporter at ProPublica. From 2000 to 2013, she was a reporter at the Wall Street Journal, where she led a privacy unit that was a finalist for a Pulitzer Prize in explanatory reporting in 2011. Her book “Dragnet Nation: A Quest for Privacy, Security, and Freedom in a World of Relentless Surveillance” was published in 2014, and it was shortlisted for best business book of the year by the Financial Times. Announcing her new, as-yet-unnamed venture, Julia said she wanted to see if she could scale up her style of tech-driven investigations by building a newsroom around them. That includes covering the big platform companies, but also, through the lens of these investigations, the tech that’s used in other aspects of life. At ProPublica, Julia explored racial bias in software used in criminal justice and the algorithms that generate unjustifiably higher car insurance prices in minority neighborhoods. She also plans to build tools similar to the “Facebook political ad collector” that she built at ProPublica to allow the public to understand these critical technological issues. Welcome, Julia. Congratulations on your terrific ProPublica reporting and this new venture.

ANGWIN: Thanks, it’s so great to be here.

HEFFNER: Your reporting at ProPublica uncovered this very clearly. Tell us how that came about, how you were able to identify those algorithms that were feeding into a cycle of bigotry.

ANGWIN: Yeah, you know honestly, like a lot of investigations, it was something we kind of stumbled on. It was about two years ago that my team was trying to understand what Facebook knows about you. So we built a little tool and asked people to download the software, and when they downloaded it and went to Facebook, it found the privacy settings page, which was like five menus deep, that said what Facebook thought it knew about you. And it would send it all to this public database. So we built a database of like 56,000 attributes that Facebook had collected about people, ’cause we wanted to get a sense of exactly how granular these different categories they place people in really were. And we noticed that they had a category for race: African-American affinity, Hispanic-American affinity. And what affinity turned out to mean was not that you had ever said what your race was [LAUGHS], but just that they had intuited that you had an affinity [LAUGHS] for people of that race, which is kind of a bizarre construct. And we noticed also that people in the newsroom were getting different designations; I was given, I think, African-American, and I’m obviously not. [LAUGHS] So we thought, I wonder how advertisers are using this. And we looked and saw that you could actually go in and buy an ad for an apartment, and there was a dropdown menu saying exclude my ad from being shown to these categories. And you could say never show my ad [LAUGHS] to anyone of African-American, Hispanic-American, or Asian-American affinity. So we called some lawyers, and they were like, that sounds like a violation of the Fair Housing Act, and that sort of kicked off our investigation of discriminatory ads, which we then found in other areas as well.
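
To give a rough sense of the mechanics she describes, here is a minimal sketch of how volunteer-submitted ad-preference attributes might be pooled and scanned for sensitive categories. It is purely illustrative; the field names, file layout, and keyword list are assumptions, not ProPublica’s actual tool.

```python
# Hypothetical sketch: aggregate ad-preference attributes submitted by
# volunteers (e.g., exported as JSON by a browser extension) into one
# shared tally, and flag sensitive categories such as the old
# "multicultural affinity" labels. All field names here are invented.
import json
from collections import Counter
from pathlib import Path

SENSITIVE_KEYWORDS = ("affinity", "ethnic", "religion", "politics")

def load_submissions(folder: str) -> list[dict]:
    """Read every volunteer's exported attribute list from a folder of JSON files."""
    return [json.loads(p.read_text()) for p in Path(folder).glob("*.json")]

def aggregate(submissions: list[dict]) -> Counter:
    """Count how many volunteers were assigned each advertising category."""
    counts = Counter()
    for sub in submissions:
        for category in sub.get("ad_categories", []):
            counts[category] += 1
    return counts

def flag_sensitive(counts: Counter) -> list[tuple[str, int]]:
    """Return categories whose names suggest race, religion, or politics."""
    return [(c, n) for c, n in counts.items()
            if any(k in c.lower() for k in SENSITIVE_KEYWORDS)]

if __name__ == "__main__":
    subs = load_submissions("submissions")   # folder of volunteer exports
    for category, n in flag_sensitive(aggregate(subs)):
        print(f"{category}: assigned to {n} volunteers")
```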

HEFFNER: The most documented of which, or at least the most publicly known through the testimony of whistleblowers, was the ability to advertise to people who are engaging in some kind of bigotry, hate-mongering directed towards African-Americans or Jews, in the case of the people who were being targeted. You identified subsets that could be advertised to that reflected those groups’ hatred of African-Americans or Jews or other groups.

ANGWIN: Right.

HEFFNER: Since then, has Facebook actually grappled with this problem and changed the way that it is doing business? Or no?

ANGWIN: Yeah, Facebook has made some changes. I think it remains to be seen if they are enough. With the ability to exclude your ads by race, they’ve gotten rid of that dropdown menu. So you can’t exclude by race anymore. You can still exclude, however, by things like zip code, which often can be a proxy for, like, a minority neighborhood. So it’s not clear if that fully solves it, but it’s something. Separately, there was the thing we found where they had these ad categories, you could target Jew-haters for instance, or people who wanted to burn Jews, right? And what that turned out to be was that people had described themselves that way in their own description field about themselves, and Facebook just automatically turned all of those fields into advertising categories. So after we showed that that was a category, they turned off that ability and said, we’re not gonna create ad categories out of every single word that people put in their Facebook profile about themselves. And so that might solve that, but there are obviously all sorts of ways to target by hate on Facebook still, because, for instance, you can still buy ads targeted, you know, to the Nazi party, right? Because Nazis are a legitimate political party, unfortunately, in some places.
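
To make the zip-code-as-proxy point concrete, a small hypothetical check might look like this. The targeting-spec fields and the demographic figures are invented for illustration and are not Facebook’s actual ad API.

```python
# Hypothetical sketch: flag an ad targeting spec whose excluded zip codes
# fall disproportionately in majority-minority areas. Both the spec format
# and the demographics table below are invented for illustration.

# Share of residents who are people of color, by zip code (made-up numbers).
ZIP_DEMOGRAPHICS = {
    "60619": 0.96, "60652": 0.88, "60614": 0.18, "60657": 0.15,
}

def exclusion_skew(targeting_spec: dict, threshold: float = 0.5) -> float:
    """Fraction of excluded zip codes that are majority people of color."""
    excluded = targeting_spec.get("excluded_zip_codes", [])
    if not excluded:
        return 0.0
    flagged = [z for z in excluded if ZIP_DEMOGRAPHICS.get(z, 0.0) >= threshold]
    return len(flagged) / len(excluded)

housing_ad = {"category": "housing", "excluded_zip_codes": ["60619", "60652"]}
skew = exclusion_skew(housing_ad)
if housing_ad["category"] == "housing" and skew > 0.5:
    print(f"Possible proxy discrimination: {skew:.0%} of excluded zips are majority-minority")
```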

HEFFNER: Why do you think though we still haven’t seen a lawsuit outside of Cambridge Analytica that is forcing Facebook in effect to face the music of all of the harm done when those groups were allowed to advertise? It occurs to me that the Cambridge Analytica piece is a relatively small piece of the Facebook surveillance story. And the idea that harm was done to the public and there haven’t really been any class action lawsuits or examples of collective groups asserting that they were harmed.

ANGWIN: Mm-hmm. Right. Well the problem is you can never tell if you were harmed on Facebook. So, for instance, the harm of the discriminatory housing ads would be that you didn’t see an ad, right? Like, the harm would be that all sorts of people of different races were not shown housing ads. And so it’s very hard for them to prove they didn’t see something. There are a couple of cases actually pending right now: there’s a Fair Housing group that just brought a case across several states alleging that, you know, they had bought ads, approved by Facebook, that discriminated by race. And there’s a case pending based on our work, actually, about age discrimination, because Facebook also would let employers target their job ads only to certain age ranges, which may actually violate the Age Discrimination in Employment Act. So there are a couple of cases pending, but they’re all kind of in early stages, so I don’t know if we know yet what’s going to happen with those.

HEFFNER: How do you see this intersecting with the Free Speech Movement or the cause of free speech?

ANGWIN: Yeah, I mean it’s a confusing situation, right? Because people think of free speech, and it’s certainly a value we have in America, but it doesn’t actually apply, right, to commercial platforms. And so Facebook has the right, legally of course, to set any rules it wants within its walled garden. And they do. And they have their own rules about what speech they allow. So they have secret, you know, hate-speech rules, which actually I think they just published last week. But we obtained their secret files, [LAUGHS] published them last summer, and showed what the rules were, ’cause people didn’t know what the rules were, why they would delete one post but not another. And so they actually are kind of like a government in their own way, in the sense that they set the rules for speech globally, like in every country, and countries go to Facebook to lobby, to say, well, we don’t want this kind of speech in our country, it’s illegal, right? And so we’re in a very weird position where Facebook is like a government in the sense that it has this ability to regulate speech, and yet we don’t, as citizens, have any sort of mechanism to hold them in check, the way that at least we can vote out a government and get a new one if we don’t like how they’re dealing with free speech.

HEFFNER: When he testified, Mark Zuckerberg was asked repeatedly, what is free speech to you? And there were a number of conservative senators, and I shouldn’t say conservative, Republican senators, who were sort of suggesting that free speech gives a license to racism. It’s the whole question of the marketplace of ideas. And when bigotry is being monetized, it becomes a more competitive force in the marketplace of ideas. How did you view the Senate’s response to Zuckerberg’s testimony and Zuckerberg’s own defense of that Facebook standard of free speech?

ANGWIN: Well, you know, it’s interesting, because I think that Facebook actually does try to limit bigotry, right? I think most people would say it’s moderately successful at best. But they do try. The thing that is really the main legal challenge there is that Congress gave all the tech companies blanket immunity from liability for speech on their platforms, back in the 1996 Telecom Act. And so essentially, even if something on Facebook violates an actual law, right, there’s a legal question about whether they are actually liable for that, because they got this special immunity. And so everything that they do to monitor speech or to monitor bad practices is kind of just at their discretion, because they’re not really going to be held accountable for it under this standard of immunity they have. And Congress, I think, is right now considering whittling away or maybe taking away that immunity. They’ve already taken a little piece of it away. They’ve passed a law, which I think has not been signed yet by the president, that basically says that for sex trafficking, that particular category of things, they are not immune from liability. Which they previously were; they could say, look, I didn’t know what was happening on my site, whatever, right? And now they’re not gonna be able to do that. And that might happen for more sectors of bad behavior.

HEFFNER: And this perfectly explains why Richard Burr could convene that hearing, that first hearing on social media influence on the election, and basically get these social media companies’ lawyers, the counsels, to admit that they were violating FEC laws from the outset in terms of selling ad space and receiving a whole lot of revenue from entities that they didn’t know, that could’ve been agents of foreign governments, intelligence services, extremist organizations. It was always notable to me that in that hearing, the very first exchange was them basically admitting, confessing, that they did break the law, but that immunity has shielded them in a lot of respects, I guess. The thing I wanted you to reflect on is that there’s a kind of conventional attitude, as it pertains to Facebook versus Twitter, that Facebook is financially robust enough to actually engage in that audit of its own governance and to increase the standards of discourse, whereas Twitter, which has struggled over time to have that same profitability, has gotten nowhere near the level of control to ensure the integrity of information and to ensure the end of harassment. And you noted in a recent Tweet that hate-bait is the new click-bait. So I had hoped you could maybe help us contrast the Facebook and Twitter situations.

ANGWIN: Sure. Yeah, I mean, if you think about my Tweet on hate-bait versus click-bait: in some ways, all of these platforms are designed around engagement. The more you stay on them and interact, the more ads you’re gonna see, right? And that’s essentially their business model. And so what you have is sort of an escalation of outrage, ’cause it turns out that outrage is a really great way to get you to stay. All these different platforms are built around the idea of increasing your engagement, and what has happened over time is they’ve realized, and people have realized, that outrage is a great way to get people engaged. The more outraged you are, the more times you click, and then you write a comment, and then you get in a fight with somebody, and that stays on the platform. And so then you’re on there longer, you see more ads, and so unfortunately outrage is, like, a really great business model for both of these companies, for Facebook and Twitter. Facebook has recently said they’re going to try to limit outrage and try to keep people happier on their platform. I don’t know how they’re going to do that, whether it’s measurable, and whether it makes a difference. What it does mean is that news websites are already suffering, because they’re seeing less traffic; unfortunately, the definition of news is usually that it’s something bad, right? So their stories aren’t getting ranked as highly. Twitter is not doing, I think, the same thing, although they have said they’re going to do some experiments to try to figure out how to increase positive interactions over time. And it’s true that both platforms have different financial resources to bring to bear, right? Facebook is a behemoth, and Twitter is less so. But honestly, the money needed to do this isn’t so much; I think both of them could adequately do it if they wanted to. The question is, what’s their real incentive? I mean, their incentive is to make the bad press go away, but in the end they still need people to stay on the platform and look at ads, and that’s their business model.
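
One way to picture the incentive Angwin describes: if outrage reliably predicts clicks and comments, a feed that ranks by predicted engagement will surface it. The toy scoring function below illustrates that dynamic and how a down-weighting knob could change the ordering; the weights and fields are invented, not any platform’s real ranking system.

```python
# Toy illustration of an engagement-optimized ranking: if outrage reliably
# predicts comments and time-on-site, an engagement score will surface it.
# The weights and the "outrage penalty" knob are invented for illustration.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_clicks: float      # model's guess at click probability
    predicted_comments: float    # model's guess at comment probability
    outrage_score: float         # 0..1, e.g. from a toxicity classifier

def engagement_rank(post: Post, outrage_penalty: float = 0.0) -> float:
    """Higher scores appear higher in the feed; penalty 0 means pure engagement."""
    engagement = 0.4 * post.predicted_clicks + 0.6 * post.predicted_comments
    return engagement - outrage_penalty * post.outrage_score

posts = [
    Post("Measured policy explainer", 0.20, 0.05, 0.05),
    Post("Rage-bait about the other side", 0.35, 0.40, 0.90),
]

for penalty in (0.0, 0.5):
    ranked = sorted(posts, key=lambda p: engagement_rank(p, penalty), reverse=True)
    print(f"penalty={penalty}: top post -> {ranked[0].title}")
```

With no penalty the rage-bait post wins the top slot; with a sufficiently large penalty the ordering flips, which is the trade-off a platform faces when it says it will "limit outrage."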

HEFFNER: So that’s what I wanted to ask you. How can they incentivize engagement that is not the chest-bumping? If we thought of this not as surveillance but as a kind of stewardship, could users be incentivized to stay on the platform in a way that is healthier?

ANGWIN: I mean, possibly they can be, right? I think it remains to be seen. The real question is not whether Facebook and Twitter can provide the proper incentives for good behavior; it’s whether Facebook and Twitter have the proper incentives to do that experiment. Because they’re both for-profit, publicly traded companies, and in the end, stewardship is low on the list of priorities for all publicly traded companies, right? And so that’s where you really get to the question, which I hate to raise but is worth raising, of whether they need to be regulated in some way. Because if there’s a public interest that isn’t really the for-profit interest, the way government has addressed that in the past is through regulation, right? And so, you know, we have noted scholars raising the issue of, like, maybe they have to be regulated as a utility, right? Like, maybe there’s a whole level we have to go to. And I’m not necessarily advocating for it, but I’m just saying I’m not sure how you get those incentives aligned in the for-profit model.

HEFFNER: And you’re noting the absence; there’s basically been a dead or defunct FCC in the internet age. You identified it in the fact that they’re shielded from legal culpability. But the reality of the telecom conversion to digital culture is that the FCC became a bystander, at most, to all the activities in that digital arena. The Honest Ads Act, which Amy Klobuchar, Mark Warner, and John McCain have sponsored, is a very basic first step. You know, there have been those in the chattering class who have argued for years that Bloomberg or some verified and reputable news company needed to buy Twitter because it was refusing to convert into a kind of sensible cooperative. We’ve been having this conversation on the air for four years, and I remember when Sue Gardner was here with us explaining why they went in the direction of for-profit instead of not-for-profit. So, as you continue to investigate these issues, what kind of regulation could work?

ANGWIN: Right, it’s a really good question. I did write a story recently, my last article at ProPublica, for the Atlantic, about four small things that could be used to patch some of the regulatory holes around Facebook. And the one about election ads is one I think that’s really worth thinking about. This woman Ann Ravel, who’s a former commissioner of the Federal Election Commission, has suggested that the tech companies should have some sort of know-your-customer laws, where essentially, for political advertising, they need to know who’s buying the ads, right? I mean, that was the problem with the Russian ads: who knows who these people are, right? And she has said there’s a model in the way Treasury requires the banks to know their customers. What they have is sort of an enforcement division, and whenever the banks have somebody suspicious, they have safe harbor to bring those people’s names and information to the enforcement division, and that law enforcement group can investigate, right? And that sort of tracks with what Facebook said actually in their hearings. They were like, look, we didn’t know these were Russians, and we would actually need the intelligence agencies or law enforcement to tell us that, because we don’t know every bad actor out there, and we need more cooperation with the intel community. And so there are mechanisms in place for other industries, where you get your immunity from liability when you bring it to the enforcement operation, and they investigate. And I think that seems like a pretty reasonable first-step approach for just the problem of election ads. But then you have to come up with different solutions for each thing, like discriminatory ads and hate speech; there are all sorts of issues that we wanna solve on these platforms, and not each one of them can be solved with some blanket thing.
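
The know-your-customer idea is essentially a verification gate plus a safe-harbor referral path. A minimal, hypothetical sketch of such a workflow might look like the following; the fields, checks, and decision rules are assumptions for illustration, not anything the FEC, Treasury, or the platforms have specified.

```python
# Hypothetical sketch of a know-your-customer gate for political ad buyers:
# verify identity before the ad runs, and refer suspicious buyers to an
# enforcement contact under a safe-harbor rule. All fields are invented.
from dataclasses import dataclass

@dataclass
class AdBuyer:
    name: str
    verified_identity: bool      # e.g. government ID checked
    payment_country: str         # country of the payment instrument
    registered_committee: bool   # filed with an election authority

def review_political_ad(buyer: AdBuyer, ad_country: str = "US") -> str:
    """Return 'approve', 'refer', or 'reject' for a political ad purchase."""
    if not buyer.verified_identity:
        return "reject"                       # no anonymous political ads
    if buyer.payment_country != ad_country or not buyer.registered_committee:
        return "refer"                        # send to enforcement for review
    return "approve"

print(review_political_ad(AdBuyer("Local PAC", True, "US", True)))      # approve
print(review_political_ad(AdBuyer("Unknown LLC", True, "RU", False)))   # refer
```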

HEFFNER: Right, right. With your new outlet, what are you hoping to achieve when it comes to being solution-oriented in your reporting, so that you’re targeting each one of those problems associated with the new media climate?

ANGWIN: So essentially what I’m hoping to do is build a newsroom around real tech expertise. The way I do my investigations is generally alongside a programmer, and together we figure out how to investigate different issues that involve technology. And having technology experts involved in the investigations leads to better outcomes. What I have found is that it also leads to what I call a better diagnosis of the problem. Lots of people are writing articles like, Facebook is too big, or it’s sort of creepy, or it’s too whatever. We’re writing, hey, they have a dropdown menu for racism, how ’bout we get rid of the dropdown menu? And so it’s just much easier to fix, and eventually Facebook was like, yes, I think we will get rid of the dropdown menu, and now you can’t do that. By diagnosing the problem specifically enough, you make it solvable. So that’s why I think journalists’ contribution, from my point of view, is to diagnose the problems correctly and in the most concrete and data-rich way possible.

HEFFNER: That’s really helpful, Julia. When I said radical transparency before, the idea was that you could hover over an ad on Twitter or Facebook, or you could even hover over a handle or a Facebook page, and get some immediate sense of the statistics. So the first thing that comes to mind is Dove. Dove is running an advertisement on these channels. You hover over the ad and you see how long the ad has been running, how many people have bought the product, I mean, how many people have been directed to it from these IP addresses, you know? How many people are accessing it from the US? The Midwest? The Northeast? Europe, Asia? This kind of open-book view. Would that be a way to experiment with radical transparency?

ANGWIN: It’s an interesting idea. I think sometimes it’s hard with commercial speech, like with a Dove ad, right? Because the public interest in Dove’s targeting is maybe less than it is for political advertising. So I think you know…

HEFFNER: So let’s use the other example.

ANGWIN: Yeah, so let’s start with political advertising.

HEFFNER: Let me give you a different example: a meme that’s circulating that is partisan in nature or that is making a fallacious or baseless claim. You could hover over it, and ultimately it could be flagged and removed, but you would be able to see where it originates, which is important especially on Twitter, where so much content has been stolen to mass-produce these automated bots and troll accounts. So that’s another example where you could see beyond what Amy Klobuchar has advocated, which is that the first five seconds of that online ad are gonna be held to the same standard as an ad on television, paid for by X organization. I mean, a step further than just knowing who might’ve bought the ad within the embedded video.

ANGWIN: So what you’re describing is probably exactly the kind of tool I’m hoping to build in this new newsroom. One of the things I really believe is that journalism doesn’t always have to be words. Journalism is about educating people. And so one way you could educate people is by building a tool, right, that teaches them something about what they’re looking at, about the memes or the Twitter account or whatever. I can’t promise you which of these things we’ll do or how easy they would be, but that’s the kind of thing we want to build. Maybe we’re going to have a thing where you go and look at comments. You know how, like, these days it feels like everything on Amazon has a hundred million good reviews? Obviously some of them must be fake, or all of them; it’s unclear, right? We’d like to do some analysis and give you a kind of rating: what’s the likelihood of this being truthful, right? And so I think there’s a lot more room for journalism to try to bring some accountability to these online platforms, not always in the form of a story but sometimes in the form of tools.
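
As a flavor of the kind of reader-facing tool she describes, a crude review-authenticity heuristic might look like the sketch below; the signals and weights are invented for illustration, and a real tool would need validated models and data.

```python
# Hypothetical sketch of a review-authenticity heuristic of the kind a
# journalism "tool" might expose to readers. Signals and weights are
# invented for illustration only.
from dataclasses import dataclass

@dataclass
class Review:
    text: str
    reviewer_review_count: int   # how many reviews this account has posted
    same_day_reviews: int        # other reviews of this product posted that day
    verified_purchase: bool

def suspicion_score(r: Review) -> float:
    """0 (looks organic) to 1 (looks manufactured), using crude signals."""
    score = 0.0
    if not r.verified_purchase:
        score += 0.3
    if r.reviewer_review_count <= 1:
        score += 0.3                 # throwaway account
    if r.same_day_reviews >= 50:
        score += 0.3                 # burst of reviews in one day
    if len(r.text.split()) < 5:
        score += 0.1                 # near-empty review text
    return min(score, 1.0)

r = Review("Great!!", reviewer_review_count=1, same_day_reviews=120, verified_purchase=False)
print(f"suspicion: {suspicion_score(r):.1f}")   # 1.0 -> likely part of a campaign
```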

HEFFNER: Will those tools perhaps suggest that these companies despite the new scrutiny are not going to turn at least immediately to radical transparency, and are not going to eliminate in any kind of wholesale way the perpetuation of fake misleading information?

ANGWIN: I suspect, yeah, I suspect that, you know, I think they’re going to try. But I suspect there’ll be plenty of work for me to do in the next couple of years. I’m not worried about a lack of topics to cover on these types of platforms.

HEFFNER: Right. And just to go back to the outrage factory for one more moment while I have you here. What could be done internally, within the tools, outside of regulation? Because this is particularly problematic on Twitter. If you look at a Donald Trump Tweet, or if you look at a New York Times reporter’s Tweet, you’ll see in response just so much garbage, venom, toxicity, and some of it’s programmed, some of it’s maybe intelligence officers, some combination of genuine partisan tribalism and the proliferation of trolling through bots. What, in the short term, can we do about that problem?

ANGWIN: It’s hard, right? I mean, the truth is these are public squares, and they’re policed by private companies. And the truth is that what is really happening is there are mobs who just want to drive certain types of people out of the public square. They want to make it too costly for them to have a voice, right? And of course, we shouldn’t let them do that as a society. And yet the personal cost to each person who gets a million death threats is super high, and many of them do opt out and stop talking publicly. And that’s a really terrible outcome. Unfortunately, the First Amendment is not really written to address that issue. We don’t have a kind of legal infrastructure to understand that issue. And so I’m not sure what can be done in the short term. I will say Twitter has done a kind of nice thing where you can change your mentions so you basically don’t see anybody who replies to you whom you don’t already follow. So you can kind of limit it, you know, and I think maybe the platforms will come out with more tools to limit your view. But that’s also super sad, ’cause then it just brings the filter bubble even closer; all you ever hear is the same people over and over again. And the beauty of the social networks was the serendipity, that you might find an interesting new person to talk to. So I don’t actually have an answer except to say that it’s a super hard challenge, and it’s probably one that we have to solve collectively, as a society.

HEFFNER: Julia, thank you for your time today.

ANGWIN: Thank you.

HEFFNER: And thanks to you in the audience. I hope you join us again next time for a thoughtful excursion into the world of ideas. Until then, keep an open mind. Please visit The Open Mind website at Thirteen.org/OpenMind to view this program online or to access over 1,500 other interviews. And do check us out on Twitter and Facebook @OpenMindTV for updates on future programming.