Cathy O'Neil

Death by Algorithm

Air Date: September 10, 2016

Cathy O’Neil talks about her new book Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy.


HEFFNER: I’m Alexander Heffner, your host on The Open Mind. I apologize in advance to my teachers for today’s discussion, because they diagnosed me as “mathematically challenged, disadvantaged, illiterate” years ago. Nevertheless, we are gonna discuss numbers, at least in broad strokes. Because last year, at the Personal Democracy Forum, I was fortunate enough to view a profound – and yet understandable – presentation by our guest today, Cathy O’Neil, now converted into her debut book, “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy.” A Ph.D.-trained mathematician and Barnard professor turned hedge fund analyst, O’Neil is the founder of mathbabe.org, where she blogs. And she also headed Columbia University’s Lede Program in data science.

“The Great Recession made it all too clear that mathematics, once my refuge – was not only deeply entangled in the world’s problems, it was also fueling many of them,” O’Neil wrote. In nearly all sectors of the American economy, she contends “inhumane algorithms are multiplying chaos and misfortune,” adding that “ill conceived models now micromanage the economy, from advertising to prisons, from public schools to elections. The single feedback or incentive in these systems? Money.” Cathy, how did we get here?

O’NEIL: Wow, I think… I think it, it’s fundamentally a question of, uh, what you just said basically: People’s fear, fear of mathematics. They are apologizing for not understanding mathematics. Um, that’s a problem. I am not trying to make you feel bad for it…

HEFFNER: Please.

O’NEIL: …but it means that people feel like they don’t have the right to question certain decisions… Um, I think of algorithms as just sort of ways of making decisions – well, decision processes – that happen to be embedded in code. And anybody should be able to question decision processes if, if it’s fundamentally important to their lives. But because algorithms are considered “mathematically sophisticated” and people don’t consider themselves mathematically sophisticated, they feel like they do not have the right to question these things. And so these algorithms take on kind of a life of their own in terms of authority. They have authority that they don’t deserve.

HEFFNER: Hmmm. Well, [LAUGHS] I’ll take what you are saying with a grain of salt. But also acknowledge that I understand to the extent that “trickle-down economics” was a proposition – rooted in math – that never bore fruit for the majority of this country. That is a basic stipulation that I understand. But my, my question really is, if you are gonna rewrite the algorithms – and we are gonna get to the algorithms in a second – but…

O’NEIL: Sure.

HEFFNER: …if we are gonna rewrite them to advantage people of color who are impoverished…if we are gonna rewrite them to “fix” them, to rewrite the rules… You say in, at the end of your book, that if you do it on Facebook and start a survey, Facebook owns your survey, in effect. So you, you… You know, is the math gonna take us anywhere more promising and to that Promised Land when it seems like an explicit race to, um, the top for the one percent and race to the bottom for everyone else?

O’NEIL: That’s a big, loaded question. Um, I think… Let’s back up for a second. What I am trying to describe is the fact that algorithms are not – by themselves – going to “fix” any problems; um, we actually have to be intentional about them. Just like, you know… Let’s personify an algorithm: Imagine that an alien came down from Mars. And we said to this alien, “Please make these decisions for us: who should be hired, who should get into college, who should be fired, how long should someone go to jail?” Imagine that we asked the Mars alien to make these decisions for us – and they just started making these decisions – um, what would we do with that? Would we scrutinize it? Would we say, “Why did you choose this? Or why did you not choose that?” Would we keep track of what the Mars alien has done, so that we could tell afterwards whether it was a fair process? I think we would. But because we have made these ourselves with these sort of “magic machine learning algorithms” – and it’s because we trust mathematics – we don’t scrutinize these processes. So when you ask me, “Is this going to just, you know, increase inequality?” Um, the answer is, well, if we don’t do anything else, if we don’t intervene, of course they will. Because it’s the one percent, it’s the people in power who control these algorithms, and they set them up. They deploy these algorithms to help themselves.

Um, so I guess what I am saying is: Left alone, doing nothing – just assuming that all these “algorithms” or these “aliens” are making “fair decisions” – is not a good approach. Um, but at the same time, we can’t just snap our fingers and expect things to get fair. We have to think about them, giving them credit for how much complexity there is there… And it’s a case-by-case basis. So I mean, I am sure we’ll talk about examples. But for a given algorithm, it’s going to be like, “Hey, this is a decision-making process. How do you scrutinize a decision-making process?” It’s complicated.

HEFFNER: I definitely want to grapple with that.

O’NEIL: Yeah.

HEFFNER: But isn’t the centerpiece, the central problem, that, uh… We are alien to our own mortality or immortality. We are alien to our own morals. So there is no moral underpinning in a given algorithm.

O’NEIL: People like to think of algorithms as outside of the landscape of morals. But, of course, they are not. Um, I like to say that when I am building a model… I am a modeler, so I build models, right? So when I decide – and I make this example in my book – when I decide, um, to build a model that explains how I cook food for my children, right? Uh, one of the aspects of building a model is deciding what “success” looks like. And for me, “success” is “Did my kids have enough nutrients? Did they eat broccoli or some kind of fresh fruit or vegetable?” That’s my opinion… That’s me projecting my values onto my model, right? And everyone does that. Models are not necessarily outside our bodies; sometimes our models are inside our minds, right? So we project success onto them. That is a moral projection, if you will; many, many times, it is. If my kids were in charge of building that model for success for food, for dinner, they would choose, you know, “Well, was there a lot of ice cream involved?” So it would be a totally different value system… Um, and it just keeps going on like that. Not just the definition of “success,” but how we decide what’s important; how we decide, um, how to get to the success. Um, everything is my opinion projected onto my model. So at the end of the day, a “model” is no more than “a formal opinion embedded in code.” And if it’s “a formal opinion embedded in code,” then it has everything to do with morals. We can’t expect it to be separated from that. What happens, in fact, is that the morals of an algorithm – the algorithms we’ll discuss – um, are “default morals” oftentimes…or else they are morals that are thoughtlessly introduced by the ecosystem in which they are built…or by the algorithm that’s used…or the data that’s collected.
So there is all sort of ways that morals get into these algorithms, even when we talk about them being outside of morals. Um, they are just “default morals.” So my point being that: We need to think about that. Rather than carelessly throwing them into that, that soup, we need to actually think about it.

HEFFNER: And, and where do you think the morals are being injected? Let’s talk about a few examples, so…

O’NEIL: Yeah.

HEFFNER: We were saying off camera the examples of, um, formulas being designated in public schools in Washington D.C., um, that are failing communities of color even though there are examples being touted of “success stories…”

O’NEIL: Education… I mean, it’s a long history of educational reform. But basically the fundamental goal was to improve education nationwide, and to set standards nationwide so that our students, our children will be more “internationally competitive,” whatever that means. Um, it almost always starts out with a relatively worthy goal, and no one really would argue against making schools better, making education better and more uniform. But what ended up happening is… The first step was, “Well, let’s test our students to know how much they know.” And large disparities were found basically along poverty lines: Like, poorer kids did a lot worse on these tests than richer kids. Um, now, you or I might stop right there. You or I might say, “Maybe we should equalize funding of public schools in this country, because it’s not equal.” But that wasn’t politically “successful,” as an argument. So instead, what we decided to do was, “Well, we are gonna do No Child Left Behind, Race to the Top,” all these different versions of it, but basically say, well, let’s hold teachers accountable for these students to get better at their tests. Um, we couldn’t expect a teacher to take a kid who is doing very badly in one year and just do magnificently the next year. So that was tried, it failed, it was obviously unfair. So a second generation of algorithms was introduced, where there was a model which sort of “guessed” – given a student, um, in fourth grade – what their grade would be at the end of fifth grade, on a standardized test. And if the teacher managed to get the kid a little bit higher than that in their fifth grade test, then they got a little bump up. If that fourth grader – by the end of fifth grade – did more poorly than expected, the teacher got a bump down.
And then the teacher’s overall score – with these things called “value-added models” or “growth models” – was more or less just the sum of those little bumps. So each student in a fifth grade teacher’s class would be a bump up, a bump down, or no bump at all if they did exactly as expected. And the sum of all those bumps was basically a reflection of the teacher’s, uh, “ability” as a teacher. So, lots of things about that… I’ll go into a couple… First of all, what’s a “value” that – you know, going back to the concept of the value system – what’s a value that we see embedded in this model? Number one is that we can evaluate a teacher by the standardized test scores of his or her students. Boom, like that’s already a big stretch for a lot of people when they think about what makes a good teacher… It’s particularly difficult for certain kinds of teachers – especially if they, um, have students that are much below grade level… Um, it’s sort of already, you know, mathematically been expressed that: If you take a bunch of fourth graders that are actually reading at a second grade level, then by the end of fifth grade – even if they read at a fourth grade level by the end of fifth grade, which is bringing them up two years or even three years – um, they don’t do very well on the fifth grade test, so the teacher isn’t really rewarded for that work. Um, so the test itself is very problematic. But the thing that’s most unfair, from my perspective, is how opaque the model is, because the teachers – and this is a widespread model, it’s used in more than half the states – the teachers in question just get a score. They get a score, and it’s months after the year ended.
Um, they don’t get feedback on how to improve their score… They just get a score, it’s usually between zero and 100, and it basically just says, you know, “You got a 12, you are a terrible teacher. You got a 96, you are a great teacher.” To many teachers – including some of the teachers I interviewed – it seems completely random… and it doesn’t have any accountability itself. That’s probably the nastiest part about it. Um, I interviewed a teacher who got fired for a bad score… Uh, she had reason to believe that some of the students coming into her class had inflated scores: their teacher had cheated the previous year and changed their answers to raise their scores, so the model that gave them “expected scores” for the end of her year was wrong. It was expecting inflated scores. They didn’t do that well, they did fine. But she got a terrible score. She got fired. And she couldn’t appeal it. There was no accountability for the scoring system that was supposed to keep teachers accountable.
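The bump arithmetic O’Neil describes – predicted score versus actual score, summed over a class – can be sketched in a few lines. This is a hypothetical illustration only; the function name and numbers are invented, not taken from any district’s actual value-added model:

```python
# Toy sketch of a "value-added" (growth) model: each student contributes
# a bump equal to actual score minus the model's predicted score, and
# the teacher's rating is just the sum of those bumps.
# All names and numbers here are invented for illustration.

def teacher_value_added(students):
    """Sum of per-student bumps: actual minus predicted test score."""
    return sum(s["actual"] - s["predicted"] for s in students)

class_scores = [
    {"predicted": 72.0, "actual": 75.0},  # beat prediction: bump up (+3)
    {"predicted": 80.0, "actual": 78.0},  # missed prediction: bump down (-2)
    {"predicted": 65.0, "actual": 65.0},  # exactly as expected: no bump
]

print(teacher_value_added(class_scores))  # → 1.0
```

Note how the cheating scenario she recounts maps onto this: if the previous year’s scores were fraudulently inflated, the `predicted` values come in too high, and an honest teacher’s bumps all turn negative through no fault of her own.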

HEFFNER: And I think this particular quote in your book is relevant, as we contemplate these algorithms: “You can’t appeal to a weapon of math destruction, right? That’s part of their fearsome power. They don’t listen. They don’t bend. They are deaf, not only [LAUGHS] to charm, threats and cajoling, but also to logic.” So I understand now more fully what you mean in the book when there is an “underpinning of morality,” in terms of a “just cause,” right? We are gonna institute this formula because it’s gonna create some kind of just outcome – just as you might say, “We are holding these teachers to account,” or “We want to protect our neighborhood.” You said to me off-camera that the most haunting example of a weapon of math destruction is in our prison system, where – unlike education – there can be a real motive of malice in trying to imprison and pocket the most dollars in a privatized industrial complex…

O’NEIL: And not only is there a profit-seeking goal, but you know, we know – knowing a little bit of American history – that it’s just a racist justice system. Like, the fact is we are just used to, uh, putting people in jail based on their race, and the system gives that back to us. The data itself… And so this is sort of another theme that I want to cover, which is: even if your algorithm is somehow “fair” – which we can talk about what that could mean – if the data coming into it is racist…if the data coming into it is biased against, you know, African Americans…then the outcomes are gonna be biased against African Americans… So, example: Um, you know – and this is a well-known example. Like, white teenage boys and black teenage boys smoke pot at about the same rate, but black teenage boys are getting arrested for it much more often. And of course, I am glad to see that, you know, pot is being more or less legalized – I want that to happen – but I am just saying that, historically speaking, who gets in trouble for doing something? Um, not white people. Um, which means that the data itself – the arrest records, which is, you know, what data is in this realm – the arrest records are biased. The arrest records are already racist, if you will. Now, the recidivism risk models that I talk about in my book – it’s a pretty deep subject… I would love to talk about why we even think about “recidivism risk” and whether that’s the right thing to think about… But let me just say what it is: “Recidivism risk” is, um, the risk that someone who is going to jail is going to come back to jail at some point. Now we know that almost all people who go to jail eventually leave jail, so the question is, are they gonna stay clean after that or are they going to end up back in jail?
And it just so happens that judges – in more than half the states in the country – um, are given “recidivism risk scores” for criminals when they are being sentenced. And typically, if they have a higher risk of recidivism – a higher risk of coming back to jail or to prison – um, the judge will give a longer sentence. OK, so that’s the setup. Now, there are two problems with this – well, there are many problems with this – but the first one I will point to is: How is the recidivism risk model determined? How is the risk score determined? When are you considered “risky”? Well, number one is, if you have had a lot of arrests on your record. And we have already talked about that; that’s biased already. So, all things being equal, um, you are going to see more African Americans with a higher recidivism risk. But then I looked into the actual “recidivism risk” model – um, other attributes that go into this model, other data – and many of the questions that are asked of the, um, the person being sentenced are in fact “proxies” for poverty and sometimes race. So what I mean by that is, um… Some of the questions that determine “recidivism risk” are like: “Do you live with your parents? Do you live in a neighborhood with crime? Are you friends with people who are criminals?” Um, all of these things are correlated to basically being poor. So what you have here is a “risk score” which determines the length of time somebody might spend in prison that is doubly biased: once, because, well, police just arrest African Americans more often, and two, because the very questions that go into the score are in fact biased against poor people and African Americans. So it’s… That’s the one that haunts me the most, ’cause it just seems so wrong. And also, it is opaque, just like the teacher’s model we described.
In fact, it’s not entirely opaque. I think if you had a good lawyer, an expensive lawyer, they could help you “game” some of these questions. But guess what… Uh, typically, poor people don’t have good lawyers… Um, it’s unappealable… It’s not even clear the extent to which each judge uses the score… Like, maybe certain judges kind of ignore the score, certain judges really rely on this score… But it’s definitely… And Sonja Starr, who is an amazing researcher in this area, described it as “a thumb on the scale of justice.” And the question is: Is that thumb – just because it’s an algorithmic scoring system – pressed a little bit too hard, because judges assume that it’s “objective truth” rather than just some kind of biased red, yellow or green, uh, light?
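The double bias O’Neil describes – arrest counts that already reflect biased policing, plus questions that proxy for poverty – can be made concrete with a toy score. Everything below is invented for illustration; the features and weights are not drawn from any real recidivism instrument:

```python
# Toy recidivism-style score showing how proxy features inject bias.
# Feature names and weights are hypothetical, for demonstration only.

def risk_score(person):
    score = 0
    # Arrest counts already encode biased policing: the same conduct
    # produces more arrests in over-policed communities.
    score += 2 * person["prior_arrests"]
    # These questions appear neutral but are proxies for poverty
    # (and, through residential segregation, often for race).
    if person["high_crime_neighborhood"]:
        score += 3
    if person["friends_with_convictions"]:
        score += 2
    return score

# Two people with identical underlying behavior:
person_a = {"prior_arrests": 0, "high_crime_neighborhood": False,
            "friends_with_convictions": False}
person_b = {"prior_arrests": 2, "high_crime_neighborhood": True,
            "friends_with_convictions": True}

print(risk_score(person_a), risk_score(person_b))  # → 0 9
```

The point of the sketch is that nothing in the formula mentions race, yet the inputs make the output differ sharply for identical conduct – which is exactly the “doubly biased” structure she objects to.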

HEFFNER: In writing this book, did you find any score systems that were not rigged, that were doing it right?

O’NEIL: Ohhh, you know, I… I am… I am very good at finding flaws in things… [LAUGHS]


O’NEIL: Um, but can you suggest a scoring system… I mean, yes, actually. I mean, baseball. Baseball scores, right? Baseball, and sports in general – especially televised sports – are kind of the perfect example of the way data should work; the way algorithms should work. So we have, um… The data itself is collected by a community, and fought over for accuracy: “Was that really an error, or was that a base hit?” You know, we care about that… And everyone gets to see how it’s being collected and whether it’s fair. And then we use it to decide whether a player is good or not. And we have arguments about that. That’s what sports radio does, right, [LAUGHS] if you think about it. Um, it’s kind of the opposite of what I have been talking about: It’s not opaque. I mean, to some extent, the players themselves might say it’s not that accountable, because they might think it’s unfair how they are being judged… But it is as close as it gets to an ideal, where you have a very public viewing of the process and of the data collection…

HEFFNER: Is that because it’s merit driven? I am trying to…


HEFFNER: …assess why you say that’s an exception to the rule. And I am saying that in a climate that has wrought severe populist angst and rage during this election cycle. Uh, and underlying that rage are the rigged systems that you point out, from the U.S. News and World Report rankings to practices of delivering loans to families… Um, and I am just wondering – in your mind – how the sports metaphor represents the antidote?

O’NEIL: I wouldn’t say that it’s entirely about the extent to which a person has control, but it is a lot about that, right? And I am not… By the way, I just… A caveat. I just want to make it clear that I am not trying to say I am, I am the moral, um, guide.


O’NEIL: I want to say that morals play a part in these things and that we should identify, identify those morals… But as far as I am concerned, in terms of what seems “fair” …It seems fair to evaluate a baseball player on whether – on how many hits they got; how many, um, home runs they got, etc.

Now, obviously there is a lot of chance in that, um – and it’s understood that there is a lot of chance. You know, on a given day, the left fielder might be really hot and be able to catch your would-be hit. So there is not… You can’t control… As a baseball player, you can’t control everything… But compare that to the teacher whose students are poor and have family problems and do badly on their tests because of their family disruptions…they have no control over that…yet they are punished for that. Or compare that to the, uh, person being sentenced…who, um, yeah, they grew up in a poor neighborhood…that’s counted against them. They had no control over where they were born…but it’s counted against them in the “recidivism risk model.” So I just… I, you know, I think…

HEFFNER: I see what you are saying now…

O’NEIL: Sports players agree to that, when they take a job in the Major League level.

HEFFNER: And I think, uh, pardon the pun, uh, “a level playing field.”

O’NEIL: Yeah, [LAUGHS] exactly. Yeah. It does level the playing field, right?

HEFFNER: And you need a certain set of prerequisites in order to compete in this digital economy, that’s for sure…

O’NEIL: And you know, I mean, I am not actually… And if you… You know, I know you read my book, so you know that I am not saying we shouldn’t use data in, in education, right? I am not saying that, uh, data is a useless thing… I think all teachers know they are going to be evaluated… Everybody knows they are gonna be evaluated at their job; the question is whether the process by which they are evaluated is reasonable, it’s fair, it’s accountable, it’s transparent enough for people to ask those questions.

HEFFNER: Well, when we talk about “accountability” in this context – and we are running out of time, uh, even though, you know, this is fascinating – and there is a lot of collateral damage that you [LAUGHS]…

O’NEIL: Yes.

HEFFNER: …cite as a result of these “algorithm fails.” Fundamentally, you say, in your conclusion, that it is the human in us that is going to lead us to question algorithms that are, you know, unfair. And so my parting question to you is: if the bottom line needs to be replaced with something that is more considerate of other factors besides cha-ching – uh, trillions, billions, whatever you are aiming for – how does it happen?

O’NEIL: [LAUGHS] It’s gonna happen in three ways. Um, first of all, I think the public in general has to be more on top of this trust issue they have [LAUGHS] with mathematics. The fear of mathematics – which translates into trust of algorithms – they have to get over that, you know? And everything I have said… I didn’t write down a formula for you, right? Everything I said is very plain English. And I want to keep it that way, because it’s a really important point for most people to say, “Hey, I want to know why this happened to me.” So the second point is, I don’t care about most algorithms, and neither do you. I could make an algorithm about my cooking dinner for my kids; no one cares, right? So I am not saying we have to scrutinize every algorithm; that’s just too much work.


O’NEIL: I want to scrutinize certain classes of algorithms that, um, that…

HEFFNER: Affect public policy…

O’NEIL: …that affect people…

HEFFNER: In terms of what’s constraining the design of the algorithms: to be most innovative today, you have to cut costs. You have to cut jobs. You have to really destroy the livelihoods of people. Isn’t that fundamentally what’s happening?

O’NEIL: Say that again.

HEFFNER: Absent, uh, moneybags for, you know, being droned to you…


HEFFNER: …how do you correct it? Because that… Is that not right? That the innovation is being molded around cost-cutting measures that destroy people’s lives…

O’NEIL: Yeah, yeah. Well, listen, I mean… A great example of that is the algorithms that filter applications for jobs. They are simply replacing, uh, human resources staff that people don’t think they can afford. Um, that’s probably not gonna change, but we do have to make sure that they are not just excluding women…or they are not just excluding Latinos… Like, we have to make sure – like we do for any process – that it’s fair. And there are laws for this. Many of the suggestions I have at the end of my book – and that’s the third way – are actual laws, um, mostly Civil Rights Era anti-discrimination laws being updated to the big data age.
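One established way the anti-discrimination law she mentions gets applied to selection processes is the “four-fifths rule” from the U.S. EEOC’s Uniform Guidelines: compare selection rates across groups, and flag potential adverse impact if one group’s rate falls below 80 percent of another’s. A minimal sketch, with invented numbers:

```python
# Sketch of a disparate-impact check (the EEOC "four-fifths rule"):
# a selection-rate ratio below 0.8 flags potential adverse impact.
# The applicant counts below are invented for illustration.

def selection_rate(applicants, selected):
    return selected / applicants

def four_fifths_check(rate_group, rate_reference):
    """Return (ratio, passes): ratio of selection rates and whether
    it clears the 0.8 threshold."""
    ratio = rate_group / rate_reference
    return ratio, ratio >= 0.8

rate_men = selection_rate(applicants=100, selected=30)    # 0.3
rate_women = selection_rate(applicants=100, selected=18)  # 0.18

ratio, passes = four_fifths_check(rate_women, rate_men)
print(round(ratio, 2), passes)  # → 0.6 False
```

This kind of audit only looks at outcomes, not at why the filter behaves that way – but it is the sort of measurable check that existing employment law can hang an obligation on, which is her third point.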

HEFFNER: Cathy O’Neil, we’ll have to do another show on “pay equity,” which was the topic I wanted to discuss with you, but we ran out of time.

O’NEIL: Great. Nice to meet you. Thank you so much.

HEFFNER: Of course. And thanks to you in the audience. I hope you join us again next time for a thoughtful excursion into the world of ideas. Until then, keep an open mind. Please visit The Open Mind website to view this program online or to access over 1,500 other interviews. And do check us out on Twitter and Facebook @OpenMindTV for updates on future programming.