Joy Buolamwini
Algorithmic Justice
Air Date: January 11, 2019
I’m Alexander Heffner, your host on The Open Mind. It was heartening to learn from today’s guest about the formation of the Algorithmic Justice League at the MIT Media Lab last year. That’s because she’s building a prescription for the unethical artificial intelligence with which scholar activists Virginia Eubanks and Cathy O’Neil most recently grappled on The Open Mind. We are thrilled to welcome Algorithmic Justice League founder Joy Buolamwini. Her organization’s mission is to highlight algorithmic bias through media, art, and science, to provide space for people to voice concerns and experiences with coded bias, and to develop practices for accountability during the design, development, and deployment of coded systems. In her efforts to document bias and restore trust in technology, Buolamwini recently delivered a presentation to the Federal Trade Commission drawing on her MIT thesis findings on gender and racial bias in facial analysis technology, which examined cognitive services from IBM, Microsoft, and other companies. Their ultimate effect, if unchecked, can be a cycle of computer-generated discrimination, and I’ll ask Joy to expound on that. A pleasure to see you again.
BUOLAMWINI: Great to be here. Thank you.
HEFFNER: It was like a spark went off when I was in your lab, Ethan Zuckerman’s lab, and you were giving this terrific presentation on bias that is institutionalized, in effect, through technologies today. A spark, because we had recently hosted two authors on that very subject who were documenting the problems, but here you are addressing it with solutions. So can you give our viewers a sense of the origin of your project, how you created this organization?
BUOLAMWINI: Sure, so I didn’t start out thinking about algorithmic justice. When I was at the Media Lab for my first semester, I took a course called Science Fabrication. You read science fiction and you try to build something fanciful that would probably be impractical otherwise. So I built this project called the Aspire Mirror. You look into what seems like a regular mirror and then, when the camera detects your face, suddenly you can become a lion, or in my case, I wanted to be Serena Williams, so whatever you want to be in the mirror. And as I was building this project, I was using a webcam that had computer vision software meant to detect the face, but it didn’t consistently detect my face until I literally put on a white mask. So it was this experience of building what was essentially an art installation and running into issues with the technology that got me questioning: why am I wearing a white mask to be detected? I have lighter-skinned colleagues who seemed to use this just fine. Is it because of the lighting conditions? What’s going on? So that’s really when I started exploring facial analysis technology, which is being powered by AI techniques. So that’s where it started. And so I gave a TED Talk about this, over a million views, and I thought somebody might want to check my claims, right, about not having my face detected. So why don’t I check myself? So I took the TED profile image and I ran it across various systems from IBM, Microsoft, Google, etc. And I found that some of their systems didn’t detect that image at all, and the ones that did detect the image labeled me as male. I am not male, right, either in gender expression or identity, and so I wanted to know if it was just my unique facial features or if it was something more systematic. So that’s what ended up forming the basis of my Media Lab thesis, where I wanted to see how accurate these facial analysis systems are when it comes to guessing the gender of a face, and does your skin type make a difference, does your gender make a difference? And it might seem like, okay, you got misgendered, does that really matter? And then I came across a report called The Perpetual Lineup, which came out of Georgetown Law. It showed that one in two adults in the US, that’s over 117 million people, has their face in a face recognition network that can be searched by law enforcement, unwarranted, using algorithms that haven’t been audited for accuracy. And in the UK, where they actually have real-world performance reports, you have false positive match rates over 90 percent, and they even had instances where two innocent women were falsely matched with men. So this situation, which in one context might seem like an annoyance, in the real world can actually lead to issues, and this was the backdrop against which I decided to go ahead and test these systems. But I ran into a major issue, which was that the existing benchmarks, the data sets of faces that are used to judge how well these systems work, are not very representative.
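The face-detection failure described above can be reproduced with off-the-shelf tools. Below is a minimal sketch of that kind of check; the transcript does not say which software powered the Aspire Mirror, so OpenCV’s bundled Haar cascade detector, the file name, and the parameters here are illustrative assumptions, not her implementation.

```python
# Illustrative only: count faces found by OpenCV's stock Haar cascade detector.
# Running the same check on portraits of different people is one crude way to
# notice when a detector works for some faces and not others.
import cv2  # pip install opencv-python


def count_detected_faces(image_path: str) -> int:
    """Return the number of faces the default Haar cascade finds in an image."""
    image = cv2.imread(image_path)
    if image is None:
        raise FileNotFoundError(f"Could not read image: {image_path}")
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces)


if __name__ == "__main__":
    # "profile_photo.jpg" is a hypothetical file name.
    print(count_detected_faces("profile_photo.jpg"))
```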
So I started looking at various benchmarks that have been used as the gold standard to say how well we are doing in the facial analysis space. And one of the early gold-standard benchmarks turned out to be 77 percent male and 83 percent white individuals. Then I looked at a benchmark coming from the National Institute of Standards and Technology. It’s a government agency that’s tasked with making benchmarks for this type of technology, and I looked at their benchmark: a slight improvement, 75 percent male and 80 percent lighter-skinned. So I realized that if we have these pale male data sets, we’re not actually going to have a good sense of performance, and so I made a new data set, one that was more inclusive, called the Pilot Parliaments Benchmark. And people are like, well, how did you make it more gender balanced, how did you get more skin type variation?
I went to the UN Women website and I got a list of the top 10 nations by their representation of women in parliament. Rwanda led the way there, in the sixties percent-wise, and you have progressive Nordic countries in there: Iceland, Finland, Sweden, and a few more African countries. And so I decided to choose three African countries and three European countries to get a bit more of a gender-balanced benchmark, but also to get a spread of skin types, right? Very dark-skinned individuals, very light-skinned individuals. So I made this benchmark because we currently had these pale male benchmarks, right? And with this new benchmark, this is where it started to get interesting. I tested systems from IBM, from Microsoft, and from Face++, a leading billion-dollar tech company in China that’s actually used by the Chinese government, so they have access to a large store of Chinese faces. And I wanted to see, okay, how accurate are these systems?
So if you do it in aggregate, if you just look at the overall accuracy for that whole benchmark, you had accuracy that went from 88 percent with IBM up to 94 percent with Microsoft, which might seem okay. But once we started to disaggregate the benchmark results by gender and by skin type, that’s when we saw very large disparities. So for lighter-skinned males, error rates were no more than one percent for guessing the gender of a face in that benchmark. When you go to lighter-skinned females, no more than seven percent. When you go to darker-skinned males, you get around 12 percent. And when you go to faces like mine, darker-skinned females, you’re at around 34, 35 percent error rates in aggregate. If you disaggregate that and you look at the darkest-skinned females, the highly melanated like myself, you actually had error rates as high as 47 percent in commercially sold products, which for me was really surprising because these systems were doing gender in a way that reduced it to a binary.
So you have a 50/50 shot of getting it right by just guessing. And so I sent the results to the companies to see what their response would be. And it actually turned out to be something many people were overlooking. So after the study came out, companies released new systems showing some marked improvement. But even if these systems are more accurate, how they’re used is just as important. So for example, IBM made maybe a tenfold improvement on their worst-performing group. But this summer, The Intercept came out with a report showing that they had secretly equipped the NYPD with video analytics that could search for people by their skin tone, by their facial hair, and by what they were wearing. So essentially tools for racial profiling, right? So it’s not just a question of how accurate these systems are, it’s a question of how they’re being used.
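To make the disaggregated evaluation she describes concrete: the key move is to compute error rates per intersectional subgroup rather than one aggregate number. The sketch below is a generic illustration of that idea, assuming you already have a table of ground-truth labels and model predictions; the column names, toy rows, and pandas-based approach are hypothetical, not the actual Gender Shades methodology or data.

```python
# Illustrative only: aggregate accuracy can mask large subgroup disparities.
import pandas as pd


def error_rates_by_subgroup(df: pd.DataFrame) -> pd.Series:
    """Return gender-classification error rates for each skin-type/gender subgroup."""
    errors = df["predicted_gender"] != df["gender"]
    print(f"Overall error rate: {errors.mean():.1%}")
    return errors.groupby([df["skin_type"], df["gender"]]).mean()


# Toy, made-up rows standing in for a benchmark with ground truth and predictions.
benchmark = pd.DataFrame(
    {
        "gender": ["male", "male", "female", "female", "female", "male"],
        "skin_type": ["lighter", "darker", "lighter", "darker", "darker", "lighter"],
        "predicted_gender": ["male", "male", "female", "male", "male", "male"],
    }
)
print(error_rates_by_subgroup(benchmark))
```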
HEFFNER: Well, thank you, Joy, because that really is an illuminating overview and really informative for our viewers who are not familiar with AI. And let’s talk about the policy implications, that is, what is most meaningful about the ways in which those algorithms, if not audited, if not improved from what you described, could really cause harm to people.
BUOLAMWINI: One of the reasons I was quite concerned is I learned that in the US, there are no federal regulations for facial recognition technology. So you have…
HEFFNER: To this day?
BUOLAMWINI: To this day, there are still no federal regulations. So you have a space where companies, right, can sell systems to government entities and other types of organizations without any kind of oversight. So let’s talk about the implications of this. Here’s one example: you have a company called HireVue. They have over 600 clients, including companies like Unilever and Nike. They use AI to help with hiring decisions, and their own marketing materials say they use verbal and nonverbal cues to help you better understand people’s problem-solving ability and all sorts of other things from their facial movements and other cues. And in reports about the system, they say the way they train the system is by looking at current top performers. So given everything we know about how bias can be reflected when you have homogeneous groups of people in the data set, a worry for me is that this system, which is being deployed to hopefully increase diversity and reduce bias, could actually do the opposite and be in breach of Title VII of the Civil Rights Act, right? Which says you can’t discriminate by skin type. We show these algorithms can have issues with skin type. You can’t discriminate by gender. We show these algorithms can have these sorts of issues. So here you might not necessarily know this is even going on, which is another issue and a place where policy can come in. Policy can say we need affirmative consent.
HEFFNER: Right.
BUOLAMWINI: If my face is going to be scanned in the first place, do I know? Let’s look at Facebook. Facebook has one of the largest stores of labeled face data. How and why? Well, we’ve been uploading our images and tagging people, right, and this has enabled Facebook to get very valuable information and develop facial analysis technology. In 2014, Facebook came out with a paper called DeepFace that showed a marked improvement in facial recognition abilities. And where did that marked improvement come from? Having access to more data, our data. So now Facebook actually stores a unique faceprint of your face, your image, right, while you’re on the system, and you might be able to opt out if you go to certain settings, but that doesn’t mean they’re deleting your biometric information. This biometric information could be used by law enforcement, it could be sold to other companies, and we don’t even have a sense of what’s going on in the first place. Going back to the hiring example: you show up, they say, oh, we have this cool new app, just do the interview and you should be fine. Do you even know they’re running AI analytics in the first place? And if you get a poor decision, right, how do you contest it? So that’s another place that policy can come into play, when it comes to due process.
HEFFNER: Right.
BUOLAMWINI: Because right now if you don’t know there’s AI in play who do you go to? What do you ask?
HEFFNER: Now that organizations like the ones you mentioned, but also Facebook and the newer companies, have this storehouse of data, how are you advocating that it be managed ethically, now that it is the property of the companies?
BUOLAMWINI: Sure. One of the first things is affirmative consent. Do we know how these systems are being…
HEFFNER: But it almost needs to be retroactive, to every image that was ever recorded. You know what I mean? Not just new consent.
BUOLAMWINI: True, but there are also systems that we don’t necessarily know how they’re being employed right now. So even in New York there’s a bill that has been proposed addressing places like Madison Square Garden, right, where reports have shown facial analysis technology is being used even though its use hadn’t necessarily been disclosed in the first place. So I do think it’s important that we say, even though these systems have been deployed without our knowing, starting at a place where we’re aware of their use is absolutely important. Now there’s another step. You know where it’s being used, right? Do you have voice? Do you have control? Do you have consent? So for Facebook, what I would like to see is the option to purge your biometric data, right? So you don’t necessarily have to say, I’m using the service, and by my using the service you automatically get to keep and store biometric data about me.
Right. So even though they’ve been doing that in the past, it doesn’t mean we can’t change the practice moving forward.
HEFFNER: How about those other companies, the ones to which you sent your research?
BUOLAMWINI: So I sent my research to Microsoft, I sent it to IBM, I sent it to Face++. Microsoft and IBM got back to me saying this is something we care about. And again, I mentioned The Intercept article that came out about IBM; they are selling these systems. And so in some cases when I talk to companies, I use the term the under-sampled majority, because if you’re missing women and people of color, right, this isn’t necessarily a minority concern. But again, are you…
HEFFNER: So how do you ensure that the new algorithms they’re coming up with are ethical?
BUOLAMWINI: So when we’re thinking about ethical algorithms, we have to think about systems and not products. If you just think about a product, right, then you’re like, okay, what kind of bias does this product have, and so forth. But the questions we really should be asking are: how are these products being designed, developed, and deployed? What are the mechanisms for oversight or accountability in the first place? So to deploy ethical facial recognition, we’ve been working on something called the Safe Face Pledge, and with the Safe Face Pledge there are four major things we’re asking for. The first thing is to show value for human life, dignity, and rights, and this means not developing facial analysis technology for use in lethal autonomous weapons. Let’s say there are categorical areas where we do not want to use the technology. This is one thing companies can step up and say: we won’t supply facial analysis technology in a way that could lead to bodily harm, right? Let’s make categorical bans. Those are some steps that you can take. The other thing is to address bias continuously, which means doing it internally, right, where you’re checking throughout the entire design, development, and deployment of these systems, and you’re not reacting the way companies had to when Gender Shades came out, but it’s actually part of your process, right? Not an end product, but a process of continuously checking for bias. The other thing is facilitating transparency, so submitting your models to the National Institute of Standards and Technology or other standards bodies, so we have a better sense of how they’re actually working, and also reporting on real-world deployment, because the benchmarks will only tell you so much. And then the final thing is actually embedding these types of practices into your contracts, right? You might say, we provide a service that another company can integrate into their product, and this means bias can propagate rapidly, right? So a company like Amazon might have a cloud service that gives you facial recognition, and then you have thousands of companies that are using that service. So if you can say, okay, we also require ethical use of the systems we’re providing, I think that can go a long way.
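One way to picture “addressing bias continuously” as part of the process rather than a one-off reaction is a release-time check that fails when the gap between subgroup error rates exceeds a chosen threshold. This is not the Safe Face Pledge’s prescribed mechanism or any company’s actual pipeline, just a minimal sketch of the idea; the threshold, group labels, and numbers below are assumptions.

```python
# Illustrative only: a regression-style bias check that could run on every model release.
# It fails loudly if any subgroup's error rate drifts too far from the best subgroup's.
from typing import Dict

MAX_ALLOWED_GAP = 0.05  # assumed threshold: no subgroup more than 5 points worse than the best


def check_subgroup_gap(error_rates: Dict[str, float], max_gap: float = MAX_ALLOWED_GAP) -> None:
    """Raise if the spread between best and worst subgroup error rates exceeds max_gap."""
    best = min(error_rates.values())
    worst_group, worst = max(error_rates.items(), key=lambda kv: kv[1])
    gap = worst - best
    if gap > max_gap:
        raise AssertionError(
            f"Bias check failed: '{worst_group}' error rate {worst:.1%} is "
            f"{gap:.1%} above the best subgroup ({best:.1%})."
        )
    print(f"Bias check passed: worst subgroup gap {gap:.1%} <= {max_gap:.1%}")


if __name__ == "__main__":
    # Made-up numbers echoing the kind of disparities discussed earlier in the interview.
    try:
        check_subgroup_gap(
            {
                "lighter male": 0.01,
                "lighter female": 0.07,
                "darker male": 0.12,
                "darker female": 0.35,
            }
        )
    except AssertionError as failure:
        print(failure)
```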
HEFFNER: Joy, we’re talking about an Internet Bill of Rights in effect broadly, or at least the right of the user to know how the algorithm is functioning and how it might practically affect them in their life.
BUOLAMWINI: And not only to know, to have agency.
HEFFNER: And to have agency. So how do you collect those imperatives, because they really are imperatives as you described them, in a way that can be tangible beyond the internal conduct of these companies? That’s important, what we’ve talked about with IBM and Facebook, but beyond internal reform, what is your hope for external action?
BUOLAMWINI: GDPR, the General Data Protection Regulation that came out of the European Union, attempts to say what it looks like to empower a data citizen, right? So if you are a citizen of the European Union, these are certain rights that you have with your data. And that’s been interesting. You might have seen, earlier this year when it was enacted, maybe you got a ton of emails, right, telling you our privacy policy has been updated and so forth. And I believe people who are in violation of GDPR face a 20 million euro fine or four percent of global turnover. So it’s not without consequence to be in breach. What’s been interesting to me is looking at conversations that have been happening in the EU, at the UN. There’s the Montreal Declaration. There are quite a few declarations out there talking about principles for good governance of AI, or more ethical AI. They tend to come from European countries, right, or Western countries, and I’m quite concerned about the implications of having these frameworks, like a GDPR, that are focused on European citizens. What does it mean for the rest of the world? So let me give you an example. Let’s say you have data protections for European citizens, but companies want to go gather data, right? So where’s the next place you go? You go to the global south. You’re starting to see this with facial recognition systems right now, where you have Chinese companies going to African nations. So in one instance, we have a Chinese company going to Zimbabwe saying, we’ll give you, right, this facial analysis technology in return for something very precious, which is the data, right, the data of your citizens. So for me, what I’m starting to see is almost a parallel, right, a bit, to the transatlantic slave trade, where you have bodies, but now digital bodies, being sourced and exploited, right? And then being used in other systems…
HEFFNER: I had asked you when we met if you had seen Mr. Robot. Have you? I told you, I’ll invite you here if you go see Mr. Robot! (Laughs) Your Algorithmic Justice League is the antidote to the dystopian future you describe, where in effect the Chinese are harvesting not organs, but all the data associated with people, just what you described.
BUOLAMWINI: I call this the pending exploitation of the data wealth of the global south, but it’s actually happening.
HEFFNER: So whether you want to call it a dystopia or not, I’ll take that liberty and say that is a kind of dystopia…
BUOLAMWINI: That is happening…
HEFFNER: That is happening in real time. But from the American perspective, we have some minutes remaining, so my question more specifically is: who in American politics is demonstrably concerned about this issue and acting on it? Any particular politicians?
BUOLAMWINI: So this summer I actually had the opportunity to brief staffers of the Congressional Black Caucus and also shared some of my research findings with US Senator Kamala Harris, and she, along with seven other senators, wrote letters to the FBI, to the Federal Trade Commission, and to the EEOC, specifically asking that they look at the risks posed by facial analysis technologies: could these breach civil liberties and civil rights laws? And they absolutely can. So we do have people in Congress who are concerned and are pushing for more regulation, for more government consideration of these systems.
HEFFNER: Are there any draft pieces of legislation that would help?
BUOLAMWINI: Sure. So in 2016, when Georgetown Law released the “Perpetual Lineup” report, right, the one showing over 100 police departments using facial analysis technology, unregulated, unwarranted, all of that, they actually proposed draft legislation for facial analysis technology that could fill in this gap we currently have, where we have no federal laws.
HEFFNER: And what did that espouse?
BUOLAMWINI: First, it was to set a high standard for the use of facial analysis technology in the first place. Right? Is there imminent danger; is there an immediate threat of bodily harm? Do you have a warrant to even use this technology in the first place? Another component is actually checking the accuracy of these systems, right? You say we want these systems to help, but are you sending parachutes with holes in the first place? So I think it was a good draft proposal, right, showing some substantive action that can be taken now.
HEFFNER: How do you see, if at all, the facial recognition concerns intersecting with the banking and financial sector? We’ve talked about it in the context of criminal justice and employment, and mentioned the EEOC and your correspondence with Senator Harris. But I’m going to the point of Mr. Robot, which maybe you’ll watch now (laughter). Given the inequities that exist in our society, unlike in the projected dystopia of Mr. Robot, where all debt is cleared and, you know, all bank accounts are, in effect, equalized, it’s more likely from the American perspective that those without means, or with less means, are going to suffer in the case of a hack, or in the case of an economic Pearl Harbor or 9/11 scenario. How, if at all, will AI be relevant to the future of the banking and financial sector?
BUOLAMWINI: Yes. So this summer I had the opportunity to share my research at a conference called Fund Forum International, where you have some of the leading hedge fund managers and others excited about the possibilities of AI for financial services, right? So whether it’s assessing how creditworthy somebody might be for a particular opportunity, as you have AI influence decisions about whether you have access to credit, access to loans, et cetera, any type of systematic bias that’s embedded within AI can lead to digital redlining: because of something about your identity, but not necessarily your own actions, you are denied access to something like financial services. I think another way to look at it, if we’re thinking about economic impact, is the use of AI to look at social media to infer something about somebody’s personality. So a recent example of this is a report that came out in The Washington Post, where a startup called Predictim allows parents to vet potential babysitters. What the babysitter has to do is submit their social media account information, and then the company looks through their social media to see how positive this person is, whether they have any indication of potential drug use, et cetera and so forth. So you have a system that’s doing some kind of data analysis on faulty assumptions, but then is actually impacting somebody’s real job prospects.
HEFFNER: We’re running out of time. I wonder what the implications are for Uber and a host of other companies when it comes to data analytics and the identification of prospective employees, prospective consumers. Just seconds left, Joy, but tell us…
BUOLAMWINI: Well, another thing to consider is the advent of self-driving cars. So if we’re talking about human-sensing AI and we’re talking about pedestrian tracking, what happens if you don’t track certain pedestrians in the first place, as in the self-driving car incident? And even if you are tracking pedestrians, there’s also the issue of moral dilemmas: if you have to choose between saving the driver or saving someone who’s outside, who’s making that decision, and are these determinations based on your value as a person as assessed by an AI…
HEFFNER: Joy. You run an organization with the coolest name of any guest I have hosted here going on five years and you’re also doing really important work. Thank you for that.
BUOLAMWINI: Thank you, appreciate it.
HEFFNER: And thanks to you in the audience. I hope you join us again next time for a thoughtful excursion into the world of ideas. Until then, keep an open mind. Please visit The Open Mind website at Thirteen.org/OpenMind to view this program online or to access over 1,500 other interviews, and do check us out on Twitter and Facebook @OpenMindTV for updates on future programming.