Making Humane Tech a Reality

Air Date: May 10, 2021

Algorithmic Justice League founder Joy Buolamwini discusses legislation to ban and redress the harm from weaponized data systems.


HEFFNER: I’m Alexander Heffner, your host on The Open Mind. I’m delighted to welcome back to our broadcast Joy Buolamwini. She is the founder of the Algorithmic Justice League, and she is the leading protagonist in the film Coded Bias, which you can find on PBS, Netflix, or any of your video providers. Joy, it’s a pleasure to host you again.

 

BUOLAMWINI: Thank you for having me back. I really appreciate it.

 

HEFFNER: And we first met at the MIT Media Lab, where Joy is a poet of conscience and of code and has done scholarship. And you could have seen her recently on Jeopardy, as I did, teaching us about algorithmic bias in a series of clues. Joy, you last joined us in early 2019, or late 2018 when we recorded. A lot has changed in terms of the amplification of your work and ensuring that legislators and regulators care about it. What do you think are some of the most profound changes that have occurred since we last gathered?

 

BUOLAMWINI: Yes. So in terms of major profound changes from when we recorded, we have legislation that did not exist in 2018. So for example, we have Minneapolis just in February banning certain uses of facial recognition technologies, joining Boston and San Francisco and over a dozen other cities across the nation that have put some kind of limitation on these technologies. And I certainly want to call out Portland, Oregon, because they have done groundbreaking legislation that focuses not just on government use, but also private use of facial recognition technologies, specifically following from the Americans with Disabilities Act to say, if this is a public space, right, this is not a place for facial recognition technologies. And on the state side, we’ve also seen progress legislatively. So in New York, they’ve suspended the use of facial recognition technologies in some schools there. And then in California, there’s also a suspension of facial recognition on police-worn body cams. One thing that still remains, which was the case in 2018, is we do not yet have federal regulations for facial recognition technologies, but we do have a very promising bill introduced by Senator Markey. It’s the Facial Recognition and Biometric Technology Moratorium Act of 2020. The ACLU and more than 40 civil rights organizations have signed a letter to the Biden-Harris administration urging that it be passed. And so from when we filmed in late 2018 to where we are now, we have seen an awakening, with people starting to have more of an understanding of what it means when the face is the final frontier of privacy, but more importantly, the need to push for algorithmic justice.

 

HEFFNER: And that is a testament to your diligent efforts and those of your colleagues. Your organization has been at the forefront of this issue, and I congratulate you on your really important advocacy. Do you attribute the fast-paced movement of reform in those places, New York, California, and, as you mention at the municipal level, Portland, specifically to the reborn criminal justice and civil rights movement of last year? Or do you think it was already active before that, and, you know, the racial justice and equality movement re-galvanized it or galvanized it anew?

 

BUOLAMWINI: So when I look at the space, right, with some of the legislation I’m talking about, you can see that the killing of George Floyd added fuel to the fire. So even the fact that last summer we had IBM, Microsoft, and Amazon back away from facial recognition technologies in different forms, you can absolutely trace a straight thread, right, to the racial reckoning. And even that racial reckoning, right, is preceded by many other events. So when I look at the push for racial justice, I’m very clear that you can’t have racial justice without algorithmic justice. And conversely, you can’t have algorithmic justice without racial justice. And we know that the fight for racial justice has been ongoing for quite some time. So when I look at the work of so many in the ecosystem, whether we’re talking about Data for Black Lives, Fight for the Future, the ACLU, the EFF, Media Justice, so many, there have been so many people pushing to resist these technologies. So I see it as a confluence that certainly has been galvanized by the events of 2020, no doubt in my mind, for sure.

 

HEFFNER: And what about specific acts on the part of companies that were abusing these technologies or misusing them in ways that were inhumane? When you think of the whistleblowers that have come forward and the exposés of ways in which companies have mined that data and used it nefariously, or, I don’t know if you agree with the word inhumane here, but used it inhumanely?

 

BUOLAMWINI: Yeah. So I mean, what we’re seeing are the receipts worldwide of how data systems can be weaponized, how algorithms can be used for oppression. And so I do think we’re in a place where there’s no longer a question of is there algorithmic bias, right, but really starting to contend with, what does it look like to have algorithmic harms? And we’re also seeing people pushing back. In the film we see the Brooklyn tenants, you know, tenant activists like Tranaé Moran and others, pushing back on the installation of a facial recognition system in their home by a landlord, right. And so when I see these sorts of moves, it is no longer the case that companies can just say, trust us, or insist there really isn’t a bias here. And we’re also seeing that researchers inside companies are facing opposition. Our research, for example, when it came out and we analyzed some of Amazon’s performance metrics, which, on par with their peers, were not looking so great, we received pushback, but we were doing this externally.

 

We’ve also seen pushback internally at Google, specifically with the ousting of Dr. Timnit Gebru, who coauthored the initial Gender Shades paper with me, and of her collaborator and fellow co-lead Dr. Margaret Mitchell. So there’s also real risk in doing the kind of research that we’ve done at the Algorithmic Justice League, that others are attempting to do within companies. But it’s also showing the power of that kind of research, where we can say, we’ve looked into it, we’ve done the analysis, and the analysis, along with the lived experience of the ex-coded, those who are being harmed by algorithmic systems, usually those who are already most marginalized by society, shows us that we’re no longer in a place where we can just say we will trust what companies say and that the AI technology is neutral. The conversation has moved beyond that point, where we’re not just talking about algorithmic bias, but we’re talking about algorithmic harms. And beyond algorithmic harms, we need to be talking about algorithmic redress. What happens when somebody is impacted by one of these systems? What happens when you’re assigned an automated grade that doesn’t reflect your performance? What happens when you’re denied medical treatment? What happens when you are not promoted or you’re not hired, or you’re fired due to algorithmic decision-making? And so, because we have AI systems infiltrating our real lives, it’s absolutely paramount that the people have a voice, and the people have a choice.

 

HEFFNER: And because we don’t have that Markey legislation as law, and because there are no federal regulations enforced, you’re describing, you know, probably years’ worth of human rights abuses and, you know, legal challenges that will be forged against companies that ought to be paying reparations for digital crimes, and that’s essential, but what may be more essential is getting that federal legislation achieved. You mentioned New York, California, and Portland, Oregon; those all happen to be more Democratic or liberal constituencies, not entirely, but they’re run by Democratic governors and mayors more so than Republicans. When it comes to the passage of legislation in this year, 2021, are you hopeful that there can be bipartisan compromise on this issue and that there can be some legislative output this year?

 

BUOLAMWINI: I am hopeful. And that hope comes from my own experiences of actually going to DC. In the film you’ll see that I’m testifying at a congressional hearing on facial recognition. And I was pleasantly surprised that lawmakers from both sides of the aisle were asking questions and very engaged in the conversation. And so I remember Jim Jordan and AOC both on the same page, right, when it comes to, we need pushback on facial recognition technologies, one from a privacy perspective, one from a civil rights perspective. I even remember one lawmaker asking something to the effect of, if you go to a gun show, right, can your face be tracked? And so everyone has a face, so everyone has a place in this conversation. I don’t think it has to be one side of the aisle or another when we’re talking about what the future of democracy looks like, which implicates all of us.

 

HEFFNER: Right. So tangibly now, take me through your process, because there is the template of the Markey legislation from last year, but there’s also the possibility of incorporating this into economic security or infrastructure or some existing legislation that may be a priority. But speaking from the perspective of trying to actually accomplish this federal legislation this year, and having a lot of partners now in the business community who have suspended bad practices and are engaged with you in trying to achieve this, what are the steps you’re taking?

 

BUOLAMWINI: Well, here it has to be a multi-pronged approach, right? We’ve talked about the passage of legislation at the municipal level, and I think that needs to continue. We have many examples of legislation that other cities are taking and modifying. So I do think you want to continue that push to put safeguards or to put bans in place where possible. So to me, we push where we can. And at the same time, I think we have to be really careful with this narrative of companies leading on legislation. We’ve seen legislation that’s been introduced by various tech companies, and unsurprisingly, oftentimes the ex-coded, right, those who face discrimination and algorithmic harms, are not well considered with this kind of legislative effort. So who’s actually holding the pen and writing the laws makes quite a bit of difference. We are seeing a push towards business practices with responsible AI or ethical AI, but we also have to make sure that that’s not simply lip service, to say we acknowledge the problem, but our actions show us otherwise. So we see, again, the ousting of people like Dr. Timnit Gebru, or the attempted silencing of critical research. And so I do think we want to take a multi-pronged approach. At the federal level, again, we have the Facial Recognition and Biometric Technology Moratorium Act. I do believe that creates a great starting point for pushing forward legislation that focuses on facial recognition technology, so we can halt what we’ve already shown, right, to have racial bias, to have gender bias, to have age bias. It’s already led to false arrests. You had Robert Williams detained for 30 hours and arrested in front of his two young girls, you know, due to a false facial recognition match. And so the examples of the harms are already out there. People are already experiencing them. It’s not just the case of one bad algorithm gone rogue, but we are seeing how systemic racism can become systematized in algorithmic systems.

 

HEFFNER: So your approach basically is you’ll take the reform piecemeal or comprehensively on the federal level. You’ll encourage steps in those directions. It doesn’t all have to be in one landmark legislative act. It could be connected with legislation for infrastructure or, you know, economic security. It could be incorporated into some reconciliation in the future. You see a multitude of ways and avenues for achieving it. But you mentioned all the harms that have already been inflicted. If you’re living in a state that doesn’t legally have those vehicles for you to pursue recourse, what are ways that you’re recommending folks not just petition their government for legislative change, municipal or state law, but actually be part of a solution, to demand that they are either compensated or acknowledged for the harm that’s been caused? There are, of course, a plethora of class action suits on any given day, but is there a way that you’ve conceived of how those people who’ve been harmed, who want to pursue damages, ought to do so?

 

BUOLAMWINI: Now this is a great question about what recourse looks like for algorithmic harm. We are working on a project at the Algorithmic Justice League that focuses on how you identify algorithmic harms in the first place. How do you discover them? How do you report them? And then also, how do you have pathways for redress? And it truly depends on the type of harm there is. One thing that everybody can do, no matter where they are or what their background is, is to share their story. And it might seem very simple, but it’s also extremely powerful. In the film Coded Bias we see how my sharing my story of coding in a white mask eventually leads to starting the Algorithmic Justice League. The importance of sharing your experiences, your various stories, or sharing those of others that you hear about, is that oftentimes we hear at the Algorithmic Justice League, oh, I thought this was an isolated event, or I had a suspicion something was happening, but I didn’t really know what it was, or I didn’t have the background to really investigate it.

 

And so I do think, again, amplifying what these harms are, keeping your receipts, recording them, allows us to establish the case for legislation, but also for pathways to redress. And so that’s something that everybody can do. If you want to learn more about what algorithmic harms look like in the real world, whether we’re talking about criminal justice, or finance, or healthcare, we have so many examples in the film Coded Bias, and it’s exactly to get at this point, right, where people have a voice and people know that they’re not alone in experiencing these harms. And what we’re doing at the Algorithmic Justice League is working, again, on a system that allows people to report these harms and seek redress. That will take some time to put in place. And as we’re putting that together, sharing those stories continues to be a vital and essential way to resist.

 

HEFFNER: Joy, let me ask you about the vaccine, or vaccination passport, conversation and debate. Where do you stand on requiring vaccine passports, either internally, you know, within the domestic United States, or internationally? There are opponents of the passports who believe that that information will be used in a discriminatory fashion, but I wanted your assessment of the landscape.

 

BUOLAMWINI: Yes, no, I absolutely see the motivation behind the passports, but I absolutely support the concerns where we can end up with COVID creep, right? So we’re bringing technological solutionism to areas that require more than a technical fix. So I think the risks that come up from collecting that kind of data, and then using that data to inform access, are going to be filled with all kinds of discrimination and prejudice because of the systems in which they are operating. And I do not blame people who say, I don’t want my data there, I don’t want to be tracked, I don’t want to be surveilled. I think it makes a lot of sense.

 

HEFFNER: So is there a way, you know, in overcoming this public health crisis, to ensure that people are vaccinated, even if you’re not using a passport system, using some kind of system that can be ethically employed and deployed so that we have the safety and security in our communities to know, you know, who’s vaccinated? It strikes me that if the passport isn’t the solution, there still have to be ways to measure public health, because we got into this crisis in the first place because we didn’t follow the mitigation, and actually elimination, model of countries like New Zealand and Taiwan and Singapore. So if we’re not using the passport, it strikes me that we need something.

 

BUOLAMWINI: No, I absolutely agree. I wish I had a magic wand to say, here is an alternative solution. I don’t, but I will say, in terms of trust, one thing that we’ve come up against, right, is we don’t trust other people. We don’t trust what people are saying. And so we try to enforce trust, right, with these kinds of vaccine passports and protocols. So my approach here would not be to go with a process that requires surveillance.

 

HEFFNER: So are there systems you can implement that would not be surveilled where you would still in effect testify to the fact that you were vaccinated?

 

BUOLAMWINI: Yeah. And that’s why I’m saying that that is the system of trust, right. You know, that would be a risk, where somebody could potentially lie, but I do think that could be an alternative.

 

HEFFNER: Right. It does seem, though, that if the pandemic persists for years, I don’t know how much trust in the abstract, and I know you’re not talking about just the abstract, but how much we can rely just on the principle of trust, right? I mean, in the sense that we’ve experienced COVID for over a year now, and in all likelihood, even with the advent of these new vaccines, there are going to be, you know, public health concerns for a long time. I’m just wondering if there’s a way for those who are engaged in the process of vaccinating. We already know that Walgreens and CVS are collecting data on people who are getting vaccinated. I mean, there are war chests of data. The question is how you can have efficient, you know, systems that are not going to surveil, but still have some trust in outcomes, that is, you know, data with trust. And I’m just trying to understand how the data can be amassed, how we can know each person is vaccinated, in any kind of system that is not surveilled, that has a basic level of trust, but where there is some storage. I mean, can you have storage without surveillance, I guess is my question.

BUOLAMWINI: Yeah. I really appreciate the work that’s coming out of Data for Black Lives, which talks about not using data as weapons and the ways in which we can also use data to serve communities. So one of the things we saw with COVID, right, was the fact that the statistics that were coming out oftentimes did not break things down by the impacts on communities of color, and we know communities of color are bearing the brunt of COVID, the spread of the disease. But I think the other part of it that’s also fascinating to me to see is that the very communities that have been most devastated are also the very communities that have been highly surveilled in the past and have seen the devastating effects of that. So that’s why I don’t see a surveillance-based approach being the answer. So there are ways you can aggregate the data, try to anonymize the data, but I do think you’re going to have to come up with a system that is opt-in, that is not based on coercion.

 

HEFFNER: Right, right. Anonymize is interesting. I think you possibly can store that information and have the essential knowledge about who’s vaccinated without having the more detailed personal data exposed and surveilled. Maybe there’s a way to do that. I know we’ve run out of time. Last time we were together I asked you if you had watched any of the Mr. Robot series, which has now concluded. I wanted to know if you’ve seen that since we last met, or if you’ve watched Devs or The One, all shows that make me think of your league when I watch them and say, what is Joy thinking about this?

 

BUOLAMWINI: No, I’m sorry. I still have not watched it. I went and I filmed Coded Bias.

 

HEFFNER: I know. Well, everybody should watch Coded Bias, but if you’re looking for an escape from reality that is still dystopian, then maybe you’ll watch one of those series.

 

BUOLAMWINI: A dystopian escape!

 

HEFFNER: Yeah. Right! Joy Buolamwini, founder of the Algorithmic Justice League, thank you so much for your insight today and for joining me again.

 

BUOLAMWINI: Alright. Thank you.

 

HEFFNER: Please visit The Open Mind website at Thirteen.org/OpenMind to view this program online or to access over 1,500 other interviews. And do check us out on Twitter and Facebook @OpenMindTV for updates on future programming.