Wendell Wallach

Angels and Demons of A.I.

Air Date: April 16, 2016

Bioethicist Wendell Wallach talks about the moral challenges of Artificial Intelligence


HEFFNER: I’m Alexander Heffner, your host on The Open Mind. TED curator Chris Anderson joined us recently to consider the danger of artificial intelligence, namely its potential to drive away or make obsolete the moral compass of human beings and civilization as we know it. Of course sometimes we’re our own worst enemy, and we would rather not embrace the present reality, from the eugenics movement of Nazi Germany to the contemporary age of Trumpian bigotry. So I’ve invited today the leading ethicist in the arena of innovation. He’s going to help us understand the term techno sapiens, as he calls it, with our drones, our supercomputers, our designer babies, and now our 3D printers too. Wendell Wallach is the author of A Dangerous Master: How to Keep Technology From Slipping Beyond Our Control. He’s also a scholar at Yale University’s Interdisciplinary Center for Bioethics. So I want to ask Wendell: as he sees it at this juncture, what controls are being implemented, in his mind, to ensure that we’re guided by our better angels, as Lincoln would say, rather than our most, ah, villainous demons? Welcome.

WALLACH: Well, thank you very much for having me. I’m afraid I’m going to have to say not very many controls are in place. Even recently, the often referred-to Stephen Hawking editorial said, well, with all of these promises come these dangers; surely somebody is taking measures, aren’t they? And the answer was no, we aren’t. And so that’s my concern also, that there are an awful lot of benefits from these emerging technologies, that’s why they’re all being developed, but there are also risks, dangers, things that we could address but in most cases don’t, because in America we often wait till after something goes wrong before we put precautionary measures in place.

HEFFNER: Well, you were telling me, now reminding me, that you’re a Wesleyan alum, and they, in their own spirit, are activists, reformers, um, of social causes. This is its own social cause. So when you say we won’t or we’re unwilling to put those controls in place: when I had the cofounder of Twitter, Biz Stone, on the show, I asked him, is it possible to impose any kind of civic ethos or core in what you’re doing? I mean, ’cause if you’re going to scroll through Twitter, as an example of social media maybe run amok, uh, you’ll see examples of anti-social behavior antithetical to our values. So from the starting point of ethical treatment, how do you get, uh, companies, who are predominantly the sponsors of or the creators of this new technology, to define in their mind the moral compass that will drive the future of their products?

WALLACH: Well, it’s certainly not easy, but I think the main thing you can do is help them envisage how things could go wrong, and if things could go wrong, what that would do to their business. So in the case of our real estate meltdown, nobody really looked in advance at the derivative crisis or what would happen if real estate prices went down, or if they did, they did it so cynically that they just felt they could get away with it and maybe get a bailout or something like that, which they finally did. So I think similar things exist with technology, where there is a degree of responsibility coming out of many of the tech leaders. Particularly in the past year, it has been fascinating to watch people in the AI industry bring out these various warnings or concerns about everything from lethal autonomous weapons to technological unemployment to perhaps some future singularity, some future form of artificial intelligence that’s smarter than humans, and what impact that could have. And the same thing has happened in biotechnology. So in this past year, uh, there’s been a lot of attention given to this new way of editing genes, known as CRISPR-Cas9, which dramatically speeds up the ability to alter the genome in all of its forms, not just plants and animals but also the human genome. So a lot of scientists and some business leaders have stepped forward and said we should begin addressing this. That’s heartening.

HEFFNER: But in what way do you think they’re addressing it?

WALLACH: Well, I think right now they’re just waving a few red flags and perhaps looking at investing a little bit of money. But Elon Musk put ten million dollars behind the concern he’d raised about future artificial intelligence being, uh, smarter than humans. He gave the Future of Life Institute, uh, this money, they went out for research proposals, and they got three hundred back. They were only distributing at that point seven point two million, and they gave it to thirty-seven proposals. But the very fact that there was somebody going out for proposals, and that there were three hundred proposals, that’s at least a first step in doing some of the research and thinking through both the technical and policy issues that will come up in ensuring that artificial intelligence, for example, stays our good servant and doesn’t turn into our dangerous master.

HEFFNER: Right, so to your mind, what is the dangerous master?

WALLACH: Well, the dangerous master is technology that we just surrender to. The quote I just made actually goes back to the 1920s Norwegian peace activist Christian Lous Lange, who said technology is a good servant but a dangerous master. And I’ve juxtaposed that with a quote from, uh, a comedian that some of your older viewers might remember, uh, Professor Irwin Corey, who said we better change directions soon if we don’t want to end up where we’re headed. So the basic idea here is, how do we ensure that technology does not become the dominant, inevitable, dictating force of our future? Google cars, or self-driving cars in general, give us an apt metaphor for all this. Is technology moving into the driver’s seat as the primary determinant of human destiny? That, for me, is the dangerous master. What’s needed is more concerted effort on our part to try and help manage, shape, and also adapt to emerging technologies in a way that they stay our tools and do not dictate who or what we should be.

HEFFNER: They do dictate the mechanisms through which we learn, through which we, um, travel, um, through which we operate as a functioning society today. Uh, there was an interesting op-ed recently in The New York Times about, um, this kind of self-fulfilling prophecy: when you Google something for an answer, you expect to know it, and that passes for an informed citizenry. But when you really find that underpinning all the Googling is a lack of informed consent, a lack of knowledge, um, then are we not talking about the most important role for this technology being to democratize, um, information and skills, so that a wider spectrum of people can harness them in the future? Otherwise it’s really just gonna be the Musks and the Zuckerbergs who are in control of that destiny.

WALLACH: People readily see the benefits they’re deriving from Google, for example, but they may be oblivious to what’s happening to that data and what might happen to it in the future, how it might be used in ways that they would not appreciate. And it may be okay to the extent that we stay, uh, a free society, but you can imagine circumstances under which that knowledge about you could be used to destroy many lives, you know, particularly if we move down the road of fear of terrorism justifying, um, a new form of repression within America. So the difficulty here is that informed citizens can’t just be ones that are knowledgeable enough about the technology to get what they want out of it; they need to be knowledgeable enough about the technologies to truly recognize what could go wrong and to demand that their governments put in place the kinds of measures that allow us to maximize the benefits but truly minimize the risks of these emerging technologies.

HEFFNER: We have this fascinating if paradoxical reality, which is that we are, I think, for all intents and purposes, um, morally adrift, and, and yet the dominant thrust of what we do is not through our own control. You know, I say when I’m speaking on college campuses sometimes that we are in a faux-libertarian state with a corporate overlord that we sometimes forget, in the case of Amazon Prime or Facebook. My concern is, you know, if we’re so morally adrift, uh, that we’re poised to nominate as a major party candidate Donald Trump and accept, um, bigotry and hate-mongering, um, then by further relinquishing that human spirit, the angels, not the demons, uh, the technology is even more susceptible to, um, hijacking our civilization in a way that we haven’t seen before.

WALLACH: Well, that may be what we’re already seeing in this election. We have somebody who was created by social media, who knows how to utilize social media, so um…

HEFFNER: The Apprentice was a, a faux boardroom simulation of executive decisions that…

WALLACH: Right.

HEFFNER: In reality does not govern how one would operate in the situation room or the White House.

WALLACH: Right. And so I think that’s the real concern: we’re creating this illusory universe and treating it as if it’s the real one. And that’s not just a political reality that we’re grappling with today; that’s also true in terms of what we’re projecting for tomorrow.

HEFFNER: Mm-hmm.

WALLACH: And that’s true for nearly all the projections. It’s true for the ones that are being created by Hollywood, the singularity kinds of projections, which are often so off the wall that they don’t reflect any scientific reality. It’s true for the techno-optimists’ predictions that suggest we are going to reinvent our way into a utopian future. And it’s also true, I think, of public understanding, which is commonly so rooted in the present that it loses touch with the battles we had to win to get to where we are today, and with the fact that we are in the midst of a world that is radically changing in ways that will uproot our notion of what is normal now. I just happened to have seen a recent Pew survey in which 80 percent of Americans felt that their jobs would be more or less secure and the same as they are now 50 years from now. Most jobs today have been totally restructured by the advent of the kinds of technologies we’ve had over the last 10 or 20 years. So all of these models are naïve ways of grasping what we are in the midst of. We are in the midst of a transformative moment in history, and it could go in a lot of different directions. It’s largely a question of whether we’re going to just surrender to the forces of technological possibility, or to the ways in which technological possibilities push power into the hands of a very few, or whether we are going to demand some redress of that, some way in which our broader collective needs are met, and not just those that serve either some simplistic economic model of what capitalism should be or some naïve optimism about what a few people hope they will get out of a technological utopia.

HEFFNER: There is this moral minority, though. We talk about a moral majority, a moral minority, but the Musks and the Zuckerbergs, um, have a certain perception of the way that they can operate their social media communities, uh, to create, to build consensus. Uh, but it hasn’t been in any kind of formalized way. Do you see the potential there for creating a, uh, a kind of social, um, consciousness that, you know, can build on the foundation of that social contract that we entered into growing up in America as a democracy?

WALLACH: Well, I’m starting to wonder, perhaps like you, whether we can, or whether the, uh, the way the media gets dominated by certain groups or individuals, or even the way in which it’s being bifurcated, giving perhaps too much voice to some of the worse angels…

HEFFNER: Right.

WALLACH: Of our nature, uh, whether that can happen. But I think it usually does happen when there is a clear perception that the way society is developing is not healthy.

HEFFNER: Do—

WALLACH: Now on the other hand, maybe that’s what we’re seeing in this present presidential election,

HEFFNER: Right.

WALLACH: And that sense that our society isn’t developing in a healthy way, where on one hand it may be undermining traditional establishments, whom a lot of the public doesn’t see as having served them, doesn’t seem to alter the fact that it can still be manipulated by those who are masters of the media.

HEFFNER: Do you see potential for artificial intelligence to bring more equity into American society, or universally? In other words, you know, we think of technology as the vehicle through which we can enhance our lives more superficially, but then you think of the way that we moderate online posts so that, uh, offensive comments are removed from an article, uh, the way that we try to improve the discourse, um, through automated mechanisms,

WALLACH: Mm-hmm.

HEFFNER: In the same way that someone like Bernie Sanders, and a lot of those people who support Donald Trump, would say we need an artificial intelligence model to bring jobs back to this country, uh, to radically alter the economic landscape so that, you know, the concentration of wealth is not in so few hands. So I’m almost wondering, if there’s, uh, a deprived moral core in the United States, whether artificial intelligence can actually be our friend in furthering the objectives of, you know, non sibi and in loco parentis, uh, the golden rule…

WALLACH: Mm-hmm. Well, I mean, that’s a very interesting question, and there are many different layers to it, but I’m not optimistic about artificial intelligence in that way. I’m optimistic that self-driving cars can lower deaths on the road. I’m optimistic that, um, the homebound and elderly can perhaps get a kind of support that they are not getting now, though I think it’s sad that they aren’t getting that support now from human hands. So I see artificial intelligence as a set of useful tools with a broad array of functional benefits, but I don’t see that artificial intelligence is gonna directly invigorate our moral core. It might indirectly, though, particularly if, for example, what, uh, John Maynard Keynes referred to as technological unemployment comes to pass. That was his 1930s term for the 200-year-old Luddite fear that each new technology will rob more jobs than it creates. It hasn’t happened yet, but a lot of people believe, and I’m one of them, that it is actually different this time. And it’s not just artificial intelligence; it’s also that people are living longer and retiring later, and therefore not opening up jobs for the younger generation. So I think in that sense artificial intelligence could precipitate a bit of a social upheaval, if people start to see that the jobs they now so depend upon won’t be there in the future and demand that our societies give them a viable future.

HEFFNER: Well you said it, that really is the fundamental, uh, tipping point,

WALLACH: Yeah.

HEFFNER: That can lead to the turmoil, um, that you described.

WALLACH: Tipping points can go either way.

HEFFNER: Right.

WALLACH: So there, tipping points can be used to bring in new authoritarian regimes,

HEFFNER: Right.

WALLACH: Tipping points can be ways in which responsible leaders look at creative ways to remake and reinvigorate a society.

HEFFNER: Right.

WALLACH: And I think that’s a little bit about what we’re battling over in this election.

HEFFNER: Well, when you think of whether or not robotics or artificial intelligence can be um, used to advance what, what we think of as um, human values,

WALLACH: Mm-hmm.

HEFFNER: Um, and, um, dampen human suffering. You know, you don’t think of the example that Mark Zuckerberg uses, of creating a robot nanny who’s going to read to and have some sort of emotive relationship with his daughter. I mean, you know, you see the, the robotics as a way to enhance labor, um, enhance the bottom line of corporations, and not enhance the livelihood of, you know, the largest global community. And in that sense I have to ask you, Wendell, you know, when are we going to see a Geneva Convention, or some kind of international summit, uh, with definitions drawn out for the way we utilize, uh, technology in, you know, furthering, um, the human endeavor? Whether that is in the context of a 3D printer, or whether that’s, uh, the case of genetic modification or engineering. There doesn’t seem to be a unified way of thinking about this.

WALLACH: No, there isn’t a unified way of thinking, and, you know, one might be hopeful, but then there’s always the problem of whose values are going to be implemented by that convention. So I don’t see that we’re going to have one in the near term, but I do see that we’re already, for example, having discussions in Geneva over whether to, um, ban lethal autonomous weapons, weapons making life-and-death decisions.

HEFFNER: Drones are the most, uh…

WALLACH: It doesn’t have to be a drone; it can be any kind of a weapons system. You could mount a payload on this BigDog, you know, this headless dog that Boston Dynamics created, now owned by Google, by the way. You could load a payload on that, give it a GPS system, and have it find its way to a particular locale and then blow up when it gets to that locale. That’s kind of a dumb, um, lethal autonomous weapon, but we could have weapons like that. And the point here is not that this solves the broader values concerns that you’re raising, but if we don’t put in place a ban on life-and-death decisions being made by machines, then all bets are off in terms of the development of artificial intelligence anyway. But that…

HEFFNER: What do you mean by that?

WALLACH: What I mean by that is that the warnings are not being made because we should be scared of artificial intelligence in itself. The warnings are being brought up because we need to take care in how we develop this technology. As soon as this technology just becomes the servant of a new form of arms race, it becomes a form of arms race in which humans can’t really participate. There’ll be no virtue in being a soldier, because a robot will be able to respond in a few milliseconds, where it takes us roughly a quarter of a second to respond. So if we so imbalance or unbalance the development of artificial intelligence because we fail to put restraints on how it will be used in the military context, if we so undermine the humanity of war by making it quicker than any human can even act in it, if we give it an inhuman pace, then that will distort the way in which artificial intelligence is developed, and I don’t think we’ll even see the measures taken to, let’s say, ensure that artificial intelligence is safe, controllable, truly beneficial. So that’s just one crude example. I’m not sure yet that I’m seeing anything in the technology itself that legislates toward re-humanizing us, or at least reinvigorating our better nature, though there are clearly examples where that could happen. But there’s no guarantee of that. There may be a Geneva Convention, we, it…

HEFFNER: What, what’s an example? Oh you were about to say there may be a Geneva Convention.

WALLACH: Yeah, yeah. There may be a Geneva Convention if, um, there is cloning of the human genome, or there’s tinkering with the human genome and we get a lot of thalidomide babies, babies who are malformed because of that tinkering. There may be something that so outrages the public conscience that, um, there is a consolidation of the world around at least restricting, or taking more care in, how that technology is developed, and focusing it more on things that we can truly demonstrate to be beneficial. But that’s a far cry from what you’re asking for.

HEFFNER: Would you say that the eugenics movement engendered that, that same kind of outrage?

WALLACH: Well, it did historically. I mean, uh, but let us not forget that many of our most valued leaders were eugenicists. So eugenics is coming back. Um, we’re going to re-debate whether eugenics is bad in and of itself. Uh, but what history showed us is that eugenics became an excuse to do some really destructive things to humanity in the name of somebody’s idea of what would be better, of who deserves to exist and who doesn’t deserve to exist. And if the same thing happens in terms of people who are enhanced deserving to exist while us mere mortals maybe don’t deserve to exist…

HEFFNER: And that…

WALLACH: Perhaps that could, you know, stir the water up quickly.

HEFFNER: And that’s really the existential question that we’ll leave this conversation on. Because in effect, Wendell, there are moral relativities here: the drone that is hitting a chemical weapons factory, that is carrying out, um, a computerized assault on, you know, a segment of the population, may be preventing something more significant than the harm to the individuals being attacked in that instance. There is still a relative moral good in, you know, the use of a drone to, uh, stop, um, harm on a more massive scale. Or do you not see it that way?

WALLACH: No, I mean, this is the difficulty, and this is the difficulty with even, um, banning lethal autonomous weapons. It’s one of these cases where there are some moral goods that those weapons could be used for, but I think the short-term benefits will be far outweighed by the long-term risks, and it’s incumbent upon us to start to recognize that and to actively shape our future, not let technological possibility determine our future.

HEFFNER: And we’ll leave on that note. An intriguing conversation.

WALLACH: Thank you very much.

HEFFNER: And thanks to you in the audience. I hope you join us again next time for a thoughtful excursion into the world of ideas. Until then, keep an open mind. Please visit The Open Mind website at Thirteen.org/openmind to view this program online or to access over 1,500 other interviews. And do check us out on Twitter and Facebook @OpenMindTV for updates on future programming.