James Brusseau (PhD, Philosophy) is the author of books, articles, and media in the history of philosophy and ethics. He has taught in Europe and Mexico, and currently teaches at Pace University near his home in New York City. As Director of AI Ethics Site, a research institute currently incubating at Pace University, he explores the human experience of artificial intelligence in the areas of privacy, autonomy, and authenticity, as applied to finance, healthcare, and media. His current applied ethics project is AI Human Impact: AI-intensive companies rated in human and ethical terms for investment purposes.
About the talk
Will humans conform to AI, or will AI conform to us? One way for investors to participate in the decision is by privileging those AI-intensive companies that serve humanitarian purposes. To develop this investment strategy, several dilemmas and paradoxes of today’s AI ethics will be illustrated. Then they will be resolved within a set of categories for evaluation that produce real-world determinations about which companies are worth human-centered investment.
The question I'm asking is how we can understand the human and ethical issues that surround the application of artificial intelligence today — how we can understand them in an objective way that allows people to consider the kinds of effects artificial intelligence has when they make investment decisions. That's the overall goal. To get there, what I thought we would do is first look at a few pieces of artificial intelligence ethics, a few human dilemmas. Some of these — the first one is on the screen now — are thought experiments; others are real and occurring in the world right now. With those on the table, and with a sense of the dynamic, we will move on in the second part to talk about how we could develop a structure for understanding these dilemmas. In the third part we will actually use that structure, briefly, to score — or evaluate, or survey, or understand — first a fictitious smart glasses company, and then an issue from Tesla. So we'll have a fictitious case and a real case to see how this structure could work. That's the big picture for what I have planned.

Looking at this first slide: it seems to me this is a very interesting case. There have been several attempts to create smart glasses that have more or less failed — the Google attempt, the Snapchat attempt; we can debate how well those worked. There are startups too: there was one I was looking at, Focals by North, which I believe has subsequently been purchased by Google, and Facebook is also developing smart glasses, or augmented reality glasses — there are different ways to clean up that distinction. What is important is the idea that it is quite easy to imagine, with facial recognition technology coupled with the processing power of smart spectacles, a case like the one I have listed here. That is one where, as you walk around, your glasses recognize the faces of the people you pass, instantly run them through a Google search, and you know about everybody around you. In one sense that's terrific — a superpower you would like to have. But the problem, of course, is that if you can do that, then other people can do it to you. So there is a question here about privacy and about what we reveal about ourselves, and it is extremely difficult to weigh or maneuver given the way we usually think about the world.
So we'll come back to that. Another question involves not privacy so much as safety. This is a picture from what I believe was the first Tesla fatality — really a horror-movie type of accident. The car mistook the side of the truck for the sky, because it was an overcast, cloudy afternoon; you can actually see in the slide how the side of the truck, white as it is, could almost appear as a cloud. So, somewhat horrifyingly, the car did not even slow down. It went literally right underneath the truck, shearing off the top of the car — as I say, a kind of Hollywood fatality. What is the genre here? It seems to me it is an imbalance of power. On the one hand, this machine, the autopilot, can do superhuman things — the kind of computer that can calculate pi to some 31 trillion digits. But on the other hand, the car is incapable of avoiding an accident that even a five-year-old would be able to avoid; even a five-year-old on a skateboard, a board of some kind, would stop before running into a truck. So there's a human question about safety.

Then here we have a question about dignity. Like the Tesla case, this one is real and active; the debate is happening now. There are these psychological chatbots that do help, especially the elderly, who are frequently alone with depression, and there are studies indicating — not surprisingly — that these devices function better if those using them believe that the responses they are getting on their phone actually come from a real person. So on the one hand, it makes perfect sense to perhaps mislead the patient, because there is a positive value in that: the patient is better; the patient's depression is better managed. On the other hand, unavoidably, there is a kind of manipulation here — the patient is being told something that is not true, presumably for the patient's own benefit, but
it's impossible to escape the reality that the patient is being manipulated, or lied to.

Another issue, also in artificial intelligence health, is this question about explainability: how important is it that we understand how, say, an AI-driven diagnostic tool comes to the conclusions it does? I was working on a recent project with an AI startup, and what they were doing was using AI to analyze electrocardiograms, which trace the waves of your heartbeat. Unlike the Hollywood version, which is just one up-and-down zigzag, the truth is that an electrocardiogram is a set of large and small waves which are all related in various ways. And this is a perfect application for artificial intelligence, because there are so many small measurements that can be taken among the various parts of the waves. Those measurements can be analyzed, and if you have enough data you can begin to identify patterns associated with coronary disease or impending heart attack. In this case, the company is doing somewhat well at predicting future coronary disease and heart attacks, but because the AI's knowledge is built entirely upon correspondence and pattern recognition, the company is having a very hard time explaining why the machine produces a positive or a negative result. My question is: at what point do we demand to understand why we are receiving a warning that we might have a heart attack? A human doctor might say, for example, that we have high cholesterol, or might notice something specific in our EKG waves that they can point out to us — say, a weakness in one of the valves. The machine in this case is pretty much just saying yes or no. So where is the line? How much information do we need, and how much certainty about a diagnosis do we need, before we will say: I don't care whether I understand why I'm getting the diagnosis; I only care that I'm getting the right diagnosis. That is another issue. And now the last one we will look at — I think it's my favorite.
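To make the explainability gap concrete, here is a toy sketch — my illustration, not the startup's actual system — of a purely pattern-matching diagnostic. It compares a patient's interval measurements against past cases and returns a bare yes or no, with nothing resembling a doctor's "because." The feature set (PR, QRS, QT intervals in milliseconds) and all the numbers are invented for the sketch.

```python
# Toy illustration of a black-box ECG classifier: it matches a new
# patient's interval measurements (PR, QRS, QT, in ms) against prior
# cases and returns only a risk label, never a reason. All values here
# are invented for illustration.

from math import dist

# Hypothetical past cases: (PR, QRS, QT) -> risk label
TRAINING = [
    ((160.0, 90.0, 400.0), "no"),    # typical intervals
    ((155.0, 85.0, 390.0), "no"),
    ((210.0, 130.0, 470.0), "yes"),  # widened/prolonged intervals
    ((220.0, 140.0, 480.0), "yes"),
]

def predict(measurements):
    """Nearest-neighbour match against past patterns.

    Like the diagnostic tool described in the talk, the answer is bare:
    there is no 'because', only a correspondence with prior cases.
    """
    _, label = min(TRAINING, key=lambda ex: dist(ex[0], measurements))
    return label

print(predict((215.0, 135.0, 475.0)))  # -> yes
print(predict((158.0, 88.0, 395.0)))   # -> no
```

The point of the sketch is the return value: a label with no attached explanation, which is exactly the situation the talk describes for the ECG startup.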
Here we can imagine LinkedIn — and let's just take my case, for example. LinkedIn analyzes my academic and psychological profile and determines that I'm the kind of person who, let's say, does very well working by himself for long stretches of time. This is an extraordinarily important skill for scholars and academics, as I am: I spend hours reading a single paragraph, trying to get to the bottom of it. That's just the nature of the work; it requires one to work by oneself. That's my role. Well, imagine LinkedIn realizes these things about me. It will provide — or it could provide, ideally — very appropriate and gratifying job opportunities in one place or another. However, and here is the catch: if I feed this data to LinkedIn, it's also true that other companies will perhaps have access to it, and therefore the kinds of jobs that will be offered to me will not only be gratifying and good for me, they will be limited to that. That is, no company will want to hire me for the kind of job that demands, for example, people skills. I was reading an advertisement from an insurance company called Lemonade. They were looking for an officer-level hire, and the description demanded that the candidate be — of course, as you would imagine in the tech world — enthusiastic, energetic, a team player, all those kinds of things. Now, in 2020, I could write a letter to this company, and it wouldn't be entirely misleading: I could obscure the fact that I spend a lot of time alone and let them believe that at least part of the day I do well working energetically and enthusiastically in a team. But that kind of possibility — the possibility that I could make a career change — will be shut off to me. And it will be shut off not because of some dark force that wants to obscure my potential or paint me as the opposite of what I am; it will be shut off because of how well LinkedIn is satisfying me — because of how well LinkedIn is orienting me and guiding me toward the kinds of professional opportunities that fit my academic work history and my psychological profile. It's the strange situation where you have what you want, but you're bound to it; you can't escape it. And, as I note at the bottom of the slide, it's easy to move that dilemma to something like Tinder: you are constantly matched with the kind of partner you enjoy being with, but you can't try anything new; you can't escape. I'm not sure if that's good or bad, positive or negative, but it is definitely a human aspect and an ethical aspect having to do with our autonomy, our freedom — an ethical aspect that needs to be accounted for if a company is going to understand well how its product will actually work in the world in the medium and long term.

That's what we're looking at doing in this presentation. We're going to talk about how the novel ethical dilemmas surrounding artificial intelligence can be incorporated into an investment strategy — that is, how information about these dilemmas can be presented
to investors in a way that helps them make decisions. Hence the immediate question: why should financial professionals care about this? Why should they look beyond simple financial returns? For example, why do we care whether Tesla is producing safety, if they're making profits? Why do we care if LinkedIn is limiting an individual's ability to try new and different things, if the users are happy and profits are going up? Well, there are two reasons, both having to do with this general truth: investment is increasingly driven by non-financial criteria — sustainable investing, responsible investing, ethical investing in general. What these approaches have in common is that they add, or seek to add, to the financial reports at least a human and ethical element in the investor's understanding of a company. Traditionally — to take the ESG case, for example — this has been done by setting environmental, social, and governance concerns apart from financial concerns and saying: let's look at companies in this other focus, in this other light. What I am trying to say here is that the kinds of human and ethical concerns surrounding artificial intelligence are like ESG investing in the sense that they are humanistic and non-financial. But they are unlike ESG approaches in that the kinds of dilemmas we have surrounding artificial intelligence companies — companies that work with artificial intelligence at their core — are very different. And for that reason we need a new structure for understanding the ethics of artificial intelligence, one that can then be used in a way parallel to, let's say, ESG reporting.

Then, to wrap this up: there is some evidence that investors who incorporate the ethical and human aspects of companies into their decisions have better long-term returns, and it is also certain that investors who understand these kinds of factors are more free — they have more opportunities in the marketplace to invest in those companies that accord with their own personal values, or the values of their asset class. A good example would be Federated Hermes, out of London. They consider themselves stewards of their investment portfolios, and what they try to do is not only get good returns; they also try to say: we will invest only in those companies which meet certain humanistic and ethical criteria. What I want to do is assist them by presenting ways of using that kind of investment strategy as applied strictly to artificial intelligence companies, or companies that work with artificial intelligence at their core. Hence the title of the paper, AI Human Impact — that's the name I have for this strategy — and what we are doing is rating AI-intensive
companies in human and ethical terms for investment purposes. How are we going to do that? The first thing we need is a set of principles, a structure for looking at these companies — a structure for understanding. And the good news is that we have a lot of help. As noted in the essay here — from last year, I believe — there have been 84 distinct publications by serious academic institutions, private companies, and research institutions: 84 different documents presenting AI ethical principles. AI ethics principles have been the Wild West for the last five years, with many different companies and groups presenting their own ideas, many divergent views. But what we have seen over the course of the last two years, or maybe even just the last year, is a convergence on a few sets of key, central principles. I have two of them here: those of the German Ethics Commission, and, even more importantly, the Ethics Guidelines for Trustworthy AI published by the European Commission's expert group. More and more, attempts to understand AI-intensive companies in ethical and human terms are centering on this consensus, and the various lists are very similar to the Ethics Guidelines for Trustworthy AI.

Now, next to the Trustworthy AI guidelines you see the principles I am using in AI Human Impact, and I believe this set is just about as close as you can possibly get right now, in late 2020, to a mainstream or central listing of the principles professionals use to evaluate AI-intensive companies in ethical and human terms. You can see, if you look across: from the German Ethics Commission there is self-determination — their word for autonomy — and from Trustworthy AI, autonomy; and you can cross-check the list for yourself: dignity, privacy, and so on. What I have in my list is almost identical to the other two lists, or the combination of them — as central as you can get.

I have two points to make about my list. The first is that for everybody doing AI ethics and AI humanism, privacy is extraordinarily important, and this is a good example of how different this world is from the ethics of traditional industrial companies. Think, for example, of Henry Ford, who built the Model T — "you can have whatever color you want, as long as it's black." Ford, and assembly-line industries like his, had no interest in the personal information of consumers. They did not want to breach the privacy of, or personalize to, individual consumers — just the opposite. What Ford wanted, and what is wanted by companies within the industrial economy generally, is homogenization, not personalization: they don't want to know about individual consumers; they want to sell one thing to everybody. By contrast, artificial intelligence companies — those that work with AI at their core — function through personalization, not homogenization. That is a critical example of why we need, in my opinion, a new way of talking in ethical and humanistic terms about AI-intensive companies, a way that, while it overlaps with traditional business ethics, is significantly distinct from it.

That's one point. The second point I wanted to make on this slide, very quickly: the EU — the Europeans — are not big on how well an AI performs. Why is that? Because academic and public institutions generally understand the task of AI ethics to be protecting consumers from what can go wrong. That's not my approach. I want to protect consumers, but I also want the AI to work.
I want it to serve consumers. So for me, how well an AI works — its performance — is an ethical criterion. A driverless car that works better than the ones we have now deserves a positive ethical score. Performance matters, and if you're going to be an investor, the category of performance should be included in the way you look at the world.

To conclude this part: we have a set of principles for evaluation — a structure for evaluating companies in ethical and humanistic terms when those companies function with AI at their core. What we can do is go through and ask: how well does this company do in terms of respecting human autonomy — or does it disrespect human autonomy? What do the company and its AI applications do for privacy, for fairness, solidarity, sustainability, performance, safety, and accountability? Going through and measuring those things is the way we can develop an AI ethics score, or an AI ethics survey of companies, that would be useful for investors — and also for consumers and users in general — a way of understanding what these AI applications are doing to us, not just with us.

So how is this going to work? Go back to the first case we had at the beginning: the glasses that could recognize faces and then display information about the people near us — say, the person in the elevator with us. Is this person available romantically? Does this person have a job that could help me advance? Is this person dangerous to me for some reason? This is the kind of information we could have flashed into our glasses when we step into the elevator with someone. It seems to me there are at least three categories with which to evaluate this kind of company — and this is just a fictitious AI startup for smart glasses; next we'll do a real company, Tesla. The three categories we should look at to grade this kind of company are privacy, performance, and autonomy.

Now, privacy. What is privacy? We know that privacy is control over access to our personal information; privacy is the power I have — that you have — to share or not share information with others. The test I always use to make sure people understand privacy is this: I say that Kim Kardashian is the most private person in the world. If you understand that, then you understand privacy. If privacy is control over access to your personal information, then someone like Kim Kardashian, who shares with everybody, is still exercising that control — she simply chooses to share with everyone. Most of the rest of us don't share so broadly, but regardless of how much information we share about ourselves, privacy is the ability I have to control how much you know about me.

And why do we want privacy? I think it's always important to know the reason. We want privacy so we can be someone — and be someone else. What do I mean by that? Take, for example, my life with my children. The way I act with my children, the vocabulary I use and so on, would be somewhat embarrassing were I to act that way with adults. That's part of being a parent: the way you act with your child, in the family circle, is not the way you act with other adults; it allows a goofiness, and so on. Similarly, the way I act with my family and friends is very different from the way I act with all of you now. Acting in these different ways depends upon my protecting information about myself. The reason I can act the way I'm acting for you now, in this talk, is that I am hiding from you the way I act with my children; I'm protecting that information. You wouldn't take me seriously if you knew those things about me, and the same would be true of any parent — all parents have a kind of goofiness they express with their children. So, as I see it, control over access to personal information is the way we are who we are, and the way we become certain people in the world — the way I become a presenter at a conference, or a colleague at a workplace, or a father in a family. That's why we want privacy.

And what these glasses do — this is why I have a little negative one down at the bottom of the slide — is rob me of the ability to do that. If you all had those glasses and could see everything about me, a giant spreadsheet of every aspect of my existence, it would be very hard for me to present myself as a professional in the way I'm doing now. So there is a tremendous privacy loss that goes with the proliferation of these kinds of glasses — a tremendous human loss: if you see all these things about me, I lose the ability to control who I am for you. That's the negative score that has to be attached to these companies and presented to investors when they decide whether to invest money in a smart glasses startup.
On the other hand, there's also performance — and here, a positive one. This is a thought experiment about a smart glasses startup, so assume it works really well; if it does, that's a positive score. People do want to know about those around them, and it's certainly true that this could be very useful for individuals. If you get into an elevator and your glasses notice that this person is a violent criminal — well, that's good stuff to know; I'd hop out of that elevator quickly. The glasses are performing, doing what they should, and that also is extremely important for investors to know. So when you look at this kind of AI-intensive company — one using artificial intelligence to report on the people around us — and you want to draw up an objective measure of how the company functions in humanistic terms, you're going to say: in terms of privacy, that's a negative score, -1; in terms of performance, +1.

And then there is that other question — my ability to make decisions about myself, for my own life: do these glasses help or hurt? I'm not sure; that's a hard question. On one hand, they certainly help my ability to make decisions for myself if they let me pop out of an elevator before I'm trapped in there with a violent criminal. On the other hand, if everybody knows everything about me, then it's much more difficult for me to do a presentation like this. So, thinking about the proliferation of this AI technology within smart glasses: does it serve human autonomy or not? I'm not sure what the answer is. But I am pretty sure that investors — say, a steward like Federated Hermes trying to decide whether to invest a large pension fund in this company — could pressure the company to help form an understanding of that question, and could pressure the company to report what they themselves believe: does their product help or hurt human autonomy? That seems to be important information for investors, and an AI Human Impact survey would provide it.

All right. So that's one example of how AI Human Impact investing works. We locate problems — specific areas where human and ethical dynamics are forceful, where the currents are strong, where people are being carried one way or another — and we help investors understand what those currents are and how they are working in the world. The NM on the slide means "not material." Some of the other categories might be material, depending on how we look at it, but to keep things simple: performance +1, privacy -1, and then the open question of autonomy.
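The survey structure just described — grade each category, mark the rest not material — can be sketched in a few lines of code. This is only my illustration of the talk's idea: the category names come from the slide's list, while the data structure and the particular grades (+1 helps, -1 hurts, 0 undecided, NM not material) are assumptions for the sketch, not an official rubric.

```python
# A minimal sketch of the AI Human Impact survey structure: each
# evaluation category gets a score (+1 helps, -1 hurts, 0 undecided)
# or is marked not material (NM). Category names follow the talk's
# list; the grading scheme itself is my illustration.

CATEGORIES = [
    "autonomy", "dignity", "privacy", "fairness", "solidarity",
    "sustainability", "performance", "safety", "accountability",
]

NM = None  # "not material" for this company

def survey(scores):
    """Return an investor-facing summary: materially scored categories only."""
    return {c: s for c, s in scores.items() if s is not NM}

# The fictitious smart-glasses startup, graded as in the talk:
# privacy -1, performance +1, autonomy an open question, the rest NM.
smart_glasses = {c: NM for c in CATEGORIES}
smart_glasses.update({"privacy": -1, "performance": +1, "autonomy": 0})

print(survey(smart_glasses))  # -> {'autonomy': 0, 'privacy': -1, 'performance': 1}
```

The design point is simply that the survey is a fixed, comparable checklist: the same nine categories are asked of every AI-intensive company, so investors can line companies up side by side.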
Now, that was a fictitious case. Let's move on to a real case: Tesla. There are two questions we could ask from our list of nine categories, and they involve safety and accountability. As I noted at the start, what happened is that this car, driven by the Autopilot, went right underneath the truck and sheared off the top — the Hollywood horror fatality. And Tesla, in lamenting the event, noted that it was the first known fatality in some 130 million miles driven on Autopilot.
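The comparison these figures set up can be made explicit with a line of arithmetic: convert each "miles per fatality" number into fatalities per 100 million miles so the rates line up. The figures are the ones cited in the talk; the conversion and formatting are mine.

```python
# Convert the talk's "miles driven per fatality" figures into a common
# rate (fatalities per 100 million miles) so they can be compared.

MILES_PER_FATALITY = {
    "Tesla Autopilot": 130e6,  # first known Autopilot fatality
    "US average": 94e6,
    "World average": 60e6,
}

def rate_per_100m(miles_per_fatality):
    """Fatalities per 100 million miles, rounded for display."""
    return round(100e6 / miles_per_fatality, 2)

for name, miles in MILES_PER_FATALITY.items():
    print(f"{name}: {rate_per_100m(miles)} fatalities per 100M miles")
# -> Tesla Autopilot: 0.77, US average: 1.06, World average: 1.67
```

On this count Autopilot looks safest — roughly 0.77 fatalities per 100 million miles against 1.06 for the US and 1.67 worldwide — though, as the talk immediately goes on to say, a rate built on a single fatality is not yet a human understanding of safety.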
So it seems as though the Autopilot in a Tesla is doing better than human drivers in the US, where there is one fatality every 94 million miles, and better than the world average, which is one every 60 million miles. But that's not enough for a human understanding of safety with Tesla. If we're going to have a human understanding of safety — and I see I'm at the end of my time here; it went much more rapidly than I had thought, I'm sorry about that, so let me finish quickly with Tesla — then we have to ask how safety should be understood. On the one hand, we have the numbers, the risk statistics. On the other hand, there is the reality that artificial intelligence, as I noted at the start, does some things extremely well and other things extremely poorly, and that imbalance also needs to be understood when we talk about safety for Tesla. So here is my report — and we need to go quickly now, because, as I say, I have overshot my time, I'm sorry. How does it fall out for Tesla? Autonomy: for sure, the car lets us do more with our lives — we can work on the way to work because we're not driving. Performance is good, at least in some sense: the car does seem to drive better than human beings. But what about safety? And — I don't have time here, maybe in the question and answer — what about accountability? I think investors could demand a response from Tesla on that front. So I have touched on these other issues briefly, but let me
conclude. There are three reasons why I believe this is an important step forward in the world of artificial intelligence, in the world of artificial intelligence ethics, and in the world of investing. It's important financially, because I believe outstanding returns will be yielded by investments that account for the human effects and the reputational, regulatory, and legal risks of artificial intelligence. It's important technically, because I believe this kind of approach will in fact catalyze more and faster AI innovation: it will help engineers and companies foresee areas of potential social resistance and rejection. And — just twenty more seconds — it will also help us map dilemmas and identify engineering opportunities, and so expand AI's potential. For those reasons, I believe AI Human Impact is an important step forward. Thank you for being here this morning.