About the talk
There is broad agreement that AI should be designed according to ethical guidelines and moral principles. How these can be brought into practice, however, often remains unclear, since no practical and concrete implementation guidelines have been established yet. Our panelists discuss how to bridge the gap between normative discourse and practical application in companies.
Rise of AI Summit 2020 | Berlin
Something probably everyone agrees on is that AI should be designed following ethical values and moral principles, but there are no practical and clear guidelines yet for how to do that. So how should we implement ethics in AI, and how can the gap between philosophical discourse and practical application in companies be bridged? These are the questions for our panelists. Carla Hustedt is the head of the Ethics of Algorithms project at the Bertelsmann Stiftung and has consulted the German Parliament and the EU Commission on the issue of algorithmic transparency. Also on the panel is Hiltrud Werner, member of the Board of Management of Volkswagen, responsible for Integrity and Legal Affairs. Dr. Andreas Dewes is the co-founder of KIProtect, which builds software for secure and privacy-focused data science and machine learning. They will be joined by Professor Dr. Markus Gabriel, chair of epistemology, modern and contemporary philosophy, and director of the International Center for Philosophy at the University of Bonn. Quite an exciting panel, moderated by an AI consultant, author and fellow podcaster. Benedikt has almost said it already: it is not enough to have a good AI, the main thing is to use it well. The floor is yours.

Also a warm welcome from my side to the ethics of algorithms panel, hosted by KI Park, which promotes the application of AI in Germany. If you, dear listener, have a specific question or want to learn more about the initiative, please visit KI Park or join the Q&A tomorrow at 5 p.m. For now, we are going to take our ethics of algorithms topic from a theoretical discussion to practical application. The guiding question: are our algorithms biased? I am going to start with you, Markus. Please tell us: what is ethics, and what, therefore, is the ethics of artificial intelligence?

Ethics is the philosophical discipline that studies the nature of moral facts and principles. Moral facts are facts about what we humans, insofar as we are humans, ought to do or not to do. What we absolutely ought to do is what we call the good; what we absolutely ought not to do is what we call radical evil; and the rest lies in between. The discipline of ethics tries to figure out what that is, and how universal values can be discovered in tandem with other disciplines. Ethics in the 21st century can only proceed in a transdisciplinary fashion, including, of course, politics, society and business. That is what we do in the discipline of ethics. Now, AI needs ethics for the simple reason that AI systems are used in the context of human-machine interfaces, such as newsfeeds and so on. These applications structure the data on the surface and thereby make it more likely that certain decisions will be taken rather than others. This is why AI systems are subject to ethical considerations, and the discipline of the ethics of AI has only just started.

Thank you for this introduction, Markus. Carla, what are the sources of
ethical problems occurring in AI systems?

When we talk about algorithmic decision-making, I actually try to avoid the term AI, because it gives the wrong impression of what these systems can and cannot do. We also call them socio-technological systems, for the reason that it is not just the technology that is the cause of failures or mistakes, but mostly the human decisions that lie behind it. It is the goals of the system; it is, of course, the data that goes into the training of the system, which can be distorted; and it can often also be the way the system is implemented, the way it is embedded. There are many applications in the health sector and in education, and it is still humans at the end who interact with the machine.

Thanks for that. We already heard in the hour before, with the example of medical applications, how this plays out in healthcare. Do applications exist that should be excluded from the involvement of algorithms from the beginning, for ethical reasons? I could think of cases involving minors, autonomous drones, elections, or the preselection of applicants. Hiltrud, I see you still have your mute on.

Do I believe from the beginning that this is the case, or does it need to be decided individually for each specific topic? For the automotive industry, for example, we do have dilemma situations in traffic in which codified rules of the kind just explained must not be violated.

Moving to our fourth participant, Andreas: do algorithms appear biased here and there because the
responsible data scientist has selected the wrong or non-representative characteristics, that is, the features, from the data?

That is only part of it. If you look at, for example, supervised learning, it matters what you want to predict. A good example would be hiring: say we have a system that tries to reproduce hiring decisions made by people, and we train the system by giving it historical data, that is, hiring decisions that were made in the past. If those decisions discriminated against women, or were shaped by biases in the process, the system can learn that too. The more data it has, the better it can predict the outcome, and if the information contains discrimination, it will of course also reproduce that discrimination. So yes, but it always depends on what the system can learn from the data we have provided.
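Andreas's hiring example can be made concrete with a small sketch (my own illustration, not code from the panel; the data is purely synthetic and the logistic-regression setup is an assumption). A classifier trained on "historical" labels that penalised one group learns a negative weight on the protected attribute, even though that attribute says nothing about skill:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic applicants: one genuine skill score and one protected attribute.
skill = rng.normal(size=n)
is_woman = rng.integers(0, 2, size=n)

# "Historical" labels: hiring tracked skill, but past decision-makers also
# penalised women. This penalty is the bias we do NOT want a model to learn.
hired = (skill - 1.0 * is_woman + rng.normal(scale=0.5, size=n) > 0).astype(int)

X = np.column_stack([skill, is_woman])
model = LogisticRegression().fit(X, hired)

skill_weight, woman_weight = model.coef_[0]
print(skill_weight, woman_weight)  # positive weight on skill, negative on the protected attribute
```

Dropping the protected column does not fix this if other features correlate with it, which is why Andreas stresses that the data, not only the feature selection, carries the bias.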
Thank you very much, Andreas. This concludes our first round. As you, dear listeners, will realize, it is not that easy in this long-distance fashion to have a real discussion between the panelists, but I propose that with the upcoming questions, if you have a different opinion on an answer, you just jump in. Moving back to Markus: will we ever get a strong AI, in the sense of John Searle's Chinese room, that will think like humans and will eventually overtake us in terms of intelligence? And if not, when it comes to the ethics of algorithms, should we maybe concentrate on the here and now, on the non-thinking, non-understanding, i.e. weak, AI?

Well, I reject the idea of strong AI, though for reasons very different from arguments of philosophers such as Searle's Chinese room argument. I do not think there will ever be such a thing as what Searle called strong AI, that is, human-level or even superintelligent systems, but the reasons I give for this are very different. The real reason is that intelligence is a feature of certain living systems: a system is intelligent insofar as it solves a given problem in a finite amount of time. For instance, you are hungry, and therefore you need to solve that problem, and you have evolved in such a way as to select the best solution from a potentially infinite solution space. You are more intelligent than some other system if you find the same solution faster. So we can measure that property and build models that replicate intelligent behavior in animals; that is a technical question, and that is nothing but AI research. And we have advanced way beyond the simple syntactic engines Searle had in mind. But as long as our systems are not literally based on neuronal, biological material, which they are not, they are not going to be intelligent in the strong sense. The real problems do not lie there. There are problems which are not soluble, because they are strictly dilemmatic, and which therefore cannot be automated. So we have good reasons, for instance, not to go to level-five autonomous driving in certain conditions, but rather to automate as far as ethically possible. That limits automation: there are ethical limits. And I think this is what we should discuss, the real impact of weak AI: the preselection of applicants, traffic, newsfeeds et cetera. In order to do so, we need to understand the integration of weak AI systems into the human lifeworld, because our intelligence has changed in the last decade. We have become more intelligent due to the use of these systems; that is the real intelligence explosion. And we have yet to find a way of dealing with the fact that we have made our ethics explicit. We now know much better how we judge, because we can see patterns in those data, thanks to the use of weak AI systems. This is where the action is.

Thank you very much for that perspective, which I personally cannot fully support; but nevertheless, do the other panel members maybe want to give their perspective as well? Carla, which organizations, and which persons in those organizations, are responsible for the implementation of trustworthy AI? Is it the producers of
AI systems, the operators?

All of them, actually. As I mentioned before, the sources of the problems are multiple, and so are the solutions. There are certain technical issues, of course, such as biases, that lie in the hands of the organizations and individuals who build the technical parts of the system. And those organizations and individuals also need to communicate how the system functions, what it can do and, in particular, what it cannot do, towards the people who then use the system. Our organization has frequently analyzed cases where things have gone wrong despite good intentions, and what we see in these cases are communication and misunderstanding problems between different stakeholders. The people who understand the social problems that need to be solved, in the education sector, in healthcare, in security and police forces, often see the technology as a simple fix for complex problems, and then do not take the necessary measures to implement the system well: things like competence building among the people using the system, things like transparency, which is not just a technical issue but is also about communicating with the people affected. So what we really need is better communication between those different stakeholders, with all of them understanding their responsibilities and the role they play in the life of the system. And of course we need politics to create the right framework to make all of this possible.

Okay, so everybody takes their responsibilities within the ecosystem and needs to communicate. Hiltrud, how does that work: trustworthy AI while working with more or less biased algorithms, which is what we are talking about today? What needs to be done so that we can get towards this trustworthy AI as desired by the EU Commission?

Yes, we have heard from Markus and Andreas already that it depends also on those who develop. So there are several factors that need to be considered, and Carla mentioned this already. We need a national or international institute of standards for these algorithms. We also need ethics training for developers and programmers of AI-related software, and we might also need, especially for developers and programmers of AI-related software in critical areas, some background checks. And we need audit concepts that check the quality of AI not only for technical but also for ethical quality.

Thank you for that, Hiltrud. Andreas, in Europe, and this is my personal feeling, we experience a kind of tension between the law on one side and morality on the other. We are legally obliged, just as one example, not to differentiate between
men and women. If AI systems go through certification in the future, will that mean that European AI systems will become more firmly streamlined in their thinking and actions, almost similar to, let us take the example of China, only on the basis of our values? Will that mean that eventually Europeans, too, would feel like Big Brother is watching?

I think what would definitely happen is that as we move to digitize more problems and more processes and use machine learning, or any kind of automated decision-making, we need to formalize them. That can uncover problems we did not know about, because so many processes are still done manually, and there already is discrimination today, there already are biases; and it also allows us to think in a more structured way about how to solve them. And if you look at data privacy, we can take that as a good example of how we could do regulation, because privacy is a fundamental right in Europe. We have derived European regulation from this principle: privacy requirements that apply whenever we process personal data, down to the technical requirements that are implemented by the engineers. I do not see a reason why it should not work like that for AI as well: starting from principles and deriving concrete technical steps to ensure that the technology is actually ethical.

Markus, coming back to you with a somewhat different topic: the police patrol more in areas where
there have been more crimes in the past. Future crime will therefore be correlated with certain residential areas, and eventually with ethnicity and even more. Should we as humans be involved in pre-crime, that is, in predictive or preventive police work?

Well, these are some tough questions. We know that predictive policing works in complicated ways. In the United States it has been in use for almost a decade; I used to live in San Francisco, and even back in 2013 there were apps, for instance from New York City, which gave you the probability of getting mugged in certain neighborhoods, according to algorithms trained on earlier data. So this is already in use, and it is highly problematic to a certain extent, because we do know that there are correlations between a certain population's socio-economic conditions and the probability of illegal behavior, and predictive policing can tighten those correlations. This might then lead to forms of institutional racism.

Let me just say what the mistake is. Many things are wrong with racism, but the fundamental mistake is that the racist confuses a correlation between the phenotype of a human animal and a certain kind of behavior with a genetic explanation. That is what racism is: if you think that a white or Hispanic or whatever person does a certain thing because she has that phenotype, that is just a bad explanation of what actually happens in reality. There sometimes are statistical correlations between the phenotype of certain populations and certain kinds of behavior, but the explanation for those correlations is purely sociological. This is how we refute racism. Now, the ethical problem with predictive policing, and in particular the use of Clearview AI, which has some particularly bad aspects: be that as it may, the general problem is that you reinforce these correlations rather than overcoming them. But we should not only criticize AI for that; we should rather turn it positive. There is a business case for a system that does precisely the opposite: why not use the very same method in order to go into a specific neighborhood where you measure these correlations and take appropriate ethical decisions? So predictive policing as it is now practiced is ethically harmful, but we can use the same algorithmic systems in order to do the morally good. And I think we need to change the discourse from a risk-assessment discourse, which is what we are mostly doing in the ethics of AI right now, to applied ethics training and ethics by design. This is how we prevent bad things from happening, not by criticizing them after the fact.
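Markus's point about reinforced correlations can be illustrated with a toy feedback-loop simulation (my own sketch, not anything discussed on the panel; the districts, rates and patrol counts are all invented). Two districts have the same underlying crime rate, but a single chance early report in district A shifts patrols there, which produces more records there, which in turn attracts more patrols:

```python
# Toy model of a predictive-policing feedback loop. Both districts have
# the SAME true crime rate; only the recorded crime differs, by one
# chance early report in "A". Patrols are then allocated by record counts.
true_rate = 0.05                  # identical per-patrol detection rate
patrols = {"A": 10.0, "B": 10.0}  # equal patrols to start
recorded = {"A": 1.0, "B": 0.0}   # a single chance early report in A

for _ in range(20):
    # Each patrol records crimes at the identical true rate, so records
    # accumulate fastest wherever patrols already are.
    recorded = {d: recorded[d] + patrols[d] * true_rate for d in recorded}
    # Next round, the 20 patrols are reallocated proportionally to records.
    total = sum(recorded.values())
    patrols = {d: 20.0 * recorded[d] / total for d in recorded}

print(patrols)  # district A ends up with about 3x the patrols of district B
```

The recorded data ends up "confirming" that district A is the problem area even though the ground truth is identical, which is exactly the reinforcement Markus warns about, and which a deliberately corrective allocation could invert.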
But time is running. Carla, do we need new fundamental rights for the digital age? And if so, what role can the Algo.Rules that you coordinated play as a solid foundation?

I do not think that we need new fundamental rights; we are actually quite well set in this regard. What we need to question, however, is whether the mechanisms we currently have in place to protect fundamental rights and freedoms still work in automated processes, and oftentimes that is not the case, for example because oversight bodies do not have the right to access certain systems, or do not have the right amount of people to deal with the increased efficiency. So we need to analyze whether these oversight mechanisms still work. We also need a strong civil society that can take on the role of watchdog organizations and can spark the societal dialogue about all the questions that were just raised. As was said many times in this discussion, the use of automation forces us to have this discussion again about what lies between the super evil and the very good. But what we see oftentimes is that when something goes wrong with technical systems, people say, okay, it went wrong, let us stop using the system, and then the debate is over, and we fail to actually take that opportunity. For that, attention has to be paid not just to the economic questions but to these ethical questions, and to build a strong basis for that we need competence building in society, so that citizens become stronger and can interact in the digital sphere.

Now, the second part of the question I did not answer with what I just said. The Algo.Rules do not define what fairness or unfairness is in a given situation, because that depends so much on the application case; rather, they give recommendations on how to deal with possible mistakes, for example by making sure that people are informed when a machine is making decisions about them, that the decision is explained to them, and that there are appeal mechanisms; questions of IT security and privacy are part of them as well. So those are mostly process standards rather than standards that describe the system itself.

Thank you very much, Carla. I heard my beeper go off, and that means that we need to slowly but surely move to our closing statements. I want to start with you, Hiltrud; I think you have all thought about something, and you have personal views that you are going to share with us now. One thought that I want to give
each of you is this: as I have been listening to the Rise of AI speakers this morning, there was a kind of general theme, almost like a red thread, which is always about us: we Europeans, we Germans, we make norms and we do ethics, while our American friends or the Chinese start doing things before us. So in your closing statement you can, but you do not have to, refer to this theme. Starting with you, Hiltrud.

What we have learned today is that we have to incorporate every stakeholder in our decision-making and in competence building for AI. And yes, the European Union has a very special chance, and challenge, here. Whereas a lot of the development in China is driven by the government, and in the US by corporations, we can drive everything that we develop based on the needs of the people. And I think we have good conditions for that: we have many ethical stakeholders in our society, from philosophers to sociologists, and also good supranational institutions. So I am pretty optimistic that our ethical standards will serve the people.

Thank you very much, Hiltrud. Andreas, moving to you with your closing statement.

It may seem that a lot of privacy concerns are holding us back in Europe, but I think we should not think about it that way, because it is really not the case that we can only have successful products and platforms, like Facebook and Twitter for example, if we accept that they violate our privacy. And
it is not true that we cannot have AI and machine-learning systems that automate a lot of things unless we accept either the surveillance paradigm or other paradigms, like the Chinese model of exploiting and using personal data. We have a very good foundation in Europe for doing this differently. So we should put the effort into being in control of technology, and not the other way around, and I think if we can deliver on that, it would be a big advantage.

Thank you, Andreas. Markus, your perspective?

Well, I think of ethical values as constitutively universal and global. I know some Chinese philosophers who are morally more advanced than some colleagues here. This is a global challenge for humanity, so I do not want to think about this as Europeans: if we come up with some value that is local, it cannot be an ethical value. Torture camps are as evil in China as they would be anywhere else, and that means that ethical value, if it exists, is universal. Anyhow, if there is anything that we can do specifically here, it has something to do with societal training and so on. And let us not forget that there are way more professional ethicists in the United States of America than we have in Germany, or for that matter in China; I know people in Shanghai who are fully trained professional philosophers. I think it is an illusion that we are the leaders in ethical training. We can become such leaders, for the reasons that are typically given, but I think it is also very crucial to see our own mistakes. After all, we do have racism in our Basic Law, in the text itself, which is why it is now being changed. And I think we should start by criticizing ourselves rather than fantasizing about evil China or the evil United States of America. I think the Chinese Communist Party is evil, but this has nothing to do with our Chinese friends.

Thank you, Markus. Final words for you, Carla.

An ethical development of technology is not just something that is morally right; I think it is actually the only long-term sustainable way to move forward with
digitization, because if we do this without the trust of the people, it will definitely backfire in the long term. And secondly, I have been following the development of AI ethics in Germany and Europe for the past four years, and I am happy that it is rising on the political agenda, but it is urgent that we move from all these buzzwords, like transparency, which can mean so many different things, to concrete action. That of course means that we need to find ways to deal with the mistakes that will always occur in these systems, and that we find regulatory mechanisms that allow transparency, oversight and accountability. But it also means that we need to find a vision of what digitization means in Europe that is not just defined in contrast to what is happening in the US and in China, because right now digital policy-making on the European level is often very reactive, always one step behind, and the reason for that is mostly that we look at the technology and then try to fix the mistakes that happened. We do not have a societal vision, and this is really key: we need to use technology as a means to dream, like we do with science fiction, but the focus should be the change we want to see in society, not the technology itself. We always need to see it as a tool in the end.

Thank you very much, Carla. Facebook, Twitter and Google have all switched to algorithms that now moderate the content of their platforms instead of people; I sincerely hope that you, dear listeners, have enjoyed me, a human being, moderating this panel. As I suggested, let me say thank you to our participants and, of course, to the listeners, and make one final call to action on behalf of our sponsor KI Park Germany, which invites AI practitioners, and especially startups, to participate in an AI ethics lab. KI Park wants to make ethical questions tangible for developers and bridge the gap between actual source code and normative ethics guidelines; a transfer and open-source project should help shape best practices and design frameworks for AI practitioners. If you want to join, please drop KI Park a message or join them on Swapcard, where you are at this very moment. I say thank you to our sponsor KI Park Germany, as represented by Deloitte, as well as to Fabian and Veronika from Rise of AI for making this panel possible. Looking forward to meeting and seeing all of you next time.