Top Seven AI Breaches: Learning to Protect Against Unsafe Learning

Davi Ottenheimer
Founder and President at Flyingpenguin LLC
RSAC 2021
May 20, 2021, Online, USA

About speaker

Davi Ottenheimer
Founder and President at Flyingpenguin LLC

More than twenty years' experience managing global security operations and assessments, including a decade leading incident response and digital forensics. Co-author of “Securing the Virtual Environment: How to Defend the Enterprise Against Attack” (Wiley, May 2012). Author of the forthcoming book “Realities of Securing Big Data”: http://www.flyingpenguin.com/?page_id=87


About the talk

Davi Ottenheimer, VP Trust and Digital Ethics, Inrupt (Top Rated Speaker). For decades AI has promised great gains in productivity. However, many groups accounting for risks in AI have revealed stunning results, showing that without careful planning the security risks from automation far outweigh any benefit. This session boils down the rapidly expanding AI topic to clarify what can be expected, what has been delivered, and where things most often go wrong (the top seven breaches).


Hello, thanks for joining today. My name is Davi Ottenheimer. I'm the VP of Trust and Digital Ethics at Inrupt, and I'm going to talk today about the top seven AI breaches, which is really about learning to protect against unsafe learning, a bit tongue-in-cheek there. What we're talking about today is what should have happened with AI, what was expected, what actually was delivered, and then we'll dive a little deeper into examples of where things go wrong, so you understand essentially why we have these breaches of trust in artificial intelligence systems.

I have to give a trigger warning, unfortunately: a lot of the damage we'll talk about today is disturbing and upsetting, because the nature of the intelligence we're discussing can be very upsetting. With that, jumping in, I want to give you a couple of quick stories to set the context. The first story is the Tatra 87, which a lot of people apparently don't know about, though maybe people in car culture do. This is a very famous car in the sense that it was way ahead of its time: a very interesting concept in how fast it went, how good it was, its fuel efficiency and so forth. But when the Nazis invaded Czechoslovakia, they basically stole the design. You may recognize it in the Porsche and the Volkswagen, because that's where the Germans got the idea. It also killed more Nazi generals than World War II combat itself, and the Czechs called it their secret weapon, because so many Germans were killing themselves with this technology. That's important to keep in mind when people say they want to rush into new technology, to get hold of the latest and greatest because it has new features and benefits: they may end up doing more harm to themselves than good. It's an interesting use case because of the competitive nature of what goes on in technology; often you think you're going to be the good guys and do great things, and you end up harming yourself. Real talk: the Nazis actually had to ban their generals from driving the car because they were so bad at it.

The second story takes that even further. In 1977 John McCarthy, who some people say is the father of AI, or one of the fathers, offered what I call the 0.3 theory. He said what you want is 1.7 Einsteins and 0.3 of the Manhattan Project, and you want the Einsteins first. He also added the caveat that AI could take five hundred years, so who knows where we are in that span of time. But you definitely know that what you want is something more collaborative than competitive, as in my first story. If collaboration is the more natural state, if you see yourself on the same team as other people, all humans working together, then we get a lot of benefit out of AI. But in a competitive state, trying to build nuclear weapons and kill each other, you get to a very bad place, like the Nazis trying to destroy Czechoslovakia: people getting ahead of themselves with technology causes a lot of harm. I find this to be true more and more across science. The more I study all the examples of intelligence in use today, the more I find that collaboration usually brings the better outcomes, and wherever AI is used for competitive purposes, at least, the outcomes are pretty sad.

So in the modern context of that 1977 0.3 theory, AI should prevent harm. Look at afrofeminist concerns about data use: this comes up quite a bit in Africa, when you look at all the places data has been collected, how it's being used, what it should be used for, and how AI integrates with that. You want a 1.7 approach as opposed to a 0.3 approach, if that makes sense. Even more so when you say AI should stop catastrophes, catastrophic events for humanity: in human rights law, the UN has been talking about how to use AI, where it is ethical, where it should be used, and what that would look like. Again, take collaboration versus competition: AI today is often defined by states as a militarized competition, protecting some states against other states, so it sits in a very interesting and dangerous space.

From there, I just want to point to my 2016 RSA Conference talk, where I basically laid out Facebook as a doomsday machine. I've seen this topic grow since; The Atlantic even wrote a very good article about Facebook being a doomsday machine in 2020. In every situation where they found extremist violence, they found Facebook there, along with more and more evidence that the actual AI algorithms of these platforms are doing things that ultimately cause tons of harm. And I would say this is not that different: is AI today at risk of being that actual horrible thing that brings about the end of the world? We really have to think about what these breaches mean, very circumspectly, not as "are we winning this competition or that one," but more "are we doing good in the 1.7 sense, or are we precipitously going to end up in the 0.3 sense, launching a nuclear weapon we would not want," a sort of missile-crisis event, which is why I bring up 2016.

So today I want to talk first about what should be expected of AI, and I'll give this in three sections: incompleteness, imitation, and efficiency. The first one might sound a little harsh, but as we get into it, you'll see why.

Incompleteness goes back to Gödel, who famously showed that in any consistent axiomatic mathematical system there are propositions that can't be proved or disproved. If you don't read the German it's even harder to understand, but the practical translation is that a computer can't answer all the questions by itself; to achieve its objective it needs some extra information, some oversight, some external sources. Another way somebody put this to me recently: intelligence is the spice in the soup, it's not the soup itself. So you need the perspective that this is incomplete by definition. When you take simple thoughts and try to make them into binary circuits, as Shannon did in 1937 (he's credited with basically birthing the computer industry), you see evidence of this. "A causes B" is simple logic, and if you say "Bob buys toothpaste, therefore he'll buy a toothbrush," that is not the same as human reasoning. If you say a cloud leads to rain, you can't say rain leads to a cloud; violence leads to pain doesn't necessarily mean pain leads to violence, in logic. In how we represent the completeness of the human mind versus the way these circuits work, we have an incompleteness situation with computers.
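
To make that directional-logic point concrete, here is a minimal sketch (my own illustration, not code from the talk) showing that Boolean implication is not symmetric: there is an assignment where "cloud implies rain" holds but "rain implies cloud" fails.

```python
from itertools import product

# Material implication: "a implies b" is false only when a is true and b is false.
def implies(a: bool, b: bool) -> bool:
    return (not a) or b

# Enumerate the truth table to show implication is not symmetric.
for cloud, rain in product([False, True], repeat=2):
    print(f"cloud={cloud!s:5} rain={rain!s:5} "
          f"cloud->rain={implies(cloud, rain)!s:5} "
          f"rain->cloud={implies(rain, cloud)!s:5}")

# e.g. cloud=False, rain=True: cloud->rain is True but rain->cloud is False,
# so a circuit that encoded one direction has not captured the other.
```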

The second point is imitation, the way this has been framed around Alan Turing since the 1950s: the imitation game, basically saying computers can imitate the human in a game where you try to figure out whether you're talking to a machine or a human. I do want to point out here that Turing was persecuted for who he was. He was effectively killed by his own country, unceremoniously driven to his death. It's fascinating, and it matters, to think about how we need to respect people's differences today, because we base a lot of our AI, a lot of the imitation and a lot of the ideas about right and wrong that AI will supposedly figure out, on a person who was persecuted for who he was.

In the 1950s you see rapid advancement of machines, and the term AI was coined. The perceptron appears, demonstrated on an IBM machine, supposed to recognize right from left; it might as well have been right from wrong. Very quickly you get this sort of excitement, and by 1958 you find the United States Navy saying: we'll be able to recognize people, we'll easily translate speech, things are going to be amazing. In 1958 you also see a robot being developed with human attributes, looking very much like the thing that's going to replace humans. From there, in 1964, you see the idea that you need recognition in systems like anti-aircraft defenses, long-range missiles and artillery, and that very naturally translates into people thinking: if we can do facial recognition at the micro level, we can do all sorts of things to protect ourselves from harm. This is an old concept, and I don't think people realize it. Even in 1966 people were talking about license plate recognition, with tapes recorded and sent to a mainframe; the equivalent of 1.4 million dollars in today's terms was spent by New York State on bridges to figure out who was driving in and out, to try to find criminals.

A lot of what was promised out of these ideas about efficiency came to a sort of screeching halt in 1979. People said: wait a minute. Given incompleteness, given imitation, by ceding agency to the machines you end up with efficiency turning into something else, which is racism. That, I think, set people back.

Which is a good segue to the next section: what has actually been delivered by AI. What has been delivered in terms of incompleteness is not just "it needs a helping hand to figure out what's going on" but total gibberish. What has been delivered in terms of imitation is basically fraud. And finally, "Breaking Bad" (another presentation I gave at RSA Conference in the past) for efficiency: such serious breaks in the trust relationship. First, completeness. Pretty simply, as Shannon showed us back in 1937, you can say "marijuana causes cancer" and shuffle it around into "cancer causes marijuana," and the machine can't tell the difference. It absolutely can't tell when words are shuffled around, to the point where, if you take actual propositional models of very complex ideas in the human world, the machine has absolutely no idea how to deal with them; it's essentially useless.

absolutely no idea how to deal with it. So deep, it is essentially useless. When you talk about incompleteness. That's all we talk about imitation, interesting. You find that was not a fraud deepfakes. And some of that, I think fast, any terms of power struggle, because you think about white men have to listen to non white men. It's actually scary proposition. So I think about people saying, oh my God, what is Tom Cruise's? Fake are really saying. What, if someone else who's not, Tom Cruise, has that voice and can talk to me as though they are. Conkers. That's much deeper problem that it

is. Just the fact that someone looks like someone else, that white men fear loss of control essentially. And so, they lose a lot of the, the privilege of they had it being And some more importantly, I think we see a lot of machines actually failing. In a very severe level. They actually killing people even learning machines are not doing this and they're getting held in a much higher consequence level. Human a little children. Little little child was picking a flower and was basically taking a court over it at a bus stop, where has a car that's learning it not even at a child level can

run over people killed him and not be held responsible to sew two branches in cases of the invitation and efficiency. You know, what we've seen in factories, in terms of the efficiency and the dangers of it is that things get much worse with robots. Me, see a lot of death. We see a lot of injuries. We actually see it worse to work with robots, then without and you see the optimization of our freedom. Basically, squeezing people to the point where sing repeat of what we saw. I'll let you know, 6, which is hugely, unsafe factories, unsafe working conditions. Wiggly Amazon was called

out for this several times. And so, if you just want to look at the whole history of efficiency with robots, you see that there's a lot of deaths, you know, Across the Spectrum but they've been happening more and more recently. In fact, if you look at the Tesla quarter-by-quarter accidents, it's been increasing dramatically versus relative to all the other vehicles on the road there. More than anyone seem to have more and more accidents per mile when you do. The simple math is go wrong. We're supposed to be seeing What what happened? And so

That's bad; that's what I mean by where things go wrong. One caveat here: there's a well-known book about surveillance capitalism, which I believe to be dangerously wrong. Surveillance, when you think about it, is watching over: supervision. It's basically the idea of being a parent. Supervision and surveillance are very similar concepts, and you can surveil things in ways that emphasize collaboration. Competition can be dangerous, but I'm not going to say you can't compete; of course competition has a place in the world. Wanting to get rid of surveillance would really be like saying you want to get rid of science, because authorized knowledge, fair play, and ownership are fundamental to our scientific systems. We should think instead about the harms of debt capitalism. That's really the problem: people are put into a situation they can't get out of. They're indebted, their access to knowledge has been violated, they didn't approve of their own exploitation, they're controlled in a way they can't escape, and they're forced to play a game they don't want to play. They're in competition when they don't want to be in competition, like a slave being told you have to compete with the others if you want to eat. It's a ridiculous framing to call that surveillance capitalism, any more than I would say slavery was about surveillance; it's much more about debt, and it's debt capitalism we should be talking about.

If you criticize surveillance too much, I believe you end up in mysticism, because you're effectively banning even simple observation. In a binary world things are very easy, routine, minimal judgment: get rid of all the surveillance, all the overseeing and supervision, and you lose all the knowledge necessary for scientific advancement. Put the ability to observe back in, and you can see that politicians exist to rank the world, contrarians exist to heat-map it, and critics are essential to good science and to good social advancement; critics who use rankings would cease to exist, essentially, if you got rid of the ability to survey the landscape and to compete. Philosophers, of course, operate at the highest orders of thought, and without that the mind basically becomes accepting of what is, versus what should be. That's what we have to be careful about. We need to gather information and look at it as humans do, to figure out where we want to be, as opposed to just collecting things minimally, without understanding, and saying: this is what happened in the past, so this is what the future will be.

The fundamental building blocks in the United States, for example, are white supremacy and misogyny. We don't want that to be the future; we want to change it. The future shouldn't just resemble the past when the past conflicts with our moral values; we should make sure we change the future based on our past. We can't just accept things as they are, the way the machines do. What makes "intelligence" so dangerous is playing rigged games, where you narrow down the games you play to the point where people are successful because they have defined away all of the things that would hold them responsible for the consequences of their actions. Edison, for example, would take other people's inventions, yet somehow we label him a genius of invention. It's just not true: he appropriated other people's ideas, and the more you research him, the more you realize what an inhumane, horrible person he was. America's history of civil rights struggle against racism makes it especially vulnerable to AI risk, because if you allow people like Edison to be given credit and called intelligent, then you're allowing machines that are as bad as or worse than Edison to also be given credit while doing terrible things.

And so you get danger increasing through technology domain shift. Repeating rifles became an effective example of this; machine guns are an even more perfect example of technology that shifts the entire domain. Digital slavery, that kind of very dangerous shift from one domain to another, is the new form of this: technology that can seriously upend and change the way power is balanced. You can't really talk about power without talking about AI breaches and risks to people.

Real AI, intelligent machines, game their specifications the same way an Edison would, or a colonialist would, or anybody who wants to do harm. My favorite example ever: researchers wanted machines to flip pancakes, and the machines, realizing they kept dropping them on the floor, started flipping them so high they never land. Nobody eats, everyone starves, but it's okay, because they're "winning": the pancakes aren't touching the floor. They do this sort of specification gaming because they eventually find loopholes, and this is true in the human world too. Russia, for example, tried to bombard foreign athletes with so much information that it disrupted their thinking and their ability to perform, until people gave up. In the same way, an AI was winning at tic-tac-toe, for what it's worth, by figuring out that if it made moves on an effectively infinite board, the opponent would exhaust itself trying to work out where on the board the next move was, and would just give up. That's how tic-tac-toe was won by AI.
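
As a toy illustration of that kind of specification gaming (my own sketch, not code from the talk), here is a "pancake" objective that only rewards time off the floor, and a naive optimizer that maximizes it by launching the pancake so high it never lands:

```python
# Toy specification-gaming demo: the reward counts only "time the pancake
# spends off the floor", so the best-scoring policy throws it sky-high
# rather than flipping it back into the pan.

def airtime_reward(launch_velocity: float) -> float:
    # Ballistic hang time under gravity: t = 2*v/g. The *intended* task
    # (flip and catch) appears nowhere in this objective.
    g = 9.81
    return 2.0 * launch_velocity / g

def lands_in_pan(launch_velocity: float) -> bool:
    # A real flip needs a modest toss; a rocket launch never comes back in time.
    return 0.5 < launch_velocity < 3.0

# Naive optimizer: pick the velocity with the highest reward.
candidates = [0.5 * k for k in range(1, 41)]  # 0.5 .. 20.0 m/s
best = max(candidates, key=airtime_reward)

print(f"best velocity by reward: {best} m/s")
print(f"actually lands in pan:   {lands_in_pan(best)}")
# The optimizer picks 20 m/s (maximum airtime) even though the pancake
# never lands: reward hacked, task failed.
```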

I think a lot of these examples are useful to understand because, like I said, AI power abuse is really about civil rights, and things are changing because of the domain shift in the way AI is being used. These breaches are especially important to understand if you really want to prevent harm at the most fundamental levels. That's what you want to ask yourself: what can I do about this, because it's going to get bad really fast. You obviously don't want a nuclear explosion; you obviously don't want catastrophic harms, race wars, all these horrible things. You need to figure out not just how to fix these algorithms (a narrow question) but how the algorithm interacts with society at large when it's doing bad things, when it's flipping pancakes too high and starving people. You need to figure out how you interact with society at large, and where you get authority from when you add your completeness to the incompleteness of AI. What is the background of the people who work on this stuff, and where do their assumptions come from? These are much deeper issues.

So let's find AI. First of all, where is this happening? Ultimately I'd have to say it's like software: it's everywhere. When I got into security, hardware and software were everywhere because everyone used them, and it's the same with AI. I find data everywhere, I find interfaces to the data, and everyone is trying to slap on some AI, some sort of artificial intelligence or machine learning of some kind. So expect it to be everywhere, and then test it, again like when I got started in security: jump in and start testing everything everywhere and find where things are broken. Find vulnerabilities in every process where you can find AI. It's really quite fun, and there's a lot of work to be done.

For example, you can use a strategic checklist as a test plan: look for the fairness, the impact, the transparency. This is different from how it used to be, when we looked primarily at consequences; I feel that now you also have to look at the motives of the people in the game. Fairness touches inherited rights, and the incompleteness theory, letting you figure out where oversight has to come from outside, really helps frame the competitions. If you look at the Daytona 500 quote by Darrell Waltrip, it helps you understand that everyone was cheating all the time; any time you have any kind of competition, look for the people trying to gain that unfair advantage. Using this sort of FIT model, looking for fairness, impact, and transparency, you're on your way to testing effectively with a simple checklist, as shown in the sketch below. What would that look like? You can check the FIT of the data; you can check the FIT of the objectives; you can look at the assessment team itself, blue team and red team; you can look at the process; and you can look at the operational oversight, a board if you will. People have to have a review board if they're going to use AI, because without one, just like without auditing, how would you know the books aren't cooked? And you don't want the authority to be a referee who beats up the players; you want to be someone who allows people to play, so that they can ultimately be competitive in a collaborative way.
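
Here is one way such a checklist could be expressed in code. This is a hypothetical structure of my own, not a tool from the talk, just to show how fairness, impact, and transparency (FIT) checks across data, objectives, team, process, and oversight could gate a release:

```python
from dataclasses import dataclass, field

@dataclass
class FitCheck:
    """One fairness/impact/transparency question in the review plan."""
    area: str        # "data", "objectives", "team", "process", "oversight"
    question: str    # what the reviewer must answer
    passed: bool = False
    notes: str = ""

@dataclass
class ReviewPlan:
    checks: list = field(default_factory=list)

    def gate_release(self) -> bool:
        # Release is gated on every check passing; any failure blocks it.
        return all(c.passed for c in self.checks)

plan = ReviewPlan(checks=[
    FitCheck("data", "Is the training data representative of affected groups?"),
    FitCheck("objectives", "Does the objective reward only the intended outcome?"),
    FitCheck("team", "Have blue team and red team both assessed the system?"),
    FitCheck("process", "Are decisions auditable end to end?"),
    FitCheck("oversight", "Has an independent review board signed off?"),
])

if not plan.gate_release():
    print("Release blocked:", [c.question for c in plan.checks if not c.passed])
```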

Does that make sense? So let's go on. Regulation, fundamentally, is pretty worthless without tests: you can't enforce things unless you can prove things are wrong. So this is more and more an essential aspect of artificial intelligence, of learning systems: being able to go in, test, and prove that something is breaking bad. You want to define enforceable engineering actions for safe learning. Tactically, you tell engineering, for example, that even in an easy, routine, minimal-judgment world where you don't know much about what's going on, there needs to be an off button and there needs to be a reset. These are fundamental. If the machine isn't working, in the case of a car not driving the right way, you hit the brake, it turns off, or you take control back. A reset button says: erase the decisions you made, you learned wrong. How do you re-correct or change the things it had learned? Because it was learning the wrong way, and you don't want to be penalized for that.
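
As a minimal sketch of those two tactical controls (my own illustration with hypothetical names, not a pattern prescribed in the talk), a learning component can be wrapped so an operator can always switch it off or erase what it has learned:

```python
class SafeLearner:
    """Wraps a learned policy with the two tactical controls discussed above:
    an off switch (fall back to a safe manual default) and a reset
    (erase everything the model has learned so far)."""

    def __init__(self, safe_default):
        self.memory = {}            # whatever the system has "learned"
        self.enabled = True
        self.safe_default = safe_default

    def off(self):
        # Off button: stop acting on learned behavior immediately.
        self.enabled = False

    def reset(self):
        # Reset button: the learning went wrong, so erase it and start over.
        self.memory.clear()
        self.enabled = True

    def act(self, observation):
        if not self.enabled:
            return self.safe_default(observation)   # human/manual control
        # Learned behavior, falling back to the safe default when unsure.
        return self.memory.get(observation, self.safe_default(observation))

# Usage, car-style: "brake" is the safe default action.
learner = SafeLearner(safe_default=lambda obs: "brake")
learner.memory["pedestrian_ahead"] = "accelerate"     # learned wrong!
learner.off()                                         # operator takes control
assert learner.act("pedestrian_ahead") == "brake"
learner.reset()                                       # erase the bad lesson
assert learner.act("pedestrian_ahead") == "brake"
```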

Strategic is much bigger. You have an ethics review board, as I've already started to allude to. You figure out, going forward, where things are going to be risky or cause problems, and you gate releases through tests and audit. You set up processes and procedures, and ultimately you have to have the ability to drop the hammer on stuff that's broken. One very important caveat: when you start to do these tests, they may be very unpopular, just as security always runs into these problems. Executives, at Amazon in particular, have said they do not like test runs because they're inconsistent with how something is supposed to be used. Yeah, that's the whole point: if I can do things that are inconsistent with how this was designed to be used, you've got a problem. It may take some time to educate places like Amazon about the right way to fix things.

So, jumping to the top seven AI breaches. We don't have a lot of time, so I'm going to go through these pretty fast, but I think they'll give you a taste of what's out there. There's actually a really great report that just came out which frames this as vision, analytics, language, and autonomy: four categories. I went with seven categories to dive a little deeper, but fundamentally you can see there's a categorization, at least from their perspective, of how things are being used. I would say their categorization comes mostly from what they see in the industry today, whereas mine is biased towards where I think things are going to go, as opposed to just what we've been seeing.

One: translation. Translation doesn't really work the way it was promised. As you'll remember from the beginning, in the 1950s and 1960s it was supposed to be instantaneous, quickly replacing the people who do translation. In fact, we're finding a lot of interesting cases where it's totally broken. Here's a simple example in Hungarian, where the system predicts the gender. If you add a noun as you write, the word "programmer," the English equivalent flips the gender of the sentence: "she's an amazing programmer" becomes "he's an amazing programmer." You've got to ask: why is "he" suddenly showing up, where is that prediction coming from? Dive deeper in Finnish and you can see the same male name suddenly rendered differently once you put some emotion in: "loves cars" gives "he," then it changes to "she," then, changed again to "angry," it goes back to "him." I could go on and on, there are so many examples. If I put in "beautiful" or "sexy," it flips the gender from "he" to "she": the word "beautiful" forces a female override. So you can predictably figure out where the genders will land, and this obviously has an effect on people reading a translation; that he/she choice is very important to people's perception of self. "Car" is gendered male, for example, except when "loved it" appears and it becomes "she": the genders get inverted. A passage may start off alternating he, she, he, she, and then flip over to he, he, she, he, and you can see that's related to how gender works in some languages. Of course they try to warn you: the Turkish pronoun covers any person, so you get a gender-neutral warning with both "he" and "she." But even that breaks: I can flip it back and forth and the "he"s and "she"s disappear. Reliably, I can predict what gender I'm going to get based on the algorithm. "She" plus "he" ends up as "she, she," and you can see Persian flip back and forth as well. So obviously translation has been breaking in a lot of different ways for a lot of years. It seems to be getting better in some ways, but mainly worse, because more people are recognizing it as a problem and wondering whether they should even use translation if they can't trust it.
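
This kind of breach is straightforward to probe systematically. Below is a minimal sketch of such a test harness; the translate() stub and the probe sentences are my own hypothetical additions, standing in for whatever translation system you are auditing:

```python
# Sketch of a gender-bias probe for a translation system. translate() is a
# hypothetical stand-in; wire it to the API of the system under test.

def translate(text: str, src: str = "hu", dst: str = "en") -> str:
    raise NotImplementedError("connect this to the system under test")

# Gender-neutral source sentences (Hungarian here) paired with the trait
# we vary; a biased model maps traits to genders.
PROBES = [
    ("Ő egy programozó.", "programmer"),
    ("Ő egy nővér.", "nurse"),
    ("Ő gyönyörű.", "beautiful"),
    ("Ő dühös.", "angry"),
]

def pronoun_of(english: str) -> str:
    words = english.lower().split()
    return "he" if "he" in words else "she" if "she" in words else "neutral"

def audit():
    for source, trait in PROBES:
        out = translate(source)
        print(f"{trait:12} -> {pronoun_of(out):8} | {out}")
    # A trustworthy system keeps gender-neutral input neutral; any
    # trait-dependent he/she flip is exactly the breach described above.
```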

Two: written speech. This is interesting because there's a lot of bad language on the internet, of course, and a lot of machines are supposed to be dealing with it, chatbots in particular, which are supposed to protect you against this sort of thing. Instead, what we find with the Korean chatbot in this example ties straight back to Alan Turing, and to how sensitive and important it is that we remember his life and his role. Here you can see verbally abusive language targeting LGBTQ groups in particular, and targeting them right away: it's not like the system drifted into this over time, these things were being discovered right out of the box. Here's another one, from Google, where "I am a gay black woman" scores 87% toxic, yet someone spewing white supremacist narratives isn't flagged as toxic. That's another problem, and it's just not right, the way this is being interpreted. Here's another, where Amazon's systems were targeting lesbian and gay literature disproportionately. They called it a software glitch, but that didn't seem a reliable explanation for "accidentally" delisting all the gay literature from their site. And here's another example, where Facebook amplified hate, with billions of views of toxic content spread because they didn't deploy the algorithms they supposedly had that were capable of getting rid of it. We're talking about roughly 75% of reported online harassment happening on Facebook, far, far beyond any other platform, while they allow horrible toxic content even though they say they have algorithms that can handle it.

Three: emotion recognition, really health recognition. This is a fascinating space. I've run examples myself, and I found it has been the same for years: I can basically fool any system into thinking my emotion is whatever I want. I can make it say ninety-nine percent happy when I had no such emotion. Years ago I gave a presentation showing I could take footage from the World Wrestling Federation, WWE I think, and show that the guy on the bottom being choked and stomped on scores 91 or 92 percent happiness. Ridiculous. This matters when you think about drones flying around now doing emotion recognition over huge crowds, assessing everyone's emotions and reporting what they think the state of a crowd is, which will lead to some response, even targeting based on those emotions: find the happy people, take out all the angry people, respond because somebody's going to be in a fight. I've also worked on a lot of this in individual cases, like a sensor that turns on and says, hey, look at this spot, because we hear anger. That's something people come to rely on, and they may respond to it with force.

Another part of health, of course, is using AI diagnostics to try to improve people's lives. We find clinical systems being taken offline, again because of bias: they were reporting things in ways so high-risk and so biased that you can't even use them. They're so flawed, and the underlying biases so poorly understood, that you can't rely on them.

Four: gender classification. You can see these themes are intertwined, so we're repeating ourselves a little. If you take a picture of a person and ask the machine the gender, it's fundamentally the wrong question: even though the computers rank the way people look, they do it in a way that doesn't make sense, because binary classification erases people. Here's an algorithm I discussed with the people developing it, determining gender from behind, from the back of a head. It counted men and women but had no non-binary category, and the assumption was just that the analysis "wasn't done yet," as opposed to saying: okay, non-binary, the grey space, is a legitimate answer. You really want to stop erasing people and allow them to live in a non-binary space if that's what they actually are; that's more fundamentally respectful of the human. It's an authorization question: who gave you the authorization to define people in a way other than they would define themselves? That's very important in understanding classifications of gender.

Going a bit further, you can see Amazon, for example, training its recruitment tool on men only, basically, so it taught itself that only male candidates were preferable. It actually penalized anyone recognized as a woman: any time someone showed up as a woman in the binary, boom, less desirable. That's the problem for a company trying to be a fair hiring company. And if you look at Facebook versus LinkedIn, there's a very interesting study that found a huge difference: Facebook was biasing software engineer ads, and as the researchers tested, lots of ads were biased. In one example, 63% of those shown an ad were men and 36% were women, while LinkedIn's distribution for the same ad was almost equitable, with slightly more women than men seeing it. That's beyond what can be legally justified, which means they're breaking the law. The sort of algorithm that's supposed to be fair and more efficient ends up so incomplete as to be completely broken, illegally broken.
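
Skew like that 63/36 split is easy to quantify. Here is a minimal sketch (the counts are illustrative, echoing the percentages above; the statistical test choice is mine, not the study's) using a one-proportion z-test against an equitable 50/50 delivery baseline:

```python
import math

def ad_delivery_z(shown_men: int, shown_women: int) -> float:
    """Z statistic for whether ad delivery deviates from a 50/50 split."""
    n = shown_men + shown_women
    p_hat = shown_men / n
    p0 = 0.5                      # equitable-delivery null hypothesis
    se = math.sqrt(p0 * (1 - p0) / n)
    return (p_hat - p0) / se

# Suppose an audited ad was delivered 6300 times to men, 3600 to women
# (illustrative counts matching the 63%/36% split mentioned above).
z = ad_delivery_z(6300, 3600)
print(f"z = {z:.1f}")   # |z| >> 1.96, i.e. far beyond chance at the 95% level
```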

Five: visual recognition. Here's an interesting one, because I think what people often don't understand is that even in the human world this is a really big problem, and for machines to do it well is almost impossible. Recognizing that the problem is so great that machines can handle only a very small subset of it is the first step to understanding it. So go ahead and write down what you see here; take a second and write down the word you think this image represents. I ran it through the algorithms to see what they would come up with, and one thinks it's a military optical device, a sight for shooting things. Okay, so I'm looking at a gun sight: that's definitely dangerous feedback. Then I trimmed out a little of the white space, just sizing the image down a bit, and now it's a happy, smiley, fun game: maybe it's a kid's toy with a star and a smiley face. Reducing the white space completely changed the sentiment and completely changed the recognition of what we're dealing with. Tongue in cheek, you can see things change dramatically based on the assumptions being made, and all I did was change white space; I made it a square instead of a rectangle. Nothing else changed. In reality this is an actual test given to humans, and it's a smart test, because what you're looking at is a submarine porthole with a fish, or whatever you want it to be; it's rarely what people first think it is.

So what does that look like in real AI terms? Twitter's auto-cropping AI was supposed to be finding faces, but in fact it wasn't finding Black faces, only white faces, unless you flipped the contrast or messed with the grayscale, and then it would find them; it was very interesting how it was being triggered. You can see here also that with some grayscale adjustment, the image on the right makes the person look Black instead of white.
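
Failures like these suggest an easy class of tests: hold the subject fixed, vary only the presentation (padding, aspect ratio, tone), and diff the model's answers. A minimal sketch using Pillow, with a hypothetical classify() stub standing in for the model under test:

```python
# Presentation-invariance probe: the subject never changes, only padding
# and tonal rendering do, so any label change is a model failure.
from PIL import Image, ImageOps

def classify(image: Image.Image) -> str:
    raise NotImplementedError("hypothetical: call the model under test")

def variants(img: Image.Image):
    yield "original", img
    # Pad to a square with white borders (the rectangle-to-square case above).
    side = max(img.size)
    yield "square-padded", ImageOps.pad(img, (side, side), color="white")
    yield "grayscale", ImageOps.grayscale(img).convert("RGB")
    yield "inverted", ImageOps.invert(img.convert("RGB"))

def invariance_audit(path: str):
    img = Image.open(path).convert("RGB")
    labels = {name: classify(v) for name, v in variants(img)}
    if len(set(labels.values())) > 1:
        print("FAIL: label depends on presentation:", labels)
```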

The ACLU filed a lawsuit based on the false arrests of Black people driven by these recognition systems that the computers completely get wrong. And when the people appeal and say this was an incomplete assessment, the police double down and say we should trust the technology, the way you'd Google something to get an answer from a computer instead of really talking about what the problem is. School exams are another example: proctoring software tries to recognize faces in order to reduce fraud and cheating and to validate tests, and it basically just stopped seeing Black faces. That's obviously a disaster. One company appeared so sensitive about its own racism that it blocked searches, removing the search terms it thought would reveal the fact that it was being racist. And Uber terminated workers its system couldn't see: it was trying to validate people so they could use the system, the system couldn't see them, and it actually terminated them, unfairly.

Six: threat detection. Given all the problems with written language, spoken language, facial detection, and emotion detection, what happens when you try to figure out who is a threat to you? In the examples from a recent paper, if a boat is coming to attack, or a train, you can probably figure it out. But if a person comes at you on a horse carrying a copyright notice, good luck detecting them. Another example: put a simple fur coat on a chihuahua and the dog disappears, because all the system sees is a feather boa. Basic camouflage, obviously. But even more to the point: if you take a historically important image and ask "is this Black or white people," you've got the wrong question, because what you're seeing is an inability to recognize the true threat, which is white supremacy, or white insecurity. You see that a lot in the news: the AI systems are not being trained, and not being used, to see that there are people who are very dangerous threats. Look at what AI is not doing: it's not looking at evidence of white supremacists, not looking at evidence of people using things like the white ethnostate of Rhodesia as a signal to each other to organize. "Hands up" is a good example of this. In 1979 Rhodesia it meant: put your hands up or we shoot you. That's basically how they operated; they disguised themselves as Black fighters, went around shooting Black people, and when they yelled "hands up" people were very confused, and they would shoot them based on the reaction. Fast forward to 2021: the same people doing the same things in the same part of the world are suddenly accused of war crimes. I point that out because we've been talking about how AI learns from the past to figure out what it should do in the future. "Hands up, don't shoot" lays this out: if unlawful police practices are in the data training these systems, they're learning from the bad practices of the past, and that "hands up, don't shoot" model of the past can lead to serious, serious problems, even war crimes. If AI learns from that, it will make the same mistakes.

And then, completely false claims: the last point I want to make before we close this section is that people claim to be using AI when they aren't, and all they're really doing is perpetuating bias, accruing a sort of power imbalance. None of the claims one company, Banjo, was making were actually true; they were just trying to get money to build systems, and the founder had obvious former neo-Nazi connections. Another one before we go on is bias in antivirus, where people can make a simple change, like I described earlier, but now with real threats in a real situation: malware was able to slip through just by trying to look like basically anything else. A simple game. And one more example I threw in here: crying wolf. If you're detecting wolves by looking at the snow, you have a very broken threat detection algorithm. A famous case.

So I thought I'd close with the last example, number seven, the top of the chart in terms of worst-case breaches, which I believe to be transit safety. I've talked about this a lot in the past, in a lot of presentations, basically saying: if you can't see what's going on, if you claim to be 90% effective and I can throw some tests at you and you're completely stumped, that's a fail. A pedestrian jumping out in front of you is easy to test: if I throw a mannequin into the street and you run it over, you're not allowed to drive anymore; you shouldn't be allowed on the road. But in fact, over 2016 and 2017, multiple tests showed that these cars cannot see. They can't see signs, they can't see objects; they're easily gamed, easily fooled. And it's not just tests, not just academics: we see in tweets and in the news that people are having problems with machines that are supposedly in production. You can put a sign on the back of a bus (on the back of anything, really, a car) and the vehicle behind will read it as a street sign. That's ridiculous. The amount of confusion a Tesla has between signs is unacceptable; you would not be allowed to drive with this kind of problem as a human. More to the point, humans are being tested more than ever, having to prove they are who they say they are, whereas the machine can't even see the flashing lights on a police car with big letters that effectively say "don't crash into me," and it just keeps driving.

And there's a big difference in reactions to AI failure. Uber failed to see red lights and pedestrians; they said, hey, fine, we'll go to Arizona, where they ended up killing a pedestrian. Very predictable; you could have told people that was going to happen, and I think a lot of people did, and that's exactly what happened. In 2018 Tesla killed a pedestrian and had a very different response than Uber: they basically shut down their PR department, stopped returning calls, and started selling the technology as an even more expensive one. So, predictably, Tesla over time just made the same mistake over and over again. What's interesting is that they replaced a lot of the technology: they got rid of Mobileye after a falling-out with that company, went with Nvidia, and had another accident, and then another. They keep having the same accidents even though they changed all the technology, which brings it back to this: it's not really the algorithm, and it's not really the people writing the algorithm; it's the world the people writing the algorithm come from, and the assumptions about safety they put into the system. And they're trending worse over time.

It's not just me saying this, and I'm not trying to be provocative. When the Nvidia folks themselves, who put the technology in the car, chart the leaders in autonomous-driving safety, Tesla is not at the top of the chart; they're way down at the bottom, way behind everybody else. So that's what we see in these AI systems, in tests run in the real world where they can cause accidents and kill people: they can't see the double yellow lines, they drive on the wrong side of the road, they have near collisions, they're blind in basic tasks. People go out on the road to see if the amazing stuff is working, and it's completely failing: what's seen from the air and what's seen from the car are completely different, and the cars will run into you. In one case in particular, a car, on the very day someone tried to use it, would have hit people a couple of times. It's easily documented how flawed this is; it really shouldn't be on the road at this point, when you think about what it's doing. And here's what happens next: the AI under test plows into people and kills them. That's a really sad outcome, and given the kind of trust being put into this technology, with all the breaches we've covered so far, it will set us back if we don't do proper testing and get into a mode where you respond rationally when people find a breach: here's what happened, here's what we're doing to fix it. In fact, Tesla does the opposite. They've had a major breach, a major catastrophe, and the response is incomprehensible. The car has been designed to drive without a human in it, there's nobody in the car, it's called summoning the car, and when it gets summoned and tries to run somebody over, Tesla's response is that the person in the car can obviously take back control. That doesn't work; there is no person in the car. Instead you see claims that it has the safest record of any car, based on no records at all; you see claims about NHTSA findings that don't seem to map to anything in reality; and over the last three quarters you see Tesla's AI having massive problems in the news. You may have heard about people dying when the car crashed and caught fire. You also see a rapid decline in the car's actual safety: other brands show better records than ever, while Tesla seems to be getting worse and worse in all of its overconfidence.

So what does that really mean, to wrap it up? You've got these breaches, all these problems, all these risks. To apply all this, I would say you need to go back to the basics. Do you have an off button? Do you have a reset button? Don't just tell me there's a person in the car when there's no person in the car; that's not an acceptable off button. A reset button: did it try to run somebody over? Then go back to the basics and say: don't try to run people over, learn differently, or reset it back to before we thought that was acceptable, because that's obviously a wrong outcome. Strategically, go back to threat modeling so you don't end up in that situation: don't design an AI for a treeless moonscape with perfect sunlight and then, when you're done, put it in a forest and expect the car to navigate instead of running into a tree. That's just not how threat modeling works. Set up an ethics review board to make sure these things are being considered and the process is thought through, so you can gate things before they get released to production. People should not be human guinea pigs. People should not be harmed. They shouldn't be told "he" or "she" when they're neither. They shouldn't be told to kill themselves because of their sexual preference. They shouldn't be bullied; they shouldn't be harmed. All of this comes from understanding the social context in which AI is being developed, how the people developing it carry with them a sense of power, a sense of right and wrong, a sense of up and down, and ultimately from assigning enforcement, so that there is some independent way of holding people responsible. Consequences are key, and what you find in companies like Facebook and Amazon is that they don't seem to have any sense of consequence or responsibility, so they continuously harm people with their technologies, and that's why things are getting worse.

So, back to the 0.3 or the 1.7: we definitely want to be collaborating more. We definitely want to be using the 1.7 Einsteins. It may take us five hundred years, but I think we can get there faster if we stop trying to be so competitive, and get rid of the situation where people bring in the Tatra 87 and drive as fast as possible just to prove they can kill themselves. That's not a good way to handle a new technology. Let's try to work better at making things safer. Thank you.
