Test Leadership Congress
June 28, 2019, New York, USA
John P. Thomas - Opening Keynote: A Systems Approach to Software, Testing, and Test Leadership

About speaker

John P. Thomas
Executive Director, Engineering Systems Laboratory Safety and Cybersecurity Group at MIT

Dr. John P. Thomas's work involves creating structured processes for analyzing cyber-physical systems, especially systems that may behave in unanticipated, unsafe, or otherwise undesirable ways through complex interactions with each other and their environment. By using control theory and systems theory, more efficient and effective design and analysis processes can be created to prevent flaws that lead to unexpected and undesirable behaviors when systems are integrated with each other. He has been applying these techniques to automated systems that are heavily dependent on human-computer interactions to achieve safety and security goals. He holds a Ph.D. in Engineering Systems and is a member of the Department of Aeronautics and Astronautics at MIT. He is also the executive director of the MIT Partnership for Systems Approaches to Safety and Security (PSASS).

About the talk

There are problems with software and software testing. We realize that every time we experience accidents and losses in software-enabled systems: hacking, financial losses, autonomous vehicle crashes, airplane accidents, and other losses where software plays a role.

At MIT’s Department of Aeronautics and Astronautics, we work with a systems approach to safety. We assume that accidents and losses can be caused not only by component failures but by unsafe interactions of system components that have not failed. These ‘components’ include humans.

How do we identify potential flaws, identify the most critical test cases, do targeted testing for software that is very complex, and identify important test cases that include human interactions?

How do we engineer the role of testers and management? Testers and test leadership obviously impact safety, but how are those impacts captured when we analyze safety? How do we take into account the human actions and beliefs of testers, testing managers, and operators in the systems we are building? How do we account for the safety-critical decisions they make and ensure they are receiving adequate feedback to make the correct decisions? This keynote will introduce the Systems-Theoretic Process Analysis (STPA) methodology and show how these factors can be addressed in modern testing.

Alright, thank you. 00:03 I'm just going to say a few words before I go through the slides. 00:09 I'm really fascinated by engineering mistakes; that's driven my career 00:13 so far at MIT. 00:18 That's my primary research area. 00:18 We've been developing techniques to help engineers recognize and prevent their mistakes. 00:21 We also need techniques to help testers figure out what to test, of course. 00:27 Engineering mistakes are right at the heart of what we do, right? 00:32

A lot of my work happens to be in safety-critical and security-critical systems, where these mistakes 00:36 are the most costly. 00:41 One of the first things people ask after an accident is: 00:41 how in the world did that get through testing? 00:47 Why didn't they test that? 00:51 When you see it in one accident, 00:53 and then you see a second accident, 00:56 and the same question comes up in the third and the fourth and hundreds of accidents, you start to think: 00:57

it wasn't just one stupid tester here; 01:03 we've got a broader issue. 01:06 You start to see a pattern. 01:11 It's often that we have smart people trying to do their job, 01:13 but they're handicapped 01:17 with methods that don't target the kind of problem that we have, 01:17 without the proper methods to find that critical thing to test before an accident. 01:22

And from a leadership perspective, the types of procedures and methods and resources and techniques that we didn't put in place did not set them up for success. 01:27 Not that we were malicious; everyone is trying to do their job, 01:36 but it's what we don't know that gets us, 01:39 both for testers and engineers and for leaders. 01:42 So we really need techniques that can help us do a better job managing complexity. 01:47

I'm including complexity of the technical system; that's a big problem, 01:52 but what about the social side, 01:56 sociotechnical systems? We need techniques to analyze that. I've seen a lot of test hazard analyses and engineering safety analyses. 01:58 They always look at: what if this thing fails? 02:05 Let's inject a failure here. 02:08 Where is the analysis of the test organization and the test leadership? 02:10

Nobody will claim in a safety-critical system that test leadership is not important; we all know it's important. 02:15 But it's never analyzed. 02:21 Do we have methods to analyze it? 02:21 It's very, very ad hoc; 02:25 sometimes testing is handled very ad hoc. 02:27 That's not always a bad thing; I love 02:29 working in an ad hoc environment, 02:31 but when we are doing an application that's safety critical, with lives on the line, 02:33 that may not be enough. 02:39

So I'm going to show you a technique to do exactly that: 02:43 handle complexity in fully autonomous vehicles and other advanced systems that we're building, the technical complexity, 02:47 but also the human 02:55 complexity, how humans interact with those systems. That's a big gap. A lot of times we test systems 02:55 and we think that we're testing just the technical piece, 03:02 with the human removed. I don't think that's our job anymore. 03:05 We've got to test 03:08

how these go together, because 03:08 they're related. We can't just throw things over the fence and expect the human factors group to handle the human. 03:10 These two things affect each other; 03:16 we've got to test how systems affect humans 03:18 as well as how humans affect systems, 03:21 how they can induce human error. Human error is no longer a property of the human; 03:24 it's a property of the system we put them in. 03:29 That's true for leadership as well. 03:32 To be successful, 03:34

we've got to manage complexity. 03:34 Engineers and testers are very consumed with all the nitty-gritty details. 03:37 That's what we do well. 03:41 But we often miss the forest for the trees, so a key objective of the technique that I'm going to show you, which has become very popular and is now in industry standards, 03:43 is all about managing complexity. 03:52 Here's a block diagram. 03:57 This happens to be for a satellite system, 03:57 but the detail doesn't matter. 04:00

We're really good, engineers and testers are really good, at looking at this thing and picking apart the details: should that really be connected? 04:02 What if I inject something over here? 04:08 That's good, but it's also how you miss things, and it's not how you manage complexity for very large systems. 04:10 We've got thousands of pages of this. 04:16 So here's the key insight of the technique 04:18 I'm going to show you. 04:21

Recognize, first of all, that we've got to be able to abstract; that's how humans deal with complexity. 04:23 Systems theory, psychology, cognitive science: 04:28 everybody is in agreement 04:31 that we've got to abstract, 04:31 but how do you abstract effectively? Well, 04:34 in this satellite system you could say this group of stuff right here, 04:36 that's the stuff that we're trying to control, the physical stuff that moves 04:39 the spacecraft about, and we've got different 04:43

ways to do that. 04:45 Let's group that stuff together; 04:45 we'll call it a controlled process. 04:47 And hey, this group of stuff here, 04:49 there's some coordination, there are lots of details, 04:50 but taken together as a functional group 04:52 that's basically a controller. 04:54 It's a functional controller; it may not exist that way in the silicon, but it's a functional controller. 04:54

And we have a control relationship here, 05:02 so let's try using control loops to abstract. One of the problems with traditional thinking is that 05:04 we tend to focus on one component by itself, 05:09 and what gets 05:11 left out? The interactions. 05:11 We miss the interactions. 05:13 That's critical. 05:13 I'm going to start, by the way, at the technical detail and quickly move 05:15

all the way up through this presentation, and the same principles apply to humans as well as to management. 05:18 So we have a controller here; 05:26 this happens to be maybe a piece of software. 05:28 What do controllers do? Well, controllers provide control actions, right? 05:30 They make decisions, output instructions, and try to achieve a goal in a controlled process. Controllers often receive feedback; not always, but usually it's required to be stable. 05:34 You have to 05:43 understand: 05:43

what is the effect of my control? 05:43 What's going on in the system? If the feedback is incorrect or incomplete or misleading, 05:46 we can have some problems, or if it's delayed. 05:50 There's a key piece of every controller 05:53 in the world; 05:55 it's called a process model, and it comes from control theory. 05:55 Basically what it says is that controllers have a belief 05:59 about the world around them, and if that belief is incorrect or mismatched, that's a classic way that controllers 06:02

provide incorrect control. 06:09 Control algorithm is kind of a software term, 06:09 but it says: given what I believe, what do I do? 06:15 Maybe the control algorithm is flawed. OK, 06:18 so this is a high-level abstraction to think about the bigger picture, right? 06:21 This maybe is a better starting point than that. 06:27 In fact, there are some definitions 06:31 we can go through; 06:35 I don't think we need to spend much time on this. 06:35
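
A minimal Python sketch of the control loop just described (the class and field names are illustrative assumptions, not from the talk): a controller holds a process model (its beliefs), feedback updates those beliefs, and a control algorithm turns beliefs into control actions.

from dataclasses import dataclass, field

# Sketch only: the controller acts on its beliefs (process model), not on reality,
# so missing, delayed, or misleading feedback leads directly to an unsafe control action.

@dataclass
class ProcessModel:
    # The controller's belief about the controlled process.
    path_obstructed: bool = False

@dataclass
class Controller:
    model: ProcessModel = field(default_factory=ProcessModel)

    def update(self, feedback: dict) -> None:
        # Feedback (possibly wrong or stale) is what shapes the belief.
        if "obstacle_detected" in feedback:
            self.model.path_obstructed = feedback["obstacle_detected"]

    def control_action(self) -> str:
        # Control algorithm: given what I believe, what do I do?
        return "brake" if self.model.path_obstructed else "maintain_speed"

controller = Controller()
controller.update({"obstacle_detected": False})  # misleading feedback...
print(controller.control_action())               # ...yields "maintain_speed" even if an obstacle really exists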

I'm going to go a little bit fast-paced, 06:40 but know that there's a lot 06:41 that I'm kind of skipping over in this technique. 06:42 Of course, in real systems 06:45 we don't just have one controller, do we? We have a hierarchy of control, right? 06:46 And it's different: 06:49 the process model beliefs, the information and so on, and the goals are different at each level in the hierarchy. So here we have 06:49 physical actuators that you move around to move the spacecraft. 06:57

This guy figures out how to point to a new direction in space. 07:02 It's not smart enough to figure out where we should point next; 07:07 it just figures out how to do it, how to actuate it. 07:10 It's going to have certain beliefs at this level, 07:13 and we might think about how those can cause problems in this controller. 07:15 But at a higher level 07:18 we've got a navigation controller that doesn't know how to actually do it, 07:19 but it knows 07:23

where we should point next, or tries to figure that out, right? 07:23 They're going to have different beliefs, different types of feedback, and potentially different flaws. 07:26 A really interesting thing that we've got to get better at 07:31 is the non-failure problem. 07:35 We're so focused on individual components and individual component failures and failure injection; those things are one piece of the problem, 07:36 but what if the system works? 07:43

What if everything works as designed 07:46 and that's the problem? 07:50 How do we catch that, 07:50 other than just having smart people look at it? We need more rigorous ways as systems get more and more complex. 07:54 That's what's been causing most major accidents in the last 10 years: 08:00 the thing was designed that way. 08:03 So we build, on the right, what's called a control structure; 08:07 it's a hierarchy of controllers and abstractions, 08:11

functional abstractions. The vertical axis indicates something: 08:13 it indicates control, so boxes toward the top have more control and authority over boxes below. 08:17 That means every downward arrow is going to be a control action, 08:22 an instruction, output, or directive to try to achieve a goal, and every upward arrow is going to be a piece of feedback, 08:26 information about what's going on 08:32

that the controller needs to use to do their job. And we can think about flaws here, not just sensor failures and component failures. 08:35 Yeah, that's one cause of incorrect feedback and a process model flaw and a bad decision. 08:41 But what if there's no flaw in a sensor, 08:46 we just never had a sensor there in the beginning? 08:48 We never thought that this piece of feedback would be important. 08:50 What's missing? 08:53

We care about that too. Or if we put a sensor in there 08:55 but it was the wrong type of sensor, 08:58 or it didn't have the right resolution, or there was a delay because 09:01 we put it over Ethernet 09:03 and used commercial off-the-shelf devices, all these things. 09:04 We have really got to get better at this. 09:06 It's well within the domain of testers. Yes, engineers should get it right first of all, but it's also 09:08 our job 09:16 to double-check and find what they missed. 09:16
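
A rough sketch of that point (the guideword list is an assumption that merely restates the examples above): for each feedback path in the control structure, ask more than "what if the sensor fails".

# Sketch only: turn each feedback path into review/test questions
# that go beyond simple failure injection.
FEEDBACK_FLAWS = [
    "missing",           # no sensor was ever designed in for this information
    "the wrong type",    # a sensor exists but measures the wrong thing
    "too coarse",        # resolution cannot distinguish safe from unsafe cases
    "delayed",           # e.g. routed over Ethernet or a shared bus
    "incorrect",         # the classic sensor failure or fault
]

def feedback_questions(feedback_name: str) -> list[str]:
    return [f"What if the '{feedback_name}' feedback is {flaw}?" for flaw in FEEDBACK_FLAWS]

for question in feedback_questions("obstacle detection"):
    print(question)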

In fact, while we're on the topic of abstractions, 09:20 why should we start here? 09:23 Why don't we start by putting this entire thing 09:25 in a box and saying, 09:28 hey, let's call that an automated controller, before we dive into the details, 09:29 because we might not know what we're looking for. 09:33 Let's think about the control loop up here and who controls the automation; the operators are controlling the automated controller. 09:35 What kinds of control actions 09:40

do we have to accept, and how could those control actions cause us to do something unsafe? 09:42 What kind of information 09:46 do we have to provide to operators, and if we do it wrong, or it's misleading or conflicting or delayed or whatever, 09:46 how can we mislead them and actually induce human error? 09:53 This is more and more important. Maybe we should start here, because 09:57 we don't necessarily know what we're looking for at the 09:59 more detailed level if we don't start here. 10:02

While we're at it, by the way, this is the difference between 10:04 a component-based view and a systems-based view. 10:06 We've got to get better at the systems-based view. In fact, 10:10 this control loop works very well for software, one of the biggest challenges we have in safety-critical systems today, 10:14 and the things that we're missing in how software behaves. 10:22

It works very well: software controllers have a belief, and that explains a lot of the accidents that we're having, maybe not all, but a lot, 10:27 and in this technique it's just something that's easy to latch onto. 10:34 What about humans? 10:38 Hey, 10:38 we can understand human error this way too. 10:38 I hate the term human error; it sounds like it's blaming humans, 10:43 but you all know what I'm talking about. For the human, we call it an unsafe control action. 10:46

It's a very factual statement. If they provide something because they believe that was the right thing to do, 10:51 we should be asking ourselves: 10:56 how could they come to believe that that was the right thing to do? 10:59 And we recognize that those beliefs come from feedback, among other things, feedback that our system is providing. 11:02 How could our system do that? That should be a test scenario. 11:10

Let's see if it's providing it right. So this works for the second-largest problem we've had recently causing accidents: the human being. 11:13 In fact, why should we start here? 11:22 Maybe we've got a whole joint fleet of operations to think about. 11:24 There's a massive coordination problem here, before we even get into the details or necessarily know what kinds of behaviors actually matter. 11:28

What kind of feedback do we have to provide up to the joint command of the fleet, and what kind of coordination problems do they have to deal with, 11:34 and could we provide conflicting information 11:41 to make their problem, their job, much, 11:44 much harder and induce error at that level? 11:45 In fact, we should really start here. 11:48 This is basic systems engineering, 11:50 but that's not always how testing works. So as well, recognize that this is 11:51

a way to manage complexity, 11:55 but it's not 11:57 yet fully injected into the safety and testing community. Now, there's a little bit of detail 11:57 in the method; 12:04 I'm just going to go through it quickly, just to give you a 12:04 taste. We can define four categories of control actions that are relevant for unintended behavior of the system. 12:07 We could have a control action that is required 12:13 but is not 12:16 provided. 12:16

Think about a brake command in a car. You don't provide the brake command: could that be dangerous? 12:18 Yeah, in some situations it could be dangerous; 12:23 you don't stop for an obstacle. 12:25 Case 2: what if you do provide the brake command, could that be dangerous? 12:26 Sure, if you do it on a highway, or in an aircraft if you do it on the active runway. 12:29 What about case 3: maybe you provide the exact right command 12:34

in the right situation, but it's delayed a little, 12:38 three seconds too late? Yeah, 12:42 we care about those as well. And case 4: 12:44 what if you provide the brake command, 12:46 you do it at the right time in the right context, 12:47 but you let go too soon, 12:50 release it almost immediately? 12:52 Well, that's another way. 12:52 This is derived from accidents that we've had over the last 20 years. 12:55 There are sub-cases for each of these; you may be thinking of some sub-cases now, and a lot of them are covered. 12:58
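
The four categories translate naturally into a checklist. A short sketch applying them to the brake-command example (the wording of the entries is an assumption):

from enum import Enum

class UCAType(Enum):
    NOT_PROVIDED   = 1  # a required control action is not given
    PROVIDED       = 2  # the control action is given when it is unsafe to do so
    WRONG_TIMING   = 3  # right action, but too early or too late
    WRONG_DURATION = 4  # right action, but stopped too soon or applied too long

# One control action, "brake", run through all four categories:
brake_ucas = {
    UCAType.NOT_PROVIDED:   "Brake not provided when the path ahead is obstructed",
    UCAType.PROVIDED:       "Brake provided at speed on a clear highway",
    UCAType.WRONG_TIMING:   "Brake provided three seconds after the obstacle appears",
    UCAType.WRONG_DURATION: "Brake provided but released before the vehicle has stopped",
}

for uca_type, example in brake_ucas.items():
    print(uca_type.name, "-", example)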

Anyway, we can start to build a method out of this to try to anticipate these things, build test scenarios, 13:04 and try to find flaws in the design to look for. 13:10 This technique is called STPA, Systems-Theoretic Process Analysis. 13:13 The idea is that we're treating this not as a failure problem, not as a component-based problem, 13:16 but as 13:21 a control problem. 13:21

It's a different way of thinking: controllers use a process model. This works really, 13:24 really well for software, very, 13:28 very complex software, 13:30 machine learning. The smartest people in the world don't know what those coefficients mean by looking at them or exactly how they got there. 13:30 But we can abstract: the machine has a belief, doesn't it? It can form a wrong belief, and that's an explanation for this. 13:38

So we can identify causes related to component failures, component interactions, human behavior, software behavior, design errors, flawed requirements. 13:46 A big thing in testing is requirements-based testing, isn't it? 13:54 Guess what kind of accidents we're having: accidents where requirements were wrong or missing. 13:57 How do you catch that? 14:01 That's really difficult to catch. 14:03 Part of our job needs to be 14:05 finding the missing requirements that no one else thought of 14:08

and using those to drive test cases, not just proving what the engineers assumed to be true. 14:11 Don't just assume what the engineers assumed; we've got to question that. 14:17 There was a PhD thesis that we had a couple of years ago for the Air Force Test Center, 14:23 where I just came from yesterday; 14:27 in fact, they've been using this for a few years now. 14:29 You can build a control structure for the testing group. 14:32 So far we've been talking down here. 14:34

Maybe you have an aircraft, 14:36 or maybe an autopilot, maybe a pilot; there are control 14:37 loops here, they're very important, 14:39 and we've got to think about those interactions. 14:40 Who controls the pilot? 14:42 We've got operating procedures. 14:45 We've got pilots and test pilots; who controls the pilots? 14:48 We've got procedures that we put in place, we've got air traffic control. 14:51

Over here we have instructions, we have engineers that do a test plan and review it, and so on. 14:54 How many of you have ever worked for a manager or supervisor that was incompetent? 15:01 If they're in the room, 15:06 you don't have to raise your hand. 15:10 Look, you've got two reactions to this. 15:15 There's the component-based view and there's the systems view. 15:19 The component-based view says: 15:22 that guy's an idiot. 15:22 How did he get his position? 15:26

Unfortunately, I don't have control. 15:28 Maybe I get him 15:30 fired, maybe he needs to go, 15:30 maybe I just have to get out of here, 15:32 but this is a really bad situation. 15:35 That's the component-based view. 15:37 What's the systems-based view? 15:39 What does that get you? 15:40 Let's step back a minute. Alright, let's not be judgmental, 15:42 let's not say human error, let's not say that they are malicious; let's be factual. 15:45

What was the control action that the manager was providing that was undesirable or unsafe? 15:49 Let's actually state it; we'll call that a control action, that's a downward arrow. 15:54 Now, 15:59 most often 15:59 they were trying to do their job. 15:59 They were trying to achieve an objective. 16:04 They thought they were doing the right thing. 16:05 Let's try to understand 16:07 the beliefs that they had at the time 16:07

that made them think that was the right answer, made them think that was achieving their goal. What was their process model? 16:10 And once we identify the process models that may or may not match reality, 16:17 where do they come from? 16:21 They don't come out of the blue. 16:22 They don't flip a coin to come to a belief; there's some information. 16:24 What is the primary source of information? It's feedback. Where does feedback come from? 16:27 It comes from us. 16:32

So think: what kind of feedback have I been providing that either did not correct the process model flaw that was there, 16:32 or maybe created it in the first place, 16:40 or maybe was conflicting or contradictory in some way, and what can I do to fix this kind of thing at a system level? 16:43 That's the question we've got to ask. So it's the same thing that I've been talking about down here; it applies up here to management. 16:50

You can go all the way up to Congress, and I could share some interesting stories 16:57 there, but I don't have time. 17:01 Boy, do we have some inadequate and unsafe control actions up there. 17:02 All right. 17:08 Now let me show you some examples of all of this. 17:10 We have a student project. 17:14 This comes from students in the class where we teach this technique, system safety, at MIT. 17:14

We had a couple of students, and they applied this technique. 17:21 They chose 17:25 something inspired by Tesla Autopilot; 17:25 I've been advised that is a good way to put it. 17:29 So let's see what they did. Here is the process; you've got four steps. I apologize, 17:33 I'm going to go through this very quickly, but 17:38 maybe it will place the hooks for you to be inspired to learn more if you're interested. Step 17:40 one is: 17:45

what is the purpose of the analysis, the losses that we want to prevent? We've got to have some goal to begin with; 17:45 otherwise, what are we trying to achieve? We want to prevent loss of life; that's the traditional safety objective. 17:52 What about damage to the vehicle where we don't kill anybody; would we care about that? 17:56 What about loss of mission? There's always a loss of mission. 17:59

What if we don't kill anybody and we don't damage the vehicle, but somehow this automation doesn't get you to point B or make you 18:02 more efficient? Yeah, we care about that: customer satisfaction. So this method 18:08 can be applied not just to safety and security but 18:11 to any loss; the stakeholders get to define the losses. 18:14 OK, now we build a model; 18:18 the model is going to be a functional control structure. 18:19

Here is a very simple one that we can cover in the next 15 minutes or so. 18:22 So here we have a driver at the top making some high-level decisions; 18:26 they can 18:29 execute manual commands to the vehicle, and they can enable and disable the automation. 18:29 This is a lane management system, 18:34 a generic title for this type of automation. 18:36 The automation, the software here, can change lanes, 18:38

it can accelerate, and it can brake, much simpler than what we're used to dealing with. 18:41 Let's see if this can get you anything useful. 18:45 So what do we do? Notice the controllers, the vertical axis, and the downward arrows, which are the control actions, the outputs. Let's start there. 18:47 That's the next step in the process: we identify what are called unsafe control actions. 18:54 By the way, the word unsafe: we've defined it by 18:58 whatever the losses are. 19:02
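
A control structure like this can be written down as plain data before any detailed design exists. The sketch below is one possible reading of the students' example (the controller names, action lists, and feedback items are assumptions), just to show that this step is a hierarchy of controllers, control actions, and feedback:

# Sketch only: downward arrows are control actions, upward arrows are feedback.
control_structure = {
    "driver": {
        "controls": "lane_management_system",
        "control_actions": ["manual steering/brake/accelerate", "enable automation", "disable automation"],
        "feedback": ["displays and alerts", "observed vehicle behavior"],
    },
    "lane_management_system": {
        "controls": "vehicle",
        "control_actions": ["change lane", "accelerate", "brake"],
        "feedback": ["camera lane lines", "radar returns", "vehicle speed"],
    },
}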

It's not just us. The Mars Polar Lander that NASA sent up to Mars was unmanned, no 19:02 people, and they had a huge safety department looking at 19:08 hazardous scenarios. 19:11 With no people, 19:11 what are they doing? Safety for them is loss of mission. 19:14 So don't be misled; 19:18 anytime I slip up and say safety hazards or words like that, I mean 19:19 losses. OK, let's identify unsafe control actions. 19:24

What's one control action? We've got a brake command coming from this lane management system. Remember the four ways that a command can be unsafe. 19:27 If any of those are unsafe, somehow we've got to have controls in place to prevent that behavior, and in testing 19:34 we want to try to get that behavior to happen, 19:39 to show it if it's unsafe. So how can not providing a brake command cause a problem? 19:41

You hit something if you don't brake when there's something in front of you. There 19:49 we go; that's how we would write it. 19:52 We tag these; they're called UCAs. 19:53 This is UCA-1. 19:55 This is an output: 19:56 LMS does not provide brake command when vehicle path is obstructed. This links back to 19:56 the losses that we started with; every result in this process has full traceability, 20:02 so you know where these things came from. 20:06

This statement is not just something you write down ad hoc. 20:08 There's a structure behind this 20:12 that I don't have much time to go over, 20:14 but the first part is always the source controller. 20:16 Then we have the type of control action 20:19 we're talking about, then we have the command, and we have the context. 20:20 The context can be further refined. Anyway, 20:23 there's a lot of structure here that I'm glossing over, 20:25 but I want to give you a taste. 20:28
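
That structure (source controller, type of control action, command, context, plus traceability back to the losses) can be captured as a simple record. A minimal sketch, with field names of my own choosing:

from dataclasses import dataclass

@dataclass
class UCA:
    uca_id: str
    source: str            # the controller issuing (or omitting) the action
    uca_type: str          # one of the four categories above
    control_action: str
    context: str           # the condition that makes the action unsafe
    linked_losses: list    # traceability back to the losses defined in step 1

uca1 = UCA(
    uca_id="UCA-1",
    source="LMS",
    uca_type="not provided",
    control_action="brake command",
    context="when vehicle path is obstructed",
    linked_losses=["loss of life or injury", "damage to the vehicle"],
)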

This structure is in place to make sure we go very carefully through the system to see what needs to be prevented. 20:29 OK, but I haven't blown your mind 20:35 yet with this 20:37 UCA; that's something we already know. 20:37 We'll go through this table, 20:41 find the others. 20:42 Let's move on. 20:42 Let's go to step 4, 20:43 which is to identify scenarios. Alright, we've got these 20:44

control actions that must be prevented somehow. By the way, we can immediately get some requirements out of those. 20:47 Every one of those should be translated into a requirement to prevent that behavior. If you already have a set of requirements 20:53 they gave to you, we should be checking those: how does it compare? 20:58 Do we have requirements that 21:02 maybe cause that behavior and nobody realized it? 21:03 Do we have a missing requirement, conflicts, and things like that? 21:06
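
Each unsafe control action inverts directly into a constraint that can be compared against the requirements you were handed. A tiny sketch of that step (the wording of the derived requirement is an assumption):

def requirement_from_uca(source: str, uca_type: str, control_action: str, context: str) -> str:
    # Invert the unsafe control action into a constraint the design must satisfy.
    if uca_type == "not provided":
        return f"{source} shall provide {control_action} {context}."
    return f"{source} shall not provide {control_action} {context}."

print(requirement_from_uca("LMS", "not provided", "brake command", "when vehicle path is obstructed"))
# -> LMS shall provide brake command when vehicle path is obstructed.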

Let's move on, though. Now, once we have that behavior, even if we've got the requirement in place, we need some 21:09 insight. We can't just test every possible combination and see if this thing is going to happen. 21:15 Let's try to figure out exactly how 21:21 this behavior might come out of the computer, and we use a control loop like this. There are some annotations here in black; it's a common template you can use to 21:23 build these scenarios. 21:32

So we start with the UCA, which is the output of the controller: 21:32 it doesn't provide adequate braking commands when there's an object in your path. 21:36 What kind of belief 21:39 in the process model would cause that? We're humanizing this software, by the way, which is fine; 21:39 it works very well. 21:46 What do you think? 21:46 If it doesn't see it? Yes; 21:50 I interpret the word 'see', that's feedback. 21:51 You're absolutely right. 21:53

We're going to go one step at a time now, 21:53 just to make sure we don't miss anything. 21:56 It's kind of easy in this case, 21:58 but use the word belief. 21:59 Well, it believes what? 22:02 It believes there's nothing in the way. 22:05 Now we can say OK; 22:07 we come up with a number of beliefs. 22:08 There will be a small number, by the way, because 22:10 we're not talking about inputs, 22:11 maybe hundreds of inputs, 22:13 a small number of beliefs, and we say OK, for each belief, 22:13

what kind of inputs would cause that belief? 22:16 Alright, well, what kind of inputs would cause this one? Well, 22:19 maybe there's a radar on this car, 22:22 maybe there are some other sensors, 22:24 and we can immediately say, 22:25 well, the sensor fails. 22:27 That's a pretty obvious one. What 22:27 if there's rain, fog, and other things? 22:30 What about a beautiful sunny day? 22:31 Nothing fails, 22:34 nothing loses power, everything is working exactly as designed. How can the 22:34 radar sensor 22:41

in the front bumper not see an obstacle? 22:41 Miscalculation? Yes, there might be a flaw in the algorithm. 22:46 Signal interference? Yeah, there could be. Maybe something in the path, right, 22:52 some obstruction. That's going to happen, isn't it? 22:57 It never happens... 22:59 it happens all the time. OK, 22:59 so with these students I'm going to cut to the chase now. 23:01 This is what the students found, by the way, in just a couple of days, and they had almost no information about the actual system. 23:03
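
The walk just described, from the UCA to the flawed process-model belief to the feedback that could create it, can be laid out the same way. A sketch of my own that simply restates the causes mentioned above, each of which becomes a candidate test scenario:

# Sketch only: UCA -> flawed belief -> ways the feedback could create that belief.
scenario = {
    "uca": "UCA-1: LMS does not provide brake command when vehicle path is obstructed",
    "flawed_belief": "LMS believes there is nothing in the vehicle's path",
    "feedback_causes": [
        "radar sensor fails or loses power",
        "rain, fog, or signal interference degrades the return",
        "an obstruction (such as the car directly ahead) blocks the radar's view",
        "the algorithm miscalculates on a valid return, with nothing failed",
    ],
}

for cause in scenario["feedback_causes"]:
    print(f"Test scenario: can '{cause}' lead to the belief '{scenario['flawed_belief']}'?")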

These students had no experience as well. 23:09 Let's see what they found; they depicted it graphically. 23:11 They said, OK, here's your car, 23:14 which may be something like a Tesla, driving down the road with a human driver, and there's your obstacle. Hey, 23:15 if you've got this car in front of you, 23:20 there's something in the way and you're not going to see the obstacle. It gets worse. Now you're driving along, but as a human driver, what do you do when 23:20 you see an obstacle? 23:27

Maybe you get out of the way, right? Right now. 23:28 It would be great if you checked your blind spot first. 23:31 What if you check your blind spot 23:33 and someone's in your way? 23:37 What do you do? 23:39 Brake, maybe speed up; there's going to be some delay before you get out of the path. 23:39 The students said, using this method, and this was in 2016 by the way: 23:45 what if there is a delay getting out of the way, and this car 23:48

is just looking at the car in front, 23:54 and at the last second it sees the obstacle? 23:56 That could happen. 23:59 I think: a crash. 23:59 How good of a test scenario do you think this might be? 24:04 Sounds great to me; it sounds like this is going to happen. 24:08 Where are the controls for this? 24:13 Has anybody thought of this? 24:15 This was in 2016. When we saw this, we said: 24:19 you've got to present this at a conference. 24:21 They presented it in March of 2016 at the MIT conference 24:23

that I mentioned in my last slide. 24:25 You're all invited, by the way; it's a free conference. 24:27 It's all about this technique and 24:29 industry uses of it. They presented it, and a little while later we had an accident just like this 24:31 happen. 24:38 And then the manufacturer: 24:40 bad press, and they spent more time, with the 24:43 highest-paid 24:47 engineers in their company, trying to figure out a solution after the fact. 24:47 What are you forced to do at that point? 24:53

Some kind of software change, which kind of worked, 24:56 but the radar, 24:59 this thing is already on the car; 24:59 it was never designed to look more than one car ahead. By the way, as human drivers, 25:01 I don't know if you realize, we have a tendency to look through the 25:05 windows of the car ahead, and if we can't look through the windows, 25:08 we move 25:10 to the side just to see two cars ahead. 25:10 We do it all the time. 25:13 Not this one; 25:14 that's just how the radar works, and we thought 25:16

that might be important. So they found a way, after a long time, 25:17 to reprogram 25:20 the thing to do something it was never designed to do: bounce that beam 25:22 under the undercarriage of the car in front. Under certain conditions, around a turn, it's not going to work. 25:25 We don't have any turns, 25:29 though, do we? 25:31 All right. 25:31 In certain weather conditions it's not going to work, and so on, 25:35 but sometimes it'll hit the car in front, 25:38

it'll bounce back, and the return falls off as one over R to the fourth, so it's really hard to do. This doesn't work 25:40 a lot of times, but sometimes it might. A huge amount of effort. 25:45 What would it take to figure this out beforehand and get it right? 25:48 Oh, just a couple of students and a couple of days; 25:53 they're cheap. 25:56 They don't have to have any experience at all, apparently, and they had never used this method before. 25:56 It's incredible. 26:05

Alright, so it turns out that after the accident 26:05 someone actually took this result 26:09 and conducted a test. 26:12 Let's see; we have a video. 26:16 Maybe that would be a good test case to do 26:28 earlier. 26:31 The problem is not that we have stupid testers; 26:31 we have smart testers, 26:35 and I'm sure they did a lot of testing on this, 26:35 but things slip through the cracks. 26:41 How can we be sure that we've got everything? We need better methods to be more careful and rigorous. 26:43

Now that's an interesting one, because that was a software interaction. 26:48 Nothing failed, by the way. 26:52 It's a little hard to argue that on a component scale: 26:54 you pick any component in that car and it's really hard to say it failed, because it did 26:57 exactly what it was designed to do, exactly the way every requirement 27:00 said it should. 27:05 Engineers wouldn't call that a failure; 27:05 they'd say it worked. 27:09 And yet we've got another problem, 27:09 so we've got to get better at this stuff. 27:11

Here's... 27:13 you got it, I'll just skip that. OK, so 27:15 here's an interesting one: do we care about humans? 27:19 We really need to care about humans. 27:22 I'm going to try to convince you in about three slides. 27:23 So here's what the students did. We told them, in this project 27:25 you have to look at software and you have to look at humans. 27:28 So they looked at humans, 27:30 and there's a manual steering override in 27:31 this car. OK, so how can that cause a problem? 27:33

Let's look at some unsafe control actions here. 27:35 How can a steering command be provided 27:38 and cause an accident, a manual steering command? Well, essentially, the human grabs the steering wheel; 27:40 let's say you're in 27:46 automated mode and you grab the steering wheel. 27:46 Everything is fine, but you grab it and then you cause an accident, 27:48 swerve into some other car. 27:52

Why do we even care about that? You think: that guy must have a death wish, and that's not my problem, 27:56 that's somebody else's problem; if they really want to kill themselves, let them. 28:01 That's not a fair approach; that's a component-based approach, a failure approach, applied to the human. We need to take a systems approach. 28:04 We need to understand how our system contributes to this; 28:11 right now we're not looking for these things. 28:14 So we would do this: we would use this template. 28:16 We say OK, 28:19

what kind of beliefs would a human have 28:19 when they do this? 28:23 Alright, let's not just assume that we've got stupid drivers; 28:23 there are some smart drivers. 28:27 So 28:30 what might they believe? Well, they might believe that it's no longer engaged; 28:30 that's why they're taking over. 28:37 They might believe it's doing something they don't want; 28:39 that's why they take over. 28:43 And if they're causing something unsafe, 28:44 they probably don't realize it's unsafe. 28:46

Right, so we can put down about three or four fundamental beliefs, 28:49 not a one-hundred-page report I'm talking about, 28:52 a few bullet points. OK, 28:55 what kind of inputs does 28:55 this driver have, 28:57 from normal driving or from the automation, that might induce those beliefs? 28:57 Here's what the students came up with. They found that this vehicle, 29:04 when it's driving down the road, has two cameras to follow the painted lines on the road, 29:07

and if the lines diverge, what do you do? There is no perfect answer, but the decision at the time was: 29:11 follow the one on the right. 29:16 And so they came up with a scenario where, if you're driving down the road and there's an off-ramp, this 29:18 vehicle, the automation, will try to take the exit for you, following the line. 29:22 In addition, it's got some smart logic in there to recognize speed limit signs 29:27 and actually decelerate. Now, 29:32

I don't know about you, 29:34 but up in Boston 29:36 nobody follows the speed limit signs. 29:36 But the automation will. 29:42 OK, so let's piece this together; that might explain the process model belief that I need to take over. 29:46 One more piece: they don't know that what they're doing is unsafe. 29:51 So you put it all together, 29:54 and here's what you get: an 29:55 automated vehicle with a human driver, and it tries to take the exit and slow down on its own. 29:56

Let's say there's a human driver behind us. 30:01 That never happens on highways, does it? 30:04 We've got a human driver behind us and the car is slowing down now; 30:06 what's the human driver going to do? 30:08 When the car starts taking the exit and slowing down, the human driver is going to say, 30:10 oh no, 30:14 I don't want to take the exit, 30:14 I want to go back in my lane. 30:16 The car, at that instant when they take over... 30:18 right, and you're fine? 30:23 Oh. 30:27 Is this really not our problem? 30:31

We have constructed the perfect system to induce this scenario and set the driver up to fail. 30:34 Every component worked the way we designed it. 30:43 Every requirement we thought we needed was in the system, 30:45 and if we were doing requirements-based testing only, we would have passed every single test. 30:48 In fact, if the car did anything else, 30:53 we would have failed it until we went back and fixed it. 30:56 We've got to expand our scope, 30:59 but we need methods to do it. 31:03

We can't just have smart people sit down and say, write these out for me, use your brain. 31:04 We need methods to guide us to do this carefully and rigorously, because otherwise things fall through the cracks. 31:07 Here's an example similar to this. 31:14 You see what happened? 31:20 Software was driving the car. Why did it do this? 31:20 It worked, didn't it? It followed the line, 31:26 did exactly what it was designed to do. 31:38 System testing is all about challenging 31:44

assumptions, the design, the requirements; 31:47 everything is fair game, and we've got to challenge it. 31:49
