Duration 23:38
Toward Trustworthy AI: Explainability and Robustness By Agus Sudjianto, Manager, Wells Fargo Bank

Agus Sudjianto
EVP, Head of Corporate Model Risk at Wells Fargo

About speaker

Agus Sudjianto
EVP, Head of Corporate Model Risk at Wells Fargo

Agus Sudjianto is an executive vice president and head of Corporate Model Risk for Wells Fargo, where he is responsible for enterprise model risk management.

About the talk

All models are wrong, and when they are wrong they create financial or non-financial risks. Understanding, testing, and managing model failures is the key focus of model risk management, particularly model validation. The key factors in managing machine learning model risk are model explainability and robustness. Explainability is critical for evaluating the conceptual soundness of a model, a fundamental requirement for anticipating potential model failure when the model generalizes to situations it was not exposed to during training. Many explainability tools are available; my focus in this talk is how to develop fundamentally interpretable, self-explanatory models, including deep learning. Since models in production are subjected to dynamically changing environments, testing for model robustness is critical, an aspect that has been neglected in AutoML.

Transcript

My talk today will be centered around building trustworthy AI. There are a lot of talks out there on this subject; what I would like to do is give you a view from a regulated industry, from a financial institution perspective. So we will focus on the issues of explainability and robustness. These are the two main topics that we care about, and they are very important in a regulated industry. As you know, we use a lot of models in banks, for both

financial and non-financial purposes, whether machine learning models or traditional models: in the areas of credit, market and liquidity risk, revenue and expense, stress testing, capital management, as well as customer service, financial crime, marketing, compliance, conduct, staffing, et cetera. Now, when we use models, we believe that all models are wrong, and when they are wrong they create risks, both financial and non-financial.

With that, model risk management is very front and center in the industry, to manage models across the model life cycle. In this particular talk I am going to focus on machine learning and the rapid adoption of machine learning in banking. We use machine learning as an alternative to statistical models; we also use machine learning to assess models, or for feature selection and the construction of models, even if the final model is a traditional statistical model. In the area of credit we have a lot of applications, and also financial crime: fraud detection and anti-money laundering. There are areas in compliance, conduct risk, and customer assistance with a lot of natural language processing models. The other area where we use it heavily is work that is traditionally computationally very intense, for example when we are dealing with high-dimensional stochastic differential equations for equity baskets. Those are very difficult because we are dealing with high-dimensional PDEs, and simplifications have to be made, so we also use deep learning as a way to solve high-dimensional stochastic differential equations.

Before I dive into some examples of what we care about in terms of explainability and robustness, let me talk about model risk, because this is very important in a financial institution. As I said, we believe all models are wrong; that is the famous quote from George Box. What we care a lot about is when models create potential harm or unintended consequences for the user, for the institution, or for the customer. Harm can be financial harm in credit, market, or liquidity, such as credit losses, or non-financial harm in terms of reputation, compliance, and legal risk. Some examples: a model used for image detection can fail to comply; in staffing, if a model understaffs, long wait times create reputational risk. Yes, we do have Type 1 and Type 2 errors, but we also have unintended consequences: when dealing with fair lending, a model that makes discriminatory pricing decisions on a protected class creates a compliance problem.

Another example that sounds very mundane is online marketing. When you log into your online banking you will get product offerings. It seems very mundane; what is the harm in that? People click or do not click, something very harmless. But in reality, if the model is built on information in cookies, such as which websites the person visited, then suddenly we are dealing with privacy. And people of different demographics, younger and older generations, have different behavior online: which websites do they visit? If people get different product offerings on that basis, it can generate fair lending issues later. These are all the things we care a lot about in a financial institution dealing with model risk. As I said before, when models are wrong they create damage; when we use models we are taking model risk, and we had better know what kind of risk we are taking.

Coming back to machine learning model risk, there are a lot of things that can go wrong, a lot of sources of model risk. People talk a lot about data quality and bias, the implicit bias coming from the data. Conceptual soundness, which I will discuss in more depth, is related to explainability: how sound is the model? That is very important, particularly in machine learning, because we are dealing with a large number of variables; if you have a lot of spurious variables, you get spurious effects, so we will talk about that.

A lot of people in machine learning are obsessed with model performance, so they do hyperparameter tuning or automated machine learning to optimize the model on a static split between training and testing. You can build a model that performs best on your testing data, and when it is put into production it can be a complete failure. Model change management is also very important, because every time you retrain a highly nonparametric model, you change the model, so we had better know what has changed and how to deal with it. Then there is model use and control: models make mistakes, models will be wrong, so what are the controls? And a model built for one environment may be used in a different environment; how do you control that? My talk today will focus on conceptual soundness and model robustness: explainability and model robustness.
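The point about the static split can be made concrete with a small synthetic sketch (the data and model choice here are my own illustration, not the speaker's example): a flexible model can score almost perfectly on a held-out slice of the training distribution and still fail completely once the production inputs drift, because tree-based models cannot extrapolate beyond what they saw in training.

```python
# Synthetic illustration: a strong static-split score can hide a
# catastrophic failure under distribution drift.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Training-time population: inputs in [0, 1], target is a simple linear signal.
X = rng.uniform(0.0, 1.0, size=(2000, 1))
y = 3.0 * X[:, 0] + rng.normal(0.0, 0.05, size=2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X_tr, y_tr)

# Looks excellent on the held-out half of the SAME distribution ...
r2_static = r2_score(y_te, model.predict(X_te))

# ... but "production" inputs have drifted to [2, 3]; the trees can only
# predict values seen during training, so the error explodes.
X_prod = rng.uniform(2.0, 3.0, size=(2000, 1))
y_prod = 3.0 * X_prod[:, 0] + rng.normal(0.0, 0.05, size=2000)
r2_prod = r2_score(y_prod, model.predict(X_prod))

print(f"static-split R^2: {r2_static:.3f}, after drift: {r2_prod:.3f}")
```

The static-split R² is near 1 while the post-drift R² is strongly negative, which is exactly the change-management risk described above.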

Let's talk about explainability. One thing we need to pay attention to is that outcome testing is not enough. A lot of people, when they build models with machine learning, have a holdout testing set, and a model that performs well on the holdout is declared a good model. In automated machine learning, people build a variety of models and pick the winner based on the testing data. That can be a very dangerous exercise, because we need to ask: does the model make sense? Can we trust the model? How is the model going to fail?

For critical applications, mission-critical applications in a high-stakes environment, explainability is a requirement. A lot of machine learning applications today sit somewhat outside of high-stakes areas. But in a high-stakes environment, for example when we use a model for credit decisioning, a decline needs to come with a reason; it is a regulatory requirement. So: does the model make sense? Can we trust the model? How is the model going to extrapolate? These are the fundamental questions we need to address.

Then there is the issue of robustness: how is the model going to perform in production, in an environment that changes, or in an adversarial environment? For example, one of our applications is monitoring trader behavior. You read in the news that banks have been penalized because of trader misconduct, so people may put a machine learning model in place to screen the email traffic of traders. In that situation the model operates in an adversarial environment: the people being monitored will find ways to fool the model. So testing the robustness of the model is very critical. Because of the obsession in machine learning with model performance, as I mentioned, people do a static split between training and testing, and that static split can be misleading. We need to test the robustness of the model: in what situations will the model's decision flip? That is what we will talk about under model robustness.
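One simple way to probe the "when does the decision flip" question is a perturbation test. The sketch below (synthetic data and a model of my own choosing, not the speaker's setup) measures the fraction of inputs whose predicted class changes under small random perturbations:

```python
# Perturbation-based robustness check: how often does the predicted
# class flip when each input is nudged inside an L-infinity ball?
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 4))
y = (X[:, 0] - X[:, 1] > 0).astype(int)          # synthetic ground truth
model = GradientBoostingClassifier(random_state=0).fit(X, y)

def flip_rate(model, X, eps, n_trials=20):
    """Fraction of points whose predicted label changes under
    random perturbations of size eps."""
    base = model.predict(X)
    flipped = np.zeros(len(X), dtype=bool)
    for _ in range(n_trials):
        noise = rng.uniform(-eps, eps, size=X.shape)
        flipped |= model.predict(X + noise) != base
    return flipped.mean()

small = flip_rate(model, X, eps=0.05)
large = flip_rate(model, X, eps=0.5)
print(f"flip rate at eps=0.05: {small:.2%}, at eps=0.5: {large:.2%}")
```

A flip rate that climbs steeply with the perturbation size is a warning sign that many samples sit close to the decision boundary.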

I will mention a little bit about explainability as it relates to interpretability. Explainability means you can explain the model to the user, and to do that you need an interpretable model. Today in interpretable machine learning, the common approach is post hoc interpretability: treat the trained model as a black box and then apply techniques like LIME, SHAP, partial dependence plots, and a whole slew of post hoc tools to try to explain how the black-box model behaves. Another technique is called model distillation, which is essentially coming up with a simplified surrogate so that we can explain the simplified model. For example, we fit a gradient boosting machine, then we fit a distillation model, a simple tree, and at the nodes at the bottom of the tree we have linear models, and we can interpret the linear model at each leaf of the tree.

The focus I want to spend a little more time on is self-explanatory models. Can we build machine learning, sophisticated machine learning, that is self-explanatory, just like our good old friend the statistical model, like regression in statistics? That is what I call a self-explanatory model. How can we build machine learning models that are self-explanatory? This is one of the things we do a lot, and there are a few techniques I am going to cover very briefly, and then we will talk about deep neural networks.
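The distillation idea above can be sketched in a few lines. This is my own simplified illustration on synthetic data, not the speaker's implementation: a gradient boosting "teacher" is approximated by a shallow tree whose leaves each hold a small, directly interpretable linear model.

```python
# Distillation sketch: black-box teacher -> shallow tree -> linear leaves.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 3))
y = np.where(X[:, 0] > 0, 2.0 * X[:, 1], -1.0 * X[:, 2]) + rng.normal(0, 0.1, 3000)

teacher = GradientBoostingRegressor(random_state=0).fit(X, y)
y_teacher = teacher.predict(X)                    # distill the teacher's output

# Step 1: a shallow tree partitions the input space into a few regions.
tree = DecisionTreeRegressor(max_depth=2, random_state=0).fit(X, y_teacher)
leaf_ids = tree.apply(X)

# Step 2: one small linear model per leaf; each is easy to read off.
leaf_models = {}
for leaf in np.unique(leaf_ids):
    mask = leaf_ids == leaf
    leaf_models[leaf] = LinearRegression().fit(X[mask], y_teacher[mask])

def distilled_predict(X_new):
    leaves = tree.apply(X_new)
    out = np.empty(len(X_new))
    for leaf, lm in leaf_models.items():
        m = leaves == leaf
        if m.any():
            out[m] = lm.predict(X_new[m])
    return out

# The surrogate should track the teacher closely on this simple problem.
gap = np.abs(distilled_predict(X) - y_teacher).mean()
print(f"mean |surrogate - teacher|: {gap:.3f}")
```

The surrogate's fidelity gap to the teacher should always be reported alongside the explanation, since the leaves explain the surrogate, not the teacher itself.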

How do we make a deep neural network a self-explanatory model? The answer is yes, we can. What we do is constrain the architecture; this is a systematic way to do it. The first approach is constraining the training of the deep learning network to make it self-explanatory. One way is that, from the input, we create what we call a projection layer, with very few projections, say one, two, three in this example. Each projection feeds a subnetwork that models the nonlinearity, and then we combine at the end. So we have the projection layer, we have the subnetworks to model the nonlinearity, and then we have a combination layer. The key here is really to keep the network sparse: not too many subnetworks, very few subnetworks, so that it is clear what is happening, and very sparse in the input as well. The projection layer can be interpreted: each projection is just a simple linear combination, like a regression equation, the subnetworks model the nonlinearity, and their combination makes the whole model interpretable.
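The constrained architecture just described can be sketched as a forward pass in numpy. The shapes and random weights below are my own illustration of the structure (projection layer, one small subnetwork per projection, additive combination), not the speaker's trained model:

```python
# Sketch of a projection-based explainable network forward pass.
import numpy as np

rng = np.random.default_rng(0)

n_features, n_proj, hidden = 5, 3, 8

# Projection layer: each row is one interpretable linear combination
# of the inputs (kept sparse in practice, e.g. via an L1 penalty).
P = rng.normal(size=(n_proj, n_features))

# One tiny subnetwork (1 -> hidden -> 1) per projection learns the
# nonlinearity along that single projected direction.
W1 = rng.normal(size=(n_proj, hidden)); b1 = np.zeros((n_proj, hidden))
W2 = rng.normal(size=(n_proj, hidden)); b2 = np.zeros(n_proj)

# Additive combination layer.
beta = rng.normal(size=n_proj)

def forward(X):
    Z = X @ P.T                               # (n, n_proj) projections
    # subnetwork k: h = relu(z_k * W1[k] + b1[k]); g_k = h . W2[k] + b2[k]
    H = np.maximum(Z[:, :, None] * W1[None] + b1[None], 0.0)
    G = (H * W2[None]).sum(axis=2) + b2       # (n, n_proj) ridge functions
    return G @ beta                           # additive combination

X = rng.normal(size=(4, n_features))
out = forward(X)                              # one prediction per row
print(out)
```

Because the prediction is a weighted sum of one-dimensional ridge functions of a handful of linear projections, each projection and its shape function can be plotted and read off directly.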

Another technique people use is the explainable additive network. The first step is to fit individual variables with nonlinearity and then combine them; in statistics this is called a generalized additive model (GAM), which gives you a very interpretable model. After you have the GAM, you can build a model with interactions, including higher-order interactions, and then combine them through either boosting or stacking, so you can see the effect of individual variables as well as interactions very clearly, just as in a statistical model. Also, a ReLU network is a local linear model, and we can interpret it very easily.
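The additive idea can be sketched with one round of classical backfitting. This is my own stripped-down illustration (polynomial shape functions on synthetic data), not a production GAM: each variable gets its own nonlinear shape function, and the prediction is their sum, so each variable's contribution can be plotted on its own.

```python
# Minimal additive-model sketch via backfitting with polynomial shapes.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(2000, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.05, 2000)

def fit_shape(x, r, degree=5):
    """Fit one per-variable shape function as a small polynomial."""
    return np.polynomial.polynomial.polyfit(x, r, degree)

# Backfitting: repeatedly fit each shape function to the residual
# left over by the others.
intercept = y.mean()
coefs = [np.zeros(6), np.zeros(6)]
for _ in range(5):
    for j in range(2):
        partial = sum(np.polynomial.polynomial.polyval(X[:, k], coefs[k])
                      for k in range(2) if k != j)
        resid = y - intercept - partial
        coefs[j] = fit_shape(X[:, j], resid)

pred = intercept + sum(np.polynomial.polynomial.polyval(X[:, j], coefs[j])
                       for j in range(2))
rmse = np.sqrt(np.mean((pred - y) ** 2))
print(f"additive-model RMSE: {rmse:.3f}")
```

Each fitted polynomial is the estimated shape function of one variable, which is exactly the per-variable effect the speaker says you can read off a GAM.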

In this example, after you train the network, for each sample you can trace which nodes are active. If it is a ReLU network, which is basically piecewise linear, you can get all the linear models coming from the activation patterns; the composition of linear pieces is still locally a linear model. So you can trace, for one sample, which linear model is being used, and by going through each sample you can get all the equations of the linear models coming from the deep ReLU network. If you do that, in this example, we can plot the coefficients just like in linear regression. You can pull out all the coefficients using a simple parallel coordinates plot: the horizontal axis shows the variables, and the vertical axis shows the magnitude of the weights, the magnitude of the regression coefficients. You can see which variables are important, and we can also see how the deep network fits the data.

For example, we can use this as a diagnostic. You can see the biggest region; region 0 is the biggest, with about 80 percent of the samples, and its performance is not as good. Or, looking at the deep network, many regions, many equations, contain only a single class (this is a classification problem), so there is a lot of inefficiency in the deep network, and you can simplify the model further to reduce the number of regions. The bottom line is that a deep learning model using a ReLU network is a local linear model: you can get all the local regions and all the equations. We will have a paper coming out on this, so stay tuned. And for multilayer networks, embeddings in natural language processing, word embeddings, and convolutional networks, we can apply all of these ideas to come up with explainable neural networks.
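The local-linear view can be demonstrated exactly in a few lines. The sketch below uses a tiny untrained network of my own construction: for a given input, freezing the ReLU activation pattern collapses the network into one explicit linear equation that reproduces the network's output.

```python
# A ReLU network is locally an exact linear model: collapse the weights
# through the activation pattern of a given input.
import numpy as np

rng = np.random.default_rng(0)

# A tiny 2-layer ReLU network: 4 -> 6 -> 1.
W1 = rng.normal(size=(6, 4)); b1 = rng.normal(size=6)
W2 = rng.normal(size=(1, 6)); b2 = rng.normal(size=1)

def forward(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def local_linear(x):
    """Effective coefficients (w, b) such that the network equals
    w @ x + b everywhere inside x's activation region."""
    D = np.diag((W1 @ x + b1 > 0).astype(float))  # activation pattern
    w = W2 @ D @ W1                               # collapsed weights
    b = W2 @ D @ b1 + b2
    return w.ravel(), b.ravel()

x = rng.normal(size=4)
w, b = local_linear(x)
# The local equation reproduces the network's output exactly at x.
print(forward(x), w @ x + b)
```

Collecting `(w, b)` across samples gives exactly the per-region regression coefficients that the talk proposes plotting on parallel coordinates.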

Because I have limited time, let me cover one last thing: the issue of robustness. As I mentioned before, when you put a model into production, the environment will change, the data will drift, or the model will operate in an adversarial environment. We need to make sure we understand how robust the model is and how wrong it can be. In this example, because of the local linear model view of deep learning, we know all the local linear equations, and we can compute how much perturbation in the data will make the decision flip. If the model is locally linear, we can get exactly the distance to the local decision boundary, and we can measure how robust the model is.

In the example I showed before, we simply look at the regions to see how to make the model more robust. You can compare the regions around the decision boundary of the deep learning model as originally trained versus one trained to be robust, and see how much the model degenerates under a slight perturbation. The original deep learning model is not very robust: performance drops as the perturbation increases. With robust training, the model still performs very well under the same perturbation.
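The distance calculation mentioned above is elementary for a (locally) linear decision function: for f(x) = w·x + b, the smallest perturbation that flips the decision has length |f(x)| / ‖w‖. The numbers below are my own toy example:

```python
# Exact distance to the decision boundary of a linear decision function.
import numpy as np

w = np.array([2.0, -1.0])
b = 0.5

def margin(x):
    return w @ x + b                    # signed score; its sign is the class

def distance_to_boundary(x):
    return abs(margin(x)) / np.linalg.norm(w)

x = np.array([1.0, 1.0])                # margin = 1.5
d = distance_to_boundary(x)

# Moving distance d along -sign(f) * w/||w|| lands exactly on the boundary.
x_flip = x - np.sign(margin(x)) * d * w / np.linalg.norm(w)
print(d, margin(x_flip))                # margin at x_flip is ~0
```

For a ReLU network, applying this formula with each region's collapsed (w, b) gives a per-sample robustness measure, up to the caveat that the straight-line path may exit the region.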

When a model operates in an adversarial environment, like the anti-money-laundering example, people can change characters, change spellings, change sentences on their side of a conversation. A human being can still understand the text, but a machine can be fooled very easily. So testing for model robustness by performing perturbations, so that we understand how the model can be wrong, is very critical, and how wrong the model will be is very important.
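The text-perturbation point can be shown with a deliberately naive toy detector (my own construction, in no way the bank's surveillance model): a character-level substitution that a human reads straight through evades a keyword-based screen completely.

```python
# Toy surveillance model fooled by a human-readable character tweak.
def naive_flag(message, keywords=("transfer", "offshore")):
    """Flag a message if it contains any watchlist keyword."""
    text = message.lower()
    return any(k in text for k in keywords)

clean = "Please transfer the funds offshore tonight."
evasive = "Please tr4nsfer the funds 0ffshore tonight."   # still readable

print(naive_flag(clean), naive_flag(evasive))
```

Real screening models are far more sophisticated, but the same failure mode applies: robustness testing means generating exactly these human-preserving perturbations and measuring how often the model's decision flips.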

In particular, as I said before, we need to understand how wrong the model can be and in what ways it can be wrong. That is front and center when we apply machine learning, and it requires somewhat more sophistication compared to more traditional models. I will stop right here, because I think I have about three minutes for Q&A. Let me stop my sharing, and I will be happy to answer some of the questions.
