Duration 30:06

Responsible and Ethical Governance of AI Models By Robert Bernard, Director, PwC


About speaker

Robert Bernard
Director at PwC

Rob is a senior data science and AI executive with 25+ years of hands-on experience developing advanced analytics and formulating analytic strategy in multiple industries, from national security to algorithmic investing. His projects have produced tangible benefits for clients, saving time, money, and lives. Rob has led teams of data scientists, intelligence analysts, statisticians, academics, and programmers to implement innovative and actionable analytic solutions. He is especially adept in environments that require predicting individual human behaviors. With a proven track record in artificial intelligence, machine learning, natural language processing, and large-scale forecasting, he is positioned to tackle intractable problems where data is both varied and voluminous.


About the talk

The proliferation of tools facilitating the democratization of Artificial Intelligence has necessarily accelerated the need to develop and deploy those tools responsibly. Whether it be as simple as a linear regression, as automatic as an unsupervised machine learning model, or as powerful as a deep neural network, the ease with which non-data scientists can create these models requires that an ethical framework and a formal governance structure be in place before, during, and after the models are developed. In this talk, I will discuss the implementation of responsible AI practices and how one might integrate them practically into the bureaucratic and administrative processes likely already in place in many organizations.

02:44 New approach to AI

06:20 Ethics and regulation

10:05 Bias and fairness

13:46 Prevention of harm

19:28 Responsible AI

23:16 A new model

28:12 Thoughts on deployment

Transcript

My name is Rob Bernard. I am a director of artificial intelligence in PwC Labs at PricewaterhouseCoopers. Let me talk to you today about responsible and ethical governance of AI models. I'll start with a quote from the New York Times from about a year ago. A lot of the time, people talking about bias in AI are, in the words of the quote, interested in "equalizing performance across groups" rather than "thinking about the underlying foundation, whether a task should exist in the first place, who created it, who will deploy it on which population, who owns the data, and how it's used." Obviously these are important questions, especially in terms of fairness, equity, bias, and the ethics behind them, whether that means regulation, the rules of your own company, or the ethical principles of your own culture. All of these questions are very important in the implementation of any kind of AI or machine learning model. The quote is from the New York Times, from Dr. Gabriel, on dealing with bias in artificial intelligence.

In this talk I'm going to cover four sections: first, the need for responsible AI; then ethics and regulation; then governance models; and I'll end with a few thoughts on deploying models. What I want to concentrate on is that ethics and governance are really two sides of a coin, with a big middle to the coin, and we'll talk about that in a second.

First, the need for responsible AI. The market is big: responsible AI is now approximately 13 percent of the 2.6-billion-dollar consulting services market, according to a PwC-commissioned study from last year, and it's only growing; in a couple of years it's projected to be up to 17 percent of the market. These are important and obviously large services in the AI market, but they are relatively new, and the topic has become very hot in the last couple of years; I know the agenda for this particular conference includes several other people talking about responsible and ethical AI.

Within the whole field of AI, there are really six areas of risk that responsible AI has to address, spanning application-level risks and business- and national-level risks, so that societal, market, and regulatory forces really demand a new approach. First is the performance risk of the model itself: whether the model is biased, whether it has errors, whether it's opaque. There's also security risk: is the model subject to adversarial attacks, where someone actively tries to fool or trick the model into predicting or suggesting something that isn't right, or is unethical or dangerous? It could be cyber attacks as well. And even though many companies advocate open source software, and virtually every company uses it to some degree, there are always risks with open source software too; crowdsourcing may mitigate the risk somewhat, but the risks are definitely there. There's also governance risk: without human agency, AI models can run wild, they can go rogue, and they can have unintended consequences. If you just let the model do what the model says, and it's given a new set of data that doesn't work out as well, it may do something it's not supposed to. In addition there are ethical risks around values themselves: there may be a value misalignment, where your values don't match your company's values, or your society's, or your country's regulations. There is economic risk: people are afraid of having their jobs displaced by AI and automation. And there is societal risk: autonomous weapons are the big one (do we really want an AI pulling the trigger?), along with surveillance (is an age of surveillance something we as a society tolerate, something we reject, or is there some middle ground?).

In our view, there are really five dimensions of responsible AI. On one hand there are ethics and regulation; on the other side, the governance issues; and in between sit the performance and security issues: bias and fairness; interpretability (can you understand what the model is doing?); explainability (can you explain what the model was doing?); robustness and security (is the model vulnerable to attacks?); and privacy. The ethics on one side and the governance on the other are not just links in a chain, not just stops along the way; they span the entire gamut of those performance and security dimensions. Whether it's bias and fairness, security, interpretability, or explainability, ethics and governance apply across that entire spectrum.

I'm going to talk first about the ethics part. There are really three main questions that we believe you should consider when you are in the middle of implementing or developing an AI model.

First: are your algorithms making decisions in a way that aligns with your values? Is it a values-based model, or are your algorithms making decisions that don't align with your values? That's an important question. Second: do your customers trust you with their data? Are you able to take a customer's data and use it responsibly? Data agreements are extraordinarily important to maintaining a sense of responsibility, responsible use of data, and responsible AI models based on that data. But from an ethical standpoint, is that data something you really want to exploit? There have obviously been many cases in the last few years of companies exploiting data, using it for purposes that weren't really intended. There is also regulation here: GDPR, the General Data Protection Regulation in Europe, is regulation having to do with data usage itself, so the ethics may be codified in regulation depending on your culture. And third: how is your brand affected if you can't explain how your AI systems work? Suppose someone comes to you, you run an AI model, and it denies a loan to a particular person. If you don't know why it denied the loan, or the model cannot explain how it works, in essence why it denied that particular loan, that's a potential reputational risk for the company. So another question to consider is: do you really want to take on that reputational risk, and how would your brand be affected if explainability is not possible?

The European Commission's High-Level Expert Group on AI, in a document called the Ethics Guidelines for Trustworthy AI, lays out three principles: fairness, explicability (or transparency), and prevention of harm. On the principle of fairness, I'm not going to read the entire thing, but I do want to point out a couple of key phrases. There are many different interpretations of fairness. Is fairness what is good for me, what is good for you, for our society, for a company, for a lower-income group, for a minority status, for a particular legal status? There are different definitions of fairness depending on where you're coming from. In this particular principle, the European Commission's High-Level Expert Group says that AI systems should never lead to people being deceived or unjustifiably impaired in their freedom of choice. You want to give people freedom of choice, you want to treat people equitably, you want to treat people fairly, and this, again, is a broad definition.

So the first ethical principle is fairness, and the first question is definitional: which definition of fairness are we going to use? Bias detection is also very important here: how do we detect bias given the different data sets that we have, and the protected attributes in those data sets? Being able to discover that bias is crucially important in order to take the next step, mitigating that bias or intervening on it. How do we change the thresholds on decision variables to be less discriminatory? How do we trade off accuracy and fairness? In fact, as some of you have probably seen, your model can be made less biased but also less predictive; in many cases you really are navigating a bias-accuracy trade-off. Those are the particulars of the principle of fairness.
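
The threshold and trade-off questions can be made concrete with a small sketch. Everything below is illustrative and my own assumption, not anything prescribed in the talk: the toy scores, the labels, and the use of a simple selection-rate gap (demographic parity difference) as the fairness measure. It sweeps a decision threshold and reports overall accuracy alongside the gap in selection rates between two groups.

```python
# Sketch of the bias/accuracy trade-off: sweep a decision threshold and watch
# how accuracy and the gap in selection rates between two groups both move.

def selection_rate(scores, threshold):
    """Fraction of individuals approved at this threshold."""
    return sum(s >= threshold for s in scores) / len(scores)

def accuracy(scores, labels, threshold):
    """Fraction of correct approve/deny decisions at this threshold."""
    preds = [s >= threshold for s in scores]
    return sum(p == bool(y) for p, y in zip(preds, labels)) / len(labels)

# Toy scores for two groups (e.g. two values of a protected attribute).
group_a = {"scores": [0.9, 0.8, 0.7, 0.6, 0.3], "labels": [1, 1, 1, 0, 0]}
group_b = {"scores": [0.7, 0.5, 0.4, 0.3, 0.2], "labels": [1, 1, 0, 0, 0]}

for threshold in (0.35, 0.55, 0.75):
    gap = abs(selection_rate(group_a["scores"], threshold)
              - selection_rate(group_b["scores"], threshold))
    all_scores = group_a["scores"] + group_b["scores"]
    all_labels = group_a["labels"] + group_b["labels"]
    acc = accuracy(all_scores, all_labels, threshold)
    print(f"threshold={threshold:.2f}  accuracy={acc:.2f}  parity_gap={gap:.2f}")
```

Real fairness audits use richer metrics (equalized odds, per-group error rates), but even this toy sweep shows that the threshold minimizing the parity gap need not be the one maximizing accuracy, which is exactly the trade-off to be decided up front.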

The next principle is explicability: models should operate transparently. We want to make sure you really have transparency, an understanding of how the model was built, what the model is doing, and why. It should be auditable: you should be able to produce models that may look black-boxy in terms of the algorithms underneath, but that you can in essence dissect, so they can be comprehended by human beings rather than producing purely unexplainable answers. If you have a deep learning model this may be very difficult to do, but being able to come up with a reasonable, testable explanation of why it works is important. You also want people's consent: they should know they are using, or are subject to, AI systems, and explicability is a precondition for informed consent. So we're really talking about a few things here: transparency, understanding the model and its decision-making; explainability, understanding the reasons behind each decision; and provability, mathematical certainty. Global interpretability is one thing, and it's important: how does the model do what it does? Then there's something called local explainability: why did the model make a particular decision? Why was my loan denied? Why did it predict what it did? Those individual decisions are local explainability. And finally provability, which is required only in the most sensitive applications: proving, with mathematical certainty, the basis for those decisions. The two big things here are local and global explainability: how does the model work, and why did it make a particular decision?

The third principle is the prevention of harm: keeping human beings safe, secure, robust, and private. The four pillars of prevention of harm are listed here. Robustness: the model is unlikely to break or fail, and its outcomes are reliable and consistent. Safety: physical safety is important here; with robots in factories, for instance, it's important that they are physically safe for the humans working alongside them. Security: does it protect against malicious attacks; is it as virus-proof as it can be? And privacy: does it protect a person's data privacy, their personally identifiable information (PII)? Those are the pillars of preventing harm to people.

All of these principles are ethical principles, and this is one of the key points: at all parts of the modeling process, from strategic understanding all the way through development to deployment, the ethical principles of fairness, explicability, and prevention of harm must be maintained, and one must be vigilant about these particular principles during the development, deployment, and maintenance of a model.
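
One simple way to picture local explainability ("why was my loan denied?") is perturbation-based attribution: swap each feature of a single applicant to a baseline value and measure how much the score moves. The scoring function, the features, and the weights below are hypothetical stand-ins of my own, not a real lending model; libraries such as SHAP and LIME do a more principled version of the same idea.

```python
def score(applicant):
    """Toy risk score: a hand-weighted linear model (illustrative only)."""
    return (0.5 * applicant["income"]
            + 0.3 * applicant["credit_history"]
            - 0.4 * applicant["debt_ratio"])

def local_explanation(applicant, baseline):
    """Attribute the score to features by swapping each one to a baseline
    value and recording how much the score changes."""
    base_score = score(applicant)
    contributions = {}
    for feature in applicant:
        perturbed = dict(applicant, **{feature: baseline[feature]})
        contributions[feature] = base_score - score(perturbed)
    return contributions

applicant = {"income": 0.2, "credit_history": 0.4, "debt_ratio": 0.9}
baseline  = {"income": 0.5, "credit_history": 0.5, "debt_ratio": 0.5}

# Most negative contribution = feature that hurt this applicant's score most.
for feature, delta in sorted(local_explanation(applicant, baseline).items(),
                             key=lambda kv: kv[1]):
    print(f"{feature:15s} moved the score by {delta:+.2f} vs. the baseline")
```

The point is the contract, not the arithmetic: for any individual decision, the governance structure should be able to produce a ranked, human-readable list like this of what drove it.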

Now let's talk about governance models. Governance is the other side of the coin: we started with ethics and now we're moving to governance. Governance means accountability for all parts of the system, being able to have people check off what has happened and check in on how a model is progressing, with legal, technical, administrative, and management sign-off and understanding of the process of the model itself.

We really want to answer some important questions. When we are designing a new governance structure, what questions should we ask before we design it? What levels of governance are desired or needed? And finally, how do we practically implement these structures? So: how do we design it, what do we need to put in there, and how do we implement it? What we want here is a risk-minimization exercise: we want to assess and minimize the risk, and obviously, depending on what industry you're in, the amount of risk you're able to accept, or that you desire, or that your company's strategic priorities say you should take on, will differ. It may differ across different operations within a company, or it may be monolithic for a particular company; the risk appetite is really up to the company itself. In governance structures there is no one-size-fits-all solution, but what you do need to consider, again, are those questions: what the design requires, what levels and procedures you need to put into place, and how you are going to implement those structures.

Here's an example of designing the governance needed. We need questions at four levels: enterprise, management, model, and data. At the enterprise level: who is ultimately responsible? What oversight is required? How does intervention happen? At the management level: who has the authority to make these decisions? How are decisions made, which decisions are made, what reports are generated, and how does feedback occur? At the model level: how is the model, or a model decision, explained? How is model testing performed? And one question that's not listed here: how is the model assessed as being good enough? What is your stopping criterion when you are designing a model? Do you want to minimize the false positives, minimize the false negatives, or get the best F1 score, whatever it might be? That's a question that needs to be decided up front, before you design the model, not at the end of the whole process. And finally, how does the model's ROI get measured? For models that are meant to prevent things from happening, it's hard to measure a non-event. So how do you measure the ROI on models that are designed to prevent things? Maybe you compare: there were fewer accidents this year than last year, and our model did that. Sure, that's one way of doing it. But what if you're trying to prevent low-frequency events that rarely ever happen? Would what you prevented not have happened anyway? Those kinds of ROI are very difficult; in fact, whole papers deserve to be written on measuring ROI for models of low-frequency events. Finally, for data: who has access to the data? What purposes can it be used for? Can you make a secondary use of the data after you have it; is that in your client agreement? And how long can the data be retained; does it have to be destroyed after a year? These are all important questions; again, there is no one-size-fits-all answer, but when you're designing a governance structure they are things you need to think about.
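
The "decide the stopping criteria up front" point can be sketched in code. The thresholds and toy predictions below are illustrative assumptions of mine; the idea being demonstrated is only that the acceptance criteria are written down and agreed before evaluation, so sign-off is a mechanical check rather than an after-the-fact negotiation.

```python
def confusion(preds, labels):
    """Return (true positives, false positives, false negatives, true negatives)."""
    tp = sum(p and y for p, y in zip(preds, labels))
    fp = sum(p and not y for p, y in zip(preds, labels))
    fn = sum((not p) and y for p, y in zip(preds, labels))
    tn = sum((not p) and (not y) for p, y in zip(preds, labels))
    return tp, fp, fn, tn

def f1_score(preds, labels):
    tp, fp, fn, _ = confusion(preds, labels)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)

# Acceptance criteria agreed with the governance board BEFORE development.
ACCEPTANCE = {"min_f1": 0.80, "max_false_positives": 2}

preds  = [1, 1, 0, 1, 0, 0, 1, 0]   # candidate model's decisions
labels = [1, 1, 0, 0, 0, 1, 1, 0]   # ground truth
tp, fp, fn, tn = confusion(preds, labels)
approved = (f1_score(preds, labels) >= ACCEPTANCE["min_f1"]
            and fp <= ACCEPTANCE["max_false_positives"])
print(f"F1={f1_score(preds, labels):.2f}  FP={fp}  "
      f"sign-off: {'yes' if approved else 'no'}")
```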

Responsible AI is not just a point in time, and I've tried to make the point that ethics and governance are part of the whole endeavor. If you look here, we have strategy at the top: corporate strategy, industry standards, regulation, internal policies and practices. That's how it starts; it's a path that goes from the upper left down to the bottom, from strategy through planning, portfolio management, delivery approach, and oversight of the program itself, through the ecosystem. What's the technology roadmap, where are we going to be a year from now, two years from now, and is our governance structure still going to be relevant if our technology is changing? Also important, as I'm sure you all realize, is change management: what are we going to allow to be changed in the model, in the technology, in the governance structure? Is there change management for the governance structure itself? Then we have a nine-step model, listed in the middle there, which I'll talk a little more about: the full development process, from business and data understanding (what is the problem you're trying to solve, established up front) all the way through evaluation and check-in after the model is deployed, and maintenance. Is it still working? Does the new data we have fundamentally change the parameters of the model? Who is responsible for maintaining it, and who is responsible for actually doing that kind of maintenance? And finally, audit, compliance, and operational support at the end.

So here's the brief on our nine-step process, and I've basically talked through it: you scope the value at the beginning, do business and data understanding and solution design; you prepare the data, preprocessing it and extracting it from wherever you need; you build the model; and you deploy it. Obviously that's a big step, and a lot of work goes into the data prep and the model building, but again, that's just one piece of the puzzle; it's not the only place the ethically based governance goes. Finally, there is value delivery at the end: the monitoring and check-in, and working the model into the workflows of your business. How is it going to integrate into the current methods and procedures you have going?

If we look at a traditional model of governance, there are three lines of defense. The first line of defense is what we call the creators and the executors: the people who build, develop, and code the models and operate them. They are really the big lens on the entire model life cycle.

The second line of defense is the managers and directors, who assess the risks of the model and are responsible for creating the strategy around it. And finally there are the auditors: the people who oversee the other two to ensure compliance, to make sure everything fits with the larger policies, laws, and strategy of the company and the larger ecosystem around it, so that the model, how it's deployed and how it's used, complies not only with the laws but also with the ethics and the strategy of the company.

Any governance structure really needs validation and review; all models need to be validated and reviewed using these three lines of defense: the creators and executors, the managers and directors, and the auditors. So, for instance, at the beginning we have some sort of model, whether it comes from citizen developers (in a democratization case, where your employees can develop models themselves), from a third-party tool, or from a custom build, and the data it draws on, which has to comply with privacy requirements. That's the first step. Then it goes to the project team. What are they responsible for in terms of the governance structure? They are responsible for model development and data use, and they must follow the guidelines they've been given, whether that means filling out particular paperwork or some other administrative process. Maybe it's very simple bureaucracy; or, depending on the risk of the model, it could be very detailed and very careful, making sure all the t's are crossed and the i's are dotted, and it really needs to comply with the organization's strategy. Risk assessment is also a very important part of this, not only in terms of how the model is being built but also in terms of downstream governance requirements: in essence, how is it going to affect our brand and reputation and, most importantly, our client's brand and reputation and our relationship with that client? What you do not want is to have a model be developed, and developed poorly, hand it to the client, have it bomb completely on the client's side, and ruin your reputation for all models to come. And finally, process documentation: not an exciting topic to a lot of people, but the project team, the creators and developers of the model, really need to document things as they go along.

Then there is leadership and review: the managers and directors. They need to lay out the standards and specifications at the beginning of any model, including the acceptance criteria. Again, this may be strategic: as I said, are we going to minimize false positives, minimize false negatives, or optimize the F1 score, whatever it might be. They also have to conduct a truly independent review. That's difficult, but it would be great, if you have the resources, to have this manager-and-director team sit outside the business unit the developers are in, so that the review really is independent. As time goes on, people reviewing their own organization's models can tend to take shortcuts along the way, which is something you definitely don't want in a governance structure. And finally there needs to be a final approval, a model sign-off, so that someone is ultimately responsible for the model itself, whether for bias, fairness, explainability, interpretability, or security. Then the internal audit goes through and asks: is our governance structure right? Are we compliant with what we need to be in terms of laws and regulations, in terms of the external environment? That's the auditors. Other groups possibly involved in the model governance structure include an ethics board, whether it sits inside the company, externally, or in a professional organization, where you have to get your model or your project approved along the way. And then, finally and really importantly, there is ongoing monitoring. As the model is out in the field, if the data changes, do we need to go back and change the model? Do we need to recalibrate it, or retrain it? Is the model even valid anymore? With COVID-19, obviously, a lot of models are going to need vast retraining, or will they ignore COVID? This is a question that modelers are going to have to contend with over the next few years: for this period since March in the US, how are models going to deal with such a very low-frequency, almost unprecedented event?
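
The ongoing-monitoring step described above is often implemented with a drift statistic such as the Population Stability Index (PSI), which compares the distribution a feature had at training time with what the deployed model now sees. The sketch below is a minimal, assumption-laden version: the income samples are invented, and the 0.2 alert cut-off is a common industry rule of thumb, not anything the talk prescribes.

```python
import math

def psi(expected, actual, bins=4):
    """Population Stability Index between a training sample and a live sample.
    Buckets are equal-width bins over the training sample's range."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def bucket_shares(values):
        counts = [0] * bins
        for v in values:
            counts[sum(v > e for e in edges)] += 1
        # Floor each share slightly above zero so the log stays defined.
        return [max(c / len(values), 1e-6) for c in counts]

    e_shares, a_shares = bucket_shares(expected), bucket_shares(actual)
    return sum((a - e) * math.log(a / e) for e, a in zip(e_shares, a_shares))

training_incomes = [30, 35, 40, 45, 50, 55, 60, 65, 70, 75]
live_incomes     = [20, 22, 25, 28, 30, 33, 35, 38, 40, 42]  # post-shock shift

drift = psi(training_incomes, live_incomes)
print(f"PSI = {drift:.2f} -> "
      f"{'retrain / recalibrate' if drift > 0.2 else 'model still valid'}")
```

A governance structure can wire a check like this into the value-delivery stage so that "do we need to retrain?" is asked on a schedule, not only after something breaks.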

Finally, in the last couple of minutes, I want to give some thoughts on deployment of the model. It seems almost trite, but if it's an ethically based governance model, you really have to ask yourself some questions before you release it. Who is ultimately going to be impacted by this, by its positive decisions and its negative decisions? What data am I using, and whom does it represent? Am I using the data properly? Does this model shift power? This goes back to the beginning and to fairness: does it shift power, whether financial, political, or military, and if it does, is that acceptable, is that compliant? Can the model be contested by people who've been adversely affected by it? Is it contestable, and if it does get contested, are you able to defend it? Another important, existential question: should the model exist at all? Do I really need this model? If you're building the model, it's very likely because you're trying to make something more efficient, but it's a very important question: does it enable better decision-making? And finally: have I tested it under many different conditions? Obviously, if you're a data scientist, you run a training set, a validation set, and a test set, but have I really considered all the other external factors and conditions under which the model might run? Again, this goes to low-frequency events like COVID: how is that going to change things? Have I tested the model under that condition? Is the model even valid under one of these events?
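
The last question ("have I tested it under many different conditions?") can be sketched as a scenario sweep over a hold-out set. The model, the shocks, and the accuracy floor below are all hypothetical assumptions of mine; the point is only that conditions beyond the usual train/validation/test split are enumerated and checked before deployment.

```python
def model(x):
    """Toy classifier standing in for the deployed model."""
    return 1 if x >= 50 else 0

# Hold-out examples as (feature, true label) pairs.
holdout = [(30, 0), (45, 0), (55, 1), (70, 1), (80, 1), (40, 0)]

# Hypothetical stress scenarios: each perturbs the input before scoring.
scenarios = {
    "baseline":          lambda x: x,
    "income shock -20%": lambda x: x * 0.8,
    "inflation +30%":    lambda x: x * 1.3,
}

ACCURACY_FLOOR = 0.8  # agreed minimum before release (illustrative)
for name, shock in scenarios.items():
    acc = sum(model(shock(x)) == y for x, y in holdout) / len(holdout)
    flag = "OK" if acc >= ACCURACY_FLOOR else "REVIEW BEFORE DEPLOY"
    print(f"{name:20s} accuracy={acc:.2f}  {flag}")
```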

From the conference "Global Artificial Intelligence Virtual Conference".