Responsible AI – Principles & Practice By Tirthankar Barari, Solution Architect, Microsoft


About speaker

Tirthankar Barari
Solution Architect at Microsoft

Experienced technology leader adept at architecting data solutions, building teams around ideas, and delivering products and SaaS environments on schedule and on budget in cloud computing, big data, analytics, and AI.


About the talk



Thanks, everyone. I'll start with my introductory slide and then go from there. As mentioned, I'm a Solution Architect at Microsoft, with a background mostly in data, AI, and the cloud. At the moment I work with the financial services and insurance vertical at Microsoft in the US markets. Today's topic is Responsible AI principles and practices. So what is Responsible AI? Well, it is the entire ecosystem for the advancement of AI, driven by ethical principles

that put people first. This includes governments, which build regulations to prevent any harm to society, and it also includes innovators, who must ensure responsible development that has a positive impact for all, with no harm and no bias, when embracing Responsible AI in their systems. Now, with the proliferation of different AI systems, services, and software tools, there are a lot of societal concerns that we have heard about recently, like facial recognition and the privacy of the individuals

on whom facial recognition is being used. Deepfakes are another example, where people are misled by information produced using deepfake technology. Then consent and contact tracing is another example: contact tracing can be used for good things, like we are using it now for the pandemic we are all facing, but on the other hand there are also privacy issues and concerns about how far we can go with the contact information and how we make sure it is used for its rightful purpose.

Yet another concern is the disproportionate impact of AI, where only a certain group of people gets the benefits of AI systems, not everybody. So basically we need to make sure that the positive impact of all AI technologies spans everybody in our society. Now that we have an idea of what Responsible AI is, you may ask: why Responsible AI? If I own a small business, or if I am in a leadership position in a midsize or large enterprise, why

should I care about Responsible AI? Well, a recent report showed that nine out of ten executives have reported facing ethical issues in the implementation of AI. And if you look at the numbers, they are quite alarming: everywhere across the globe, in different countries, executives have faced ethical issues to the extent of more than 80% in each country. So it's quite alarming and we have to take it very seriously. These executives cited reasons including the pressure to urgently implement AI as one of the reasons that

they had faced ethical issues. Another was a failure to consider ethics when constructing AI systems: they built AI systems without doing due diligence, without looking at the failures that might happen from not considering ethics. And also a lack of resources dedicated to ethical AI systems; that is another very important factor, where if your people are not trained enough and you don't have the right resources, it becomes

very difficult to embrace AI systems in a responsible fashion. One thing that we come across quite often is fairness. There are many ways that an AI system can behave unfairly; for example, a voice recognition system might fail to work as well for women as it does for men. Another example: say you have built a screening application based on AI to screen applicants for loans or jobs. Now, if that model is not carefully built using Responsible AI, using fairness, it might be picking up good candidates only from a

particular segment. For example, maybe white men are given preference over other groups, so that the AI system you have built is fair to one category of people but not to every category of people. That is bias, and the ask is to make sure that you avoid negative outcomes of AI systems for different groups of people. Now, how do I embrace Responsible AI in my organization? Well, the "how do I" questions are like: how do I build AI systems that impact individuals and society in a positive way?

That is the question we have to answer when we embrace Responsible AI. How do you ensure the AI system is treating everyone fairly? And how do we prepare our workforce, individuals, and society at large for this new age? Well, in order for your organization to develop and deploy AI in a responsible manner, you should first ask some questions, such as: how can you use a human-led approach to drive value for your business? How will your organization's

foundational values affect your approach to AI? For example, if you are a nonprofit organization, then doing good for society is your foundational value, right? Then your approach toward AI would be different from, say, an organization where one of the core foundational values is providing best-in-class service to its customers. If your foundational value is to provide best-in-class service to your customers, then your approach to AI would be a little different from

that of the nonprofit organization in the earlier example. So you need to make sure that your approach to AI matches your foundational values. Another thing to consider is: how will you monitor your AI systems to ensure they are evolving responsibly? Now, here are the six principles that guide Microsoft's responsible development and use of AI. I took this as an example as it touches upon all the key components of Responsible AI principles. The first one is fairness: AI systems should treat everyone fairly and

avoid affecting similarly situated groups of people in different ways. Next is reliability and safety: testing, testing, testing. Make sure that your system is reliable and safe. Rigorous testing ensures that AI systems can respond safely in unanticipated situations and edge cases, to avoid dangerous consequences. For example, you may have a self-driving car that uses images to identify whether a person is in front of the car so that it can brake; but if your system is not well tested, it may not detect a certain class of people as human

in front of the car, and it might not brake. That would be a dangerous consequence, right? You don't want to be in that situation, so you have to make sure that your AI system is built to perform reliably and work safely. Also, over time, as we all know, models become unreliable and inaccurate because of data drift, so you have to make sure that your model stays accurate and up to date. And above all, human judgment is key to identifying potential blind spots and biases in the model. The

next one is privacy and security. We all know that data is fundamental and essential to building successful models; we need the data to train our models to predict something, so it is crucial. We also have to make sure that data privacy, compliance, and regulations are met in our system. Access to data is essential for your AI systems to be accurate, so data privacy and security compliance are vital to ensure that the collected data is used by the AI system responsibly. The next one is inclusiveness.

Everyone should benefit from AI technology. You should recognize exclusion and learn from diversity in your system and in your organization. Build AI systems that solve for one and extend to many. The next one is transparency. People must understand how AI systems are making decisions. For example, if a bank is using an AI system to decide the creditworthiness of a customer, people should understand and know how the bank is using it for deciding creditworthiness. Be honest about when, why, and how your

business is choosing to deploy an AI system. Then finally, accountability: have internal review boards to provide insight and guidance on AI systems' development, testing, and deployment; ensure no harm is done by such systems; and ensure AI systems are not the final authority on decisions impacting human lives. Humans should maintain meaningful control over your system. You have to have a complete log, an audit trail of your activities, so that you can manage accountability. Those are the guiding principles.

What is responsible machine learning? Responsible machine learning has three foundational pillars: understand your machine learning model; then protect the data that is used in your machine learning system; and then control the system, with audit trails and full details about the entire lifecycle of your machine learning system. The first pillar is to understand your model. Take for example a loan application decision: when you create a model for loan application acceptance, you have to have the model,

but you should also be able to interpret how the decisions are being made by this model. InterpretML is an open-source package by Microsoft that is used to interpret and explain your ML models; it is available on GitHub, and you can download it and use it in your application. Also, for fairness, Fairlearn is another open-source package that can assess and mitigate the potential unfairness of AI systems; it is also available for download from GitHub. It actually helps you to determine the

unfairness of your model and then also provides algorithms to mitigate it. Now, under interpretability, these are all the libraries and packages that are available. InterpretML is an open-source package that incorporates interpretability techniques under one roof. With this package, you can train interpretable glass-box models and explain black-box systems. With glass-box models, you can interpret clearly which feature had the most impact on a particular decision. For black-box models, such as neural networks, where

you won't be able to interpret clearly, you would still be able to explain, using some of the explainers, how a particular decision was derived. The explainers available for black-box models include SHAP, LIME, and surrogate models. Under glass-box models there is the Explainable Boosting Machine, EBM, which came out of Microsoft Research. It is a glass-box model which is as accurate as some of the black-box models, but unlike them it produces lossless explanations, meaning that the Explainable Boosting Machine is accurate and at the same

time explainable, or interpretable. Generally, models are either pretty accurate but not intelligible, or intelligible but not accurate, but EBM is one example where it is both accurate and intelligible. Interpretability in general is where, say, you understand why a particular loan application was rejected: what it was in the age or income of the person that contributed to that decision.
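
To make the black-box route concrete, here is a minimal sketch using the shap package directly (InterpretML wraps similar explainers); the model and data are synthetic stand-ins, not from the talk:

```python
# Hedged sketch of black-box explanation with shap; model and data are synthetic.
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = shap.Explainer(model.predict, X)  # model-agnostic: only needs a predict function
shap_values = explainer(X[:20])               # per-feature attributions for 20 rows
shap.plots.bar(shap_values)                   # which features drove the decisions overall
```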

Another way of getting additional information and explanation out of the model is with DiCE. DiCE stands for Diverse Counterfactual Explanations for ML classifiers. What it does is, instead of ranking features by their predictive importance, DiCE internally probes the model to find the required changes that would flip the model's decision. For example, it might give you the information: you would have received the loan if your income was $10,000 higher. So instead of saying that your loan was

denied because your earnings weren't up to the mark, it says: well, your loan was denied, but if you had $10,000 more in income, or if your age was five years higher, then you would not have been denied, or you would have been approved for the loan. So that's DiCE, once again an open-source package that you can use to get these kinds of insights.
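
A minimal sketch of what that looks like with the dice-ml package; the dataframe `loans_df`, its column names, and the trained sklearn classifier `clf` are hypothetical stand-ins:

```python
# Hedged sketch of DiCE on a loan classifier; loans_df, its columns, and clf
# are hypothetical stand-ins, not from the talk.
import dice_ml

data = dice_ml.Data(dataframe=loans_df,                  # training data incl. outcome column
                    continuous_features=["income", "age"],
                    outcome_name="approved")
model = dice_ml.Model(model=clf, backend="sklearn")
explainer = dice_ml.Dice(data, model, method="random")

# For one denied applicant: what minimal changes would flip the decision?
query = loans_df.drop(columns="approved").iloc[[0]]
cfs = explainer.generate_counterfactuals(query, total_CFs=3, desired_class="opposite")
cfs.visualize_as_dataframe(show_only_changes=True)       # e.g. "income +$10,000 => approved"
```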

Now, Interpret-Text is another open-source package with interpretability techniques for NLP (natural language processing) models, and a visualization dashboard that comes with it to view the results. So here, say you have a text document and you are classifying it as fiction or nonfiction. Interpret-Text would tell you the five most important keywords that led to classifying the document as fiction versus nonfiction. You can also ask for the four, five, six, or ten most important words that led to the decision. In this case, for example, "train", "dragon", and "travel" are the important words that led to the decision to classify the document, these texts and sentences, as fiction.
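
Interpret-Text ships its own explainers and dashboard; as a rough illustration of the underlying idea only (not the Interpret-Text API), here is a hand-rolled way to surface the words that push a tiny fiction/nonfiction classifier toward "fiction":

```python
# Not the Interpret-Text API: a hand-rolled sketch of the same idea, surfacing
# the words that push a tiny fiction/nonfiction classifier toward "fiction".
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

docs = [
    "the dragon boarded the night train to travel beyond the mountains",  # fiction
    "the quarterly report shows revenue and operating costs by region",   # nonfiction
    "a knight and a dragon travel by train through an enchanted forest",  # fiction
    "the census data lists age, occupation and income for each adult",    # nonfiction
]
labels = [1, 0, 1, 0]  # 1 = fiction, 0 = nonfiction

vec = TfidfVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(docs), labels)

top5 = np.argsort(clf.coef_[0])[::-1][:5]   # largest positive weights => "fiction"
print(vec.get_feature_names_out()[top5])    # e.g. 'dragon', 'train', 'travel', ...
```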

As I mentioned earlier, fairness assessment and mitigation uses Fairlearn, which is open-source software. It allows you to measure disparities across demographic groups; based on that, it can tell you whether your model is fair or unfair, and if it is unfair, it also provides mitigation algorithms with which you can reduce the unfairness of your model. Here you see, in this model comparison, that as your accuracy increases, unfairness also increases; the disparity also increases. Using Fairlearn, you can manage to come to a sweet spot where your accuracy and unfairness match your requirements.
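
A minimal sketch of that assess-then-mitigate loop with Fairlearn; the classifier `clf`, the train/test splits, and the sensitive feature columns are hypothetical stand-ins:

```python
# Hedged sketch of Fairlearn's assess-then-mitigate loop; clf, the splits, and
# the sensitive feature columns are hypothetical stand-ins.
from fairlearn.metrics import MetricFrame, selection_rate
from fairlearn.reductions import ExponentiatedGradient, DemographicParity
from sklearn.metrics import accuracy_score

# Assess: accuracy and selection rate broken down by demographic group
mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_test,
    y_pred=clf.predict(X_test),
    sensitive_features=sex_test,
)
print(mf.by_group)      # per-group metrics expose disparities
print(mf.difference())  # gap between the best- and worst-treated groups

# Mitigate: refit under a demographic-parity constraint, trading some accuracy
mitigator = ExponentiatedGradient(clf, constraints=DemographicParity())
mitigator.fit(X_train, y_train, sensitive_features=sex_train)
y_pred_fair = mitigator.predict(X_test)
```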

Now, the second pillar of responsible machine learning is protect. As mentioned, it is important to make sure that you have the data protected. Differential privacy is one of the techniques used to hide the contribution of an individual by adding noise, and WhiteNoise is an open-source software package that is used to add noise into the data.

Some of the key points here: epsilon is a measure of how noisy, and therefore how private, a report is, and delta is a measure of the probability that a report leaks private information. Together they give an idea of how much of the data has been hidden and how well privacy has been preserved. Another point is the differential privacy budget. Say, for example, you are querying the data: give me the salaries of people who work in financial services in the Boston area, right? Maybe your data set has only four such people

and their salaries. In that case one could quite easily identify the people and their salaries, and the privacy would be compromised. So white noise is added to the data so that a person cannot easily identify individuals. But if I keep on querying quite often, if I query a thousand times, maybe then I might be able to filter out the white noise and get to the actual data. So what these tools do is rate-limit multiple user queries, through the privacy budget, to ensure privacy, so that you cannot query 10,000 times and then finally get to the actual data.
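
A minimal sketch of the Laplace mechanism that underlies tools like WhiteNoise (this is the textbook mechanism, not the WhiteNoise API): the noise scale grows as epsilon shrinks, and every answered query spends privacy budget.

```python
# Textbook Laplace mechanism, not the WhiteNoise API.
import numpy as np

def private_count(true_count: int, epsilon: float) -> float:
    """Smaller epsilon => more noise => stronger privacy."""
    sensitivity = 1.0  # one person joining or leaving changes a count by at most 1
    return true_count + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

# "How many people work in financial services in the Boston area?" True answer: 4.
print(private_count(4, epsilon=0.5))  # very noisy answer, strong privacy
print(private_count(4, epsilon=5.0))  # less noise, weaker privacy
# Each answered query spends privacy budget, which is why queries are rate-limited.
```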

Another package protects the data using a concept called homomorphic encryption: Microsoft SEAL. Very quickly: with homomorphic encryption, the AI system works on encrypted data, and when it is working on encrypted data it doesn't know the actual user data, so the user's privacy is maintained. To give you an example, say you have got two numbers, like 3 and 5; I'm just cooking up an example that predicts your salary based on your years of experience in different languages. Say you have 3 years of experience in

Java and 5 years of experience in Python. Now, you don't want to send that as plaintext to the AI system, so I send 3 and 5 after encrypting them with an encryption algorithm, and the AI system only gets the encrypted data, say 6 and 10, corresponding to 3 years of experience in Java and 5 years of experience in Python. Very simply, in this case the AI system computes 6 + 10 = 16, and that 16 is the predicted value in encrypted form, an encrypted prediction. So the 16 gets decrypted,

and the end user gets 8, so the end user now knows the predicted salary, say $80,000, for 3 years of experience in Java and 5 in Python. Yet the end user has never shared the plaintext data with the AI system and still got the prediction back. That was an example of how homomorphic encryption is used to protect the privacy of the user.
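
A minimal sketch of that 3 + 5 example using TenSEAL, a Python wrapper around Microsoft SEAL (the BFV parameters below are illustrative, not a vetted configuration):

```python
# Hedged sketch of the 3 + 5 example via TenSEAL, a wrapper around Microsoft SEAL;
# the BFV parameters are illustrative only.
import tenseal as ts

ctx = ts.context(ts.SCHEME_TYPE.BFV, poly_modulus_degree=4096, plain_modulus=1032193)

enc_java = ts.bfv_vector(ctx, [3])    # 3 years of Java, encrypted on the client
enc_python = ts.bfv_vector(ctx, [5])  # 5 years of Python, encrypted on the client

enc_total = enc_java + enc_python     # the server adds ciphertexts, never seeing 3 or 5
print(enc_total.decrypt())            # only the key holder recovers the result: [8]
```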

Now I'm going to the third pillar, which is control. Of course, in control you have to have an audit trail: basically all the information about when your data was used, how the data was used to train the model, and the entire lifecycle of the machine learning model, with logs of everything, so you have it all. Datasheets is another open initiative that gives you a way to document machine learning assets created and used in the ML lifecycle. For data sets, it records how and why a data set was created, how it should or should not be used, and potential ethical and legal considerations. For models, the datasheet can provide information about the intended use of the model and also how the model was built, and so on and so forth.
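
As a rough illustration of the kind of record a datasheet keeps (a hypothetical structure, not any specific library's API):

```python
# Hypothetical structure only: the kinds of questions a dataset datasheet records.
from dataclasses import dataclass, field

@dataclass
class Datasheet:
    name: str
    why_created: str
    how_collected: str
    intended_uses: list = field(default_factory=list)
    should_not_be_used_for: list = field(default_factory=list)
    ethical_legal_notes: list = field(default_factory=list)

sheet = Datasheet(
    name="US Census adult income",
    why_created="benchmark for income classification research",
    how_collected="extracted from the 1994 US Census database",
    intended_uses=["research", "fairness and interpretability demos"],
    should_not_be_used_for=["real lending or hiring decisions"],
    ethical_legal_notes=["contains sensitive attributes such as sex and race"],
)
print(sheet)
```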

So with that, I have given the overall summary of what I had to share. I can do a quick demo very quickly, I think; let me share this. Here is my Jupyter notebook in which I'm using InterpretML. First, basically, I'm installing the interpret package so that I can interpret my machine learning model. Then I'm getting the data; the data is

the US Census adult data set. I am splitting my data into training data and test data, and then I'm just showing the data here: it's basically the age, education, marital status, and occupation of different people in the data set. A histogram of the data shows that for people who are younger, the salaries are mostly below 50K. That's relevant to what I'm basically

building: a classifier which would predict whether a person's salary is above or below 50K. In the data I do see a trend that, as you age, your salary is more likely to be higher than 50K. Right now I'm training the model using the Explainable Boosting Machine, EBM. I'm using that to fit my model to the data, and then I am actually asking my model to explain itself. The global explanations, what the model has learned over the whole data set, are what I am showing here. It says that the model found that age is one of

the key features, then marital status, then capital gains. These are some of the key features that contributed toward the prediction of whether a person's salary would be more than 50K or less than 50K. Then the local explanation is where I get some idea as to how an individual prediction was made. If you look at this one, where the prediction was 1, meaning the salary is higher than 50K, and in actual reality the data also said that the salary was more than 50K, in that case

it also shows that capital gain is one of the features that contributed to the prediction that the salary would be more than 50K, which makes sense, because a huge amount of capital gains is a strong signal that the salary is more than 50K. I don't have any more time left, so I will hand it over; if there are any questions, I'll see if I can answer some of those.
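
For reference, the steps in the demo correspond roughly to this minimal sketch, assuming the standard interpret package API and fetching the adult census data from OpenML:

```python
# Minimal sketch approximating the demo, assuming the standard interpret API
# (pip install interpret); data fetched from OpenML rather than the talk's file.
from interpret import show
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split

adult = fetch_openml("adult", version=2, as_frame=True)
X, y = adult.data, adult.target
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ebm = ExplainableBoostingClassifier()            # glass-box model from Microsoft Research
ebm.fit(X_train, y_train)

show(ebm.explain_global())                       # what the model learned overall
show(ebm.explain_local(X_test[:5], y_test[:5]))  # why individual predictions were made
```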
