Google Developers ML Summit '19
September 18, 2019, Kirkland, USA
Panel discussion - Kirkland ML Summit ’19

About the talk

Panel discussion with Research Scientist Aleksandra Faust, Developer Program Engineer Andrew Ferlitsch, Head of Data Analytics and AI Solutions Lak Lakshmanan, and AI Resident Ali Mousavi.

The Kirkland ML Summit brings together developers from across the globe to discuss recent developments and get the latest news on everything Machine Learning. Join our many sessions to keep up with what’s going on in the Machine Learning world.


About the speakers

Lak Lakshmanan
Head of Data Analytics and AI Solutions at Google Cloud
I lead a team that builds software solutions for cross-industry business problems using Google Cloud's data analytics and machine learning products.

Aleksandra Faust
Deep Learning Task and Motion Planning Researcher, Technical Lead and Manager at Google Brain Robotics
Aleksandra Faust is a Deep Learning Task and Motion Planning Technical Lead and Manager at Google Brain Robotics, specializing in reinforcement learning. Previously, Aleksandra led machine learning efforts for self-driving car planning and controls at Waymo and Google X, and was a researcher at Sandia National Laboratories, where she worked on satellites and other remote sensing applications. She earned a Ph.D. in Computer Science at the University of New Mexico (with distinction), a Master's in Computer Science from the University of Illinois at Urbana-Champaign, and a Bachelor's in Mathematics from the University of Belgrade, Serbia. Her research interests include reinforcement learning, adaptive motion planning, and machine learning for decision-making. Aleksandra won the Tom L. Popejoy Award for the best doctoral dissertation in Engineering, Mathematics, and Sciences at the University of New Mexico in the period 2011-2014. She was also awarded Sandia National Laboratories' Doctoral Studies Program and New Mexico Space Grant fellowships. Her work has been featured in ZDNet, the New York Times, and PC Magazine.

Ali Mousavi
AI Resident at Google Brain
I am currently an AI Resident at Google AI. Ph.D. in Electrical and Computer Engineering, Rice University, May 2018; advisor: Richard G. Baraniuk; thesis: Data-Driven Computational Sensing. M.Sc. in Electrical and Computer Engineering, Rice University, May 2014; advisor: Richard G. Baraniuk; thesis: Topics on LASSO and Approximate Message Passing. B.Sc. in Electrical Engineering, Sharif University of Technology, June 2011. Research interests: machine learning and artificial intelligence.

Andrew Ferlitsch
Developer Program Engineer, Machine Learning/AI at Google
Presently, I develop curriculum and teach artificial intelligence, data science, machine learning, and computer vision. Formerly a principal research scientist and project manager with over 20 years of experience in research fields and 30 years of industry experience, with 115 issued US patents. Master's in Computer Science, with a major in artificial intelligence. I have worked in a wide range of technology fields doing commercially driven research. My primary expertise over the last 5+ years has been machine learning, neural networks, geospatial technologies, data science/analytics, open/big data, and autonomous vehicles. Primary programming skills/frameworks: Python, C#/C++/C, Java, TensorFlow, Keras, scikit-learn, Visual Studio.

Transcript

As I mentioned before, I'm in Google Cloud AI developer relations. I spent over 20 years of my career in Japanese IT. I was originally a research scientist, and then I led research teams. All of this was before modern-day AI and deep learning, but we could do many of those things back then: I worked on augmented reality, I worked on telepresence, and the last thing I did was autonomous vehicles. We had no deep learning; we just had staffs of hundreds of PhD people, and we had to spend [enormous effort]. It's just amazing how deep learning has changed all that.

I'm Lak, and like Andrew I started out doing machine learning. At the beginning of my career I worked on weather forecasting, so if you've ever been on a plane that got delayed, you can blame the models we built. Then I had an opportunity: I had been consulting for a startup, and I moved to Seattle to do the startup, which was built around the time that deep learning had come in and started to change how machine learning was done. It was also the time when cloud was becoming very popular, and at the startup we did everything with cloud, using more modern techniques. About a year later I said, okay, where do I go now? If I want to do both cloud and machine learning, Google is a pretty obvious place, and here's where I am.

Hello everyone, I'm Ali Mousavi, an AI Resident at Google Brain, here in the Kirkland office. I did my undergrad in Iran at Sharif University, in electrical engineering. At the end of my undergraduate studies I wanted to do information theory, so I went to EPFL and did an internship on information theory, but my path changed. I came to the US for a PhD, again in electrical and computer engineering. I started out doing signal processing and statistics, working on LASSO and compressive sensing, and then after my master's I started to shift toward machine learning. So the focus of my PhD was developing machine learning techniques for inverse problems. I started my PhD with the goal of becoming a university professor, but as I was approaching the end of it, I noticed that even university professors were joining Google, so that path wasn't for me. At the end of my PhD I had two offers: one was from Google, and the other was a faculty position in the Texas A&M University CS department. I chose Google over academia, and I'm here right now.

Hello, I'm Aleksandra Faust. I lead the task and motion planning team in robotics at Google Brain. Before I joined Google Brain, I was with Waymo, working on motion planning and machine learning there, and before that I was with Sandia National Labs. My research interest has been reinforcement learning, reinforcement learning for robotics, before it was deep. When I was at Sandia I did path planning with no machine learning, and I decided to go and get my PhD a bit later on. Among other things, the research scientists on that project told me: okay, you're going to go and do your machine learning, but you know what, I'm not putting any of your learning stuff onto the system unless [you can prove it is safe]. That shaped my point of view as a researcher: how do we bring these machine learning and reinforcement learning methods onto safety-critical systems, and how do we think about safety and deploying responsibly?

Thank you. Even for myself it was really valuable to hear those backgrounds, and I'm really happy that the team found folks with such diverse backgrounds and diverse experience. I feel like this is a good question to kick things off, and it's a bit of a loaded question for all of you. Having been in the industry for the period of time that you have, you've seen a lot of evolution in machine learning and artificial intelligence. Considering what you've seen in the past and considering what you see today, what would you say, for anyone who's going into machine learning or artificial intelligence, is one of the biggest challenges today? Who wants to take the first one?

I don't think the challenge has changed in the last 25 years, which is: the challenge is always to get the best data that you can, and to spend as much time as you need to improve the quantity of your data and improve the quality of your data. The type of model is great for doing research; it's great for showing the new half-percent improvement that academia thrives on. But in reality, the same amount of investment in time and effort can be spent getting another data source, and oftentimes you will get much better results by spending that time and effort getting new data, different types of data, rather than spending it on improving your modeling.

I can offer one insight. I think, particularly if you're in the data science field, the challenge is really to be committed to lifelong learning, or you might not find another job. I can reflect on it: about 12 years ago I was in imaging sciences, but people also referred to me by this very obscure title called "data scientist" before anyone really knew what that was. I did basic things: I cleaned up data, I did feature engineering, and I worked with data ontologies, how data from one source aligns with data from another source, and I was sort of known around the world for doing that, within a really narrow field. Then big data came along, and you had to know MapReduce and you had to know Hive, and I didn't know that stuff. Suddenly I wasn't a data scientist anymore, and life was just fine. During all that time (in my graduate program I had learned statistics and all that, but I never used it; my job never required it) even the people who were using statistics were data analysts and business intelligence people, doing simple linear regression in Excel spreadsheets, nothing at a large scale. Then maybe about five years ago, suddenly predictive intelligence, statistics, was being applied to big data, and now I had to do it at a big scale and on large polynomials. This was still before deep learning. I wasn't a data scientist again until I relearned it; eventually it all clicked: linear regression, logistic regression, normal distributions, it slowly all came back, and I was a data scientist again. Then about three years ago the deep learning thing really took off, and I didn't know deep learning, so I really had to scramble to learn what deep learning is. And I don't know what's next, but something next is going to happen.

That's really interesting. The first point was that the challenges themselves haven't changed in the last 25 years, and then there's the piece of continuously needing to learn, because the technology itself continues to change. From the registrations that we received, there were a lot of folks who are in academia, or teaching, or in a mentorship capacity, and some of the themes that were visible to us were: you're teaching, or you're talking about machine learning to other people interested in getting into machine learning. So for those who are working with, let's say, beginners (and I'm not putting anyone down with that label), what would be some resources, or what would be some suggestions, that these mentors should share with their students or mentees as to how to start, or what resources to use? Coursera? That's right, Coursera. That's nice.

I can give you an example from my wife, because her background was in robotics. She also did her PhD, and these days if you're on the job market you see a lot of jobs in machine learning and data science, but her background wasn't like that, it was robotics, and she decided to give it a try and see if she could change her field to machine learning as well. So she started to study things like Andrew Ng's course at Stanford, not the Coursera version but the more complicated version that is available on YouTube; you can see all the videos and assignments. She did that self-study, and then there are some programs, particularly data science fellowship programs. There are actually two companies right now that take fellows and give you a fellowship of intensive training in data science or machine learning or AI; one of them is called Insight, and the other is called The Data Incubator. You can apply; they have an eight-week program every quarter, I guess. She took part in that program, and she learned about Python, she learned about Spark, she learned about TensorFlow, and she ended up getting offers from Microsoft, Expedia, and other places in machine learning. So it's something that is really doable. I would say these online resources that we have are maybe some of the best options right now.

Excellent. Do you want to add anything to that? Yes, I do. So, how I started in deep learning: I did eventually take a course, and I didn't even bother to finish it. What I did instead, I kind of figured I might be better off just watching the videos and going over the work of people who are well-known. So for reinforcement learning I watched all the lectures of David Silver; then I watched all the videos of Andrew Ng and people like that. I just stuck with the people that I knew were good quality and were delivering the right information, the right education. And then after that I signed up for a Coursera course, and it was just so easy.

I just want to add quickly: if you're new to the field, it's always good to start with a background that you're already familiar with. Select a problem you want to work on, and then learn whatever is necessary to solve that problem. At least that worked for me.

That's a great piece of advice: take a use case that you're already trying to solve, use machine learning to solve it, and learn accordingly. That's amazing, thank you for sharing that. I want to turn it back to the audience: any questions for this great group of panelists?

Over here. I work at Nordstrom, and I support the computer vision team there. On that team we have both data scientists and software engineers, and that's been really useful for us: not only are we bubbling up insight about our products, but we have people on that team who can then create the APIs that deliver that insight. I'm curious about your experience with that dynamic. I'm aware of another model where the data scientists kind of live on this island and the software engineers live on that island, and they throw things over the fence to each other. I'm curious what your experience is with the trade-off between those two scenarios.

I can start, since in robotics there's hardware involved. Right now half of my team are research scientists and half are software engineers, and I believe we are the only research group in Brain, and potentially in Google, that has a full-time test engineer on the team. This system is up and running; it's nowhere near perfect, but it's starting to look like a system, and having that close connection between the two, integrating, taking care of the code, and following good software engineering practices, I think is incredibly important. That's my take.

It's very, very difficult to build production systems if you're not iterating rapidly, and you cannot iterate rapidly across the two islands, right? So you definitely need to make sure that for any new models you build, you're doing A/B testing on them as quickly as possible. And that means that you cannot be a pure data scientist; you have to know how your models are getting integrated into systems where you can get your evaluative feedback immediately. I think that's just another way of saying that you basically need to make sure that your data scientists are engineers too, and that your engineers understand the models that are being put into production.
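
To make the rapid-feedback point concrete, here is a minimal sketch (not from the talk; the model objects, the logging hook, and the 10% split are all hypothetical) of routing a small fraction of live traffic to a candidate model so both can be compared on real outcomes:

```python
import random

def route_prediction(features, model_a, model_b, b_fraction=0.1, log=print):
    """Serve most traffic with the production model (model_a) and a small,
    random fraction with the candidate (model_b). Both are assumed to
    expose a .predict(features) method."""
    arm = "b" if random.random() < b_fraction else "a"
    model = model_b if arm == "b" else model_a
    prediction = model.predict(features)
    # Record which arm served the request so downstream outcomes can be
    # joined to it later and the two models compared.
    log({"arm": arm, "prediction": prediction})
    return prediction
```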

Hi, I'm a data scientist, also at Nordstrom. When I look at a lot of the presentations and the material available online, a significant percentage of it (I can't give you an exact number) is almost all image processing or computer vision or NLP, and it's kind of frustrating, because those are extremely rich datasets where it's easy to get as creative as you want with a deep learning methodology. So what is your advice once you get into business problems, where getting good data is probably most of the work? How can we move a lot faster? Because right now it's very slow to go from POC-type stuff to productionizing, and the big bottleneck is just how much more difficult it is once you're dealing with business metrics, not image data and not audio data.

Yeah, I think the reason you saw a lot of big, giant advances first in vision and then in NLP is that the data has spatial relationships, and you can learn those with deep models; it's not just learning the data, it's learning the relationships between the data. That's the challenge with structured data, table data, databases, where one field is age, one field is income, and so forth: there isn't a spatial relationship between that data. So it's a much bigger challenge to develop models, and as you know, we are now working on trying to solve that, mainly with wide-and-deep models.
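
The wide-and-deep idea mentioned here combines a linear ("wide") path over sparse, memorizable features with an embedding-style ("deep") path over dense features. A minimal Keras sketch (the feature sizes and layer widths are hypothetical, not from the talk):

```python
import tensorflow as tf

# Hypothetical tabular inputs: a sparse one-hot/crossed "wide" part and a
# dense numeric "deep" part (e.g. age, income, and so forth).
wide_in = tf.keras.Input(shape=(1000,), name="wide")
deep_in = tf.keras.Input(shape=(10,), name="deep")

# Deep path: learns interactions between the numeric features.
x = tf.keras.layers.Dense(64, activation="relu")(deep_in)
x = tf.keras.layers.Dense(32, activation="relu")(x)

# The wide input joins the deep representation in a final linear layer,
# so the model can both memorize and generalize.
merged = tf.keras.layers.concatenate([wide_in, x])
out = tf.keras.layers.Dense(1, activation="sigmoid")(merged)

model = tf.keras.Model(inputs=[wide_in, deep_in], outputs=out)
model.compile(optimizer="adam", loss="binary_crossentropy")
```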

To add my two cents: in vision, and I know in NLP too, there are very strong metrics, but for many applications, like the one you were mentioning, the problem is actually to define the problem and to know how to measure it. I think that's something we as a community are not talking about enough, and we don't recognize that this is the thing. If the community, or the application area you're working in, had a better understanding of what needs to be measured, then the progress would come faster. So there is that prep work that needs to happen.

It's also the case that people sometimes complain that they don't have enough data, and then our team goes and starts working with them: okay, where is the data that you've got? And it turns out that the data they have is actually aggregated data, not the raw data. They've taken the transaction data and aggregated it, and now they have daily numbers, they have store numbers, they have product-wise numbers. It is possible (much harder, but possible) to basically go back and say: before you aggregated the data, where was the original raw data, the point-of-sale information, that was rolled up to create the business data? If you want to train extremely good machine learning models, one of the things you have to learn is to push back against people who give you aggregated data, and to say: no, I don't want that, I want the data before it was aggregated, before you ran the ETL, right? If you can ask those three whys and get to the original data, you can oftentimes do much better. I think we had a presenter talk today about how you have three steps and the errors accumulate from step to step, and by the time you're at the third point you cannot fix the errors that have happened in the ETL process, right? So the other thing you can do, if you are in a business and you have access to the most original data possible, is use that instead of the aggregated data.

Maybe this is not a hundred percent related to the question, but even in vision and NLP, if you just consider scientific metrics, I would say we are still having problems in some domains. For example, take a look at generative models, let's say GANs, if you have heard of them: we produce fake images of people with GANs, and they're getting better and better every few months, but we still don't have a clear metric by which we can measure how good a fake generated image is. Even that is an active area of research in the research community. So it's not just a problem with business metrics; we have problems in the research community with the computer vision and NLP metrics themselves.

That's a great question, thank you.
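
For context on the generated-image problem raised above: the scores in common use are heuristics rather than settled metrics. One of them, Fréchet Inception Distance, compares Gaussian fits of embedding features of real and generated images. A minimal sketch (the feature arrays are assumed to come from some pretrained embedding network; this is an illustration, not a tool from the talk):

```python
import numpy as np
from scipy import linalg

def frechet_distance(real_feats, fake_feats):
    """FID-style score between two sets of embedding features,
    each of shape (num_images, feature_dim). Lower is better."""
    mu_r, mu_f = real_feats.mean(axis=0), fake_feats.mean(axis=0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_f = np.cov(fake_feats, rowvar=False)
    covmean = linalg.sqrtm(cov_r @ cov_f)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # drop numerical-noise imaginary parts
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean))
```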

I am a data scientist in the neurosciences, and a lot of my job involves making machine learning models and then also interpreting the relationships between the input features and the output features, and of course for this you need interpretable machine learning models. I wanted to ask you: how important do you think interpretability in machine learning is currently?

We have, I think, a separate team at Google working on interpretability, so it means that Google cares about this issue very much, but I'm not sure about the details. Maybe I can route it to the others.

Yeah, I'll jump in. I want to point to a tool we recently released called the What-If Tool. I'm not too familiar with it yet, but it is a tool that allows you to take your model, and while you're doing inference it shows you what's happening inside your model and what types of features are contributing to the outcomes.

Yeah, and it also lets you ask, say for a classification model: what about the input, if you had changed it, would cause the opposite result? It basically says, for example: you have information about somebody who was applying for a loan, and it's able to say, if this person's income were $3,000 more per year, then we would have approved it. That's the decision-boundary point, and you can look at the boundary point in all of the input features, to basically see what would happen if something about the input had changed. And that helps you understand things like biases, right? So you can actually go through and say: what if I swap the gender of this person, does that make a difference or not? And you can go over your entire dataset, swapping genders, and ask how the accuracy varies, and you can do this by different slices, so you can use it to look at the impact. Having said that, you want to be very careful about the difference between interpretability of the model itself and what you were talking about, which is the relationship between the input and the output, because those are two very different things. With the relationship between the input and the output, you're asking for a relationship in the real world. The interpretability of the model is basically telling you what the model has learned, and different models are going to learn different things from the same underlying data. So you have to be very careful that you are not assigning meaning to those features, because the relationship between the input and output features is only the explainability of that model, while what you're asking for is the physical relationship. Those are two very different things, and you want to make sure that you don't mix one with the other.
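
The swap-and-compare analysis described above can also be approximated outside any particular tool. A minimal sketch (the column name, category values, and model API are hypothetical) that flips one sensitive feature across a dataset and measures how often the model's decision changes:

```python
import pandas as pd

def counterfactual_flip_rate(model, df, column="gender", values=("M", "F")):
    """Flip a single feature for every row and count how often the
    prediction changes. A high rate suggests the model leans heavily
    on that feature."""
    a, b = values
    flipped = df.copy()
    flipped[column] = df[column].map({a: b, b: a})
    original = model.predict(df)
    counterfactual = model.predict(flipped)
    return float((original != counterfactual).mean())
```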

I'll add a little more. Obviously we need explainability for the data scientists and the engineers, to sort of pitch to the business management people; they want to understand. But we're also finding other reasons for it once these models are deployed: people want to know why it gave the results it did. One of the things I hear about is how ML models are now used in background checks for renting property. Property management companies farm out background checks to these giant conglomerate companies that are licensed and bonded, so they're immune from the results, because that company is bonded. In the old system, if they denied somebody based on a background check, a human made that interpretation, and you as a renter could challenge it and get the reason why. Currently we're in a state where it's done by a model, and those companies just push back: that's what the model said; they don't know why. What do you do? It leaves renters in limbo. So as models start getting involved in making those types of decisions in our society, we will see the need, and probably regulatory changes, that are going to require them to be explainable.

One thing I want to add briefly: in reinforcement learning, and for the safety of interactive policies that are interacting with other agents, interpretability is more and more important, because it's tightly linked to safety. If we have two agents: if you know what I'm going to do, then you're going to react to me, and if I'm behaving in a way that you're not expecting, collisions can happen, bad things can happen. So that is becoming more and more important, and I don't know how we're going to do it; different use cases will decide what we're going to do, and we'll need some ways or means to figure it out. I think that's a very open field.

Thank you. Another question here, and one more.

I asked this question privately, but I was asked to pose it to the panel. It's about time-series forecasting; earlier we had so many people come over, so it's probably more widely interesting. Our team works on time-series forecasting at Lyft, and we're here to learn more about things Google is developing that might be helpful to us. The things we care about are interpretability, and deploying to production in a scalable, maintainable, reliable way. So we're interested in your outlooks; you've been a long time in forecasting, so perhaps you would know.

As far as productionization is concerned, Google Cloud is definitely working on quite a few things in terms of taking development models and moving them into production. TFX pipelines are the open-source version of that, and we're obviously creating managed versions of it, to basically make them serverless. The basic idea is that you develop a model using open-source technologies, and then you're able to move it into production, and you're able to do interpretability on those models. So that's one aspect of it. The other question you asked was about time series: that's a specific type of model, and there are tools like AutoML Tables that support time series, and we'll be adding it to other tools as well.

I've got a question. I was just wondering: how much is it the job of a data scientist to take some responsibility for the data, where it comes from, how it's applied, the ethics behind all this data collection, and privacy regulations?

I mean, at one level it's everyone's responsibility, right? Everyone in the company has to follow data hygiene. You have to make sure that you follow the governance the company has in place, that you're following regulation, that you're being compliant. All of that is table stakes; you don't have a choice, you have to do those things. Is there an extra thing, beyond the basics that everyone at the company needs to do, for a data scientist? I think we have to be aware of how biases that enter the data collection are going to impact the biases of the models. Usually we discover that after we create a model: we look at the explainability of the model, and then you want to go back and make sure that it is not just an artifact of the particular model you chose, or an artifact of how the data got collected, and then walk back to how you collected it. Was it a survey, or was it data collected on transactions in only certain types of stores, in certain neighborhoods? All of those kinds of things, you have to be aware of them. But I think that awareness comes about in the course of your work, and when it happens, we need to have the processes in place to basically be good citizens.

I just wanted to add that Google is very active on federated learning and on-device machine learning. We have a huge research team working on it, and one of the goals of that project is to make sure that user data stays private. In federated learning, instead of having a centralized system for processing data (let's say we all have cell phones), in the federated learning scenario our data is going to be processed on our devices instead of being sent to the server, and then the update to the model, which is specific to us, which is actually personalized for us, is based on our own data, on our own device. So nothing is going to be sent to Google's servers; everything is going to happen on our phone, in a very private way. So this is something that is taken very seriously; Google is very active in the research community on it. Plus, we have AI principles that we have to take care of, even when we want to work on a research project, and one of the issues is this privacy, which is taken very seriously.
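
A toy sketch of the federated averaging idea outlined above (pure NumPy; no real on-device infrastructure, and a deliberately simple least-squares model): each client computes an update on its own data, and only the updated weights, never the raw data, are averaged by the server.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1):
    """One gradient step of least-squares regression on one client's
    private data. The raw (X, y) never leaves the client."""
    grad = 2.0 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_round(weights, clients):
    """The server only sees and averages the clients' weight updates."""
    return np.mean([local_update(weights, X, y) for X, y in clients], axis=0)

# Toy usage: three "devices", each holding private data.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(3)]
weights = np.zeros(3)
for _ in range(50):
    weights = federated_round(weights, clients)
```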

I have a non-technical question, more related to some sort of malicious use of AI. We now live in the world of fake news reports, and deep fakes are now emerging, and I think in a few years they're going to explode. The question is: what do you think is the social impact of this on a population that is unprepared for it, and what is the scientific community doing about it, to be ready to provide tools to figure out what's fake and what's not? And what would be your opinion on this?

I think it's a tough question. But whenever I think about this sort of question, I would say that we already have this sort of problem in our society. Maybe the simplest example is the advertising that we have right now: you're watching TV, they show you a shampoo ad, and after the advertisement you get this feeling that, okay, if I put that shampoo on my hair, it's going to be super brilliant, super blah blah, but nothing that miraculous is going to happen. So even now we are being fooled by advertisements and other things. But this is in a different league: a politician appearing to say something that was never said could really affect elections; those are going to be very serious things. What I can say today is that I know at Facebook and at Google there are teams working on fake-news detection, but we do not have a perfect solution. One problem is that fake news is sometimes a subjective phenomenon: maybe something is fake from my perspective but right from other people's perspective.

I'll add a comment to that. I do anticipate it's going to cause some social change that we can't anticipate. If we look at our sensory information: what people say or write, we are already naturally skeptical of, and that's kind of what you were implying; that's not going to change. But we trust what we hear and what we see, and if we're going to move into an age where now we have to, as individuals, become skeptical of what we see and what we hear, that really is going to cause some kind of social change and adaptation.

Thank you for that.

We have one more question here and one over there, and then, I apologize to the others with their hands up, we're going to have to park those for later; again, the panelists are all going to be available during the networking time. So just two more questions now, one and two.

Hi, thank you. ML is in some ways a very research-driven field. Obviously all of you are researchers, or data scientists with an emphasis on "scientist"; I'm more of a data peasant, I guess. I work in the data field, and I often find that, because of the nature of the job, even though I'm more of an industry person, I do have to be conversant in, say, the research that's being published at NeurIPS and ICML and all of these places, and sometimes that can be a struggle for me because I don't have the same academic background. Do you have any advice for folks who are in that situation, trying to read the current literature without having been around for the older stuff, coming into the field sort of in medias res?

One personal experience that I have was the Google AI Residency program. The goal of the program was to gather people from different backgrounds, backgrounds that are not specifically machine learning, and teach them how to do AI and machine learning. At the beginning of the program we had this task of re-implementing a paper: the task was that we would be choosing a paper on our own that was related to ML or AI, and the goal of the task was to re-implement it, so that we'd get used to using TensorFlow, Python, all these things. I think that would be a great way in, especially because there is a heavy push in the research community to make research reproducible, so researchers are asked to share their code, whatever they used in the paper, when they have an accepted paper at NeurIPS or ICML or all these research conferences. So that would be a great thing if you can do it: basically re-implementing the ideas that are presented in those research papers. I think that will help you a lot.

I just want to note that it's a big field and there are lots of papers coming out every single day; I forget the numbers, and I don't think anybody can keep up, especially keeping up with the literature. I mean, I do this for a living, and there are many papers that I don't completely understand, and that shouldn't discourage you. On the bright side, there are nowadays more of the kind of blogs and Medium posts that do friendly explanations of the key areas, and if there is a particular area you are focused on, then it makes sense to go more in depth there. And after spending all that time, things start to get easier.

Yeah, just adding onto that: the biggest advice that I would give to people is to ignore all the noise, right? Much of the publications that you see are mostly just noise; ignore them. When you're starting to solve a problem, go with the simplest possible approach. Then, once you have a reasonable solution to it (and for many of these things, as Ali said, implementations are available in open source), if the performance isn't good enough, then you can try something that's a bit more best-of-breed. But even there, you don't have to go to the absolute leading edge: if you find something that is understandable and relatively recent, use that instead. The chances that the algorithm is going to make a huge difference are really, really small. So go with the thing that is easy, and only after you've solved the problem, and you're basically at the edge where you really need that extra half percent of accuracy, that's the point at which you have to really start focusing on the differences between the techniques and the tricks that are being used, and whether those are useful. Leave that to the researchers; for the most part, the simplest approach will get you a very long way.

To respond to that, this is really great advice. How many of you like using beta software? Okay. Yeah, and what about the rest of the room? Research papers are exactly that. In Google Cloud AI developer relations we obviously noticed this problem: the Internet is just alive with blogs and videos and people's examples. I've never seen any subject so overwhelmingly written about, but the problem is that so much of that is noise. For somebody coming in and trying to navigate, how do you know what's noise and what's high-value? So there's the earlier recommendation, of course, of starting off with people who are well-known. But we're also addressing it in our own developer relations: we started a repo that we launched a few months ago, called the Idiomatic Programmer, and what we've developed is a design pattern for explaining models and writing models that fits software engineers, not data scientists. We've been populating that with handbooks and workshops and our own versions of models, which we hope, from our experience, software engineers can understand a lot quicker and in more of a straight line.

All right, thank you. The final question for the panelists.

I lead a team in the security space, and as you know, bias in security, and even in legal, is a big issue. What are you doing today, or what do you look at doing, around avoiding or minimizing that bias, besides having representative data? There are two aspects to look at: not just the data itself (we all understand that now), but also the inclusion or exclusion of features that can cause bias. Okay, let's try that one more time. I want to look at the bias aspect of it. Of course, including bad data can give you bias, but the other one is actually including or excluding certain features that can cause bias. That's where my question is: how do you actually avoid falling into those issues, where you've included certain features or excluded certain features that can cause bias?

That's a really complex topic. So the question was: how do you basically decide whether to include a feature or exclude a feature, and whether it contributes to bias? That's an extremely complex topic, because it turns out that the definition of ML fairness is itself a difficult problem. So go ahead and do a Google search for ML fairness: there is a very nice article by a group called PAIR that actually goes into the different possible things that you might want to optimize. You may want to be agnostic to the feature that you want to avoid being biased against, which is the right answer in some situations. In other cases you may want to mirror the probability of the population that you're actually predicting for. It really depends on the problem. So go ahead and look at the article on ML fairness by this group called PAIR.

Yeah, I'll pick up another thing you want to look at. Sometimes we think the problem with fairness is a bias on, say, gender: that if everything else were the same, for a yes-or-no decision, maybe a male gets a yes more often than a female, so maybe we should take the gender out. But that's not really the bias that we're mainly concerned about. There are other social biases in society, where people are at a little bit of a disadvantage. So take getting loans: there might be one group who deserve to get loans as often as other people, but in reality as a whole, if you look at the data, they don't have the same median income and other features. As a financial institution, you can't change the reality out there. But what you can do, and what we sometimes do, is look at resampling. So take the data that has the positive bias, resample it, and for the feature in there that's causing the bias, flip it to the other value, and that way it kind of changes the model, shifting the bias out.
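
A minimal sketch of that resample-and-flip idea (the column names are hypothetical, and this is one of several possible mitigations, not a complete fairness fix): duplicate some positively labeled rows from the favorably treated group with the sensitive attribute flipped, so positive outcomes are no longer associated with only one group in training.

```python
import pandas as pd

def flip_resample(df, sensitive="group", favored="A", other="B",
                  label="approved", frac=0.5):
    """Oversample positively labeled rows of the favored group with the
    sensitive attribute flipped, shifting the learned bias."""
    positives = df[(df[sensitive] == favored) & (df[label] == 1)]
    copies = positives.sample(frac=frac, random_state=0).copy()
    copies[sensitive] = other
    return pd.concat([df, copies], ignore_index=True)
```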

Thank you. Thank you very much. I was told to cut this off, I think, 30 minutes ago, but I refused to do that just because of how engaging everything was, and there was a lot of insight that all of you provided. So thank you very much for being part of this panel. Thank you all.
