TensorFlow World 2019
October 28, 2019, Santa Clara, USA
Jeff Dean discusses the future of machine learning at TF World ‘19 (TensorFlow Meets)

About speakers

Jeff Dean
Google Senior Fellow at Google
Laurence Moroney
Staff Developer Advocate at Google

Jeff joined Google in 1999 and is currently a Google Senior Fellow in Google's Research Group, where he leads the Google Brain team, Google's deep learning research team in Mountain View. He has co-designed/implemented five generations of Google's crawling, indexing, and query serving systems, and co-designed/implemented major pieces of Google's initial advertising and AdSense for Content systems. He is also a co-designer and co-implementor of Google's distributed computing infrastructure, including the MapReduce, BigTable and Spanner systems, protocol buffers, LevelDB, systems infrastructure for statistical machine translation, and a variety of internal and external libraries and developer tools. He is currently working on large-scale distributed systems for machine learning. He received a Ph.D. in Computer Science from the University of Washington in 1996, working with Craig Chambers on compiler techniques for object-oriented languages. He is a Fellow of the ACM, a Fellow of the AAAS, a member of the U.S. National Academy of Engineering, and a recipient of the Mark Weiser Award and the ACM-Infosys Foundation Award in the Computing Sciences.


Laurence is a developer advocate at Google working on machine learning and artificial intelligence. He's the author of dozens of programming books and hundreds of articles. When not Googling, he's the author of a best-selling science fiction book series and a produced screenwriter.


About the talk

AI Advocate Laurence Moroney sits down with Google Senior Fellow Jeff Dean following his keynote presentation at TensorFlow World. They discuss how advances in computer vision and language understanding are expanding what's possible with machine learning, as well as Jeff's ideas about the future of ML.


Laurence: Hi everybody, Laurence Moroney here. I'm at TensorFlow World, and we've just come from the keynote that Jeff gave. Jeff, thanks for coming to talk with us. There are so many things that we don't have time to go over them all, but there was one really impactful point: you were talking about computer vision, where the error rate for humans is around 5%, and with machines it's now down to 3%, which is really cool.

Jeff: That's on a challenge where you have to be able to distinguish 40 species of dogs and other kinds of things among 1,000 categories. But I do think the progress we've made, from about 26% error in 2011 down to 3% in 2016, is hugely impactful, because the way I like to think about it is that computers have now evolved eyes that work, right? We've now got the ability for computers to perceive the world around them in ways that didn't exist six or seven years ago, and whole new applications of computing are possible because now you can depend on the computer being able to see.
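
[Editor's note: the "computers can see" capability Jeff describes is available off the shelf today. A minimal illustration with a pretrained ImageNet classifier in Keras follows; the image path is a placeholder, not something from the talk.]

```python
import numpy as np
import tensorflow as tf

# A small pretrained ImageNet classifier (1,000 categories).
model = tf.keras.applications.MobileNetV2(weights="imagenet")

# Placeholder path; any RGB photo works.
img = tf.keras.utils.load_img("dog.jpg", target_size=(224, 224))
x = tf.keras.applications.mobilenet_v2.preprocess_input(
    tf.keras.utils.img_to_array(img)[np.newaxis, ...])

# Print the top-3 predicted labels with confidences.
preds = model.predict(x)
print(tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=3)[0])
```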

Laurence: One of the applications you've always been passionate about is diabetic retinopathy and the diagnosis of that. Could you tell us what's going on there?

Jeff: That's a good example of many in the medical imaging field. All of a sudden, if you collect a high-quality dataset from domain experts, you know, radiologists labeling x-rays or ophthalmologists labeling eye images, and then you train a computer vision model on that task, whatever it might be, you can now essentially replicate the expertise of those domain experts in a way that makes it possible to deploy that expertise much more widely. You can get something onto a GPU card and do a hundred images a second in rural villages all over the world. An important part is places where there's a shortage of that expertise, but you can also deploy it in places where there just aren't enough doctors. Being able to see has all kinds of cool applications.
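
[Editor's note: the recipe Jeff describes, training a vision model on expert-labeled images, maps onto a standard transfer-learning sketch in TensorFlow. Everything concrete below (the directory path, image size, and a five-grade label scheme) is an illustrative assumption, not a detail from the talk.]

```python
import tensorflow as tf

# Hypothetical directory of expert-labeled retinal images,
# organized with one subdirectory per label.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "retina/train", image_size=(224, 224), batch_size=32)

# Start from an ImageNet-pretrained backbone rather than random weights.
backbone = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", pooling="avg")
backbone.trainable = False  # freeze pretrained features; train only the head

model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.Dense(5, activation="softmax"),  # assumed 5 severity grades
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```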

Laurence: And beyond vision, what about language understanding?

Jeff: I think in the last four or so years we've made a lot of progress as a community on how to build models that can basically understand pieces of text, things like a paragraph or a couple of paragraphs. We can actually understand them at a much deeper level than we were able to before. We still don't have a good handle on how to read an entire book and understand it all the way through, but understanding a few paragraphs of text is actually a pretty fundamentally useful thing for all kinds of applications. We can use it to improve our search: just last week we announced the use of a BERT model, a fairly sophisticated natural language processing model, in the middle of our search ranking algorithms, and it improves results quite a lot for lots of different kinds of queries that were previously pretty hard. There are also a lot of advances in translation using these kinds of models: Transformer-based models for translation are showing remarkable gains in BLEU score, which is a major measure of translation quality.
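
[Editor's note: the production BERT ranking system Jeff mentions is not public. As a rough sketch of the general pattern only, jointly encoding a query and a candidate passage and reducing the encoding to a score, here is a version built on an open-source checkpoint; the scoring head is untrained, so the number it prints is purely illustrative.]

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModel

# Public BERT checkpoint as a stand-in for the production model.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = TFAutoModel.from_pretrained("bert-base-uncased")
score_head = tf.keras.layers.Dense(1)  # untrained; a real ranker fine-tunes this

def relevance_score(query: str, passage: str) -> float:
    # Encode the pair jointly so attention can run across both texts,
    # then reduce the [CLS] vector to a single relevance score.
    inputs = tokenizer(query, passage, return_tensors="tf",
                       truncation=True, max_length=128)
    cls_vector = encoder(**inputs).last_hidden_state[:, 0, :]
    return float(score_head(cls_vector)[0, 0])

# Example query from Google's 2019 blog post on BERT in Search.
print(relevance_score("2019 brazil traveler to usa need a visa",
                      "Travelers from Brazil generally need a visitor visa."))
```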

Laurence: Right, right. Now, one thing I know is that a lot of the time we have these kinds of atomic models that each do a single unit task. But what about a great big model that can do multiple things, and using neural architecture search to add to that model? Could you elaborate a little bit on that?

Jeff: Today, when we find a problem we care about, we find the right data to train a model to do that particular task, but we usually start from nothing: we basically initialize the parameters of the model with random floating point numbers and then try to learn everything about that task from the dataset we collected. That seems pretty unrealistic, sort of akin to saying that when you want to learn to do something new, you forget all your education, go back to being an infant, and try to learn everything about that new task from scratch. It's going to require a lot more examples of whatever it is you're trying to do, because you're not generalizing from all the other things you already know how to do, and it's also going to mean you need a lot more computation and a lot more effort to achieve good results.

If instead you had a model that knew how to do lots and lots of things, then rather than training all these separate machine learning models, why aren't we training one large model with different pieces of expertise? I think it's really important that such a large model be only sparsely activated: you call upon different pieces of it as needed, but maybe 99% of the model is idle for any given task, and you call upon the right pieces of expertise when you need them. I think that's a promising direction. There are a lot of really interesting computer systems problems underneath it, like how do we actually scale a model of that size, and a lot of interesting machine learning research questions, like how do we get a model that evolves its structure and learns to route to the pieces of itself that are most appropriate. But I'm pretty excited, because the things we've been seeing over the last two, three, or four years in computer vision and natural language point in this direction: neural architecture search seems to work well, transfer learning from a related task generally gets you good results with less data for the final task you care about, and multi-task learning at small scales, with five or six related tasks, tends to make things work well. This is just sort of the logical consequence of extending all those ideas out.
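
[Editor's note: the large, sparsely activated model Jeff sketches resembles mixture-of-experts designs his team later published, such as GShard and the Switch Transformer. As a toy illustration of the routing idea only, here is a top-1 mixture-of-experts layer; the sizes and gating are arbitrary assumptions, and for clarity it evaluates every expert densely rather than dispatching examples, so it does not deliver the compute savings a real implementation would.]

```python
import tensorflow as tf

class Top1MoE(tf.keras.layers.Layer):
    """Toy sparsely activated layer: a learned router scores the experts,
    and each example is served by only its top-scoring expert."""

    def __init__(self, num_experts=8, units=64):
        super().__init__()
        self.router = tf.keras.layers.Dense(num_experts)
        self.experts = [tf.keras.layers.Dense(units, activation="relu")
                        for _ in range(num_experts)]

    def call(self, x):
        logits = self.router(x)              # (batch, num_experts)
        choice = tf.argmax(logits, axis=-1)  # top-1 expert per example
        gate = tf.reduce_max(tf.nn.softmax(logits), axis=-1, keepdims=True)
        # Evaluated densely here for clarity; a real implementation would
        # dispatch each example to just its chosen expert.
        all_out = tf.stack([e(x) for e in self.experts], axis=1)
        picked = tf.gather(all_out, choice, axis=1, batch_dims=1)
        return gate * picked                 # scale by router confidence

layer = Top1MoE()
print(layer(tf.random.normal([4, 32])).shape)  # -> (4, 64)
```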

Laurence: So, bringing it back to the computer vision work we spoke about earlier: who would have thought, when that research was first being done, that things like diabetic retinopathy screening would be possible? And now we're at the point of talking about this, whatever you want to call it, model of everything. The engineering challenges in that are inspiring too. So thanks so much, Jeff, I really appreciate having you on tonight.
