Laurence is a developer advocate at Google working on machine learning and artificial intelligence. He's the author of dozens of programming books and hundreds of articles. When not Googling, he's the author of a best-selling science fiction book series and a produced screenwriter.
Megan Kacholia is an Engineering Director on TensorFlow/Brain and a long-time Googler. She specializes in large-scale distributed systems and in finding ways to tune and improve performance in such environments.
About the talk
AI Advocate Laurence Moroney sits down with Megan Kacholia, Vice President of Engineering at Google, following her keynote presentation at TensorFlow World. They discuss how TensorFlow 2.0 is powering ML research, performance improvements in TF 2.0, and the new preview release of TensorBoard.dev.
Laurence: Hi everybody, Laurence Moroney here. I'm at TensorFlow World, and I have the great privilege of chatting with Megan Kacholia, Vice President of Engineering working on TensorFlow. You just gave a great keynote about TensorFlow 2.0 and some of the new things in it. Could you tell us a little bit about it?

Megan: We've been talking about TensorFlow 2.0 for a little while now, right? We announced it earlier this year in the spring and then finalized the release in September, just last month. One big thing with TensorFlow 2.0 is a focus on usability: making it easier for users to do what they need to do, with cleaner, more streamlined APIs and better debuggability with eager execution. We're just trying to make things simpler and easier.

Laurence: I really liked that in the keynote you called out three audiences: researchers, data scientists, and developers. I spend most of my time working with developers, but we love researchers too. What's in TensorFlow 2.0 for researchers? I think sometimes people worry about losing control.

Megan: Control is one thing researchers really value. They want to be able to try out new ideas, and they want to know they have the control to do that. One thing to emphasize is that researchers still have that control with TensorFlow 2.0. Yes, we've done a lot to streamline the high-level APIs, because we want to make it easier for folks to come in at that level. But I know there are a lot of folks who want to go deeper — who want to go under the hood and say, "I have this new model type I want to try out, I have a new model architecture I want to play with." All the things they loved about TensorFlow that let them do that are still there. We're just trying to make sure there's also a nice, clean, high-level surface. It's not an either-or situation.
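The low-level control Megan describes is usually shown through Keras model subclassing, which TF 2.0 supports alongside the high-level `Sequential` and functional APIs. A minimal sketch (the architecture and layer sizes here are arbitrary illustrations, not anything from the talk):

```python
import tensorflow as tf

# Researchers who want full control can subclass tf.keras.Model and write
# the forward pass themselves, while still reusing Keras layers, optimizers,
# and training loops when convenient.
class TwoLayerNet(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.hidden = tf.keras.layers.Dense(16, activation="relu")
        self.out = tf.keras.layers.Dense(1)

    def call(self, inputs):
        # Any custom architecture can be expressed here, line by line.
        return self.out(self.hidden(inputs))

model = TwoLayerNet()
y = model(tf.zeros([4, 8]))  # a batch of 4 examples with 8 features each
print(y.shape)  # (4, 1)
```

The same model could also be trained with `model.compile(...)` and `model.fit(...)`, which is the "both at once" point being made: subclassing does not give up the high-level tooling.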
Megan: It doesn't preclude research at all. Take the example I went through today from Hugging Face. They've done so many cool things, and everyone's very excited that they've been able to implement some of the most advanced models in TensorFlow 2.0, which they recently did. It was really exciting for us to highlight that and show that it isn't just us, with all the intricate internal knowledge — the external community is able to use TensorFlow 2.0 for really advanced research use cases, taking the platform and bringing their own ideas to it.

Laurence: You've also been working really hard to tweak performance in TensorFlow. Could you talk a little bit about that?

Megan: Performance is something we always pay attention to. One big thing is making it easy for people to scale things up without having to worry about the plumbing — letting the framework handle more for you. So a big focus has been getting that performance in an easy way, so users don't have to worry. And again, if people want to go under the covers and tinker and tweak every last bit, that's fine — that's all still there. But we want to make sure it's easy for them to get the performance they need. I talked through some numbers for GPUs in the keynote: across different types of NVIDIA GPUs, we're seeing up to a 3x performance improvement.

Laurence: And you called that out, which is great.

Megan: In the upcoming releases we'll have more around TPUs, as well as mixed-precision support. Like anything else, we're still working hard on this.
Laurence: Something for developers, and for the tooling around developers, that launched at TensorFlow World is TensorBoard.dev — a hosted TensorBoard.

Megan: The whole idea is that people love to be able to share their experiments: "Can you help me see what's going on?" "Let me show someone else." Right now, folks take screenshots of their experiments and send those around. Instead, we want folks to be able to actually share their work, and that's where TensorBoard.dev comes in. It's something we're just starting to release as a preview, and there are a lot of features we'll be adding over the coming months as we stabilize it and get to more general availability. But I think it's really exciting for folks to be able to take it and use it to share more easily with others: "Look what I'm doing."

Laurence: The more we share, the better off everyone is. On Twitter I've seen screenshots of TensorBoard, but sometimes it's hard to really dig in and discover from a screenshot — you want to get hands-on.
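The sharing workflow Megan describes was exposed through a CLI bundled with TensorBoard. As a sketch — the log directory, name, and description are placeholder values, and uploads to the hosted service were public:

```shell
# Upload a local log directory to the hosted TensorBoard.dev service.
# The command prints a public URL that anyone can open in a browser --
# an interactive TensorBoard instead of a screenshot.
tensorboard dev upload --logdir ./logs \
    --name "Example experiment" \
    --description "Shared training run"
```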
You want to poke around. A discovery made in isolation isn't really a discovery, right? It is when you're able to share it like that.

Megan: Yes, and ease of use is a big thing we're continuing to emphasize — the same goes for TensorFlow Hub. We want to make it much easier for folks to go to the Hub and find things: how do they know what's there, and how do they find the model they're looking for? Discoverability is always a big challenge with any sort of UI, so we've made a lot of improvements on TensorFlow Hub to improve that discoverability and make it easier for folks to find the types of pretrained models they want to use. We also want to make sure curated models are available there, so that, again, other folks can come and take them and use them.
Laurence: Megan, thank you — it's always fun to chat. And thank you for watching this video. If you have any questions for me, or any questions for Megan, please leave them in the comments below, and don't forget to hit that subscribe button. Thank you!