Duration 42:15

Advances in machine learning and TensorFlow

Debbie Bard
Data Science Engagement Group at National Energy Research Scientific Computing Center (NERSC)

2018 Google I/O
May 10, 2018, Mountain View, USA

About speakers

Debbie Bard
Data Science Engagement Group at National Energy Research Scientific Computing Center (NERSC)
Douglas Eck
Principal Scientist at Google
Laurence Moroney
Staff Developer Advocate at Google
Vincent Vanhoucke
Principal Scientist at Google

Debbie Bard leads the Data Science Engagement Group at the National Energy Research Scientific Computing Center (NERSC) at Berkeley National Lab. A native of the UK, her career spans research in particle physics, cosmology and computing on both sides of the Atlantic. Debbie’s work at NERSC focuses on data-intensive supercomputing, including machine learning at scale.


Doug leads Magenta, a Google Brain project working to generate music, video, images and text using deep learning and reinforcement learning. A main goal of Magenta is to better understand how AI can enable artists and musicians to express themselves in innovative new ways. Before Magenta, Doug led the Google Play Music search and recommendation team. From 2003 to 2010, Doug was a faculty member at the University of Montreal's MILA machine learning lab, where he worked on expressive music performance and automatic tagging of music audio.


Laurence is a developer advocate at Google working on machine learning and artificial intelligence. He's the author of dozens of programming books and hundreds of articles. When not Googling, he's the author of a best-selling science fiction book series and a produced screenwriter.


About the talk

Artificial intelligence affects more than just computer science. Join this session to hear a collection of short presentations from top machine learning researchers: the TensorFlow engineers working on robotics, and the Magenta team exploring the border between machine learning and art.


[Laurence Moroney] Hi everybody, and welcome to this session where we're going to talk about breakthroughs in machine learning. I'm Laurence Moroney, a developer advocate at Google working on TensorFlow with the Google Brain team. I'm here today to talk about the revolution that's going on in machine learning and how transformative that revolution is. I come from a software development background. Any software developers here? This transformation, this revolution, from a developer's perspective, is really, really cool, because it's giving us a whole new set of tools that we can use to build scenarios and to build solutions for problems that may have been too complex to even consider before. It's also leading to massive advances in our understanding of things like the universe around us, it's opening up new fields in art, and it's impacting fields such as healthcare and so many more. So should we take a look at some of these?

First of all, astronomy. At school I studied physics, so I'm a physics and astronomy geek, and it wasn't that long ago that the way we discovered new planets around other stars in our galaxy was to measure a wobble in the star, which meant there was a very large planet, Jupiter-sized or even bigger, orbiting that star very closely and causing the wobble through gravitational attraction. Of course, the kind of planets we really want to find are small rocky ones, like Earth or Mars, where there's a chance of finding life, and discovering those was very difficult, because a small planet close to its star just wouldn't show up that way. With research that's been going on with the Kepler mission, they've recently discovered a planet called Kepler-90i by sifting through the data and building models using machine learning and TensorFlow. Kepler-90i is much closer to its host star than we are to ours; its orbit is only 14 days instead of our 365 and a quarter. Not only that, which I find really cool, but they didn't just find this one planet around that star; they've actually mapped and modeled an entire solar system of eight planets. These are some of the advances, and to me it's just a wonderful time to be alive, because technology is enabling us to discover these great new things.

Even closer to home, we've discovered that by looking at scans of the human eye, as you will have seen in the keynotes, and training machine learning models on them, we've been able to do things such as predict blood pressure and assess a person's risk of a heart attack or a stroke. Just imagine if this screening can be done on a small mobile phone. How profound will the effect be when the whole world is able to access easy, rapid, affordable and non-invasive screening for things such as heart disease? We'll be saving many lives, and we'll be improving the quality of many, many more.

These are breakthroughs and advances that have been made because of TensorFlow, and with TensorFlow we've been working hard with the community, with all of you, to make it a machine learning platform for everybody. So today we want to share a few of the new advances that we've been working on. We'll be looking at robots: Vincent is going to come out in a few moments to show robots that learn, and some of the work that's been done to improve how robots learn. Debbie is going to share advancements from NERSC, including showing how building a simulation of the entire universe will help us understand the nature of the unknowns in our universe, like dark matter and dark energy. But first of all, I would love to welcome Doug Eck, a principal scientist from the Magenta team.

[Doug Eck] Thanks, Laurence. Thanks very much. All right, day three, we're getting there, everybody. I'm Doug. I'm a research scientist at Google working on a project called Magenta. Before we talk about modeling the entire known universe, and before we talk about robots, I want to talk to you a little bit about music and art, and how to use machine learning, potentially, for expressive purposes. I want to talk first about a drawing project called SketchRNN, in which we train a neural network to do something as important as drawing the pig that you see on the right there, and I want to use this as an example to highlight a few machine learning concepts that we're finding to be crucial for using machine learning in the context of art and music. So let's dive in. It's going to look a little technical, but hopefully it will be fun for you.

All we're going to do is try to learn to draw, not by generating pixels, but by generating pen strokes, and I think this is a very interesting representation to use because it's very close to what we do when we draw. Specifically, we're going to take the data from the very popular Quick, Draw! game, where you play Pictionary against a machine learning algorithm. That data was captured as delta-x, delta-y movements of the pen, and we also know when the pen is put down on the page and when the pen is lifted up, and we're going to train the model on that. We don't necessarily need a lot of this data, and what's nice about it is that it fits the creative process: it's closer to drawing, I'd argue, than pixels are, because we're actually modeling the movement of the pen.

Now, what we're going to do with these drawings is push them through something called an autoencoder, which you're seeing on the left. The encoder network's job is to take the strokes of that cat and encode them in some way so that they can be stored as a latent vector, the yellow box in the middle. The job of the decoder is to decode that latent vector back into a generated sketch of the cat, and the very important point, really the only point you need to take away from this talk, is that that latent vector is worth everything to us. First, it's smaller in size than the encoded or decoded drawing, so it can't memorize everything, and because it can't memorize, we have to get some nice generalization. For example, you might notice, if you look carefully, that the cat on the left, which is actual data that's been pushed through the trained model and decoded, is not the same as the cat on the right. The cat on the left has five whiskers, but the model regenerated the cat with six whiskers. Six whiskers is general, it's normal to the model, whereas five whiskers is hard for it. One way to make sense of this idea of having a tight, low-dimensional representation, this latent vector trained on lots of data, is that the model might learn to find some of the generalities in a drawing, to learn general strategies for creating something.
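
As a rough illustration of the encoder-latent-decoder idea described here (not the actual SketchRNN model, which is a sequence-to-sequence variational autoencoder with a mixture-density output), here is a minimal stroke autoencoder sketch; the sequence length, layer sizes and the commented-out data name are illustrative assumptions.

```python
# A minimal stroke autoencoder sketch, assuming Quick, Draw!-style stroke data.
import tensorflow as tf

MAX_STROKES = 250   # assumed maximum sequence length
LATENT_SIZE = 128   # the "latent vector" described in the talk

# Each timestep is (delta_x, delta_y, pen_lifted).
strokes = tf.keras.Input(shape=(MAX_STROKES, 3))

# Encoder: compress the whole stroke sequence into one small latent vector.
h = tf.keras.layers.LSTM(256)(strokes)
latent = tf.keras.layers.Dense(LATENT_SIZE, name="latent_vector")(h)

# Decoder: regenerate a stroke sequence from the latent vector alone.
# Because the latent vector is much smaller than the drawing, the model
# cannot memorize and is forced to generalize.
d = tf.keras.layers.RepeatVector(MAX_STROKES)(latent)
d = tf.keras.layers.LSTM(256, return_sequences=True)(d)
reconstructed = tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(3))(d)

autoencoder = tf.keras.Model(strokes, reconstructed)
autoencoder.compile(optimizer="adam", loss="mse")
# autoencoder.fit(quickdraw_strokes, quickdraw_strokes, epochs=10)  # hypothetical dataset
```

The key constraint is the small latent layer: it is far too small to store a drawing verbatim, which is what forces the generalization described above.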

Here's an example of starting in each of the four corners with a drawing done by a human, David Ha, the first author. Those drawings are encoded into the four corners, and then we just move linearly around the space, not the space of the strokes but the space of the latent vector. If you look closely, I think you'll see that the movements and the changes between these faces, say from left to right, are actually quite smooth. The model has dreamt up all of the faces in the middle, yet to my eye they really do fill the space of possible drawings. Finally, as I pointed out with the cat whiskers, these models generalize, they don't memorize. It's not that interesting to memorize a drawing.
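
The interpolation trick behind those in-between faces (and behind the in-between drum beats later in the talk) can be sketched in a few lines, assuming an `encoder` and `decoder` like the hypothetical ones above:

```python
# Linear interpolation in latent space: encode two drawings, walk the straight
# line between their latent vectors, and decode each point back into a drawing.
import numpy as np

def interpolate(encoder, decoder, drawing_a, drawing_b, steps=8):
    z_a = encoder.predict(drawing_a[None, ...])[0]
    z_b = encoder.predict(drawing_b[None, ...])[0]
    results = []
    for t in np.linspace(0.0, 1.0, steps):
        z = (1.0 - t) * z_a + t * z_b   # move linearly through latent space
        results.append(decoder.predict(z[None, ...])[0])
    return results
```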

It's much more interesting to learn general strategies for drawing. We see that with the five-versus-six-whisker cat, and, I think more interestingly and also suggestively, we see it when we do something like take a model that has only ever seen pigs and give it a picture of a truck. What's that model going to do? It's going to find a pig truck, because that's all it knows about. And if that seems silly, which I grant it is, in your own mind think about how hard it would be, at least for me, if someone asked you to draw a truck that looks like a pig. It's kind of hard to make that transformation, and these models do it.

Finally (and I get paid to do this, I just want to point that out), latent-space analogies are another example of what's happening in this latent space. Obviously, if you add and subtract pen strokes directly, you're not going to get far with making something recognizable. But if you work with the latent vectors, we can take the latent vector for a cat head, add a pig body and subtract the pig head, and it stands to reason that you should get a cat body. We can do the same thing in reverse, and this is real data; it actually works. The reason I mention it is that it shows these latent-space models are learning some of the geometric relations between the forms that people draw.
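
That "cat head plus pig body minus pig head" analogy is just vector arithmetic on the latent codes before decoding; a sketch, again assuming the hypothetical `encoder` and `decoder` above:

```python
# Latent-space analogy: arithmetic on latent vectors, then decode the result.
def latent_analogy(encoder, decoder, cat_head, pig_body, pig_head):
    z = (encoder.predict(cat_head[None, ...])[0]
         + encoder.predict(pig_body[None, ...])[0]
         - encoder.predict(pig_head[None, ...])[0])
    return decoder.predict(z[None, ...])[0]   # hopefully, a cat body
```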

I'm going to switch gears now and move from drawing to music, and talk a little bit about a model called NSynth, which is a neural network synthesizer that takes audio and learns to generalize in the space of musical sound. You may have seen at the beginning of I/O that NSynth has been put into a hardware unit called NSynth Super. How many people have heard of NSynth Super? How many people want an NSynth Super? Good, okay, that's possible, as you know. For those of you who didn't see the opening, we have a short version of the making of NSynth Super, and I'd like to roll that now to give you a better idea of what this model is up to. Let's go.

[Video clip] "That's wild to me. There's a flute, here's a snare, and this is what it sounds like in the middle." "It feels like turning a corner into what could be a new possibility. It could generate a sound that might imply something entirely new." "The fun part is, even though you think you know what you're doing, there's some weird interaction happening that can give you something totally unexpected. Why did that happen that way?"

[Doug Eck] Okay. So, by the way, the last person you saw with the long hair was Jesse Engel, who was the main scientist on the NSynth project. This grid that you're seeing, this square where you can move around, is exactly the same idea as the faces we looked at earlier: you're moving around the latent space, and you're able to discover sounds that hopefully have some similarity to each other, and because they're made up of what the model has learned about how sound works, they may give us some new ideas about how sound works. And as you probably know, you can make one of these yourself, which I think is my favorite part of the NSynth Super project: it's open source on GitHub. For those of you who are makers and like to tinker, please give it a shot. If not, we'll see units becoming available from lots of people who are building them on their own.

I want to keep going with music, but I want to move away from audio and move now to musical scores, to musical notes; think of last night, with Justice driving a sequencer. It's basically the same idea: learning a latent space where we can move around what's possible in a musical score. What you see here is a three-part musical piece on the top and a one-part musical piece on the bottom, and then we find, in the latent space, something that's in between. Now, to put the faces underneath this: what you're looking at is a representation of a musical drum score where time passes from left to right. This one's a little bit long, so I want to set it up. We're going to start with one measure of drums and end with another measure of drums, and you're going to hear those first, A and B. Then you're going to hear this latent-space model try to figure out how to get from A to B, and everything in between is made up by the model, in exactly the same way that the faces in the middle were made up by the model. Basically, listen for whether the intermediate drums make musical sense or not. Let's give it a roll.

[Demo plays] So there you have it. Moving right along: take a look at this command. Maybe this makes sense to some of you, but we were surprised to learn, after a year of doing Magenta, that this is not the right way to work with musicians and artists. I know, I laughed too, but we really thought it was a great idea: "Guys, just paste this into a terminal," and they're like, "What's a terminal?" and then, you know, you're in trouble, right? So we've moved quite a bit towards trying to build tools that users can actually use. This is a drum machine that you can play with online, built around TensorFlow.js, and I have a short clip of it being used. What we're going to see is that the red is from you, the musician, who can play around with it, and the blue is generated by the models. Let's give this a roll; this one's quite a bit shorter.

[Demo plays] This is available for you as a CodePen, which allows you to play around with the HTML, CSS and JavaScript, and it's really amazing. A huge shout-out to Tero Parviainen, who did this: he grabbed one of my trained Magenta models, used TensorFlow.js, added a little bit of code to make it work, and put it on Twitter. We had no idea this was happening, and we were like, "This is my hero, this is awesome," and he was like, "Oh, you guys care about this?" Of course we care about this; it's our dream to have people, not just us, playing with this technology. So I love that we've got that in there.

What I want to close with is that we've cleaned up a lot of the code and refactored it, and we were able to introduce Magenta.js, which is very tightly integrated with TensorFlow.js. It allows you, for example, to grab a checkpointed model, set up a player and start sampling from it in only three lines of code, so you can set up a little drum machine or music sequencer, and we're doing the same thing on the Python side as well. We've seen a lot of momentum driven by this, a lot of really interesting work both from Google and from people on the outside, and I think that's ultimately what we're after with Magenta. So to close: we're doing research in generative models, we're working to engage with musicians and artists, we're very happy to see the JavaScript work come along, which really seems to be the right language for that, and we're hoping to see better tools emerge from engagement with the open source community. If you want to learn more, please visit g.co/magenta, and you can follow my Twitter account, where I post regular updates and try to be a connector for all of this. That's what I have for you, and now I'd like to switch gears and go to robots, which is very exciting, with my colleague from Google, Vincent Vanhoucke. Thank you very much.

[Vincent Vanhoucke] Thanks. My name is Vincent, and I lead the Brain robotics research team at Google. When you think about robotics, you may think about precision and control; you may think about robots that live in factories, that have one very specific job to do, and that have to do it over and over again. But as you saw in the keynote earlier, more and more robots are moving out among people: there are self-driving cars driving on our streets and sharing the road with people. They're now living in our world, not their world, and so they really have to adapt, perceive the world around them, and learn how to operate in this human-centric environment. So how do we get robots to learn, instead of having to program them? This is what we've been embarking on, and it turns out we can get robots to learn. It takes a lot of robots and it takes a lot of time, but we can improve on this if we get robots to learn collaboratively.

This is an example of a team of robots that are learning how to do a very simple task: grasping objects. At the beginning they have no idea what they're doing, and they try and try and try. Sometimes they grasp something, and every time they grasp something they get a reward, and over time they get better and better at it. Of course, we use deep learning for this: basically, we have a convolutional network that maps the images that the robots see of the workspace in front of them to possible actions, and this collective learning across robots enables us to reach levels of performance that we haven't seen before.
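
As a greatly simplified sketch of the mapping described here (an image of the workspace in, scores over candidate grasp actions out), a small convolutional network might look like the following. The real system learns from grasp-success rewards collected by many robots and is considerably more involved; the input size, action discretization and the commented-out supervised training call are assumptions, not Google's actual method.

```python
import tensorflow as tf

NUM_ACTIONS = 64   # assumed discretization of candidate grasp actions

# Map the robot's camera image of the workspace to a score per candidate action.
grasp_net = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(32, 5, strides=2, activation="relu"),
    tf.keras.layers.Conv2D(64, 3, strides=2, activation="relu"),
    tf.keras.layers.Conv2D(64, 3, strides=2, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(NUM_ACTIONS),   # one score per candidate grasp action
])
grasp_net.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
# Simplification: imitate logged successful grasps rather than full reinforcement learning.
# grasp_net.fit(workspace_images, successful_action_ids)  # hypothetical logged data
```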

But robots are expensive; in fact, we would much rather use lots of computers, if we could, instead of lots of robots. And so the question becomes: could we use a lot of simulated robots, virtual robots, to do these kinds of tasks, and teach those robots to perform tasks that actually matter in the real world? Would what they learn in simulation actually apply to real tasks? It turns out the key to making this work is to learn simulations that are more and more faithful to reality.

So on the right here, you see what a typical simulation of a robot would look like: a virtual robot trying to grasp objects in simulation. What you see on the other side may look like a real robot doing the same task, but in fact it is completely simulated as well. We've learned a machine learning model that maps simulated images to real-looking images, and those real-looking images are essentially indistinguishable from what a real robot would see in the real world.
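
A minimal sketch of that simulated-to-real image mapping, assuming a pix2pix-style encoder-decoder generator (the actual model and training setup aren't specified in the talk); in practice it would be trained adversarially against a discriminator that judges whether an image looks like a real camera frame.

```python
import tensorflow as tf

# A small encoder-decoder generator that maps a rendered (simulated) image to a
# "real-looking" image. Only the generator is sketched here; sizes are illustrative.
def make_sim_to_real_generator(size=256):
    sim_image = tf.keras.Input(shape=(size, size, 3))
    x = tf.keras.layers.Conv2D(64, 4, strides=2, padding="same", activation="relu")(sim_image)
    x = tf.keras.layers.Conv2D(128, 4, strides=2, padding="same", activation="relu")(x)
    x = tf.keras.layers.Conv2DTranspose(64, 4, strides=2, padding="same", activation="relu")(x)
    realistic = tf.keras.layers.Conv2DTranspose(3, 4, strides=2, padding="same",
                                                activation="tanh")(x)
    return tf.keras.Model(sim_image, realistic)

generator = make_sim_to_real_generator()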

By using this kind of data in a simulated environment, and training a simulated model to accomplish tasks using those images, we can transfer that information and make it work in the real world as well. There's a lot more we can do with these kinds of simulated robots. This is Rainbow Dash, our favorite little pony, and what you see here is it taking its very first steps, or its very first hops, I should say. It's very good for somebody who's just starting to learn how to walk, and the way we accomplish this is by having a virtual Rainbow Dash running in simulation. We trained it using deep reinforcement learning to run around in the simulator, and then we can basically download the model we learned in simulation onto the real robot and make it work in the real world as well.

There are many ways we can scale up robotics and robot learning like this, and one of the key ingredients turns out to be learning by itself: self-supervision, self-learning. Here is an example. What you see at the top is somebody driving a car, and what we're trying to learn in this instance is the 3D structure of the world, the geometry of everything. What you see at the bottom is a representation of how far things are from the car. When you're driving, you're probably focused on avoiding obstacles and watching other cars so you don't collide with them, so you want to learn about the 3D geometry from those videos. The traditional way you would do this is with, for example, a 3D camera, or lidar, or something else that gives you a sense of depth. We're going to do none of that. We're going to simply look at the video and learn the 3D structure of the world directly from it. The way to do this is to look at the video and try to predict its future: you can imagine that if you actually understand the 3D geometry of the world, you can do a pretty good job of predicting what's going to happen next in a video. So we use that signal, how well we're doing at predicting the future, to learn what the 3D geometry of the world looks like. At the end of the day, what you end up with is yet another big convolutional network that maps what you see at the top to what you see at the bottom, without involving any 3D camera or anything like that. This idea of self-supervised learning, learning directly from the data without any labels, is really, really powerful.
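
The shape of that setup can be sketched as follows: a network predicts a per-pixel depth map from a single frame, and the only training signal is how well the current frame, warped using the predicted depth and the camera's motion, reproduces the next frame. The warping step (and the pose estimation it needs) is omitted here; layer sizes and the loss are illustrative assumptions.

```python
import tensorflow as tf

def make_depth_net(height=128, width=416):
    # Predict a per-pixel depth map from a single video frame.
    frame = tf.keras.Input(shape=(height, width, 3))
    x = tf.keras.layers.Conv2D(32, 7, strides=2, padding="same", activation="relu")(frame)
    x = tf.keras.layers.Conv2D(64, 5, strides=2, padding="same", activation="relu")(x)
    x = tf.keras.layers.Conv2D(128, 3, strides=2, padding="same", activation="relu")(x)
    x = tf.keras.layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(x)
    x = tf.keras.layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
    depth = tf.keras.layers.Conv2DTranspose(1, 3, strides=2, padding="same",
                                            activation="softplus")(x)  # positive depths
    return tf.keras.Model(frame, depth)

def photometric_loss(predicted_next_frame, actual_next_frame):
    # No depth labels anywhere: the supervision is how well we predict the future frame,
    # where the prediction is the current frame warped using the predicted depth and
    # the camera motion (warping omitted in this sketch).
    return tf.reduce_mean(tf.abs(predicted_next_frame - actual_next_frame))
```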

Another problem we have when we're trying to teach robots how to do things is that we have to communicate to them what we want, what we care about, and the best way to do that is by simply showing them what you want them to perform. So here's an example of one of my colleagues basically doing the robot dance, and a robot that is just watching him perform those movements and trying to imitate, visually, what he is doing. What's remarkable here is that even though the robot doesn't have legs, for example, it tries to do this crouching motion as best it can, given the degrees of freedom it has available, and all of this is entirely self-supervised.

The way we go about this is that if you think about imitating somebody else, for example somebody pouring a glass of water or a can of Coke, it all relies on you being able to look at them from a third-party view and picture yourself doing the same thing from your own point of view, what it would look like if you did it yourself. So we collected data that looks like that, where you have somebody watching somebody else do a task, and you end up with two videos: one taken by the person doing the task and another taken by the person watching. What we want to teach the robot is that those two things are actually the same thing. So we use machine learning again to perform this matching: we have a model that tells us the view on the left shows the same task as the view on the right. Once we've learned that correspondence, there are lots of things we can do with it. One of them is simple imitation, like this: somebody pours a glass of water, the robot sees them, pictures itself doing the same task, and tries as best it can to imitate what they're doing.
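
One hedged way to sketch that correspondence learning is a triplet-style embedding loss, in the spirit of time-contrastive learning: frames of the same moment seen from two viewpoints should embed close together, and frames from different moments should embed far apart. The network shape and the margin below are assumptions.

```python
import tensorflow as tf

# A small embedding network shared across viewpoints.
embed = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 3)),
    tf.keras.layers.Conv2D(32, 5, strides=2, activation="relu"),
    tf.keras.layers.Conv2D(64, 3, strides=2, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(64),   # the shared "what is happening" embedding
])

def view_correspondence_loss(first_person, third_person_same_time,
                             third_person_other_time, margin=0.2):
    anchor = embed(first_person)
    positive = embed(third_person_same_time)     # same moment, other viewpoint
    negative = embed(third_person_other_time)    # different moment
    d_pos = tf.reduce_sum(tf.square(anchor - positive), axis=-1)
    d_neg = tf.reduce_sum(tf.square(anchor - negative), axis=-1)
    return tf.reduce_mean(tf.maximum(d_pos - d_neg + margin, 0.0))
```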

Using deep reinforcement learning, we can then train robots to learn these kinds of activities based entirely on visual observation, without any programming of any kind. I wouldn't ask this robot to pour a glass of water for me just yet, but it's very encouraging that we can now build robots that understand, essentially, the nature and the fundamentals of the task, regardless of whether they're pouring a liquid or pouring beads, and regardless of what the glasses or the containers look like. All of that is abstracted away, and the robot understands deeply what the task is about.

So we're very excited about this whole perspective of teaching robots how to learn instead of having to program them. At some point I want to be able to tell my Google Assistant, "OK Google, please go fold my laundry," and for that to happen we're going to have to rebuild the science of robotics from the ground up, based on understanding, machine learning and perception, and of course we're going to do that with TensorFlow. With that, I'm going to give the stage to Debbie, who's going to talk to us about cosmology. Thank you.

[Debbie Bard] Thank you. Good afternoon, everyone. My name is Debbie Bard, and I'm going to be talking about something a little bit different from what you've had so far. I lead the Data Science Engagement Group at NERSC, the National Energy Research Scientific Computing Center. We're the supercomputing center at Lawrence Berkeley National Lab, just over the bay from here, and we are the mission computing center for the Department of Energy's Office of Science. What this means is that we have something like seven thousand scientists using our supercomputers to work on some of the biggest questions in science today. What I think is really cool about this, as well, is that I get to work with some of the most powerful computers on the planet. Especially over the last couple of years, we've seen scientists increasingly turning to deep learning and machine learning methods to tackle some of these big questions, and these methods are now showing up in the workloads on our supercomputers. I want to focus on one particular topic that's very close to my heart, which is cosmology, because I'm a cosmologist by training and my background is in cosmology research. I've always been interested in the really deep, most fundamental questions that we have in science about the nature of the universe.

One of the most basic questions you can ask about the universe is: what is it made of? These days we have a very good feel for how much dark energy there is in the universe, how much dark matter, and how much regular matter. Only about five percent of the universe is regular matter, which is everything that you and I, and all the stars and all the dust and all the gas and all the galaxies out there, are made of. Regular matter makes up a pretty tiny proportion of the contents of the universe, and the thing I find really interesting is that we just don't know what the rest of it is. Dark matter: we don't know what it's made of; we only see it indirectly, through its gravitational effects. Dark energy: we don't know what that is either. It was only discovered about 15 years ago, and dark energy is just the name we give to an observation, namely the accelerated expansion of the universe. This is, I think, really exciting: the fact that there is so much we have yet to discover means there are tremendous possibilities for new ways to understand the universe. We're building bigger and better telescopes and detectors all the time, taking images and observations of the sky to get more data to help us understand. Because we only have one universe to observe, we need to collect as much data as we can about it, and we need to be able to extract all the information we can from that data, from those observations. Cosmologists are increasingly turning to deep learning to extract meaning from their data, and I'm going to talk about a couple of different ways we're doing that.

But first I want to ground this in the background of how we actually do experiments in cosmology. Cosmology is not an experimental science in the way many other physical sciences are: there's not much we can do to experiment on the universe itself. We can't go out and change the nature of space-time. It would be fun if we could, but instead we have to run simulations. So we run simulations on supercomputers of theoretical universes, with different physical models and different parameters controlling those models, and that's how we experiment: we compare those simulations to our observations of the real universe around us. This comparison is typically done using some statistical measure, some kind of reduced statistic like the power spectrum, which is illustrated in this animation. The power spectrum is a measure of how matter is distributed throughout the universe, whether it's distributed fairly evenly throughout space or whether it's clustered on small scales. This is illustrated in the images at the top of the slide, which are snapshots of a simulated universe run on a computer: you can see that over time gravity is pulling matter together, both dark matter and regular matter, collapsing it into galaxy clusters and filamentary-type structures, whereas dark energy is expanding space itself, expanding the volume of our simulated universe. So these simulations tell us about the nature of the matter itself, how gravity is acting, and what dark energy is doing.
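
As a rough, hedged illustration of the kind of reduced statistic described here, the following computes a radially averaged power spectrum of a 2D over-density map with NumPy; the binning and normalization are simplified assumptions rather than the pipeline actually used at NERSC.

```python
import numpy as np

def power_spectrum_2d(density_map, n_bins=32):
    """Radially averaged power spectrum of a 2D density map: how much
    structure there is on each spatial scale."""
    delta = density_map - density_map.mean()          # over-density field
    power = np.abs(np.fft.fftshift(np.fft.fft2(delta))) ** 2

    # Radial wavenumber of every Fourier pixel.
    ny, nx = density_map.shape
    ky, kx = np.indices((ny, nx))
    k = np.hypot(kx - nx // 2, ky - ny // 2)

    bins = np.linspace(0, k.max(), n_bins + 1)
    which = np.digitize(k.ravel(), bins)
    spectrum = np.array([power.ravel()[which == i].mean()
                         for i in range(1, n_bins + 1)])
    return 0.5 * (bins[1:] + bins[:-1]), spectrum     # bin centers, P(k)
```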

But as you can imagine, running these kinds of simulations is very computationally expensive. Even if you're only simulating a tiny universe, it still requires a tremendous amount of compute power, and we spend billions of compute hours on supercomputers around the world on these kinds of simulations, including on the supercomputers that I work with. So one of the ways we're using deep learning is to reduce the need for such expensive simulations. Similar to what the previous speakers were talking about, Vincent with his simulated robots, we're exploring the use of generative networks to produce, in this case, two-dimensional maps of the universe. These are maps of the mass concentration of the universe: you can imagine the three-dimensional volume collapsed into a two-dimensional projection of the mass density, as if you're looking up at the sky. We use a generative adversarial network, trained on existing simulations, to produce new maps.
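
A minimal sketch of the generator half of such a network, in a DCGAN style; the real architecture, losses and map sizes used in this work aren't given in the talk, so everything below is an illustrative assumption. Training would pit this generator against a discriminator that tries to tell generated maps from real simulation slices.

```python
import tensorflow as tf

def make_map_generator(noise_dim=64, map_size=256):
    # Turn a random vector into a 2D mass map.
    noise = tf.keras.Input(shape=(noise_dim,))
    x = tf.keras.layers.Dense((map_size // 8) * (map_size // 8) * 128,
                              activation="relu")(noise)
    x = tf.keras.layers.Reshape((map_size // 8, map_size // 8, 128))(x)
    x = tf.keras.layers.Conv2DTranspose(64, 4, strides=2, padding="same", activation="relu")(x)
    x = tf.keras.layers.Conv2DTranspose(32, 4, strides=2, padding="same", activation="relu")(x)
    mass_map = tf.keras.layers.Conv2DTranspose(1, 4, strides=2, padding="same")(x)
    return tf.keras.Model(noise, mass_map)

generator = make_map_generator()
fake_maps = generator(tf.random.normal([4, 64]))   # four candidate mass maps
```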

So this is an augmentation, and it's doing a pretty good job: just by eye, the generated images look pretty similar to the real input images, the real simulation images. But as a scientist, squinting at something and saying "yeah, that looks about right" is not good enough. What I want is to be able to quantify this, to quantify how well the network is working, how well the generated images match the real images. This is where scientific data has an advantage, because scientific data very often has a suite of statistics associated with it, statistics you can use to evaluate the success of your model. In this case we looked at statistics that describe the patterns in the maps, like the power spectrum and other measures of the topology of the maps, and we can see that the statistics computed from the generated maps match those from the real simulations, so we can quantify the accuracy of the network. I think this is something that could be useful for the wider deep learning community: scientific data that has these associated statistics could be a really interesting resource for any deep learning practitioner trying to quantify how well their network is working.

I mentioned before that this is an augmentation, a GAN that, so far, can produce new maps based on a physics model it has already seen. We're now scaling this up and introducing physics models the network has never seen before, turning it into a true emulator, which will help reduce the need for these very computationally expensive simulations and allow cosmologists to explore parameter space a bit more freely.

I'd also like to explore a little further what the network is actually learning. I saw a really interesting talk this morning on this kind of thing: how we can use machine learning to gain insight into the nature of the data we're working with. In the example I'm showing here, we looked at which structures in the mass maps contribute most to the network's decisions, by looking at a quantity called saliency. It's only a small black-and-white image, but you can see that the peaks in the saliency map match the peaks in the mass map. These peaks in the mass map, these concentrations of matter, correspond to galaxy clusters, and we've known for decades that galaxy clusters are a really good way of exploring cosmology. But the saliency maps are irregular and show some structure, and this is something that's really interesting to me: there are indications that some of the smaller mass concentrations are showing up as important features for the network, and that was a little bit unexpected.
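
A hedged sketch of one common way to compute such a saliency map, the gradient of the network's output with respect to the input pixels; the talk doesn't specify which introspection method was used, so treat this as illustrative.

```python
import tensorflow as tf

def saliency_map(model, mass_map):
    """How much does each pixel of the input mass map influence the
    network's output? High values mark influential structures."""
    x = tf.convert_to_tensor(mass_map[None, ...])    # add batch dimension
    with tf.GradientTape() as tape:
        tape.watch(x)
        prediction = model(x)
        score = tf.reduce_sum(prediction)            # scalar to differentiate
    grads = tape.gradient(score, x)
    return tf.abs(grads)[0]
```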

By doing this kind of introspection into the features the network is learning, we can start to learn something about the data, gaining insight into some of the physical processes that are going on. I think this is a real strong point of deep learning: when you allow the network to learn features for itself, rather than imposing features through feature engineering or by computing particular statistics, the network can tell you something about your data that might surprise you.

So far I've been talking about two-dimensional maps, but of course the universe is not a two-dimensional place; it's at least four dimensions, or, depending on your favorite model of string theory, possibly many more. Going to three dimensions takes things to another level: three-dimensional convolutions are much more computationally expensive than two-dimensional convolutions, and this is something that runs really well on a supercomputing architecture. A team at NERSC recently demonstrated for the first time that deep learning can be used to determine the physical model of the universe from three-dimensional simulations of the full matter distribution. So this is the full three-dimensional matter distribution, rather than a two-dimensional projection of the mass density, and we were able to make significantly better estimates of the parameters that describe the physics of the simulated universe than traditional methods, where you might be looking at one of those statistics like the power spectrum. This is a really nice example of how the network was able to learn which structures in the three-dimensional mass distribution were important, rather than relying only on statistics that we define in advance.
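
In outline, that kind of parameter estimation is a 3D convolutional regression from a density cube to a handful of physical parameters; the cube size, layer widths and parameter count below are assumptions, not the actual architecture from that work.

```python
import tensorflow as tf

NUM_PARAMS = 2   # e.g. a couple of cosmological parameters; the real study may use more

# A small 3D convolutional network regressing physical parameters from a density cube.
param_net = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 64, 1)),    # a 64^3 matter-density cube
    tf.keras.layers.Conv3D(16, 3, strides=2, activation="relu"),
    tf.keras.layers.Conv3D(32, 3, strides=2, activation="relu"),
    tf.keras.layers.Conv3D(64, 3, strides=2, activation="relu"),
    tf.keras.layers.GlobalAveragePooling3D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(NUM_PARAMS),               # predicted physical parameters
])
param_net.compile(optimizer="adam", loss="mse")
# param_net.fit(density_cubes, true_parameters)      # hypothetical simulation data
```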

We're now working on scaling this up, in collaboration between NERSC, UC Berkeley, Intel and Cray, who are our industry partners, so that we can use larger simulation volumes and even more data. We're using TensorFlow running on around 5,000 CPU nodes at a time, achieving several petaflops of sustained performance on Cori, which is our flagship supercomputer. The most important part of this is that we're able to predict more physical parameters, with even greater accuracy, by scaling up the training, and this is something that we're really excited about.

I want to talk a little bit more about how we achieved this performance, how we're using TensorFlow on our supercomputers to get this kind of performance and this kind of insight into our data and our science. Supercomputers are fairly specialized: we have specialized hardware that allows the tens of thousands of computers in a supercomputer to act together as one compute machine, and we want to use this machine as efficiently as possible, training on our data with all the performance available to us. We want to be able to take advantage of that when we're running TensorFlow. The approach we take uses fully synchronous, data-parallel training, where each node trains on a subset of the data. We started off, as many people do, using gRPC for the communication, with parameter servers collecting the parameter updates from every node and sending them back and forth. But, like many other people, we noticed that this is not a very efficient way to run at scale; we found that once we were running on a couple of hundred nodes or so, we had a real communication bottleneck between the compute nodes and the parameter servers. So instead we use MPI, the Message Passing Interface, to allow all the compute nodes to communicate with each other directly, removing the need for parameter servers. This also lets us really take advantage of our high-speed interconnect, the specialized hardware that connects our compute nodes. We use MPI for the gradient aggregation, using a specialized MPI all-reduce collective that was developed by Cray in partnership with us.
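
The specific Cray-developed collective isn't public, but Horovod is a well-known open-source library that takes the same approach (MPI-style all-reduce of gradients, with no parameter servers), so here is a minimal, hedged Keras sketch of what synchronous data-parallel training like this looks like; the model and data are toy stand-ins.

```python
import tensorflow as tf
import horovod.tensorflow.keras as hvd

hvd.init()   # one MPI rank per compute node (or per GPU)

# Toy model and data standing in for the real cosmology network and dataset.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])
x = tf.random.normal([1024, 8])
y = tf.random.normal([1024, 1])

# Scale the learning rate with the number of ranks, then wrap the optimizer so
# every gradient update is combined across nodes with an all-reduce.
opt = hvd.DistributedOptimizer(tf.keras.optimizers.SGD(0.01 * hvd.size()))
model.compile(optimizer=opt, loss="mse")

model.fit(
    x, y, epochs=2, batch_size=64,
    callbacks=[hvd.callbacks.BroadcastGlobalVariablesCallback(0)],  # sync initial weights
    verbose=1 if hvd.rank() == 0 else 0,
)
# Launched across nodes with something like:  mpirun -np 64 python train.py
```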

This is pretty neat: it avoids imbalances in node performance, the straggler effect you might otherwise run into, and it overlaps communication and compute in a way that allows very effective scaling. We've seen that we're able to run TensorFlow on a large fraction of our supercomputer with very little drop in efficiency. Something I've been really excited to see coming soon in TensorFlow is better built-in support for this kind of distributed training, and I'm excited to see how that's going to work for the larger community.

There are three things I want you to take away from this talk. First, cosmology has some really cool science problems, and some really cool deep learning problems. Second, scientific data is different: the statistics that we often have associated with scientific data could be really useful, I think, to the deep learning community. And third, I'm looking forward to seeing how the rest of the community is going to take this forward. Now I'll hand things back to Laurence. Thank you.

[Laurence Moroney] Thank you, Debbie, great stuff, actually simulating universes. We're running very short on time, so I just want to share this: those were just three great stories, but there are countless more out there. This is a map that I created of people who have starred TensorFlow on GitHub and who share their location, and we have people from the outback of Australia to the green fields of Ireland, from the North Pole and the Arctic Circle in Norway all the way down to Deception Island in Antarctica. There are countless stories being created, countless great new things being done with TensorFlow and with machine learning. We think some of those stories are yours, and if they are, please get in touch with us; we would love to hear them and would love to share them. So with that, I just want to say thank you very much for attending today. Enjoy what's left of I/O, and have a safe journey home. Thank you.
