TensorFlow World 2019
October 30, 2019, Santa Clara, USA
TensorFlow World 2019 Keynote

About speakers

Jeff Dean
Google Senior Fellow at Google
Megan Kacholia
Engineering Director at Google
Frederick Reiss
Chief Architect at IBM
Theodore Summe
Head of Product for Cortex (Twitter Machine Learning) at Twitter
Craig Wiley
Director, Product Management, Cloud AI Platform at Google
Kemal El Moujahid
Product Director, TensorFlow at Google

Jeff joined Google in 1999 and is currently a Google Senior Fellow in Google's Research Group, where he leads the Google Brain team, Google's deep learning research team in Mountain View. He has co-designed/implemented five generations of Google's crawling, indexing, and query serving systems, and co-designed/implemented major pieces of Google's initial advertising and AdSense for Content systems. He is also a co-designer and co-implementor of Google's distributed computing infrastructure, including the MapReduce, BigTable and Spanner systems, protocol buffers, LevelDB, systems infrastructure for statistical machine translation, and a variety of internal and external libraries and developer tools. He is currently working on large-scale distributed systems for machine learning. He received a Ph.D. in Computer Science from the University of Washington in 1996, working with Craig Chambers on compiler techniques for object-oriented languages. He is a Fellow of the ACM, a Fellow of the AAAS, a member of the U.S. National Academy of Engineering, and a recipient of the Mark Weiser Award and the ACM-Infosys Foundation Award in the Computing Sciences.


Megan Kacholia is an Engineering Director on TensorFlow/Brain, and a long-time Googler. She specializes in working on large-scale, distributed systems, and finding ways to tune and improve performance in such environments.


Fred Reiss is the Chief Architect at IBM’s Center for Open-Source Data and AI Technologies in San Francisco. Fred received his Ph.D. from UC Berkeley in 2006, then worked for IBM Research Almaden for the next nine years. At Almaden, Fred worked on the SystemML and SystemT projects, as well as on the research prototype of DB2 with BLU Acceleration. Fred has over 25 peer-reviewed publications and six patents.


Having spent the last couple of years building AWS SageMaker, I love ungating the power of data and driving improved decision making through the application of machine learning and advanced analytics. Whether leading analytics teams tasked with driving business metrics or building tools and platforms so that others can do analytics they previously thought impossible, I seek to create more optimal decision making for my team and the world.

My passion is driving corporate profitability and fueling sustainable business growth for multi-billion dollar companies. I am a highly analytical and growth-centric product visionary and business strategist with a consistent record of success. I am committed to exceeding top- and bottom-line growth through data-driven decision making that aids C-level executives. A few accomplishments include:

==> I drove substantial incremental profit and margin improvement for Amazon.
==> I significantly improved Free Cash Flow in year one in the role.
==> I increased conversion and growth rate through improved customer interfaces and improved marketing targeting.

Recognized as an expert in building or enhancing systems for analyzing and interpreting sophisticated data and metrics while steering revenue and profit growth for Amazon.com, I have received several vertical promotions over the last 7 years.


Entrepreneur, passionate about solving big problems with AI. Currently head of Product for TensorFlow, the world's leading open source machine learning platform. Previously at Facebook, leading the Messenger platform, connecting 20M businesses with 1.3B people; Wit.ai, the leading NLU developer platform; and M, Facebook's 200M-MAU assistant. Prior to Google and Facebook, I built 3 companies in fleet management, edtech, and enterprise collaboration. I sold my last startup, LiveMinutes, to Fuze, a leading videoconferencing vendor, where I led Product and BD.


About the talk

O'Reilly and TensorFlow are teaming up to present the first TensorFlow World. It brings together the growing TensorFlow community to learn from each other and explore new ideas, techniques, and approaches in deep learning and machine learning.

Presented by:

Jeff Dean, Google

Megan Kacholia, Google

Frederick Reiss, IBM

Theodore Summe, Twitter

Craig Wiley, Google

Kemal El Moujahid, Google


I'm really excited to be here. I think it was almost four years ago to the day that about 20 of us were sitting in a small conference room in one of the Google buildings. We'd woken up early because we wanted to time things for an early East Coast launch, turning on the tensorflow.org website and releasing the first version of TensorFlow as an open source project, and I am really excited to see what it has become, because it's just remarkable to see the growth and all the different ways in which people have used this system for all kinds of interesting things around the world.

One thing that's interesting is that the growth in the use of TensorFlow also mirrors the growth in interest in machine learning and machine learning research generally around the world. This is a graph showing the number of machine learning arXiv papers posted over the last ten years or so, and you can see it's growing quite rapidly, much more quickly than you might expect. That lower red line is the nice doubling-every-couple-of-years exponential growth rate we got used to in computing power from Moore's law for so many years. That has now slowed down, but you can see that the machine learning research community is generating research ideas faster than that, which is pretty remarkable. We've replaced computational growth with something else, and we'll see that both together will be important.

Really, the excitement about machine learning is because we can now do things we couldn't do before. As little as five or six years ago, computers really couldn't see that well, and starting in about 2012 or 2013 people started to use deep neural networks to attack computer vision problems: image classification, object detection, things like that. So now, using deep neural networks, you can feed in the raw pixels of an image and fairly reliably get a prediction of what kind of object is in that image. You feed in the pixels, their red, green and blue values at a bunch of different coordinates, and you get out the prediction "leopard". This works for speech as well: you can feed in audio waveforms, and by training on lots of audio waveforms and transcripts of what's being said in those waveforms, we can take a completely new recording and tell you what is being said, emitting a transcript. You can even combine these ideas and have models that take in pixels and, instead of just predicting a classification of what object is in the image, actually write a short caption that a human might write about the image, like "a cheetah lying on top of a car". That's one of my vacation photos, which was kind of cool.
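To make the pixels-in, label-out idea concrete, here is a minimal sketch using a stock pretrained ImageNet classifier in tf.keras; the image path is a placeholder, and this illustrates the general technique rather than any specific model discussed in the talk.

```python
import numpy as np
import tensorflow as tf

# Minimal sketch: feed raw RGB pixel values into a pretrained ImageNet
# classifier and read off a label such as "leopard".
# "leopard.jpg" is a placeholder path, not a file referenced in the talk.
model = tf.keras.applications.MobileNetV2(weights="imagenet")

img = tf.keras.preprocessing.image.load_img("leopard.jpg", target_size=(224, 224))
pixels = tf.keras.preprocessing.image.img_to_array(img)              # raw RGB values
batch = tf.keras.applications.mobilenet_v2.preprocess_input(pixels[np.newaxis, ...])

preds = model.predict(batch)
print(tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=1)[0])
```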

And just to show the progress in computer vision: Stanford hosts the ImageNet contest every year to see how well computer vision systems can predict which of a thousand categories is in a full-color image. You get about a million images to train on, and then a bunch of test images your model has never seen before, on which it needs to make predictions. In 2011 the winning entrant got 26% error, so you could kind of make out what was in an image, but it was pretty hard. We know from human experiments that the error of a well-trained human, someone who has practiced this particular task and really understands the thousand categories, is about 5%, so it's not a trivial task. And in 2016 the winning entrant got 3% error. So just look at that tremendous progress in the ability of computers to resolve and understand imagery, to have computer vision that actually works. This is remarkably important in the world, because now we have systems that can perceive the world around us and can do all kinds of really interesting things with that. We're seeing similar progress in speech recognition, language translation, things like that.

So for the rest of the talk I'd like to structure things around this nice list of 14 grand challenges that the US National Academy of Engineering put out, things they felt were important for the science and engineering communities to work on for the next hundred years. They put this out in 2008, after some deliberation, and I think you'll agree these are pretty good, large, challenging problems. If we actually made progress on them, the world would be better off: we'd be healthier, we'd be able to learn things better, we'd be able to develop better medicines, we'd have all kinds of interesting energy solutions. I'm going to talk about a few of these.

The first one I'll talk about is restoring and improving urban infrastructure. We're on the cusp of widespread commercialization of a really interesting new technology that's going to change how we think about transportation, and that is autonomous vehicles. This is a problem that has been worked on for quite a while, but it's now starting to look like it's actually completely possible and commercially viable to produce these things, and a lot of the reason is that we now have computer vision and machine learning techniques that can take in the raw forms of data that the sensors on these cars collect. They have spinning lidars on top that give them 3D point-cloud data, they have cameras pointing in lots of different directions, they have radar in the front and rear bumpers, and deep neural networks can take all this raw information in and fuse it together to build a high-level understanding of what is going on around the car: there's another car to my side, there's a pedestrian up here to the left, there's a light post over there that I don't really need to worry about moving. That really helps them understand the environment in which they're operating, and then decide what actions they can take in the world that are legal, safe, obey all the traffic laws, and get them from A to B. And this is not some distant, far-off dream. Alphabet's Waymo subsidiary has actually been running tests in Phoenix, Arizona. Normally when they run tests they have a safety driver in the front seat, ready to take over if the car does something unexpected, but for the last year or so they've been running tests in Phoenix with real passengers in the back seat and no safety driver in the front seat, driving around suburban Phoenix. It's a slightly easier training ground than, say, downtown Manhattan or San Francisco, but it's still something that is not really far off; it's actually happening. And this is really possible because of things like machine learning and the use of TensorFlow in these systems.

Another area I'm really excited about is advanced health informatics. This is a really broad area, and I think there are lots and lots of ways that machine learning and healthcare data can be used to make better healthcare decisions for people, so I'll talk about one of them. Really, the potential here is that we can use machine learning to bring the wisdom of experts, through a machine learning model, anywhere in the world, and that's a huge opportunity. Let's look at this through one problem we've been working on for a while, which is diabetic retinopathy. Diabetic retinopathy is the fastest-growing cause of preventable blindness in the world. If you have diabetes, or early symptoms that make it likely you might develop diabetes, you should really get screened every year. So there are four hundred million people around the world who should be screened every year, but the screening is really specialized; you need an ophthalmologist's level of training to do it effectively, and the impact of the shortage is significant. In India, for example, there's a shortage of 127,000 eye doctors to do this sort of screening, and as a result 45% of patients who are diagnosed with this disease have suffered either full or partial vision loss before they're diagnosed and treated. This is completely tragic, because if you catch it in time this disease is completely treatable; there's a simple, 99%-effective treatment. We just need to make sure the right people get treated at the right time. So what can you do?

It turns out diabetic retinopathy screening is also a computer vision problem, and the progress we've made on general computer vision problems, where you want to take a picture and tell whether it's a leopard or a cheetah or a car, also works for diabetic retinopathy. You can take a retinal image, the raw data that comes off the screening camera, and feed it into a model that predicts a grade of 1, 2, 3, 4 or 5; that's how these things are rated, with one being no diabetic retinopathy, five being proliferative, and the other numbers in between. It turns out you can get a collection of retinal images and have ophthalmologists label them. If you ask two ophthalmologists to label the same image, they agree with each other on the number 60% of the time, and, perhaps slightly scarier, if you ask the same ophthalmologist to grade the same image a few hours apart, they agree with themselves 65% of the time. But you can fix this by getting each image labeled by many ophthalmologists: if you get it labeled by seven ophthalmologists and five of them say it's a two and two of them say it's a three, it's probably more like a two than a three. Eventually you have a nice high-quality dataset you can train on; like many machine learning problems, high-quality data is the key ingredient. Then you can apply basically an off-the-shelf computer vision model trained on this dataset, and you get a model that is on par with, or perhaps slightly better than, the average board-certified ophthalmologist in the US, which is pretty amazing.
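The "off-the-shelf computer vision model" approach described here amounts to ordinary transfer learning with a five-way output; the sketch below is a generic illustration under that assumption (the backbone choice and the data pipeline are placeholders, not the published system).

```python
import tensorflow as tf

# Generic transfer-learning sketch for a five-grade image classifier.
# `train_ds` is assumed to be a tf.data.Dataset of (retinal_image, grade) pairs.
base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3))

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),   # grades 1 (none) .. 5 (proliferative)
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, epochs=10)
```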

It turns out you can actually do better than that. If you get the data labeled by retinal specialists, people who have more training in retinal disease, and change the protocol by which you label things (you get three retinal specialists to look at each image, discuss it amongst themselves, and come up with what's called an adjudicated assessment, one number), then you can train a model that is on par with retinal specialists, which is the gold standard of care in this area. And that's something you can now take and distribute widely around the world.

One issue with healthcare kinds of problems is that you want explainable models: you want to be able to explain to a clinician why we think this person has moderate diabetic retinopathy. So you can take a retinal image like this, and one of the things that really helps is if you can show, as part of the model's assessment, why this is a grade two, by highlighting parts of the input data. You can make this more understandable for clinicians and enable them to really get behind the assessment the model is making. We've seen this in other areas as well; there's been a lot of work on explainability, so I think the notion that deep neural networks are complete black boxes is a bit overdone. There are actually a bunch of good techniques being developed, more all the time, that improve this.

Next, a bunch of advances depend on being able to understand text, and we've had a lot of really good improvements in the last few years in language understanding. This is a bit of a story of how research builds on other research. In 2017 a collection of Google researchers and interns came up with a new kind of model for text called the Transformer model. Unlike recurrent models, where you have a sequential process in which you absorb one word or token at a time, update some internal state, and then go on to the next token, the Transformer model enables you to process a whole bunch of text all at once, in parallel, which makes it much more computationally efficient, and then to use attention over the previous text so that, when trying to predict the next word, the model can focus on the parts of the context to the left that are relevant to that prediction. That paper was quite successful and showed really good results on language translation tasks with a lot less compute: the BLEU scores in the first two columns, for English-to-German and English-to-French (higher is better), together with the compute cost of these models, show it getting state-of-the-art results at the time with 10 to 200x less compute than other approaches. Then in 2018 another team of Google researchers built on the idea of Transformers (everything you see there in a blue oval is a Transformer module) and came up with an approach called Bidirectional Encoder Representations from Transformers, or BERT.

BERT has the really nice property that, in addition to using context to the left, it uses context all around, the surrounding text, in order to make predictions about text. The way it works is you start with a self-supervised objective. The really nice thing about this is that there's lots and lots of text in the world, so if you can figure out a way to use that text to train a model, you can understand text better. In the BERT training objective, to make it self-supervised, we take this text and drop about 15% of the words. It's actually pretty hard, but the model is going to try to fill in the blanks, to predict what the missing words were, and because we have the original words we know whether the model's guesses about what goes in the blanks are correct. By processing trillions of words of text like this, you get a very good understanding of contextual cues in language and how to fill in the blanks in a really intelligent way. So that's essentially the training objective for BERT: you take text, you drop 15% of it, and then you try to predict those missing words.

One key thing that works really well is that, in step one, you can pre-train a model on lots and lots of text using this fill-in-the-blanks self-supervised objective function, and then, in step two, you can take a language task you really care about, like predicting whether this is a five-star review or a one-star review of some hotel, where you don't have very much labeled text for that actual task. You might have only 10,000 reviews annotated with a star rating, but you can fine-tune the model, starting from the model trained in step one on trillions of words of text, and now use your paltry 10,000 examples for the text task you really care about, and that works extremely well. In particular, BERT gave state-of-the-art results across a broad range of different text-understanding benchmarks in the GLUE benchmark suite, which is pretty cool, and people have been using BERT in this way to improve all kinds of different things across language understanding and NLP.
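The fill-in-the-blanks objective can be sketched in a few lines; this is a simplified illustration of the masking step only (the token ids and mask id are made up), not the actual BERT pre-training code.

```python
import tensorflow as tf

# Simplified illustration of the masked-language-model objective:
# hide roughly 15% of the tokens and train the model to predict them.
MASK_ID = 103                                  # assumed id of a [MASK] token
token_ids = tf.constant([[2023, 2003, 1037, 7099, 6251, 2000, 19726, 1012]])

mask = tf.random.uniform(tf.shape(token_ids)) < 0.15     # pick ~15% of positions
inputs = tf.where(mask, MASK_ID, token_ids)              # replace them with [MASK]
labels = tf.where(mask, token_ids, -100)                 # loss only at masked positions

# A Transformer encoder is then trained so that its predictions at the masked
# positions recover `labels`; unmasked positions are ignored in the loss.
```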

One of the grand challenges was to engineer the tools of scientific discovery. I think it's pretty clear machine learning is going to be an important component of making advances in a lot of the other grand challenge areas, things like autonomous vehicles and so on, and it's been really satisfying to see that what we'd hoped would happen when we released TensorFlow as an open source project has actually come to pass: lots of people have picked up TensorFlow, used it for all kinds of things, improved the core system, and used it for tasks we'd never imagined. People have done all kinds of things, some inside Google, some outside, some inside academic institutions, some by scientists working on conserving whales or understanding ancient scripts, and the breadth of uses is really amazing. These are the 20 winners of the Google.org Impact Challenge, where people could submit proposals for how they might use machine learning and AI to tackle a local challenge they saw in their communities, and they cover all kinds of things, ranging from trying to predict better ambulance dispatching to identifying illegal logging using audio processing. Pretty neat, and many of them are using TensorFlow.

So one of the things we're pretty excited about is AutoML, which is the idea of automating some of the process by which machine learning experts sit down and make decisions to solve machine learning problems. Currently you have a machine learning expert sit down, take data and computation, run a bunch of experiments, stir it all together, and eventually get a solution to a problem you actually care about. What we'd like to do is see if we can eliminate a lot of the need for the human machine learning expert to run these experiments, and instead automate the experimental process by which an expert arrives at a high-quality solution for the problem they care about. Lots and lots of organizations around the world have machine learning problems, but many of them don't even realize they have a machine learning problem, let alone have people in their organization who can tackle it.

One of the earliest pieces of work our researchers did in this space is something called neural architecture search. When you sit down and design a neural network to tackle a particular task, you make a lot of decisions about the shapes of this and that, like should you use 3x3 filters in layer 17 or 5x5, all kinds of things like this. It turns out you can automate this process by having a model-generating model, and training that model-generating model based on feedback about how well the models it generates work on the problem you care about. The way this works is: we generate a bunch of models, which are just descriptions of different neural network architectures, train each of those for a few hours, see how well they work, and then use the accuracy of those models as a reinforcement learning signal for the model-generating model, to steer it away from models that didn't work very well and towards models that did. We repeat this many, many times, and over time we get better and better by steering the search towards the parts of the space of models that worked well. It comes up with models that look a little strange, admittedly; a human probably would not sit down and wire something up exactly like that, but they're pretty effective.
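In outline, the search loop described above looks roughly like the pseudocode below; Controller, build_and_train, and evaluate are hypothetical stand-ins for the controller network and training harness, not Google's actual implementation.

```python
# Pseudocode sketch of the reinforcement-learning architecture search loop.
# `Controller`, `build_and_train`, and `evaluate` are hypothetical helpers.
NUM_SEARCH_STEPS = 100       # rounds of search
CANDIDATES_PER_STEP = 8      # architectures sampled per round

controller = Controller()                       # the "model-generating model"

for step in range(NUM_SEARCH_STEPS):
    # 1. Sample candidate architecture descriptions.
    architectures = [controller.sample() for _ in range(CANDIDATES_PER_STEP)]

    # 2. Train each candidate briefly and measure validation accuracy.
    rewards = []
    for arch in architectures:
        child_model = build_and_train(arch, train_hours=2)
        rewards.append(evaluate(child_model))   # accuracy on held-out data

    # 3. Use the accuracies as a reward (e.g. REINFORCE) to update the
    #    controller, steering it toward parts of the search space that work.
    controller.update(architectures, rewards)
```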

If you look at this graph, it shows the best human machine learning experts, machine learning researchers in the world, producing a whole bunch of different kinds of models over the last four or five years, things like ResNet-50, DenseNet, Inception-ResNet, all kinds of things. That black dotted line is the frontier of human-expert model quality, with accuracy on the y-axis and computational cost on the x-axis. As you go out the x-axis you tend to get more accuracy, because you're applying more computational cost. But the blue dotted line is the AutoML-based system, where we've done this automated search instead of hand-designing any particular architecture, and you can see that it's better both at the high end, where you care about the most accurate model you can get regardless of computational cost, and at the low end, where you care about a really lightweight model you might run on a phone or something. And in 2019 we've been able to improve that significantly: this is a set of models with a very good dial for trading off computational cost and accuracy, and they're all way better than the human-guided experimentation, the black dotted line there.

This is true for image classification. It's true for object detection, where the red line is AutoML and the other lines are not. It's true for language translation: the black line there is various kinds of Transformers, and the red line is what we got when we gave the basic components of the Transformer to an AutoML system and allowed it to fiddle with them and come up with something better. It's true for computer vision models used in autonomous vehicles: this is a collaboration between Waymo and Google Research, where we were able to come up with models with significantly lower latency at the same quality, or, trading that off, significantly lower error rate at the same latency. It even works for tabular data: if you have lots of customer records and you want to predict which customers are going to spend $1,000 with your business next month, you can use AutoML to come up with a high-quality model for that kind of problem.

Okay, so what do we want? I think we want the following properties in a machine learning model. One is that today we tend to train separate models for each different problem we care about, and I think this is a bit misguided.

You really want one model that does a lot of things, so that you can build on the knowledge it has from doing thousands or millions of different things, and when the million-and-first thing comes along you can use its expertise from everything else it knows how to do to get into a good state for the new problem with relatively little data and relatively little computational cost. Those are some nice properties. I have a kind of cartoon diagram of something I think might make sense. Imagine we have a model like this that is very sparsely activated, so different pieces of the model have different kinds of capabilities and are called upon when it makes sense, but are mostly idle; it's relatively computation- and power-efficient, yet it can do many things. Each component here is some piece of a machine learning model, with different kinds of state, parameters and operations. Now a new task comes along. You can imagine taking something like neural architecture search and, if you squint at it, turning it into neural pathway search: we're going to look for components that are really good for this new task we care about, and maybe we'll find that this path through the model actually gets us into a pretty good state for the new task, because it goes through components that were trained on related tasks already. And maybe we want the model to be more accurate for the purple task, so we can add a bit more computational capacity, add a new component, start to use that component for this new task, and continue training it; now that new component can also be used for solving other related tasks, and each component itself might be running some sort of interesting architecture search inside. Something like that is the direction I think we should be exploring as a community. It's not what we're doing today, but I think it could be a pretty interesting direction.

Okay, finally, I'd like to touch on thoughtful use. As we see more and more uses of machine learning in our products and around the world, it's really important to think carefully about how we want to apply these technologies. Like any technology, these systems can be used for amazing things or for things we might find a little detrimental in various ways. So we came up with a set of principles by which we think about applying machine learning and AI to our products, and we made them public about a year and a half ago as a way of sharing our thought process with the rest of the world. I'll point out that many of these are areas of research that are not fully understood yet, but we aim to apply the best state-of-the-art methods, for example for reducing bias in machine learning models, while also continuing to do research and advance the state of the art in these areas. This is just a taste of the different kinds of work we're doing in this area: how do we do machine learning with more privacy, using things like federated learning? How do we make models more interpretable, so a clinician can understand the predictions a model is making, as in the diabetic retinopathy example? How do we make machine learning more fair? Okay, with that, I hope I've convinced you that machine learning (you're already here, so maybe you're already convinced) is helping make significant advances in a lot of hard problems: computer vision, speech recognition, language understanding. The general use of machine learning is going to push the world forward. Thank you very much, and I appreciate you all being here.

Good morning, everyone. It's great to see you, and first of all, welcome. Today I want to talk a little bit about TensorFlow 2.0 and some of the new updates we have that are going to make your experience with TensorFlow even better. Before I dive into those details, I want to start off with a thank you.

Thank you to everyone here, everyone on the livestream, everyone who's been contributing to TensorFlow, all of you who make up the community around this open source project, helping accelerate AI for everyone. You've used it in your experiments, you've deployed it in your businesses, you've built some amazing applications that we're excited to showcase and talk about here today, which is one of my favorite parts of conferences like this, and you've done so much more. All of this has helped make TensorFlow what it is today: one of the most popular machine learning ecosystems in the world. And honestly, that would not have happened without the community being excited, embracing and using it, and giving back. So on behalf of the entire TensorFlow team, I really just want to say thank you first, because it's so amazing to see how TensorFlow is used; one of the greatest things about my job is seeing the applications and the ways folks are using TensorFlow.

I want to take a step back and talk a little bit about some of the different user groups and how we see them making use of TensorFlow. TensorFlow is being used across a wide range of experiments and applications. Calling out a few groups: researchers, data scientists, and developers, with other groups in between as well. Researchers use it because it's flexible, flexible enough to experiment with and push the state of the art in deep learning. You heard this just a few minutes ago from the folks at Twitter, talking about how they're able to use TensorFlow and expand on top of it to do some of the amazing things they want on their own platform, and at Google we see examples of this when researchers are creating advanced models like the ones Jeff referenced in his talk earlier. Taking a step forward and looking at data scientists: data scientists and enterprise engineers say they rely on TensorFlow for performance and scale in training and production environments. That's one of the big things about TensorFlow that we've emphasized from the beginning: how can we make sure it can scale to large production use cases? For example, companies like BlackRock use TensorFlow to test and deploy real-world use cases such as text summarization and classification. Moving one step forward and looking at application developers: application developers use TensorFlow because it's easy to learn ML on the platforms they care about. Arduino wants to make ML simple on microcontrollers, so they rely on TensorFlow pre-trained models and TensorFlow Lite Micro for deployment. Each of these groups is a critical part of the TensorFlow ecosystem, and this is why we really wanted to make sure that TensorFlow 2.0 works for everyone.

We announced the alpha at our Dev Summit earlier this year, and over the past few months the team has been working very hard to incorporate early feedback. Again, thank you to the community for giving us that early feedback so we can make sure we're developing something that works well for you. We've been working to resolve bugs and issues, and just last month, in September, we were excited to announce the final general release of TensorFlow 2.0.

You might be familiar with TensorFlow's architecture, which has always supported the ML lifecycle from training through deployment; again, that's one of the things we've emphasized since the beginning, when TensorFlow was initially open-sourced a few years ago. But I want to emphasize how TensorFlow 2.0 makes this workflow even easier and more intuitive. First, we invested in Keras in TensorFlow, making it the default high-level API. Many developers love Keras because it's easy to use and understand; you heard this mentioned a little earlier, and hopefully we'll hear more about it over the next few days. By tightly integrating Keras with TensorFlow 2.0, we can make Keras work even better with primitives like tf.data, do performance optimization behind the scenes, and run distributed training. Again, we really wanted 2.0 to focus on usability: how can we make it easier for developers, and easier for users to get what they need out of TensorFlow? For instance, Lose It!, a customized weight-loss app, uses tf.keras for designing their network, and by leveraging MirroredStrategy distribution in 2.0 they were able to utilize the full power of their GPUs. It's use cases like this that we love to hear about.
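The change involved is small; here is a generic sketch of distributing Keras training with MirroredStrategy (the model and dataset are placeholders, not any particular customer's network).

```python
import tensorflow as tf

# Generic sketch of multi-GPU training with tf.distribute.MirroredStrategy.
strategy = tf.distribute.MirroredStrategy()       # replicates across local GPUs
print("Replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():                            # variables created here are mirrored
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(20,)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# model.fit(train_ds, epochs=5)   # train_ds: a tf.data.Dataset of (features, labels)
```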

And again, it's very important for us to know how the community is making use of things, how the community is using 2.0, and what they want to see, so that we can make sure we're developing the right framework and that you can contribute back. When you need a bit more control to create advanced algorithms, 2.0 comes fully loaded with eager execution, making it familiar for Python developers. This is especially useful when you're stepping through and debugging, making sure you really understand, step by step, what's happening. It also means there's a lot less boilerplate code required when training your model, all without having to use session.run; again, usability is a focus.
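A quick generic illustration of what eager execution means in practice: operations run immediately and return concrete values you can print while debugging, with no session.run.

```python
import tensorflow as tf

# Eager execution is on by default in TF 2.x.
print(tf.executing_eagerly())        # True

x = tf.constant([[1.0, 2.0]])
w = tf.Variable([[3.0], [4.0]])
y = tf.matmul(x, w)
print(y.numpy())                     # [[11.]] -- inspectable right away, no session.run
```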

To demonstrate the power of training models in 2.0, I'll show you how you can train a state-of-the-art NLP model in about ten lines of code using the Transformers NLP library by Hugging Face. Again a community contribution, this popular package offers some of the most advanced NLP models available today, like BERT, GPT-2, Transformer-XL and XLNet, and it now supports TensorFlow 2.0. Let's take a look. Just looking through the code, you can see how you can use 2.0 to train Hugging Face's DistilBERT model for text classification. You simply load the tokenizer, the model and the dataset, then prepare the dataset and use tf.keras's compile and fit APIs, and with a few lines of code I can train my model. With just a few more lines we can use the trained model for tasks such as text classification using eager execution. Again, it's examples like this where we see how the community takes something and is able to do something very exciting and amazing by making use of the platform and ecosystem that TensorFlow provides.
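To make that concrete, here is an illustrative sketch of the workflow described: load a tokenizer and DistilBERT, prepare a dataset, then compile and fit with tf.keras. The toy texts and hyperparameters are placeholders, and the exact tokenizer and dataset-preparation calls vary across versions of the transformers library.

```python
import tensorflow as tf
from transformers import DistilBertTokenizer, TFDistilBertForSequenceClassification

# Illustrative sketch only: toy data stands in for a real labeled dataset,
# and API details differ between transformers releases.
tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
model = TFDistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased")

texts = ["great keynote", "terrible coffee"]
labels = [1, 0]
enc = tokenizer(texts, padding=True, truncation=True, return_tensors="tf")
train_ds = tf.data.Dataset.from_tensor_slices((dict(enc), labels)).batch(2)

model.compile(optimizer=tf.keras.optimizers.Adam(3e-5),
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=["accuracy"])
model.fit(train_ds, epochs=3)
```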

But building and training a model is only one part of TensorFlow 2.0; you need the performance to match. That's why we worked hard to continue to improve performance with TensorFlow 2.0: it delivers up to 3x faster training using mixed precision on NVIDIA Volta and Turing GPUs, in a few lines of code, with models like ResNet-50 and BERT.
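Enabling mixed precision is a small change in Keras; in recent TF 2.x releases it looks roughly like this (older 2.x versions exposed the same policy under tf.keras.mixed_precision.experimental). The model below is a placeholder.

```python
import tensorflow as tf

# Mixed precision: compute in float16 on Volta/Turing Tensor Cores while
# keeping variables in float32.
tf.keras.mixed_precision.set_global_policy("mixed_float16")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(1024, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10),
    # Keep the final activation in float32 for numeric stability.
    tf.keras.layers.Activation("softmax", dtype="float32"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```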

As we continue to double down on 2.x, performance remains a focus, with more models and more hardware accelerators: in the next upcoming TensorFlow release you can expect Cloud TPU pod support along with mixed precision for GPUs. Performance is something we're keeping a focus on while also making sure usability stays at the forefront.

But there's a lot more to the ecosystem. Beyond model building and performance, there are many other pieces that help round out the TensorFlow ecosystem. Add-ons and extensions are a very important piece here, which is why we wanted to make sure they're also compatible with TensorFlow 2.0, so you can use popular libraries like the ones called out here, whether it's TensorFlow Probability, TF Agents, or TF Text. We've also introduced a host of new libraries to help researchers and ML practitioners in more useful ways. For example, Neural Structured Learning helps train neural networks with structured signals, and the new Fairness Indicators add-on enables regular computation and visualization of fairness metrics. These are just the kinds of add-ons in the TensorFlow ecosystem that help you do what you need to do with your model and beyond.
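As a flavor of how one of these add-ons plugs into a normal Keras workflow, here is a rough sketch of Neural Structured Learning's adversarial-regularization wrapper; the shapes, feature and label keys, and hyperparameters are placeholders, and the call signatures follow the NSL documentation rather than anything shown in the keynote.

```python
import neural_structured_learning as nsl
import tensorflow as tf

# Rough sketch: wrap a Keras model so training also uses adversarial
# perturbations of the inputs as a structured signal.
inputs = tf.keras.Input(shape=(28 * 28,), name="feature")
hidden = tf.keras.layers.Dense(64, activation="relu")(inputs)
outputs = tf.keras.layers.Dense(10)(hidden)
base_model = tf.keras.Model(inputs, outputs)

adv_config = nsl.configs.make_adv_reg_config(multiplier=0.2, adv_step_size=0.05)
adv_model = nsl.keras.AdversarialRegularization(base_model, adv_config=adv_config)

adv_model.compile(optimizer="adam",
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                  metrics=["accuracy"])
# NSL trains on dict-style batches whose keys match the input and label names:
# adv_model.fit({"feature": x_train, "label": y_train}, batch_size=32, epochs=5)
```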

Another valuable aspect of the TensorFlow ecosystem is being able to analyze your ML experiments in detail. TensorBoard is TensorFlow's visualization tool, which is what helps you accomplish this. It's a popular tool among researchers and ML practitioners for tracking metrics, visualizing model graphs and parameters, and much more. It's very interesting that we've seen users enjoy TensorBoard so much that they even take screenshots of their experiments and then use those screenshots to share with others what they're doing with TensorFlow. This type of sharing and collaboration in the ML community is something we really want to encourage with TensorFlow; there's so much that can happen by enabling the community to do good things. That's why I'm excited to show the preview of TensorBoard.dev. It's a new, free, managed TensorBoard experience that lets you upload and share your ML experiment results with anyone. You'll now be able to host and track your ML experiments and share them publicly. No setup required: simply upload your logs and then share the URL so that others can see your experiments and the things you are doing with TensorFlow. As a preview, we're starting off with a scalars dashboard, but over time we'll be adding a lot more functionality to make the sharing experience even better.

If you're not looking to build models from scratch and want to reduce some computational cost, TensorFlow has always made pre-trained models available through TensorFlow Hub, and today we're excited to share an improved experience for TensorFlow Hub. It's much more intuitive, and you can find a comprehensive repository of pre-trained models in the TensorFlow ecosystem. This means you can find models like BERT and others related to image, text, video and more, ready to use with TensorFlow Lite and TensorFlow.js. Again, we wanted to make sure the experience here is vastly improved, to make it easier for you to find what you need and more quickly get to the task at hand. And since TensorFlow is driven by all of you, TensorFlow Hub is hosting more pre-trained models from the community: you'll be able to find curated models from DeepMind, Google, Microsoft AI and NVIDIA ready to use today, with many more to come. We want to make sure TensorFlow Hub is a great place to find these excellent pre-trained models, and there's so much the community is doing that we want to showcase those models as well.
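Reusing one of these pre-trained models typically takes just a few lines with the tensorflow_hub library; this is a generic sketch, and the module handle below is a placeholder to be replaced with a real URL from tfhub.dev.

```python
import tensorflow as tf
import tensorflow_hub as hub

# Sketch of reusing a pre-trained text-embedding model from TensorFlow Hub
# as a Keras layer. The handle is a placeholder, not a real module URL.
embed = hub.KerasLayer("https://tfhub.dev/<publisher>/<model>/<version>",
                       trainable=False, input_shape=[], dtype=tf.string)

model = tf.keras.Sequential([
    embed,                                   # maps raw strings to embedding vectors
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```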

TensorFlow 2.0 also highlights TensorFlow's core strength and area of focus, which is being able to go from model building and experimentation through to production, no matter what platform you work on. You can deploy end-to-end ML pipelines with TensorFlow Extended, or TFX; you can use your models on mobile and embedded devices with TensorFlow Lite for on-device inference; and you can train and run models in the browser or in Node.js with TensorFlow.js. You'll learn more about what's new in TensorFlow in production during the keynote sessions tomorrow. You can learn more about all of these updates by going to tensorflow.org, where you'll also find the latest documentation, examples and tutorials for 2.0. Again, we want to make sure it's easy for the community to see what's happening and what's new, and to enable you to do what you need to do with TensorFlow. We've been thrilled to see the positive response to 2.0, and we hope you continue to share your feedback. Thank you, and I hope you enjoy the rest of TensorFlow World.

Hello everyone, I'm Fred Reiss. I work for IBM. I've been working for IBM since 2006 and I've been contributing to TensorFlow core since 2017, but my primary job at IBM is to serve as tech lead for CODAIT, the Center for Open-Source Data and AI Technologies. We are an open source lab located in downtown San Francisco, we work on open source technologies that are foundational to AI, and we have on staff 44 full-time developers who work only on open source software. That's a lot of developers, a lot of open source developers. Or is it? Well, if you look across IBM at all of the IBMers who are active contributors to open source, who have committed code to GitHub in the last 30 days, you'll find that there are almost 1,200 IBMers in that category. So our 44 developers are actually a very small slice of a very large pie. And those numbers don't include Red Hat; when we closed that acquisition earlier this year, we more than doubled our number of active contributors to open source. So you can see that IBM is really big on open source, and more and more, the bulk of our contributions in the open are going towards the foundations of AI.

And when I say AI, I mean AI in production, I mean AI at scale. Scale is not an algorithm, it's not a tool; it's a process. It's a process that starts with data; that data turns into features, those features train models, those models get deployed in applications, and those applications produce more data, and the whole thing starts all over again. At the core of this process is an ecosystem of open source software, and at the core of this ecosystem is TensorFlow, which is why I'm here on behalf of IBM open source to welcome you to TensorFlow World. Now, throughout this conference you're going to see talks that speak to all of the different stages of this AI lifecycle, but I think you're going to see a special emphasis on this part: moving models into production. And one of the most important aspects of moving models into production is that once your model gets deployed in a real-world application, it's going to start having effects on the real world, and it becomes important to ensure that those effects are positive and that they're fair to your clients and your users.

Here's a hypothetical example that our researchers at IBM put together a little over a year ago. They took some real medical records data and produced a model that predicts which patients are more likely to get sick and therefore should get additional screening, and they showed that if you naively train this model you end up with a model that has significant racial bias, but that by deploying state-of-the-art techniques to adjust the dataset and the process of making the model, they could substantially reduce this bias and produce a model that is much more fair. You can see a Jupyter notebook with the entire scenario end to end, including code and equations and results, at the URL down here. Again, I need to emphasize this was a hypothetical example; we built a flawed model deliberately so we could show how to make it better, and no patients were harmed in this exercise.

However, last Friday I sat down with my morning coffee, opened up the Wall Street Journal, and saw an article at the bottom of page 3 describing a scenario eerily similar to our hypothetical. You know, when your hypotheticals start showing up as newspaper headlines, that's kind of scary. I think it is incumbent upon us as an industry to move forward the processes and the technology of trust in AI, trust and transparency in AI, which is why IBM and IBM Research have released our toolkits of state-of-the-art algorithms in this space as open source, under AI Fairness 360, AI Explainability 360, and Adversarial Robustness 360. It is also why IBM is working with other members of the Linux Foundation AI Trusted AI committee to move forward open standards in this area, so we can all move more quickly to trusted AI.
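To make the "adjust the dataset" step concrete, here is a rough, hypothetical sketch of the measure-then-mitigate workflow that the AI Fairness 360 toolkit supports, using toy data and placeholder column names rather than the medical-records scenario from the notebook.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy data with a made-up protected attribute; placeholders only.
df = pd.DataFrame({"age": [30, 45, 50, 25], "race": [0, 1, 0, 1], "label": [1, 0, 1, 0]})
data = BinaryLabelDataset(df=df, label_names=["label"],
                          protected_attribute_names=["race"])

priv, unpriv = [{"race": 1}], [{"race": 0}]
metric = BinaryLabelDatasetMetric(data, privileged_groups=priv,
                                  unprivileged_groups=unpriv)
print("Mean outcome difference before:", metric.mean_difference())

# Reweighing adjusts instance weights so the training data is balanced
# across privileged and unprivileged groups.
reweighed = Reweighing(unprivileged_groups=unpriv,
                       privileged_groups=priv).fit_transform(data)
metric_after = BinaryLabelDatasetMetric(reweighed, privileged_groups=priv,
                                        unprivileged_groups=unpriv)
print("Mean outcome difference after:", metric_after.mean_difference())
```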

If you'd like to hear more on this topic, my colleague Animesh Singh will be giving a talk this afternoon at 1:40 on trusted AI, a full 40-minute session. Also, I'd like to give a quick shout-out to my other coworkers from CODAIT who have come down here to show you cool open source demos at the IBM booth. Also check out our websites, developer.ibm.com and codait.org. On behalf of IBM, I'd like to welcome you all to TensorFlow World. Enjoy the conference. Thank you.

Hi everyone. Before I get started with my conversation today, I want to do a quick plug for Twitter. What's great about events like this is you get to hear people like Jeff Dean talk, and you also get to hear from colleagues and people in the industry that are facing similar challenges as you have conversations around developments in data science and machine learning. What's great is that that's actually available every day on Twitter. Twitter is phenomenal for conversation on data science and machine learning: people like Jeff Dean and other thought leaders are constantly sharing their thoughts and their developments. You can follow that conversation, engage in it, and, not only that, bring it back to your workplace and come off looking like a hero. Just something to consider. So with that shameless plug out of the way: my name is Theodore Summe, and I lead product for Cortex, Twitter's central machine learning organization. If there are any questions for me or the team, feel free to catch me on Twitter to follow up later.

So before we get into how we're accelerating ML at Twitter, let's talk a little bit about how we're even using ML at Twitter. Twitter is largely organized against three customer needs, the first of which is our health initiative. That might be a bit confusing to you; you might think of it as user safety, but we think about it as improving the health of conversations on Twitter, and machine learning is already used here. We use it to detect spam: we can all agree it's valuable to algorithmically, at scale, detect spam and protect our users from it. In the abuse space, we can proactively flag potentially abusive content, surface it for human review, and act on it before our users are even impacted by it. A third space where we use machine learning here is what we call NSFW, not safe for work; I think you're all familiar with the acronym. How can we, at scale, identify this content and handle it accordingly? That's another use of machine learning in this space, and there's more we want to do here beyond what we're already doing.

Second is the consumer organization. This is largely what you think of as the big blue app of Twitter, and here the customer job we're serving is helping connect our customers with the conversations on Twitter that interest them. One of the primary ways we do this is our timeline. Our timeline today is ranked: if you're not familiar, users follow accounts, the content and tweets associated with those accounts get funneled into a central feed, and we rank that feed based on your past engagement and interests to make sure we bring forward the most relevant conversations for you. Now, there are lots of conversations on Twitter and you're not following everyone, so there's also a job we serve in bringing forward the conversations that you're not actually following but that are still relevant to you. This is surfaced in a recommendations product, which uses machine learning to scan the corpus of content on Twitter, identify which conversations would be most interesting to you, and push you a notification. The inverse of that is when you know which topics you want to explore but you're looking for the conversations around them; that's where we use Twitter search, another surface area in the big blue app where we're using machine learning.

The third job is helping connect brands with their customers. Think of this as our revenue product; it's actually the OG of machine learning at Twitter, the first team that implemented it. Here we use it for what you might expect, ad ranking, which is kind of like timeline ranking, but instead of tweets it's ads: identifying the most relevant ads for our users. As signals that go into that, we also use targeting, understanding your past engagement to understand which ads are in your space. And the third piece is ad safety. You might not think about this when you think about machine learning and advertising, but if you're a company like United and you want to advertise on Twitter, you want to make sure your ad never shows up next to a tweet about a plane crash. So how do we, at scale, protect our brands from those off-brand conversations? We use machine learning for this as well. As you can tell, machine learning is a big part of all of these organizations today.

Where we have shared interest and shared investment, we want to make sure we have a shared organization that serves it, and that's the need for Cortex. Cortex is Twitter's central machine learning team, and our purpose is really quite simple: to enable Twitter with ethical and advanced AI. To serve that purpose we've organized in three ways. The first is our applied research group. This group applies the most advanced ML techniques from industry and research to our most important surface areas, whether they be new initiatives or existing surfaces. You can think of this team as an internal task force that we can redeploy against the company's top initiatives. Second is shared signals: when using machine learning, having shared data assets is broadly useful and gives us more leverage. Examples of this would be our language understanding team, which looks at tweets and identifies named entities inside them; those can then be offered up as features for other teams to consume in their own applications of machine learning. Our media understanding team looks at images and can create a fingerprint of any image, so we can identify every use of that image across the platform. These are examples of shared signals we're producing that can be used for machine learning at scale inside the company. And the third of our organizations is our platform team, which is really the origin of Cortex. Here we provide tools and infrastructure to accelerate ML development at Twitter and increase the velocity of our ML practitioners, and this is really the focus of the conversation today.

When we set out to build this ML platform, we decided we wanted a shared ML platform across all of Twitter. Why is it important that it be shared across all of Twitter? Well, we want transferability. We want the great work being done in the ads team to be, where possible, transferable to benefit the health initiative where that's relevant, and similarly, if there's great talent in the consumer team that's interested in moving to the ads team, if they're on the same platform they can transfer without friction and ramp up quickly. So we set out with this goal of having a shared ML platform across all of Twitter, and when we did that we looked at a set of product requirements. First, it needs to be scalable: it needs to be able to operate at Twitter scale. Second, it needs to be adaptable: this space is developing quickly, so we need a platform that can evolve as data science and machine learning develop. Third is talent: we want a development environment at Twitter that appeals to the ML researchers and engineers we're hiring and developing. Fourth is the ecosystem: we want to be able to lean on partners that are developing industry-leading tools so we can focus on technologies that are Twitter-specific. Fifth is documentation: we want to be able to quickly unblock our practitioners as they hit issues, which is inevitable on any platform. And finally, usability: we want to remove friction and frustration from the lives of our team so they can focus on delivering value for our end customers.

So considering these product requirements, let's see how TensorFlow has done against them. First is scalability. We validated this by putting TensorFlow, by way of our implementation, which we call DeepBird, up against timeline ranking; every tweet that's ranked in the timeline today runs through TensorFlow, so we can consider that test validated.

novel architectures at 10 so vulkan support as well as a custom loss function allows us to react to latest research and employee that inside the company example to be published on this publicly is our use of a split that architecture and a drinking so 10 suppose been very adaptable for us. Third is the council meeting about the tile pool and kind of two types. There's the mo engineer in the NL researcher and other proxy of these audiences. We looked at the GitHub data on these and clearly tensorflow is widely adopted among several engineers and similarly the archive Community

show strong evidence of white adoption the academic community on top of this proxy data. We've also have anecdotal evidence of the speed ramp up for a male researchers and Emily Engineers inside the company. The fourth is ecosystem. Whether it's tensorboard TF data validation TF model analysis TF Metals. Tortilla pubg gfx pipelines is a slew of these products out there. And therefore, they allow us to focus on developing tools and infrastructure that is specific to Twitter's needs and lead on the great work of others are really grateful for this intense

foot is great. Just being documentation. Now. This is what you would go to any go to tensorflow and you see that phenomenal documentation as well as great educational resources, but what you might not appreciate and we've come to really appreciate is the value of the user-generated content would stack Overflow another platform can provide in terms of user-generated content is almost as valuable as anything tensorflow itself can create and so tensorflow given to the widespread adoption is great tensorflow website has provided phenomenal documentation for

ML practitioners. Finally usability and this is why we're really excited about tensorflow 2.0 the orientation around the carousel API makes it more user-friendly. It also still continues allows for flexibility for more advanced users eager execution enables more rapid and intuitive debugging they close the gap between emel engineers and modelers. So clearly from this checklist. We're pretty happy with our engagement with tensorflow work cited about continuing develop the platform with them and push the limits on what it can do gratitude to the community for their

So clearly, from this checklist, we're pretty happy with our engagement with TensorFlow. We're excited about continuing to develop the platform with them and pushing the limits of what it can do, and we're grateful to the community for their participation and involvement in the product; we appreciate your conversation on Twitter as we advance it. If you have any questions for me, as I said, I'm not alone here today; a bunch of my colleagues are here as well. So if you see them roaming the halls, please engage with them, or, as I shared before, you can continue the conversation on Twitter; here are their handles. Thank you for your time. I just want to begin by saying I've been dabbling in cloud and cloud machine learning for a while, and during that time it

never occurred to me that we'd be able to come out with something like we did today, because this is only possible when Google Cloud and TensorFlow collaborate unbelievably closely together within Google. To begin, let's talk a little bit about TensorFlow: 46 million downloads, and massive growth over the last few years. It's expanded from the forefront of research, which we saw earlier this morning, to businesses taking it on as a dependency for their business to operate on a day-in, day-out basis. It's just a super exciting

piece. As someone who spends most, really all, of their time thinking about how we can bring AI and machine learning into businesses, seeing TensorFlow's commitment and focus on deploying actual ML in production is super exciting to me. With this growth, though, come growing pains, and part of that is things like support: when my model doesn't do what I expected it to, or my training job fails, what options do I have? And how well does your boss respond when you say, hey, I don't know why my model is

not training, but not to worry, I've posted a question on Stack Overflow and hopefully someone will get back to me? We understand that businesses who are taking a bet on TensorFlow as a critical piece of their architecture, of their stack, need more than this. Second, it can be a challenge to unlock the scale and performance of the cloud. For those of you who, like me, have gone through this journey over the last couple of years: for me it started on my laptop, and then eventually I outgrew my laptop, and so I had a gaming rig under my desk

with a GPU, and eventually there were eight gaming rigs under my desk, and when you opened the door to my office the whole floor knew, because it sounded like a wind tunnel. But now, with today's cloud, that doesn't have to be the case: you can go from that single instance all the way up to massive scale seamlessly.
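As a rough sketch of what going from a single machine to many accelerators looks like in code (an illustration that assumes a multi-GPU VM; it is not a Google-provided template), the same Keras model can be scaled out with a distribution strategy:

import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()   # uses all local GPUs it can find
print("Replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():   # variables created here are mirrored across replicas
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(32,)),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )

# model.fit(train_dataset, epochs=10)   # the training loop itself is unchanged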

So with that, today we bring you TensorFlow Enterprise. TensorFlow Enterprise is designed to do three things: one, give you enterprise-grade support; two, cloud-scale performance; and three, managed services when and where you want them, at the abstraction level you want them. Enterprise-grade support: what does that mean? Fundamentally, it means that as these businesses take a bet on TensorFlow, many of them have IT policies or requirements that the software have a certain longevity before they're willing to commit it to production. And so today, for certain versions of TensorFlow, when used on Google Cloud, we will extend that one year of support to a full

three years. That means that if you're building models on TensorFlow 1.15 today, you know that for the next three years you'll get bug fixes and security patches when and where you need them.
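In practice, taking advantage of that support window is mostly a matter of pinning the supported release; a minimal, illustrative sketch:

# In requirements.txt:  tensorflow==1.15.*
import tensorflow as tf

# Fail fast if an environment drifts off the long-term-supported line.
assert tf.__version__.startswith("1.15"), (
    "Expected a TensorFlow 1.15.x build, got %s" % tf.__version__)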

Simple and scalable: scaling from an idea on a single node to production at massive scale can be daunting, right? You're saying to your boss, hey, I took a sample of the data, and that previously seemed totally reasonable, but now I have to train on the entire corpus of data, and that can take days or weeks. We can help with all of that by deploying TensorFlow on Google Cloud. Our infrastructure has been running TensorFlow successfully for years and has been highly optimized for this purpose, so it scales across our world-class architecture; the products are compatibility-tested and performance-optimized for the cloud and for Google's infrastructure. What does this mean? If any of you have ever had the opportunity to use BigQuery: BigQuery is Google Cloud's massively parallel,

cloud-hosted data warehouse, and by the way, if you haven't tried BigQuery, I highly recommend going out and trying it; it returns results faster than can be imagined. We wanted to make sure we were taking full advantage of that speed in BigQuery, and so recent changes included in TensorFlow Enterprise have increased the speed of the connection between the data warehouse and TensorFlow by three times. Now, all of a sudden, those jobs that were taking days take hours.
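A rough sketch of what that faster path looks like from the training side, using the tensorflow_io BigQuery reader; the project, dataset, table, and column names are placeholders, and the exact argument list is an assumption to be checked against the connector's documentation rather than taken from this example:

import tensorflow as tf
from tensorflow_io.bigquery import BigQueryClient

client = BigQueryClient()
session = client.read_session(
    "projects/my-project",                   # placeholder billing project
    "my-project", "my_table", "my_dataset",  # placeholder table location
    selected_fields=["feature_a", "feature_b", "label"],
    output_types=[tf.float64, tf.float64, tf.int64],
    requested_streams=4,                     # read several streams in parallel
)
dataset = (session.parallel_read_rows()
           .map(lambda row: ((row["feature_a"], row["feature_b"]), row["label"]))
           .batch(1024)
           .prefetch(tf.data.experimental.AUTOTUNE))
# The resulting tf.data.Dataset can be passed straight to model.fit(...).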

Unity is a wonderful customer and partner of ours, and you can see the quote here. Unity leverages these aspects of TensorFlow Enterprise in their business: their monetization products reach more than 3 billion devices worldwide, game developers rely on a mix of scale and products to drive installs, revenue, and player engagement, and Unity needs to be able to quickly test, build, and deploy models, all at massive scale. This allows them to serve up the

best results for their developers and their advertisers. Managed services: as I said, TensorFlow Enterprise will be available on Google Cloud as part of the Google Cloud AI Platform. It will also be available in VMs if you prefer that, or in containers if you want to run them on Google Kubernetes Engine or using Kubeflow on Kubernetes Engine. In summary, TensorFlow Enterprise offers enterprise-grade support, that continuation to a full three years of support that IT departments are

accustomed to; cloud-scale performance, so that you can run at massive scale; and seamless integration with our managed services. And all of this is free and fully included for all Google Cloud users, so Google Cloud becomes the best place to run TensorFlow. But there's one last piece, which is for companies for whom AI is their business, not companies for whom AI might help with this part of their business, or might help optimize this campaign or this back-end system, but

for companies where AI is their business, where they're training hundreds of thousands of hours a year on petabytes of data, using cutting-edge models to meet their unique requirements. For them we are introducing TensorFlow Enterprise with white-glove support. This is really for cutting-edge AI: engineering-to-engineering assistance when needed, and close collaboration across Google that allows us to fix bugs faster if needed. One of the great opportunities of working in cloud: if you ask my kids, they'll tell you that the reason I work in cloud AI and

machine learning is in an effort to keep them from ever learning to drive; they're 8 and 10 years old, so I need people to hurry along this road, if you will. One of the customers and partners we have is Cruise, and you can see here they're a shining example of the work we're doing on their quest toward self-driving cars. They've also experienced hiccups, challenges, and scaling problems, and we've been a critical partner for them in helping ensure that they can achieve the results they need to solve this kind of

generation-defining problem of autonomous vehicles. You can see that not only did we improve the accuracy of their models, we also reduced training times from four days down to one day. This allows them to iterate at speeds previously unthinkable. None of this, as I said, would have been possible without the close collaboration between Google Cloud and TensorFlow. Looking back on Megan's recent announcement of TensorBoard.dev, we will be looking at bringing that type of functionality into an

enterprise environment as well in the coming months, but we're really, really excited to get TensorFlow Enterprise into your hands today. To learn more and get started, you can go to the link shown, as well as to sessions later today. And if you are on the cutting edge of AI, we are accepting applications for the white-glove service as well. We're excited to bring this offering to teams and businesses that want to move into a place where machine learning is increasingly a part of how they create value.

Thank you very much for your time today. Hi, my name is Kemal, and I'm the product director for TensorFlow. Earlier you heard from Jeff and Megan about the product directions. What I'd like to talk about now is the most important part of what we're building, and that's the community. That's you. Thank you. As you've seen in the video, we had a great roadshow, eleven events spanning five continents, to connect the community with TensorFlow. I personally was very lucky this summer because I got to travel to Morocco, Ghana, and Shanghai, amongst

other places, just to meet the community and listen to your feedback, and we heard a lot of great things. So as we were thinking about how we can best help the community, it really came down to three things. First, we would like to help you connect with the larger community and share the latest and greatest of what you've been building. Then we also want to help you learn: learn about ML, learn about TensorFlow. And then we want to help you contribute and give back to the community. So let's start with connect.

So why connect? Well, first of all, the community has really grown a lot. It's huge: 46 million downloads, 2,100 contributors. I know we've been saying that all along, but I really want to say thank you on behalf of the TensorFlow team for making the community what it is today. Another aspect of the community that we're very proud of is that it is truly global. This is a map derived from our GitHub users, and you can see we cover all time zones, and it keeps

growing. So the community is huge and truly global, and we really wanted to think about how we can bring it closer together. This is what initiated the idea of TensorFlow World: we wanted to create an event for you, an event where you could come and connect with the rest of the community and share what you've been working on. And this has actually started organically: seven months ago the TensorFlow user groups started, and I think now we have close to 50. The largest one is in Korea and has

46,000 members, and we have 15 in China. So if you're in the audience or on the livestream, and you're looking at this map and thinking, wait, I don't see a user group where I live, and you have TensorFlow members you're connecting with and you want to start a TensorFlow user group, well, we'd like to help you. So please go to the TensorFlow community page to find out how to get started, so that next year when we look at this map, we have dots all over the place. So what about businesses? We've talked about developers, but what about businesses? One thing we heard from businesses is that they have a

business problem, and they think ML might help them, but they're not sure how, and that's a huge missed opportunity when you look at the staggering 13 trillion dollars that AI will bring to the global economy over the next decade. So you have those businesses on one side, and then you have partners on the other side, partners who know ML and know how to use TensorFlow. So how do we connect those two? Well, this was the inspiration for launching our trusted partner pilot program, which helps you as a business connect to a partner who will

help you solve your problem. If you go on tensorflow.org, you'll find more about the trusted partner program. Just a couple of examples of cool things partners have been working on: one partner helped a car insurance company shorten its insurance claim processing time using image processing techniques, and another partner helped a global tech company automate its shipping labeling process using object recognition techniques; you'll hear more from this partner later today, and I encourage you to go check out the talks. Another aspect is, if you're a

partner and you're interested in getting into this program, we would also like to hear from you. Let's talk about learn. We've invested a lot in producing quality material to help you learn about ML and about TensorFlow. One thing we did over the summer that was really exciting is that, for the first time, we were part of Google Summer of Code. We had a lot of interest and were able to select 20 very talented students, and they got to work the whole summer with amazing mentors on the TensorFlow engineering team. They worked on very

inspiring projects, ranging from 2.0 to Swift for TensorFlow to TF-Agents. We were so excited by the success of this program that we decided to participate for the first time in Google Code-in. This is the same kind of program, but for pre-university students from 13 to 17: a global online contest that introduces teenagers to the world of contributing to open-source development. So, as I mentioned, we've invested a lot this year in ML education material,

but one thing we heard is that there's a lot of different material out there, and what you want is to be guided through pathways of learning. So we worked hard on that, and I'm excited to announce the new Learn ML page on tensorflow.org. It's curated for you by the TensorFlow team and organized by level, so from beginner to advanced you can explore books, courses, and videos to help you improve your knowledge of machine learning, and use that knowledge, and TensorFlow, to solve your real-world problems. And for more exciting

news that will be available on the website, I'd like to play a video from our friend Andrew Ng. Hi everyone, I'm in New York right now and wish I could be there to enjoy the conference, but I want to share with you some exciting updates. A while ago, deeplearning.ai started a partnership with the TensorFlow team with the goal of making world-class education available to developers on the Coursera platform. Since releasing the Deep Learning Specialization, I've seen so many of you, hundreds of thousands, learn the fundamental skills of deep learning, and we've since been able to

complement that with the TensorFlow in Practice specialization to help developers learn how to build ML applications for computer vision, NLP, sequence models, and more. Today I want to share with you an exciting new project that the deeplearning.ai and TensorFlow teams have been working on together. Being able to use your models in real-world scenarios is when machine learning gets really exciting, so we're producing a new four-course specialization called TensorFlow: Data and Deployment that will let you take your ML skills to the real world, deploying models to the web, mobile devices, and

more. It will be available on Coursera in early December. I'm excited to see what you do with these abilities; keep learning. All right, this is really cool. You know, since we started working on these programs, it's been pretty amazing to see hundreds of thousands of people take these courses, and the goal of these educational resources is to let everyone participate in the ML revolution, regardless of what your experience with machine learning is. Now let's talk about contribute. A

great way to get involved is to connect with your local GDE. We now have 126 machine learning GDEs (Google Developer Experts) globally. We love our GDEs; they do amazing things for the community. This year alone they gave over 400 tech talks and 250 workshops, and they wrote 221 articles reaching tens of thousands of developers. One thing that was new this year is that they were a big help with docs sprints. Docs are really important, they're critical: you really need good-quality docs to work on machine learning,

and often the documentation is not available in people's native languages. And so this is why we partnered with our GDEs and launched the docs sprints: over 9,000 API docs were updated by members of the TensorFlow community in over 15 countries. We heard amazing stories of the power running out and people coming back later to finish the docs sprint, and of people actually writing docs on their phones. So if you've been helping with docs, whether you're in the room or on the livestream, thank you so much. If you're

interested in helping translate documentation into your native language, please reach out and we'll help you organize a docs sprint. Another thing the GDEs help with is experimenting with the latest features. I want to call out an ML GDE from Singapore who was already experimenting with the new 2.0 features; you can hear him talk later today about his experience. So if you want to get involved, please reach out to your GDE and start working on TensorFlow. Another really great way to help

is to join a SIG, a special interest group; it helps you work on the things you're most excited about in TensorFlow. We now have 11 SIGs available. SIG Addons and SIG Networking in particular really supported the transition to 2.0 by embracing parts of tf.contrib and carrying them forward into 2.0, and SIG Build ensures that TensorFlow runs everywhere, on any OS and any architecture, and plays well with the Python library ecosystem. We have many other really exciting SIGs, so I really encourage you to join one.
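For a sense of what SIG Addons maintains in practice, here is a small sketch (a generic example; the particular optimizer and loss are just illustrations) of tensorflow_addons components dropping into a stock 2.0 Keras model:

import tensorflow as tf
import tensorflow_addons as tfa

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(16,)),
    tf.keras.layers.Dense(8),                                           # embedding output
    tf.keras.layers.Lambda(lambda t: tf.math.l2_normalize(t, axis=1)),  # normalize for the triplet loss
])
model.compile(
    optimizer=tfa.optimizers.LazyAdam(learning_rate=1e-3),
    loss=tfa.losses.TripletSemiHardLoss(),
)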

Another really great way to contribute is through competitions. For those of you who were at the Dev Summit back in March, we launched our 2.0 Challenge on DevPost, and the grand prize was an invitation to this event, TensorFlow World. So we would like to honor our 2.0 Challenge winners, and I think we are lucky to have two of them in the room, Victor and Kyle, if you're here. Victor worked on Handtrack.js, a library for prototyping hand-gesture interactions in the browser, and Kyle worked on a Python 3 package for

n-body simulation. One thing we heard during our travels is: oh, that hackathon was great, but I totally missed it, can we have another one? Well, yes, let's do another one. If you go to the TensorFlow World challenge on DevPost, we're launching a new challenge where you can apply your 2.0 skills, share the latest and greatest, and win cool prizes. We're really excited to see what you're going to build. Another great community that we're very excited to partner with is Kaggle,

so together we launched a contest to challenge you to build a question-answering model based on Wikipedia articles. You can put your natural language processing skills to the test and earn $50,000 in prizes. It's open for entries until January 22nd, so best of luck. We have a few action items for you, and they're listed on this slide, but remember: we created TensorFlow World for you, to help you connect and share what you've been working on, so our

main action item for you over the next two days is really to get to know the community better. And with that, I'd like to thank you, and I hope you enjoy the rest of TensorFlow World. Thank you.
