TensorFlow World 2019
October 30, 2019, Santa Clara, USA
Video
Introduction to TensorFlow 2.0: Easier for beginners, and more powerful for experts (TF World '19)

About the speaker

Josh Gordon
Developer Advocate at Google

Josh Gordon works as a developer advocate for TensorFlow, and teaches Deep Learning at Pace University. He has over a decade of machine learning experience to share.


About the talk

TensorFlow 2.0 is all about ease of use, and there has never been a better time to get started. In this talk, we will introduce model-building styles for beginners and experts, including the Sequential, Functional, and Subclassing APIs. We will share complete, end-to-end code examples in each style, covering topics from "Hello World" all the way up to advanced examples. At the end, we will point you to educational resources you can use to learn more.

Presented by: Josh Gordon



Transcript

My name is Ashley, and I'll be your host for today — let's get the show on the road. I'd like to introduce our first speaker, Josh Gordon, who's on the TensorFlow team. Josh is going to talk about the ease of use of TensorFlow 2.0 and will walk us through the three styles of model-building APIs, complete with code examples. So please join me in welcoming Josh.

Thanks so much. How's it going, everybody? Let me just unlock the laptop and then we'll get started. I have only good news about TensorFlow 2.0, which is about a month old. It's wonderful: it is massively easier to use, and it's great both for beginners and for experts, and also from a teaching perspective. It has a lot of things I recommend that I'll talk about, too. (And maybe we can close the doors in the back — thanks.) One thing I wanted to mention right off the bat is that TensorFlow 2 has all the power of the graphs we had in TensorFlow 1, except they're massively easier to use. About the name TensorFlow: a tensor is basically a fancy word for an array — a scalar is a tensor, a vector is a tensor, a 3D cube of numbers is a tensor — and "flow" refers to a data flow graph. In TensorFlow 1 you would manually define a graph and then execute it with a session, and it felt a little bit like metaprogramming. That's exactly the system you would have wanted several years back if you were an engineer whose challenge was massively distributed training, but as a developer or a student seeing it for the first time, it was hard to work with. Now you can think of a tensor in a very similar way to a NumPy ndarray, and you can work with tensors interactively, exactly as you'd expect in Python. You no longer need to use things like sessions, and it works as you'd expect, which is great.

But enough details — here are some of the things I'd like to talk about. This is a rough schematic of how TensorFlow 2 looks. TensorFlow is a very, very large system with many moving pieces — a whole framework for doing machine learning — and what I'd like to do here is show you a couple of the pieces and what some of your options are for using them. We'll start with designing models using my favorite API of all time, which is Keras.

Keras is wonderful in TensorFlow 2.0, and here's a really important point: there's a spectrum of APIs, they're all built into the same framework, and you can mix and match as you go. What that means is that if you're a total novice to deep learning, you can start with the Sequential API, which is by far the easiest and clearest way to develop deep learning models today. You build a stack of layers, you call things like compile and fit, and it's 100% valid TensorFlow 2.0 code that is just as fast as any other way of writing it. There's no downside at all if your use case falls in that bucket. What's really important in TensorFlow 2.0 is that, as it becomes helpful to you, you can optionally scale up in complexity — using the Functional API or going all the way to subclassing — all in the same framework. When I'm teaching, for example, I can start with really simple, clear, easy ways of doing things; then, when I want to write a training loop from scratch, I can do that, and if I want to write custom layers from scratch, I can do that too.

I only have one slide on the Sequential API, and the reason is that we have so many tutorials on it — there's an entire book on it, which I recommend at the end. In case you're new to this style, the idea is that you define a stack of layers, and this is by far the most common way to build models: something like eighty to ninety percent of machine learning models fit into this framework. What's interesting, and a lot of developers don't realize this, is that when you're using the Sequential API or the Functional API, what you're actually doing is defining a data structure. That means you can do things like call model.summary and see a printout of all the layers and all the weights. It also means we can do compile-time checks: when you call model.compile, we can make sure all your layers are compatible. And it means that when you share your model with other people — say you want to do fine-tuning for transfer learning — because the model is defined as a data structure (a stack of layers with the Sequential API, a graph of layers with the Functional API), you can inspect that data structure, pull layers out of it, get the activations, and do fine-tuning and things like that really easily.
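As a concrete illustration, here is a minimal sketch of the Sequential style for a small "Hello World" classifier. The layer sizes and the MNIST-style input shape are placeholders of mine, not values from the slides.

```python
import tensorflow as tf

# A plain stack of layers: the Sequential API.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),   # e.g. MNIST-sized images
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# compile() runs compatibility checks across the stack of layers.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Because the model is a data structure, we can inspect it.
model.summary()

# model.fit(x_train, y_train, epochs=5)  # with data loaded separately
```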

Anyway, defining a stack of layers works exactly as it does in Keras — tf.keras behaves the same way as the multi-backend Keras you may know. One thing that's also extremely powerful, and that a lot of people are new to, is the Functional API. If the Sequential API is for building stacks of layers, the Functional API is for building graphs of layers, and I just want to show you how powerful that is. To be honest, most of what I've been doing myself is either using the Sequential API or going all the way to subclassing and writing everything from scratch, but I heard a really awesome talk on the Functional API a couple of weeks ago in Montreal, I've been using it a lot since, and I love it.

So I want to show you what a quick model would look like for something like visual question answering. When you start learning deep learning you spend a lot of your time on things like cats versus dogs, but let's take a look at a slightly more sophisticated model. Here you're given two inputs: an image — in this case a pair of dogs — and a question in natural language, which here asks, "What color is the dog on the right?" To answer a question like this you need a much more sophisticated model than just an image classifier, but you can still phrase it as a classification problem. If you Google for VQA, there are two really excellent papers that go into detail, but let's talk about how we would do this. We have a model with two inputs: an image and a question. If you take a machine learning course, you'll learn about processing images with convolutional and max-pooling layers, and about processing text with things like LSTMs and embeddings. One thing that's really powerful about deep learning is that all of these layers, regardless of what they are, take vectors as input. If you're a dense layer, you don't care whether your input happens to be the output of some convolutional layer or the output of some LSTM — it's just numbers coming in. So there's no reason we can't process the image with a CNN, process the text with an LSTM, concatenate the results, feed that into a dense layer, and phrase the whole thing as a classification problem. You can imagine the output dense layer might have a thousand different classes, where each class corresponds to one possible answer. Here the answer is "golden": we want to classify both of these inputs jointly as "golden." And I want to show you how quickly you can write something like this with the Functional API, which is really amazing.

I probably should have put this on the slide, but this is the architecture we want. It's one model, and it's going to have two heads. The first head is a standard stack of convolutional and max-pooling layers — exactly the same model you would use to classify cats and dogs, and you can use all the same tricks you learn there. Here I want to show you how to write it from scratch, but there's no reason you couldn't import something like MobileNet and use that to get activations for the images. In the other head we process the question: we go from a question to a vector, and to do that we have an embedding and an LSTM. At the end we concatenate the results and classify. And this is nearly the complete code for the entire VQA model — actually, it is the complete code, which is nuts when you look at it.

So here is our image classifier. You would want something much deeper, but this is a hello-world image classifier, and a vector is going to come out of it. You'll notice the first layer is a convolutional layer with 64 filters, each 3 by 3, with relu activation, and in the input shape I'm specifying how large my images are. You'll find that Keras can often infer the input shape, but specify it whenever you can — it's one less thing that can go wrong, so fully specify and catch bugs early. After that we do max pooling, and the important part is that we flatten it. That's actually a Sequential model that we're using inside the Functional API. After that we create an Input layer — this is for the Functional API — and we begin chaining layers together to build up a graph. So what we're doing is chaining the vision model to the input layer, and that's the first half of the model.

Here's the second half, and we're almost done. This is the model that processes the question. We create another Input, and I don't have the pre-processing here, but you can imagine we've tokenized the text, vectorized it, and padded it. Then we feed that into an embedding and then into an LSTM — exactly what you might do if you were training a text classifier. The important thing is that a vector comes out, and we keep chaining these layers together. At the very end — and this is the magical thing about deep learning — we can simply concatenate the results. It's one line of code, which is nice; it's not simple conceptually, but we concatenate the results and now we just have a vector. And just like any other problem, now that we have that vector we can feed it into dense layers and classify it. So here's the Keras model, and now we have a TensorFlow 2 model that will do VQA, and it will work exactly like any other Keras model. If you want, you can call model.fit on this thing, you can call model.train_on_batch, you can use callbacks, and if you want you can write a custom training loop using GradientTape. I think that's really powerful. And what's nice about these Functional API models, just like Sequential models, is that because there's a data structure behind the scenes — a graph — TensorFlow 2 can run compatibility checks to make sure your layers fit together. So it's really, really nice.
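Here is a rough sketch of that two-input visual question answering architecture in the Functional API. The layer sizes, vocabulary size, question length, and number of answer classes are placeholders I've chosen so the snippet stands alone — they are not the values from the talk.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Head 1: a small CNN that turns an image into a vector.
vision_model = tf.keras.Sequential([
    layers.Conv2D(64, (3, 3), activation="relu", input_shape=(224, 224, 3)),
    layers.MaxPooling2D(),
    layers.Conv2D(128, (3, 3), activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
])

image_input = tf.keras.Input(shape=(224, 224, 3))
encoded_image = vision_model(image_input)

# Head 2: an embedding + LSTM that turns a (tokenized, padded) question into a vector.
question_input = tf.keras.Input(shape=(100,), dtype="int32")
embedded = layers.Embedding(input_dim=10000, output_dim=256)(question_input)
encoded_question = layers.LSTM(256)(embedded)

# Concatenate the two vectors and classify jointly over possible answers.
merged = layers.concatenate([encoded_image, encoded_question])
output = layers.Dense(1000, activation="softmax")(merged)

vqa_model = tf.keras.Model(inputs=[image_input, question_input], outputs=output)
vqa_model.compile(optimizer="adam", loss="categorical_crossentropy")
```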

Basically, if you haven't used the Functional API — either because you learned Keras Sequential from a lot of books, or because you're coming from PyTorch and only use things like subclassing — I'd really encourage you to try it for two weeks. I love it. One other thing: the graph diagram I showed I just made in Google Slides, but because the model is a data structure, in addition to calling model.summary you can call plot_model and get a nice rendering of the graph that looks exactly like what I showed you. The time I've found this really useful is when you have complicated models, things like ResNets: you can plot out the whole graph and just make sure it looks the way you expect after assembling it.
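A minimal sketch of that plotting step, continuing from the vqa_model sketch above; plot_model writes the rendering to an image file (it needs the pydot and graphviz packages installed), and the filename here is just an example.

```python
import tensorflow as tf

# Render the model's layer graph to a PNG, with tensor shapes shown at each edge.
tf.keras.utils.plot_model(vqa_model, to_file="vqa_model.png", show_shapes=True)
```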

Anyway, there's another style in TensorFlow 2. The last two things I showed you are built into what you'll find at keras.io — they're mostly part of Keras proper — and this one is newer: subclassing. This is the Chainer / PyTorch style of developing models, which is also really, really nice, and you'll see this little spectrum here where you get increasing control. Maybe you're a researcher, or a student learning this for the first time, and what you're saying is, "I just want to write everything from scratch." So what we're doing here is defining a subclassed model, and I really love this — it feels a lot like object-oriented NumPy development. The idea is very similar in all of these frameworks: the framework gives you a class, which here happens to be Model, and there are two parts to writing a subclassed model. In the constructor you define your layers — here I'm creating a pair of dense layers, the exact same layers you'd find in a Sequential model. And in the call method — the forward method, or the predict method in other frameworks — you describe how those layers are chained together. So here I take some inputs, feed them through my first dense layer, feed that result into my second dense layer, and return it. What's nice is that this is not symbolic: if you're curious what x is, you can just print(x) exactly as you would in Python, and that will give you the activations of the first dense layer. And you can modify it. For example, here I've highlighted relu; say for some reason I'm not interested in using the built-in relu activation and I want to write my own — you can just remove that and write your own in regular Python right there. This is great for hacking on things, and it's great if you really want to know the details of exactly what's flowing in and out of these layers — it's the perfect way to do it. Also, there's nothing here saying you have to use the built-in dense layers: if you look at the code for the dense layer, it's doing something like wx + b, and you can absolutely write that from scratch in Python. There's a whole bunch of references on how to use each of these three styles of the Keras API, how to write custom layers, and so on — they're great. And then we have a couple of recommended tutorials.
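Here's a minimal sketch of the subclassing style just described — two dense layers defined in the constructor and chained in call(); the layer sizes are placeholders of mine.

```python
import tensorflow as tf

class MyModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        # Define the layers in the constructor.
        self.dense1 = tf.keras.layers.Dense(128, activation="relu")
        self.dense2 = tf.keras.layers.Dense(10, activation="softmax")

    def call(self, inputs):
        # Describe how the layers are chained together.
        x = self.dense1(inputs)
        # Nothing here is symbolic: when running eagerly, print(x) shows
        # the actual activations of the first dense layer.
        return self.dense2(x)

model = MyModel()
```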

The segmentation tutorial is super nice — it was reworked over the summer, it runs really fast, and I think you'll like it. Also, because this is an intro to TensorFlow 2 talk, let me just show you what the tutorials look like in case you're new to them, because there's one thing I want to point out. This is a tutorial on the website, and I think this is a really nice feature of tensorflow.org: obviously it's a web page, HTML, but it's a direct rendering of a Jupyter notebook — the web page is just a Jupyter notebook. The reason we've done that is that all the tutorials are runnable end to end. You can open the exact same page in Colab, do Runtime → Run all, and it has the complete code to reproduce the results you see, which means the tutorials are testable and stay fresh. Trust, but verify: for a long time I've seen tutorials with key pieces left as an exercise to the reader. At least all the tutorials on the website run — all of them — which is really nice. We still have plenty of work to do cleaning them up, but at least I can guarantee they contain the complete code. I really like that a lot.

All right, I want to talk a little bit about training models. There are basically several ways to train a model.

Again, the nice thing about TensorFlow 2 is that you can use the approach that's most helpful for your use case, so you don't always need to write a custom training loop from scratch — you have other options. The first, which you might be familiar with from Keras, is simply calling model.fit. What's really nice about model.fit is that it doesn't care whether you have a Sequential model, a Functional model, or a subclassed model: it works for all of them, and it's fast, performant, and simple. One thing that's a little less obvious is that model.fit is not just a beginner's way of training models. If you're working on a team and you call model.fit, you've reduced your code footprint by a lot, and that's one less thing your colleagues need to worry about when they're working with your models down the road. So if you can use the simple thing, you should, unless there's a reason for more complexity — just like regular software engineering. By the way, TensorFlow 2 has really nice built-in metrics for things like precision and recall, and you can also write custom metrics. Something that's really helpful, too, is callbacks. These are things I don't see a lot of new developers using, and they're super helpful. A really wonderful use is making plots of your loss over time and so on — callbacks can do things like that for you automatically. You can also write custom callbacks. A cool example: say the model you're training takes a very long time to train; you can write a callback that sends you a Slack notification when training completes.
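A small sketch of training with model.fit plus two callbacks: the built-in TensorBoard callback, which logs loss curves over time for you, and a custom callback. The tiny model, the random data, and the printed "notification" are placeholders; the Slack message itself is left out since the talk only describes the idea.

```python
import numpy as np
import tensorflow as tf

class NotifyWhenDone(tf.keras.callbacks.Callback):
    """A custom callback; on_train_end could, say, send a Slack message."""
    def on_train_end(self, logs=None):
        print("Training finished.")  # swap in your notification of choice

# A placeholder model and random data, just to make the snippet runnable.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="adam", loss="mse")

x = np.random.rand(256, 4).astype("float32")
y = np.random.rand(256, 1).astype("float32")

# model.fit works the same whether the model is Sequential, Functional, or subclassed.
model.fit(
    x, y, epochs=3,
    callbacks=[
        tf.keras.callbacks.TensorBoard(log_dir="./logs"),  # logs loss/metrics over time
        NotifyWhenDone(),
    ],
)
```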

I don't have slides for train_on_batch here, but I did want to show you a custom training loop with a GradientTape, because this is also very powerful — especially for students learning this for the first time who don't want a black box, and for researchers. So here's a custom training loop, and I'll have an example for you in a minute with linear regression so you can see exactly how it works. We have some function, and for now you can pretend that the tf.function annotation in the orange box doesn't exist — it's optional, so pretend it isn't there. The function takes features and labels. Whenever we're training in deep learning, we're doing gradient descent, and the first step in gradient descent is getting the gradients. The way frameworks implement this — and the implementation in TensorFlow — is that we start recording operations on a tape. So here we create a tape that records what's happening; then, with regular Python code, we call the call method on the model: we feed the features through the model and compute some loss — if we're doing regression, maybe squared error. Then we get the gradients of the loss with respect to all the variables in the model — if you print them out, you'll see exactly what the gradients are — and then we do gradient descent manually by applying them with an optimizer. We can also write our own optimizer, and I'll show you that in a second. Anyway, this is a custom training loop from scratch, and what it means is that while model.fit lets you use optimizers like RMSprop and Adam and all that, if you'd like to write your own optimizer, you can go ahead and write it in Python and it will fit right in with your model. So this is great for research.

By the way, you never need to write tf.function in your code and it will work the same. But if you do want a graph in TensorFlow 2 — basically, if you want to compile your code and have it run faster — you can add the tf.function annotation. What that means is that TensorFlow 2 will trace your computation and compile it, and the second and subsequent times you run the function it will be much, much faster, because it's running entirely in C++. All of the graph machinery in TensorFlow 2 is basically tf.function, but it's optional: you never need to use it, and it's easy performance when you do need it.

And then I just want to make this super concrete, because this is a getting-started-with-TensorFlow-2 talk. There are a lot of awesome tutorials on the website that will quickly show you how to train classifiers and whatnot, but I think a good place to start is just looking at linear regression, and the reason is that it's gradient descent, and it's a nice place to see exactly what that is.
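Here is a sketch of the custom training step just described, with the optional @tf.function annotation. The placeholder model, loss, optimizer, and random data are my own choices so the snippet stands alone.

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
loss_fn = tf.keras.losses.MeanSquaredError()

@tf.function  # optional: traces and compiles the step so it runs as a graph
def train_step(features, labels):
    with tf.GradientTape() as tape:
        predictions = model(features, training=True)   # forward pass
        loss = loss_fn(labels, predictions)             # e.g. squared error
    # Gradients of the loss with respect to every variable in the model.
    gradients = tape.gradient(loss, model.trainable_variables)
    # Apply them with an optimizer (or write your own update rule instead).
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    return loss

features = tf.random.normal((32, 4))
labels = tf.random.normal((32, 1))
print(train_step(features, labels))
```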

Because I have a graphic here on the left — I know it's tiny — I'm just briefly going to explain how linear regression works, and it's the same pattern for deep neural networks, which is really surprising. In linear regression, or in deep neural networks, you need three things. The first thing you need is a model, which is a function that makes a prediction. A model for linear regression, which you might have learned in high school, can be y = mx + b: we're trying to find the best-fit line. Defining a line y = mx + b means we have two parameters, or variables, that we need to set: m, the slope, and b, the intercept, and by wiggling those variables we can fit the line to our data. On the right you'll see a plot — a scatter plot with a bunch of points — and the best-fit line.

Now that we have a model, a line we can wiggle, we need a way of quantifying how well the line fits the data. One way is squared error: you draw a line on the page, measure the distance from the line to all of the points, and take the sum of the squares of those distances. The higher the sum of squares, the worse the line fits the data; the lower it is, the better the fit. So you have a single number, called the loss, that describes how badly your line fits the data, and you want to reduce that loss: when the loss gets to a minimum, your line fits the data well and you've found the best-fit line. The way we reduce the loss is gradient descent. On the left we're looking at the loss — the squared error — as a function of the two variables, m and b, and you can see that if we set m and b randomly we start at a loss that's pretty high, and by wiggling them we can bring it down. The trick is figuring out which way to move m and b. Briefly — I don't want to go off on too much of a tangent — there are two ways to do that. If you've forgotten calculus, you can find the gradient numerically; it's not rocket science: you take m, wiggle it up a little bit, and recompute your loss; then you take m, wiggle it down a little bit, and recompute your loss; you figure out which way makes the loss go down, and that's the direction you move in.

You do the same thing for b. That's very, very slow, and there are faster ways to do it, but first I want to show you what this code looks like in TensorFlow 2. Basically, you don't have to use Keras at all. You can also use TensorFlow 2 a lot like you use NumPy: whenever you see the word tensor, you can basically replace it in your head with NumPy array. So we have tf.constant, and a constant has a shape and a data type. One nice thing about TensorFlow 2 tensors is that they have a .numpy() method, so you can go straight from tensors to NumPy arrays. And then, just as you'd expect from NumPy, we have things like distributions — if you want to sample from a normal distribution, it's really quick — and you can do math in TensorFlow 2 a lot like you would in NumPy. The idea is the same; the names might be slightly different and you might have to poke around a little bit, but they're all there. Here's a very, very simple example just to make it more concrete: we have a constant of 3 and a function that is x squared. If you think back to calculus, the gradient of x squared is 2x, so at x = 3 the gradient is 6. You can also do that with all the variables in layers at once: here we have a pair of dense layers, we call the dense layers on something, and we get the gradients.
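To make the NumPy-style usage and the x-squared example concrete, here's a tiny sketch; the array values and shapes are mine.

```python
import tensorflow as tf

# Tensors look and feel a lot like NumPy arrays.
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
print(a.shape, a.dtype)
print(a.numpy())                  # go straight back to a NumPy array
b = tf.random.normal((2, 2))      # distributions, math ops, etc.
print(a + b)

# Gradient of f(x) = x^2 at x = 3 is 2x = 6.
x = tf.constant(3.0)
with tf.GradientTape() as tape:
    tape.watch(x)                 # constants must be watched explicitly
    y = x * x
print(tape.gradient(y, x))        # tf.Tensor(6.0, ...)
```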

Let me show you what this looks like for linear regression, just to make it concrete: this is the code for y = mx + b. The first thing I want to mention is how you install TensorFlow 2. If you're running in Colab right now, TensorFlow is installed by default, but there's a magic command you can run (the %tensorflow_version magic), and that will give you the latest version of TensorFlow 2. If you're running locally, you can visit tensorflow.org/install and do pip install tensorflow. TensorFlow is just a Python library; you import it as you always would. The first thing we do in this notebook — I know I flew through some of those code examples — is create a scatter plot of some random data, and we're looking to find the best-fit line. So we plot the data, and here's what we get.

These are TensorFlow constants and these are TensorFlow variables: constants are constant, while variables can be adjusted over time. You almost never need to write code this low-level; this is just pretending that we don't have Keras and we don't have any built-in fit methods, and we want to do it from scratch. So here's how you would do it from scratch. We create the variables, and then — this stuff looks scarier than it is — here is the predict function for linear regression. This is our equation for a line, y = mx + b, and our goal is going to be finding m and b. Here's our loss function: we take the result we predicted, subtract the result we wanted, square it, and take the average. That's the squared error, and if we go to the notebook, we can see the squared error when we start.

And here's gradient descent from scratch, pretending we didn't have anything like model.fit. For some number of steps, we take our x's and forward them through the model to get predictions. Then we compute the squared error, which is a single number, and then we get the gradients of m and b with respect to the loss. These are literally just numbers — you can print them out — and the gradients point in the direction of steepest ascent. So if we move in the direction of the gradients, our loss will increase; therefore we move in the reverse direction of the gradients. Again, this is the lowest-possible-level way to write this code: we're doing gradient descent from scratch, so we don't have any optimizer; we just take a step of the negative gradient times the learning rate, and that adjusts m and b as we go. If you run this code, you'll see the loss decreasing, and you'll see the final values of m and b and a plot of the best-fit line. And then — this is a little bit weird — I wrote some code to produce this diagram just so you can see exactly what the gradients are doing. So that's how you would write things from scratch in TensorFlow 2, and what's really awesome is that when you move to things like neural networks, this code is basically copy-and-paste. If you compare this custom training loop for linear regression to the custom training loop for DeepDream or any of the fancy models on the website, it follows almost the same steps: you make a prediction, you get your loss, you get the gradients, and you go from there. That's really nice. All right — that's all I wanted to mention there; I've got to move a lot faster.
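Here's a compact sketch of that from-scratch linear regression loop, pretending Keras and model.fit don't exist. The synthetic data, learning rate, and step count are placeholders of mine.

```python
import tensorflow as tf

# Synthetic data roughly along y = 3x + 2, with a little noise.
x = tf.random.uniform((200,))
y = 3.0 * x + 2.0 + tf.random.normal((200,), stddev=0.1)

# Variables can be adjusted by training; constants cannot.
m = tf.Variable(0.0)
b = tf.Variable(0.0)

def predict(x):
    return m * x + b              # y = mx + b

def squared_error(y_pred, y_true):
    return tf.reduce_mean(tf.square(y_pred - y_true))

learning_rate = 0.05
for step in range(500):
    with tf.GradientTape() as tape:
        loss = squared_error(predict(x), y)
    # Gradients point in the direction of steepest ascent...
    grad_m, grad_b = tape.gradient(loss, [m, b])
    # ...so step in the opposite direction, scaled by the learning rate.
    m.assign_sub(learning_rate * grad_m)
    b.assign_sub(learning_rate * grad_b)

print("m:", m.numpy(), "b:", b.numpy(), "loss:", loss.numpy())
```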

In terms of datasets, you basically have two options in TensorFlow 2. The first option, at the top, is the existing Keras built-in datasets, and these are great to start with: they're in NumPy format and they're usually really tiny, so they fit in memory with no problem. Then we have an enormous collection of research datasets, which is awesome, called TensorFlow Datasets, and here I'm showing how you can download something like the CycleGAN dataset from TensorFlow Datasets. What's important to be aware of — just a couple of quick tips — is that if you're downloading a dataset in TensorFlow Datasets format, it's going to give you a tf.data.Dataset. tf.data is a high-performance format, but it's slightly trickier to use than what you might be used to, so if you're using TensorFlow Datasets you have to be very careful to benchmark your input pipeline: if you just import the dataset and call model.fit on it, it might be slow. It's important to take your time and make sure your pipeline can read images off disk and things like that efficiently. A couple of tips that might be helpful: TensorFlow Datasets recently added an in-memory flag, so if you don't want to write a fast input pipeline you can pass that and load the whole thing into RAM, which makes life really easy. It also added a caching function, which is really, really nice. Here's some code for tf.data: say we have an image dataset and some code to process the images, and let's pretend that preprocessing is expensive and we don't want to run it every epoch. What you can do is add a cache call at the end of the pipeline: cache will keep the results of the preprocessing, and the pipeline will run much faster. The goal here isn't to give you all the details — just to point out things that are useful to know about. You can also cache to files: cache without any parameters will cache into RAM, and if you pass a filename, it will cache to a file on disk.
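A short sketch of the caching idea with tf.data; the dataset (MNIST from TensorFlow Datasets) and the preprocessing function are stand-ins for whatever expensive pipeline you actually have.

```python
import tensorflow as tf
import tensorflow_datasets as tfds

ds = tfds.load("mnist", split="train", as_supervised=True)

def preprocess(image, label):
    # Pretend this is expensive preprocessing you'd rather not repeat every epoch.
    image = tf.cast(image, tf.float32) / 255.0
    return image, label

AUTOTUNE = tf.data.experimental.AUTOTUNE

ds = (ds.map(preprocess, num_parallel_calls=AUTOTUNE)
        .cache()            # cache() with no argument keeps results in RAM;
                            # cache("cache_file") would spill them to disk instead
        .shuffle(10_000)
        .batch(64)
        .prefetch(AUTOTUNE))
```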

One thing that's awesome in TensorFlow 2 — if you're an expert you'll care about it now, and if not, you'll care about it down the road — is distributed training, and I'm going to skip some slides here to move faster. What I wanted to say briefly is that distributed training in TensorFlow 2 is awesome, and it's awesome because whether you're doing single-machine, multi-GPU synchronous data-parallel training, or multi-machine, multi-GPU synchronous data-parallel training, you don't need to change the code of your model — which is exactly what I care about. Here is some Keras code — it happens to build a ResNet application, but that doesn't matter — and I just want to show you how we run this code on one machine with multiple GPUs: we just wrap it in a block, and that's it. model.fit is distribution-aware and it just works, so you don't need to change your model to run on multiple GPUs. This particular strategy is called MirroredStrategy, and there are different strategies for how you distribute things: there's another one, multi-worker mirrored, and you change just that one line. And then if you have a network with multiple machines on it, again, your code doesn't change. So that's awesome; I really, really like the design of this.
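A sketch of the distribution-strategy pattern: build and compile the model inside the strategy's scope, and fit stays the same. The tiny model here is a placeholder rather than the ResNet from the slide.

```python
import tensorflow as tf

# Single machine, multiple GPUs: synchronous data-parallel training.
strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
    # The model-building code itself does not change.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# model.fit(...) is now distribution-aware.
# For multiple machines, swap in a multi-worker strategy, e.g.:
#   strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
```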

All right — other things that are awesome about TensorFlow 2, which I really encourage you to check out, especially if you're learning or teaching students, involve going beyond Python. We've talked about training models in Keras, and I just want to show you some of the cool things you can do to deploy them. Roughly, there are a bunch of different ways to deploy your model. The way I was used to a few years ago, as a Python developer, was to throw up a REST API — you can do that with TensorFlow Serving or Flask or whatever you like, by serving a Python API — and that's what I knew how to do. But there are a couple of things I've been learning since then that have been great: running models in the browser with TensorFlow.js, deploying them on Android and iOS using TensorFlow Lite, and, very recently, running them on Arduino using TensorFlow Lite Micro. I have a couple of suggested projects for you. The first is TinyML. This was a blog post — a guest article on our blog a few weeks ago by the Arduino team — and the article is basically a tutorial. What we're looking at is an Arduino; it's a microcontroller, and if you're new to microcontrollers, it's a system on a chip that also has pins it can run voltage to and read voltage from. So, for instance, you can plug an LED into one of those pins and have C code that runs voltage to the pin and turns on the light; likewise, you could have an accelerometer attached to one of the pins and decode the readings from it, which gives you some time-series data. That's what a microcontroller is: a computer plus, basically, pins. TensorFlow Lite is the code we use to deploy TensorFlow models onto phones, and recently TensorFlow Lite Micro lets you deploy them onto Arduinos — and these things are smaller than a stick of gum. This is a really nice one, a Nano that has built-in sensors; I have one at home. It's about 30 bucks, and it has a built-in accelerometer and temperature sensor. Anyway, what we're looking at here is a demo that uses the accelerometer and a trained model to recognize two gestures — a punch and an uppercut — and you can see that as they hold it, the laptop is recognizing the gestures. The workflow in the blog post is not bad at all, considering how much power you're getting out of it.

Let me see if I can show you this. Basically, the first thing you need to do is capture data. I wanted to bring the Arduino with me — it would have been better to show you quickly — but you plug the Arduino into your laptop with a USB cable, and you need to collect training data for your model. So you hold the Arduino in your hand and you collect a bunch of data for your punching gesture, and you save that to disk as a time series. The diagram on the right is something I captured in the IDE this morning as I moved the Arduino around, reading the accelerometer, and what you get out is just a CSV file with time-series data, exactly like you'd see in the time-series tutorials on tensorflow.org. You do the same thing for your other gestures. So you gather data, you save CSV files, you upload them to Colab, and in Colab you train a model to classify the data — and that's just a regular Python model; you don't need to know anything special about TensorFlow Lite to do it. Then — and you can find the complete code in the blog post — there's a very small amount of code needed to convert your model from Python down into TensorFlow Lite format to run on device. Once you have that model — and this part is a little unusual — we convert it to a C array, because this Arduino has something like one megabyte of flash and, I think, 256 KB of RAM, so we're converting the model into the smallest, simplest possible format. You burn it into a C array, and then we have an example where you can paste the C array in, and now you're running your TensorFlow model on device, which is amazing. I had so much fun doing this. You're just training a time-series classification model — which is a really valuable skill and a perfectly good tutorial on its own — but it's vastly more interesting when you can run it on a device. Super cool. Also, this is a brand-new area: TinyML refers to doing machine learning on small devices, and I think there's a lot of opportunity there.
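A minimal sketch of converting a trained Keras model to TensorFlow Lite. Writing out the .tflite file is the part shown here; turning those bytes into a C array for the Arduino is usually done with a separate tool (a common approach is xxd), which is only noted in a comment. The placeholder model and filenames are mine.

```python
import tensorflow as tf

# Stand-in for your trained Keras model (e.g. the gesture classifier).
model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(128,))])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_bytes = converter.convert()

with open("gesture_model.tflite", "wb") as f:
    f.write(tflite_bytes)

# On the command line, a common way to turn this into a C array is:
#   xxd -i gesture_model.tflite > gesture_model.h
```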

Another way to deploy models, which is super powerful, is in JavaScript using TensorFlow.js, and this is another project suggestion. One of the first tutorials you might run through in deep learning is sentiment analysis: given a sentence, predict whether it's positive or negative. It's also a really valuable skill, though it can feel somewhat dry the first time you do it. But you can run sentiment analysis live in JavaScript. For example, this is just a web page, and down at the bottom here I type "movie was awesome" and you can see that it predicted positive; another sentence comes out negative, which is pretty good. What's nice about doing this in JavaScript in the browser, from the user's perspective, is that there's nothing to install. That means if you're a Python developer and your goal is to have a cool demo, instead of throwing up a REST API you can create a web page and share it with friends. And what's nice is that this model was written in TensorFlow 2 using Keras, and we have a converter script that converts it into TensorFlow.js format to run in the browser. Following the examples — and I am not a JavaScript developer at all — I can get through them and it's not too bad. So it's possible to do, and it's a really good opportunity.
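For the browser path, here's a sketch of converting a Keras model to TensorFlow.js format using the tensorflowjs pip package; the converter call and output directory are written from memory of that package's documentation, so treat them as assumptions to verify, and the placeholder model is mine.

```python
import tensorflow as tf
import tensorflowjs as tfjs  # pip install tensorflowjs (assumed package)

# Stand-in for your trained Keras model.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(10,))])

# Writes model.json plus weight shards that TensorFlow.js can load in the browser.
tfjs.converters.save_keras_model(model, "tfjs_model/")
```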

If you have friends who are good JavaScript developers, this is a great collaboration opportunity: you can develop the models, and your friends can help you deploy them in the browser. Also, if you haven't seen it, you can write models from scratch in JavaScript too. I just want to show you a couple of demos here — I'm almost certainly going to accidentally unplug this laptop — but here's another super-convincing reason you might want to do JavaScript in the browser. This is a model called PoseNet; this could be the end of the presentation.

It runs entirely client-side in the browser — nothing is being sent to a server. It's not meant for this many people, but you can see it's starting to recognize where people are in the audience, which is so cool. I know this is all obvious for web developers, but I'm not one, so this is all new to me. For me to do this in Python would have been a nightmare: I would have to stream data from the video camera, send it to a server, classify it server-side, and send the results back. There's no way I'd be able to do that in real time like this, and for privacy reasons it would not be cool either. But because this is running client-side in JavaScript, we can do it, and you can immediately see all the things you could build on top of it. There are other models like this that are just really compelling.

And for people looking for applications: there's the sentiment analysis model, and built right on top of that in the browser is text classification — but this one is multi-class text classification, a toxicity detector. Say you have a comment and you want to figure out whether it's something you'd want posted on YouTube or somewhere like that: you can build tools like this that analyze text privately, quickly, client-side. You can imagine, for example, that your job is being a Wikipedia moderator, and you want to look at article edits to decide whether to publish them. You might spend a lot of your time looking for toxic content, and with something like this you could pre-process it very quickly — you could immediately have code running in the web page that highlights the bad parts of the article. I need to move a little bit quicker, so: if you're new to TensorFlow.js, by far the best demos are the two at the link here. One is PoseNet, which you can get at that link, and the other is Pac-Man — if you haven't seen it, you can control Pac-Man with your face, and you train a model live in the browser to do it. It's awesome.

And then, flying through these last comments on learning more: if you're working in Colab and you're used to using Keras, what you don't want to do is import keras; you want to say from tensorflow import keras, and that will give you the right version of Keras. That's the only gotcha in Colab, if you ever see an error message about it.

And on the last slide here are four books that I'd recommend. The very first book is about TensorFlow 2, and it will give you the low-level details; it's great, but only buy the second edition — the first edition teaches TensorFlow 1, which you do not need in order to learn TensorFlow 2. The second book barely mentions the word TensorFlow at all; it's the book by François Chollet, and it's outstanding — if you're new to deep learning, it's a perfectly good place to start. All the code in that second book will also work inside TensorFlow 2 by saying from tensorflow import keras; nothing else needs to change, it's all completely good. The next book is Keras in JavaScript — deep learning with JavaScript — which is great. And the fourth book is brand new. Thanks very much; I'll stop there. I'll be around afterwards for questions, right outside.

For those standing in the back, there are some more seats over on this side of the room. We'll be back in about ten minutes. See you soon.
