Duration: 39:19

Intro to machine learning on Google Cloud Platform

Sara Robinson
Developer Advocate at Google
2018 Google I/O
May 10, 2018, Mountain View, USA

About the speaker

Sara Robinson
Developer Advocate at Google

Sara is a developer advocate on the Google Cloud Platform team, focused on machine learning. She helps developers build awesome apps through demos, online content, and events. Before Google she was a developer advocate on the Firebase team. When she's not programming she can be found on a spin bike, listening to the Hamilton soundtrack, or finding the best ice cream in New York.


About the talk

There are revolutionary changes happening in hardware and software that are democratizing machine learning (ML). Whether you're new to ML or already an expert, Google Cloud Platform has a variety of tools for users. This session will start with the basics: using a pre-trained ML model with a single API call. It'll then look at building and training custom models with TensorFlow and Cloud ML Engine, and will end with a demo of AutoML Vision - a new tool for training a custom image classification model without writing model code.

Transcript

Hello everyone, welcome to Intro to Machine Learning on Google Cloud Platform. My name is Sara Robinson. I'm a developer advocate on the Google Cloud Platform team, and I focus on machine learning; you can find me on Twitter @SRobTweets. Let's dive right in by talking about what machine learning is. At a high level, machine learning involves teaching computers to recognize patterns in the same way that our brains do. For humans it's really easy to distinguish between a cat and a dog, but it's much more difficult to teach a machine to be able to do the same thing.

In this talk, I'm going to focus on what's called supervised learning. This is when, during training, you give your model labeled input, and we can frame almost any supervised learning problem this way: we provide labeled inputs to our model, and our model outputs a prediction. How much we know about how our model works is going to depend on the tool we choose for the job and the type of machine learning problem we're trying to solve. That was a high-level overview, but how do we actually get from input to prediction? That's going to depend on the type of machine learning problem we're trying to solve.

On one end of the spectrum, let's say you're solving a generic task that someone has already solved before; in that case you don't need to start from scratch. On the other end of the spectrum, if you're solving a custom task that's very specific to your dataset, then you're going to need to build and train your own model from scratch. More specifically, let's think about this in terms of image classification. Say we want to label this image as a picture of a cat. This is a really common machine learning task, and there are tons of models out there that exist to help us label images, so we can use one of those pre-trained models; we don't need to start from scratch.

But let's say, on the other hand, that this cat's name is Chloe. She's our cat, and we want to identify her apart from other cats across our entire image library. Then we're going to need to train a model from scratch on our own data so it can differentiate Chloe from other cats. Or, more specifically, let's say we want our model to return a bounding box showing where she is in the picture; we're also going to need to train a model on our own data for that. Let's also think about this in terms of a natural language processing problem.

Let's say we have the text from one of my tweets and I want to extract the parts of speech from that text. This is a really common natural language processing task, so I don't need to start from scratch; I can use an existing NLP model to help me do this. On the other hand, let's say I want to take the same tweet and have my model output custom tags: I want my model to know that this is a tweet about programming, and more specifically that it's a tweet about Google Cloud. Then I'd need to train my model on thousands of tweets about each of those things so it can generate these predictions.

Many people see the term machine learning and are a little bit scared off; they think it's something that's only for experts. If you look back about 60 years, that was definitely the case. This is a picture of the first neural network, invented in 1957, called the perceptron, a device that demonstrated an ability to identify different shapes. Back then, if you wanted to work on machine learning, you needed access to extensive academic and computing resources. But fast forward to today, and we can see that just in the last 5 or 10 years the number of products at Google that use machine learning has grown dramatically.

At Google we want to put machine learning into the hands of any developer and data scientist with a computer and a machine learning problem they want to solve. That's all of you. We don't think machine learning should be something only for experts. So think about how you're doing machine learning today: maybe you're using a framework like scikit-learn, XGBoost, Keras, or TensorFlow. Maybe you're writing your code in Jupyter notebooks. Maybe you're just experimenting with different types of models and building proofs of concept, or maybe you've already built a model and you're working on scaling it for production.

What I want you to take away from this talk is that no matter what your existing machine learning toolkit is, we've got something to help you on Google Cloud Platform, and that's what I'm going to talk to you about today. We have a whole spectrum of machine learning products on GCP. On the left-hand side we have products targeted towards application developers; to use these you need little to no machine learning experience. On the right-hand side of the spectrum we have products targeted more towards data scientists and machine learning practitioners.

The first set of products I'll talk about is our machine learning APIs, which give you access to pre-trained models with a single REST API request. As we move toward the middle, we have a new product I'm super excited about that we announced earlier this year in January, called AutoML, currently in alpha. The first AutoML product we've announced is AutoML Vision, which lets you build a custom image classification model trained on your own data without requiring you to write any of the model code.

As we move further to the right towards more custom models, if you want to build your own model in TensorFlow, we have a service called Cloud Machine Learning Engine that lets you train and serve your model at scale. Then, a couple of months ago, we announced an open source project called Kubeflow, which lets you run your machine learning workloads on Kubernetes. And finally, our most do-it-yourself solution: let's say you have a machine learning framework other than TensorFlow and you want to run it on GCP. You can do that using Google Compute Engine or Google Kubernetes Engine.

We don't have tons of time today, so we're going to focus on just three of these. Let's dive right in, starting with machine learning as an API. On Google Cloud Platform we have five APIs that give you access to pre-trained models to accomplish common machine learning tasks. They let you do things like analyzing images and videos, converting audio to text, analyzing that text further, and then translating that text. I'm going to show you just two of them today, so let's start with Cloud Vision. Here's everything the Vision API lets you do.

First, the Vision API provides label detection; this tells you, what is this a picture of? For this image it might return elephant, animal, etcetera. Web detection goes one step further: it searches the web for similar images and gives you labels based on what it finds. OCR stands for optical character recognition; this is how you find text in your images. It tells you where that text is and what language it's in. Logo detection identifies company logos in an image, landmark detection will find common landmarks, crop hints will suggest crop dimensions for your photos, and explicit content detection will tell you, is this image appropriate or not?

This is what a request to the Vision API looks like. Again, you don't need to know anything about how that pre-trained model works under the hood: you pass it either a URL to your image in Google Cloud Storage, or just the image bytes base64-encoded, along with the types of feature detection you want to run. It's just a REST API, so you can call it in any language you like. I have an example in Python here using our Google Cloud client library for Python: I create an ImageAnnotator client and then I run label detection on the image.
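Here's a minimal sketch of that call, assuming the google-cloud-vision client library; exact import paths vary by library version, and the bucket path is a placeholder:

```python
# A minimal sketch of label detection with the Cloud Vision client
# library. Import paths vary by version; the image URI is hypothetical.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

# Point at an image in Cloud Storage (raw bytes work too).
image = vision.types.Image()
image.source.image_uri = 'gs://my-bucket/selfie.jpg'

response = client.label_detection(image=image)
for label in response.label_annotations:
    print(label.description, label.score)
```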

I'm guessing many of you have heard about ML Kit, which was announced a couple of days ago. If you use ML Kit for Firebase, you can easily call the Vision API from your Android or iOS app, and this is an example of calling it in Swift. I don't like to get too far into a talk without showing a live demo, so if we could switch to the demo. What we have here is the product page for the Vision API, where we can upload images and see how the Vision API responds. I'm going to upload an image here. This is a selfie I took seeing Hamilton; I live in New York and was super excited to have scored tickets to Hamilton, and I wanted to see what the Vision API said about my selfie.

So let's see what we get back here. Live demos, you never know. There we go. Okay, so we can see that it's found my face in the image: it's able to identify where my face is and different features of my face, and it's also able to detect emotion, so we can see that joy is very likely here. I was super excited to be seeing Hamilton. What I didn't notice at first about this image is that it has some text in it.

When I took it, I didn't even notice that there was text there, but we can see that the API, using OCR, is able to extract that Playbill text from the image. And finally, in the browser we can see the entire JSON response we get back from the Vision API. This is a great way to try the API before you start writing any code: upload your image in the browser, see the type of response you get back and whether it's right for your app. I'll provide a link to this at the end. So that is the Vision API; if we can go back to the slides. Next I'll talk about the Natural Language API, which lets you analyze text with a single REST API request.

The first thing it lets you do is extract key entities from text. It also tells you whether your text is positive or negative, and if you want to get more into the linguistic details of your text you can use the syntax analysis method. Finally, the newest feature of this API is content classification, which classifies your text into one of over seven hundred different categories that we have available. Here's some Python code to call the Natural Language API; it looks pretty similar to the Vision API code on the previous page. Again, we don't need to know anything about how this model works: we just send it our text and get back the results from the model.
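A minimal sketch of entity and sentiment analysis with the google-cloud-language client library (import paths vary by library version; the sample text matches the demo below):

```python
# Entity and sentiment analysis with the Cloud Natural Language client
# library. Import paths vary by version.
from google.cloud import language

client = language.LanguageServiceClient()
document = language.types.Document(
    content='I loved Google I/O but the ML talk was just OK.',
    type=language.enums.Document.Type.PLAIN_TEXT)

sentiment = client.analyze_sentiment(document=document).document_sentiment
print('Sentiment score:', sentiment.score)  # ranges from -1 to 1

for entity in client.analyze_entities(document=document).entities:
    print(entity.name, entity.salience)
```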

Now let's move to a demo of the Natural Language API. Again, this is our product page for the Natural Language API, where we can enter text directly in the text box and see how the Natural Language API responds. I'm going to enter "I loved Google I/O but the ML talk was just OK", the kind of review I might find on a session, and see what it says.

Let's say I didn't want to go through all the session reviews, but I wanted to extract key entities and see what the sentiment was. We can see here it's extracted two entities, "Google I/O" and "ML talk", and we got a score for each entity. The score is a value from -1 to 1 that tells us overall how positive or negative the sentiment is for this entity. So "Google I/O" scored mostly positive, and we even get a link to the Wikipedia page for Google I/O, while "ML talk" got a neutral score right in the middle at zero, because it was just okay. We also get the aggregate sentiment for the sentence, and we can also look at the syntax in detail, see which words depend on other words, and get the parts of speech for all the words in our text.

If our text were longer than 20 words, we could also make use of the content categorization feature, which you can try right in the browser as well, and you can see a list of all the categories that are available for it. That is the Natural Language API; if we could go back to the slides. Next I'll talk briefly about some companies that are using these APIs in production. Giphy is a website that lets you search for GIFs and share them across the web.

They use the Vision API's optical character recognition feature to add search-by-text functionality to all of their GIFs; as you know, lots of GIFs have text in them. Before they used the Vision API you couldn't search by text, and now they've dramatically increased the accuracy of their search results. Hearst is a large publishing company, and they're using the Natural Language API's content categorization method across over 30 of their media properties to categorize their news articles. Descript is a new app that lets you transcribe meetings and interviews, and they're using the Speech API for that transcription.

All three of these companies are using just one API, but we've also seen a lot of examples of companies combining different machine learning APIs. Seenit is a crowdsourced video platform; they get thousands of videos uploaded daily, so they have no time to manually tag those videos. They're using a combination of Video Intelligence, Vision, Speech, and Natural Language to automatically tag all of the video content they're seeing on the platform. Maslo is a new mobile app; it's an audio journaling app, so you can enter your journal entries through audio.

They're using the Speech API to transcribe the audio, and then they're using the Natural Language API to extract key entities and give you some insights about your journal entries, and they're doing all that processing with Cloud Functions and storing the data in Firestore. So those are the machine learning APIs. All the products I've talked about so far have abstracted the model for you, and a lot of times when I present the APIs, people ask me: those APIs sound great, but what if I want to train them on my own custom data?

For that we have this new product I mentioned called AutoML. AutoML Vision is currently in alpha, so you need to be whitelisted to use it, and it lets you train an image classification model on your own image data. This is best seen with a demo, but let me introduce it first. Let's say I'm a meteorologist; maybe I work at a weather company, and I want to predict weather trends and flight plans from images of clouds. The obvious next question: can we use the cloud to analyze clouds? As I've learned, there are many, many different types of clouds, and they all indicate different weather patterns.

When I first started thinking about this, I thought maybe I should try the Vision API first and see what I get back. As humans, if we look at these two images, it's pretty obvious to us that these are completely different types of clouds. But we wouldn't expect the Vision API to know specifically what type of cloud these images are: it's trained across a wide variety of images to recognize all sorts of broad categories, but nothing as specific as the cloud types in these images. You can see that even for these images of obviously different clouds, we get back pretty similar labels: sky, cloud, blue, etc.

This is where AutoML Vision comes in really handy. It provides a UI to help us with every step of training a machine learning model, from importing the data, to labeling it, to training, to generating predictions using a REST API. The best way to see it is by again jumping to a demo. Here we have the UI for AutoML Vision; again, it takes us through every step of building our model. The first step here is importing our data.

To do that, we put our images in Google Cloud Storage, and then we just create a CSV where the first column is the URL of our image and the next column is the label or labels associated with that image; one image can have multiple labels. I've already done that for this example, so let me move over to the labeling tab. In this model I've got five different types of clouds that I'm classifying; you can see them here, along with how many images I have for each one. AutoML only requires 10 images per label to start training, but they recommend at least a hundred for high-quality predictions.
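To make that import format concrete, here's a hedged sketch of generating the CSV in Python; the bucket, file names, and labels are all made up:

```python
# Illustrative only: AutoML Vision's import CSV pairs a Cloud Storage
# image URI with one or more labels. Paths and labels here are made up.
import csv

rows = [
    ('gs://my-cloud-images/cloud1.jpg', 'cirrus'),
    ('gs://my-cloud-images/cloud2.jpg', 'altostratus'),
    ('gs://my-cloud-images/cloud3.jpg', 'cumulus'),
]

with open('import.csv', 'w') as f:
    csv.writer(f).writerows(rows)
```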

The next step is to review my image labels. I can see all my cloud pictures here and what label each one has; if one is incorrect, I can go in here and switch it. Now let's say I didn't have time to label my dataset: I have a massive image dataset and no time to label it. I can make use of the human labeling service, which gives you access to a set of in-house human labelers who will label your images for you, and in just a couple of days you'll get back a labeled image dataset.

The next step, once you've got your labeled images, is to train your model. You can choose between a base or advanced model; I'll talk about that more in a moment. To train it, all you do is press the train button, as simple as that. Once your model is trained you'll get an email, and the next thing you want to do is see how the model performed, using some common machine learning metrics. I'm not going to go into all of them here, but I do want to highlight the confusion matrix.

It may look confusing, hence the name confusion matrix, but let's take a look at what I mean. For a good confusion matrix, what we want to see is a strong diagonal from the top left. When we uploaded our images, AutoML automatically split them for us into training and testing sets: it took most of our images and used those to train the model, and it reserved a subset of our images to see how the model performed on images it had never seen before. What this is telling us is that for all of the altocumulus cloud images in our test set, our model was able to identify 76% of them correctly, which is pretty good.
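If you want to see what's behind this metric, here's a small illustration of building a confusion matrix with scikit-learn (AutoML computes this for you in the UI; the labels below are made up):

```python
# Rows are true labels, columns are predicted labels; a strong diagonal
# means most test images were classified as their true class.
from sklearn.metrics import confusion_matrix

y_true = ['altocumulus', 'cirrus', 'cumulus', 'altocumulus', 'cirrus']
y_pred = ['altocumulus', 'cirrus', 'altocumulus', 'altocumulus', 'cirrus']

print(confusion_matrix(y_true, y_pred,
                       labels=['altocumulus', 'cirrus', 'cumulus']))
```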

On the train tab you saw the base and advanced models, and I've actually trained both, so we're looking at the advanced model here. I can use the UI to see how different versions of my model performed and compare them. You would expect the advanced model to perform better across the board, so let's take a look. It looks like it did indeed perform a lot better for almost all of the categories here: we see a 23% increase for this one, 11% for this one. But wait, if we look at our altostratus images, it looks like our advanced model actually did worse on those.

What this is actually pointing out is that there may be some problems with our training images here. The advanced model has done a better job of identifying where there are potential problems with our training dataset, because remember, our model is only as good as the training data we give it. If you look here, we see that 14% of our altostratus clouds are being mislabeled as cumulus clouds, and if I click on this I can look and see where my model is getting confused. It turns out these are images that I could have potentially labeled incorrectly; I'm not an expert on actual clouds.

Some of these images have multiple types of clouds in them, so they actually are pretty confusing images. What the confusion matrix can help me do is identify where I need to go back and improve my training data. So that's the evaluate tab. The next part, the most fun part in my opinion, is generating predictions on new data. I'm going to take this image of a cirrus cloud and we're going to see what our model predicts. Again, it has never seen this image before; it wasn't used during training, and we'll see how it performs.

There we go: we can see that our model is 99% confident this is a cirrus cloud, which is pretty cool considering it's never seen this image before. The UI is one way to generate predictions once your model has been trained, and it's a good way to try it out right after training completes. But chances are you'll want to build an app that queries your trained model, and there are a couple of different ways to do this. I want to highlight the Vision API here. If you remember the Vision API request from a couple of slides back, you'll notice that this doesn't look much different.

All I need to add is this custom label detection parameter, and once my model trains I get an ID for that trained model, which only I have access to, or anybody I've shared my project with. So if I have an app, and let's say it just detected whether or not there's a cloud in an image, and I want to upgrade it to use a custom model, I don't have to change much at all about my app architecture: to point it at my custom model, I just need to modify the request JSON a little bit.
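As a rough sketch of the idea (the field names here are illustrative rather than the exact alpha request schema, and the project and model IDs are made up), the request body barely changes:

```python
# Schematic only: the same Vision-style request, pointed at a custom
# AutoML model. Feature and field names here are illustrative.
request_body = {
    'image': {'source': {'imageUri': 'gs://my-bucket/new-cloud.jpg'}},
    'features': [{'type': 'CUSTOM_LABEL_DETECTION'}],  # was LABEL_DETECTION
    'customLabelDetectionModels': ['projects/my-project/models/cloud-model-id'],
}
```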

So that is AutoML Vision; if we could go back to the slides. A little bit about some companies that are using AutoML Vision and have been part of the alpha. Disney is the first example: they trained a model on different Disney characters, product categories, and colors, and they're integrating that into their search engine to give users more accurate results. Urban Outfitters is a clothing company with a similar use case: they trained a model to recognize different types of clothing patterns and neckline styles, and they're using that to create a comprehensive set of product attributes, improve product recommendations, and give users better search results.

The last example is the Zoological Society of London. They've got cameras deployed all over the wild, and they built a model to automatically tag all the wildlife seen across those cameras so that they don't need somebody to manually review the footage. The two kinds of products I've talked about so far, the APIs and AutoML, have entirely abstracted the model from us; we don't know anything about how the model works under the hood. But let's say you've got a more custom prediction task that's specific to your dataset or use case.

One example: let's say you've just launched a new product at your company, you're seeing it posted a lot on social media, and you want to identify where in an image that product is located. You're going to need to train a model on your own data to do that. Or let's say you've got a lot of logs coming in and you want to analyze those logs to find anomalies in your application. These all require a custom model trained on your own data, and we've got two tools to help you do this: TensorFlow for building your models, and Machine Learning Engine for training and serving those models at scale.

From the beginning, the Google Brain team wanted everyone in the industry to be able to benefit from all of the machine learning products they were working on, so they made TensorFlow an open source project on GitHub, and the uptake has been phenomenal. It's the most popular machine learning project on GitHub; it has, I believe, over 90,000 GitHub stars (I actually need to update this slide), and it just crossed over 13 million downloads. Because it's open source, you can train and serve your TensorFlow models anywhere.

Once you build your TensorFlow model, you need to think about training it and then generating predictions at scale, also known as serving. If your app becomes a major hit and you're getting thousands of prediction requests per minute, you're going to need a way to serve that model at scale. Again, because TensorFlow is open source you can train and serve TensorFlow models anywhere, but since this talk is about Google Cloud Platform, I'm going to talk about our managed service for TensorFlow, called Cloud Machine Learning Engine. You can run distributed training with GPUs and TPUs, our custom hardware designed specifically to accelerate machine learning workloads.

You can also deploy your trained model to Machine Learning Engine and then use the ML Engine API to get scalable online and batch prediction for your model. One of the great things about ML Engine is that there's no lock-in: let's say I want to use ML Engine for training my model, but then I want to download my model and serve it somewhere else. That's really easy to do, and I can do the reverse as well. Now I'll talk about two different types of custom models built with TensorFlow and running on Cloud Machine Learning Engine.

The first type of custom model I want to talk about is transfer learning, which lets us update an existing trained model using our own data. Then I'm going to talk about training a model from scratch using only your data. Transfer learning is great if you need a custom model but you don't have quite enough training data. What it lets us do is utilize an existing pre-trained model, one that's been trained on hundreds of thousands, maybe millions, of images or text examples to do something similar to what we're trying to do. We then take the weights of that trained model and update the last couple of layers with our own data, so that the output generates predictions on our own data.

I wanted to build an end-to-end example showing how to train a model and then go all the way to serving it and building an app that generates predictions against it. I know a little bit of Swift, so I decided to build an iOS app that could detect breeds of pets. Here's a GIF of what the app looks like: you upload a picture of your pet, and it's able to detect where the pet is in the image and what breed it is. To build the model I used the TensorFlow Object Detection API, an open source library built on top of TensorFlow that lets you do object detection specifically, which is identifying a bounding box of where something is in an image.

I trained the model on Machine Learning Engine, I also served it on Machine Learning Engine, and then I used a couple of Firebase APIs to build a front end for my app. This is the full architecture diagram. The iOS client is actually a pretty thin client: what it's doing is uploading images to Cloud Storage for Firebase. Then I've got a Cloud Function listening on that bucket, so it's triggered any time an image is uploaded; it downloads the image, base64-encodes it, and sends it to Machine Learning Engine for prediction.

The prediction I get back is going to be a confidence value, a label, and bounding box data for where exactly the pet is in my image. I then write the new image with a box around it to Cloud Storage, and I store the metadata in Firestore. Here's an example showing how the front end works: this is a screenshot of Firestore, and we can see that whenever new image metadata is written to Firestore, I write the new image with a box around it to a Cloud Storage bucket.
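Here's a hedged sketch of the prediction call that function makes, using ML Engine online prediction via the Google API client library; the project, model name, and instance format are assumptions for illustration:

```python
# Illustrative sketch: call ML Engine online prediction with a
# base64-encoded image. Names and the instance format are assumptions.
import base64
from googleapiclient import discovery

def predict_pet(image_bytes):
    service = discovery.build('ml', 'v1')
    name = 'projects/my-project/models/pet_detector'  # hypothetical
    body = {'instances': [{'b64': base64.b64encode(image_bytes).decode()}]}
    response = service.projects().predict(name=name, body=body).execute()
    # Each prediction includes a label, a confidence value, and a box.
    return response['predictions']
```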

That was an example of transfer learning. Now let's say we have a custom task and we've got enough data to build and train a model entirely from scratch. To show you that, I built a model that predicts the price of wine: I wanted to see, given a wine's description and variety, can we predict the price? This is what an example input and prediction look like for the model. One reason this is well suited for machine learning is that I can't write rules to determine what the price of a wine should be.

I can't say that any Pinot Noir with vanilla in the description is going to be a mid-price wine, whereas a fruity Chardonnay is always going to be a cheap wine; I can't write rules to do that, so I wanted to see if I could build a machine learning model to extract insights from this data. As I mentioned, because I'm training this model from scratch and there's no existing model out there that does exactly the task I want to perform, I'm going to need a lot of data. I used Kaggle to get the data; Kaggle is a data science platform that's part of Google, and if you're getting into machine learning and looking for interesting datasets to play around with, I definitely recommend checking it out.

Kaggle has this wine reviews dataset I found, with data on a hundred fifty thousand different kinds of wine. For each wine it has a lot of data: the description, the points rating, the price, etc. I'm just using the description, the variety, and the price for this model. The next step is to choose the API I'm going to use to build my model, which TensorFlow API, and then the type of model I want to use.

I chose to use the tf.keras API here. Keras is an open source machine learning framework created by François Chollet, who works at Google, and TensorFlow includes a full implementation of it. Keras is a higher-level TensorFlow API that makes it easy for us to define the layers of our model; you can also use a lower-level TensorFlow API if you'd like more control over the different operations in your model. So my choices were the tf.keras API and a wide and deep model to solve this task. Wide and deep is basically a fancy way of saying I'm going to represent the inputs to my model in two different ways.

I'm going to use a linear model, that's the wide part, which is good at memorizing relationships between different inputs, and then I'm going to use a deep neural net, which is really good at generalizing based on data it hasn't seen before. Put another way: the inputs to our wide model are going to be sparse, wide vectors, and the inputs to our deep model are going to be dense embedding vectors. Let's take a look at what the code for the model looks like. The first step, all using the Keras functional API, is to define the wide model. To represent my text in my wide model, I'm going to use what's called a bag of words representation.

What this does is take all the words that occur across all my wine descriptions, and I choose the top ones, here the top 12,000; this is kind of a hyperparameter that you can tune. Each input to my model is going to be a 12,000-element vector of ones and zeros indicating whether or not each word in my vocabulary is present in that specific description. It doesn't take into account the order of words or the relationships between words; the wide model just takes into account whether or not each vocabulary word is present in that specific description.

So the input here is going to be a 12,000-element bag of words vector. The way I represent my variety in my wide model is pretty simple: I've got 40 different wine varieties in my dataset, so it's just going to be a 40-element one-hot vector, with each index in the vector corresponding to a different variety of wine. Then I concatenate these input layers, and the output of my model is just a float indicating the price of that wine. If I just wanted to use the wide model, I could take the wide model I have here and run training and evaluation on it using Keras, but I found I had better accuracy when I combined wide and deep; a sketch of the wide model follows.
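A minimal sketch of that wide model with the Keras functional API; the vocabulary and variety sizes match the talk, but the hidden layer size is an assumption:

```python
# Wide model sketch: bag-of-words text input plus one-hot variety input,
# concatenated and mapped to a single price output. Layer sizes are
# illustrative.
from tensorflow import keras

VOCAB_SIZE = 12000     # top words across all descriptions
NUM_VARIETIES = 40     # one-hot encoded wine varieties

bow_input = keras.layers.Input(shape=(VOCAB_SIZE,))
variety_input = keras.layers.Input(shape=(NUM_VARIETIES,))

merged = keras.layers.concatenate([bow_input, variety_input])
merged = keras.layers.Dense(256, activation='relu')(merged)
price = keras.layers.Dense(1)(merged)  # a single float: the price

wide_model = keras.Model(inputs=[bow_input, variety_input], outputs=price)
wide_model.compile(loss='mse', optimizer='adam', metrics=['mae'])
```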

The next step is to build my deep model. For my deep model I'm using what's called an embedding layer. What word embeddings let us do is define the relationships between words in vector space, so words that are similar to each other are closer together in vector space. There's lots of reading out there on word embeddings, so I'm not going to focus on the details here. The way it works is I can choose the dimensionality of that space; again, that's another hyperparameter of this model, so it's something I should tune and see what works best for my dataset.

In this case you can see I used an 8-dimensional embedding space. Obviously I can't feed the text directly into my model; I need to put it into a format the model can understand, so what I did was encode each word as an integer. This is what the input to my deep model is going to look like. The inputs all need to be the same length, but not all my descriptions are the same length, so I'm using 170 as the length.

None of my descriptions are longer than 170 words, and for those that are shorter, I'll just pad the vector with zeros at the end, as I did here. The output is going to be the same: we're still predicting the price. Using the Keras functional API, it's relatively straightforward to combine the outputs from both of these models and define my combined model, sketched below.
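Continuing the sketch above (the embedding dimension and sequence length come from the talk; everything else is illustrative):

```python
# Deep model sketch: integer-encoded descriptions, padded to 170 words,
# fed through an 8-dimensional embedding layer.
MAX_SEQ_LENGTH = 170
EMBEDDING_DIM = 8

deep_input = keras.layers.Input(shape=(MAX_SEQ_LENGTH,))
embedded = keras.layers.Embedding(VOCAB_SIZE, EMBEDDING_DIM,
                                  input_length=MAX_SEQ_LENGTH)(deep_input)
embedded = keras.layers.Flatten()(embedded)
deep_out = keras.layers.Dense(1)(embedded)
deep_model = keras.Model(inputs=deep_input, outputs=deep_out)

# Combine the wide and deep outputs into one price prediction.
combined = keras.layers.concatenate([wide_model.output, deep_model.output])
combined_out = keras.layers.Dense(1)(combined)
combined_model = keras.Model(wide_model.input + [deep_model.input],
                             combined_out)
combined_model.compile(loss='mse', optimizer='adam', metrics=['mae'])
```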

Now it's time to run training and evaluation on that model, and I chose to do that using Machine Learning Engine. The first step is to package my code and put it in Google Cloud Storage. This is pretty simple: I put my model code in a trainer directory, I put my wine data in a data directory, and I have a setup.py file defining the dependencies my model is going to use. To run the training job I can use gcloud, our Google Cloud CLI for interacting with a number of different Google Cloud products: I run the gcloud ml-engine jobs submit training command to kick off my training job, and then I save my trained model file, which is a binary file, to Google Cloud Storage. Next, let me demo generating some predictions with that model. For this demo I'm going to use a tool called Colab; I'll have a link to it at the end.

Colab is a cloud-hosted Jupyter notebook that comes with a free GPU. Here I've already trained my model and saved the model file in Google Cloud Storage, and now I'm going to generate some predictions to see how it performs on some raw data; let me make this a little bit bigger. Here we're just importing some libraries we're going to use and loading my model. The next step is to load the tokenizer for my model; this is just an index associating each word in my vocabulary with a number. Then I'm loading my variety encoder too.

Ignore that warning; now we're going to load in some raw data. What I have here is data on five different wines: the description, the variety, and the associated price for each, and I want to show you what the input looks like for each of them. Now we need to encode each of these into the wide and deep format our model is expecting. To do that we have this vocabulary lookup; I'm just printing a subset of it here. It's 12,000 elements long, for the top twelve thousand words in our vocabulary, and Keras has some utilities to help us extract those top words, sketched below.
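Those utilities look roughly like this (a sketch; the descriptions list is a placeholder for the raw text column):

```python
# Keras text preprocessing sketch: build the vocabulary, then produce
# both the wide (bag-of-words) and deep (padded integer) inputs.
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

descriptions = ['ripe plum and vanilla aromas',
                'crisp, fruity and light']  # placeholder raw text

tokenizer = Tokenizer(num_words=12000)
tokenizer.fit_on_texts(descriptions)

# Wide input: 12,000-element vectors of ones and zeros.
bow_matrix = tokenizer.texts_to_matrix(descriptions, mode='binary')

# Deep input: integer sequences, zero-padded to 170 words.
sequences = tokenizer.texts_to_sequences(descriptions)
padded_seqs = pad_sequences(sequences, maxlen=170, padding='post')
```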

Then my variety encoder is the 40-element array that looks like this, with each index corresponding to a different variety. Let's see what the input to our wide model looks like. The first thing is our text, and this is a bag of words vector: a 12,000-element vector with zeros and ones indicating the presence or absence of different words in our vocabulary. Then the variety matrix; I'm just printing it out for the first wine. I believe that first one was a Pinot Noir, which corresponds with this index here, and if we look up here we can confirm that yes, it was a Pinot Noir.

So that's the input to our wide model. Then for our deep model, we're just encoding all the words in our first description as integers and padding it with zeros, since this wasn't a very long description. Then all I need to do to generate predictions with my Keras model is call .predict(). What I'm doing here is looping through those predictions and comparing them to the actual prices. We can see it did a pretty good job on the first one, 46 compared to 48; this one is about $30 off, but it was still able to understand that this was a relatively higher-priced wine, and it did pretty well on the rest of these as well.
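Sketched with the names from the snippets above (the variety encodings and actual prices here are placeholders, not the notebook's real data):

```python
# Generate predictions with the combined model and compare each one to
# the actual price. bow_matrix and padded_seqs come from the
# preprocessing sketch above; variety_matrix is a placeholder one-hot.
import numpy as np

variety_matrix = np.zeros((len(bow_matrix), 40))
variety_matrix[:, 0] = 1  # pretend every wine is variety 0

predictions = combined_model.predict(
    [bow_matrix, variety_matrix, padded_seqs])

actual_prices = [48.0, 30.0]  # placeholder ground truth
for predicted, actual in zip(predictions, actual_prices):
    print('Predicted: ${:.0f}  Actual: ${:.0f}'.format(predicted[0], actual))
```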

I have a link to this at the end; Colab requires zero setup, so you can all go to this URL, enter your own wine descriptions, and see how the model performs. Let's go back to the slides. Now I'll talk briefly about a few companies that are using TensorFlow and Machine Learning Engine in production to build custom models. The first example is Rolls-Royce: they're using a custom TensorFlow model in their marine division to identify and track the objects that a vessel can encounter at sea.

Ocado is the UK's largest grocery delivery service. As you can imagine, they get tons and tons of customer support emails every day, and they built a custom model that takes those emails and predicts whether they require an urgent response, or maybe don't require a response at all, and they've been able to save a lot of customer support time using that model. And finally, Airbus has built a custom image classification model to identify things in satellite imagery. I know I covered a ton of different products.

I wanted to give you a summary of what resources all of these products require. These are just four resources I thought of that you need to solve a machine learning problem; there are probably more I don't have on here. First is training data: how much training data do you need to provide to train your model? Then, how much model code do you need to write? How much training or serving infrastructure do you need to provision? And overall, how long is the task going to take you? If you look at our machine learning APIs, the great thing about these is that you don't need any training data; you can just start generating predictions on one image.

You don't need to write any of the model code, you don't need to provision any training or serving infrastructure, and you can probably get started with these in less than a day, very little time. Moving to AutoML: the cool thing about AutoML is that you provide some of your own training data, because you're going to be creating a more customized model. It'll take a little more time, because you'll need to spend some time processing those images, uploading them to the cloud, and maybe labeling them, but you still don't need any model code or any training or serving infrastructure.

And finally, if we think about a custom model built with TensorFlow and running on Cloud ML Engine: you will need a lot of training data, you'll have to write a lot of the model code yourself, and you'll need to think about whether you want to run your training and serving jobs on-premise or use a managed service like Cloud ML Engine to do that for you. Obviously, this process will take a little bit more time. Finally, if you remember only three things from this presentation (I know I covered a lot): first, you can use a pre-trained API to accomplish common machine learning tasks like image analysis, natural language processing, or translation.

Second, if you want to build an image classification model trained on your own data, use AutoML Vision; I'm really interested to hear if you have a specific use case for AutoML Vision, so come find me after, I'll be in the cloud sandbox area. Last, for custom machine learning tasks, you can build a TensorFlow model with your own data and train and serve it on Machine Learning Engine. Here are a lot of great resources covering everything I talked about today.

I'll let all of you take a picture of that slide; the video will also be up afterwards, so you can grab it from there as well. And that's all I've got. Thank you.
