TensorFlow World 2019
October 31, 2019, Santa Clara, USA
Day 2 Keynote (TF World '19)

About speakers

Konstantinos Katsiapis
TensorFlow Extended Lead at Google
Anusha Ramesh
Product Manager at Google
Tony Jebara
VP of Engineering, Head of ML at Spotify
Mike Liang
Product Manager at Google
Ujval Kapasi
VP of Deep Learning Software at NVIDIA
Anna Roth
Principal Program Manager at Microsoft
Jared Duke
Software Engineer at Google
Sarah Sirajuddin
Software Engineer at Google
Sandeep Gupta
Product Manager at Google
Joseph Paul Cohen
Postdoctoral Fellow at University of Montreal
Chris Lattner
Senior Director and Distinguished Engineer at Google
Tatiana Shpeisman
Senior Engineering Manager at Google Brain
Ankur Narang
Vice President - AI and Data Technologies at Hike Messenger

Konstantinos (Gus) Katsiapis is the über tech lead of TensorFlow Extended (TFX), an end-to-end machine learning platform based on TensorFlow. He’s worked on Sibyl, a massive-scale machine learning system (precursor to TensorFlow) widely used at Google, and was an avid user of machine learning infrastructure while leading the mobile display ads quality machine learning team at Google. Previously, Gus gathered knowledge and experience at Amazon, Calian, the Ontario Ministry of Finance, Independent Electricity System Operator, and Computron. He holds a master’s degree in computer science with a specialization in artificial intelligence from Stanford University and a bachelor’s degree in mathematics, majoring in computer science and minoring in economics, from the University of Waterloo.

I am VP of engineering for personalization at Spotify, and I also lead the company-wide machine learning strategy. From 2015 to 2019, I led a team of machine learning experts that drove personalization at Netflix. We developed novel algorithms that were rolled out to 150 million members worldwide. Our advances include:
1) Artwork personalization that selects assets for each member to show why a title is relevant.
2) Lifetime-value models to align many algorithms toward increased member retention.
3) Search algorithms that use machine learning to return more engaging results.
4) Experimentation techniques like covariate adjustment to increase A/B testing power.
5) Marketing algorithms that decide which titles to advertise to attract new members.
6) Marketing budget algorithms that calculate spend levels across channels and regions daily.
7) Causal personalized messaging algorithms to decide when and how to message each member.
8) Ranking algorithms built with TensorFlow deep learning to predict which titles a user will like.
9) Combinatorial page algorithms that personalize each page's layout to improve discovery.
10) Promotion algorithms that explore billboards of new titles to find their target audience.
11) Content analytics algorithms that group titles into clusters and estimate their impact.

Specialties: Machine Intelligence, Big Data, Mobile Advertising, Digital Advertising

Sarah leads TensorFlow's mobile and embedded efforts (TensorFlow Lite). She is a longtime Googler; prior to this, she spent many years building Google's advertising systems and web search infrastructure.

Product management and technology strategy leadership, building and bringing to market AI-based solutions. Product manager for Google's open-source ML framework, TensorFlow, a powerful machine learning framework for solving challenging, high-impact problems, focused on usability, adoption, and enterprise use cases. Deep domain experience in healthcare, medical imaging, life sciences, and industrial imaging and analytics.

Interests: Machine Learning, Computer Vision, Industrial Automation, Android/Web/JavaScript Development, Cyber Security Research, Static/Dynamic Source Code Analysis

I am an engineering manager on the TensorFlow team, leading the work on building next-generation machine learning compiler infrastructure to deliver top-level performance and usability across a range of hardware platforms.

Dr. Ankur Narang has 25 years of experience, including senior technology R&D leadership and management positions across multinationals including IBM Research India, Sun (Oracle) Research Labs (Menlo Park, CA), Mentor Graphics, and Atrenta (Synopsys Inc). His technical experience spans multiple areas, including machine learning/AI, full-stack technology and product design and implementation, high-performance computing, parallelizing compilers, massive-scale simulations, and formal verification. He has more than 40 publications in international conferences, journals, and book chapters on machine learning/AI and high-performance computing, and holds 15 patents granted by the USPTO with 4 pending approval, in verticals including telecom analytics, oil and gas, electronic design automation, high-performance computing, and systems software. He completed a PhD in Computer Science from IIT Delhi in 2011, an MS in Engineering Management from Santa Clara University in 2000, and a B.Tech in Computer Science from IIT Delhi in 1994. He has held multiple positions at international conferences, including Industry Track Chair at ICDCN 2013 and 2014 and PC member at IPDPS, HiPC, SBAC-PAD, IndoSys, ICDCIT, and others. He has given invited talks at IWDS 2009 ("Getting Performance from Multicore: Challenges") and at the ICDCN 2014 Industry Track ("Exascale Computing: Challenges & Directions").

About the talk

O'Reilly and TensorFlow are teaming up to present the first TensorFlow World. It brings together the growing TensorFlow community to learn from each other and to explore new ideas, techniques, and approaches in deep learning and machine learning.

Presented by:

Konstantinos Katsiapis, Google

Anusha Ramesh, Google

Tony Jebara, Spotify

Mike Liang, Google

Ujval Kapasi, NVIDIA

Anna Roth, Microsoft

Jared Duke, Google

Sarah Sirajuddin, Google

Sandeep Gupta, Google

Joseph Paul Cohen, University of Montreal

Chris Lattner, Google

Tatiana Shpeisman, Google

Ankur Narang, Hike

Transcript

Hello, everyone. Good morning. I'm Gus Katsiapis, and I lead engineering for TFX, TensorFlow Extended. The discipline of software engineering has evolved over the last five-plus decades to a good level of maturity. If you think about it, this is both a blessing and a necessity, because our lives often depend on it. At the same time, the popularity of ML has been increasing rapidly over the last couple of decades, and over the last decade or so it has been used very actively in production settings. It is no longer uncommon for ML to power widely used applications that we use every day.

So, much as was the case for software engineering, the wide use of ML technology necessitates the evolution of the discipline from ML coding to ML engineering. As most of you know, to put ML in production you need a lot more than just a trainer. The trainer code in a production system is usually only 5 to 10% of the overall code, and the amount of time engineers spend on the trainer is often dwarfed by the time they spend preparing the data: ensuring it is of good quality, ensuring it is unbiased, and so on.

At the same time, research eventually makes its way into production, and ideally one would not need to change stacks in order to evolve an idea and put it into a product. So what is needed here is flexibility, robustness, and a consistent system that allows you to apply ML in a product, remembering that the ML code itself is a tiny piece of the puzzle. Here is a concrete example of the difference between ML coding and ML engineering: in this use case, it took about three weeks to build a model, and after about a year it still was not deployed to production.

Similar stories used to be common at Google as well, but we have made things noticeably easier over the past decade by building ML platforms like TFX. ML platforms at Google are not a new thing; we have been building Google-scale machine learning platforms for quite a while now. Sibyl, a precursor to TFX, started about 12 years ago, and a lot of the design, code, and best practices that began with Sibyl have been incorporated into the design of TFX.

TFX shares several core principles with Sibyl and also augments them along several important dimensions. This has made TFX the most widely used end-to-end ML platform at Alphabet, while also being available on-premises and on GCP. The vision of TFX is to provide an end-to-end ML platform for everyone; our goal is to ensure that we can translate the use of ML into continuously improving ML-powered applications. But let's discuss what it means to be an ML platform, and the various parts required to help us realize this vision.

Today we are going to tell you a little bit more about how we enable Google-scale ML engineering at Google, from best practices and libraries all the way to a full-fledged end-to-end ML platform. Let's start from the beginning. ML is hard, doing ML well is harder, and applying ML to production to power applications is harder still. We want to help others avoid the many, many pitfalls we have fallen into in the past, and to that end we have published papers, blog posts, and other material that capture a lot of our learnings and best practices.

Here are a few examples of our publications. They capture our collective lessons learned from more than a decade of applying ML at Google, and several of them, like the Rules of Machine Learning, are quite comprehensive. We won't have time to go into them today as part of this talk, obviously, but we encourage you to take a look when you get a chance. While best practices are great, communicating best practices alone is not sufficient: it does not scale, because it does not get applied in code.

So we want to capture our learnings and best practices in code, allow users to reuse those best practices, and at the same time give them the ability to pick and choose. To that end, we offer standard, interoperable libraries. Here are a few examples of the libraries we offer developers for different phases of machine learning. As you can see, we offer libraries for almost every step of your workflow, starting from data validation to feature transformations to analyzing the quality of a model, all the way to serving it in production.

We make transfer learning easy by providing TensorFlow Hub, and we provide ML Metadata for recording and retrieving metadata across the ML workflow. The best part about these libraries is that they are highly modular, which makes it easy to plug them into your existing ML infrastructure. But we have found that libraries are not enough within Alphabet, and we expect the same elsewhere: not all users need their wonderful flexibility, some might actually be confused by it, and many users prefer out-of-the-box solutions.

So we produce managed releases of our libraries, making sure they are nicely packaged and optimized, and, importantly, we also offer higher-level APIs, which come in the form of binaries, command-line tools, or containers. Libraries and binaries provide a lot of flexibility to all users, but this is not sufficient for an ML workflow with many types of artifacts. So we provide components, which interact through well-defined, strongly typed artifact APIs. The components also understand the context and environment in which they operate, and they can be interconnected with one another, with the components handling materialization of those artifacts.

Let's see some of this functionality by running TFX components in a notebook. As you can see here, you can run TFX components via Python. This example showcases a couple of components. The first one is ExampleGen: ExampleGen ingests data into a TFX pipeline, and it is typically the first component that you use. The second one is StatisticsGen, which computes statistics for visualization and example validation.

A component like StatisticsGen renders something like this in a notebook, showcasing statistics about your data and helping you detect anomalies. The benefit of running TFX components in a notebook is twofold. First, it makes it easy for users to onboard onto TFX: it helps you understand the components of TFX, how you connect them, and the order in which you connect them. Second, it helps you debug the steps of your ML workflow as you go through the notebook; a minimal sketch of that interactive flow is shown below.
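As a rough illustration, here is what running those two components interactively might look like. This is a minimal sketch, assuming a local CSV dataset under ./data; the exact import paths and arguments have shifted a bit across TFX releases.

from tfx.components import CsvExampleGen, StatisticsGen
from tfx.orchestration.experimental.interactive.interactive_context import InteractiveContext

context = InteractiveContext()

# ExampleGen ingests raw data into the pipeline as tf.train.Examples.
example_gen = CsvExampleGen(input_base='./data')
context.run(example_gen)

# StatisticsGen computes statistics for visualization and example validation.
statistics_gen = StatisticsGen(examples=example_gen.outputs['examples'])
context.run(statistics_gen)

# Render the computed statistics inline in the notebook.
context.show(statistics_gen.outputs['statistics'])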

From experience, though, we have learned that components alone are not sufficient for production ML. Manually orchestrating components can become cumbersome and, importantly, error-prone. Understanding the lineage of all the artifacts produced or consumed by those components is also often fundamental, both from a debugging perspective and, many times, from a compliance perspective. So we offer ways of creating pipelines that connect components together in a task-driven fashion. We have also found that, at scale, advanced use cases require the pipeline to react to its environment.

So we found that over time we needed data-driven pipelines as well. The interesting part is that the very same components we offer can operate in both a task-driven and a data-driven mode, thereby enabling more flexibility, and, most importantly, the artifact lineage is captured either way, which helps experimentation. Putting it all together, here is a canonical production end-to-end ML pipeline.

It starts with example generation and statistics generation to ensure the data is of good quality; it proceeds with transformations that augment the data in ways that make it easier to feed the model, and then with training the model. After we train the model, we verify that it is of good quality, and only after it meets a quality bar we are comfortable with do we actually push it to the serving system of choice, whether that is a server or a mobile application. Notably, the pipeline topology is fully customizable, so you can move things around as you please, and, importantly, if one of the out-of-the-box components we offer doesn't work for you, you can create a custom component with custom business logic. All of this lives under a single ML pipeline; a condensed sketch of such a pipeline follows.
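As a hedged sketch of that canonical topology, the snippet below wires the standard TFX components together. Paths and module files are placeholders, some configuration is omitted, and argument names vary a bit across TFX releases.

from tfx.components import (CsvExampleGen, StatisticsGen, SchemaGen,
                            ExampleValidator, Transform, Trainer,
                            Evaluator, Pusher)
from tfx.proto import pusher_pb2, trainer_pb2

example_gen = CsvExampleGen(input_base='./data')
statistics_gen = StatisticsGen(examples=example_gen.outputs['examples'])
schema_gen = SchemaGen(statistics=statistics_gen.outputs['statistics'])
example_validator = ExampleValidator(
    statistics=statistics_gen.outputs['statistics'],
    schema=schema_gen.outputs['schema'])
transform = Transform(
    examples=example_gen.outputs['examples'],
    schema=schema_gen.outputs['schema'],
    module_file='preprocessing.py')  # user-provided feature engineering
trainer = Trainer(
    examples=transform.outputs['transformed_examples'],
    transform_graph=transform.outputs['transform_graph'],
    schema=schema_gen.outputs['schema'],
    module_file='model.py',          # user-provided model code
    train_args=trainer_pb2.TrainArgs(num_steps=1000),
    eval_args=trainer_pb2.EvalArgs(num_steps=100))
evaluator = Evaluator(
    examples=example_gen.outputs['examples'],
    model=trainer.outputs['model'])
# The Pusher deploys the model only if the Evaluator "blesses" it.
pusher = Pusher(
    model=trainer.outputs['model'],
    model_blessing=evaluator.outputs['blessing'],
    push_destination=pusher_pb2.PushDestination(
        filesystem=pusher_pb2.PushDestination.Filesystem(
            base_directory='./serving_model')))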

Now, what does it mean to be an end-to-end platform? There are some key properties to it, and one of them is integration: we want to make sure all the components within the pipeline seamlessly interoperate with each other. We have found within Google that the value we add for our users grows as they move higher up the stack, from libraries up to components, and from components up to the pipeline itself. This is because operating at a higher level of abstraction allows us to give better robustness and support. Another important aspect of an ML platform is interoperability with the environment it operates in. Each of these platforms might be deployed in different environments, on-premises, on GCP, and so on, and we need to make sure we interact with the ecosystem you are operating in.

So TFX works with other parts of the ecosystem, like Apache Beam, Apache Spark, Flink, Airflow, and so on. Extensibility is also very important: components and extension points within an ML platform allow you, when something doesn't work out of the box, to customize it to your business needs. TFX is not a perfect platform, but we strive to collect feedback and improve it, so please give it to us.

Internally, TFX now powers several Alphabet companies; within Google it powers several of our most important products, which you are probably familiar with. It also integrates with the Cloud AI Platform, ML Engine, and Dataflow products, helping you realize your ML needs faster on GCP with automation that simplifies ML for you, and I encourage you to check those out. Today, all of TFX is available as an end-to-end solution. Twitter, who spoke at the keynote yesterday, recently published a fascinating blog post on how they rank tweets on their home timeline using TensorFlow Model Analysis and TensorFlow Hub for sharing word embeddings.

They evaluated several other technologies and frameworks and decided to go with the TensorFlow ecosystem for their production requirements. We also have several other partners already using TFX. I hope you will join us right after this talk to hear from Spotify about how they are using TFX for their production workflow needs. We also have another detailed talk later today, called "TFX: Production ML Pipelines with TensorFlow."

So we have two great talks, one by Spotify and the other with details on TFX; if you are interested in learning more, check out those two talks, or try TFX to get started. Thank you.

Excited to be here. My name is Tony Jebara, and today I'm going to talk to you about Spotify, where I work, and how we have been taking personalization and moving it onto TensorFlow. I'm the VP of Engineering and also the head of machine learning, and I'm going to describe our experience moving onto TensorFlow, the Google Cloud Platform, and Kubeflow, which has been an amazing experience for us and has really opened up a whole new world of possibilities.

Just a quick note: as Ben said, before I started at Spotify I was at Netflix, and just as today I'll talk about Spotify's homepage, at Netflix I worked on personalization algorithms and the home screen as well. So you may be thinking that sounds like a similar job; both involve entertainment, streaming, home screens, and personalization. But there are fundamental differences, and I learned about those differences recently, a couple of months ago.

The biggest difference is one of volume and scale, and I'll show you what I mean in just a second. If you look at movies versus music, or TV shows versus podcasts, you'll see a very different magnitude of scale. On the movie side there are about 158 million Netflix users; on the music side there are about 230 million Spotify users. That's already a different scale. The content is also a massively different scale problem: there are only about five thousand movies and TV shows on the Netflix service, whereas on Spotify we have about 50 million tracks and almost half a million podcasts.

So in terms of the amount of content we have to index, that's a huge scale difference. There's also content duration. Once you make a recommendation off the home screen on, say, Netflix, the user is going to consume that recommendation for 30 minutes for a TV show, maybe several seasons, sometimes two hours for a movie. On Spotify, it's only about three and a half minutes of consumption per track.

You don't replay the same movies very often, but you replay songs very often. So it's really a very different world of speed and scale, and we're getting much more granular data about the users: every three and a half minutes they change the track, listen to something else, engage differently with the service, and they're touching 50-million-plus pieces of content. That's very granular data, and it's one of the reasons we had to move to something like TensorFlow, to really be able to scale and do something high-speed and, in fact, real-time. So this is the Spotify home screen. How many people here use Spotify?

All right, over half of you. I'm not trying to sell Spotify to anyone; I'm just saying that many of you are familiar with this screen. This is the homepage, and it is basically driven by machine learning. Every month hundreds of millions of users see this home screen, and every day tens of millions of users see it; this is where you get to explore what we have to offer. It's a two-dimensional grid: every image that appears we call a card, and the cards are organized into rows we call shelves.

What we like to do is pick these cards and shelves from a massive library of possible choices and place the best ones for you at the top of your screen. So when you open up Spotify, we have a user profile and a home model to score all possible cards and all possible shelves and pack your screen with the best possible card-and-shelf combination for you. And we're doing this in real time, based off of your choices of music, your willingness to accept a recommendation, how long you play tracks, how long you listen to different podcasts; we have dozens and dozens of features updating in real time.

Every time you go back to the homepage, it is refreshed with the ideal cards and shelves for you. So we like to say there isn't a single Spotify homepage or Spotify experience; really, there are 230 million Spotifys, one for each user. So how did we do this in the past, up until our migration to GCP, TensorFlow, and Kubeflow? We wrote a lot of custom libraries and APIs to drive the machine learning algorithms behind this. The specific machine learning algorithm is a multi-armed bandit; if you've heard of that, it's trying to balance exploration and exploitation: learning which cards and shelves are good for you and scoring them, while also trying out some new cards and shelves where we don't yet know whether they're hidden gems for you.

We also have to employ counterfactual training and logged propensities in order to train the system while avoiding large-scale A/B tests and large-scale randomization. Before, this was all done with custom APIs and data libraries, and it had a lot of challenges: we'd always have to go back and rewrite code, and it was not easy to compare different choices of model underneath the multi-armed bandit, like logistic regression versus trees versus deep neural nets. A toy sketch of the exploration/exploitation idea appears below.
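To make the exploration/exploitation trade-off concrete, here is a toy epsilon-greedy bandit in Python. This is a didactic sketch, not Spotify's production algorithm, and the "shelves" framing is purely illustrative.

import numpy as np

rng = np.random.default_rng(0)
n_arms = 5                 # e.g., candidate shelves for a homepage slot
counts = np.zeros(n_arms)  # how many times each arm was shown
values = np.zeros(n_arms)  # running mean reward (e.g., engagement) per arm
epsilon = 0.1              # fraction of traffic reserved for exploration

def choose_arm():
    if rng.random() < epsilon:
        return int(rng.integers(n_arms))  # explore: try a random shelf
    return int(np.argmax(values))         # exploit: best-known shelf

def update(arm, reward):
    # Incremental update of the running mean reward for this arm.
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]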

All of that custom code rewriting made the system brittle and hard to innovate and iterate on, and when you pick something you want to roll out, you're also worried it may fail because of all this custom stitching. So we moved over to the TensorFlow ecosystem, and we said: let's move onto techniques like TensorFlow Estimators and TensorFlow Data Validation to avoid having to do all this custom work.

With TensorFlow Estimators, we can now build machine learning pipelines that let us quickly try a variety of models, things like logistic regression, boosted trees, and deep models, and train and develop them in a much faster, more iterative process. Kubeflow has also been super valuable, because it helps us manage the workload and accelerate the pace of experimentation and rollout; it has been great for automatically retraining, scaling, and speeding up our machine learning training. A sketch of swapping model families behind the common Estimator interface is shown below.
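As a rough illustration of comparing model families behind one interface, here is a hedged sketch using the Estimator APIs; the feature columns and input functions are placeholders, and boosted trees additionally require bucketized features.

import tensorflow as tf

numeric = [tf.feature_column.numeric_column('x', shape=(10,))]
bucketized = [tf.feature_column.bucketized_column(
    tf.feature_column.numeric_column('x'),
    boundaries=[0.0, 1.0, 2.0])]

linear = tf.estimator.LinearClassifier(feature_columns=numeric)
deep = tf.estimator.DNNClassifier(hidden_units=[256, 64],
                                  feature_columns=numeric)
trees = tf.estimator.BoostedTreesClassifier(feature_columns=bucketized,
                                            n_batches_per_layer=100)

# All three expose the same train/evaluate interface, e.g.:
# model.train(input_fn=train_input_fn)
# metrics = model.evaluate(input_fn=eval_input_fn)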

Another thing we rely on heavily is TensorFlow Data Validation, which is another part of the TFX offering. One key thing we have to do is find bugs in our data pipelines and machine learning pipelines while we're developing them and rolling them out; we want to catch data issues as quickly as possible. With TFDV we can quickly find missing data or data inconsistencies in our pipeline, and we have a dashboard that quickly plots the distribution of any feature and the counts across different data sets. We can also monitor things like how much a user listens on the service and what their preferences are.

Looking at those distributions, we caught a bug like the one on the left, which basically showed that in our training data the premium-tier samples were missing from the training pipeline, while on the validation side the free-tier samples were missing from the validation pipeline. That's alarming from a machine learning perspective, but we caught it quickly: we were able to set up triggers, alarms, and alerts, build dashboards, and look at these distributions daily, so the machine learning engineers don't have to worry about the data pipelines underneath the system. A minimal sketch of this kind of check follows.
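Here is a minimal TFDV sketch of the kind of cross-split check described above; the paths are hypothetical, and the data is assumed to be TFRecord files of tf.Examples.

import tensorflow_data_validation as tfdv

train_stats = tfdv.generate_statistics_from_tfrecord(
    data_location='gs://my-bucket/train/*')  # hypothetical path
eval_stats = tfdv.generate_statistics_from_tfrecord(
    data_location='gs://my-bucket/eval/*')

# Infer a schema from the training data, then flag anomalies such as a
# feature slice (e.g., a subscription tier) missing from one split.
schema = tfdv.infer_schema(train_stats)
anomalies = tfdv.validate_statistics(eval_stats, schema=schema)
tfdv.display_anomalies(anomalies)

# Side-by-side distributions across splits, which is how a missing
# premium-tier slice would show up visually.
tfdv.visualize_statistics(lhs_statistics=train_stats,
                          rhs_statistics=eval_stats,
                          lhs_name='TRAIN', rhs_name='EVAL')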

So now we have a Spotify "paved path": a machine learning infrastructure based on Google Cloud, Kubeflow, and TensorFlow, and it has achieved significant lift over baseline systems and popularity-based methods. We're just scratching the surface; we want to do many more sophisticated kinds of machine learning exploration. And we really see this as an investment in machine learning engineering productivity: we don't want machine learning engineers to spend tons of time fixing custom infrastructure, catching silly bugs, updating libraries, and having to learn bespoke platforms.

Instead, we want them to work on a great lingua-franca platform like GCP, Kubeflow, and TensorFlow, and really think about machine learning and the user experience, building better entertainment for the world. That's what we want to enable, not building custom machine learning infrastructure. If you're excited about working on a great platform with a great future ahead of it, like TFX, Google Cloud, and Kubeflow, while also working on really deep problems around entertainment and what makes people excited and engaged with a service, with music, audio, and podcasts, then you can get the best of both worlds.

We're hiring; please look at these links and come work with us. Thank you so much.

Good morning, everyone. My name is Mike; I'm one of the product managers on the TensorFlow team, and today I'd like to share with you something about TensorFlow Hub. We've seen some amazing breakthroughs in what machine learning can do over the past few years, and throughout this conference you've heard a lot about the services and tools that have been built on top of them.

Machines are becoming capable of doing a myriad of amazing things, from vision to speech to natural language processing, and with TensorFlow, machine learning experts and scientists are able to combine data, algorithms, and computational power to train models that are very proficient at a variety of tasks. But if your focus is on solving business problems or building your application, how can you quickly use machine learning in your solution? That is what TensorFlow Hub is for.

TensorFlow Hub is a repository of pre-trained, ready-to-use models to help you solve novel business problems. It has a comprehensive collection of models across the TensorFlow ecosystem: you can find state-of-the-art research models on TensorFlow Hub, as well as models you can retrain using transfer learning, and we've recently added a lot of new models that you can deploy straight to production, from cloud to the edge, through TensorFlow Lite or TensorFlow.js. We're getting many contributions from the community as well.

TensorFlow Hub's rich repository of models covers a wide range of machine learning problems. For example, for image-related tasks, we have a variety of models for object detection, image classification, and automatic image augmentation, plus some new things like image generation for style transfer. For text-related tasks, we have some of the state-of-the-art models out there, like BERT, ALBERT, and the Universal Sentence Encoders, and you heard just yesterday about some of the things machines can do with BERT. These encoders can support a wide range of natural language understanding tasks, such as question answering, text classification, and sentiment analysis.

There are also video-related models, so if you want to do gesture recognition you can use some of the models here, or even video generation. And we've recently completely upgraded our front-end interface, so it's a lot easier to use, and many of these models can easily be found through search engines. We've invested a lot of energy in making the models on TensorFlow Hub easily reusable and composable into new models, where you can bring your own data and, through transfer learning, improve on the power of those models.

With one line of code, you can bring these models right into TensorFlow 2, and using the high-level Keras APIs or the lower-level APIs, you can go and retrain those models. All of these models can also be deployed straight into machine learning pipelines like TFX, which you heard about earlier today. Recently, we've added support for models that are ready to deploy: these pre-trained models have been prepared for a wide range of environments across the TensorFlow ecosystem.

So if you want to work in a web or Node-based environment, you can deploy them to TensorFlow.js, and if you are working with mobile or embedded devices, you can deploy some of these models to TensorFlow Lite. On TensorFlow Hub you can also discover ready-to-use models for Coral Edge TPU devices; these devices combine TensorFlow Lite models with efficient accelerators, which allows companies to create products that run inference right on the edge, and you can learn more about that at coral.ai. Here is an example of how you can use TensorFlow Hub to do fast artistic style transfer that works for an arbitrary painting style with a generative model.

Let's say you had an image of a beautiful yellow Labrador and you wanted to see what that would look like painted in the style of Kandinsky. With one line of code, you can load one of these pre-trained style transfer models from the Magenta team at Google, apply it to your content and style images, and get a new stylized image; you can learn more in the tutorial at the link below. A hedged sketch of this flow is shown next.
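This sketch uses the published Magenta arbitrary-image-stylization module on TF Hub; the image file names are hypothetical.

import tensorflow as tf
import tensorflow_hub as hub

# One line to load the pre-trained style transfer model from TF Hub.
hub_module = hub.load(
    'https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2')

def read_image(path):
    img = tf.io.decode_image(tf.io.read_file(path), dtype=tf.float32)
    return img[tf.newaxis, ...]  # add a batch dimension

content = read_image('labrador.jpg')   # hypothetical content image
style = read_image('kandinsky.jpg')    # hypothetical style image
stylized = hub_module(tf.constant(content), tf.constant(style))[0]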

Or let's say you wanted to train a new transfer-learned text classifier, such as predicting whether a movie review is positive or negative. It would normally take a lot of time and data to make that work well, but you can pull in a number of pre-trained text models, each with one line of code, incorporate them into TensorFlow 2, and, using the standard Keras APIs, retrain on your new dataset just like that; a sketch follows.
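A sketch of that transfer-learning recipe with a published TF Hub text embedding; the dataset wiring is omitted, and the training call is left commented out.

import tensorflow as tf
import tensorflow_hub as hub

# One line pulls in a pre-trained text embedding, fine-tunable end to end.
hub_layer = hub.KerasLayer(
    'https://tfhub.dev/google/tf2-preview/nnlm-en-dim50/1',
    input_shape=[], dtype=tf.string, trainable=True)

model = tf.keras.Sequential([
    hub_layer,                          # pre-trained embedding
    tf.keras.layers.Dense(16, activation='relu'),
    tf.keras.layers.Dense(1),           # positive/negative logit
])
model.compile(optimizer='adam',
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=['accuracy'])
# model.fit(train_ds, validation_data=val_ds, epochs=5)  # your review data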

We've also integrated an interactive model visualizer, in beta for some of the models; it allows you to immediately preview what the model will do, running right in the web page or in a mobile playground app. For example, with a model covering a wide range of fungi from a mushroom atlas project, you can directly drag an image onto the site, and the model runs in real time and shows you the results, such as which mushrooms are in the image; then you can click through to get more information. Many of the TensorFlow Hub models also have Colabs, so you can play with these models and the code right inside the browser, powered by Google infrastructure through Colab.

In fact, the Google machine learning fairness team has also built Colab notebooks, so you can pull text and image models and other things straight into their platform and assess whether there are potential biases on a standard set of tasks; you can come by and hear more about that. TensorFlow Hub is also powered by the community. When we launched TensorFlow Hub last year, we were sharing some of the state-of-the-art models from DeepMind and Google, but now a wide range of publishers share their models from a diverse set of areas, such as Microsoft AI for Earth and others.

These models can be used for many different tasks, from studying wildlife populations through camera traps to automatic visual defect detection in industry, and the community has also generated a wide range of data through the Open Images Extended datasets, so we can get an even richer set of ready-to-use models across many different specific data domains. With hundreds of models that are pre-trained and ready to use, you can use TensorFlow Hub to immediately begin applying machine learning to solve business problems.

So I hope you can come by our demo booth or go to tfhub.dev, and I'll see you there. Thank you.

The TensorFlow team, with TF 2.0, has solved a hard problem, which is to make it easy for you to express your ideas and debug them in TensorFlow. This is a big step, but there are additional challenges in obtaining the best results for your research or your product designs, and I'd like to talk about how NVIDIA is solving three of these challenges.

The first is simple acceleration; the second is scaling to large clusters; and the third is providing code for every step of the deep learning workflow. One of the ingredients of the recent success of deep learning has been the use of GPUs to provide the necessary raw compute horsepower. This compute is like oxygen for new ideas and applications in the field of AI. So we designed and shipped Tensor Cores in our Volta and Turing GPUs to provide an order of magnitude more compute capability than was previously available.

We built libraries such as cuDNN to ensure that all the important math functions inside of TF can run on top of Tensor Cores, and we update these regularly as new algorithms are invented. We worked with Google to provide a simple API so that, from your TensorFlow script, you can easily activate these routines and train with mixed precision on top of Tensor Cores, getting speedups for your training of, for instance, 2x to 3x, which helps you iterate faster on your research and, within a given budget of time, perhaps get better results. One way this can be switched on is sketched below.
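A minimal sketch of enabling mixed precision from a TF2 Keras script; note the mixed-precision API lived under an experimental namespace in earlier releases (and a graph-rewrite path exists for Estimator code), so details vary by version.

import tensorflow as tf

# Compute in float16 on Tensor Cores while keeping float32 variables.
tf.keras.mixed_precision.set_global_policy('mixed_float16')

model = tf.keras.Sequential([
    tf.keras.layers.Dense(1024, activation='relu'),
    tf.keras.layers.Dense(10),
    # Keep the final softmax in float32 for numeric stability.
    tf.keras.layers.Activation('softmax', dtype='float32'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')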

Once you have a trained model, we provide a simple API inside of TensorFlow to activate TensorRT, so you can get drastically faster latency for serving your predictions, which lets you deploy perhaps more sophisticated models or pipelines than you would otherwise be able to; a sketch follows.
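A sketch of activating TensorRT for serving via TF-TRT, assuming a TensorFlow 2.x build with TensorRT support; the paths are placeholders.

from tensorflow.python.compiler.tensorrt import trt_convert as trt

converter = trt.TrtGraphConverterV2(input_saved_model_dir='saved_model_dir')
converter.convert()                # replace supported subgraphs with TRT engines
converter.save('saved_model_trt')  # serve this copy for lower latency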

But optimizing the performance of a single GPU is not enough. Let me give an example. Google last year released a model called BERT; as Jeff explained yesterday, this model blew away the accuracy on a variety of language tasks of any model previous to it. But on a single GPU it takes months to train, and even on a server with eight GPUs it takes more than a week. If you can train with 32 servers, or 256 GPUs, training can complete in a matter of hours. However, training at such large scales imposes several new challenges at every level of the system.
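For a flavor of what multi-machine training looks like in TensorFlow itself, here is a minimal synchronous data-parallel sketch with tf.distribute; cluster wiring via the TF_CONFIG environment variable and the model/dataset are placeholders, and this is illustrative rather than NVIDIA's specific recipe.

import tensorflow as tf

# Synchronous all-reduce training across workers; each worker runs this
# same program, with its role defined by TF_CONFIG.
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()

with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
    model.compile(
        optimizer='adam',
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))

# model.fit(dataset, epochs=10)  # each worker trains on a shard of the data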

If you don't properly co-design the hardware and software and precisely tune them, then as you add more compute you will not get a commensurate increase in performance. I think NVIDIA is uniquely suited to solve some of these challenges, because we're building hardware from the level of the GPU to servers to supercomputers, and we're working on challenges at every level, in hardware design, software design, system design, and at the boundaries between them. The culmination of a bunch of our work on this is the DGX SuperPOD, and to put its capabilities in visceral terms:

a team at NVIDIA was recently able, on the DGX SuperPOD as part of Project Megatron, to train the largest language model ever, at more than 8 billion parameters, 24 times larger than BERT. Another contribution NVIDIA is making, and something we continue to work on, is providing reliable code that anyone, from individuals to enterprises, can build on top of. NVIDIA does the hard work of optimizing, documenting, qualifying, packaging, publishing, and maintaining code for a variety of models and use cases,

for every step of the deep learning workflow, from research to production. We're curating this code and making it available to everyone, both at ngc.nvidia.com and in other places where developers frequent, such as GitHub and TF Hub, which you just heard about. So I hope that in this short time I was able to convey some of the problems and challenges NVIDIA is working on: making available to the TensorFlow community, along with Google, simple APIs for acceleration; solving scaling challenges by building DGX

SuperPODs; and germinating code that anyone can build on top of, for the entire deep learning workflow. Thank you for your time; I hope you enjoy the rest of the conference.

The world is full of experts: pathologists who diagnose diseases; construction workers who know that if a certain tube is above 40% capacity, you have to turn that machine off, like right now; people working in support who know how to triage tickets. One of the exciting things about the past few years is that it has become

increasingly easy for people who want to take something they know how to do and teach it to a machine, and the big dream is that anybody should be able to do that. This is what I've spent my time on in the past few years: I worked on the team that launched Cognitive Services, and I've spent the past few years working on Custom Vision AI, image classifiers and object detectors. It really has never been easier to build machine learning models, which is great for all of us here at TensorFlow World: computational techniques have gotten faster, transfer learning has become easier to use, there's access to compute in the cloud, and the educational material has never been better.

One of my hobbies is to browse the fast.ai forums just to see what learners are building, and it's completely inspiring. That being said, it's really hard to build a machine learning model, and in particular it's hard to build a robust, production-ready model. I've worked with hundreds, maybe thousands, of customers who are trying to automate some particular task, and projects fail. It's really easy to build your first model; sometimes that's actually kind of a trick: you can get something astonishingly good in a couple of minutes with data you pull off the web, but going from that to something robust enough to use in a real environment is extremely tough.

It's actually hard to transfer your knowledge to a machine. This might seem trite, but when people first train object detectors, a lot of them don't put bounding boxes around every single object, or they label too sparingly.

For example, one customer in Seattle who liked the Seahawks wanted to see if they could build a Seahawks detector. They put bounding boxes around a bunch of football players and discovered that the model had actually learned to detect football players in general, as opposed to Seahawks versus players from another team, because the model didn't have the semantic knowledge that the user had. That's the kind of thing you can document away; you learn it in your first hour or so. But it's intrinsic to the way we train models today: when you hand them to a computer,

you have to give the machine data that represents the real-world distribution, and that's not how you teach people. This trips people up a lot. Building a data set is really hard to do, so let me walk through a hypothetical case. A customer really wanted to recognize when something uploaded to their online photo store might contain personally identifiable information; for example, you uploaded a photo of a credit card or a photo of your passport.

So you start off on day one: you use a search API and grab a bunch of images of credit cards off the web, you run your evaluations, and it looks like you have about a 1% false positive rate. What's not so good: with a million user images, running this overnight means on the order of ten thousand false positives. And when they tried it on real user data, the actual false positive rate was, as you might expect, much, much higher. All right, round two: now we add some negative classes.

We want examples of other kinds of documents, of non-credit-card things, et cetera, et cetera. It's still okay; we're on round two, day two of the project, and this still feels like good progress. Stage three of the experience of building a usable model is: all right, let's collect some more data, label some more data, which is expensive. Something I thought was going to take me a day in the first round is now on day seven, with a bunch of labelers I'm trying to get to work, plus the logistics of labeling large amounts of data.

And it still won't work. The good news was that, at this point, someone said: let's try some really simple techniques, like saliency visualization. And it turned out that when people take photos on their phone of something like a document, they're usually holding it, so what they had basically built was a classifier that recognized: are you holding something that is yours in the picture? This kind of thing happens all the time.

There's a Nature paper from 2017 on dermatology images where, for instance, having a ruler in the image of a mole correlated with the mole being cancerous, and I think Winkler et al. later showed how surgical skin markings raised predictions even for people who didn't have cancerous moles. Holding out the right distribution of data is extremely hard to do, even for experts, and even harder for somebody who's just getting started. The reason is that real-world environments are incredibly complex.

This is where projects fail. There are two main problems. Most people want to actually do something in a real-world environment, with a camera, a microphone, or a website where they get user input, and unconstrained input is incredibly challenging to build data for; for example, we had a customer who built a system with multiple sensors, where you can't expect people to provide data covering the whole system. It's also hard to handle rare events.

If I want to recognize explosions, explosions are rare; there aren't that many people with hand tattoos, but you still want your model to work in those cases. And look, there are a lot of techniques to do this better, but it's still really hard to build a model. Another important problem that comes up for any customer or any person trying to build a high-quality model is that aggregate statistics hide failure conditions.

You might have this beautiful PR curve that looks really great, and then it turns out you don't actually have a data set covering all the slices your model will see. If you're doing speech, you may not have actually tested, say, a woman with an accent; subclasses become extremely important, and it becomes very expensive and difficult to go and figure out where your model is failing. And look, there are sampling techniques and ways of probing models that you can use, but it's super challenging for a beginner to figure out what their problems might be, and it's hard even for experts.

You see issues like this all the time. Finally, once you have a model, it can be tough to figure out what to do with it. Most of the programs that you use don't consume probabilistic outputs, and in the real world, what does it mean for something to be 70% likely? If you've trained models before, it might be more obvious to you, but for many people it's hard to figure out what actions they should take. And so, probably nothing I've said today sounds novel to the folks in this room; many of you have

gone through all of these challenges before: you build a model, redo the data set fifty times, and finally get it to work. My boss used to say the problems are inspiring, and for me there is no problem more inspiring than figuring out how we can help anybody who wants to automate some task, with a camera or anything else, train a machine and get a robust, production-ready model.

Welcome, everyone. I'm Sarah, the engineering lead for TensorFlow Lite, and I'm really happy to be here talking to you about on-device machine learning.

I'm really excited to share with you our progress and all the latest updates. First, TensorFlow Lite is our production-ready framework for deploying machine learning on mobile and embedded devices. It is cross-platform, so it can be used for deployment on Android, iOS, and Linux-based systems, as well as several other platforms. Let's talk about the need for TensorFlow Lite and why we built an on-device machine learning solution. Simply put, there is now a huge demand for doing machine learning on the edge.

This is largely driven by the need to build user experiences that require low latency, that are robust to a lack of network connectivity, and that preserve user privacy. All of these are easier when you're doing machine learning directly on the device, and that's why we released TensorFlow Lite late in 2017. This shows our journey since then: we've made a ton of improvements across the board, in terms of the ops we support, performance, usability tools that let you optimize your models, the number of languages our APIs support, and the number of platforms TensorFlow Lite runs on.

TensorFlow Lite is now deployed on more than 3 billion devices globally. Many of Google's own largest apps are using it, as are apps from several external companies. This is a sampling of apps that use TensorFlow Lite: Google Photos, Gboard, YouTube, and the Assistant, as well as leading companies like Hike, Uber, and more. So what is TensorFlow Lite being used for? We find that developers use it for popular use cases around text, image, and speech, but we're also

seeing lots of emerging new use cases come up in areas like audio and content generation. That was a quick introduction to TensorFlow Lite; in the rest of this talk, we're going to focus on our latest updates and highlights, and for more details, please check out the TensorFlow Lite talk later in the day. Today, I'm really excited to announce a suite of tools that will make it really easy for developers to get started with TensorFlow Lite. First, we're introducing a new support library, which makes it really easy to pre-process and transform your data to make it ready for inference with a machine learning model.

for inferencing with a machine learning model. Finished look at an example of the steps that are Developer typically goes through to use a model in their app. Once they have converted into the tensorflow Lite model format. And that's a they're doing image classification. So then they would likely need to write code which looks something like this as you can see. It is a lot of cold for loading transforming and using the data. With the new support Library the previous wall of code that I showed can be reduced significantly to this just a single line of code is needed for

With the new support library, the previous wall of code can be reduced significantly: just a single line of code is needed for each of loading, transforming, and using the resulting classifications. Next up, we are introducing model metadata. Model authors can now provide a metadata spec when they are creating and converting models, which makes it easier for users of the model to understand what the model does and to use it in production. Looking at an example again: the metadata descriptor here provides additional information about what the model does, the expected format of the input, and the meaning of the outputs. We've also made our model repository much richer.

We've added several new models across several different domains, all of them pre-converted into the TensorFlow Lite model format, so you can download them and use them right away. Having a repository of ready-to-use models is great for getting started and trying things out; however, most developers will need to customize these models in some way, which is why we are releasing a set of APIs that you can use to retrain these models on your own data and then use them in your app. We've also heard from developers

that we need to provide better and more tutorials and examples, so today we're releasing several full examples that show not only how to use a model, but how to write an end-to-end app. These examples have been written for several platforms: Android, iOS, Raspberry Pi, and even Edge TPUs. And lastly, I'm super happy to announce that we have just launched a brand-new course on how to use TensorFlow Lite on Udacity. All of these are live right now; please check them out and give us feedback. And this brings me to another announcement that I'm very excited about.

We have worked with researchers at Google to bring MobileBERT to developers through TensorFlow Lite. BERT is a method of pre-training language representations that gets really fantastic results on a wide variety of natural language processing tasks; it is used extensively to understand natural text on the web, and it is having a transformational impact broadly across the industry. The model we're releasing is up to 4.4 times faster than standard BERT while being 4 times smaller, with no loss in accuracy,

and the model is 700 megabytes in size, so it's usable even on lower-end phones. It's available on our site, ready for use right now. We're really excited about the new use cases this model will unlock, and to show you all how cool this technology really is, we have a demo of MobileBERT running live on the phone; I'll invite Jared up to show you.

Thanks, Sarah. So, as we've heard, BERT can be used for a number of language-related tasks, but today I want to demonstrate it for question answering: that is, given a body of text and a question about its content, BERT can find the answer to the question in the text.

So let's take it for a spin. We have an app here which has a number of pre-selected Wikipedia snippets, and note that the model was not trained on any of these texts. Now, I'm a space geek, so let's dig into the Apollo program. All right, let's start with an easy question: what did Kennedy want to achieve with the Apollo program? "Landing a man on the Moon and returning him safely to the Earth." Which program came after Mercury but before Apollo? "Project Gemini." Not bad.

All right, BERT, you think you're so smart: where are all the aliens? "The Moon." Mystery solved. Now, all jokes aside, you may not have noticed that this phone is running in airplane mode: there's no connection to a server, so everything from the speech recognition to the BERT model to the text-to-speech was running on-device, which is pretty neat. Next, I'd like to talk about some improvements and investments we are making in the TensorFlow Lite ecosystem, focused on improving model deployment.

Let's start with performance. A key goal of TensorFlow Lite is to make your models run as fast as possible across mobile CPUs, GPUs, DSPs, and Edge TPUs, and we continue investing across all of these fronts. We've made significant improvements: we've added OpenCL support to improve GPU acceleration, plus support for Android Q NNAPI ops and features, and a Qualcomm DSP delegate targeting mid- and low-tier devices will be available for use in the coming weeks. We've also improved our performance and benchmarking tools, to better assist with model- and op-level profiling and with identifying the optimal deployment configuration.

To highlight some of these improvements, let's take a look at our performance just six months ago at Google I/O, using MobileNet for classification inference, and compare that with the performance today: this represents a massive reduction in latency, and you can expect this across a wide range of models and devices, low-end and high-end. Just bring the latest version of TensorFlow Lite into your app, and you can see these improvements today. Digging a little more into these numbers: floating-point CPU execution is our default path, and it represents a pretty solid baseline.

Enabling quantization, now easier with post-training quantization, provides roughly three times faster inference. GPU execution provides yet more of a speedup, about six times faster than our CPU baseline. And finally, for absolute peak performance, there is the Pixel 4's Neural Core, accessible via the NNAPI TensorFlow Lite delegate; this is the kind of specialized accelerator available in more and more of the latest devices, enabling capabilities and use cases that just a short time ago were thought impossible on mobile devices.

And we haven't stopped there. Seamless and more robust model conversion has been a major priority for the team, and I'd like to give an update on a completely new TensorFlow Lite model conversion pipeline. This new converter was built from the ground up to provide more helpful error messages when conversion fails, and to support control flow and more advanced models like BERT, DeepSpeech V2, Mask R-CNN, and more. I'm excited to announce that the new converter is available in beta and will be available more generally soon; a short sketch of conversion follows.
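For reference, converting a SavedModel with the TFLite converter looks roughly like this; the paths are placeholders, and the post-training-quantization flag is optional.

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model('saved_model_dir')
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # post-training quantization
tflite_model = converter.convert()

with open('model.tflite', 'wb') as f:
    f.write(tflite_model)

# The resulting file can be loaded by the mobile runtimes, or checked
# locally with tf.lite.Interpreter(model_path='model.tflite').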

We also want to make it easy for any app developer to use TensorFlow Lite. To that end, we've released a number of new first-class language bindings, including Swift, Objective-C, and C# for Unity, among others, complementing our existing bindings in C++, Java, and Python. Through community efforts, we've seen the creation of additional bindings in Rust and even Dart; as an open-source project, we welcome and encourage these kinds of contributions. The model optimization toolkit remains the one-stop shop for compressing and optimizing your models; there will be a talk later today with more details, so check that out. TensorFlow Lite has come a long way, but we have many exciting plans ahead.

Our roadmap includes expanding the set of supported models and further improvements in performance, as well as advanced features like on-device personalization and training. Check out the roadmap on tensorflow.org and give us feedback; again, we're an open-source project, and we want to remain transparent about our priorities and where we're headed. I want to talk now about our efforts to enable ML not just on billions of phones, but on the hundreds of billions of embedded devices and microcontrollers that exist and are used in production globally.

TensorFlow Lite for Microcontrollers is an effort that uses the same model format, the same conversion pipeline, and largely the same kernel libraries as TensorFlow Lite. So what are these microcontrollers? They're small, low-power, all-in-one computers that power everyday devices all around us, from microwaves to smoke detectors to sensors and toys. They cost as little as 10 cents each, and with TensorFlow it's possible to use them for machine learning. Arm, an industry leader in the embedded market, has adopted TensorFlow as their official solution for AI on Arm microcontrollers, with optimizations that

significantly improve performance on this embedded Arm hardware. We've also partnered with Arduino and just launched the official Arduino TensorFlow library, which makes it possible for you to get started doing speech detection on Arduino hardware in just under five minutes. Now we'd like to demonstrate TensorFlow Lite for Microcontrollers running in production. When a motor breaks down, it can cause expensive downtime and maintenance costs, but using TensorFlow it's possible to simply and affordably detect these problems before failure, dramatically reducing those costs. Mark Stubbs, co-founder of Shoreline IoT, will now give us a demo of how they're using TensorFlow to address this problem.

They've built a sensor that can be attached to a motor just like a sticker; it uses a low-power, always-on TensorFlow model to detect motor anomalies. With this model, their device can run for up to five years on a single small battery, drawing just 45 microamps with its Arm Cortex-M4 CPU. So here we have a motor that will simulate an anomaly: as the RPMs increase, it will start to vibrate and shake, and the TensorFlow model should detect this as a fault and indicate it with a red LED.

All right, Mark, start the motor. Okay, so here we have the normal state, and you can see it's being detected, with a green LED: everything's fine. Let's crank it up. It's starting to vibrate and oscillate; I'm getting a little nervous, and frankly a little sweaty... red light, boom. Okay, the TensorFlow model detected the anomaly, and we could shut the motor down; Halloween disaster averted. Thank you, Mark. And once again, we're really thankful for the contributions we get from our community. There's a talk later today, and we have a demo booth; please come by and chat

with us. Thank you. My name is Sandeep, and I'm the product manager for TensorFlow.js. I'm here to talk to you about machine learning in JavaScript. So you might be saying to yourself: I'm not a JavaScript developer, I use Python for machine learning, so why should I care? I'm here to show you that machine learning in JavaScript enables some amazing and useful applications, and it might be the right solution for your next ML problem. So let's start by taking a look at a few examples.

Google released the first-ever AI-powered Doodle. This was on the occasion of Johann Sebastian Bach's birthday, and it let you synthesize a Bach-style harmony by running a machine learning model in the browser, just by clicking on a few notes. More than 50 million users created these harmonies, saved them, and shared them with their friends. Another team at Google has been creating fun experiences. One of these is called Shadow Art, where users are shown a symbol of a figure and use their hand shadow to try

to match that figure, and the character comes to life. Other teams are building amazing accessibility applications, making web interfaces more accessible. On the bottom left, you see something called Creatability, where a person is controlling a keyboard simply by moving their head. And on the bottom right is an application called Teachable Machine, which is a fun and interactive way of training and customizing a machine learning model directly in the browser. All of these awesome applications have been made possible by TensorFlow.js, our open-source

library for doing machine learning in JavaScript. You can use it in the browser, or you can use it server-side with Node.js. So why might you consider using TensorFlow.js? There are three main ways you would use it. First, you can take any of the pre-existing, pre-trained models and deploy and run them using TensorFlow.js; you can use one of the models that we have packaged for you, or you can take any of your TensorFlow SavedModels and deploy them on the web or on other JavaScript platforms. Second, you can retrain these models and customize them on your own data, again using

TensorFlow.js. And third, if you're a JavaScript developer wanting to write all your machine learning directly in JavaScript, you can use the low-level ops API and build a new model from scratch using this library.
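As a flavor of that lowest level, here is a minimal sketch (not from the talk) that fits a line y = ax + b with the low-level ops API; the toy data and learning rate are made up for illustration:

```js
import * as tf from '@tensorflow/tfjs';

// Trainable variables for y = a * x + b, randomly initialized.
const a = tf.variable(tf.scalar(Math.random()));
const b = tf.variable(tf.scalar(Math.random()));

const f = x => a.mul(x).add(b);                        // the "model"
const loss = (pred, y) => pred.sub(y).square().mean(); // mean squared error

// Toy data, roughly on the line y = 2x + 1.
const xs = tf.tensor1d([0, 1, 2, 3]);
const ys = tf.tensor1d([1.1, 3.0, 5.2, 6.9]);

const optimizer = tf.train.sgd(0.01);
for (let i = 0; i < 200; i++) {
  optimizer.minimize(() => loss(f(xs), ys)); // updates a and b in place
}
```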

So let's see why this might be useful. First, it makes machine learning really accessible: with a few lines of code, you can bring the power of machine learning into your web application. Let's take a look at this example. With two lines of code we source the library from our hosted scripts, and we load a pre-trained model, in this case the BodyPix model, a model that can be used to segment people in videos and images. With just these two lines, you have the library and the model embedded in your application. Then we call the model's estimatePersonSegmentation method, passing it the image, and you get back an object which contains the pixel mask of where a person is present in the image.
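A minimal sketch of that flow, assuming the BodyPix 1.x API (where the method was named estimatePersonSegmentation; later releases renamed it) and a hypothetical image element:

```html
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs"></script>
<script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/body-pix@1"></script>
<script>
  async function run() {
    const img = document.getElementById('person'); // hypothetical <img> id
    const net = await bodyPix.load();              // downloads model weights
    const segmentation = await net.estimatePersonSegmentation(img);
    // segmentation.data is a binary per-pixel mask of where a person is.
    console.log(segmentation.width, segmentation.height, segmentation.data);
  }
  run();
</script>
```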

There are other methods that can subdivide this into various body parts, and there are other rendering utilities; so in about five lines of code, your application has all the power of this machine learning model. TensorFlow.js can be used both client-side and server-side. Using the browser has lots of advantages. The amazing interactivity and reach of the browser as a platform means your application immediately reaches all your users, who have nothing to install on their end: by simply sharing the URL, your application is up and running. You also get the benefit of the interactivity of the browser, with easy

access to the webcam, the microphone, and the other sensors available to the browser. Another really important point is that because these models run client-side, user data stays client-side, which has strong implications for privacy-sensitive applications. The browser supports GPU acceleration through WebGL, so you get great performance out of the box. On the server side, TensorFlow.js supports Node.js. Lots of enterprises use Node for their backend operations and for a lot of their data processing. Now you can use TensorFlow directly

with Node by importing any TensorFlow SavedModel and running it through TensorFlow.js in Node. Node also has an enormous npm package ecosystem, so you can benefit from that and plug into the huge npm repository collection. And for enterprises where the entire backend stack is Node, you can now bring your ML into Node as well and maintain a single stack.
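A hedged sketch of that Node.js path, assuming the @tensorflow/tfjs-node package and a hypothetical SavedModel directory on disk:

```js
const tf = require('@tensorflow/tfjs-node');

async function main() {
  // './my_saved_model' is a hypothetical path to a TensorFlow SavedModel.
  const model = await tf.node.loadSavedModel('./my_saved_model');
  const input = tf.zeros([1, 224, 224, 3]); // dummy input for illustration
  const output = model.predict(input);
  output.print();
}
main();
```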

So the question to ask now is: how fast is it? I'm showing here some results for MobileNet inference time. On the left, you see results on mobile devices running client-side in the browser, at about 50 ms per inference. Android performance has some room for improvement, and our team is heavily focused on addressing that. On the server side, because we bind to TensorFlow's native C library, we have performance parity with Python TensorFlow, on CPU as well as on GPU. In order to make it easy for you to get started, we have pre-packaged a collection of pre-trained models for most of the common tasks. These include things like image classification, object detection, human pose and gesture detection, speech-commands models for recognizing spoken words,

and a bunch of text classification models for things like sentiment and toxicity. You can use these models through easy, wrapped, high-level APIs: install them from our hosted scripts or from npm, and then you can use these pre-trained models to build applications for a variety of use cases. These include gesture-based interactions that help improve the accessibility of your applications, detecting user sentiment and moderating content, conversational agents and chatbots, as well as a lot of things around front-end web page optimization.
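For example, here is a minimal sketch using the toxicity model from that collection (the sample sentence is made up):

```js
import * as toxicity from '@tensorflow-models/toxicity';

const threshold = 0.9; // minimum confidence before a label counts as a match
toxicity.load(threshold).then(model =>
  model.classify(['you are amazing']).then(predictions => {
    // One entry per label ('insult', 'threat', 'toxicity', ...).
    predictions.forEach(p => console.log(p.label, p.results[0].match));
  })
);
```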

These pre-trained models are a great way to get started, and they're good for many problems. Often, however, you need to customize these models for your own use case, and this is where the interactivity of the web comes in handy. I want to show you this application called Teachable Machine, which is a really nice way of customizing a model in just a matter of minutes. So I'm going to tempt both the demo gods and the time gods and try to show this live. What you see here is the Teachable Machine web page, which has the MobileNet model

already loaded. I'm going to add classes and collect images: rock maps to green, paper to purple, and scissors to red. I'll capture images for rock, then capture images for paper, then capture images for scissors. Okay: rock, paper, rock, paper, rock, scissors, paper, rock, scissors. Pretty neat. So it's really powerful to be able to customize models like this, super interactively, with your own data.
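Under the hood, Teachable Machine combines a pre-trained feature extractor with a simple classifier. A sketch of that pattern, assuming the mobilenet and knn-classifier npm packages and omitting the webcam wiring:

```js
import * as mobilenet from '@tensorflow-models/mobilenet';
import * as knnClassifier from '@tensorflow-models/knn-classifier';

const classifier = knnClassifier.create();

async function demo(rockImg, paperImg, testImg) {
  const net = await mobilenet.load();
  // `true` asks MobileNet for its internal embedding instead of class logits.
  const embed = el => net.infer(el, true);

  // Collect a few labeled examples (in practice, frames from the webcam).
  classifier.addExample(embed(rockImg), 'rock');
  classifier.addExample(embed(paperImg), 'paper');

  const { label, confidences } = await classifier.predictClass(embed(testImg));
  console.log(label, confidences);
}
```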

Now, what if you want to train on a somewhat larger scale and train a custom, really high-performing model specific to your application? Today we are really excited to announce that we now support TensorFlow.js for AutoML, meaning that you can train a model on your own data with Cloud AutoML, and then with one click you can export a model that's ready to be deployed in your JavaScript application. Using this feature, one of our early testers, the CVP Corporation, which is building mining-safety image classification applications for the mining

industry, trained for just about five node-hours and improved their model accuracy from 91%, with their manually trained model, to 99%, got a much smaller and faster model, and instantly deployed it in a progressive web application for on-field use.
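Loading such an exported model is meant to be just a couple of lines; here is a sketch assuming the @tensorflow/tfjs-automl package and a hypothetical model URL:

```js
import * as automl from '@tensorflow/tfjs-automl';

async function classify() {
  // Hypothetical URL of a model exported from Cloud AutoML.
  const model = await automl.loadImageClassification('https://example.com/model.json');
  const img = document.getElementById('photo'); // hypothetical image element
  const predictions = await model.classify(img);
  console.log(predictions); // [{label, prob}, ...]
}
classify();
```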

Now, in addition to models, one of the big focus areas for us has been support for a variety of platforms and a diversity of languages. JavaScript runs on a large number of platforms, and TensorFlow.js can be used on all of them. And today, again, we're really happy to announce that we now support integration with React Native. So if you are building cross-platform native applications, you can use TensorFlow.js directly from React Native, and you get all the power of WebGL acceleration. To show you the capabilities of the library, let's look at a couple of use cases. ModiFace is an AR technology company based out of Canada, and they have used TensorFlow.js to build a mobile application that runs in the WeChat Mini Program environment, which lets users try out beauty products instantly, running inside

these instant messaging applications. They had strict requirements on model size, load time, and inference performance, and they were able to achieve all of those targets with TensorFlow.js running natively on mobile devices. To showcase the limits of what's possible, our team has built a fun game, an application that shows how you can take a state-of-the-art, very high-resolution face-tracking model and build this lip-syncing game, where the user tries to lip-sync to a song and a machine learning model identifies the lips and tries to measure

how well you are doing the lip-syncing. And because it's in JavaScript, they've added some visualization effects and some sound effects. Okay, it's pretty cool. The creator of this demo is here with us at the demo station, so please stop by and try it. We're also beginning to see more and more enterprises using TensorFlow.js, using it for a lot of their internal ML tasks, visualizations, and computations directly in the browser, and a research group at IBM is using it for on-

the-field mobile classification of disease-carrying snails which spread certain communicable diseases. Lastly, I want to thank our community. The popularity and growth of this library is in large part due to the amazing community of our users and contributors, and we're really excited to see lots of developers building amazing extensions and libraries on top of TensorFlow.js to extend its functionality. So this was just a quick introduction to TensorFlow.js. I hope I've been able to show you that if you have a web or Node ML

use case, TensorFlow.js is the right solution for your needs. For more details, check out our talk later this afternoon, where our team will dive deeper into the library, and there are some amazing talks from our users showcasing some fantastic applications. tensorflow.org/js is your one source for a lot more information: more examples, getting-started content, models, and so on. There you can get everything you need to get started. With that, I would like to turn it over to Joseph Paul Cohen, who works on medical applications at Mila, and he will share with us an amazing use case of how their team is using TensorFlow.js.

Thank you very much. I'm very excited to be here today. What I'm going to talk about is a chest X-ray radiology tool that runs in the browser. If we look at the classical, traditional diagnostic pipeline, there is a certain area where web-based tools are used by physicians to aid them in diagnostic decisions, for things such as kidney donor risk or cardiovascular risk. These tools are already web-based. With the advances of deep learning, we can now take radiology tasks, such as chest X-ray

diagnostics, and put them in the browser too. The cases where this is useful: in an emergency room, where there is a time limit on a human; in a rural hospital, where radiologists are not available or are very far away; and for the ability of a non-expert to triage cases for an expert, saving time and money. Where we'd like to go is towards rare diseases, where there is very little data available in this area. This project is called Chester, the AI radiology assistant. What we need to do to achieve this is run a

state-of-the-art chest X-ray diagnostic network in the browser, all while preserving the privacy of the data and at the same time allowing us to scale to millions of users with zero computational cost on our side. So how do we achieve this? With TensorFlow.js, which gives us a one-second feed-forward pass through a DenseNet model, with a 12-second initial load time. We also need to deal with rejecting out-of-distribution samples: we don't want to process images of cats, or images that are not properly formatted X-rays. We do this using an autoencoder, comparing the input with its reconstruction.
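A sketch of that out-of-distribution gate, assuming a pre-trained autoencoder model and a threshold calibrated on in-distribution X-rays (both hypothetical here):

```js
import * as tf from '@tensorflow/tfjs';

// Returns true if the image reconstructs well, i.e. looks like training data.
function inDistribution(autoencoder, img, threshold) {
  return tf.tidy(() => {
    const recon = autoencoder.predict(img);
    const err = recon.sub(img).square().mean().dataSync()[0];
    return err < threshold; // large reconstruction error => reject the input
  });
}
```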

Finally, we need to compute gradients in the browser in order to show saliency, explaining why we made such a prediction. We can either ship two models, one computing the feed-forward pass and the other computing the gradient, or we can use TensorFlow.js to construct the actual gradient graph and then compute it right in the browser, given whatever model we have already shipped. This makes development really easy, and it's also pretty fast.
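A sketch of that second approach, computing the gradient of one class score with respect to the input pixels via tf.grad (the model handle and class index are assumptions):

```js
import * as tf from '@tensorflow/tfjs';

// Per-pixel saliency for class `classIdx`; img has shape [1, H, W, C].
function saliencyMap(model, img, classIdx) {
  const classScore = x =>
    model.predict(x).flatten().slice(classIdx, 1).asScalar();
  const grads = tf.grad(classScore)(img); // same shape as img
  return grads.abs().mean(-1);            // collapse channels: [1, H, W]
}
```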

Hi, I'm Tatiana, and I'm going to talk today about MLIR. Before we talk about MLIR, let's start from the basics. We are here because artificial intelligence is experiencing tremendous growth: all three components, algorithms, data, and compute, have come together to change the world. Compute is really important, because it is what enables machine learning researchers to build better algorithms and new models. As you can see here, models are becoming much more complex: to train a model today, we need several orders of magnitude more compute capability than we needed several years ago. And how do we build hardware which makes this

possible? For those of you who follow hardware, you know that Moore's law and Dennard scaling are ending; we cannot simply assume the next chip is going to run faster. Because of that, the industry has seen an explosion of custom hardware, and there is a lot of innovation driving the compute which makes artificial intelligence possible. So look at what is around you: in your pocket you probably have a cell phone, and inside that cell phone there is most likely a little chip which makes artificial intelligence possible.

And it's not just one chip, right? There is a CPU, there is a GPU, there is a DSP, there is a neural processing unit. All of that is hidden inside the little phone, seamlessly working together to make a great user experience possible. In the data center, we see an explosion of specialized hardware as well: Habana accelerators, CPUs, GPUs, many different chips. All of this has fueled the tremendous growth of specialized compute in data centers.

And once you have more specialized accelerators, that brings more complexity, and as we all know, hardware doesn't work by itself: it is powered by software. So there has also been tremendous growth in software ecosystems for machine learning. In addition to TensorFlow, there are many other frameworks trying to solve this problem. And we actually have a problem with this explosive growth of hardware and software. The big problem here is that none of this scales: too much hardware, too much complexity, too much software, too many

different systems that are not working together. And what's the fundamental problem? The fundamental problem is that we, the technology industry, across the board, are inventing the same kinds of tools, the same kinds of technologies, and we're not working together. This is why you see the consequences of this: you see systems that don't interoperate, because they're built by different people on different teams to solve different problems. A chip vendor working on their chip makes perfect sense, but the chip doesn't really integrate with all the different software, and the same goes for people who

work with all the hardware. This is why you see things like: you bring up your model, you try to get it to work on a new piece of hardware, and it doesn't work right the first time. You see this in the cracks that form between the systems, and that manifests as usability problems, or performance problems, or debuggability problems, and as a user this is not something you should have to deal with. So what do we want? What we would really love to do is take this big problem, which has many different pieces, and make it simpler by getting people to work together. And so we thought a lot about this,

and the way we think we can move the world forward is not by saying that there is one right way to do things. I don't think that works in a field that is growing as explosively as machine learning. Instead, we think the right way to do this is to introduce building blocks, and instead of standardizing the user experience, or standardizing the one right way to do machine learning, we think that we, the technology industry, can standardize some of the underlying building blocks that go into these tools, that can go into the compiler for a specific chip, that can go

into a translator between one system and another. These building blocks we know, and we can think about what we want from them. We want, of course, best-in-class graph technology; that is really important. We want to solve not just training but also inference, on mobile and on servers, including all the permutations, so training on the edge, which is super important and growing in popularity. We don't want this to be a new kind of technology island solution; we want it to be part of a continuous ecosystem that spans the whole problem. So this is

what MLIR is all about. It is a new system that we have been building and that we are bringing to the industry to help solve some of these common problems, problems that manifest in different ways. One of the things that we're really excited about is that MLIR is not just a Google technology. We are collaborating extensively with hardware makers across the industry, and we're seeing a lot of excitement and a lot of adoption by the people that are building the world's biggest and most popular hardware. But what is MLIR? It is compiler infrastructure,

and if you're not into compilers, what that really means is that it provides the bottom-level, low-level technology that underpins building individual tools and individual systems, which then get used to help with graphs, and to help with chips, and things like that. So how does this work? What MLIR provides, if you look at it in contrast to other systems, is again not a one-size-fits-all kind of solution; it is trying to be the technology that powers these systems. And so, like we said before, it of course contains state-of-the-art compiler technology,

and while within Google we have dozens of years of compiler experience on the team, across the industry we probably have hundreds of years of compiler experience, all collaborating together on this platform. It is meant to be modular and extensible, because requirements continue to change in our field. It's not designed to tell you, as a system integrator, the right way to do things; it is designed to provide tools so that you can solve your problems. Now, if you dive into the compiler, there is a whole bunch of different pieces. There are things like low-level graph

transformation systems; there are things for code generation, so that if you're building a chip, you can generate kernels for it. The point of this is that MLIR does not force you to use one compilation pipeline. It turns out that while compilers are really great for code generation, so are hand-written kernels: if you have hand-written kernels that are tuned and optimized for your application, of course they should work in the same framework, and it should work with existing runtimes. We really see MLIR as providing useful value that can then be used to solve problems, not as trying to force everything into one

box. You may be wondering, though, if you're not a compiler person, or a system integrator, or a chip person, what does this mean for you? So let's talk about what it means for TensorFlow. What it means for TensorFlow is that it allows us to build a better system, because integrating TensorFlow with new specialized hardware is a really hard problem. With MLIR, we can build a unified infrastructure layer which will make it much simpler for TensorFlow to seamlessly work with any hardware chip which

comes out. For you as a Python developer, it simply means a better developer experience. A lot of things that today might not work as smoothly as we would like can be resolved by MLIR. This is just one example: you write a model, you try to run it, and you get an error from somewhere deep in the stack, and you have no clue what it means. With MLIR in place, you will instead get an error message that tells you the exact line of Python code which caused

the problem, so you can look at it and fix the problem yourself. And just to summarize: the reason we are building MLIR is because we want to foster innovation, and we want the industry to move faster with us. One of the keys to making an industry work well together is neutral governance, and that's why we submitted MLIR as a project to LLVM; it is now part of the LLVM ecosystem, and the code is moving soon. This is very important, because LLVM has a 20-year history of neutral

governance and of building infrastructure which is used by everybody in the world. And this is just the beginning, so please stay tuned. We are building a global community around MLIR, and as it matures it will get better for everybody, and we will see much faster advances in artificial intelligence in the world. Thank you. I work at Hike, and I lead the AI and data technologies there across various areas, which I'm going to talk about today. So here

are some of the use cases where we use AI, the fundamental one being Hike as a platform for messaging, and now we are driving a new social experience. We are looking at more visual ways of expressing interactions between users than typing messages. If, as one types, one could get stickers recommended to reply with, almost in a magical fashion, in a more expressive fashion, then conversations would be more interesting and engaging. Sticker recommendation will basically need to address around eight to nine languages currently in

India, and as we expand internationally, we would be addressing an even larger number of languages. So we want to go hyper-local, and, from a personal perspective, we want to address the needs of each person in his or her own personal language: when you type, you automatically get stickers recommended in the corresponding native language. The second use case is friend recommendation using social network analysis and deep learning: we use graph embeddings and deep learning to recommend friends. The next one is around fraud analytics. We

have lots of click farms, where people try to misuse the rewards offered on the platform, and therefore you need interesting deep learning and anomaly detection techniques to address that. Another area is campaign optimization and hyper-personalization, to be able to address the needs of every user and make the experience engaging and extremely interactive. And finally, we have interesting work on sticker processing using vision and graphics, which should be coming soon in our releases. Further, you

know, we have a strong research focus; we are passionate about research. We have multiple publications in ECIR this year, a demo not directly related to messaging, and an ICML workshop paper as well. Looking at the kinds of problems we address, we need to look at extensions which address the limitations of supervised learning: problems where there is a long tail of data and only a very limited number of labels available. The same problems occur in

NLP and vision, and they call for smarter learning techniques. Hike handles about 4 billion events per day across millions of users, collecting terabytes of data, using Google Cloud and its various tools, including Kubeflow and Dataflow, and we use it for the use cases I mentioned earlier. I will look into one particular use case right now: stickers. Stickers, as I mentioned, are powerful

expressions of emotion, of context, and of various kinds of visual expression. The challenge is discovery: if you are going into millions, and further into billions, of stickers, how do you discover the right stickers and exchange them in real time, within a few milliseconds? Based on the typing, personal interests, the context, the event of the day, the situation, recent messages, gender, and language, you want to predict which stickers are most relevant. Essentially, one needs to look at all the different ways a particular

text is typed, and then aggregate essentially the same and similar phrases to capture the latent intent, and do this within each language and across languages, so that the experience does not degrade; and we need to deliver it natively on the device, with a few milliseconds of response time. The flow is basically this: given the context and what the user is currently typing, we use a message model, a classification model which predicts the message, and those messages are mapped to the corresponding stickers. So for the prediction,

essentially, we use a combination of TensorFlow training at the server and TensorFlow Lite running on the device. Because we want to deliver a few milliseconds of latency, we quantized the neural network and ran it on the device using TensorFlow Lite, and we were able to get the desired model performance. Eventually, once the messages are predicted, the stickers are naturally mapped, based on the types of the stickers and what intents they are meant

to deliver, corresponding to the predicted message: what one predicts the person is trying to express, from an expression perspective, considering attributes such as gender; and then we go to the stickers. The stickers we score using reinforcement learning algorithms, so that the right kinds of stickers are surfaced, and as people's behavior on the platform changes, the corresponding stickers also adapt in real time. Thank you.
