Duration 38:13
An introduction to developing Actions for the Google Assistant

Adam Coimbra
Product Manager at Google
2018 Google I/O
May 8, 2018, Mountain View, USA

About speakers

Adam Coimbra
Product Manager at Google
Jamie Hirschhorn
Global Product Partnerships, Google Assistant at Google
Sachit Mishra
Software Engineer at Google

Adam is a Product Manager working on Actions on Google, focused on the conversational APIs for the Google Assistant. He graduated from the University of Pittsburgh with bachelor's degrees in Computer Science and Business Administration.


Jamie works on Global Product Partnerships for the Google Assistant. Her team at Google helps developers with discoverability and re-engagement integrations across core Google services, including the Assistant & Search. Jamie focuses on emerging surfaces for the Assistant, and building flagship partner experiences across key consumer verticals, such as Meals and Delight & Family. Jamie studied Finance & English at the University of Florida, and resides in San Francisco.


Sachit works on libraries, documentation, and outreach efforts for Actions on Google as a developer programs engineer. He has also supported Android TV and Google Cast. Prior to Google, Sachit was a software engineer working at companies like Bloomberg, Intuit, and Grooveshark. Sachit earned a bachelor’s degree in Computer Science from the University of Florida.


About the talk

Interested in the Google Assistant but not sure how to start developing Actions? Head over to this introductory session to get an overview of the developer platform and learn how to launch an Action that will make your existing services available on hundreds of millions of devices through the Google Assistant.


Hi everyone, I'm Adam Coimbra, and I'm a product manager on the Google Assistant.

And I'm Sachit Mishra. I'm on the Google Assistant developer relations team, and I work on early-stage integrations. So, Adam will kick us off first.

Cool, so welcome to the introduction to building Actions for the Google Assistant. This is our 101 overview session, so it's perfect if you're totally new to all of this, and hopefully it'll pique your interest in the other great deep-dive sessions we have at I/O this year. I'll be going through the why: the motivation and background for building for the Assistant. Then comes my favorite part, from Sachit: the how, a deep dive into the technical side of building. And finally, how to really flourish on the platform and optimize your actions to get more value.

So, it seems like we see a major shift in technology every decade or so: mainframes (or so I'm told), then desktops, the web, mobile. With the move from web to mobile, for example, we saw an explosion of new devices, new ways to consume content, interactive services, and apps on the mobile web. With each shift, computing becomes more ubiquitous, and people expect technology to do more and more to help them with their day-to-day lives. But all these devices and services also increase complexity, and they can get in the way when we're just trying to get things done.

That's why we're so excited about the Assistant. We hope it can solve some of these problems by orchestrating across this ocean of services and devices at our disposal, and allow us to just naturally express what we want, literally say it out loud, and get back what we asked for in an equally natural way. The Google Assistant is our approach to this new era of digital assistants and conversational computing, and we're starting to see that long-term vision come to fruition. Users are asking for things in new ways: they're saying "I want to know," "I want to do," "I want to hear." They're expecting to complete more and more complex tasks with assistants, and they're asking for their favorite brands to be available in this new medium across all their devices. A great example you probably saw this morning is Starbucks: soon you'll be able to ask Starbucks for your usual, and in ten minutes have your drink waiting for you at the closest location.

So this is a service that many of us use on a daily basis, bringing an entirely new level of convenience and seamlessness to the use case. So, how do we do this? Well, Actions on Google is the developer platform for the Google Assistant. Now, if you're already participating in Google's existing ecosystems like Search, Android, Play, or Maps, we're working to automatically bring your services and content to the Assistant. But Actions on Google is the way to target the Assistant directly. So let's say you already have a web app or a mobile app, you've optimized every pixel of it, and you're asking what's next: bring it to the Assistant. Think about the use cases that make sense in this space. For example, Starbucks focused on the daily routine of ordering your usual coffee on the way to work.

We mean for Actions on Google to be a powerful new ecosystem for developers to participate in this new assistant space and leverage Google's investments in AI-first technology. But for you to invest, we expect you to ask what you get in return, and there are three things I think are really important for us to provide for this to be a valuable platform: reach, monetization, and re-engagement.

First of all, we have to provide you with reach, that is, access to a wide audience of users. Users are already coming to the Assistant to perform over one million actions, but the system is far from saturated. As users come to get more and more things done, there's a huge opportunity for your brand to get discovered and to serve still-unmet needs. And they're performing these actions across a huge range of devices and contexts, from voice-only speakers like the Google Home, to the phone, to Android Auto, to TVs, Chromebooks, and more, bringing the number of devices where you can reach users to over 500 million.

Another component of audience expansion is the global user base: actions are now available in more than 25 locales, with more coming this year.

Once you have access to this wide audience, you need to be able to actually run your business and make money. Last year we added transactional capabilities for physical goods and services, and soon we're expanding this to digital goods and subscriptions as well. We've built an end-to-end transactional experience completely native to the conversational modality. Using Google Pay, we can seamlessly take payments from users; with identity, we can sign them in to your app, and you can register them for new accounts. And now these experiences are going to be even more seamless, with fully eyes-free support for both payments and identity. We have a great session on Thursday on adding transactional capabilities if you'd like to learn more.

Now, the final key to making this successful is to create a sustainable platform. Re-engagement is all about creating a loyal base of users that keep coming back to your action over and over again. I can tell you that our team constantly thinks about what we call user journeys. We've analyzed tons of usage behavior and found three critical journeys where we think we can really assist users: in the morning, when they're trying to get out the door; during their commute, so they can be productive on the go; and in the evening, when they're trying to de-stress and relax. In these key moments, users are highly engaged and they keep coming back to the Assistant, and by optimizing for these moments the Assistant can drive organic discovery and retention for your actions. If you'd like to learn more about driving engagement, come to Thursday's session on building a user base for your actions. So we really do believe there's an amazing opportunity here for you to reach a wide audience, make money, and build loyalty and retention.

But what can you actually build together with us? Well, we offer a range of integrations, and the first is vertical programs. These are direct integrations with Google, and they're great because with a simple API implementation we'll create the entire user experience for you. It's been amazing to see the success and adoption of our smart home API: as you heard, it now integrates with over 5,000 devices. A single API ties together the Google Assistant and the device maker's infrastructure, handling increasingly complex commands and supporting a wide range of understanding. We have a great talk on smart home integrations on Thursday, so please visit to find out more. And this is just one example of how powerful vertical programs are, enabling great user experiences with simple integrations.

For content publishers, we offer content actions. This extends our long history of structured data markup for Google Search: you can optimize your structured data for the Assistant to enable great voice experiences with your content. For example, a user can ask the Assistant about recipes and see rich visual results from Food Network, and because the content is optimized for the Assistant, we can provide voice instructions and guide them through the recipe step by step. If you're already doing structured data for Search, you're on the right track, and we have a deep dive on Wednesday on taking the next step to optimize it for the Assistant.

Next, with app actions, we're helping users discover the core capabilities of your existing Android app. You can expose actions and content from your Android app by linking them to semantic intents in the actions.xml file. This allows Google to map user queries for content and actions to your app, and then feature you in suggestions across the Android launcher, smart text selection, the Google Search app, the Google Assistant, and the Play Store, helping you get further reach and engagement with your users. You can find out more today at 5 p.m. in the talk on getting started with app actions.

So these are great ways to integrate if you have an existing Android app or a vertical- or content-specific use case. But many of the most successful ecosystems, like the web and mobile, are powerful because they have a generalized integration model that allows developers to build custom experiences for any use case they can imagine.

That's what conversational actions are: completely custom for your brand, a direct conversation between you and your user. Here's an example of a conversational action. It's called California Surf Report, and it helps you find wave conditions if you live in California. Let's walk through the user experience. It starts with a query like "talk to California Surf Report," which is like the voice version of a shortcut on the mobile home screen. The Google Assistant then looks through its index of actions and finds a match, and now it's time to let the user know that it's going to hand over control to California Surf Report. So it says something like "Sure, connecting you with that action," followed by an earcon. An earcon is like an audio version of an icon: a distinctive sound letting the user know a transition is taking place. Now the user is conversing directly with the action. The action welcomes the user in its own voice, explaining its functionality and asking an initial question. Finally, the action responds with the information the user asked for and ends the conversation, and we hear another earcon letting the user know the Assistant is back in control.

So this is a powerful, open-ended model where you can build any sort of conversational interaction you can imagine. And it's not just about voice: the Assistant is on so many different devices, and many of them have screens. Adding rich visuals to supplement the conversational interface makes it far more engaging and delightful for your users. Here are some examples of that same interaction with California Surf Report on visual devices; as you can see, it creates an immersive experience, responding to the user with rich visual output as well as voice output.

Now, supporting different modalities and form factors has always been a challenge for platforms like the web and mobile, and when you come to the Assistant you're targeting an even more diverse range of modalities: voice-only devices like the Google Home, voice-forward devices like the new smart displays, intermodal devices like the phone, and visual-only devices like smartwatches. But our conversational actions are multimodal: a single implementation of our rich response API scales your experience across all these devices. If you're interested in learning more about this, we have a great session on Wednesday called Designing Actions for the Assistant.

Of course, no matter how smart the APIs are, it still takes some work and technical know-how to get up and running, and that's why we provide templates to get you started. Templates are a super fun and easy way to build for the Assistant, with no prior knowledge required. You can create fun and engaging conversational actions like personality quizzes, trivia, and flashcards. All you have to do is come to the Actions console and follow a step-by-step wizard to get started with a template.

Then you just fill out a Google Sheets spreadsheet with the content. For flashcards, for example, there's a row for each card and columns for the two sides, and that's it. I'd love for everyone here to go tonight, spend 20 or 30 minutes, and build a conversational action with templates; you'll see it's really fun and easy. Of course, I know most of you are developers, so that's never going to be satisfying enough. You probably have a use case we've never thought of, you want to create something truly custom, or you're just interested in this conversational space and want to start building your skills. So let's hear a little bit about how to build custom conversational actions.

All right, hey everyone. Adam just laid out a bunch of options for integrating with the Google Assistant, and what I want to do is walk us through building a conversational action. To do that, I want to share my journey of how I got here: to a fully functional, high-quality conversational action that I built from scratch. What it does is give the user some information about any given programming language, specifically information that will inform whether to use that language in an upcoming project.

Like many of my colleagues at Google, and I'm sure many of you here, I have had the profound gift of sharing my programming knowledge with others. Back when I was in high school in South Florida, I volunteered at a youth program in my community where we taught the basics of programming to middle school and high school students, and this was in a community where a lot of these students wouldn't have had exposure to this knowledge without these kinds of classes. After high school, when I moved away for college, I had to say goodbye to these students with whom I'd worked for so long, and I've always been looking for another way to reach out to them. Luckily for me, a lot of them have grown up, and they now have Android phones and other devices with the Google Assistant built in. So I thought it would be a great platform to reach out to them once again with an old lesson about choosing the right language for the right project.

From all of the options that Adam laid out, I had to figure out which one was best for me. I needed to build something educational, but I kind of have my own brand, my own way of teaching, and I wanted that back and forth; I wanted to control the dialogue. So I needed a conversational action. I went to the Actions on Google documentation and learned more about building conversations. It basically works by sending your endpoint a request for every turn of dialogue, and it gives you complete control over the conversation: you get your own voice, separate from the Google Assistant, and the conversation ends when you decide or when the user cancels out. But that's pretty daunting, especially for someone who has historically built graphical applications, right? So I wanted to learn more.

When it comes to any new platform, there are four things I care about: design, tools, infrastructure, and testing. With this whole new paradigm of interacting with the user, I wanted to start with design. I learned that one big thing to consider is your action's persona. As human beings, we have this remarkable habit of comparing any human-like interaction to a real human presence, and I promise this is going to happen to your action. So define a persona for your action, and always ask: what would this persona say or do in this moment?

How did I come up with my persona? First, I came up with some adjectives that describe my brand: I'm a little bit geeky, I'm overeager, and I'm kind of talkative when the topic is something I really care about. Then I came up with some characters that embody this in real life: I could use an engineer like myself, or maybe a librarian, or, if you're like any of my friends, a video game enthusiast; they tend to be this way as well. Then I chose one and whittled it down to a description: a digital librarian program that has been migrated from one legacy architecture to another since the '70s, and so it knows a lot about programming languages.

All right, then I had another choice to make: I needed to choose a voice, and I realized I had a couple of options here. The platform comes with some built-in synthesized voices, both female and male, which makes it really easy to just plug in your text and go. You can even get some advanced control over the delivery of that output with Speech Synthesis Markup Language (SSML), which lets you adjust things like the pitch and the speed, insert little pauses, and embed audio clips in the output. The other option is recorded voice talent, which has the potential to take the experience to the next level. But I would actually have to go find voice talent, and I would have to localize my recordings if I wanted to support more regions, and I just didn't have the resources for that, so it wasn't going to work for me. All right, so then I figured I could jump into the tools. The first place I went was the Actions console, where I created a project.
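As a small illustration of the SSML option just mentioned, here is a sketch of building an SSML string in Node.js. The element names (speak, break, prosody, audio) are standard SSML, but the persona name, pause length, prosody values, and audio URL are all made up for this example:

```javascript
// Sketch: building an SSML response string for the synthesized voice.
// <break> inserts a pause, <prosody> adjusts pitch and rate, and <audio>
// embeds a clip; the specific values here are purely illustrative.
function ssmlGreeting(name) {
  return [
    '<speak>',
    `  Hi, I'm ${name}.`,
    '  <break time="300ms"/>',              // a short dramatic pause
    '  <prosody rate="110%" pitch="+2st">', // slightly faster and higher
    '    I can tell you all about programming languages!',
    '  </prosody>',
    '  <audio src="https://example.com/chime.ogg"></audio>', // embedded clip
    '</speak>',
  ].join('\n');
}

console.log(ssmlGreeting('ByteBot')); // "ByteBot" is a made-up persona name
```

You would hand a string like this to the response API in place of plain text; the synthesizer interprets the markup rather than reading it aloud.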

In the console I can see all these options for getting started and the different categories of actions I can build. I can simulate my action while it's in development, check analytics on the action once it's published, and edit the directory listing that users will see in the Assistant directory. And the beautiful thing about this project is that it's actually backed by Google Cloud and Firebase, so all of my related technical assets can live in one place.

So I've created my project, and I can create an endpoint with Cloud Functions for Firebase using the same project. But this endpoint is going to receive the text of the user's utterance, and I need to process that. Okay, but I don't have a PhD in natural language processing, and my machine learning is shaky at best. So what do I do? Use regular expressions? The next thing I figured out is that Google gives you an amazing tool for natural language understanding called Dialogflow, and it does exactly what I need: it turns that unstructured natural language input into a structured representation that my endpoint can consume. Dialogflow sits in the middle, between the Assistant and my server, and it does that for me with machine-learning-powered intent matching and entity recognition. More on those in a second.

But first, I just wanted to build one turn of dialogue, so I dug more deeply and figured out how to at least build this. The way it works is that the user invokes my action by name using one of our supported phrases, like "talk to" followed by the action's name, and the magical Google Assistant infrastructure handles that like it would any other user utterance. It realizes that my action is the best one to handle it, and my Dialogflow agent receives a request and realizes it's being invoked, so it triggers something called the welcome intent, a default that every Dialogflow agent comes with.

All right, so how do I do this? From the project overview page in the Actions console, I can create a custom action through Dialogflow, and in Dialogflow I modify the welcome intent, removing the default greetings it gives me. So now I need to come up with some sort of greeting for the user, and I'm thinking about it and I can't really come up with anything captivating. And then I remembered: what would my persona say in this moment? With that, I came up with a couple of kind of geeky, overeager variations of welcome prompts, which ask the user for a language. Something like: "I can tell you all about programming languages. Which one do you want to hear about?" So I'm greeting the user, I'm orienting them to what my action can do, and then I'm prompting them for an answer. From there I can head over to the Actions console through the Dialogflow integrations page, and I can start messing with my action. And if I wanted to, in the future I could do some really cool stuff here: I could tailor the greeting for repeat users, or change it based on how long it's been since that specific user last spoke to my action. By the way, there's a session at 10:30 a.m. on day 3 about personalizing actions for users, so if you're interested, check that one out. All right, so that was easy enough. Now the user can answer that question of which language they want to learn about, but how do I account for all the variation in those answers? I'm going to use a Dialogflow intent.
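As a rough sketch of the greeting idea above, here is how a handful of persona-flavored welcome variants might be rotated. The greeting text and function names are invented; in a real action this would be registered as the welcome-intent handler through the actions-on-google client library rather than called directly:

```javascript
// A couple of geeky, overeager welcome-prompt variants, so repeat users
// don't always hear the exact same line. Both prompts are made up.
const WELCOME_VARIANTS = [
  'Hi! Like a well-indexed database, I can tell you all about programming ' +
    'languages. Which one do you want to hear about?',
  "Hello, hello! I've been paging through my language archives all day. " +
    'Which programming language should we talk about?',
];

// Takes a random-number source as a parameter so the choice can be
// tested deterministically; defaults to Math.random in production.
function welcomePrompt(rand = Math.random) {
  const i = Math.floor(rand() * WELCOME_VARIANTS.length);
  return WELCOME_VARIANTS[i];
}

console.log(welcomePrompt());
```

The same pattern extends naturally to tailoring the greeting for repeat users: keep a per-user visit count and pick from a "welcome back" list instead.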

The way Dialogflow intents work is that you train them to listen for phrases that the user might speak in each turn of dialogue; each semantically similar set of phrases is called an intent. With the power of machine learning, when the user says anything similar to the phrases you provided, at any point in the conversation, Dialogflow will match that intent. So I created an intent which listens for the user saying something like "tell me about Kotlin" or "should I use JavaScript," and Dialogflow knows they're basically asking about a language and extracts which language they want. How does it do that? Well, I defined a custom entity in Dialogflow called "language," which lists out all of the programming languages I support. And how do I account for the variation in the way each programming language is spoken? Programming languages have nicknames, and Dialogflow offers the feature of designating synonyms for each canonical value. I learned it also comes with a bunch of built-in system entities that it already knows about, things like temperatures, cities, dates, and times.
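To make the synonym idea concrete, here is a tiny sketch of what that custom "language" entity does behind the scenes. Dialogflow performs this matching for you once you list synonyms in its console; the synonym lists below are just illustrative:

```javascript
// Each canonical entity value maps to the ways users might say it.
// This mirrors what a Dialogflow custom entity with synonyms encodes.
const LANGUAGE_SYNONYMS = {
  javascript: ['javascript', 'js', 'ecmascript'],
  kotlin: ['kotlin'],
  go: ['go', 'golang'],
};

// Resolve a spoken form to its canonical value, or null if unknown
// (in which case the action would reprompt the user).
function canonicalLanguage(spoken) {
  const needle = spoken.trim().toLowerCase();
  for (const [canonical, synonyms] of Object.entries(LANGUAGE_SYNONYMS)) {
    if (synonyms.includes(needle)) return canonical;
  }
  return null;
}

console.log(canonicalLanguage('golang')); // -> "go"
```

The benefit of letting Dialogflow own this table is that matching is fuzzy and language-aware, not the exact string comparison sketched here.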

At this point I need to actually respond to the user dynamically, with some information about the language they've chosen, so I'm going to expand my infrastructure a little beyond one turn of dialogue. Now the user can respond with a request to know about some specific language; let's say they say "tell me about Kotlin." This time the Google Assistant hands off directly to my action, because the user has already entered the dialogue. It handles the speech-to-text transcription, and Dialogflow matches the "which language" intent I described earlier. But this time, instead of responding itself, it reaches out to my webhook hosted on Cloud Functions for Firebase, which dynamically pulls information about that language and presents it to the user. So how did I tell Dialogflow to do that? I set the URL of my cloud function in the fulfillment settings of Dialogflow, where I can also edit the auth and header settings for the POST request my webhook receives. There's also an inline editor on this page, so you can actually edit your cloud function's code right in the Dialogflow console. All right, now a bit more about building one of these webhooks.

Since I'm using Cloud Functions for Firebase, I'll be using Node.js. The first thing I do in my index.js file is create a simple JSON mapping of language strings to some helpful facts about each language. Then, later in the same file, I use the Actions on Google Node.js client library to handle the "which language" intent from Dialogflow with a callback. In that callback, I respond with a fact from the mapping I defined earlier, and I prompt the user to ask about another language, so I'm furthering the conversation. At this point I've got something pretty substantial to test. I can go back to the simulator in the Actions console and play around with it like a natural user, and I might use some of the debugging information on the right side; I can even simulate the experience as if I were a user on different surface types. Speaking of surfaces, the experience right now is a little bit dry on a screen device. So how can I change that? Well, one thing I can do is add more visual descriptions of the information.
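A minimal sketch of the index.js flow just described, assuming a made-up set of facts and the intent name "which language": the pure response logic is shown directly, and the wiring into the actions-on-google library and Cloud Functions is indicated only in comments:

```javascript
// A plain-object mapping of language names to facts; the facts here are
// illustrative placeholders for whatever content your action serves.
const LANGUAGE_FACTS = {
  kotlin: 'Kotlin is a statically typed language that interoperates with Java.',
  javascript: 'JavaScript runs in every major browser and on Node.js servers.',
  go: 'Go compiles to a single static binary and has built-in concurrency.',
};

// One turn of dialogue: answer the question, then further the conversation
// by prompting for another language.
function whichLanguageResponse(language) {
  const fact = LANGUAGE_FACTS[language];
  if (!fact) {
    return "Hmm, I haven't indexed that one yet. Try another language?";
  }
  return `${fact} Want to hear about another language?`;
}

// With the real client library, the wiring looks roughly like:
//   const {dialogflow} = require('actions-on-google');
//   const functions = require('firebase-functions');
//   const app = dialogflow();
//   app.intent('which language', (conv, {language}) => {
//     conv.ask(whichLanguageResponse(language));
//   });
//   exports.fulfillment = functions.https.onRequest(app);

console.log(whichLanguageResponse('kotlin'));
```

Keeping the response logic in a plain function like this also makes it easy to unit test without spinning up the webhook.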

Here I'm using the BasicCard class from the Node.js library to construct a rich visual response for the Go programming language. It has a couple of cool things: a title, text, and even some imagery with alternative text for accessibility; in this case the image is Go's mascot. I've even got a button with a link to Go's website. And the API gives me really granular control over the construction of the response, even the placement of the chat bubbles above and below the basic card. Now, what if I need to guide the user a bit more? Say they didn't know which languages they could choose from. I can create a list of the programming languages I support: in the library, I create a mapping of each language to a row in the list, with its own title and image, and then I present the options to the user as a list, and the user can tap or say whatever they choose. And if I want to emphasize the graphics, I can change one word in the code and the list becomes a carousel, which the user can swipe through to browse the options more visually. But all of this is on a screen device, right?
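Here is a sketch of the shape of those rich responses, written as plain objects. With the actions-on-google Node.js library you would construct them with the BasicCard, List, and Carousel classes; the field layout below mirrors that general shape, but the URLs and copy are invented:

```javascript
// A basic-card-style payload for the Go language: title, text, imagery
// with alt text for accessibility, and a linked button.
function goBasicCard() {
  return {
    title: 'Go',
    text: 'Compiled, statically typed, with first-class concurrency.',
    image: {
      url: 'https://example.com/go-gopher.png',  // the Go mascot (placeholder URL)
      accessibilityText: 'The Go gopher mascot', // alt text for screen readers
    },
    buttons: [{ title: 'Visit golang.org', url: 'https://golang.org' }],
  };
}

// Map each supported language to a row with its own title and image.
// Presenting the same rows as a carousel is essentially just a different
// presentation of the same items.
function languageList(languages) {
  return {
    title: 'Pick a language',
    items: languages.map((name) => ({
      title: name,
      image: { url: `https://example.com/${name}.png`, accessibilityText: name },
    })),
  };
}

console.log(languageList(['kotlin', 'go']).items.length); // -> 2
```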

What if the user is on a smart speaker and I want to show them something visual? Here's a super cool feature, one that empowers you as developers to leverage the immersiveness of the ecosystem. I can check whether the user is currently on a screen device, and if not, whether they have any screen devices with the Assistant linked to their account. If they do, I can actually ask them to switch devices: the dialogue ends on their speaker, and they get a notification on their phone where they can resume the dialogue, and I even get control over what that notification says. This is a great example of how we're really unlocking the power of the Assistant for all of you, and there's a session tomorrow at 11:30 a.m. about designing across surfaces, so check that one out. And if I had a mobile app that includes its own authentication, I could plug into that from my action, so that the user's identity is shared across the mobile app and the action; there's a session on day 3 at 9:30 a.m. about integrating mobile apps with the Assistant. All right, so now I've got a pretty cool experience going, but I want to test it further.
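The cross-surface check described above might be sketched like this. The SCREEN_OUTPUT capability string is a real Assistant capability name, but the surrounding decision logic is a simplified stand-in for what you would do with conv.screen, conv.available.surfaces, and the NewSurface helper in the client library:

```javascript
// Capability name used by the Assistant to indicate a surface has a screen.
const SCREEN = 'actions.capability.SCREEN_OUTPUT';

// Decide how to deliver a visual response given the capabilities of the
// current surface and of the user's other linked surfaces.
function planVisualResponse(currentCapabilities, availableCapabilities) {
  if (currentCapabilities.includes(SCREEN)) {
    return { action: 'show_here' }; // this device has a screen: just show it
  }
  if (availableCapabilities.includes(SCREEN)) {
    return {
      action: 'offer_transfer', // user has a linked screen device elsewhere
      notification: 'Your language details are ready', // custom notification text
    };
  }
  return { action: 'voice_only' }; // no screen anywhere: speak the answer
}

console.log(planVisualResponse([], [SCREEN]).action); // -> "offer_transfer"
```

If the user accepts the transfer, the conversation ends on the speaker and resumes from the notification on the screen device, exactly as described above.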

We've already talked about testing in the Actions console simulator, where I can get debug information and try my action on different surfaces. But beyond the console, I can also test directly on any Assistant-enabled device that's logged in to the same Google account. And we're also announcing something really cool that you've asked for for a long time: you can now create alpha and beta versions of your actions that can be shared with a limited audience for feedback and iteration. This means I could share my action with my students before publishing it live. There's a session with more about testing tomorrow at 8:30 in the morning, so check that out.

So now you've built this high-quality action, you've got rich responses and variation, you've shared it with some users, and you've iterated on the feedback. You're ready to submit to our review team and push the action live. So let's hear more about that.

Awesome. Hi everyone, it's Jamie again. Now that you've heard a lot about the how and the what of building on the Assistant, let's talk about how you can actually be successful on the platform.

We think of this in three key pillars: discovery, monetization, and re-engagement. As you heard from Adam, people are turning to their devices and their Assistants for lots of different things in their day. It can be something small, like setting a timer, or a little more meaningful, like helping plan your day. So whether you're building a productivity action, a food-ordering action, a gaming action, whatever it might be, you know that your users are out there looking for you.

Now, I know everyone's had a long day and it's a little bit hot out, but let's do a really quick exercise. Everyone close your eyes for a second. Think about the best party you've ever been to. Maybe it was in college, maybe it was last night, maybe it was a wedding or a family affair. Maybe you remember the smells or the music. Now imagine you were never invited to that party. Or think about your favorite brands: did you take a ride-sharing service here this morning? Did you order from a food delivery service last night? Do you have a favorite game or retailer? Now imagine that nobody ever told you about those brand experiences either. This is why discovery is so important, and inviting someone to your experience is oftentimes just as critical as building a great experience itself.

So where do you get started? The best place to start is with our Assistant directory. It lives on web and mobile, across Android and iOS. As a developer, once you start a project in the Actions console, you'll be able to optimize your listing in real time. What's really interesting here is that we see 30% of third-party actions get discovered in the directory, so it's really critical to optimize this listing first. Let's talk about the variety of use cases in the directory.

It starts with browsing. Here you'll see tailored, personal recommendations at the top, and less contextual content as you scroll down the page. A few areas to highlight: first, you'll see contextual suggestions, which are relevant suggestions based on the user's context, things like time of day and location. For example, here I can see a recommended action for my morning, like playing my favorite playlist. Next you'll see a set of categories; there are four of them live today. What's New highlights new quality actions; What's Popular highlights overall popular actions; What's Trending covers recently popular actions; and lastly, the most tailored, What You Might Like, suggests actions based on the user's history. The directory also supports categorical browsing and search; here we're trying to help users find relevant actions in specific categories, and find their favorite actions through a really quick search. For example, if I'm interested in the Food & Drink category, you'll see all the relevant actions in that category.

But you'll also see some helpful subcategories. So for example, you can see you have the option to order food from perhaps Domino's or Dunkin' Donuts, and I can also get recipes from Food Network. Finally, and most importantly, your action page. Think of this as the welcome mat to your brand and your action: it should reflect your brand, and it should invite users to really try your action. You'll see at the top here there's a quick summary, which is where you'll see imagery, a description, and sample invocation phrases. This allows the user to really understand what your action does. In the

second section you'll see ratings and recently launched reviews. This allows the user to assess the quality of your action before perhaps trying it. So what does this all mean for you? How can you actually optimize this in real time? How do you make your action stand out with over a million actions in our directory today? Here's a live example I'll walk you through. Pretend I'm a French Bulldog lover and I wanted to publish an action on French Bulldog trivia. On the left, you'll see we did a few things really well: short, concise names; strong, clear images; accurate

descriptions (whereas on the right, you'll see the description gets trimmed off); and, most importantly, a bunch of sample invocations. We want to maximize the number of queries users understand they can actually use to invoke the action. So instead of just saying "talk to French Bulldog Trivia," you'll see a variety of invocations, such as "ask French Bulldog Trivia about grooming," for example. Okay, so now you've got your action live with a directory listing. Let's talk about how built-in intents can also help you get discovered. It's a great tool. The reason we built this was because we saw three trends:

firstly, it's really hard for users to find an action for something that they want to do. Secondly, it's hard for Google to actually classify what an action can do on our end. And thirdly, natural language understanding is pretty hard. As you can see here, there are a ton of languages, dialects, cultural differences, and slang that all make for a million different ways to say the exact same thing. Just look at this small set of ways that a user can express "I want to de-stress." Built-in intents can provide a common taxonomy for actions across all of our Google

surfaces. If somebody says, "Hey Google, I want to start meditating," we can surface a few actions that are indeed meditation apps. Let me share some of the built-in intents we're working on this year. You'll see the ones that are starting to roll out; check these out and see which ones make the most sense for your business. Keep in mind we will be rolling lots more out this year. The best part about built-in intents, and lots of the other tools that I'll talk about today, is that they're really, really easy to implement:

you just log into the Actions console, simply pick from the list the intent that describes your action best, and attach it. And finally, here's the output of this new infrastructure: if somebody has a very generic ask, like "I want to play a math quiz" or "I want to play a quiz," you'll see a bunch of different relevant suggestions for that query.
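For developers who prefer the action package workflow over the console, registering for a built-in intent is a small piece of configuration. This is a hedged sketch — the exact schema is defined in the Actions on Google documentation, and `mygame` and the fulfillment URL here are hypothetical — showing roughly how a game might claim the `actions.intent.PLAY_GAME` built-in intent:

```json
{
  "actions": [
    {
      "name": "PLAY_GAME",
      "intent": {
        "name": "actions.intent.PLAY_GAME"
      },
      "fulfillment": {
        "conversationName": "mygame"
      }
    }
  ],
  "conversations": {
    "mygame": {
      "name": "mygame",
      "url": "https://example.com/fulfillment"
    }
  }
}
```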

The last tool I'll talk about for discovery is called Action Links, an awesome tool to help you promote your experience. You'll see here a basic example of how we've promoted these experiences, including in our own Assistant directory, where users can deep link directly from the directory into your action on the device of their choice. What's totally cool about Action Links is that they can help you promote your action to your owned-and-operated audiences: everything from your social sites, like the tweet from Brad you saw earlier today, to your own website, like BuzzFeed, where your users can deep link directly from the site into your action. You can promote these in apps or even on a billboard, so it's not

strictly limited to digital channels, which is really exciting. Action Links actually solve two problems: one, they make it a ton easier to deep link into an action directly, and two, when users have a great, high-quality experience, they can share it with their family and friends really easily themselves. So think of all your users across channels helping get your message out there. Like built-in intents, Action Links are really, really easy to implement: you can simply generate a URL really quickly and then share it across all of your channels.
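To illustrate, here's a minimal sketch in Python of composing such a link. The base URL is an assumption for illustration — in practice you copy the generated link for your project from the Actions console — and the `intent` value and `param.*` keys shown are hypothetical and depend on how your action is defined:

```python
from urllib.parse import urlencode

def build_action_link(base_url, intent=None, params=None):
    """Append an optional intent and intent parameters to an Action Link.

    base_url is the link copied from the Actions console; intent and the
    param.* keys are illustrative placeholders.
    """
    query = {}
    if intent:
        query["intent"] = intent
    for key, value in (params or {}).items():
        query[f"param.{key}"] = value
    return f"{base_url}?{urlencode(query)}" if query else base_url

# Hypothetical project link and intent name:
link = build_action_link(
    "https://assistant.google.com/services/invoke/uid/000000abcdef",
    intent="trivia.question",
    params={"topic": "grooming"},
)
print(link)
```

The same URL can then be dropped into a tweet, an email, or a site, which is exactly the "share across all of your channels" idea above.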

Now, to see whether you've knocked it out of the park with discovery, let's talk about transactions and digital goods, an awesome way for you to monetize your business and grow your experiences. Last year we launched transactional capabilities via the Transactions API. This was limited to a phone-based experience and physical goods, and it works really well for simple transactions today. You've probably heard lots of updates about transactions in a few sessions, but to quickly recap: transactions will be on more devices, expanding to more complex transactions; you'll be able to purchase digital goods; and they will

be available more globally. When we talk about transactions on the Assistant, we're talking about three things: the ability for a user to purchase a good, purchase a service, or book a reservation with their favorite brand on the Assistant. And no matter what transactional experience you're trying to build, the foundational elements are always the same, as you'll see here; the developer always goes through the same process. So I'll quickly touch on two big announcements that were made today, which are really exciting.

First, voice-based transactions: users can now purchase goods and services on voice surfaces. This means our users can now complete transactions on surfaces like Google Home, third-party speakers, and smart screens, which was not available before. Second, and really exciting, we will soon open up the ability for app developers to sell digital goods on the Assistant. This will work across both

phone and voice surfaces and will be supported in multiple languages and markets: the UK, Canada, France, Australia, Germany, and Japan. We're thinking of three key use cases here as we launch this, and we're really excited about diving deeper and exploring these with our partners. First, one-off digital goods. This is super cool for some of the gaming developers that I work with today: the ability to purchase gems, extra time, perhaps characters or avatars throughout your experience. Really exciting. Secondly, subscription services, so think of things like streaming

music or video. And lastly, digital media purchases, so things like audiobooks or rentals. So, the final pillar: you've built up this awesome audience, you've built an awesome experience, and you've acquired some users along the way. How do you continue to build loyalty and retention with this great audience? In developing the Assistant over the past few years, we've thought a lot about when, how, and where folks are actually interacting with Google, and we noticed that they do this in clusters throughout their day. So perhaps in the morning, they do

things like ask the Google Home what their day might look like before heading out the door. They might be using navigation during the day; perhaps they're searching or listening to music. And then at night, when they're winding down, perhaps they're watching YouTube or playing games, for example, or spending time with their family. What we know here, which is really interesting, is that people are creatures of habit. We've identified some of these clusters where the Assistant can actually provide extra value in those instances, in micro-moments throughout a user's

day. This new feature, Routines, lets users combine multiple actions into a single command. So what does that actually mean? When you wake up in the morning and you say, "Hey Google, good morning," your Assistant can do a few things in one simple command: it can take your phone off silent, it can adjust your lights and thermostat, it can tell you about your day, the weather, and your commute, and it can perhaps play your favorite podcast or read you the news. As a developer, you simply need to check a box to opt in to this feature, and then users can add you to their routines. After routines, I'll talk

about two final re-engagement opportunities that are really exciting: daily updates and push notifications. Daily updates are when Google can share a notification for an update at a time specified by the user each day. One example of this: you might check transportation or news every single morning at the same time, and people understand that about themselves. As you can see here, which is an awesome example of something that's really content-fresh, a user can opt in to something from Esquire; for example, they can receive a fresh daily wisdom tip at the same

time each day. Push notifications are really exciting because they can help you do things in real time, so things like news or stock alerts from your favorite publisher. This is really exciting for me; I'm a huge nerd, and I love tracking the stock market in real time. It's exciting because it provides value to both users and developers. For users, it helps give you access to relevant information that is immediate, content-fresh, and specific to you. And it's really valuable to brands.
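Under the hood, a push notification is delivered through the Actions API once a user has opted in and you've stored their updates user ID. Here's a hedged sketch of what the payload sent to the `conversations:send` endpoint looks like — the intent name `tell_tip`, the notification title, and the truncated user ID are all hypothetical placeholder values, and sending requires a real service-account access token:

```python
import json
import urllib.request

def build_push_payload(user_id, intent, title, locale="en-US"):
    """Build a customPushMessage body for the Actions API.

    user_id is the updates user ID collected after the user opts in;
    intent must match one configured for notifications in your action.
    """
    return {
        "customPushMessage": {
            "userNotification": {"title": title},
            "target": {
                "userId": user_id,
                "intent": intent,
                "locale": locale,
            },
        }
    }

def send_notification(payload, access_token):
    """POST the payload to the Actions API (needs a service-account token)."""
    request = urllib.request.Request(
        "https://actions.googleapis.com/v2/conversations:send",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {access_token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(request) as response:
        return response.read()

# Example payload with hypothetical values (not sent here):
payload = build_push_payload("ABwppHE...", "tell_tip", "Your daily wisdom tip")
```

A daily update follows the same opt-in pattern, except the Assistant handles the once-a-day scheduling for you instead of your server choosing when to send.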

Push notifications help you enhance your utility as part of your users' day. Perhaps you've built a great experience and you're trying to re-engage with someone and nurture a relationship that's falling off, or a user needs a gentle nudge about something that's timely. So, I know it's been a long day out there, and we really appreciate everyone coming out today. We hope that you take away a few things. One: think about your use case and the easiest way to get started; templates could actually be a really great solution for you. I

encourage everyone to try them out, as Sachit mentioned; they're a really great way to dip your toes in the water. Secondly, please lean in with us and learn more: check out some of our awesome sessions on transactions, Dialogflow, learning more about your audience, and building games and sound effects. They're really exciting, so please check those out later this week; you can see some of the sessions listed here. And lastly, we just want to note the codelabs; you can check them out here throughout I/O. So thank you so much for having us. Enjoy the rest of your

week.
