Best practices to design AR applications

Alex Faaborg, Design lead at Google
Alesha Unpingco, User Experience Designer and Prototyper at Google
2018 Google I/O
May 8, 2018, Mountain View, USA
Duration: 33:59

About speakers

Alex Faaborg
Design lead at Google
Alesha Unpingco
User Experience Designer, Prototyper at Google

Alex Faaborg is the design lead for Google's AR Platform team. He has led the design of Android Wear, Google Now, and the Android system UI for KitKat. Prior to joining Google, Alex served at Mozilla as the principal designer on Firefox for five years. Alex has a master's degree from the MIT Media Lab, where he researched artificial intelligence, intelligent agents, and context-aware computing. Alex's interaction design work is based on fundamentals of human cognition and perception, from his background in cognitive science.


Alesha Unpingco designs for both virtual and augmented reality as a user experience (UX) designer on the Google Daydream team. Prior to Daydream, she designed interactive experiences for various Google Ads & Commerce tools; led UX work for clients like Honda, Acura, and Intuit at Rubin Postaer and Associates; and applied design thinking methodologies to advance social good in startups, nonprofits, and higher ed. Alesha earned a bachelor's degree from the University of California, Los Angeles.


About the talk

AR is a new medium with an entirely new set of design challenges and opportunities. This session will cover everything Google has learned so far about AR design, presented as a set of specific best practices. Topics will include volumetric interfaces, object placement, scene understanding, and designing for all users.

Transcript

We're going to talk to you today about designing AR applications. Google has a long history of designing for AR: we've been doing mobile AR for the past four years, and we worked on other augmented reality projects before that. Mobile AR is really starting to take off, and Google ARCore makes it possible for anybody to create quality AR content. To share a few fun facts about ARCore: we released 1.0 at the end of February, which made AR content available to more than 100 million devices, and we're already seeing rapid growth, with more than 300 apps available in the Google Play Store.

You're here because you want to learn how to design for AR, and what we've found is that once you understand your users and the type of experience you're trying to create, the design principles for getting started fall into five categories, which we call the pillars of AR design. First, you want to understand your user's environment: where will users be experiencing your app? Think about the surfaces that are available and how your app can adapt to different environmental constraints. Then you want to consider the user's movement and how much space the user will need in order to experience your app. When it comes to initialization and onboarding, make the onboarding process as clear as possible, so that users understand exactly what to do in this entirely new medium. For object interactions, design natural interactions that convey the affordances and the feedback users need to understand how digital objects fit into the context of their real, physical space. And when you're thinking about user interfaces, balance on-screen UI with volumetric, in-scene interface design so that you create an experience that is meaningful and usable.

We have examples that showcase the different guidelines within each of these pillars, and the thing we want to point out is that this framework can help anybody get started with AR content creation. Throughout the talk we'll show you demos that you'll be able to play with very soon in an app launching on the Google Play Store called ARCore Elements, and many of the core interaction patterns you'll see are already available in Sceneform or will be available for Unity later this summer.

Let's start with the user's environment, the first pillar of AR design. AR is a relatively new technology, so to begin, let's talk about what ARCore can actually do. The first capability everyone is familiar with is surface plane detection, where it understands surfaces: tables, floors, those types of things. It can also do walls, and, better than just horizontal and vertical surfaces, it can handle angled surfaces too, with oriented points letting you place an object at any angle. ARCore does light estimation, which is really important for making objects look realistic in a scene; there are also some other fun things you can do with it that we'll get into. Announced at I/O yesterday were Cloud Anchors, available now for ARCore on both Android and iOS, which let you build multiplayer AR experiences with two people viewing the same thing. Also announced yesterday were Augmented Images: the ability to recognize an image, and not just recognize it but use it to derive 3D pose data, so you know where it is in space. That's a survey of ARCore's capabilities, and the set is of course growing over time. What we've found is that the more your app integrates with the user's environment, the more magical it is going to feel.
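To ground these capabilities before the examples, here is a minimal tap-to-place sketch of how plane detection, anchors, and renderables typically fit together in Sceneform; the layout ids and the andy.sfb asset are illustrative assumptions, not details from the talk.

```kotlin
import android.net.Uri
import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity
import com.google.ar.sceneform.AnchorNode
import com.google.ar.sceneform.rendering.ModelRenderable
import com.google.ar.sceneform.ux.ArFragment
import com.google.ar.sceneform.ux.TransformableNode

class PlacementActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // activity_placement is assumed to contain an ArFragment with id ar_fragment.
        setContentView(R.layout.activity_placement)
        val arFragment =
            supportFragmentManager.findFragmentById(R.id.ar_fragment) as ArFragment

        // The tap is raycast against surfaces ARCore has detected:
        // horizontal planes, vertical walls, and oriented points.
        arFragment.setOnTapArPlaneListener { hitResult, _, _ ->
            ModelRenderable.builder()
                .setSource(this, Uri.parse("andy.sfb")) // hypothetical asset
                .build()
                .thenAccept { renderable ->
                    // Anchor the model to the exact pose where the tap hit the surface.
                    val anchorNode = AnchorNode(hitResult.createAnchor())
                    anchorNode.setParent(arFragment.arSceneView.scene)
                    val node = TransformableNode(arFragment.transformationSystem)
                    node.renderable = renderable
                    node.setParent(anchorNode)
                    node.select()
                }
        }
    }
}
```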

Let's walk through a few examples. First, surface plane detection. AR apps today typically use only one surface, but there's really no reason you have to: you could use all of the detected surfaces, and the moments when your game interacts with multiple surfaces can be real breakout moments where the game feels genuinely magical. Say you're playing a physics-based game and you destroy your opponent's castle, and the bricks fall onto the floor: that moment when you see the pieces scattered across the floor can be really quite stunning.

Next, light estimation. This is critical for making objects look realistic in the scene. Here's an example we were working on to test out some different techniques. We have three fake plants and one real plant (well, really a real fake plant): one unlit, which is the most basic lighting you can do, one dynamic, and one with a combination of dynamic and baked lighting, where the leaves of the plant actually go a little darker as they pick up the scene's lighting. Here you can see how it looks with some movement. Lighting is definitely where a lot of the innovation in this space is going to occur as we try to get more and more realistic results, and you can see where we are right now. Especially as a scene gets darker, you start to see that unlit objects just don't hold up as well, so it's really important that you use the real-time lighting APIs in ARCore. The other thing you can do is use lighting as a trigger to change something: in this example, when you turn the light switch off in the room, the city actually glows in response to that change.
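A minimal sketch of that light-as-trigger idea, reading ARCore's per-frame light estimate from a Sceneform update listener; the 0.3 threshold and the setCityGlow helper are illustrative assumptions.

```kotlin
import com.google.ar.core.LightEstimate
import com.google.ar.sceneform.ux.ArFragment

fun watchLighting(arFragment: ArFragment) {
    arFragment.arSceneView.scene.addOnUpdateListener {
        val frame = arFragment.arSceneView.arFrame ?: return@addOnUpdateListener
        val estimate = frame.lightEstimate
        if (estimate.state == LightEstimate.State.VALID) {
            // pixelIntensity is an average brightness of the camera image;
            // treat a low value as "the lights went out" and trigger the glow.
            setCityGlow(enabled = estimate.pixelIntensity < 0.3f)
        }
    }
}

fun setCityGlow(enabled: Boolean) {
    // Illustrative: swap the city's material for an emissive one, etc.
}
```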

These types of moments can feel really magical for users: imagine you're playing this city-simulation game and it's making significant, meaningful changes based on the environmental light.

Next, oriented points. This is a very new feature, so we don't have a whole lot of examples yet, but here's a basic one we filmed when I was out skiing: attaching Androids to the side of a tree. You can see that as you place them, they stick to exactly that point on the tree, at the angle of where the branches were.

Cloud Anchors, announced yesterday, let two people place and view anchors in exactly the same spot and play a game together. This is tremendously fun; you can try it out in the sandbox if you want to stop by later today, and again, it works on both Android and iOS.
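The speakers don't show code for this, but at the API level a shared Cloud Anchor session boils down to a host/resolve pair, roughly as in this sketch; the transport for exchanging the anchor ID between players is up to you and omitted here.

```kotlin
import com.google.ar.core.Anchor
import com.google.ar.core.Session

// Device A hosts a locally created anchor with the ARCore cloud service.
// The returned anchor's cloudAnchorState should be polled each frame; once
// it reaches SUCCESS, its cloudAnchorId can be sent to the other player.
fun hostAnchor(session: Session, localAnchor: Anchor): Anchor =
    session.hostCloudAnchor(localAnchor)

// Device B resolves the same anchor from the shared ID, giving both
// players a common coordinate frame for the multiplayer scene.
fun resolveAnchor(session: Session, cloudAnchorId: String): Anchor =
    session.resolveCloudAnchor(cloudAnchorId)

// Either side, once per frame, before attaching game content:
fun isAnchorReady(anchor: Anchor): Boolean =
    anchor.cloudAnchorState == Anchor.CloudAnchorState.SUCCESS
```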

Then there are Augmented Images, and there are a lot of different ways you can use them. One demo we have in the sandbox that you can go check out is an art exhibit built using Augmented Images. We're really excited about what you can do here; it opens up lots of possibilities, from artworks, to a toy that you watch come to life, to the surface of a product box showing a 3D model of what you're about to play with.
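As a sketch of the underlying API for that product-box idea: you register reference images in a database, then read back each recognized image's pose every frame. The image name and bitmap source here are illustrative.

```kotlin
import android.graphics.Bitmap
import com.google.ar.core.AugmentedImage
import com.google.ar.core.AugmentedImageDatabase
import com.google.ar.core.Config
import com.google.ar.core.Frame
import com.google.ar.core.Session
import com.google.ar.core.TrackingState

// Register the box art as a reference image ARCore should look for.
fun configureImageTracking(session: Session, boxArt: Bitmap) {
    val db = AugmentedImageDatabase(session)
    db.addImage("product_box", boxArt) // name is illustrative
    val config = Config(session)
    config.augmentedImageDatabase = db
    session.configure(config)
}

// Once per frame: a tracked image carries a full 3D pose, which is what
// lets you pin a model of the toy to the physical box.
fun onFrame(frame: Frame) {
    for (image in frame.getUpdatedTrackables(AugmentedImage::class.java)) {
        if (image.trackingState == TrackingState.TRACKING) {
            val pose = image.centerPose // where the image sits in space
            // ...attach the 3D model at `pose`...
        }
    }
}
```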

Now that we've gone over the basics of ARCore's capabilities, let's talk about how you'd actually start to design an app for AR. One of the first things you'll wonder is: okay, where do I actually start? You have a blank page and you're ready to start brainstorming new ideas for AR. The thing I want you to focus on is that AR exists outside of the phone, so your design work should exist outside of the phone as well. What we've found with people who have done tremendous amounts of mobile design is that they tend to be very attached to the phone frame and to flows of screens. They've been doing that for so long that one of the first things to do when you start thinking about AR is to put away all of those 2D UI stencils and not think about the phone at all. Instead, sketch living rooms and end tables and outdoor spaces, and as you sketch the user's environment, start sketching the various AR objects the user is going to interact with in that environment. In many ways you can think of AR as having a lot of the same challenges as responsive design for the web, in terms of different window sizes, but it's even more complicated, because now you have responsive design for 3D spaces: the user's actual living room. You also want to sketch the user, for scale, to get a sense of how you're going to craft the experience, since the objects could be very large relative to the user or very small. Then start thinking about how that user is going to move around in the environment, which brings us to user movement.

Now that we understand how to design for the environment, let's think about how to design for the user's movement. As Alex mentioned, it's completely okay to design beyond the bounds of the screen, and we've found that in many ways this can make the experience feel more delightful and even more immersive: when an object begins on screen and extends beyond the boundaries of the phone's viewport, it can make the user feel like the object is really there. Beyond that, it can motivate users to organically move the phone around their environment so they can appreciate the full scale of these digital objects in their physical space.

That brings us to our next observation. Because users are more familiar with 2D mobile applications that don't typically require movement as a form of interaction, it can be very challenging to convey to users that they're able to move around, and many users don't, because it just doesn't feel natural based on how they've used devices in the past. What we realized is that characters, animations, or objects that draw visual interest on screen and then move off screen can be a natural way to motivate users to move. Here we have a bird that appears in the middle of the screen; when it flies off screen, it's replaced with a marker that slides along the edge of the screen to help users understand the bird's location in relation to them.
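The bird's edge marker can be driven by projecting its world position into screen space each frame, roughly as below; the MarkerState type is illustrative, and a full version would also handle points behind the camera, where the projection flips.

```kotlin
import com.google.ar.sceneform.Camera
import com.google.ar.sceneform.math.Vector3

data class MarkerState(val x: Float, val y: Float, val offScreen: Boolean)

fun markerFor(camera: Camera, birdWorldPos: Vector3,
              screenW: Float, screenH: Float): MarkerState {
    // Project the bird's 3D position to pixel coordinates.
    val p = camera.worldToScreenPoint(birdWorldPos)
    val onScreen = p.x in 0f..screenW && p.y in 0f..screenH
    // Clamp to the viewport so the marker slides along the nearest edge.
    return MarkerState(
        x = p.x.coerceIn(0f, screenW),
        y = p.y.coerceIn(0f, screenH),
        offScreen = !onScreen
    )
}
```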

Another major thing to think about: whenever you have an experience that requires the user to move, you also want to think about how much space the user needs. We've found that experiences fall into three sizes: table scale, room scale, and world scale. With table scale, your experience can scale down to the smallest of surfaces, so that as many users as possible are able to enjoy it. Room scale expands the impact of AR, so content starts to feel life-sized and you can do a lot more with the space available. And world scale has no limits: it lets users experience AR in whatever area they see fit, and it's an area we're particularly excited about because of what it means for procedurally generated content. No matter what size your experience ends up being, remember to set the right expectations for users, so they understand how much space they will need; it can be very frustrating for a user to realize in the middle of a game that they don't have enough space to enjoy it.

As for how much movement your experience should require, there's no one-size-fits-all answer; it really depends on the experience you're trying to create. For example, if you have a game that uses movement as a core part of the interaction, that can be very delightful. You can use proximity, or distance, to trigger different actions, so that as the user gets closer to this frog it leaves behind a mushroom, or maybe the mushroom disappears, and that can be really cool to see.
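A proximity trigger like that frog is just a per-frame distance check between the camera and the object; this sketch uses Sceneform's update listener, with the half-meter threshold and spawnMushroom helper as illustrative assumptions.

```kotlin
import com.google.ar.sceneform.Node
import com.google.ar.sceneform.Scene
import com.google.ar.sceneform.math.Vector3

fun addProximityTrigger(scene: Scene, frog: Node) {
    scene.addOnUpdateListener {
        // Sceneform world units are meters, so this fires within ~0.5 m.
        val toUser = Vector3.subtract(scene.camera.worldPosition,
                                      frog.worldPosition)
        if (toUser.length() < 0.5f) {
            spawnMushroom(frog.worldPosition)
        }
    }
}

fun spawnMushroom(at: Vector3) {
    // Illustrative: place (or remove) the mushroom node near the frog.
}
```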

However, if you have a utility app whose core purpose is to help users understand complex data and information, then requiring users to move might be a really bad experience, because users with movement or environmental limitations won't be able to get the complete app experience. Allowing users to manipulate the object instead, rotating it and moving it around into whatever space is appropriate, ensures that all users have easy access to the data they seek.

Because AR is relatively new, the actual flow for users from the 2D parts of an app into 3D can at times be a bit awkward, and we're starting to create standards around that, so let's talk about initializing into AR. One of the first things you can do is leverage the standard View in AR material icon: when users see it, they know that tapping it will take them into AR. Include it in all the normal places icons appear, like a floating action button or on top of cards, as the indicator that you can view this object in 3D in your environment.

The next thing you'll have seen, if you've been playing with lots of AR apps, is something you might not fully understand, so I want to talk about how perceiving depth actually requires movement. You'll see these types of animations that try to get the user to move their phone around. Why is that? We perceive some depth because we have two eyes, but we actually get a lot of our depth information from moving our heads around within a scene. In the case of ARCore, most current phones on the market have only a single camera on the back, so the device has one eye, and if it hasn't moved yet, it doesn't necessarily know what's going on. When the phone first sees a scene, it's effectively saying: all right, that's interesting, but I don't totally have a sense of where these objects are yet; I need you to move just a little bit. As soon as you bring in a little movement, it has enough data from different angles on the scene to start building a model of what it's seeing. That's why we have these animations at the start of the app: to produce that movement and give ARCore enough information to recognize the scene.

The next thing to think about is deciding whether users can easily move objects after they've been placed, or whether an object is more permanent. Again, there's no right answer here; a more persistent object might be something like a game board that itself takes input. What we recommend is using standard icons to set expectations, so users know, as they place an object, whether it will be movable later by swiping on it. Some examples: say you're placing a city game, where swiping on the city means interacting with the game itself. We recommend using an anchor icon for these more persistent placements. The user may still be able to move the game board later, perhaps through a menu screen or some kind of re-anchoring flow, but the expectation is set ahead of time that the city is going to be stuck to the ground for a while as you interact with the game. Contrast that with something like shopping for furniture, where you've just placed a chair in the scene: the chair itself is interactive, so you can map swipe gestures onto the chair to move it around easily. Using the plus icon there sets the expectation ahead of time that you're not committing to exactly where you place the object.

Now that we understand how to onboard users, let's start thinking about how users can interact with objects in their space. One of the things we challenge you to think about, as AR designers and developers in the community, is how to design around user behavior, even when it's unintentional. One thing we recommend is giving users feedback on object collisions. This solves a huge problem we see in mobile AR: a user moves the device around, the device collides with an AR object, that object might disappear, and the user has no feedback on how to fix it. We recommend providing feedback in the form of camera filters or special effects that help users understand that object collision is not an intended interaction, and this turns out to work really well.

The other thing to think about is how to give users the right type of feedback on object placement, and it's really important here to think through each stage of the user journey, even as it relates to surface feedback. Surface feedback in AR matters because it helps users understand how ARCore understands the environment, giving them a sense of the range of surfaces that are available. So we recommend including feedback on the surfaces when the user is placing objects in the scene. We also recommend maintaining the height of the tallest surface as the user drags an object from one surface to another, and, once an object is suspended in the air, always communicating visual feedback on the drop point. That way it's clear to the user at all times where the object is going to land.
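One way to implement that tallest-surface rule during a drag is to hit-test the drag point against every detected plane and keep the highest hit, as in this sketch:

```kotlin
import com.google.ar.core.Frame
import com.google.ar.core.Plane

// While dragging, call with the current finger position in pixels.
// Returns the world-space height (y) of the tallest plane under the
// drag point, or null if no plane is hit, so the dragged object never
// dips down through an intermediate, lower surface mid-drag.
fun dragHeightFor(frame: Frame, screenX: Float, screenY: Float): Float? =
    frame.hitTest(screenX, screenY)
        .filter { it.trackable is Plane }
        .maxOfOrNull { it.hitPose.ty() }
```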

Once an object is placed into the scene, we also recommend providing visual feedback on the surface, or even on the object itself, to communicate the object's entry into the physical environment.

Now that we know how to play with objects in the scene, let's think about how an object might get there in the first place. We recommend using gallery interfaces to communicate to users how they can take objects that live on screen and drag them out into the real world. Here you see a gallery strip in the bottom bar, and as users select an object they're able to drag it into their space. Not only that, we're able to support both selection states and the very familiar gestures that let users manipulate objects: pinch to scale, twist to rotate, and drag to move. You've seen many examples throughout our talk of how dragging objects is a very common and expected behavior.
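In Sceneform these three gestures come built in with TransformableNode; this sketch only tunes its controllers, and the scale bounds are illustrative.

```kotlin
import com.google.ar.sceneform.ux.ArFragment
import com.google.ar.sceneform.ux.TransformableNode

fun makeManipulable(arFragment: ArFragment): TransformableNode {
    val node = TransformableNode(arFragment.transformationSystem)
    node.scaleController.minScale = 0.5f        // pinch to scale
    node.scaleController.maxScale = 2.0f
    node.rotationController.isEnabled = true    // twist to rotate
    node.translationController.isEnabled = true // drag to move
    return node
}
```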

Another alternative for object selection and movement is the reticle. Reticle selection is also very effective in that it lets users manipulate objects in the scene without covering too much of the view. We have an example here where reticle selection is used to pick up a rock, triggered via the action button in the bottom right. What the reticle really buys you is that it lets users see the many surfaces that are available: as you can imagine, when a user selects an object with their finger and drags it across the screen, there isn't much screen real estate left to show all of the surfaces where they might want to place the object, so reticle selection is very impactful here. The other thing you get with reticle selection is raycasts, which are very effective in helping the user get a sense of the virtual weight applied to each object. Here we have another example where the user picks up a feather; once the feather is picked up, you'll notice the raycast has very little movement, very little bend, and for the most part remains straight. When the user picks up the rock, however, you see a much more dramatic bend applied to the raycast, signifying the larger mass, the heavier weight, of this object relative to the feather.

All right, let's move on to the final pillar, which is volumetric interface design. One of the first things to consider here is that the phone is the user's viewport: they're actually using the phone to look into the scene and see the application. Because of that, you don't want to place a lot of 2D UI on the screen, since it will obscure the user's view of your application. As a quick example, it's obviously a lot nicer to have a limited set of controls; as soon as you start to clutter the screen, it really gets in the way of the user's ability to enjoy the AR. The somewhat counterintuitive thing we've found is that users are so focused on the app out in the world that placing a control at the screen level, which designers often do precisely to draw attention to it, actually has the opposite effect: users are more focused out in the scene, so they tend to miss controls drawn on the surface of the screen and tune them out. So be very mindful when deciding whether a control goes up on the screen versus out in the scene itself, both to avoid cluttering the view and for the discoverability of the controls. That's not to say you should never put a control on the screen, but you want to weigh a few criteria. Our recommendation is to reserve the on-screen surface for controls that have a very high frequency of use or that require very fast access. A camera shutter button is the perfect example of something that hits both criteria: we take lots of pictures, and we want to take them quickly. Or imagine you're playing a game with an ability to fire: that would be a good candidate for an on-screen control, because you're both using it a lot and you need to reach it very quickly.

We talked about using the View in AR icon to get people into the experience and transition from 2D into AR; you also want to be very careful about the opposite, when users are in AR and transitioning back to a 2D experience. If the user is not the one initiating that move back to 2D, it can be pretty obnoxious: they're focused out in the scene, using the application, and suddenly a 2D UI shows up and blocks the entire viewport. Whether the user is exiting, customizing an item in the scene, or whatever the case, you want that flow back to screen-level 2D UI covering most of the screen to be something the user actively does, not something that happens by surprise.

There's also a common principle from mobile application design: you want to maintain touch targets that are about the size of a user's finger. In 3D this is of course a bit harder, because the object could be at any distance from the user. Here's a quick example of something you can do. We have two tennis balls, and when you tap on a tennis ball, confetti fires out of it, because in AR you can do whatever you want. We're showing the touch target size with the dotted line. One of these tennis balls maintains a reasonable touch target size as it gets farther away, whereas the other simply maps the target to the virtual size of the object, and of course it's a lot easier to interact with the one that maintains a large target size. We've also found, for interfaces where you're manipulating objects, that if you don't do tricks to maintain target size, you often get situations where you swipe an object very far away and then it's hard to bring it back, because it's now such a small target that you have to walk over to the other side of the room to get it. That can be frustrating; maybe you can call it very immersive, but either way, it's nicer to be able to bring the object back.
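One trick for keeping a tap target finger-sized at any distance is to grow the node's collision shape with its distance from the camera, so its projected size stays roughly constant; the base radius here is an illustrative guess, and you'd call this from a per-frame update listener.

```kotlin
import com.google.ar.sceneform.Node
import com.google.ar.sceneform.Scene
import com.google.ar.sceneform.collision.Sphere
import com.google.ar.sceneform.math.Vector3

fun updateTouchTarget(scene: Scene, node: Node) {
    val dist = Vector3.subtract(scene.camera.worldPosition,
                                node.worldPosition).length()
    // Roughly 4 cm of tappable radius per meter of distance, never
    // smaller than 4 cm, so the target stays about finger-sized on screen.
    val radius = maxOf(0.04f, 0.04f * dist)
    node.collisionShape = Sphere(radius, Vector3.zero())
}
```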

On the whole, you want to be thinking about which controls go on the screen versus out in the scene, and a mantra the team has adopted is: scene over screen. We talked about some edge cases where you'd want to put something at the screen level, but we've found that many people's initial reaction is to design everything at the screen level, because that's the type of design work we've been doing for 2D applications. You really want to start thinking more about volumetric UI and putting your UI out into the scene itself.

To give a quick example, this is one of the demos that ships with Sceneform: a solar system simulator. Loads of fun. (Also, it's missing a planet right now; we did fix that for the public release.) You'll notice in the video that you could have put a menu up in the corner, throwing something onto the screen, but the problem then is that you can't stay as immersed in the simulation while you're interacting with it. The alternative is to leverage the objects in the scene itself: as you tap on planets, you get feedback on which planet it is, for educational use cases, and in this particular demo, tapping on the sun is how you start to control the entire solar system. Here the user taps on the sun and it brings up a panel. This is actually an Android view in Sceneform; you can map standard Android views into AR, and here you have controls like changing the orbital speed or the rotational speed of the planets themselves. It's really nice to be able to interact with these objects in the scene without that sudden loss of visibility that takes you out of the experience.
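A sketch of that in-scene panel pattern: a standard Android layout rendered as a Sceneform ViewRenderable and parented to the sun's node. The solar_controls layout name is illustrative; the talk doesn't name it.

```kotlin
import android.content.Context
import com.google.ar.sceneform.Node
import com.google.ar.sceneform.math.Vector3
import com.google.ar.sceneform.rendering.ViewRenderable

fun attachControls(context: Context, sun: Node) {
    ViewRenderable.builder()
        .setView(context, R.layout.solar_controls) // ordinary Android layout
        .build()
        .thenAccept { panel ->
            val controls = Node()
            controls.renderable = panel
            controls.setParent(sun)
            // Float the panel slightly above the sun so it stays readable.
            controls.localPosition = Vector3(0f, 0.25f, 0f)
        }
}
```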

One final point, which is AR presence. We'd actually seen this coming up in user research studies: people would be looking through the phone at a placed object, then step to the side of the phone as if to see the thing directly, and the person running the study would laugh, because of course you can only see it through the phone. We always laughed when we saw this happen. Then I was testing out an app involving, let's say, plastic interlocking bricks. I had a set of instructions for what I was building, and I'd been playing for a long time, when at one moment I looked over for the instruction book and it wasn't there. I had the reaction you normally have when an object just disappears in real life, and of course I immediately felt silly: right, it's AR. But I had been so immersed in the experience, and had been playing for so long, that I was no longer mentally tracking what was real and what was virtual, and I was genuinely startled by the difference. You're going to have this experience too as you interact with these applications, and I'd say that's the moment when your application is performing really, really well, because it means the user is completely immersed in it. So if you ever see these moments where people look at, say, a vase through their phone, then look away from the phone, see it vanish, and react, that's good: it means the app is performing great.

All right, so we've covered the five pillars of AR design, which again are: understanding the user's environment, planning for the user's movement, onboarding users by initializing smoothly, designing natural object interactions, and balancing on-screen and volumetric interface design. Again, we believe this framework will help anybody get started with creating apps that everybody can enjoy. We've saved a video for you of some amazing content that designers and developers like yourselves in the community have created; we hope you enjoy it. We were very happy to share everything with you today, and we look forward to seeing what you create. Please fill out our survey and check out our resources online. Thank you.
