
The future of the web is immersive

Brandon Jones
Software Engineer at Google
2018 Google I/O
May 8, 2018, Mountain View, USA

About speakers

Brandon Jones
Software Engineer at Google
John Pallett
Product Manager at Google

About the talk

This talk demonstrates how the web can extend the reach of VR and AR experiences. Get an update on WebXR and a quick look at the tools developers can use to add web VR experiences to existing websites. The session also includes a demo of AR on the mobile web, helping developers understand how to get started today.


Welcome, everyone, to our talk today on the future of the web and immersive computing. My name is Brandon Jones, and I'm a developer on the Chrome team who's been driving forward immersive web standards. We feel that 2018 will mark the true kickoff point for what we've been calling the immersive web. The technologies involved have all begun to stabilize enough that we can begin exposing them to the web platform with confidence, and while the APIs that we'll be discussing today are still in development, if you start learning how to use them now, they'll be shipping in stable browsers by the time you're comfortable with them.

So what do we mean when we say the immersive web? Well, it's the collection of new and upcoming tech that prepares the web for the full spectrum of immersive computing, which includes virtual reality and augmented reality. More generally, we think of the immersive web as anything that gives the web a sense of depth, volume, scale, or place. Here's how we think about these technologies in general: virtual reality, or VR, can take you anywhere, while augmented reality, or AR, can bring anything to you.

We'll be covering both in this talk. First, I'll cover how we're bringing VR to the web today, and then John Pallett will come up and show you our upcoming efforts to do the same thing with AR.

So let's start with virtual reality. Chances are you already know about VR headsets like the Daydream View, the HTC Vive, or the Lenovo Mirage Solo that was released last week and is on display here. They use a combination of head tracking, screens, optics, and controllers to make you feel like you're present in a different place when you use them. Last year at I/O we presented the WebVR API as a way for web developers to communicate with these devices. WebVR was really well received, but it also had limitations that became apparent as the WebVR ecosystem grew. Since then we've been collaborating with content developers, device manufacturers, and other browser vendors to address those limitations, and the result is the new WebXR Device API. WebXR replaces WebVR, evolved to meet the needs of a growing immersive computing ecosystem. It'll serve as the foundation for the immersive web, exposing not just VR but soon AR functionality as well to web developers, and it also goes beyond headsets to make those immersive worlds accessible to as many people as possible.

Of course, we recognize that migrating existing code to a new API is a bit of a pain, but we also think the benefits will be well worth the short-term inconvenience. WebXR's API was designed by listening to lots of feedback from developers of real-world content, just like you, and it explicitly addresses many of the issues that they reported. Additionally, the new API provides a platform for AR features and will be more forward compatible with a wider range of devices.

It's also cleaner, more consistent, more predictable, and enables browsers to apply more optimizations. So what do I mean when I say optimizations? The new API has let us streamline our rendering pipeline by doing things like reducing the number of texture copies Chrome makes internally when presenting VR content. As a result, we're seeing that on Android we can default to using 50% higher resolution render targets than WebVR did without impacting the application's performance.

And all of that technobabble means that we're pushing twice as many pixels to the display versus the same content in WebVR. Put simply, your immersive content looks better and runs faster simply by using the new API.

So when can you start using it? Developers who want to try virtual reality features on the web can do so today in Chrome 67, which is currently in beta. There are two ways to get started. First, the XR Device API is available as an origin trial today. Origin trials are a way for Chrome to allow developers to deploy features that aren't quite finalized yet to users by embedding a token in the page header; in return, the developers agree to provide feedback on the feature so that we can improve it prior to shipping. Now, if you're not interested in actually deploying content today but just want to play around with the API, you can also enable it via Chrome's about:flags page to start testing. If you enable the API through either of these methods, you gain access to a variety of different ways of viewing immersive content.

First, WebXR in Chrome 67 gives you access to mobile VR devices on Android, which lets users drop their phone into a viewer and instantly turn it into a virtual reality headset. Of course, we recognize that there's a wide variety of mobile devices and browsers out there, which is why we're also making Cardboard compatibility available for other browsers like mobile Safari via a JavaScript polyfill. As you might expect, there are some limitations to going that route, but it allows developers to guarantee a baseline of functionality to all users.

Next, Chrome 67 also supports high-end desktop headsets like the HTC Vive or the Oculus Rift on Windows. These represent the highest quality VR experiences available to consumers today and provide an excellent opportunity for progressive enhancement of your immersive apps, by adding things like room-scale movement and six-degrees-of-freedom input. And finally, Chrome 67 also supports what we call Magic Window content. Magic Window refers to viewing the scene on a 2D web page that still tracks the device's motion. It gives users a hint that there's a larger world to see, and a way to consume your content even when a headset isn't handy.

So let's take a quick look at how we can use this new API to create a simple virtual reality application on the web. We're going to assume that you already have a WebGL scene that you would like to make viewable in virtual reality. First, we need to detect whether there are any WebXR devices available, and if so, we need to advertise that functionality to the user. To do this, we'll start by calling the navigator.xr.requestDevice function. The promise it returns will resolve to an XRDevice, which may represent a mobile device with VR and AR capabilities, a wired desktop headset, a standalone headset, or some other type of immersive computing device. Next we check to see if the device supports the mode that we want to use. In this case, because we want VR content to display on a headset, we ask if it supports exclusive access by the page. If it does, we add a button to our page to advertise the XR feature.
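
A minimal sketch of that detection flow, using the origin-trial API shape described in the talk (navigator.xr.requestDevice and exclusive sessions); the enterVRButton element and the beginXRSession function used here are illustrative placeholders:

```javascript
// Detect an XR device and advertise the feature to the user.
async function checkForXR() {
  if (!navigator.xr) return;                 // WebXR isn't available in this browser
  try {
    const device = await navigator.xr.requestDevice();
    // We want VR content on a headset, so ask for exclusive access to the display.
    await device.supportsSession({ exclusive: true });
    enterVRButton.hidden = false;            // advertise the XR feature on the page
    enterVRButton.onclick = () => beginXRSession(device);
  } catch (err) {
    // No XR device, or it can't do exclusive sessions; leave the button hidden.
  }
}
```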

When the user clicks on that button, we'll request what's called an XRSession. The session is what gives us access to all of the actual capabilities of the VR or AR device. You get a session by calling requestSession on the XRDevice we retrieved previously, specifying again that we want exclusive access to the display. Then, once we have a session, we need to create two supporting objects: a frame of reference and a WebGL layer. The frame of reference tells us how to position objects in the scene relative to the user. Here we're asking for a stage frame of reference, which attempts to put the origin on the user's floor; that way it's very trivial to line up your virtual floor with the user's physical one. Next we create a WebGL layer, which gives us a surface to render our 3D scene into; you can think of it kind of like a canvas element that's visible from within the headset. Then, once our setup is done, we kick off an animation loop by calling requestAnimationFrame on the session, which is very similar to the window.requestAnimationFrame you're already familiar with.
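
A sketch of that session setup, again following the origin-trial shape from the talk; it assumes gl is an existing WebGL context that has been made compatible with the XR device, and onXRFrame is the render-loop callback shown in the next sketch:

```javascript
let xrSession = null;
let xrFrameOfRef = null;

async function beginXRSession(device) {
  // Exclusive access means the content is presented on the headset.
  xrSession = await device.requestSession({ exclusive: true });

  // A 'stage' frame of reference puts the origin on the user's floor, so the
  // virtual floor lines up with the physical one.
  xrFrameOfRef = await xrSession.requestFrameOfReference('stage');

  // The WebGL layer is the surface we render our 3D scene into.
  xrSession.baseLayer = new XRWebGLLayer(xrSession, gl);

  // Kick off the XR animation loop (similar to window.requestAnimationFrame).
  xrSession.requestAnimationFrame(onXRFrame);
}
```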

With that, we're almost ready to begin rendering, but first we need to understand a new concept introduced by WebXR, and that's views. The WebXR render loop provides your application with a list of views that must be drawn every frame in order for the scene to appear correctly on whatever device you're viewing it on. A view provides all of the values needed to render a single image of the WebGL scene, such as the view and projection matrices and the framebuffer viewports. Most headsets will require you to draw two views, one for each eye, while a Magic Window scene will only require one view, and you can imagine that in the future we might have more exotic hardware that would require you to draw even more views than that to account for different display configurations. The benefit is that WebXR can use a unified rendering path for a huge variety of different devices and display modes.

And here's how that looks in practice. Our requestAnimationFrame callback is given an XRPresentationFrame object, which contains the list of views that we need to render. It also allows us to query the pose of the XR device, which is its position and orientation in space. To display on the headset, we bind the framebuffer provided by the WebGL layer that we created previously. Then we loop through each of the views provided by the frame, drawing into the viewport that each view describes, using the view and projection matrices that they provide. And when we put it all together, we can now provide an immersive virtual reality experience with the WebGL scene we had previously, one that works on mobile, desktop, and standalone VR devices.
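
Here's a sketch of that render loop as described: query the pose, bind the layer's framebuffer, then draw each view into its viewport. drawScene is a stand-in for your existing WebGL drawing code.

```javascript
function onXRFrame(time, frame) {
  const session = frame.session;
  session.requestAnimationFrame(onXRFrame);        // keep the loop running

  const pose = frame.getDevicePose(xrFrameOfRef);
  if (!pose) return;                                // no tracking data this frame

  // Draw into the framebuffer provided by the WebGL layer created earlier.
  gl.bindFramebuffer(gl.FRAMEBUFFER, session.baseLayer.framebuffer);

  for (const view of frame.views) {                 // one view per eye on a headset
    const viewport = session.baseLayer.getViewport(view);
    gl.viewport(viewport.x, viewport.y, viewport.width, viewport.height);
    drawScene(view.projectionMatrix, pose.getViewMatrix(view));
  }
}
```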

But of course, your users aren't content to just look; they want to interact with the immersive world that you're creating, so we need a form of input. A lot of the feedback we got was about the difficulty of handling input with the WebVR API, so we built a unified input system that works across all of WebXR's various form factors. The basic idea is that any time you're interacting with the scene, whether by tapping on the Magic Window, looking at an object and clicking a button on a Cardboard-style headset, or using a controller to point and click, all of these interactions can be described by a ray: a point of origin and a direction into the scene.

By using the rays that WebXR provides, you can trace into your scene, determine what object the user wants to interact with, and respond appropriately, no matter what input method they're using. Let's see how that looks in code. First, the user needs to know what they're pointing at, so we'll visualize whatever input method they have by iterating over the list of input sources from the WebXR session. Then for each input source we'll get an input pose, which describes the direction the input is pointing in and, if applicable, the position and orientation of the controller as well. Then, depending on the type of input source that we're dealing with, we'll render some combination of a controller model, a pointer ray, and a cursor to show where the input is pointing. You can see an example of how we might visualize a tracked controller, rendering both a representation of the controller itself and a pointer ray and cursor to help the user better target their input. If we were using a gaze-based input method like Cardboard, we'd only render the cursor, because it is fairly distracting to have a laser coming out of your forehead.
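
A sketch of that visualization step; the pose field names (gripMatrix, pointerMatrix) follow the early input drafts and were still in flux at the time, and drawControllerModel / drawPointerRayAndCursor are hypothetical helpers for your own renderer:

```javascript
function drawInputSources(frame, frameOfRef) {
  for (const inputSource of frame.session.getInputSources()) {
    const inputPose = frame.getInputPose(inputSource, frameOfRef);
    if (!inputPose) continue;

    if (inputPose.gripMatrix) {
      // A tracked controller: draw a model of it at its position and orientation.
      drawControllerModel(inputPose.gripMatrix);
    }
    if (inputPose.pointerMatrix) {
      // Draw a pointer ray and cursor along the pointing direction; for gaze
      // input like Cardboard you'd draw only the cursor.
      drawPointerRayAndCursor(inputPose.pointerMatrix);
    }
  }
}
```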

Now that the user can see where they're pointing, we need to be able to respond to what they're clicking on in the scene, and to do this we listen for the select event. It fires on the XR session and tells us which input source fired the event, which we can then use to query the pose at the time the event happened. This gives us a pointer ray. The application can then use that ray to perform a hit test against its internal representation of the scene, and if we detect that an object was selected, we trigger the appropriate behavior, which will depend on the context of your app.

This allows us to provide users with a way to easily point, click, and interact with any object in the scene, and to have that interaction be consistent across all of the devices that they might be using. The same event system also allows for grabbing and moving objects by listening for the selectstart and selectend events. And crucially, the select event is also considered by the user agent to be a user activation event, which was not the case with WebVR, one of the issues that we heard about from developers most frequently. This means that applications can start media playback with WebXR input just like they would with a mouse or a touchscreen.
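
A sketch of handling the select event as described; the event exposes the input source that fired and a frame we can query the pose from, and scenePickFromRay is a hypothetical helper that traces the pointer ray against your own scene representation:

```javascript
xrSession.addEventListener('select', (event) => {
  // Query the pose of the input source at the moment the selection happened.
  const inputPose = event.frame.getInputPose(event.inputSource, xrFrameOfRef);
  if (!inputPose) return;

  // Trace the pointer ray into the application's own scene graph.
  const hitObject = scenePickFromRay(inputPose.pointerMatrix);
  if (hitObject) {
    hitObject.onSelect();   // trigger whatever behavior fits your app
  }
});
```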

So we now have an interactive VR experience that runs on the web, and that is awesome. But not every user has a headset, and even if they did, not every user is on a browser that supports WebXR yet. We want your content to be accessible by a billion-plus users, and that's why we're expanding the immersive web's reach to 2D screens via Magic Window, and to non-WebXR-enabled browsers with our JavaScript polyfill.

First, let's talk about Magic Window. As I mentioned earlier, it's a mode that lets users view immersive content on their phone without a VR headset, tracking the orientation of the device to let you look around as if you were holding a little portal to another world in your hands. With the WebXR Device API we put a lot of effort into making the creation of Magic Window experiences as frictionless as possible.

Let's rewind just a little bit and look at our XR session creation code from earlier. There, the page indicates that it wants to start displaying content on the headset by specifying that it wants exclusive access to the display. For Magic Window, we're more than content to show up as part of the page and not on the headset, and to accomplish this we only need to make a few changes. First, we create a canvas that we want the Magic Window content to display in, which can be attached anywhere you want in the document. Then you get an XRPresentationContext from that canvas; this works just like getting a WebGL context from a canvas. We pass it to requestSession as the output context, which binds the session's rendered output to that canvas. You'll also notice that we've dropped the exclusive argument from the request, because the session's output isn't intended for the headset. And that's it; everything else is the same. The render loop we described earlier now knows to only request a single view, the input system now fires events that correspond to screen taps rather than controller clicks, and everything else just kind of works. Even better, it's easy to use both Magic Window and VR on a single page, so you can let users transition frictionlessly between the two content types at the touch of a button.
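
A sketch of those Magic Window changes: create a canvas, get an XRPresentationContext from it, and pass it to requestSession as the output context with the exclusive option dropped. The frame of reference choice here is an assumption; the headset example used 'stage'.

```javascript
async function beginMagicWindow(device) {
  const magicWindowCanvas = document.createElement('canvas');
  document.body.appendChild(magicWindowCanvas);     // attach anywhere in the document

  // The XRPresentationContext binds the session's rendered output to the canvas.
  const xrContext = magicWindowCanvas.getContext('xrpresent');

  // No `exclusive: true` here, because the output isn't intended for a headset.
  const session = await device.requestSession({ outputContext: xrContext });

  // Frame of reference and WebGL layer are set up just like before; the render
  // loop will simply be handed a single view.
  xrFrameOfRef = await session.requestFrameOfReference('eye-level');
  session.baseLayer = new XRWebGLLayer(session, gl);
  session.requestAnimationFrame(onXRFrame);
}
```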

Finally, let's talk about the WebXR polyfill for browsers that don't support WebXR. The polyfill provides an implementation of the API in one of two ways. First, it offers a JavaScript-only implementation that works on any mobile browser that exposes orientation events, even mobile Safari. Secondly, if a browser implements the older WebVR API, the polyfill implements WebXR on top of it, taking advantage of the native hardware features that WebVR already exposes. This means that you can instantly make your new WebXR content accessible to a much wider range of users simply by adding one script to your page.
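
A sketch of wiring in that script; the package name and import style here are assumptions based on the community webxr-polyfill project:

```javascript
// Load the polyfill (via a <script> tag or a bundler) and instantiate it once.
import WebXRPolyfill from 'webxr-polyfill';

// Patches navigator.xr using device orientation events, or layers WebXR on top
// of an existing WebVR implementation when the browser provides one.
const polyfill = new WebXRPolyfill();
```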

And the polyfill isn't the only way that we're bringing immersive computing and the web together. With the Chrome 67 beta you can now launch a VR-optimized version of Chrome straight from Daydream Home. We put a lot of work into making browsing the traditional web a great experience in VR, but it also really shines as a way to browse immersive web content. The ability to hop from the 2D web to VR content and back at the push of a button makes discovering new VR content online really easy, fluid, and honestly just a little bit magical.

So, as you can see, Chrome is fully embracing the immersive web, both through VR browsing and through VR web apps. But what about augmented reality? To talk about that, I'll hand it over to John Pallett.

Thanks, Brandon. My name is John Pallett, I'm a product manager on the Chrome team, and I'm going to talk to you about augmented reality. For those of you who don't know what augmented reality is: if you've tried out AR stickers in photos, used Pokémon GO, or used Snapchat filters, you have already used augmented reality.

Unlike VR, where you're completely immersed in a virtual world, the idea with augmented reality is that it overlays information onto the real world. So if you've used any of these experiences, you've already tried AR. What's great about augmented reality is that there are over a hundred million phones and tablets already available that support it, and that number is growing rapidly. And generally speaking, those devices already have web browsers. That means that augmented reality on the web can be really convenient for users: they can walk around and experience augmented reality, and they don't need to install a new app each time. It's great for developers too, because if you want somebody to try out your AR experience, you can throw up a QR code or a URL, and the user can jump right into it, do the experience, and then leave. The web is going to make augmented reality super convenient and accessible for everyone.

So that's great: the web is a great fit for AR. How many of you are web developers in the audience? How many of you have a website? I hope that's about the same number, given that you're web developers.

OK, so the web helps AR. But does AR help you, a web developer? There are actually a lot of ways that augmented reality can help you enhance your site, and rather than talk about it any more, what I'm going to do instead is show you what I mean and give you a preview of what's coming soon to Chrome Canary. Can we switch over to the device, please?

Great, the demo is working. In this demo, my daughter and I are researching the Templo Mayor. It says it was the center of Aztec worship, built in the fourteenth century, and that the temple was dedicated to both the god of rain and agriculture and the god of war. What we're learning about here is the use of chacmool statues, which were used to make offerings to Tlaloc, the god of rain. So we're learning about that here, and if we scroll down on this page, what I can do is jump into augmented reality mode to learn more. I can place a chacmool statue right here on the stage in front of me, and this really lets me get in and get a sense of the real-world scale. In reality it's 45 inches long and 36 inches high, and we can see it's actually life-size; it's a life-size representation of a human being.

You'll notice on this website, and on some of the other websites that we saw, that a lot of these statues have this 90-degree head orientation. If I go in here, what we can learn is that the 90-degree head orientation is actually fairly common, but so is the pose, because this chacmool, as you can see, is reclining on its back with its knees and its arms bent, which in the art of this period is traditionally a symbol of humility, of submissiveness, of being a prisoner.

In this case, it's probably humility to the deity. There's a bowl for making offerings here, and interestingly, I can see little bits on the arms and legs representing iconography. Let's take a look at that. That's interesting. If a statue is celebrating the rain god, it's normal to see frogs and marine life and symbols like that, so let's take a look around and see if we can find any. I don't actually see a whole lot; here is something interesting on the cheek, but there's not a ton of that kind of iconography here. So it's interesting to wonder: how did they know that this chacmool statue was actually associated with Tlaloc? Well, if we go down to the feet, what we learn is that the feet are red with a hint of blue on the sandals, and this coloring actually helped researchers make the connection. Let's zoom in to get a better view of the blue; you can see it's tucked in right there in the corner.

So this actually gave researchers the clue that they needed to connect this particular chacmool, which was the first full-color chacmool ever discovered with its original paint, back to other pieces of artwork from the same period that talked about the rain god. This chacmool was 3D scanned by CyArk in 2016 in collaboration with the Templo Mayor Museum, and the data is available to anyone; you can download it and use it for presentational purposes through the Open Heritage project on Google Arts & Culture.

Now, this is not available to all Chrome users today; these APIs are still in development. But what we want to do now is show you how to use them and get your feedback before they become firm. Could we switch back to the slides, please? So what you just saw was me taking an existing two-dimensional web page and adding an augmented reality experience to it. This is straight up the biggest opportunity for web developers with AR, because it means you can take amazing new experiences and add them to your existing website. And there are a lot of different websites today that could create value for users by allowing them to place virtual objects in the real world.

Let me give you a few examples. One of them is shopping. In this case, we're looking at a furniture website, and it would let me place furniture into the real world. My family recently bought a new couch, and we ended up cutting and measuring pieces of paper and placing them on the floor to make sure that the couch would fit. I can tell you from personal experience, this would have been a lot easier. Another great use case is education, where anybody can drop an object into the real world and look at it. This is great for statues; this is a painting at the Louvre that's double-sided, where you can get a much better sense of what's going on, and if we explored it, as I hope you saw, you can get a lot more information and learn a lot more than you can from just a static image. There are a lot of other opportunities as well, such as avatars, games, AR stickers, and more. In a mobile context, these are tools we can provide to you right now that will make your web pages more delightful and increase their value to users. This is coming quickly, and it will make the mobile device a handheld window into a new reality.

So let's talk about how these APIs are taking shape. They're being developed in the W3C Immersive Web Community Group. It's a cooperative effort with a number of companies including Google, Mozilla, Microsoft, Samsung, and Oculus, as well as other companies and web developers. This is where the WebXR specification is being developed, which includes the VR APIs that Brandon spoke about earlier. But there are also some more bleeding-edge APIs being worked on within that community group, some further along than others. I'm going to spend most of my time talking about two of them, AR sessions and hit tests, which were the two building blocks of the chacmool demo that I showed you earlier. You'll notice that hit test is in the experimentation stage, while the other ones are still being discussed. What that means is that it's more than just a discussion, but it's not quite ready for origin trials. Practically speaking, the demo that I showed you earlier will work in Chrome Canary within the next week or two, so basically by the time you get home and watch this again on YouTube, it should be ready for you to try. When you do that, you're going to want to enable two flags, shown here: the WebXR Device API flag and the WebXR Hit Test flag.

So this is great: this means an early version of augmented reality is coming quickly to Canary. Let me show you how you can actually get started using it. The good news is that because it's based on the WebXR API, a lot of the code that Brandon already covered for VR applies here as well. That means things like capturing the device pose, the device orientation and position, are exactly the same, so switching from VR code to AR code is mostly doing the same thing. What I'm going to focus on here are the differences.

The first thing to point out is that AR sessions are still in the proposal stage; we're still having conversations about the best way to do this. What that means is that when you enable these flags in Canary, you're going to have a temporary way of enabling AR mode, and you may recognize pieces of this code, because it's similar to what Brandon showed you earlier when creating a Magic Window session, one that is not an exclusive session. If you enable the two AR flags in Canary, we're going to make the assumption that you always want AR, and what that means is that the camera will be rendered behind your geometry and kept in sync with the device pose. The community group is still discussing how we're going to extend the API to allow the declaration of an AR session in the future; it's likely that the properties will be part of requestSession, but it's not finalized yet. So for now, if you're trying this out and you enable the flags in Canary, know that you will go into AR mode automatically as long as you create a Magic Window session, and it will keep everything synchronized.

Another difference from Brandon's example is the frame of reference. In VR you were using stage, which aligned with the floor; here you're going to use eye-level, and what that means is that the first recorded position and forward orientation of the device becomes the origin, and every other motion is relative after that. The y-axis is always aligned with gravity, so it's perpendicular to the ground plane, but with eye-level the origin starts where the phone is and everything else becomes relative after that. It's important to use that frame of reference when you're trying out AR mode.
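
A sketch of that temporary AR setup, assuming the two Canary flags are enabled: a non-exclusive, Magic Window style session is treated as an AR session, and we ask for the eye-level frame of reference.

```javascript
async function beginARSession() {
  const device = await navigator.xr.requestDevice();

  const arCanvas = document.createElement('canvas');
  document.body.appendChild(arCanvas);
  const xrContext = arCanvas.getContext('xrpresent');

  // With the WebXR Device API and WebXR Hit Test flags enabled, a non-exclusive
  // session like this puts Chrome into AR mode: the camera feed is rendered
  // behind your geometry and kept in sync with the device pose automatically.
  const session = await device.requestSession({ outputContext: xrContext });

  // 'eye-level': the first recorded device pose becomes the origin, with the
  // y-axis aligned with gravity.
  const frameOfRef = await session.requestFrameOfReference('eye-level');

  return { session, frameOfRef };
}
```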

Now let's talk about the real building block that allowed the demo to happen, which is the hit test API. It's the foundation of the demo that I just showed you, and what it lets you do is cast a ray into the real world and detect real-world geometry and shapes. So if I were standing here, I could cast a ray and it would tell me that there was a podium there; it would also tell me about the back side of the podium and the floor. I don't think it goes all the way to the mantle of the Earth at this point.

To demonstrate how we use the hit test API, think about a reticle, in case you haven't heard that term before. A reticle is an indicator that is drawn on a real-world surface, and in our example this was useful because it allowed the user to figure out where they wanted to place the object in AR. What you're doing when you're drawing a reticle is continually doing hit tests into the real world and then using the results to place geometry into the scene, so that it appears as if it's sitting on top of the detected real-world object. Before I show you any code for this, I want to say that we are using three.js, which is a helper library for 3D graphics on the web. It takes care of a lot of the heavy lifting for 3D geometry management, and some of the code we're going to talk through is specific to three.js. If you're interested in learning more, you can visit threejs.org.

In this case, what we're going to do is create a mesh with some geometry for the reticle, so that circle you saw on the slide, and the chacmool-sized box from before, are actually 3D geometry with a texture on it. It doesn't really matter what geometry you use, so I'm not going to go into the details, and neither does the position where you create it in the scene. All you have to do is create it and add it to the scene, and then we'll update its position on the fly as we get back our hit test results.

Everything we do next is going to be part of the requestAnimationFrame loop, and this is basically the same as you saw in VR: it's hooked up to the WebXR session, so you request a frame, you execute on it, and at the end you request another frame and continue that way. Now, a key part of this example is that we're going to be using our three.js camera for a couple of things. One is obviously for rendering, and what we're going to do is update it with the view and the projection matrix from the XR device pose. This ensures that the camera is aligned with our augmented reality device pose and is using the correct frame of reference.

As important as rendering is, we're going to use the camera for something else as well. In every frame we want something to place the reticle on, and as I mentioned earlier, the hit test API will allow you to cast a ray into the real world and get back intersection points with real-world objects. But to do that we need a ray in the right coordinate system with the right frame of reference, and the great news is that three.js gives us a way of creating one, letting us take our camera and generate a ray from it. Here we'll use (0, 0), which in this coordinate system is the center of the screen; if you wanted to cast a ray based on a user tap, you would use values anywhere from -1 to positive 1. That's where our ray into the scene is going to go, and we pass that in to the camera and get back a ray.

The next thing we're going to do is use our XR session to request a hit test. This is the new API that is available when you turn on the flag, and this is the one that is going to be coming to Canary in a couple of weeks. What requestHitTest accepts is both the origin and the direction of the ray, so you can see above that we extract those from the ray that we generated, and it also accepts the overall frame of reference as a parameter. What we get back is a list of hits, ordered in terms of how close they are to the camera, so the first hit is the nearest one to the camera and then they're sorted by distance. We want the reticle to be on the first thing that the ray hit, not the last thing, so we're going to take that one, and then what we'll do is use the matrix that we get from the hit result and apply it to the reticle model. And that's it: at this point the reticle positions itself on real-world objects within the requestAnimationFrame loop as you move the device around.
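
Putting those pieces together, here is a sketch of that requestAnimationFrame loop with three.js (a recent release, for Matrix4.invert); scene, camera, renderer, session, and frameOfRef are assumed to be set up as above, and the hit test call follows the shape of the Canary experiment described in the talk.

```javascript
import * as THREE from 'three';

// Any geometry works for the reticle; here a flat ring lying in the XZ plane.
// Its transform is driven directly by the hit-test matrix, so auto-update is off.
const reticle = new THREE.Mesh(
  new THREE.RingGeometry(0.05, 0.07, 32).rotateX(-Math.PI / 2),
  new THREE.MeshBasicMaterial({ color: 0xffffff })
);
reticle.matrixAutoUpdate = false;
reticle.visible = false;
scene.add(reticle);

function onARFrame(time, frame) {
  const session = frame.session;
  session.requestAnimationFrame(onARFrame);

  const pose = frame.getDevicePose(frameOfRef);
  if (!pose) return;

  // Keep the three.js camera aligned with the AR device pose and projection.
  const view = frame.views[0];                      // Magic Window style: one view
  camera.projectionMatrix.fromArray(view.projectionMatrix);
  camera.projectionMatrixInverse.copy(camera.projectionMatrix).invert();
  camera.matrix.fromArray(pose.getViewMatrix(view)).invert();
  camera.matrix.decompose(camera.position, camera.quaternion, camera.scale);
  camera.updateMatrixWorld(true);

  // Generate a ray from the camera; (0, 0) is the center of the screen in this
  // coordinate system, and a tap would supply values between -1 and 1 instead.
  const raycaster = new THREE.Raycaster();
  raycaster.setFromCamera(new THREE.Vector2(0, 0), camera);
  const origin = new Float32Array(raycaster.ray.origin.toArray());
  const direction = new Float32Array(raycaster.ray.direction.toArray());

  // Ask the AR system for intersections with real-world geometry. Hits come back
  // ordered nearest-first, so hits[0] is the surface closest to the camera.
  session.requestHitTest(origin, direction, frameOfRef).then((hits) => {
    if (hits.length > 0) {
      reticle.matrix.fromArray(hits[0].hitMatrix);
      reticle.visible = true;
    }
  });

  renderer.render(scene, camera);
}
```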

Now let's talk about how to place objects in AR. There's one slight difference here that we probably want to consider: the hit test result we get back encodes both the position of the hit and the normal of the surface that the ray detected. We didn't dissect that before, because we wanted our reticle to appear as if it was on the surface. But if you're placing an object in space, you probably want to specify your own orientation, especially which way is up, instead of using the normal of the real-world object that was detected. The good news is that with three.js this is fairly straightforward: you decompose the position, rotation, and scale from the four-by-four matrix, and you can then change the orientation of the object to be whatever you want, either by generating a new four-by-four matrix or by setting the rotation on the object directly. The key point here is that there are situations where you want to decompose the hit test result and use pieces of it, in this case for object placement. But in other cases, where you want to build an understanding of the real world within your code, you can get a lot of useful information from these hit test results.
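
A sketch of that placement step, reusing the three.js setup from the previous snippet: decompose the four-by-four hit matrix, keep the position, and supply your own orientation rather than the detected surface normal. placeModelAtHit and the model argument are illustrative.

```javascript
function placeModelAtHit(model, hitResult) {
  const position = new THREE.Vector3();
  const rotation = new THREE.Quaternion();
  const scale = new THREE.Vector3();

  // Pull position, rotation, and scale out of the four-by-four hit matrix.
  new THREE.Matrix4().fromArray(hitResult.hitMatrix).decompose(position, rotation, scale);

  model.position.copy(position);
  model.quaternion.identity();   // our own "up", instead of the surface normal
  scene.add(model);
}
```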

The last thing that I want to cover is an API which is only in discussion at this point. This is not coming in Canary, but it's really important to augmented reality, and so I want to cover it so that you have an introduction to it right now. It's a core concept of augmented reality, which is why we're going to cover it here.

The concept is anchors. Anchors allow the augmented reality system to keep virtual positions and orientations, poses, always correct and up to date as the system learns over time. So what do I mean by that? If you think about the augmented reality system, it doesn't have perfect information; everything it's getting is from the camera, from sensors, and so forth. But it's learning about the real world as you move the camera around, and as it gets more information it's improving its understanding of the world.

What that means is that sometimes it was wrong previously, and it learns that later on. So that's where anchors come in. Anchors allow the underlying augmented reality system to keep track of poses that you care about and improve their accuracy later. To give you an example, suppose the augmented reality system thought my camera was here when I did the hit test, but later it realized the camera was over here. What it could do with that knowledge, if I had anchored the object, is adjust the object's location after the fact, once it realized its mistake. Anchors are also useful in a lot of other places, for example when real-world objects start to move. As you can imagine, the whole concept of anchors is very useful, but it's also one that's deep enough that it's a continuing conversation, so it's not ready for you to try right now; how to do this properly as a web API is a topic of discussion. But even with just the hit test API, which is all that you saw (a session was created, then the hit test API was used), there are a lot of opportunities for you to start looking at how you might enhance your existing web pages.

The hit test I showed you can let you do pretty much everything on the screen now: shopping, education, and more. And really, all I showed you was placing one virtual object into the real world. There's a lot of information you can get from a hit test and a lot of things you can do with it, and frankly, we're all really excited to see what you come up with and the ideas you have for ways to make your website better.

This brings me to another point. There are over a hundred million devices that can do AR today, but not every device does, so what do you do in that case? I mentioned earlier that there's a lot of code similarity between AR and VR. This is really important if what you want to do is create a new AR experience but then also make it available in VR for users that aren't on a device that supports AR, for example a desktop that doesn't have a camera, where they're not going to pick it up and move it around the room. In that case, what you might want to do is have the exact same code driving a VR experience on desktop with a Magic Window. You do have to adjust your user experience and your user interface, so instead of walking around with your desktop, you would give users controls to zoom in and zoom out.

But if you think about the chacmool demo, you could still explore the statue and learn a lot from it. That ability to do progressive fallback is super important on the web, and we're really excited that WebXR allows us to do that without having to rewrite the same experience twice.

So just to recap: Brandon and I talked about the WebXR Device API, both for VR sessions, which are available for origin trials today, and AR sessions, which are still in discussion. We also previewed some augmented reality APIs, well, one augmented reality API, which you can try in Chrome Canary in a couple of weeks. As I mentioned, the conversation is still ongoing for AR sessions and anchors, and if you're interested in tracking that conversation, this is the slide you should take a photo of, because we would like you to learn about this. What we really want is your feedback. We'd like you to try it out and let us know what you think. If you try the API and you want to suggest changes, this is where you should go. If you want to track how things are progressing, this is where you should go. Please do visit the community and file issues if you have questions, or if you find things that you think should be different.

Overall, we're very excited to see what you create with this, both for VR as well as for augmented reality. One last thing we wanted to mention is that a lot of the 3D media that you saw on the site today was taken from Poly, which is an excellent repository for 3D assets created with tools like Blocks and Tilt Brush. They're all available under a Creative Commons license with attribution, and you can check it out at poly.google.com.

If you want to get started with development, there are some links on the screen where you can see how the community group is progressing, as well as code samples. The appspot link there is the demonstration that I showed you earlier. It will not work today; if you pull out your phone, go to Chrome Canary, and turn it on, it will not work. Give us a couple of weeks, but please do try it out and let us know what you think. There's also the web sandbox, which is over on that end. Thank you so much for joining us today, and please enjoy the rest of Google I/O.
