Build blazing fast web content sites with Firebase and AMP

Michael Bleigh
Software Engineer at Google
2018 Google I/O
May 8, 2018, Mountain View, USA

About the speaker

Michael Bleigh
Software Engineer at Google

Michael leads the engineering team for Firebase Hosting and the Firebase CLI. Michael is passionate about the web platform and has been building open source and developer tools since 2008. He has presented at events including Google I/O, the Chrome Developers Summit, OSCON, and RailsConf.


About the talk

Firebase provides a comprehensive set of tools for building all kinds of web applications. In this session, learn how you can leverage Firebase Hosting, Cloud Firestore, Cloud Storage, and Cloud Functions to build a world-class content site that loads incredibly fast from anywhere in the world.

Transcript

Hello, welcome to Blazing-Fast Content with Firebase and AMP. I'm Michael Bleigh, an engineer on Firebase Hosting and the Firebase CLI. Now, I want to start with a question: what makes a website fast? Because that's the goal, right? No matter what kind of experience I'm building, I want it to be fast. But to answer this question, I first need to ask another: what do we mean by fast? Do I mean low latency? Very responsive? Low time to first paint? Time to interactive? Really, I mean any of these, and all of them some of the time.

The most important thing to remember is that fast websites feel fast to the user. That's our end goal, right? That's the only thing that really matters: when a real user visits our site, it should feel fast and responsive to them. So going back to that original question, what makes a website fast? The answer, like with most things, is: well, it kind of depends. There are so many factors that go into real and perceived performance, and you could spend hours or days focusing on any one of them. So I'd like to simplify a bit and look at performance in terms of two different kinds of web experiences.

First, let's imagine an email client. This is an application with a single entry point for users; they will almost always load it up via the same URL. It also requires authentication before any kind of action can be performed; the signed-out experience for our email client is just a login page. An email client is going to be open all day and updated continuously as new messages arrive, so the presented data changes constantly. Finally, email clients are highly interactive: your most important actions are being able to click into messages and read them, reply to them, and compose new messages, so you're constantly navigating around the interface and performing new actions.

Now let's contrast that with a content site. That could be a news site, a resource, a blog, anything where the primary reason for someone to visit is to read the content that's available there. Unlike the email client, the primary entry point for a content site is likely to be a deep link to a particular article that was posted on social media or discovered through search. Content sites are publicly accessible; they don't require a login to view. While new articles may be created on a regular basis, once created they generally stay stable, with a few updates here and there. And again, reading and scrolling are the primary interactions here; we aren't as concerned with doing anything other than looking at the page, seeing what's on it, and scrolling down to see more.

So what do we do to make an email client fast? Here we follow the best practices for Progressive Web Apps: we can build with the app shell pattern, where a service worker caches all of the JavaScript, HTML, CSS, and so on needed to render our site, and then we use API calls to fetch data and render it client-side. Now, this is actually a fascinating topic and I could go into detail, but it's not what we're here to cover today. I'm going to talk about how to make content sites fast, and it's actually pretty different from what you would do in that sort of rich client experience.

Fundamentally, for content sites we have to optimize for the first page load. This means our ideal and common case is someone visiting the website for the very first time: they don't have anything in their cache for our site, they don't have anything at all about our site on their computer, and that changes how we need to optimize performance. It means we need as few round trips as possible before the page gets painted; especially in poor-connectivity environments, every round trip the browser has to make before it displays content is just going to kill performance. We also need to minimize scripts and styles that block page render; any critical CSS you have should be inlined right away. Finally, we need extremely low latency from the server: we need to be delivering content close to the user, so that the time it takes for them to request a page and actually get it back is as small as possible.

So what does a performant website look like? Well, it might look like this. This is actually incredibly performant HTML: there's no stylesheet, there are no blocking scripts, it's just a few bytes of text that we send over the wire and we're good to go. And if this were the nineties I'd say let's go for it, but it's 2018 and user expectations go a little further than default browser style sheets. That's where AMP comes in. AMP stands for Accelerated Mobile Pages, and it's an open source library created by Google specifically to provide a foundation for fast content sites. AMP is fast by default because AMP stops you from doing things that slow down your web experience. With AMP, you can't do any custom JavaScript at all; CSS has to be inlined, and you're not allowed to load any style sheets externally. Instead, AMP adds functionality and interactive behavior through specially approved custom elements that are part of the AMP project. This also means that AMP pages can be efficiently cached by search engines or social networks and preloaded in advance of user interaction. So when you do a Google search, you see the little lightning bolt icons, you tap one, and it loads instantly: that's because Google has cached the AMP content, and as soon as you made the search query it started loading that AMP content in the background, so it was already ready to go by the time you tapped the link. That content has been pre-loaded and served by the AMP cache.

Now, it's important to remember, because it can sound a little scary up front, as if AMP were a whole different system built on top of HTML, that it isn't radically different: for the most part, when you're building a page, you're just writing standard HTML. So let's take a look at the boilerplate AMP page. You can see that it looks pretty much like any other HTML page. The only differences are that cool little lightning bolt on the html tag and the AMP boilerplate style tag, which is just a set of styles that AMP requires by default so that it can do the right things when displaying content while the AMP runtime loads. And while AMP is mostly just HTML, that doesn't mean it's only HTML and only the standard tags you get with the browser.
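
For reference, a minimal sketch of that boilerplate, written here as a TypeScript template literal in the same string-based style used for rendering later in the talk; the canonical URL is an illustrative placeholder, and the fixed amp-boilerplate CSS published by the AMP project is abbreviated to a comment:

    // Minimal AMP boilerplate as a template literal; not the exact markup from the slide.
    const ampPage = (canonicalUrl: string, body: string): string => `<!doctype html>
    <html ⚡>
      <head>
        <meta charset="utf-8">
        <meta name="viewport" content="width=device-width,minimum-scale=1,initial-scale=1">
        <link rel="canonical" href="${canonicalUrl}">
        <style amp-boilerplate>/* required amp-boilerplate CSS from the AMP project, omitted here */</style>
        <noscript><style amp-boilerplate>/* required no-JS fallback, omitted here */</style></noscript>
        <script async src="https://cdn.ampproject.org/v0.js"></script>
      </head>
      <body>${body}</body>
    </html>`;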

AMP is HTML with some custom elements designed to bring you modern niceties without sacrificing performance, so let's take a look at just a couple of those. First we have amp-img. The amp-img tag is baked into the AMP runtime, so you don't have to load anything extra to get it; it just comes with AMP. The first thing it does is control the loading of the image to ensure maximum efficiency. When you load an AMP page, it isn't necessarily going to load every image on the page immediately the way it would if you were using a standard img tag. Instead, it does its own optimizations to figure out when the right time is to render something: it might wait until you scroll the image into view, it might wait for other things. The AMP runtime handles that for you so you don't have to think about it.

Next, you'll notice the layout="responsive" attribute on the amp-img. Responsive tells AMP that this element should fill the horizontal width of its container and then match the height based on the width and height you supplied. So while you might normally have to specify exact image dimensions, with AMP, if you're using a responsive layout, you can just specify an aspect ratio; here I just say it's 1.33 to 1. Another thing AMP gives you is the ability to add placeholders to any element. Here, inside my first amp-img, I have another amp-img with a placeholder attribute, and its source is just a data URI containing a super-low-resolution version of the image I want to load. Essentially, on the server I inline that super-low-resolution image data as a data URI; it loads almost immediately because it's small and tiny, and the AMP runtime says, okay, let's show this. Then, when my higher-resolution image has loaded, it instantly swaps it in and replaces the placeholder. This is a great way to improve perceived performance: the user gets an idea of what the page content is going to look like before it's 100% loaded.
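
A sketch of the amp-img markup being described; the image URL, the tiny data-URI preview, and the exact dimensions are illustrative assumptions (with layout="responsive", only the roughly 1.33:1 aspect ratio of width and height matters):

    // Responsive amp-img with a low-resolution placeholder nested inside.
    const heroImage = (src: string, tinyPreviewDataUri: string): string => `
      <amp-img src="${src}" layout="responsive" width="400" height="300" alt="Escape room photo">
        <amp-img placeholder src="${tinyPreviewDataUri}" layout="fill" alt=""></amp-img>
      </amp-img>`;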

Some elements aren't baked into the AMP runtime, and you have to load them separately via a script tag. Here's a script tag to load the amp-font element, and you can see it's really straightforward: it has a custom-element attribute that describes what the element is going to be named, and it points to the AMP CDN to load the script. You'll also notice it's an async tag, and this is true of all AMP custom elements: because, like I said, we're trying to minimize blocking scripts and styles, all AMP custom elements are loaded asynchronously. The only blocking script in an AMP page is the AMP runtime itself.

Now, amp-font is actually pretty cool. What it lets you do is control the font-loading behavior of the browser and optimize it for performance. Essentially, you can add a timeout attribute to amp-font that says: if this much time has elapsed since the page started loading and I still don't have this font loaded, then abandon loading it and do something else. You can even set that to zero to say: if this font isn't already available on the user's system, don't try to load it at all and do something else instead. That something else is that when the deadline expires, AMP adds a custom CSS class to your document, which you can then use to apply additional styles, switch things around, change font sizes, whatever you need to do to fall back to system default fonts. Now, I don't really have time to get into it in this session, but web fonts are very expensive from a performance perspective. If you don't need them, don't use them, or at least use something like amp-font to fall back quickly if they aren't loading. Especially if you're loading an external style sheet, like through Google Fonts, that introduces a render-blocking style request your page has to wait for. You can inline the font CSS to improve performance a little here, but in general, be careful and very deliberate when you're using custom fonts in a high-performance content site.
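
A sketch of the amp-font usage described here; the font family and the fallback class names are illustrative assumptions:

    // amp-font is loaded asynchronously from the AMP CDN via a custom-element script tag.
    // timeout="0" means "only use the font if it is already available"; when the deadline
    // expires, the on-error-add-class class can be targeted from amp-custom CSS to fall
    // back to system fonts.
    const fontControl = `
      <script async custom-element="amp-font" src="https://cdn.ampproject.org/v0/amp-font-0.1.js"></script>
      <amp-font layout="nodisplay"
                font-family="Roboto"
                timeout="0"
                on-error-add-class="system-fonts"
                on-load-add-class="roboto-loaded"></amp-font>`;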

Of course, there are a lot more components than these, and I'd encourage you to explore the AMP documentation for elements that help with everything from layout to media to interactivity in your AMP pages. Another thing I want to call out is that AMP is just one way to make content sites fast; it's not the only way. If you can't use AMP, or you don't want to use AMP, or you just don't like it, don't use it; the same techniques I'm going to show throughout the rest of this talk will still largely apply to you and can still help make your content site high-performance. Remember, this is what we're starting with, and this is high-performance. As long as you are careful in how you apply scripts and styles and all of the nice things the web platform has gained over the last several decades, you can build a performant experience.

So let's go back to our original checklist for making a content site fast, and we can see that AMP actually helps quite a bit. It helps us optimize for that first page load by minimizing round trips and reducing blocking scripts and styles, but it kind of leaves this last one: minimizing network latency. So how can we tackle that? That's where Firebase comes in. Firebase is a comprehensive platform for building mobile and web experiences, and today we'll be using it to build an AMP site. Now, you may already be familiar with Firebase as a great fit for those rich, highly interactive applications, like our email client example from earlier; Firebase's JavaScript SDKs for products like the Realtime Database and Cloud Firestore are great tools for highly interactive apps. But Firebase can be just as powerful for building latency-sensitive content sites. So today we're going to use Firebase and AMP together and hopefully build a lightning-fast experience for our users.

Now, I really love escape rooms: working together with my friends to solve puzzles before the clock runs out is a lot of fun. So I built Escapable, a simple resource for discovering escape rooms in your area. Can we switch over to the demo, please? This is Escapable. As you can see, it's very simple. I'm just going to pick San Francisco Bay here, and now you can see I have a list of escape rooms. The top one is Escape Google I/O, and I can scroll down and see all the other ones around this location. I can link to a website or get directions, and I also have the rooms offered by each location; I can tap those to expand them and see a little more info, so I can see that this one is one to four players, lasts 60 minutes, and is 96% recommended. That's really all there is to this site, and it's very simple, but it also has the kind of rich, modern look you're looking for in a web experience, and it accomplishes what the user set out to do, which is discover escape rooms in their area. Remember, whenever you're tackling an experience of any kind, first think about what your users need to do and how you can make that the most efficient, before you dive into other things. Can we go back to the slides?

So when I set out to build Escapable, I came up with three potential approaches to make it fast: static compilation, dynamic rendering, and evented rendering.

Static compilation is probably familiar to many of you. It was also the first major use case of Firebase Hosting, Firebase's developer-focused web hosting platform. Firebase Hosting serves static content from a global content delivery network, automatically provisions SSL certificates for free for custom domains, has atomic release management for easy deploys and rollbacks, and also lets you configure niceties like rewrites for single-page apps. For static sites, compilation happens up front on the developer's machine: the developer compiles the assets into just HTML, CSS, and whatever else they need, and those get deployed directly to Firebase Hosting. Firebase Hosting then serves those requests from its global CDN whenever they're requested, so this is really straightforward.

The advantages of static sites are clear. There is zero request-time processing, you're just serving flat files, so it can be incredibly fast that way. It's also extremely cache-efficient: since things only change when you do a new deploy, Firebase Hosting is able to efficiently cache all of the content on edge servers around the world until you do a new deploy. On the other side, it's not really well suited for frequent updates, especially for user-generated content. Remember, these assets have to be redeployed every time they change, so if you have a website where users are constantly changing things or making the content shift in any way, static isn't necessarily going to be a good fit. It also usually requires some dev skills to edit a static website, because usually, like I said, you're editing Markdown files on your machine and then building them with Jekyll and deploying them, or something like that, as opposed to using a friendly CMS-like UI.

Now, there are lots of tools available, and again, this is not what we're going to talk about today, because if you can use a static site there are lots of resources to help you get started, and I encourage you to do so; there is literally not going to be a more performant way than a static site where you're just serving flat files. So you can just go do that. In fact, the AMP project website is a static site hosted on Firebase Hosting, so this is not just something we talk about as something maybe some people should do; it's something we do ourselves.

But some sites are too complex or get updated too frequently for static compilation to be a good fit. So here we turn to dynamic rendering, also commonly called server-side rendering. Now, to do dynamic rendering we're going to have to bring in some additional pieces. We started with Firebase Hosting, but now we need to bring in Cloud Firestore to store the data for our site. Cloud Firestore is a flexible NoSQL database that can scale from weekend projects to planet-scale applications. We're also going to use Cloud Functions for Firebase to do the actual server-side rendering with Node.js. Cloud Functions provide serverless compute for your Firebase project, and we can connect them directly to Firebase Hosting.

So let's see how that works. In a dynamic rendering world, the user requests a page from Firebase Hosting; Firebase Hosting proxies that request to a Cloud Function; the Cloud Function then goes out and fetches all the data it needs to render the page from Cloud Firestore. Once it's done that, it renders HTML and sends it back to Hosting, which sends it back to the user. And if we set a Cache-Control header on that response, then Firebase Hosting will cache it at the edge and send it back immediately instead of going back to Functions every time, as long as the cache hasn't expired.

The great thing about dynamic rendering is that fresh content is available immediately: since we are rendering every time a request comes in, we know we're getting the freshest content on every request. It's also a familiar architecture for most developers; almost everyone has built some kind of request/response web server in their time, so this is something that's easy to slide in and easy to understand. On the other hand, it's somewhat inefficient and compute-expensive when you really think about it, because, like I said, we're fetching all these documents and rendering them every single time a request comes in, when the documents aren't actually changing all that often. It's also really difficult to efficiently cache dynamic content: since we're rendering every time and don't necessarily know when the content has changed, we have to make a trade-off between how long we can bear to serve up stale content versus how often we want to incur the penalty of re-rendering and computing.

So how do we actually do server-side rendering with Firebase? Here's a streamlined example. As a quick note, I built Escapable using TypeScript to take advantage of modern JavaScript features like async/await. Here you can see I have a pretty standard Express app: I'm fetching some data, setting some headers, and then rendering a page. One thing to call out: notice that I do an await Promise.all here, and that's so that I fetch all of the data for my page in parallel, instead of fetching it one piece at a time and waiting for each to complete before kicking off the next.

It's also really important to think about Cache-Control headers, generally in web applications but especially for server-side rendering on Firebase Hosting. You can see here I have max-age=300, which says it's okay to cache this in the local browser for up to five minutes, or 300 seconds. I also have s-maxage set to 1200, which says it's okay to cache this in a CDN for 20 minutes. Now again, this is going to be a trade-off. If your content changes at most once a day, maybe you can afford a longer TTL; then again, maybe you can't, because maybe it only changes once a day but when it changes it's super vital that people see it right away. That's something you'll have to determine for your own app. In this case, because I have 20 minutes on the server and 5 minutes on the client, my worst-case scenario is a 25-minute-stale page being served to a user.
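
A sketch of what that Express handler might look like; the route, collection and field names, and the renderRegionPage helper are illustrative assumptions rather than the exact code from the slide:

    import * as express from 'express';
    import * as admin from 'firebase-admin';
    import { renderRegionPage } from './render'; // hypothetical module, sketched later

    admin.initializeApp();
    const db = admin.firestore();
    export const app = express();

    app.get('/amp/:region', async (req, res) => {
      // Fetch everything the page needs in parallel rather than one query at a time.
      const [regionSnap, locationsSnap] = await Promise.all([
        db.doc(`regions/${req.params.region}`).get(),
        db.collection('locations').where('region', '==', req.params.region).get(),
      ]);
      // Cache for 5 minutes in the browser and 20 minutes on the Hosting CDN.
      res.set('Cache-Control', 'public, max-age=300, s-maxage=1200');
      res.send(renderRegionPage(regionSnap.data(), locationsSnap.docs.map(d => d.data())));
    });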

After that, I simply render the content using the data I fetched and send it back as the response, which we'll talk more about in a moment. Now that I have my Express app, I need to register it as a Cloud Function and connect it to Firebase Hosting. Here you can see I import the Firebase Functions SDK, and then I export a function using functions.https.onRequest and just pass in my Express app. When you're registering an HTTPS function, one of the things you can do is pass an Express app in directly as the handler and it will just work; you don't need a special wrapper or anything like that. This says that when I deploy, I want an HTTPS function called app that's going to serve content for my Express app.
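
In code, that registration is roughly this (the import path for the Express app is an assumption):

    import * as functions from 'firebase-functions';
    import { app as expressApp } from './app'; // the Express app from the previous snippet

    // Deploys an HTTPS function named "app"; an Express app can be passed directly as the handler.
    export const app = functions.https.onRequest(expressApp);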

Then, in firebase.json, the first thing I do is declare a public directory; this is where my static assets live. In general, just like with the static site before, anything you can serve as a static file, you should. So this is where I put things like the Escapable logo and my manifest.json, things that don't change very frequently, where I'm fine redeploying the site when they do change. Then I rewrite all URLs to a function called app. This only rewrites URLs that aren't an exact match for something in my public directory, so I can safely say: if a request doesn't exactly match a static file, send it off to my Cloud Function to see if it needs to be rendered there.
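
The firebase.json for that setup looks roughly like this (the directory name is whatever you chose for your static assets):

    {
      "hosting": {
        "public": "public",
        "rewrites": [
          { "source": "**", "function": "app" }
        ]
      }
    }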

Now, I'm not going to go super in-depth into the rendering here. While you can use your favorite templating library, or even something like Preact, to render AMP pages, I decided to avoid libraries altogether and just use template literals with string interpolation, because it's possible, so why not? If you have user-generated content, though, please be sure to properly sanitize your data and don't just do this, because you'll get script injection attacks and have a bad time generally. But ultimately, this is just a function that returns a string, because all I'm doing is creating my AMP page as HTML, as a string. The only other thing I did is put the CSS in separate files and inline those into the amp-custom style tag by loading them straight out of my functions directory.
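
A sketch of that kind of render function; the field names and markup are illustrative, the required AMP boilerplate from earlier is elided, and anything user-generated should be HTML-escaped before being interpolated:

    import { readFileSync } from 'fs';
    import { join } from 'path';

    // The CSS lives as plain files in the functions directory and is inlined
    // into the <style amp-custom> tag at render time.
    const css = readFileSync(join(__dirname, 'styles.css'), 'utf8');

    export function renderRegionPage(region: { name: string }, locations: { name: string }[]): string {
      return `<!doctype html>
    <html ⚡>
      <head>
        <meta charset="utf-8">
        <style amp-custom>${css}</style>
      </head>
      <body>
        <h1>Escape rooms in ${region.name}</h1>
        ${locations.map(loc => `<p class="location">${loc.name}</p>`).join('\n')}
      </body>
    </html>`;
    }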

So again, this is very duct tape, but it works. So how well does this perform? When hitting the origin and doing a full fetch of all the documents to render the page, I got a response in about a second. Sometimes it was a little better, sometimes a little worse, but that was the ballpark. That's not horrible, but it's not great. If you compare that to when the CDN was serving the content, we had 21 milliseconds as the total round-trip time including delivery. That's a pretty marked difference, and if you look at the filmstrip you can see that difference clear as day. Here we have essentially a one-second difference; that's the difference between the CDN and the origin. This is 3G performance: on the origin my page painted in about three seconds; on the CDN, in about two.

Now, interestingly, if we look at the version of the page in Google's AMP cache, we knock another eight hundred milliseconds off the meaningful paint time. That seems unfair, right? How does the AMP cache get to be faster than my site? Well, it's cheating, kind of: the AMP cache does some optimizations on AMP pages when it loads them into the cache that make them perform better, specific things that further reduce those render-blocking scripts and styles. But what about users who come to my site directly? I want that kind of performance on my site too. Well, there's a pretty new open source library, AMP Toolbox, that provides tools to mimic the same optimizations the AMP cache does on your own server. It does this by removing the boilerplate, locking AMP to a specific runtime version, and inlining critical CSS. Once it does this, the page is no longer valid AMP, which is kind of interesting: by optimizing the AMP you make it invalid AMP. So the way you approach this is to serve both the un-optimized AMP page, which can be picked up and served by the AMP cache, and the optimized page as the canonical link that people go to when they visit your site directly. We do this by installing two packages in our project: amp-toolbox-optimizer and amp-toolbox-runtime-version. The runtime-version package literally just goes out, figures out what the latest AMP runtime version is, and tells you, with a little bit of in-memory caching so it's not making that request every time you call it. The optimizer contains a transformHtml function which takes valid AMP HTML and a couple of arguments and transforms it into the optimized HTML.

So how can we use this optimize function in our app? Well, we have our Express endpoint, and we can essentially copy-paste it into another endpoint; this is going to be our canonical page instead of our AMP page. Here you'll notice the differences: we dropped /amp from the URL, because now this is the canonical page, so it's just going to be /:region. We also await an optimize call when rendering the page, because we're just wrapping our previous render call with the optimizer and passing in the AMP URL. It's important to pass in the AMP URL: when you render an AMP page, you need a link rel="canonical" that points to the canonical version of the page on your site, and by passing this into the optimizer we tell it to invert that. Since we're turning this into the canonical version of our page, it needs to turn that link into a link out to the AMP page.
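
A sketch of that canonical endpoint, based on the API as described in the talk; the fetchAndRenderRegion helper is an assumption wrapping the earlier Firestore fetch plus renderRegionPage, and the exact AMP Toolbox package names and signatures may have changed since this talk:

    import { app } from './app';
    import { fetchAndRenderRegion } from './render'; // hypothetical helper: Firestore fetch + renderRegionPage
    // API as described in the talk; typings and signatures may differ in later releases.
    const ampOptimizer = require('amp-toolbox-optimizer');

    app.get('/:region', async (req, res) => {
      const ampHtml = await fetchAndRenderRegion(req.params.region);
      // Passing ampUrl makes the optimized page link out to its AMP counterpart
      // instead of carrying the AMP page's rel="canonical" link.
      const optimizedHtml = await ampOptimizer.transformHtml(ampHtml, {
        ampUrl: `/amp/${req.params.region}`,
      });
      res.set('Cache-Control', 'public, max-age=300, s-maxage=1200');
      res.send(optimizedHtml);
    });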

Now when we apply the optimizer and compare that to our previous CDN result, that's a pretty marked improvement: we get a full second of improvement over the CDN time, and now we're painting in one second, which is really fast on 3G. What's interesting, though, is that if we go back and compare this to the AMP cache, we're actually beating that as well. So by using the AMP optimizer, you can actually beat the AMP cache when rendering your page on your own domain.

Now, of course, I hope I've drilled in at this point that CDNs are really important for performance. All of the performance benefits from the optimizer that I just showed, we only get those when serving from the CDN. So if we want to truly maximize performance, we need to serve from the CDN as often as possible. That's where evented rendering comes in. For a dynamically rendered page, we had Cloud Firestore and Cloud Functions behind Firebase Hosting to generate our HTML on the fly. Now we're going to add one more Firebase product, Cloud Storage, and what we're going to do is pre-render content on demand when the data changes, and store it in Cloud Storage as a flat file to deliver later.

Here's how it works. When data changes in my Firestore database, that triggers a Cloud Function, because you can do that with Cloud Functions; it's really great. The Cloud Function then does two things: first, it renders the HTML and writes it to a Cloud Storage file; second, it sends a request to purge the Hosting cache for that specific URL. What this enables us to do is have the rendering happen when the content changes, not every time it's requested. So that all happens when the data changes, but what about when the user requests the site? The user makes a request, and again we proxy to a Cloud Function, but this time, instead of calling out to Firestore and grabbing all the documents, we just do a transparent read-through proxy straight to Cloud Storage. We say: I already have this stored as a flat file, so I'm just going to serve that up. In fact, this mimics a lot of what the Firebase Hosting origin does itself when you deploy a static site to Firebase Hosting: the result is that all subsequent requests are served by the CDN. And because we invalidate the cache on the CDN whenever the content changes, we can set this up as an essentially indefinite server-side cache, which means we get that CDN performance more often.

The benefits here are pretty clear. You still have fresh content available instantly, just like with dynamic rendering. Unlike dynamic rendering, though, we only pay the cost of rendering when the data changes, not when the user requests the site. So if my data changes once a day, I'm only paying that cost once a day, instead of every time a user comes to my site or every time the cache expires. You can also cache until the content changes, and because of the performance benefits of CDNs, this is kind of the critical piece: this is what lets you have static-like performance even though you're rendering content on demand, in an evented way. The only real downside is that this may be unfamiliar territory if you're not used to Cloud Functions and piping things through events.

It may seem a little weird or scary, but we're going to walk through some code and hopefully it will get less so. Here is where I set up the three functions I use to listen for when I need to re-render a page, and it's three because, remember, I need to re-render the page whenever any data that appears on it changes, and for Escapable that's three different things: if the region document changes, I need to re-render the page; if a location in the region changes, I need to re-render the page; and if a room in the region changes, I need to re-render the page. For the region it's really simple: whenever it changes, I fire off a render. For the location and room changes, I pull the region out of the changed document and say, okay, this is the region I'm going to need to re-render. Also pretty simple.
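
A sketch of those three triggers; the collection paths, field names, and the updateRegionPage helper are illustrative assumptions (and deletes aren't handled here):

    import * as functions from 'firebase-functions';
    import { updateRegionPage } from './renderOnChange'; // hypothetical module with the re-render logic

    // Re-render whenever the region itself, or a location or room belonging to it, changes.
    export const onRegionChange = functions.firestore
      .document('regions/{region}')
      .onWrite((change, context) => updateRegionPage(context.params.region));

    export const onLocationChange = functions.firestore
      .document('locations/{location}')
      .onWrite(change => updateRegionPage(change.after.get('region')));

    export const onRoomChange = functions.firestore
      .document('rooms/{room}')
      .onWrite(change => updateRegionPage(change.after.get('region')));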

Now, the actual updateRegionPage is just rendering strings, basically. We call the same render function we used to render the HTML before, but now we call it with data we fetch when the change fires; we're essentially doing the same thing, just triggered when the data changed. We then also generate the optimized HTML, and we store both of those in Cloud Storage with a writeAndPurge function. The writeAndPurge function first uses the Firebase Admin SDK for Cloud Storage to save the file into Cloud Storage. And then I'm going to tell you a little trick: by making a request to Firebase Hosting with the PURGE method, you tell the CDN to purge the content for that URL, and the next request will go through to the origin. Basically, you can send this request and that causes the next request to serve fresh content. Now, fair warning: this isn't exactly an official API and it might change in the future. We're looking at how we can incorporate this more officially into the Firebase Hosting product, but it really enables some powerful use cases, so I wanted to give you a sneak peek.
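
A sketch of that writeAndPurge helper; the Hosting domain, storage paths, and the use of node-fetch are illustrative assumptions, and the PURGE request is, as noted above, not an official API:

    import * as admin from 'firebase-admin';
    import fetch from 'node-fetch';

    const HOSTING_ORIGIN = 'https://escapable.example.com'; // hypothetical Hosting domain

    // Save the rendered HTML to Cloud Storage as a flat file, then ask the Hosting CDN
    // to drop its cached copy of that URL so the next request picks up fresh content.
    async function writeAndPurge(urlPath: string, html: string): Promise<void> {
      await admin.storage().bucket().file(`rendered${urlPath}.html`).save(html, {
        metadata: { contentType: 'text/html' },
      });
      await fetch(`${HOSTING_ORIGIN}${urlPath}`, { method: 'PURGE' });
    }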

Now that we've stored our content and purged the cache, we need to actually serve it up. So back in our Express app, we write a simple bucket proxy. All it does is, again using the Storage Admin SDK, create a read stream and set the Cache-Control header, and this time you'll notice that the server max-age is a large number: it's actually a year in seconds. We're saying: cache this on the server for a year. The reason we don't care is that we're going to proactively invalidate that cache whenever the content changes. Then, finally, we just pipe the content from the read stream down as the response, and that's all we have to do.
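
A sketch of that bucket proxy; the storage path is an illustrative assumption:

    import * as express from 'express';
    import * as admin from 'firebase-admin';

    const app = express();

    // Transparent read-through proxy: stream the pre-rendered flat file out of Cloud Storage.
    app.get('/:region', (req, res) => {
      // Cache on the Hosting CDN essentially forever (one year); this URL is purged
      // explicitly whenever the underlying data changes.
      res.set('Cache-Control', 'public, max-age=300, s-maxage=31536000');
      res.type('html');
      admin.storage().bucket()
        .file(`rendered/${req.params.region}.html`)
        .createReadStream()
        .on('error', () => res.sendStatus(404))
        .pipe(res);
    });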

So that was a lot of steps, and I'd like to show you how it works in practice, so can we jump back to the demo, please? Alright, here we go. I've got my site, and I'm going to jump over here into my Firestore database, where I have a new room that I've been working on. The only thing left to add is the region, so region: SFO. Add that there. Now I'm going to jump over to my function logs, and within a couple of seconds we should see that the change to my Firestore database has caused the on-room-change function to trigger. Any second now. There we go. You can see the on-room-change function started executing and did some stuff: we rendered an AMP and an optimized version, we wrote an /SFO.html and an /amp/SFO.html, and then we purged both of those URLs, and it's done.

Now, if we jump over to Cloud Storage, you can see that I have the structure of my site as HTML in Cloud Storage. I'll refresh this really quickly, and once it loads back up I'll click on SFO, and you can see that it was modified just now, so it's updated here. But of course that's not super impressive if the website itself doesn't update. So let's try there: hit refresh, and there we go. Now we have a new room, Flames of the Firebase, that just appeared on our site. And if I switch over and look at the dev tools here, you can see that the response was served up in 841 milliseconds, which isn't super fast; I mean, it's fine, but you can also see that it was a cache miss, because this was my first request after I had purged. If I go back, reload again, and jump back here, now we have a response time of about 10 milliseconds, and it's going to stay that way on this edge server until my content changes. Can we go back to the slides?

Now, even when just comparing the origin performance, and not whether or not it's being served by the CDN, I found that evented rendering has about a 32% speedup over dynamic rendering, mostly because you're just proxying through to a flat file in Cloud Storage. That's notable, but again, it's not the most important thing. The important thing is that we get this optimized, CDN-level performance almost all of the time; it's only just after the content changes, on the first request to each edge server after that, that we ever get anything other than this super-optimized, super-fast performance.

So hopefully this has given you a few ideas to go out there and try for yourself. And to go back to the original question, I don't know if I've completely answered it, you know, what makes a website fast, but I do have an answer, and the answer is super cheesy, because what makes a website fast is you. Web experiences aren't fast because of magic; they're fast because developers care about performance and work hard to make it better. We can give you some of the tools and some of the technology that help you do this, but at the end of the day, it's your elbow grease, your digging in and caring about performance and making it work, that is going to give your users the experience they deserve.

That's all I've got today. I hope to get your feedback on this session at google.com/io/schedule. I also want to call out before I go that Jeff Posnick is giving a talk in an hour called "Beyond single-page apps: alternative architectures for your PWA", which also has tons of interesting things about building performant PWAs on Firebase Hosting with Cloud Functions, and it's a totally different approach than I took here. So if you want even more ideas to help you get started, I'd highly recommend it. That's all I've got for today. I'll be heading over to the Firebase sandbox directly after this. Thanks for taking the time.
