Build blazing fast web content sites with Firebase and AMP (Google I/O ’18)

MICHAEL BLEIGH: Hello. Welcome to Blazing Fast
Content with Firebase and AMP. I’m Michael Bleigh. I’m an engineer on the Firebase
Hosting and Firebase CLI. Now, I want to start
with a question. What makes a website fast? Because that’s the goal, right? No matter what kind of
experience I’m building, I want it to be fast. But to answer this question,
I first need to ask another. What do we mean by fast? Do I mean low latency,
very responsive, low time to first paint,
time to interactive? Really, I mean any of these, and
all of them some of the time. The most important
thing to remember is that fast web sites
feel fast to the user. That’s our end goal, right? That’s the only thing
that really matters. When a real user
visits our site, it should feel fast
and responsive to them. So going back to that
original question, what makes a website fast? The answer, like
with most things, is, well, it kind of depends. There are so many factors that
go into real and perceived performance, and you can
spend hours or days focusing on any one of them. So I’d like to simplify a
bit and look at performance in terms of two different
kinds of web experiences. First, let’s imagine
an email client. This is an application with a
single entry point for users. They will almost always
load it up by the same URL. Also, it requires authentication
before any kind of action can be performed. The signed out experience
for our email client is just a login page. An email client is going
to be open all day, and updated continuously
as new messages arrive. So the presented data
changes constantly. Finally, email clients
are highly interactive. Your most important
actions are being able to click into
messages and read them, being able to reply
to them, being able to compose new
messages, so you’re sort of constantly navigating
around the interface and performing new actions. Now let’s contrast
that to a content site, so that could be a news site,
a resource, a blog, anything where the primary reason
for someone to visit is to read the content
that’s available there. Unlike the email client,
the primary entry point for a content site
is likely to be a deep link to a
particular article that was posted by social media
or discovered through search. Content sites are
publicly accessible. They don't require
a login to view. While new articles
may be created on a regular basis,
once created, they’re generally going to
stay stable with a few updates here and there. And again, reading and scrolling
are our primary interactions here. We aren’t as concerned with
being able to do other actions than just look at the page,
see what’s on the page, scroll down to see more
of what’s on the page. So what do we do to make
an email client fast? Well, here we’re going to
follow the best practices for progressive web apps. We can build with the app
shell pattern, where a service worker caches all of the
JavaScript, HTML, CSS, et cetera, that's needed
to render our site. And then we can use API calls
to fetch data and render it client side. Now, this is actually
a fascinating topic and I could go into
detail, but it’s also not what we’re here
to cover today. I want to talk about how
to make content sites fast, and it’s actually pretty
different from what you’re going to do in the sort
of rich client experience. Fundamentally,
for content sites, we have to optimize
for first page load. This means that our
ideal and common case is when someone is visiting
the website for the very first time. They don’t have anything in
their cache for our site, they don’t have anything
at all about our site on their computer, and
that changes how we need to optimize performance. This means we need as few
round trips as possible before the page gets painted. Especially in poor
connectivity environments, every round trip the
browser has to make, before it displays content, is
just going to kill performance. We also need to minimize
scripts and styles that block page render. Any critical CSS that you have
should be inlined right away. Finally, we need extremely
low latency from the server. We need to be delivering
content close to the user, so that the time that it takes
for them to request a page and then actually have
the page sent back to them is as small as possible. So what does a performant
web site look like? Well, it might look like this. This is actually
incredibly performant HTML. There is no style sheet. There’s no blocking scripts. This is just a few bytes of
text that we send over the wire, and we're good to go. If this were the
'90s, I'd say let's go for it. But it's 2018, and
user expectations go a little bit further than
default browser style sheets. That’s where AMP comes in. AMP stands for
Accelerated Mobile Pages, and is an open source
library created by Google specifically to provide a
foundation for fast content sites. AMP is fast by default,
because AMP stops you from doing things that slow
down your web experience. With AMP, you can’t do any
custom JavaScript at all. CSS has to be inlined. You’re not allowed to load
any style sheets externally. Instead, AMP adds functionality
and interactive behavior through specially approved
custom elements that are part of the AMP project. This also means
that AMP pages can be efficiently cached by search
engines or social networks, and preloaded in advance
of user interaction. So when you do a
Google search and you see those little lightning bolt
icons, and then you tap on one and it loads instantly,
that's because Google has cached the AMP
content, and as soon as you made the search query,
it started loading that AMP content in the background,
so that it was already ready to go by the time you
tapped a link in search. So this content has been
preloaded and served by the AMP cache. Now, it is important
to remember– because it can sound a little
scary up front, like, oh, AMP is a
whole different system built on top of HTML. It's not radically different. But for the most part, when
you’re building an AMP page, you’re just writing
standard HTML. So let’s take a look at sort
of the boilerplate AMP page, and you can see that
this looks pretty much like any other HTML page. The only difference is we have
that cool little lightning bolt in the HTML tag. And where the dot dot dot in the
AMP boilerplate style tag is, there would be just
a number of styles that have to be included
by AMP by default, so that it can sort of do
the right things in terms of displaying content by
loading the AMP runtime. And while AMP is
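For reference, the boilerplate page being described looks roughly like this. The title and canonical URL are placeholder values, and the required boilerplate styles are elided here the same way the slide elides them with its "dot dot dot":

```html
<!doctype html>
<html ⚡>
  <head>
    <meta charset="utf-8">
    <title>Escapable</title>
    <link rel="canonical" href="https://example.com/">
    <meta name="viewport" content="width=device-width,minimum-scale=1,initial-scale=1">
    <!-- Required AMP boilerplate styles go here (elided, per the slide). -->
    <style amp-boilerplate>/* ... */</style>
    <!-- The AMP runtime: the only script allowed to load on an AMP page
         besides approved custom elements. -->
    <script async src="https://cdn.ampproject.org/v0.js"></script>
  </head>
  <body>Hello, AMP</body>
</html>
```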
mostly just HTML, that doesn’t mean
that it’s only HTML and only sort of the standard
tags you get with the browser. AMP is HTML with
some custom elements that are designed to
bring you modern niceties without sacrificing performance. So let’s take a look at
just a couple of those. First, we have AMP image. Now the AMP image
tag is actually baked into the AMP
runtime, so you don’t have to load anything
extra or special to get this. This just comes with. And the first thing that
it does is this controls the loading of the image to
ensure maximum efficiency. So when you load
an AMP page, it’s not necessarily going
to load every image on the page immediately
the way it would if you were using a
standard image tag. Instead, it will sort of
do its own optimizations to figure out when’s the right
time to render something. It might wait until you
scroll it into view, it might wait for other things. The AMP runtime
handles that for you, so you don’t have
to think about it. Next, you’ll notice this little
layout responsive attribute that's on the AMP image. Layout responsive
tells AMP that this element should
fill the horizontal width of its container and then
match the height based on what you supply in width and height. So normally, while you might
have to specify exact image dimensions, with AMP, if you’re
using the responsive layout, you can specify an aspect ratio. So here, I just
say it’s 1.33 to 1. Another thing that AMP
gives you with AMP image is the ability to do
placeholders for any element. So here, inside my
first AMP image, I have another AMP image
with a placeholder attribute, and the source for
that is just a data URI with a super low resolution
version of the image that I want to load. So essentially, on the server,
I inlined this super-low-resolution
data URI, and then that gets loaded
almost immediately, because it’s small and
tiny, and the AMP runtime says, OK, let’s do this. And then when my higher
resolution image is loaded, it’ll just instantly
swap it, snap in and replace the placeholder. So this is a great way to give
sort of improved perceived performance, where you can
get an idea of what the page content is going to look
like before it’s 100% loaded. Some elements aren’t baked
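Putting those pieces together, the amp-img markup being described looks something like this. The image URL and the 400×300 dimensions (a 1.33:1 aspect ratio) are hypothetical, and the base64 data is elided:

```html
<amp-img src="https://example.com/room.jpg"
         width="400" height="300"
         layout="responsive"
         alt="Escape room photo">
  <!-- Inline low-resolution preview; AMP swaps it out automatically
       once the full image has loaded. -->
  <amp-img placeholder
           src="data:image/jpeg;base64,..."
           width="400" height="300"
           layout="responsive"
           alt="Escape room photo"></amp-img>
</amp-img>
```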
into the AMP runtime, and you have to load them
separately via a script tag. So here’s a script tag to
load the AMP font element, and you can see that this
is really straightforward. It just has a custom
element attribute that describes what the
element is going to be named, and then it points to the
AMP CDN to load the script. You also notice that
it’s an async tag, and this is true of all
AMP custom elements, because, like I said,
we’re trying to minimize blocking scripts and styles. So all AMP custom elements
are loaded asynchronously. The only blocking
script in an AMP page is the AMP runtime itself. Now, AMP font is
actually pretty cool. What it lets you do
is essentially control the font loading
behavior of the browser and optimize it for performance. So essentially, you can add a
timeout attribute to AMP font, and that says, if this
much time has elapsed since the page started
loading and I still don’t have this
font loaded, then I should abandon loading
it and do something else. And you can set that to
zero to essentially say, if this font isn’t
already loaded and on the user’s
system, then I’m just going to not try to load it
and do something else instead. And that something else is
that, when the deadline expires, AMP will add a custom CSS
class to your document that you can then use to
apply additional styles or switch things around,
change font sizes, whatever you need to do to fall
back to system default fonts. Now, I don’t really have time
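The amp-font setup being described might look like the sketch below. The font name and fallback class name are hypothetical:

```html
<!-- Load the amp-font extension (async, like all AMP custom elements). -->
<script async custom-element="amp-font"
        src="https://cdn.ampproject.org/v0/amp-font-0.1.js"></script>

<!-- timeout="0": use the web font only if it's already available;
     otherwise add the fallback class immediately instead of waiting. -->
<amp-font layout="nodisplay"
          font-family="My Web Font"
          timeout="0"
          on-error-add-class="use-system-font"></amp-font>

<style amp-custom>
  body { font-family: 'My Web Font', sans-serif; }
  /* Applied when AMP adds the fallback class to the document. */
  .use-system-font body { font-family: sans-serif; }
</style>
```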
to get into it in this session, but web fonts are
very expensive, from a performance perspective. If you don’t need
them, don’t use them. Or at least use tools like
AMP font to fall back quickly if they’re not loaded,
especially if you’re trying to load an external
style sheet like through Google Fonts. That introduces a blocking
script style request that you have to wait for
before your page loads. You can inline font
family to improve the performance a little bit
here, but just in general, be careful and very
deliberate when you’re using custom fonts in a
high performance content site. Now, of course, there are a lot
more AMP elements than these, and I encourage you to
explore the AMP documentation for elements that
help with everything from layout to media to
interactivity in your AMP pages. And another thing
I want to call out is that AMP is just one way
to make content sites fast, it’s not the only way. If you can’t use AMP or
you don't want to use AMP, you don't like AMP,
then don't use it, and the same techniques
that I’m going to show throughout
the rest of this talk will still largely apply
to you, and can still help to make your content
site high performance. Remember, this is what
we’re starting with and this is high
performance, so as long as you are being careful in
the way that you apply scripts and styles and all
of the nice things that the web platform has gotten
in the last several decades, then you can make a
performant experience. So let’s go back to
our original checklist for making a content
site fast, and we can see that AMP actually
helps quite a bit. It helps us optimize
for that first page load by minimizing
network round trips, reducing blocking scripts and
styles, and that kind of just leaves this last one,
minimizing network latency. So how can we tackle that? That’s where Firebase comes in. Firebase is a
comprehensive platform for building mobile
experiences, and today, we’ll be using it to
build our AMP site. Now, you may be already
familiar with Firebase as a great fit for sort
of these rich, highly interactive applications,
like our email client example from earlier. Firebase is JavaScript SDKs
for products like the real time database and Cloud Firestore
are great tools for highly interactive apps,
but Firebase can be just as powerful for building
latency sensitive content sites. So today we’re going to try to
use Firebase and AMP together, and hopefully build a lightning
fast experience for our users. Now, I really love escape
rooms, working together with my friends to solve puzzles
before the clock runs out. It’s a lot of fun. So I built Escapable,
which is a simple resource to discover escape
rooms in your area. Can we switch over
to the demo, please? So this is Escapable. As you can see,
it’s very simple. I’m just going to pick
San Francisco Bay here, and now you can see I just
have a list of escape rooms. So the top one is
Escape Google I/O, and I can scroll down and
see all the other ones that are around. For each location, I can link to
the website or get directions. And then I also
have the rooms that are offered by each
location, and I can tap those to expand it out
and see a little bit more info. So I can see that this one
is one to four players. It lasts 60 minutes, and
it’s 96% recommended. So that’s really all that
there is to this site. And it’s very simple,
but it also– you know, this has the kind of like
rich modern look that you’re looking for in a web experience,
and it accomplishes what the user has set out to
do, which is discover escape rooms in their area. So remember, whenever
you’re tackling creating an experience
of any kind, think about first what
your users need to do and how you can make that
experience the most efficient, before you dive
into other things. Can we go back to the slides? So when I set out
to build Escapable, I came up with three potential
approaches to make it fast, static compilation,
dynamic rendering, and evented rendering. Static compilation is probably
familiar to many of you. It was also the first major
use case of Firebase Hosting, Firebase's
developer-focused web hosting platform. Firebase Hosting serves static
content from a global content delivery network,
automatically provisions SSL certs for free
for custom domains, has atomic release management
for easy rollback, and also lets you
configure niceties, like doing rewrites
for single page apps. And for static
sites, compilation happens upfront on the
developer's machine. So the developer is going
to compile the assets into just HTML, CSS, whatever
else they need, and those are going to be deployed to
Firebase Hosting directly. Firebase Hosting will then
just serve those requests from its global CDN
whenever they’re requested. So this is really
straightforward. The advantages of
static sites are clear. There is zero request
time processing. You’re just serving flat files. So it can be incredibly
fast that way. Also, it’s extremely
cache efficient. Since things only change
when you do a new deploy, Firebase Hosting is able
to efficiently cache all of the content on edge
servers around the world until you do a new deploy. On the other side,
it’s not really suited well for
frequent updates, especially for user
generated content. Remember, these assets
have to be redeployed every time they change. So if you have a website
where users are constantly changing things or making
the content shift in any way, static is not necessarily
going to be a good fit. It also usually
requires some dev skills to edit a static web site,
because usually, like I said, you're editing Markdown
files on your machine and then building them with Jekyll
and deploying them, or something like
that, as opposed to using a friendly CMS-like UI. Now, there are lots of
tools that are available, and again, this
is not what we’re going to talk about today. Because if you can
use a static site, there are lots of
resources to help you get started, and I
encourage you to do so. There is literally not going
to be a more performant way than doing a static site where
you're just serving flat files, and so you can
just go do that. And in fact, the
AMP project website is a static site, hosted
on Firebase Hosting. So this is not just something
that we talk about as like, oh, maybe some
people should do it. This is something
that we do as well. But some sites are
too complex or get updated too frequently
for static compilation to be a good fit. So here, we turn to
dynamic rendering, also commonly called
server side rendering. Now, to do dynamic
rendering, we’re going to have to bring in
some additional features. We started with
Firebase Hosting, but now we’re going to need
to bring in Cloud Firestore to store the data for our site. Cloud Firestore is a
flexible NoSQL database that can scale from
weekend projects to planet scale applications. We’re also going to use
Cloud Functions for Firebase to do the actual server
side rendering with Node.js. Cloud Functions
provides serverless compute for your
Firebase project, and we can connect them
directly to Firebase Hosting. So let’s see how that works. In a dynamic rendering
world, the user requests a site from
Firebase Hosting. Firebase Hosting then
proxies that request to a Cloud Function. The Cloud Function is then
going to go out and fetch all the data that it needs
to render the page from Cloud Firestore. Once it’s done that, it’s going
to render HTML and send that back to hosting, which then
gets sent back to the user. And if we set a cache control
header on that response, then Firebase Hosting will
cache that at the edge and send that back immediately
instead of going back to Functions every time– as long as the cache
hasn’t expired. So the great thing
about dynamic rendering is that fresh content is
available immediately. Since we are rendering every
time a request comes in, we know that we’re getting
the freshest content on every request, period. It’s also a familiar
architecture for most developers. Almost everyone has built some
kind of request response web server in their
time, and so this is something that
just is easy to slot in and easy to understand. On the other hand, it’s
somewhat inefficient and compute expensive when
you really think about it. Because like I said, we are
fetching all these documents and rendering them every
single time a request comes in, when the documents aren’t
actually necessarily changing all that often. It’s also really difficult
to efficiently cache dynamic content, because
we're rendering every time and we don't necessarily know
when the content has changed, we have to make a trade
off between how long can we bear to serve up stale
content, versus when do we want to incur
the penalty of having to rerender and compute? So how do we actually do server
side rendering with Firebase? Here’s a streamlined example. As a quick note,
I built Escapable using TypeScript to take
advantage of modern JavaScript features like Async Await. And here, you can see that I
have just a pretty standard express app. I’m fetching some data,
I’m setting some headers, and then I’m rendering a page. One thing to call
out is notice that I do an await Promise.all
here, and that's so that
I’m fetching all of the data for my
page in parallel, instead of fetching one
at a time and waiting for each to complete before
kicking off the next fetch. It’s also really important
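The parallel fetch being described can be sketched like this. The three fetch stubs are stand-ins for the real Cloud Firestore queries, and the data they return is invented:

```typescript
// Stand-ins for the real Cloud Firestore queries; each resolves with mock data.
async function fetchRegion(id: string) {
  return { id, name: "San Francisco Bay" };
}
async function fetchLocations(regionId: string) {
  return [{ regionId, name: "Escape Google I/O" }];
}
async function fetchRooms(regionId: string) {
  return [{ regionId, players: "1-4 players" }];
}

// Promise.all starts all three reads at once and waits for them together,
// so total latency is roughly the slowest single read, not the sum of all three.
async function fetchPageData(regionId: string) {
  const [region, locations, rooms] = await Promise.all([
    fetchRegion(regionId),
    fetchLocations(regionId),
    fetchRooms(regionId),
  ]);
  return { region, locations, rooms };
}
```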
to think about cache control headers, just generally
in web applications, but especially for server
side rendered things on Firebase Hosting. You can see here
I have a max age equals 300, which
is saying, it’s OK to cache this in your local
browser for up to five minutes, or 300 seconds. I also have an s
dash max age set to 1,200, which says it’s
OK to cache this in the CDN for 20 minutes. Now again, this is something
that’s going to be a trade off. If your content changes
sort of at most once a day, maybe you can
afford a longer TTL. But then again, maybe you
can’t, because maybe it only changes once a day,
but when it changes, it’s super vital that
people see it right away. So that’s something you’ll have
to determine for your own app. And in this case, because I
have 20 minutes on the server and five minutes
on the client, I have sort of a
worst case scenario of a 25 minute stale page
that gets served to a user. So after that, I’m simply
rendering the content, using the data that I fetch, and
sending it back as a response, which we’ll talk more
about in a moment. Now that I have
my Express app, I need to register it
as a Cloud Function and connect it to
Firebase Hosting. So here, you can see I import
the Firebase Functions SDK, and then I export
a function using functions.https.onrequest,
and then I just pass in my Express app. So when you’re registering
an HTTPS function, one of the things
that you can do is just pass in an
express app as a handler, and that will just work. So you don’t need a special
wrapper or anything like that. You can just pass the
express app right in. And this says that when I deploy,
I want an HTTPS function called app, and that it’s
going to serve content for my express app. Then in Firebase.json,
the first thing I do is declare a public
directory, and this is where my static assets will live. And in general, just
like with static sites before, anything you can serve
as a static file, you should. So this is where
I’m putting things like the logo for Escapable,
and my manifest.json, things that don’t change
very frequently and I’m fine
redeploying the site if that’s necessary
when they do change. Then I’m rewriting all URLs
to a function called app, and this will only
rewrite URLs that aren’t an exact
match for something in my public directory. So I can safely just say,
hey, if it doesn’t exactly match a static file, then
I want to send it off to my Cloud Function to see if
it needs to be rendered there. Now, I’m not going to go super
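The firebase.json being described looks roughly like this. The `public` directory and the function name `app` match the talk; treat the rest as a minimal sketch:

```json
{
  "hosting": {
    "public": "public",
    "rewrites": [
      { "source": "**", "function": "app" }
    ]
  }
}
```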
in-depth into the rendering here. While you can use your favorite
templating library or even something like Preact
to render AMP pages, I decided to avoid libraries
altogether and just use template literals with
string interpolation, because it’s
possible, so why not. If you have user
generated content though, please be sure to properly
sanitize your data and don’t just do this,
because you’ll all have bad script
injection attacks and have a bad time, generally. But ultimately, this is just a
function that returns a string, because all I’m doing
is creating my AMP page as HTML as a string. The only other thing that I did
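If you do take the template-literal route with user-generated content, escape every interpolated value. A minimal sketch, with a hypothetical `renderLocation` helper:

```typescript
// Escape the characters that are meaningful in HTML so interpolated values
// can't inject markup or scripts.
function escapeHtml(value: string): string {
  return value
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// Hypothetical fragment renderer: every interpolation goes through escapeHtml.
function renderLocation(name: string): string {
  return `<h2>${escapeHtml(name)}</h2>`;
}

console.log(renderLocation(`<script>alert("x")</script>`));
// <h2>&lt;script&gt;alert(&quot;x&quot;)&lt;/script&gt;</h2>
```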
is I put CSS in separate files, and then I just inlined
those in the AMP custom tag by loading them literally out
of my function’s directory. So again, this is like very
duct tape, but it works. So how well does this perform? Well, when hitting the
origin and performing a full fetch of all the
documents to render the page, I got a response
in about a second. Sometimes it was
a little better, sometimes it was a little
worse, but that was kind of on average. And that’s not horrible,
but it’s not great. But if you compare that to when
the CDN was serving content, we had 21 milliseconds as
the total round trip time, including delivery. That’s a pretty
marked difference. And if you look
at the filmstrip, you can see that
difference clear as day. So here we have, essentially,
a one second difference that is the difference between
the CDN and the origin. So this is 3G perf
on the origin. I got my page to paint
in about three seconds. On the CDN, it’s in about two. Now, interestingly, if
we look at the version of the page in
Google’s AMP cache, we knock another
800 milliseconds off of the meaningful paint time. That seems unfair,
right? How does the AMP cache
get to be faster than my site? Well, it’s cheating, kind of. The AMP cache does some
optimizations of AMP pages when it loads them
into the cache that make it perform better by
doing some specific things that optimize and further reduce
those render-blocking scripts and styles. But what about users who
come to my site directly? I want that kind of
performance on my site. Well, there’s a pretty new open
source library, called the AMP Toolbox, and the app team
open sourced some tools that let us mimic the same
optimizations that the AMP cache does on our own server. It does this by removing
the boiler plate, locking AMP to a
specific version, and inlining critical CSS. Also, once it does
this, the page becomes not valid AMP, which
is kind of interesting. So by optimizing the AMP,
you make it invalid. So the way that you
approach this is you serve both the
unoptimized AMP page, which can be slurped up by
the AMP cache and used there. And then you also serve
the optimized AMP page as the canonical
link that people are going to go to when
they visit your site. And we do this by
installing two packages in our project, AMP toolbox
optimizer and AMP toolbox runtime version. Runtime version
literally just goes out and figures out what the
latest runtime version of AMP is and then it tells
it to you, and it does a little bit
of in-memory caching so it's not making a network request
every time you call it. The optimizer contains a
transform HTML function which essentially takes valid
AMP HTML, a couple arguments, and then transforms it
into the optimized HTML. So how can we use this
optimized function in our app? Well, we have our
express end point, and we can essentially
just copy-paste that into another endpoint that’s
going to be our canonical page, instead of our AMP page. And so here, you’ll
notice the differences. We dropped AMP from the
URL, because now this is our canonical
page, so it’s just going to be slash region name. We also have a weight
optimize in the renders page, because we’re just wrapping
our previous render block with an optimized call and
we’re passing in the AMP URL. Now, it’s important to
pass in the AMP URL, because when you
render an AMP page, you need to have a
link rel canonical that points to the canonical version
of the page on your site. And by passing this
into the optimizer, we tell it to invert that,
because we’re turning this into the canonical
version of our page, so it needs to turn that
link back into a link out to the AMP page. Now, when we apply the optimizer
and we compare that to our previous
CDN result, that's a pretty marked improvement. We get a full second
improvement over the CDN time, and now we're painting
in one second, which is really fast on 3G. What’s interesting
though is now, if we go back and compare
this to the AMP cache, we’re actually
beating that as well. So by using the
AMP optimizer, you can actually beat the AMP
cache in rendering your page on your domain. Now, of course, I hope
that I’ve drilled it into you, at this
point, that CDNs are really important
for performance. All of these performance
benefits from the optimizer that I just showed,
we only get those when we’re rendering from the CDN. So if we want to truly
maximize performance, we need to render from the
CDN as often as possible. That’s were evented
rendering comes in. For a dynamic rendered page,
we added Firestore and Cloud Functions to Firebase Hosting
to generate our HTML on the fly. Now we’re going to add one
more Firebase product, Cloud Storage, and what
we’re going to do is we’re going to pre-render
content on demand when the data changes and store it
in Cloud Storage as a flat file to deliver later. Here’s how it works. When data changes in
my Firestore database, that triggers a Cloud
Function, because you can do that with Cloud Functions. It's really great. The Cloud Function
then does two things. First, it renders the
HTML and writes that to a Cloud Storage file. And second, it sends a request
to purge the hosting cache for that specific URL, and
what this enables us to do is have the content change
when the content changes, not every time it’s requested. So that all happens
when the data changes, but what about when
the user requests the site? So the user makes a
request, and again, we're going to proxy
to a Cloud Function, but this time, instead of
calling out to Firestore and grabbing all the
documents, instead, we’re just going to do a
transparent read-through proxy straight to Cloud Storage. So we're going to
say, I already have this stored as a flat
file, so I’m just going to serve that up. And in fact, this
is mimicking a lot of what the Firebase
Hosting origin does itself when you deploy a static
site to Firebase Hosting. The result is that all
subsequent requests are going to be served up by
the CDN, and because we are invalidating
the cache on the CDN whenever the content changes,
we can set this to be, essentially, an indefinite
server cache, which means that we’re going to
have that CDN performance more often. So the benefits here
are pretty clear. You still have fresh
content available instantly, just like you do with
dynamic rendering. Unlike dynamic rendering though,
we only pay the cost of render when the data changes, not when
the user requests the site. So if my data
changes once a day, then I’m only paying
that cost once a day, instead of every time a
user comes to my site, or every time the cache expires
when a user comes to my site. You can also cache until
the content changes. And because of the
performance benefits of CDNs, this is kind of
the critical piece. This is what lets you have
static-like performance even though you're rendering
content on demand and in an evented way. And the only real
downside here is that this may be kind
of unfamiliar territory. If you’re not super familiar
with Cloud Functions and sort of piping things
through events, this may be a little weird
or scary to you. But we’re going to
walk through some code, and hopefully it’ll get less so. So here is where I set
up the three functions that I use to listen to when
I need to rerender the page, and it’s three,
because remember, I’m going to need to rerender the
page whenever any data changes on the page that I care about. And so for Escapable, that’s
three different things. If the region document changes,
I need to rerender the page. If a location that’s
in the region changes, I need to rerender the page. And if a room that’s
in the region changes, I need to rerender the page. So for the region,
it’s really simple. Just whenever it changes,
period, I’d fire off a render. For the location and
the room changes, I pull the region
out of the document and say, OK, this is
the region that I’m going to need to rerender. Also pretty simple. Now, the actual
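That routing logic can be sketched as a pure function. The document shapes below are hypothetical; in the real app this kind of logic runs inside the three Cloud Functions Firestore triggers:

```typescript
// Hypothetical document shapes for the three kinds of changes that matter.
interface RegionDoc { kind: "region"; id: string; }
interface LocationDoc { kind: "location"; region: string; }
interface RoomDoc { kind: "room"; region: string; }
type ChangedDoc = RegionDoc | LocationDoc | RoomDoc;

// A region change re-renders that region's page; location and room changes
// re-render the page of the region they belong to.
function regionToRerender(doc: ChangedDoc): string {
  return doc.kind === "region" ? doc.id : doc.region;
}
```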
update region page is just rendering
strings, basically. So we call the same
render function that we use to render
the HTML before, but now we call it with
the data that we're fetching; we're just triggering
it at a different time. So we’re essentially
doing the same thing, but triggering it
when the data changed. We’re then also generating
the optimized HTML, and then we’re storing both
of those in Cloud Storage with this write
and purge function. And the write and purge
function, first, we just use the Firebase Admin SDK for
Cloud Storage to save the file in
the storage bucket. And then I'm going to
And then I'm going to tell you a little trick. By making a request to Firebase Hosting with the PURGE method, you actually tell the CDN to purge the contents, and then the next request will go through to the origin. So basically, you can send this request, and that'll cause the next request to serve fresh content. Now, fair warning: this isn't exactly an official API, and it might change in the future. We're taking a look at how we can incorporate this more efficiently into the Firebase Hosting product, but it really enables some powerful use cases, so I wanted to give you kind of a sneak peek.
Now that we've stored our content and purged the cache, we need to actually serve it up. So back in our Express app, we just write a simple bucket proxy. All that does is, again using the Storage Admin SDK, create a read stream and set the Cache-Control header, and this time you'll notice that the server max-age is a large number. That's actually a year in seconds. So we're saying: cache this on the server for a year. And the reason we don't care is that we're going to proactively invalidate that cache when the content changes. Then, finally, we just pipe the content from our read stream down as the response, and that's all we have to do.
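Here is what that proxy can look like. The Express wiring and Storage client are sketched only in comments (they need express and @google-cloud/storage); the runnable part is the Cache-Control value, where one year is 31,536,000 seconds. The route, bucket name, and browser max-age are assumptions:

```javascript
// s-maxage controls shared caches (the CDN); max-age controls browsers.
// A huge s-maxage is safe here because the evented renderer proactively
// purges the CDN whenever content changes.
const YEAR_IN_SECONDS = 60 * 60 * 24 * 365; // 31536000

function cacheControlValue() {
  return `public, max-age=300, s-maxage=${YEAR_IN_SECONDS}`;
}

// Sketch of the Express route (assumed names, needs express +
// @google-cloud/storage):
//
// app.get('/:page.html', (req, res) => {
//   res.set('Cache-Control', cacheControlValue());
//   storage.bucket('rendered-pages')
//     .file(`${req.params.page}.html`)
//     .createReadStream()
//     .on('error', () => res.sendStatus(404))
//     .pipe(res); // stream the flat file straight down as the response
// });

console.log(cacheControlValue()); // prints: public, max-age=300, s-maxage=31536000
```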
of steps, and I’d like to show you how it works
in premise and in practice. So can we jump back
to the demo, please? All right. So here we go. I’ve got my site, and I’m
going to jump over here into my Firestore database. And I have a new room
that I’ve been working on. The only thing that’s
left to add is the region. So region SFO, add that there. Now I’m going to jump over to
my functions logs, and within a couple of seconds, we should see that the change to my Firestore database has caused the on-room-change function to trigger – any second now, there we go – and so you can see on-room-change has started executing. And it did some stuff: we've rendered an AMP-optimized version, we wrote AMP/sfo.html and sfo.html, and then we purged both of those URLs. And it's done. Now, if we jump over
to Cloud Storage, you can see that I have the
structure of my site in HTML in Cloud Storage. I’m going to refresh
this really quickly. And once it loads back up,
I’m going to click on SFO and you can see that this
was modified just now. So it’s updated
here, but of course, that’s not actually
super impressive if the website itself
doesn’t update. So let’s try it there. So I’ll hit refresh,
and there we go. Now we have a new room,
Flames of the Firebase, that just appeared on our site. And if I switch over and
look at the dev tools here, you can see that that response
served up in 841 milliseconds, which isn’t super fast. I mean, it’s fine. But you can also see
that that was a cache miss, because this was my first
request after having purged. If I go back and reload again– jump back here– now
we have a response time of 10 milliseconds. And it’s going to stay that
way on this edge server until my content changes. Can we go back to the slides? Now, even when just comparing
the origin performance, and not whether or not it’s being
served up by the CDN, I found that the
evented rendering has about a 32% speedup
over dynamic rendering, mostly because
you’re just proxying through to a flat file in GCS. And that’s notable,
but again, it’s not the most important thing. The important thing is that
we get this optimized CDN level of performance
almost all of the time. And it's only just after our content changes, on the first request to each edge server after that, that we ever get anything other than the super-optimized, super-fast performance. So hopefully this
has been– this has given you a few ideas to
sort of go out there and try it for yourself. And to go back to the
original question, I don’t know if I’ve completely
answered it, you know, what makes a website fast? But I do have an
answer, and the answer is super cheesy, because what
makes a website fast is you. You do. Web experiences aren’t
fast because of magic, they’re fast because developers
care about performance and work hard to make it better. So we can provide
you some of the tools and some of the technology
that help you do this, but at the end of the day,
it’s your elbow grease and you digging in, and
caring about performance and making it work
that is going to give your users the experience
that they deserve. That’s all I’ve got today. I hope to get your
feedback on this session at Google.com/IO/schedule. I also want to call
out, before I go, that Jeff Posnick is giving
a talk in an hour called Beyond Single Page
Apps, Alternative Architectures for Your
PWA, which also has tons of interesting
things about building performant PWAs on Firebase
Hosting with Cloud Functions, and it’s a totally different
approach than I took here. So if you want to sort
of get even more ideas to help you get started,
I’d recommend that highly. That’s all I’ve got for today. I will be heading over
to the Firebase sandbox directly after this. And thanks for taking the time. [APPLAUSE] [MUSIC PLAYING]
