Cache Rules Everything
Caching is something most developers take for granted, but experience tells me time and time again that most developers also don’t understand how to configure their caching rules safely, correctly, or effectively. Do you know what no-cache means? What does the Pragma header do? What is the difference between Last-Modified and ETag? Expires and Cache-Control? You will soon.
In this talk, we’ll remove the noise, get rid of everything we don’t need, and then step through a series of real-life scenarios to work out how to solve almost any caching situation with a series of questions.
Transcript
(upbeat music) I'm gonna talk today about the very humble HTTP cache or browser cache.
It's a really big talk, so I'm not gonna have much of a fanfare or intro.
I thought this talk would be a really good fit for a conference called State of the Browser.
I wrote this talk for this event.
Every browser in the world pretty much has a browser cache.
But one problem I find, certainly anecdotally in my work, is that because it's there and it's omnipresent, people just take it for granted.
They assume it works out of the box, they assume it works always, and I see a lot of people simply getting things wrong, either misconfiguring or failing to configure proper browser caching and completely missing out on the benefits.
So by way of a very honest show of hands, who in the room thinks they've genuinely got a pretty good grasp of browser caching, caching headers, and how to configure and set things up properly?
That's pretty good.
Lot of confidence in the room.
Who thinks it's kind of scary, perhaps contradictory, a bit confusing, what do we need, what do we not need?
That was me until fairly recently, to be completely honest.
Right, okay.
There's a lot of conflicting information out there.
There's a lot of different ways of managing or achieving the same thing.
What I want to do in this talk is kind of distill it.
One of the first things we'll do is get rid of everything we don't need, and then get to a place where we can comfortably set caching up very effectively, very safely, and very easily.
For those in the room who are thinking, My God, an entire talk about browser cache, that sounds dull.
You're absolutely right.
I hope you enjoy it.
(audience laughing) I did this defensively, 'cause we had a few slides issues.
If you want to take a picture of that real quick, we've touched wood, we've been fine.
That's not wood.
We've been fine, but in case we get any slide issues, they're already online.
So grab a picture of that.
Hopefully, looks like we're not gonna need it.
So yeah, the only intro I'm gonna do is, I'm Harry.
I'm a consultant performance engineer, a web performance engineer from the north of England.
What that basically means is I help companies like these find and fix site speed issues.
Now this is a very immodest slide because this is listing my sort of household name client list, but the nice thing about this talk is that anyone who has a website, anyone at all that has a website, will benefit from setting their caching headers up properly.
So this is not a thing that's reserved for the biggest of the big.
Anyone can make use of the content in this talk.
There's an adage in my industry, the webperf industry that simply states the best request is the one that is never made.
And that's certainly true.
If you get your caching headers set up properly, you can completely zero out the network cost of requests.
That's magical.
There's a huge over-focus, in my opinion, in my industry, of looking at just the first visit to a webpage, cold cache, cold starts.
While that is very important, because at some point every visitor is a first-time visitor to your site, it completely neglects anyone who's returning, or the far more common use case, somebody traversing your website.
Anyone who's just going from page to page could really reap a lot of benefit from you getting this stuff set up properly.
However, that's the key.
We have to set things up properly and that's entirely why I've written this talk.
It's gonna be way, way simpler than you probably imagined.
Before we dive too deeply, I want to go through some key concepts.
Those of you in the audience who get this next section, please don't feel patronized.
All I wanna do is make sure we've got a shared understanding of what we're about to discuss, because even subtly misunderstanding any part of this next section could completely change how you perceive this talk.
So it's very important we're all kind of on the same page.
The first, and probably most obvious, concept: I wanna talk about caching in general.
And sort of the sister topic that goes hand in hand is revalidation.
These are the two things you need to solve.
How do we cache a file, and how do we find out if we need to update that file afterwards?
So caching is simply, how long can I reuse this file without checking for updates?
And here's the key bit, without checking for updates.
We use this file over and over, regardless of whether it has changed on the server.
Which means the flip side is revalidation is how do I check that a file has changed after that cache time limit is up?
And again, this is the key bit, after the cache time limit is up.
One of the most common expectations and misunderstandings I see among developers is that they somehow believe that they can cache a file, but it will also always be up to date.
And that's simply not possible.
If you've told a browser to store a file for a week, and you change it after one hour, that's on you.
Your visitors, your users will see the same file for the next six days and 23 hours, because that's what you've told it to do.
Caching, if you think about it, is a way of stopping the client talking to the server, so you would not expect to be able to push updates to a file.
Therefore, it's vitally important you get things set up correctly.
Don't cache something for a year if you're likely to change it daily.
Therefore, revalidation should not happen while a file is cached.
You should not expect to be able to send updates to a cache without the expiry having been met.
If this is happening to you, you've actually run into a bug and you've got something potentially conflicting.
I was working on a project recently, the developers were certain they'd set their caching headers up properly, but they'd got another conflicting header, which meant they were constantly revalidating files that they thought they were caching for potentially forever.
The next concept, fresh and stale.
So fresh is a file that is in cache but within its expiry date.
Fresh does not mean the file is up to date.
All fresh means is the file has been cached and the cache is allowed to recycle it, it's allowed to reuse it.
It has nothing to do with the up-to-dateness of the content.
All this is to do with is the file is in cache and can be reused.
Stale is a file that is also in cache but has passed its expiry date and needs revalidating.
So the key concept here is there is a third state, which is the file is on the server and changed, but the cache has no idea about it.
That third state doesn't have a name.
Fresh is not the most up-to-date version of a file, it's just the file is currently cached and still considered in date.
Next concept, request and response, and hopefully as web developers, we should be very familiar with this concept, but request and response in a very simplified diagram would be, in this case, a desktop computer is asking for a file from a server on the internet, a response comes back.
The request response lifecycle is very, very simple, but the pedant in me wants to point something out.
So far I've been saying the word file, file, file, file.
A response isn't the same as a file, they're not necessarily the same thing.
A response could be a 404, which is literally the absence of a file.
What I'm gonna try and do now through the talk is, instead of saying the word file, I'm gonna say the word response, which brings me nicely onto the next section.
We need to deal with 200 and 304 responses.
When we look at caching, these are pretty much the only two status codes you need to be aware of.
200 is, yep, I fetched a full response.
That might look like this.
I went to a server, it had a file, I brought that file back.
Interestingly, this would also yield a 200 response.
The browser says, I'm gonna check in the cache, there's a file there, I'm gonna use that file.
That's also a 200.
This is correct, 200 just means okay, nothing went wrong.
What I would personally like to see is another status code that says, Yep, we got a full response, but this one came from cache.
We don't have that, a 200 will be a 200 if it came from the server or from the cache.
304, the other type of response we want to look at, is I've checked and you can still reuse the file in cache.
Think of 304 as a renewal, right?
The file did go stale, it did go out of date, but it hasn't actually changed so we can reuse it.
304 is like a renewal.
That would look a little more like this, and what you'll notice, we've now got the phrase conditional request.
This is a request that goes out with certain extra headers, request headers, to verify whether the file's changed, and accordingly we bring back a 304 response, which says extend the cache lifetime on that file in cache, renew it basically.
We don't fetch a full file back, we just fetch a response.
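(Diagrammatically, that exchange looks something like the following sketch; the file name and validator value are made up, and If-None-Match is one of the conditional headers we'll meet later in the talk:)

```
GET /style.css HTTP/1.1
If-None-Match: "33a64df5"

HTTP/1.1 304 Not Modified
```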
Those are the key concepts.
What I want to talk about now is stuff we can just delete.
The easiest way to solve problems is to remove most of them.
So the next section is just, if this is live on your website, go and delete them.
The easiest way to start is getting rid of the stuff we don't need.
So there are a bunch of headers when it comes to caching that are simply either wrong or superseded, or just there are better alternatives.
The first one that's wrong is pragma, and this is still quite prevalent.
The pragma header looks as simple as this.
Pragma no cache, don't cache this file.
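(In full, it's just this:)

```
Pragma: no-cache
```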
Several problems with the pragma header is that one, it was never meant to be a header at all.
This is a mistake.
It's a caching header, kind of.
The spec never permitted this for use as a response header.
Pragma was only ever meant to be a request header.
This should never have happened.
If you're using this, I would advise deleting it immediately.
Pragma will also collide, which is annoying, even though it's not meant to even exist.
It will collide and conflict with more appropriate caching headers, such as cache control.
That makes this quite harmful.
The spec even says, "Pragma is not specified for responses and is therefore not a replacement for Cache-Control." You shouldn't be using this.
This should never have happened.
Also, it's designed for backward compatibility with HTTP/1.0 connections.
We're on HTTP/3 now.
This is old, old stuff.
The next one is Expires.
Expires isn't terrible, it's not bad, but it has certain flaws and there are certainly better alternatives.
An Expires header looks like this.
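(For example, with an illustrative date:)

```
Expires: Fri, 01 Jan 2027 00:00:00 GMT
```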
This is an absolute date in the future.
In our case, this is in the future.
If you were to put a date in the past, it would mean don't cache this file, because we can't cache, we can't expire a file like yesterday.
So putting a negative or a date in the past would actually force this file to be uncacheable.
This isn't terrible, this isn't bad, but there are certain flaws that we can sort of improve upon.
So this is also a caching header.
Expire at an absolute time, which means this will fail if a user changes their system clock.
It's very rare that anyone would do that.
I used to do it when I was younger, 'cause I'd have 30 day trials of Photoshop that seemed to last forever if I just went back a month.
But that would completely ruin the expires header.
Interesting thing about the expires header is because it's absolute, it means the file expires at the same time for everyone around the world.
The expires header's always set in GMT, so the same file would go stale at the same time for everyone all around the world.
It's not very harmful, it's just cache control is better, and cache control has been better since 1997.
Interestingly, if you have both, the cache control header will win, so Expires will be sort of nullified by the presence of a Cache-Control, but we might as well delete it.
Simple way of thinking about it is, cache control does everything that expires does, and way, way more.
The other header we want to get rid of, and this might come as a surprise, is last modified.
Last modified is very, very prevalent, and again, it's not bad, this isn't harmful, but we've just got better alternatives.
Last modified looks a bit like this.
It's also an absolute date.
It's usually gonna be in the past, unless you're a time traveler.
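(Something like this, again with an illustrative date:)

```
Last-Modified: Tue, 01 Aug 2023 10:00:00 GMT
```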
And what this is gonna do is say, this file was last changed on this date, and therefore you can check if there have been any updates since then, the file's probably changed.
This is a revalidation header, rather than a caching header.
It's not harmful in the same way Pragma is, but think about this.
Last modified is a proxy for whether a file has changed or not.
The file may still be the same, you may have just re-saved it later.
So you can get a lot of false positives with using last modified, which I'll go into later in the talk.
So immediately, if you're using any of those three, just get rid of them, 'cause we're gonna replace them with just two headers.
All we need to make caching work effectively is two headers.
So the headers we do want.
The first one is cache control, and hopefully you've seen cache control before.
Cache control as an example, and every example in this talk is exactly that, an example, so please don't copy and paste any numbers.
You may see a Cache-Control header that looks like this.
The private, max-age, and must-revalidate, these are referred to as directives.
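(For instance, with illustrative values:)

```
Cache-Control: private, max-age=3600, must-revalidate
```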
These are what make cache control extensible.
We can add features to cache control.
With expires, we can literally only pass in a date.
With cache control, we can invent new directives and extend it that way.
So this is a caching header.
These are the rules and conditions around caching a response and this, like I said before, superseded expires in 1997.
Cache Control has been the preferred header for over 25 years.
So that's the one we need for caching.
The other header we do need is ETag.
ETag is similar but different to Last Modified and it allows a browser to verify has the file changed.
We've got weak ETags.
And I've struggled to really come up with an analogy for what a weak ETag does, but imagine you were trying to identify a book.
You could just tell someone the book is called this, and that might work.
But what if two completely different authors have written completely different books, both with the same title.
That's the risk we run with ETag.
A weak ETag normally just hashes the metadata of a file.
So that means that you could have two completely different files that share the same metadata, and they may return the same ETag.
If you're gonna use weak ETags, you're actually probably best off just sticking with last modified.
The gold standard would be strong ETags.
This is a hash of the actual contents of the file, and this promises a byte-for-byte comparison, a byte-for-byte likeness.
Strong ETags can be quite expensive to generate, but they are the gold standard.
If you can get these sent as response headers, that's the best possible outcome.
So this is a revalidation header rather than a caching header.
Weak or strong ETags exist, and it's usually a hash of the full cache response to compare to the remote version.
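(A weak and a strong ETag respectively might look like this; the hash values are made up:)

```
ETag: W/"5e15153d-120f"
ETag: "33a64df551425fcc55e4d42a148795d9f25f89d4"
```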
Okay, let's get into the proper bulk of the talk.
Let's talk about caching.
Let's talk about strategies for caching and how to set things up.
What I'm gonna do, it's not like a proper quiz, 'cause I don't want you to participate, I'm just gonna ask you questions.
I would like to run this as an actual quiz, but we simply don't have time.
What I'm gonna do is I'm gonna pose, we're gonna learn about cache control via kind of scenarios.
I'm gonna ask you questions, certain scenarios, and perhaps just between the person sat next to you, just try and preempt or guess or answer the question.
So we're gonna give different scenarios with yes or no answers.
I want you to just see real quickly if you can work out what the answer would be.
So the first question is, can this response be cached at all?
Yes or no?
Anyone got any idea?
Just talk amongst yourselves what the headers might be for yes, we can cache this file and no, we can't.
Five, four, three, two, one.
If we can cache the file, we just need something as simple as a max age.
This max age is a value in seconds that says you can keep this file in cache and reuse it for, in this case, just an hour.
Can't stress this enough, ignore the actual numbers, don't go copy and pasting from these slides.
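(In header form, then, with that illustrative hour:)

```
Cache-Control: max-age=3600
```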
If we simply can't cache the file because it contains sensitive information or hyper real-time information, imagine, I don't gamble, but like imagine a sports betting website and you've got odds changing all the time.
If the API responses were cached, you might place a bet at odds that were 10 minutes old.
You don't ever want to do that.
You also probably don't want to cache hypersensitive information in API responses, maybe someone's transaction history.
If that's the case and you do not want to cache a file at all, you just need no store.
It really is as simple as that.
People overthink sort of uncacheability.
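(Uncacheability in full is just:)

```
Cache-Control: no-store
```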
So we've introduced two concepts here.
The first one is max age.
The first concept we're gonna look at is the max age directive.
How long can I cache this for in seconds?
A key difference here from expires is this is now relative.
Because expires named a date in the future, everyone in this room would have that file expire at the exact same time.
An interesting thing with max-age is it's relative, and it's relative to when the file entered browser cache.
So that means that if we all downloaded a file one day after each other, they would all expire a day later than each other.
So these are all relative to each individual request and response.
Give very careful consideration to these values.
If you know you change files daily, don't cache them for 10 years, right?
Be very careful with the, well, the caveat stuff, which we'll cover in a second, but your max age values need very careful consideration.
You probably want to set them up differently for different asset types.
The other thing we looked at was no store.
And there really isn't much to say about this, just don't cache this response.
It's as simple as that.
You may have seen equivalents that look a bit like this in the wild.
This is a very, very aggressive, don't do anything to this file ever.
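(A common combination along these lines; the exact directives on the slide are an assumption:)

```
Cache-Control: no-store, no-cache, must-revalidate
```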
This is fine, this is not harmful, but honestly, these other two are just redundant.
That is really all you need.
Don't store this file, whether on a CDN, your own cache.
This is a very strong directive that a cache must not persist this file.
Okay, does this response always have to be perfectly up to date?
Remember I said that we've got fresh and stale, and then this third scenario where a file might be fresh in cache, but it actually changed on production, it changed on the server.
Have we got a scenario in which we always want this file to be the most up to date possible?
What would we use for that?
Again, just have it all sort of decide amongst yourselves.
Five, four, three, two, one.
Right, what's annoying is no cache doesn't mean don't cache the file.
What no-cache means is don't use the cache first, don't use cache by default.
Go to the server, see if there is a change; if not, come back, then use the file from cache.
So it's a very confusing name, it doesn't mean no cache, but I think a directive that says go to server first and check please, is probably a bit too cumbersome.
(audience laughing) If we are allowed to, if we can afford for a file to be slightly out of date, then we just go back to just using a max age.
Again, this is just set to one hour.
So the concept we introduced here is no cache, which like I say, it doesn't mean don't cache the file, it means don't go to the cache first.
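(As a header:)

```
Cache-Control: no-cache
```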
So remember this pattern we had, where a browser can just go straight to the cache and return a 200 response?
If you have a no cache header, this simply does not work.
This pattern cannot happen.
No cache forbids the browser from going straight to cache.
It will always make a trip to the server to see if the file has been updated.
If the file has been updated, it will bring the fresh version back.
If it hasn't been updated, all that comes back is a 304 response that then releases the file from browser cache.
This means that no cache will always incur a one round trip of latency at least.
What this therefore means, and most files on the web are latency bound, most files hopefully are small enough that latency is your slow down and not bandwidth.
What this means is you won't get much performance benefit from using no cache unless you've got a very big file.
Right?
If you're gonna go all the way to the server and come back and say, "Oh, I could have reused it all along," and that file was only a kilobyte or two anyway, a small JavaScript file, you've not really got any performance benefit from this, but this is how you would achieve that holy grail of the file always being the most up-to-date possible, and you could potentially release that file from cache without having to re-download it, but it will always incur a round trip of latency.
So in some instances, this might not be any faster than just requesting the whole file from new again.
Can this response be shared?
Is this a file that many people could make use of?
For example, you probably want every visitor to your site to see the same style.css.
You probably don't want every visitor to your site to see /myaccount.
So can we share this file?
Can this response be shared?
Again, just amongst yourselves, try and decide what do you think the header might be for this?
My mic has slipped down my arm.
(audience laughing) (audience chattering) Is this a file that we could reuse to fulfill multiple requests?
A good example would be style.css or a product image on an e-commerce website.
You want every customer to see that same image, so you might as well share that response.
Three, two, one.
You'd use the public directive.
This says this can be cached in public caches.
When we talk about public caches, that could be a proxy cache, but in sort of colloquial terms for web developers, we're talking CDNs.
If this file can't be shared, if it is unique to one particular person, if it's got someone's, if someone's logged in, and the response to HTML contains their name and their account details and their maybe bank account balance, you do not want to leave that on a CDN.
So you would just complement it with the private directive.
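(Side by side, with illustrative lifetimes:)

```
Cache-Control: public, max-age=3600
Cache-Control: private, max-age=3600
```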
Interestingly, public here is redundant.
As soon as you've got a max age header or an S max age header or a must-be validate header, public becomes implied.
So you don't even need to put public.
So by default, everything is available to public caches.
Public actually comes with side effects.
So if you want many people to have access to the same response, you would use public, but then because the presence of a max age implies public anyway, you can just drop it.
Public, I don't have time in this talk, but public comes with side effects.
Public could allow an authenticated file, so someone's gone through basic HTTP auth, logged in.
As soon as you put public on even an authorized file, it will nudge it into a public cache.
So you could end up sharing logged in data between users.
Don't know why it does that, it seems like a really weird oversight, but public comes with side effects, so it's best avoided.
Public, diagrammatically, would be file comes from a server, drops into a CDN, and the CDN then fulfills the request for every subsequent visitor.
Private, on the other hand, is only the requesting client may store this response.
This is only intended for one person or one cache.
It prevents personalised or sensitive information going out to multiple endpoints or multiple users.
It's not a security thing, and it's not a suitable replacement for no store.
All this means is this file now, maybe /myaccount, might pass through your CDN, but it'll keep on going, and a copy will never remain there, and every response is fulfilled by the origin.
Okay, this is where the questions get really contrived because I've got to ask really ridiculous questions in order to boil them down into yes or no answers.
So bear with me.
Can we reuse this response for offline users even if it's stale?
And I don't know why you don't, well, I've got examples of why you might want to do that.
It is an edge case, but some interesting heuristics that browsers use around caching.
So can we reuse this response for offline users even if it's stale?
Anyone got any idea what this header might be?
(audience chattering) Three, two, one.
If we can't, we just go to this.
Sorry, if we can, if we can do it, we just use this.
Really interesting thing here is we've told the browser, this file's good for an hour, but the browser doesn't have to honor that.
What the browser can do is in certain scenarios, it can say, well, I'm offline now, so what I'm gonna do is, even though this is an hour and 10 minutes old, I'm gonna release it one last time.
The browser doesn't have to expire a file at this max age, which seems ridiculous, because that's not a max age anymore, is it?
It's more of a suggestion.
Browsers can release files from cache, even if they're stale, in certain scenarios, such as the cache can't launch the origin server.
Basically, you're offline.
If we don't want this, if it's like, no, no, as soon as this file goes stale, it must not be reused, we add the must-revalidate directive.
What this says is, after that one hour, you must go back to the server to check for changes.
If not, you just have to give like an offline error.
So must revalidate.
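(Together with an illustrative lifetime, that's:)

```
Cache-Control: max-age=3600, must-revalidate
```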
Basically, fascinating behavior with HTTP and browsers and cache is that caches are permitted to serve stale content in certain scenarios.
For example, a user is offline.
Must revalidate prohibits that behavior.
Leave must revalidate off of anything that might be useful, even if it's slightly out of date.
A recipe, for example.
A recipe blog post page, it's not gonna be hypercritical you get the most up-to-date life story of the person giving you the recipe, you might want to just say, "Look, if this person's offline or whatever, fulfill a response from cache even after it's expired."
Non-live train times.
If you're on the Tube and you've lost connection, a static page of train times might be useful to serve while it's stale, even though its max age has passed.
What you wouldn't want to do is do this on a live train timetable website and anything that could lie to someone.
So must revalidate just ensures but after the cache expiry has been met, you cannot reuse it and you must go back to the server.
So HTTP allows caches to reuse stale responses, for example, when they're disconnected from the origin server.
I'm gonna have to speed up a little bit, I'm kind of running out of time, well I'm not, but I just need to hurry up.
Can we tolerate a slightly out of date response while we perform revalidation?
These questions are getting worse.
Basically you gotta solve the riddle before you can even begin to answer it.
This is really cool.
Basically what this is saying is, okay, okay, okay, could we reuse this file just one last time?
It's very similar to the previous example, but different, 'cause this is a little more deliberate and we get a little more control over it.
Could we reuse this file one last time while we perform revalidation?
'Cause basically revalidation is synchronous.
The moment a file in cache has gone stale, the browser has to do all that checking before coming back with an answer and either downloading a file or releasing it.
While all that is happening, the user's seeing nothing.
They don't see the image, they don't see the new style sheet.
What we can do is we can use something called stale-while-revalidate, which makes that entire revalidation process asynchronous.
What this basically does is says, okay, as soon as this file's gone stale, after an hour, we need to revalidate it, but after that hour has gone, for 10 minutes you can use the out-of-date response, but in the background, I'm gonna fetch the new response.
So it means that there's never a gap of nothing.
We release for a grace period, in this case, of 10 minutes, we release the old file for like that grace period just allows the revalidation to be asynchronous.
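(In header form, using the hour-plus-ten-minutes example above; values illustrative:)

```
Cache-Control: max-age=3600, stale-while-revalidate=600
```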
If not, we just drop back to nothing at all.
We just go back to a simple max age.
So stale-while-revalidate, really cool.
Revalidation is synchronous.
Users would see nothing while that happens, and when I say nothing, I don't mean they see a blank screen.
They just won't see anything to do with the file in question for the entire duration of that revalidation.
Could we get a way of showing the old response while we fill that gap?
For example, a purely decorative image in a blog post or a newspaper article, someone's there to read stuff mostly, right?
So it doesn't matter if the image is 10 minutes out of date, just don't show them a blank part of the screen.
So what happens here is we immediately get a 200 response from cache.
In the background, we get a conditional request, which will yield a 304 response, or a 200.
It might actually be a new file as well.
But this is stale-while-revalidate.
Relatively new in sort of internet terms, but very, very cool, very useful.
I actually worked recently with Cloudinary to get stale-while-revalidate on the image caching headers, 'cause it just, some images, it just seems to make sense.
Release an old file, even if it's out of date, for a grace period of maybe 10 minutes, do the update in the background.
Do we need to configure CDNs and browsers differently?
Does our CDN need different caching information than our, yes we do, is the answer.
If we do wanna do that, you would use the S max age.
Annoyingly, we're missing out, we just moved the hyphen.
max-age or smax-age.
Stuff like this.
Well, the internet is designed by the cleverest people.
(audience laughing) But they need a proofreader.
What this does is say cache on the user's computer for an hour, but cache on the CDN for a day.
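(That would be:)

```
Cache-Control: max-age=3600, s-maxage=86400
```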
There's a very elegant reason as to why you might want to do this, and I only learned this reason about, oh, a month or two ago.
If you don't want this, you just leave out the s-maxage.
So s-maxage, the S stands for shared.
Shared caches follow these directives, everyone else just follows whatever's left.
So I'm gonna start with a question.
Why would you want to configure your CDN and your browser cache separately?
Well, it's because cache-busting is a myth.
Like I said, caching is a way of stopping one endpoint talking to another.
In order to cache-bust, you need to talk to each other, which you just promise you're not gonna do.
Cache-busting is a myth for the most part, especially for the purposes of this talk.
You can't empty your user's cache.
You can't go to all of your hundreds of thousands of users, right, I'm gonna empty your cache.
Actually, you technically can, but it can't fit in this talk.
You can't empty your user's cache.
You can flush your own CDN's cache.
So it might be like, okay, at most we'll tolerate one hour old content in the browser, but we'll update the CDN cache daily.
This means you can flush your CDN cache, yeah, like I say, automatically daily with an s-maxage of one day, or every time you roll a release, flush your CDN cache.
This allows you to serve fairly up-to-date content to customers, while just sort of revalidating back with origin every day, or on demand.
This helps you shield origin.
So you only come back to origin once a day or on demand, and your CDN takes the hit every hour.
Is the file hashed?
So I want a quick show of hands, whose build generates files with fingerprints in them?
You all get 10 points.
They're completely worthless, but you do have ten of them.
This is amazing.
As soon as your build does this, we open ourselves up to some really great performance wins.
If your file is hashed, you can set max age to what the spec defines as the biggest possible number, and you can add immutable.
Immutable is like a contract with the browser.
If you don't hash your files, some files can't be hashed.
You can't hash your actual domain, right?
That'd be ridiculous.
People have to go to a new domain every time you update it.
Certain file paths just cannot be fingerprinted and that's fine.
Static assets, CSS, JavaScript, if you are generating these fingerprints, honestly that is gold because we can start using immutable.
If not, it's an HTML response or it's a CMS image that doesn't get fingerprinted, whatever, we just go back to using a bare max age.
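(For a fingerprinted file, then, the header would look something like this; 2147483647 seconds is the 68-year, 32-bit figure discussed next:)

```
Cache-Control: max-age=2147483647, immutable
```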
So Immutable.
Immutable makes a contract with the browser.
What Immutable does is it tells the browser this file will never change.
Therefore, you will never need to revalidate it.
If a file has a fingerprint in it, it's hashed.
The moment you change that file, you get a whole new file.
file123.css becomes file321.css.
Fingerprinted files never change.
They cease to exist and become a whole new file.
Therefore, if the file never changes, the browser never needs to come back to the server and say, "Hey, have you changed the contents of this file?"
No, I haven't, and as soon as I do, there'll be a whole new file for you.
Therefore, we can cache this file literally forever, well, no, not literally, figuratively.
Figuratively forever, because how long is forever?
The spec defines it as, I'm not reading that out.
This is 68 years, and 68 years is basically the biggest number in binary we can fit into 32 bits.
It's the biggest possible positive integer.
This is also the, anyone old enough to remember the Millennium bug?
(audience laughs) Right, oh, my kind of crowd.
This is now the 2038 bug.
This is the biggest positive number you can fit into 32 bits.
The spec states that this is the maximum number you can put on a max age, and any cache should honor either that number or the highest number it can.
So everyone puts one year, but the specs say you can go as far as 68 years.
Also, if you're using 68 years, you're really backing your project.
so I admire that.
Right, I do need to start hurrying up a little bit.
That's caching, and what I hope is that when you refer back to these slides, you can go through this yes/no process and be like, "Okay, this file needs this header, this needs this."
The thing is, we need to work out what to do when those files have gone out of date, so the next step is revalidation.
What I find quite interesting is technically, technically we don't need revalidation.
What we could do is just say, "As soon as this file goes stale, download it again."
But that'd be wasteful, because the file might not have changed.
So a defensive step is, okay, go stale, check if we need to download it again.
If we don't need to download it again, we just renew.
We apply the same headers to the existing file and it just renews its kind of contract.
We could always just like naively fetch the file again regardless of if it's changed.
But we could get loads of false positives here.
That would be very wasteful.
So what we do is we want to do revalidation.
And revalidation is just getting the cache to check with the server: are there any changes, do I need to download anything new?
So I'm gonna hit up on last modified versus ETag.
Both of these headers would cause the browser to commit conditional requests.
They are request headers.
So what would happen is we get a conditional request.
If it's a 200 response, we'll download a new file.
If it's the three or four response, we'll recycle the old one.
Conditional requests are request headers that just start with an if.
So If-None-Match, so if your hash is different to my hash, I'll download a new file.
Or If-Modified-Since, so if your last modified isn't my last modified, I'll download a new file.
So if any of these resolve to true, you get a whole new response downloaded.
If not, we release the file from cache again and renew its cache control.
So for another 10 days, another hour, whatever the header was.
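(A conditional request and its renewal response might look like this; the values are illustrative:)

```
GET /style.css HTTP/1.1
If-None-Match: "33a64df5"
If-Modified-Since: Tue, 01 Aug 2023 10:00:00 GMT

HTTP/1.1 304 Not Modified
```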
Of these two, we would prefer ETag.
Last modified isn't terrible, we would prefer ETag.
Last modified changes whenever a file is written, even if its contents didn't change.
So imagine your build process touches the entire file system.
Let's say you've got a static site generator.
All your entire site gets rebuilt.
You only changed one typo on one page, but your static site generator rebuilds the entire site and all of a sudden 10,000 HTML files have got a new last modified.
That's the harm, or not the harm, but dramatic.
I am prone to a bit of dramaticness now and again.
But it's just wasteful.
A lot of false positives with last modified, so what I want to do is just show you a proof.
This isn't using the same mechanism as HTTP.
This isn't how a cache would work.
I've just echoed "Hello World" into a file called revalidation.
I got its last modified date, and it's 18:50, and the hash is 6f5902.
I did the exact same thing again, wrote the exact same text into the exact same file.
Its last modified is now updated.
It's nearly two minutes later, but the hash has remained the same.
This is the benefit of ETag over last modified.
Last modified can yield false positives.
And the problem with that is, like I say, if your build touches the entire file system, even if you've just changed one typo on one page, your entire site, your entire set of static assets, all just got a new last modified.
So you're gonna have to fetch a load of data unnecessarily.
We're getting into the last bits of it, I promise.
Another thing I wanna go through is, we covered this with the immutable.
Don't revalidate hash responses.
If your file has a fingerprint in it, you don't need to revalidate it at all, 'cause it is never gonna change.
So because hash files never change, there is no need to revalidate them.
Don't put a last modified or an etag on a hashed file.
It will never need to be revalidated, so there's no point putting it on there.
If you put an etag on there, you might cause a file that never needs to be revalidated to be revalidated.
So that's the kind of conflict you could run into.
I am five minutes over, so I do apologize.
Just really wrapping up quickly, what have we learned?
Well, hopefully we've learned this.
We shouldn't expect to be able to fetch new content while a response is cached.
This is the biggest misconception I see.
If you've told a browser to look after a file for a week, don't expect the browser to redownload any new content within that week.
We've only got two jobs, we only need two headers.
We massively overcomplicate caching.
I've got a client at the moment who's got pragma and last modified and etag and cache control and whatever the other one was that I was talking about.
You only need two headers max.
Cache control and ETag, that's all you need.
And if you've got a hashed file, you don't even need ETag.
Expires and last modified aren't bad, but cache control and ETag are much better.
So if you can use those, you're gonna avoid false positives, it's gonna be a much nicer ride.
And again, third time I've said this, files that never change never need to be revalidated.
So you can just, if you've got a fingerprint on there, cache it for 68 years and don't put an ETag on there.
I'm not gonna talk through this next section at all, but I promise you I'm winding down.
What I thought I'd do is to actually give you some idea of where to start with picking the values.
I've bucketed sort of time zones, I guess, or time sort of buckets.
Never, short, medium, long, and forever, and forever is like weeks, however many weeks in 68 years, then days, hours, and minutes.
/myaccount, you probably never want to cache that.
You want hyper real-time data there.
API responses for live train times.
Don't cache those at all.
A news page, maybe you want to cache the BBC News homepage for maybe five minutes.
Breaking news, I mean, nothing's five minutes breaking.
Nothing's that important.
Well, maybe some things.
Store locator.
If you're opening stores quick enough that you can't cache the page for a few hours, I really applaud your business.
Certain things, I've got a client at the moment and they do a genuine algorithmic, I don't know how to describe it, they find out your nearest store, they've got a store locator page, and every time you request that page, it works out, where actually are the nearest stores?
So if you're opening stores quickly, you need to do it on every request, congratulations.
Do that statically, cache it, and cache it for a few hours.
Maybe a product image you could cache for a few days.
Product imagery doesn't update that often, maybe you could cache that for a few days.
Fingerprinted assets, literally forever.
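(As a rough sketch of those buckets in header form; the numbers are made up, so again, don't copy and paste them:)

```
Never   (/myaccount, live odds):  Cache-Control: no-store
Short   (news homepage):          Cache-Control: max-age=300
Medium  (store locator):          Cache-Control: max-age=10800
Long    (product image):          Cache-Control: max-age=259200
Forever (fingerprinted assets):   Cache-Control: max-age=2147483647, immutable
```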
And finally, this should be a little helpful cheat sheet.
Non-versioned assets and versioned assets both need cache control.
Your non-versioned assets need much more granular cache control directives, and they will need ETag because they will need revalidating.
As soon as you've fingerprinted the asset, you just need immutable and, oh, that's a typo, it should say 68.
But yeah, cache this stuff forever.
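(As a hedged sketch of that cheat sheet; the lifetimes and the ETag value are illustrative:)

```
Non-versioned (style.css):      Cache-Control: max-age=3600, must-revalidate
                                ETag: "33a64df551425fcc"
Versioned (style.a1b2c3.css):   Cache-Control: max-age=2147483647, immutable
```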
With that said, I have gone over time, so I do apologize.
Thank you for listening.
I'll be around later on if anyone has any questions.
Thank you very much.
About Harry Roberts
Harry is an independent Consultant Web Performance Engineer from the UK. He helps some of the world’s largest and most respected organisations find and fix their site-speed issues.
He is both a Google- and a Cloudinary Media-Developer Expert, and has consulted for clients from the United Nations to the BBC, General Electric to the Financial Times, and a whole host more. He is also co-chair of performance.now(), the web performance conference for professionals.
When not doing client work, he writes, teaches, and speaks about the entire gamut of front-end performance. When not doing work at all, he’s probably out on his bike.