Caching refers to storing the result of an operation so that future requests return faster. If you do something once, whether it's a database query, rendering some HTML, or anything else that might be slow, you store the result so you don't have to do the computation a second time; instead, you can reference the previous result.

So, when do we cache? We cache when the underlying computation is slow, when that computation will run multiple times, and when the output of the computation is the same for a particular input, so we know we don't have to recompute it every time. Another good reason to cache is when your hosting provider charges for database access, which applies to you right now: Google App Engine gives you a fixed number of reads and writes to the datastore per day, and if you go over that, you have to pay for it. Even if your app doesn't get a whole lot of traffic, caching requests so they don't have to hit the database over and over is a fine way to save some money, and that will be the example we start with shortly.

So whenever you have a slow computation that you're running multiple times, with the same output over and over, you should cache it. You should store that result somewhere else, so you don't have to run the computation again and again. Let's talk about what that algorithm looks like. Say we have a function called db_read() that reads from the database, and it's slow; let's say this query takes 100 ms to run, which is slow for a database query but not unheard of. And you're serving thousands of requests.
If every request that comes in to your website has to hit db_read(), and that takes 100 milliseconds, that means you can only do ten requests a second. Or you start doing multiple requests at the same time, and your database starts to get pummeled because it's trying to run this complicated query many times at once, and that 100 milliseconds maybe turns into 200 or 300 milliseconds, or 500 milliseconds, who knows. When your database is under load, it starts to get really angry at you.

So, if we wanted to cache this db_read(), let's talk about what the algorithm would look like. Written as pseudocode, it looks like this: if the request we're making is in the cache, return the cached version of that request. Here I'm treating the cache like a dictionary, and the request we're making is the key into that dictionary. That's generally the structure of a cache: it's basically a large hash table, a large mapping of keys to values. You know all about hash tables by now, and a hash table works perfectly well for this. So: if the request is in the cache, return the cached value of that request. Else, store the value of the db_read() in a variable, put that variable in the cache for future lookups, and then return that variable.

Basically, instead of calling db_read() on every request, the first thing we do is check to see if that request is in our cache. If it is, we return the cached value; this is called a cache hit. Only if the request isn't in our cache do we actually run the query; this is called a cache miss. What we do on a cache miss is actually perform the operation, store the result of the operation in our cache, and then return that result, so future requests will just bounce off the cache. Now, if you're using a hash table, depending on the size of that hash table.
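In Python, the pseudocode above might look like the following sketch. The names db_read() and cached_db_read(), and the use of a string query as the cache key, are assumptions for illustration; the point is the hit/miss structure around whatever slow call you're wrapping.

```python
import time

cache = {}  # maps a request key to its previously computed result

def db_read(query):
    """Stand-in for a slow database query (~100 ms)."""
    time.sleep(0.1)
    return "result for " + query

def cached_db_read(query):
    if query in cache:        # cache hit: skip the slow query entirely
        return cache[query]
    value = db_read(query)    # cache miss: do the real work once
    cache[query] = value      # store it for future lookups
    return value
```

The first call to cached_db_read("users") pays the full 100 ms; every call after that is just a dictionary lookup.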
We’re going to probably do a lot better than 100 milliseconds, we’re probably going to be talking about less than 1 millisecond. Which would be you know, quite a big speed improvement. Now of course this hash table is huge and you’re caching lots of things. You know, hash table have their own Performance characteristics that you’ll have to take into account. But you know, you can get substantial improvements just by taking the slow pices of your code and wrapping this simple algorithm around it, which is the focus of our next quiz. What I’d like you to do is implement that caching algorithm here in Python. So, what I’ve given you is a function called Complex Computation. And this takes two numbers and it adds them up. But it’s going to take half a second to do so. The time dot sleep function causes Python to just sleep or wait for however much time you want. In this case, I have it sleep for half a second. So, what I want you to do is use a dictionary as a cache. And it’s defined right here as, called cache. I want you to make this function cache computation. Use this cache to store the results of calling complex computation on different inputs. So if we pass in the same inputs a and b twice, I want the second running of this function to use the cache. The first one, we’ll do the actual computation. And we’ll be grading this by running picking different values of a and b, seeing how long the first one takes to run and then seeing how long the second one takes to run. And the second one should run substantially faster.