Computationally intensive tasks are everywhere now.
We rely heavily on resource-hungry techniques such as LLMs and generative AI these days.
Anyone who works with such precious resources knows how frustrating it is to run the same task again, even though we know the result will be identical. You'd blame yourself for not storing the output of the previous run.
This is where the LRU cache helps us. LRU stands for Least Recently Used. It’s one of the many caching strategies. Let’s first understand how it works.
How does @lru_cache work in Python?
Imagine your brain is a small toy box. It can only fit five toys. Your friends keep asking you about different toys, and you use your superhero memory to remember and tell stories about these toys.
Some toys are easy to find because your friends often ask about them, so they’re at the top. Some toys are harder to find because they’re at the bottom of the box.
After you tell a story about a toy, you put it back on top to make things easier. That way, the toys your friends ask about the most are always easy to find. This is called the "Least Recently Used" or LRU strategy.
And if you get a new toy, but the box is full, you remove the toy that hasn’t been asked about for the longest time. If a friend asks about it, you can still find it in your big toy warehouse, which takes longer. That’s how LRU caching works!
Since version 3.2, Python has shipped with a built-in @lru_cache decorator in the functools module, which you can apply to any of your functions. It makes your function work much like the toy box example.
If the function has already been called with the same arguments, the result is pulled from the cache on subsequent calls. And just like your toy box getting full, you can specify a maximum size for the cache.
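Here's a minimal sketch of what that looks like in code (the tell_story function and its toys are just stand-ins for any slow computation):

from functools import lru_cache

@lru_cache(maxsize=5)  # keep at most 5 results, like the five-toy box
def tell_story(toy):
    # Pretend something slow happens here before we answer
    return f"A story about {toy}"

print(tell_story("robot"))  # computed and stored in the cache
print(tell_story("robot"))  # answered instantly from the cache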
Where can we use lru_cache in our daily work?
Consider a long-running database query. We’ve all been there: waiting, fingers tapping, for the database to finally respond with the needed data. This often happens when we need to rerun Jupyter Notebooks.
Now, imagine doing the waiting once and then retrieving the data instantly the next time. That’s the magic of lru_cache.
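For instance, a slow query can be wrapped in a cached helper. The snippet below is just a sketch using sqlite3 with a made-up database file and table; the same idea works with any database client:

import sqlite3
from functools import lru_cache

@lru_cache(maxsize=32)
def run_query(sql):
    # "analytics.db" and the sales table are hypothetical, for illustration only
    with sqlite3.connect("analytics.db") as conn:
        # Return a tuple so the cached value is immutable
        return tuple(conn.execute(sql).fetchall())

# The first call waits for the database; identical calls afterwards return instantly
rows = run_query("SELECT country, SUM(revenue) FROM sales GROUP BY country")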
It’s not only about database queries. What about APIs? Especially those with a pay-as-you-go pricing model, like the OpenAI API. If the prompt is the same, with lru_cache we call the API once, pay for it, and reuse the result without paying again.
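Here's a sketch of that pattern, with ask_model standing in for whatever paid client call you would actually make (it's a placeholder, not the real OpenAI API):

from functools import lru_cache

@lru_cache(maxsize=100)
def ask_model(prompt):
    # Placeholder for a real pay-per-call API request; in practice, only the
    # first call with a given prompt would cost money.
    return f"model response to: {prompt}"

print(ask_model("Summarise this report"))  # would hit the API and incur a charge
print(ask_model("Summarise this report"))  # served from the cache, no extra cost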
Also, if you’re dealing with external data queries where the connection to the source is not always reliable, such as with web scraping, lru_cache can be a lifesaver. Instead of constantly wrestling with an unstable connection, you can fetch the data once, cache it, and refer to it without worrying about connectivity issues. A rough sketch of this follows below.
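This sketch uses the requests library (the URL is only a placeholder):

import requests
from functools import lru_cache

@lru_cache(maxsize=64)
def fetch_page(url):
    # One network round trip per unique URL; later calls reuse the cached HTML
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    return response.text

html = fetch_page("https://example.com")        # fetched over the network
html_again = fetch_page("https://example.com")  # served from the cache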
And let’s not forget computationally heavy tasks. If your function is a number cruncher, taking in data and churning out results after heavy computation, lru_cache can drastically reduce the load on your system resources. This is often the case when your application runs a heavy model such as BERT or another LLM, or offers comprehensive search functionality.
In all these cases, the value proposition is the same: time saved, money saved, resources saved. And this holds as long as we can assume the output stays the same within a reasonable time frame. In the grand scheme, Python’s lru_cache isn’t just a technical trick; it’s a principle of efficiency. And in a world where every second counts, efficiency is king.
Testing lru_cache on recursive functions
Here’s a typical example: generating Fibonacci numbers recursively. See how long it takes to run with and without lru_cache.
import time

def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

start_time = time.time()
print(fib(40))
print("Execution Time: ", time.time() - start_time)

>> 102334155
>> Execution Time: 19.45328450202942
Now let’s import lru_cache from the functools module and decorate the function with it.
import time
from functools import lru_cache

@lru_cache(maxsize=None)  # Infinite cache
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

start_time = time.time()
print(fib(40))
print("Execution Time: ", time.time() - start_time)

>> 102334155
>> Execution Time: 8.893013000488281e-05
The code achieves significant time savings by introducing the lru_cache decorator for memoization, compared to the non-memoized version. In the first example (without lru_cache), calculating the 40th Fibonacci number took approximately 19.45 seconds. In the second example (with lru_cache), it took approximately 8.89e-05 seconds.
That’s like 99.99% time saved.
Here’s a plot of execution time for various Fibonacci numbers. Notice that the non-cached version’s execution time grows exponentially, whereas the cached version takes a negligible amount of time even for much larger inputs.
The @lru_cache is not a silver bullet
While @lru_cache is instrumental in improving performance, that doesn’t mean you can slap it on any function and forget about performance altogether.
The arguments of the cached function must be hashable.
Under the hood, lru_cache stores results in a dictionary keyed by the function’s arguments, so every argument must be immutable. This typically isn’t an issue for simple values like numbers and strings, but you can’t cache functions that take mutable arguments such as lists or dictionaries.
For instance, the following function takes only an immutable argument, so lru_cache works fine:
@lru_cache(maxsize=128)
def add_ten(number):
    return number + 10

print(add_ten(5))
>> 15
But the next one takes a list, which is unhashable. Calling it throws an error.
@lru_cache(maxsize=128)
def add_to_list(lst, item):
    lst.append(item)
    return lst

print(add_to_list([1, 2, 3], 4))
>> TypeError: unhashable type: 'list'
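A simple workaround (my suggestion, not part of the original example) is to pass a hashable type such as a tuple instead of a list:

from functools import lru_cache

@lru_cache(maxsize=128)
def add_to_sequence(seq, item):
    # seq is a tuple, which is hashable, so the call can be cached
    return seq + (item,)

print(add_to_sequence((1, 2, 3), 4))
>> (1, 2, 3, 4)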
lru_cache isn’t great if your function depends on external sources
The cache doesn’t automatically expire entries. If your function’s return values can change over time (for example, if they depend on the file system or network operations), then using lru_cache could lead to stale results being returned.
The following Python script simulates an external source and tests this:
import functools
import time

def simulate_external_source():
    """Simulate an external source that changes over time."""
    return time.time()

@functools.lru_cache()
def dependent_function():
    """Function that depends on an external source."""
    return simulate_external_source()

# Test lru_cache with a function that depends on external sources
print("Testing lru_cache with a function that depends on external sources:")
print("Initial call:", dependent_function())
time.sleep(1)  # Simulate a delay
print("Cached call:", dependent_function())  # Returns the same cached result
time.sleep(1)  # Simulate a delay
print("Updated call:", dependent_function())  # Still the cached (now stale) result
print()

@functools.lru_cache(maxsize=1)
def expiring_function():
    """Function we might hope would expire its cached entry over time."""
    return simulate_external_source()

# Test whether lru_cache entries expire over time
print("Testing lru_cache with entries that expire over time:")
print("Initial call:", expiring_function())
time.sleep(1)  # Simulate a delay
print("Expired call:", expiring_function())  # You might expect a recalculation, but the cached value is returned
time.sleep(1)  # Simulate a delay
print("Expired call:", expiring_function())  # Still the cached value: maxsize limits entry count, not age
print()
Output:
Testing lru_cache with a function that depends on external sources:
Initial call: 1685507725.6362917
Cached call: 1685507725.6362917
Updated call: 1685507725.6362917
Testing lru_cache with entries that expire over time:
Initial call: 1685507727.639048
Expired call: 1685507727.639048
Expired call: 1685507727.639048
For such instances, you need a caching technique that supports TTLs (time-to-live), so that entries expire after a set period.
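If you want to stay within the standard library, one common trick (just a sketch, not a full solution) is to pass a "time bucket" as an extra argument so cached entries are effectively invalidated after a chosen window; libraries such as cachetools also offer a proper TTLCache:

import time
from functools import lru_cache

def ttl_hash(ttl_seconds=60):
    # Changes value once every ttl_seconds, forcing a cache miss afterwards
    return int(time.time() // ttl_seconds)

@lru_cache(maxsize=32)
def read_external_source(_ttl=None):
    # Stand-in for a file read, network call, or database query
    return time.time()

print(read_external_source(_ttl=ttl_hash(ttl_seconds=1)))
time.sleep(1.1)
print(read_external_source(_ttl=ttl_hash(ttl_seconds=1)))  # new value: the stale entry was bypassed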
Conclusion
lru_cache is a very simple technique for storing function outputs so that the same computations don’t have to run again and again. While it’s instrumental in saving expensive computational resources, it doesn’t work well in every case.
Yet, as long as your function doesn’t depend on external sources and its arguments are hashable, lru_cache is definitely a great tool.
Thanks for reading, friend! Say Hi to me on LinkedIn, Twitter, and Medium.