How To Speed Up Python Code with Caching



In Python, you can use caching to store the results of expensive function calls and reuse them when the function is called with the same arguments again. This makes your code more performant.

Python provides built-in support for caching through the functools module: the @cache and @lru_cache decorators. In this tutorial, we’ll learn how to cache function calls using both.

Why Is Caching Helpful?

Caching function calls can significantly improve the performance of your code. Here’s why caching is beneficial (a minimal sketch of the idea follows this list):

  • Performance improvement: When a function is called with the same arguments multiple times, caching the result can eliminate redundant computations. Instead of recalculating the result every time, the cached value can be returned, leading to faster execution.
  • Reduction of resource usage: Some function calls may be computationally intensive or require significant resources (such as database queries or network requests). Caching the results reduces the need to repeat these operations.
  • Improved responsiveness: In applications where responsiveness is crucial, such as web servers or GUI applications, caching can help reduce latency by avoiding repeated calculations or I/O operations.
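
At its core, caching is just “remember the answer in a dictionary.” Here’s a minimal sketch of manual memoization, with a made-up slow_square function standing in for an expensive computation:

import time

_cache = {}

def slow_square(x):
    """Simulate an expensive computation, caching results by argument."""
    if x not in _cache:
        time.sleep(0.5)       # stand-in for real work
        _cache[x] = x * x     # store the result for reuse
    return _cache[x]

slow_square(4)  # slow the first time: computes and stores the result
slow_square(4)  # fast: returns the stored result

The functools decorators below do exactly this bookkeeping for you, keyed on the function’s arguments.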

Now let’s get to coding.

Caching with the @cache Decorator

Let’s code a function that computes the n-th Fibonacci number. Here’s the recursive implementation of the Fibonacci sequence:

def fibonacci(n):
    if n <= 1:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)
Without caching, this recursive implementation performs many redundant computations: the same Fibonacci numbers are recalculated over and over. Caching the computed values and looking them up on subsequent calls is much more efficient, and that’s exactly what the @cache decorator does.

The @cache decorator, available in the functools module since Python 3.9, caches a function’s results: it stores the return value for each set of arguments and reuses it when the function is called with the same arguments again. Let’s wrap our function with @cache:

from functools import cache

@cache
def fibonacci(n):
    if n <= 1:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)
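
Because @cache is equivalent to lru_cache(maxsize=None), the wrapped function also exposes cache_info() and cache_clear(). A quick way to confirm the cache is working:

fibonacci(10)                  # first call: computes and caches intermediate results
fibonacci(10)                  # repeat call: answered straight from the cache
print(fibonacci.cache_info())  # CacheInfo(hits=9, misses=11, maxsize=None, currsize=11)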

We’ll get to the performance comparison later. First, let’s look at another way to cache function return values: the @lru_cache decorator.

Caching with the @lru_cache Decorator

You can use the built-in functools.lru_cache decorator for caching as well. This uses the Least Recently Used (LRU) caching mechanism for function calls. In LRU caching, when the cache is full and a new item needs to be added, the least recently used item in the cache is removed to make room for the new item. This ensures that the most frequently used items are retained in the cache, while less frequently used items are discarded.

The @lru_cache decorator is similar to @cache but allows you to specify the maximum size—as the maxsize argument—of the cache. Once the cache reaches this size, the least recently used items are discarded. This is useful if you want to limit memory usage.

Here, the fibonacci function caches up to the 7 most recently computed values:

from functools import lru_cache

@lru_cache(maxsize=7)  # Cache up to 7 most recent results
def fibonacci(n):
    if n <= 1:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

The fibonacci function is decorated with @lru_cache(maxsize=7), so at most 7 results are kept in the cache at any time.

When fibonacci(5) is called, the results for fibonacci(0) through fibonacci(5) are cached. When fibonacci(3) is called subsequently, its result is retrieved from the cache, since it’s among the most recently computed values, avoiding redundant computation.
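
To see the size limit in action, you can compute a larger value and inspect the cache with cache_info(), which lru_cache adds to the wrapped function:

fibonacci(30)
print(fibonacci.cache_info())
# Something like: CacheInfo(hits=..., misses=..., maxsize=7, currsize=7)
# currsize never exceeds maxsize; older entries were evicted along the way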

Timing Function Calls for Comparison

Now let’s compare the execution times of the functions with and without caching. For this comparison, we don’t pass an explicit maxsize to @lru_cache, so it defaults to 128:

from functools import cache, lru_cache
import timeit

# Without caching
def fibonacci_no_cache(n):
    if n <= 1:
        return n
    return fibonacci_no_cache(n - 1) + fibonacci_no_cache(n - 2)

# With the @cache decorator
@cache
def fibonacci_cache(n):
    if n <= 1:
        return n
    return fibonacci_cache(n - 1) + fibonacci_cache(n - 2)

# With the @lru_cache decorator (maxsize defaults to 128)
@lru_cache
def fibonacci_lru_cache(n):
    if n <= 1:
        return n
    return fibonacci_lru_cache(n - 1) + fibonacci_lru_cache(n - 2)

To compare the execution times, we’ll use the timeit function from the timeit module:

# Compute the n-th Fibonacci number
n = 35  

no_cache_time = timeit.timeit(lambda: fibonacci_no_cache(n), number=1)
cache_time = timeit.timeit(lambda: fibonacci_cache(n), number=1)
lru_cache_time = timeit.timeit(lambda: fibonacci_lru_cache(n), number=1)

print(f"Time without cache: {no_cache_time:.6f} seconds")
print(f"Time with cache: {cache_time:.6f} seconds")
print(f"Time with LRU cache: {lru_cache_time:.6f} seconds")

 

Running the above code should give similar output (your exact timings will vary):

Output >>>
Time without cache: 2.373220 seconds
Time with cache: 0.000029 seconds
Time with LRU cache: 0.000017 seconds

We see a significant difference in the execution times: the call without caching takes much longer to execute, especially for larger values of n, while the cached versions (both @cache and @lru_cache) run orders of magnitude faster and have comparable execution times.
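
One caveat when benchmarking cached functions: after the first call, the cache is warm, so re-running the timing measures little more than a dictionary lookup. If you want to re-measure the cold-start cost, clear the caches first:

fibonacci_cache.cache_clear()
fibonacci_lru_cache.cache_clear()
cold_time = timeit.timeit(lambda: fibonacci_cache(n), number=1)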

Wrapping Up

By using the @cache and @lru_cache decorators, you can significantly speed up the execution of functions that involve expensive computations or recursive calls. You can find the complete code on GitHub.

If you’re looking for a comprehensive guide on best practices for using Python for data science, read 5 Python Best Practices for Data Science.

Bala Priya C is a developer and technical writer from India. She likes working at the intersection of math, programming, data science, and content creation. Her areas of interest and expertise include DevOps, data science, and natural language processing. She enjoys reading, writing, coding, and coffee! Currently, she’s working on learning and sharing her knowledge with the developer community by authoring tutorials, how-to guides, opinion pieces, and more. Bala also creates engaging resource overviews and coding tutorials.


