Brandon Rozek


Quick Python: Memoization


There is often a trade-off between CPU time and memory usage. In this post, I will show how the lru_cache decorator caches the results of a function call so that future calls with the same arguments can be looked up quickly instead of recomputed.

from functools import lru_cache

@lru_cache(maxsize=2**7)
def fib(n):
    # Base cases for the Fibonacci sequence
    if n == 1:
        return 0
    if n == 2:
        return 1
    # Recursive calls; repeated arguments are answered from the cache
    return fib(n - 1) + fib(n - 2)

In the code above, maxsize indicates the maximum number of results to store. Setting it to None removes the upper bound entirely. The documentation recommends setting it to a power of two for best performance.
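The wrapped function also exposes cache_info() and cache_clear() from functools, so you can check how often the cache is actually being hit. A quick sketch (the exact counts shown in the comment depend on what you have called so far):

fib(50)                  # first call computes and caches every intermediate value
print(fib.cache_info())  # e.g. CacheInfo(hits=47, misses=50, maxsize=128, currsize=50)

fib(50)                  # answered straight from the cache, no recursion needed
fib.cache_clear()        # empty the cache if memory becomes a concern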

Do note, though, that lru_cache does not make the lines inside the function execute any faster. It only stores the results of previous calls, keyed by their arguments, so repeated calls can skip the computation entirely.
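To make that concrete, here is a rough sketch of the idea behind a decorator like lru_cache, stripped of the size limit and eviction logic. The memoize name is just for illustration, and it assumes the function's arguments are hashable:

from functools import wraps

def memoize(func):
    cache = {}

    @wraps(func)
    def wrapper(*args):
        # Return the stored result if we've seen these arguments before
        if args in cache:
            return cache[args]
        # Otherwise compute it once and remember it for next time
        result = func(*args)
        cache[args] = result
        return result

    return wrapper

@memoize
def fib(n):
    if n == 1:
        return 0
    if n == 2:
        return 1
    return fib(n - 1) + fib(n - 2)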
