Implementing TTL Cache in Python (3)
In this article, we will extend our previously implemented TTL cache decorator with a toggle feature for enabling and disabling the cache.
Overview of Cache Toggle Feature
Here are the specifications for the new cache toggle feature:
- Functions decorated with `ttl_cache` will have an additional `use_cache` argument, allowing one to toggle the cache functionality.
- If `use_cache` is set to `False`, the function will execute without using the cache, but its result will still be stored in the cache.
- If toggled back to `use_cache=True`, the cached content will be utilized.
Implementation
There are various possible ways to implement this feature. Here’s how I approached it:
- Introduced a `cache_offset` counter as an attribute of the decorated function. When the function is executed with `use_cache=False`, this counter is incremented and added to the hash value, so no existing cache entry matches.
The key changes in the code below are the `use_cache` parameter and the `cache_offset` counter.
```python
from functools import lru_cache, wraps


def ttl_cache(ttl_seconds=3600, use_cache=True):
    """A decorator for TTL cache functionality.

    Adds a `use_cache` argument to functions, enabling toggle of caching.

    Args:
        ttl_seconds (int, optional): Expiration time in seconds. Defaults to 3600.
        use_cache (bool, optional): Whether caching is enabled by default. Defaults to True.
    """
    def ttl_cache_deco(func):
        """Accepts a function and returns a version of it with TTL cache functionality."""
        # Create a function with cache functionality and add a dummy argument
        @lru_cache(maxsize=None)
        def cached_dummy_func(*args, ttl_dummy, **kwargs):
            del ttl_dummy
            return func(*args, **kwargs)

        # Create a function that automatically calculates the hash value
        # and passes it to the dummy argument
        @wraps(func)
        def ttl_cached_func(*args, use_cache=use_cache, **kwargs):
            if not use_cache:
                # Bumping the offset changes the hash, so the function body
                # runs and its fresh result is stored under a new cache key
                ttl_cached_func.cache_offset += 1
            ttl_hash = get_ttl_hash(ttl_seconds) + ttl_cached_func.cache_offset
            return cached_dummy_func(*args, ttl_dummy=ttl_hash, **kwargs)

        ttl_cached_func.cache_offset = 0
        return ttl_cached_func

    return ttl_cache_deco
```
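For reference, `get_ttl_hash` was defined in an earlier part of this series. A version consistent with how it is used above might look like the following (an assumption, not necessarily the exact original code):

```python
import time


def get_ttl_hash(ttl_seconds=3600):
    """Return an integer that changes every `ttl_seconds` seconds.

    Passing this value as a throwaway argument to an lru_cache-wrapped
    function makes cached entries expire when the value rolls over.
    """
    return round(time.time() / ttl_seconds)
```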
Below is a usage example.
The feature worked as expected.
```python
# In [1] --------------------
import random
from python_utils import ttl_cache

@ttl_cache(ttl_seconds=20)
def get_random_int():
    return random.randint(0, 100)

# In [2] --------------------
get_random_int()
# Out: 64

# In [3] --------------------
get_random_int()
# Out: 64

# In [4] --------------------
get_random_int(use_cache=False)
# Out: 36

# In [5] --------------------
get_random_int(use_cache=False)
# Out: 0

# In [6] --------------------
get_random_int(use_cache=False)
# Out: 91

# In [7] --------------------
get_random_int()
# Out: 91

# In [8] --------------------
get_random_int()
# Out: 91
```
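The cache-busting mechanism behind this behavior can also be demonstrated deterministically in isolation, using a call counter instead of random numbers (a minimal sketch of the offset trick, independent of the decorator; `base` stands in for `get_ttl_hash()`):

```python
from functools import lru_cache

calls = {"n": 0}

@lru_cache(maxsize=None)
def cached(dummy):
    # The body only runs on a cache miss
    calls["n"] += 1
    return calls["n"]

offset = 0
base = 100                          # stands in for get_ttl_hash()
assert cached(base + offset) == 1   # miss: body runs
assert cached(base + offset) == 1   # hit: cached result reused
offset += 1                         # what use_cache=False does
assert cached(base + offset) == 2   # miss again: fresh result cached
assert cached(base + offset) == 2   # back to "use_cache=True": reuses it
```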
Bonus Tip: Make the use_cache Option Visible in Code Autocompletion
In code editors like VSCode that offer autocompletion, the `use_cache` option doesn't currently show up when you're working with functions that use the `ttl_cache` decorator.
However, you can make a small tweak to ensure that `use_cache` appears in the code suggestions. Here's how:
```python
import inspect
...

def ttl_cache(ttl_seconds=3600, use_cache=True):
    ...
    def ttl_cache_deco(func):
        ...
        ttl_cached_func.cache_offset = 0

        # Copy the original function's signature and add the "use_cache" parameter
        sig = inspect.signature(func)
        params = list(sig.parameters.values())
        params.append(
            inspect.Parameter("use_cache", inspect.Parameter.KEYWORD_ONLY, default=use_cache)
        )
        new_sig = sig.replace(parameters=params)
        ttl_cached_func.__signature__ = new_sig

        return ttl_cached_func

    return ttl_cache_deco
```
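To see the signature-copying idea in action, here is a stripped-down sketch (`demo_deco` and `add` are illustrative names, not part of the article's code):

```python
import inspect
from functools import wraps


def demo_deco(func):
    @wraps(func)
    def wrapper(*args, use_cache=True, **kwargs):
        return func(*args, **kwargs)

    # Append a keyword-only "use_cache" parameter to the copied signature;
    # inspect.signature honors __signature__ over the wrapper's own signature
    sig = inspect.signature(func)
    params = list(sig.parameters.values())
    params.append(inspect.Parameter("use_cache", inspect.Parameter.KEYWORD_ONLY, default=True))
    wrapper.__signature__ = sig.replace(parameters=params)
    return wrapper


@demo_deco
def add(a, b):
    return a + b


print(inspect.signature(add))  # (a, b, *, use_cache=True)
```

Tools like VSCode use this reported signature, so `use_cache` now shows up in completions.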
Conclusion
This concludes our series on implementing TTL cache in Python.
Here are some additional suggestions for improving the decorator; feel free to customize as needed.
- Make the decorator usable even without specifying `ttl_seconds` or `use_cache`.
- Allow changes to `maxsize`.
- Add `use_cache` as an attribute rather than a function argument.
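The first suggestion — letting the decorator work with or without parentheses — can be sketched with a common dispatch pattern (the caching body is elided here for brevity; only the dispatch is shown):

```python
from functools import wraps


def ttl_cache(func=None, *, ttl_seconds=3600):
    """Usable both as @ttl_cache and as @ttl_cache(ttl_seconds=...)."""
    def deco(f):
        @wraps(f)
        def wrapper(*args, **kwargs):
            # The caching logic from the article would go here
            return f(*args, **kwargs)
        return wrapper

    if func is not None:
        # Called as @ttl_cache without parentheses: func is the function itself
        return deco(func)
    # Called as @ttl_cache(...): return the decorator to be applied next
    return deco


@ttl_cache
def one():
    return 1

@ttl_cache(ttl_seconds=5)
def two():
    return 2
```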
- Although this implementation was done without third-party libraries, there's a decorator with TTL caching functionality in the cachetools library.
- Due to the specifications of `lru_cache`, arguments for functions intended to be cached must be hashable. Functions that take lists as inputs cannot directly use this cache decorator, so adjustments, such as converting lists to tuples, are needed.