Memoization is an optimization technique that speeds up function calls by caching previously computed results, so repeated calls with the same arguments return instantly. In Python you can implement it either with the built-in `functools.lru_cache` decorator or with a hand-rolled dictionary cache. The example below shows both approaches:
```python
from functools import lru_cache

# Method 1: lru_cache caches results keyed by the function's arguments
@lru_cache(maxsize=None)  # maxsize=None lets the cache grow without bound
def fibonacci(n):
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

# Method 2: manual memoization with a module-level dictionary
memo = {}

def fibonacci_memo(n):
    if n in memo:          # cache hit: return the stored result
        return memo[n]
    if n < 2:              # base case
        return n
    memo[n] = fibonacci_memo(n - 1) + fibonacci_memo(n - 2)
    return memo[n]

# Example usage
print(fibonacci(10))       # Output: 55
print(fibonacci_memo(10))  # Output: 55
```