How to implement Fibonacci more efficiently using dictionaries?

One recursive Fibonacci implementation is:

def F(n):
    if n == 0: return 0
    elif n == 1: return 1
    else: return F(n-1)+F(n-2)

The problem is that it is somewhat inefficient: to calculate F(5), for example, F(3) is calculated more than once. I heard that an implementation with dictionaries would be more efficient. How can Fibonacci be implemented more efficiently using dictionaries?
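For example, counting the calls (a quick illustrative sketch; the `calls` counter is just for demonstration) shows how much repeated work there is:

from collections import Counter

calls = Counter()  # how many times F is evaluated for each n

def F(n):
    calls[n] += 1
    if n == 0: return 0
    elif n == 1: return 1
    else: return F(n-1)+F(n-2)

F(5)
print(calls)  # F(3) is evaluated 2 times and F(1) 5 times, just to get F(5)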

  • I withdrew my answer because I saw that you want to keep using recursion, and my answer really didn’t help with that.

  • @Miguel: I liked your answer! If you can come back...

2 answers

5

You are on the right track in trying not to recalculate a recursive function multiple times when its result does not change, but the easiest and most efficient way is not to use dictionaries: it is to use lru_cache.

lru_cache is a decorator that automatically stores the results of the decorated function in a cache. The first time you call the function there is no stored result, so it has to compute it; it returns the result and saves it in the cache, an internal memory. From the second call onward with the same arguments, since the result is already saved, it is returned immediately without recomputing the function.
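A minimal sketch of this first-call/second-call behaviour (the slow_square function here is just illustrative, not related to Fibonacci):

from functools import lru_cache
import time

@lru_cache(maxsize=None)
def slow_square(n):
    time.sleep(1)       # simulate an expensive computation
    return n * n

t1 = time.perf_counter()
slow_square(12)         # first call: actually computed, takes about 1 second
print('1st call: {:.3f}s'.format(time.perf_counter() - t1))

t1 = time.perf_counter()
slow_square(12)         # same argument again: answered from the cache
print('2nd call: {:.6f}s'.format(time.perf_counter() - t1))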

The effect of the cache is quite dramatic. Compare computing fib(0) through fib(39), on my machine, with exactly the same function and just adding or removing the lru_cache:

from functools import lru_cache
import time

@lru_cache(maxsize=64)
def fib_com_cache(n):
    if n < 2:
        return n
    return fib_com_cache(n-1) + fib_com_cache(n-2)

def fib_sem_cache(n):
    if n < 2:
        return n
    return fib_sem_cache(n-1) + fib_sem_cache(n-2)


def fib_ate_40(funcao):
    # Time how long it takes to compute fib(0) through fib(39)
    t1 = time.perf_counter()
    for j in range(40):
        funcao(j)
    return time.perf_counter() - t1

# First we run fib up to 40 without the cache
print('Execution time without cache: {:.6f} seconds'.format(fib_ate_40(fib_sem_cache)))

# Then we run it with the cache
print('Execution time with cache: {:.6f} seconds'.format(fib_ate_40(fib_com_cache)))

Result:

Execution time without cache: 71.009936 seconds
Execution time with cache: 0.000053 seconds

Just add the @lru_cache decorator to get a tremendous performance boost. This is useful for any function whose result does not change when it is called with the same arguments (that is, it does not depend on external state), that is called many times with the same arguments, and/or whose computation is expensive.

Since storing results consumes memory, you can specify the maximum number of results to keep (the maxsize parameter). When the number of stored values exceeds the given maxsize (128 by default), the least recently used one is discarded (hence LRU: "Least Recently Used").
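You can watch this happening with cache_info(), which reports hits, misses, and the current cache size. A small sketch (the dobro function is just an illustrative example) with maxsize=2:

from functools import lru_cache

@lru_cache(maxsize=2)         # keep at most two results
def dobro(n):
    return 2 * n

dobro(1); dobro(2); dobro(3)  # dobro(1) is the least recently used and gets evicted
dobro(1)                      # a miss again, because it was discarded
print(dobro.cache_info())     # CacheInfo(hits=0, misses=4, maxsize=2, currsize=2)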

If your function depends on external state that changes only rarely, or if for some other reason you need to delete the stored values, just call cache_clear:

fib_com_cache.cache_clear()

2

  • Dictionaries would be used to store the values so you don’t have to call the function F all the time. That way, you never need to recalculate the same Fibonacci numbers.

  • @Eds this way you don’t compute fib for the same value more than once when you call F. In this example, once you have the values of F(20) and F(10) stored, you can reuse them however you want (see the sketch below).
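What these comments describe would look roughly like this (a sketch of the dictionary approach; the name memo is just illustrative):

memo = {0: 0, 1: 1}  # base cases already stored

def F(n):
    if n not in memo:              # only compute values we have never seen
        memo[n] = F(n-1) + F(n-2)  # store the result for future calls
    return memo[n]

print(F(20))  # 6765; memo now holds F(0) through F(20)
print(F(10))  # 55, answered straight from the dictionary, no recursion needed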
