An alternative is to store the sum so far, and use it to update each element:
vetor = [2, 1, 20, 5, 17, 19, 14, 4, 18]

soma_acumulado = 0
acumulado = []
for n in vetor:
    soma_parcial = n + soma_acumulado
    acumulado.append(soma_parcial)
    soma_acumulado += soma_parcial

print(acumulado)  # [2, 3, 25, 35, 82, 166, 327, 644, 1302]
That way you don’t have to call sum
several times, as suggested in another answer (which also works, of course, but the downside is that it always re-sums all the existing elements from the beginning, which seems unnecessary to me: if you keep the accumulated total so far, you only need to update it with the new values, instead of traversing all the elements from the start every time).
Just to compare, I did a quick test with the timeit module:
def com_sum(iterable):
    accumulated = []
    for value in iterable:
        accumulated.append(value + sum(accumulated))
    return accumulated

def com_total_parcial(iterable):
    soma_acumulado = 0
    acumulado = []
    for n in iterable:
        soma_parcial = n + soma_acumulado
        acumulado.append(soma_parcial)
        soma_acumulado += soma_parcial
    return acumulado
from timeit import timeit

# run each test 100 times
params = {'number': 100, 'globals': globals()}
# list with a thousand numbers
vetor = list(range(1000))

print(timeit('com_sum(vetor)', **params))
print(timeit('com_total_parcial(vetor)', **params))
Times may vary, since they depend on hardware and several other factors, but anyway: I created a list with a thousand numbers and ran each test a hundred times. On my machine the results were (times in seconds):
7.2661133
0.04990649999999963
That is, using sum
took about 7 seconds, while keeping the partial sum took less than 5 hundredths of a second. Here we can see the difference it makes to call sum
several times (since it has to go through all the elements already computed and add everything up again). With the partial sum this is not necessary; you simply add the new value at each iteration.
Of course, for small lists the difference will be insignificant, but don’t forget that with little data, everything is fast.
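For completeness, the same running-total idea can also be written as a generator, so the values are produced lazily instead of building the whole list up front (the function name here is my own choice, just a sketch of the same logic):

```python
def total_parcial(iterable):
    # running total of everything produced so far
    soma = 0
    for n in iterable:
        # new element: current value plus the accumulated total
        parcial = n + soma
        soma += parcial
        yield parcial

print(list(total_parcial([2, 1, 20, 5, 17, 19, 14, 4, 18])))
# [2, 3, 25, 35, 82, 166, 327, 644, 1302]
```

This produces exactly the same sequence as the loop above, but lets you consume the values one at a time if you don’t need the full list in memory.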
And what about the part that builds the new vector? Didn’t you get around to doing it?
– Woss
I updated the question
– Lary