Get duplicate files in two lists

I have a report with links and I need to find the duplicates. The problem is getting the two locations of the file1.html files.

relatorio = ['~/file1.html', '~/pasta/file1.html', '~/file2.html']

Code:

unicos = []
repetidos = []

for x in relatorio:
    # compare only the file name, not the full path
    if x.split('/')[-1] not in unicos:
        unicos.append(x.split('/')[-1])
    else:
        if x.split('/')[-1] not in repetidos:
            repetidos.append(x.split('/')[-1])

Output:

unicos: ['file1.html', 'file2.html']

repetidos: ['file1.html']

Expected:

unicos: ['~/file2.html']

repetidos: ['~/file1.html', '~/pasta/file1.html']

  • If you want the full name, why append only the file name with .split('/')[-1] to the repetidos list? Just append the full name: repetidos.append(x).

  • The code is still in development, which is why it was incorrect.

  • Igor, that's right, the result does not come out as expected because you are only appending the file name obtained with .split('/')[-1]. Just do as Alex suggested and append the full name of the item being processed at that moment to the lists.

  • Putting the full name makes '~/file1.html' end up as unique and '~/pasta/file1.html' as repeated. I tried a solution with only the file name, hence this "error", since I could not find a better way. However, I found another solution. (A sketch of the idea discussed here follows right after this comment thread.)
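
A sketch of the idea from this comment thread (not from the original post; the dictionary primeiro_visto, which remembers the first full path seen for each file name, is a hypothetical helper):

relatorio = ['~/file1.html', '~/pasta/file1.html', '~/file2.html']

primeiro_visto = {}   # file name -> first full path seen with that name
unicos = []
repetidos = []

for caminho in relatorio:
    nome = caminho.split('/')[-1]
    if nome not in primeiro_visto:
        # first time this file name appears: remember it and treat it as unique
        primeiro_visto[nome] = caminho
        unicos.append(caminho)
    else:
        # duplicate name: move the first occurrence out of unicos, keep both full paths
        if primeiro_visto[nome] in unicos:
            unicos.remove(primeiro_visto[nome])
            repetidos.append(primeiro_visto[nome])
        repetidos.append(caminho)

print(unicos)     # ['~/file2.html']
print(repetidos)  # ['~/file1.html', '~/pasta/file1.html']

This keeps the full paths throughout, so both locations of file1.html end up in repetidos while '~/file2.html' stays in unicos.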

1 answer

A not very Pythonic solution is as follows:

unicos = []
repetidos = []
relatorio = ['~/file1.html', '~/pasta/file1.html', '~/file2.html']

# First pass: collect the file names that appear more than once.
for x in relatorio:
    if x.split('/')[-1] not in unicos:
        unicos.append(x.split('/')[-1])
    else:
        if x.split('/')[-1] not in repetidos:
            repetidos.append(x.split('/')[-1])

# Second pass: print the full paths whose file name is repeated.
for x in relatorio:
    for i in repetidos:
        if x.endswith(i):
            print(x)

Output:

'~/file1.html'

'~/pasta/file1.html'

Note: this solution is not ideal because it loops over the same report twice.
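
If the two passes over relatorio are a concern, a more Pythonic single-pass alternative is to group the full paths by file name first. A minimal sketch, assuming collections.defaultdict is acceptable here:

from collections import defaultdict

relatorio = ['~/file1.html', '~/pasta/file1.html', '~/file2.html']

por_nome = defaultdict(list)          # file name -> all full paths with that name
for caminho in relatorio:
    por_nome[caminho.split('/')[-1]].append(caminho)

unicos = [caminhos[0] for caminhos in por_nome.values() if len(caminhos) == 1]
repetidos = [c for caminhos in por_nome.values() if len(caminhos) > 1 for c in caminhos]

print(unicos)     # ['~/file2.html']
print(repetidos)  # ['~/file1.html', '~/pasta/file1.html']

The grouping visits relatorio only once and keeps the full paths, so every location of a repeated file name is preserved.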
