Mapping CSV columns to database fields without hard-coding?

Good morning :)

I’m writing a Python script that reads a .csv file, takes a few of its columns (not all of them, just a few), and saves them in a database. Since the CSV column names and the database field names are not the same, I wanted to know if there is a smarter way to map the names other than hard-coding them.

That is, my function reads the CSV file; for now I skip the first line with next(), since the header names are not the same as in the database. While iterating over the rows I pick out specific columns. For example, a CSV may have 10 fields and I only take 5. Then I process those fields, put them into an SQL string, and save everything to the database.

Sort of like this:

colunas = "nome, idade, sexo, signo, asc"

valores = "'{}', {}, '{}', '{}', '{}'".format(row[2], row[3], row[5], row[7], row[10])

Then I run an "insert into tb_x ({colunas}) values ({valores})".

Does anyone have any suggestions?

1 answer

1

You can use a dictionary, colunas, where each key is the database field name and the value is the desired CSV column. Then, in the insert, you use colunas.get("nome"), for example.

colunas = {"nome":row[2], "idade":row[3], "sexo":row[5], "signo":row[7], "asc":row[10]}

valores = "'{}', {}, '{}', '{}', '{}'".format(colunas.get("nome"), colunas.get("idade"), colunas.get("sexo"), colunas.get("signo"), colunas.get("asc"))
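Putting the dictionary idea together, here is a minimal runnable sketch. It assumes an in-memory SQLite database and a made-up sample CSV (the table name tb_x and the field names follow the question; the CSV header names and column positions are hypothetical). It also uses parameterized placeholders instead of formatting values into the SQL string, which lets the driver handle quoting:

```python
import csv
import io
import sqlite3

# Hypothetical setup: in-memory database with the fields from the question.
# "asc" is an SQL keyword, so field names are quoted in the statements.
conn = sqlite3.connect(":memory:")
conn.execute(
    'CREATE TABLE tb_x ("nome" TEXT, "idade" INTEGER, "sexo" TEXT, '
    '"signo" TEXT, "asc" TEXT)'
)

# Made-up CSV with 10 columns; only 5 of them are of interest.
sample = io.StringIO(
    "c0,c1,Nome,Idade,c4,Sexo,c6,Signo,c8,Ascendente\n"
    "x,y,Ana,30,z,F,w,Leao,v,Aries\n"
)

reader = csv.reader(sample)
next(reader)  # skip the header, as in the question

for row in reader:
    # One dict maps each DB field to the CSV column that feeds it.
    colunas = {"nome": row[2], "idade": row[3], "sexo": row[5],
               "signo": row[7], "asc": row[9]}
    campos = ", ".join(f'"{c}"' for c in colunas)
    placeholders = ", ".join("?" for _ in colunas)
    # Field list and placeholders are generated from the dict, so adding or
    # renaming a mapping only touches the dict, not the SQL string.
    conn.execute(f"INSERT INTO tb_x ({campos}) VALUES ({placeholders})",
                 list(colunas.values()))

print(conn.execute('SELECT "nome", "idade" FROM tb_x').fetchall())
```

With this shape, the index-to-field mapping lives in a single dict, and the INSERT statement is derived from it rather than hard-coded twice.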
