Error reading csv file in Python

Viewed 617 times

-2

When reading a .csv file in Python I get the error below. The file is too big, so I can't open it to inspect it. Does anyone know what this error is and how to proceed?

import pandas as pd
dados = pd.read_csv('nell.csv', encoding='utf-8', sep=";") 

Error


ParserError                               Traceback (most recent call last)
<ipython-input-50-00e18ff698ec> in <module>()
----> 1 dados = pd.read_csv('nell.csv', encoding='utf-8')

~\Anaconda3\lib\site-packages\pandas\io\parsers.py in parser_f(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, escapechar, comment, encoding, dialect, tupleize_cols, error_bad_lines, warn_bad_lines, skipfooter, doublequote, delim_whitespace, low_memory, memory_map, float_precision)
    676                     skip_blank_lines=skip_blank_lines)
    677 
--> 678         return _read(filepath_or_buffer, kwds)
    679 
    680     parser_f.__name__ = name

~\Anaconda3\lib\site-packages\pandas\io\parsers.py in _read(filepath_or_buffer, kwds)
    444 
    445     try:
--> 446         data = parser.read(nrows)
    447     finally:
    448         parser.close()

~\Anaconda3\lib\site-packages\pandas\io\parsers.py in read(self, nrows)
   1034                 raise ValueError('skipfooter not supported for iteration')
   1035 
-> 1036         ret = self._engine.read(nrows)
   1037 
   1038         # May alter columns / col_dict

~\Anaconda3\lib\site-packages\pandas\io\parsers.py in read(self, nrows)
   1846     def read(self, nrows=None):
   1847         try:
-> 1848             data = self._reader.read(nrows)
   1849         except StopIteration:
   1850             if self._first_chunk:

pandas\_libs\parsers.pyx in pandas._libs.parsers.TextReader.read()

pandas\_libs\parsers.pyx in pandas._libs.parsers.TextReader._read_low_memory()

pandas\_libs\parsers.pyx in pandas._libs.parsers.TextReader._read_rows()

pandas\_libs\parsers.pyx in pandas._libs.parsers.TextReader._tokenize_rows()

pandas\_libs\parsers.pyx in pandas._libs.parsers.raise_parser_error()

ParserError: Error tokenizing data. C error: Expected 1 fields in line 218, saw 4

1 answer

0

According to the error log, the parser fails on line 218 of the file: it expected only 1 field of data (1 cell, in .csv terms) and found 4.

I can think of two solutions. The first is to ignore the file header, so pandas will create as many columns as necessary for each line. To do this, just add header=None to the call that loads the .csv file:

import pandas as pd
dados = pd.read_csv('nell.csv', encoding='utf-8', sep=";", header=None) 
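
Before choosing a workaround, it can help to look at the offending line itself. A minimal sketch (the helper name and its 1-indexed convention are my own, not from the original answer) that fetches a single line without loading the whole file into memory:

```python
from itertools import islice

def peek_line(path, lineno, sep=';', encoding='utf-8'):
    """Return line `lineno` (1-indexed) and its field count,
    without reading the whole file into memory."""
    with open(path, encoding=encoding) as f:
        # islice is 0-indexed, so lineno - 1 selects the wanted line
        line = next(islice(f, lineno - 1, lineno)).rstrip('\n')
    return line, line.count(sep) + 1
```

Calling peek_line('nell.csv', 218) would show why the parser saw 4 fields where it expected 1 (a stray separator, an unquoted field, and so on).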

Another solution, admittedly riskier, is to skip every line that fails to parse. Instead of header=None, add error_bad_lines=False:

import pandas as pd
dados = pd.read_csv('nell.csv', encoding='utf-8', sep=";", error_bad_lines=False)

Again: with this second configuration you silently discard every malformed line. I suggest the first approach, but posted the second for reference.
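
A note for newer pandas versions (not part of the original answer): error_bad_lines was deprecated in pandas 1.3 and removed in 2.0; the replacement is on_bad_lines='skip'. A minimal sketch, using an in-memory sample that reproduces the question's "expected 1 field, saw 4" situation:

```python
import io
import pandas as pd

# Hypothetical sample mimicking the problem: the third data row has
# 4 ;-separated fields where the header declares only 1 column.
sample = io.StringIO('col\nok\nbad;extra;fields;here\nalso ok\n')

# In pandas >= 1.3, on_bad_lines='skip' drops malformed rows, the same
# effect error_bad_lines=False had in older releases.
dados = pd.read_csv(sample, sep=';', on_bad_lines='skip')
print(dados)
```

The malformed row is dropped and the two valid rows are kept.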
