I wrote a report in R Markdown and, after a Windows update, all accented characters were replaced by other characters. The main issue is reading external files: there is a part where I need to read some CSVs that previously worked perfectly with the "ISO-8859-1" encoding, but now no longer works. Below is the reading code:
library(readr)
library(dplyr)

arquivosRREO <- lapply(arquivosRREOLista, function(x) {
  # Read the semicolon-delimited file, skipping the 5-line header,
  # with Brazilian number formatting and Latin-1 encoding
  arquivos <- read_delim(x, ";", escape_double = FALSE,
                         locale = locale(decimal_mark = ",", grouping_mark = ".",
                                         encoding = "ISO-8859-1", asciify = TRUE),
                         trim_ws = TRUE, skip = 5)
  # Keep only the rows for the state of RJ
  arquivos %>% filter(UF == "RJ")
})
The only notable event since the last time I generated the files was a Windows 10 update. Do I have to test encoding by encoding until I find one that works? And if I do find one, is there any way to prevent external events from affecting it?
Encoding is always a problem, and it is also very difficult to reproduce. One of the functions that handles encoding best is fread from data.table. If that doesn't work, I'd use the function stringi::stri_enc_detect to try to find the most likely encodings. – Daniel Falbel
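A minimal sketch of that suggestion, assuming arquivosRREOLista holds the file paths (the variable names are illustrative, not the asker's): stri_enc_detect ranks candidate encodings from the raw bytes, and fread only accepts "UTF-8", "Latin-1" or "unknown" in its encoding argument, so the detected encoding has to be mapped to one of those.

library(stringi)
library(data.table)

# Hypothetical: inspect the first problematic file in the list
caminho <- arquivosRREOLista[[1]]

# Read the raw bytes and rank the most likely encodings by confidence
bytes <- readBin(caminho, what = "raw", n = file.size(caminho))
stri_enc_detect(bytes)[[1]]   # e.g. ISO-8859-1 vs UTF-8 vs windows-1252

# Then read with fread, mapping the detected encoding to one fread understands
arquivo <- fread(caminho, sep = ";", dec = ",", skip = 5, encoding = "Latin-1")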
Is the encoding problem in your .Rmd file or in the files you are reading? – Guilherme Parreira
In both! But the one that caused me the biggest problem was in the files, because there are other elements in the report that refer to the names! – Flavio Silva