(for a more cultural focus see this other question)
Question-1. "(...) why are there still applications that adopt standards such as ANSI, among other encodings?"
Answer. I would say "there are very few". Some of these applications are technically justified because they do not use an accented alphabet; the others, which impose on Portuguese speakers the absence of accents and/or of interoperability, are doomed to be scrapped.
Question-2. "Wouldn’t it be easier to leave (...)?"
Answer. Yes; when I say "doomed to be scrapped", that is roughly what I mean.
The problem, perhaps, is that you cannot wait years for this; you need applications that respect UTF-8 today, now...
Many people, even if they do not put it in writing, openly say that it is the pressure of international companies, which drag their feet in Brazil and force the use of Windows-1252, or of government agencies, which stopped updating their software back in 1980... I do not agree; that is just an excuse. I think we cannot blame them alone (!): we ourselves, professionals in the field, have dragged our feet for years by not demanding UTF-8 in our work environment and in our relationships with customers and suppliers.
Conclusion. We must agree with @utluiz, who reminds us that we must, in part, strive every day to keep the whole environment in UTF-8 and, in part, resign ourselves to facts and circumstances... and forget the subject until the world changes 100%.
PS: web pages and the storage of text in databases are emblematic cases. Why did so many web designers take so long (and some still do) to bother preparing their HTML pages and templates with UTF-8? How many programmers take part in the "localization" and improvement of open-source projects such as MySQL or PostgreSQL? The Brazilian distribution of PostgreSQL does not offer, as its default template (default DATABASE), something with
ENCODING = 'UTF8' LC_COLLATE = 'pt_BR.UTF-8' LC_CTYPE = 'pt_BR.UTF-8'
... And, since it is not the default, how many hosting companies took the trouble to change the default to the Brazilian standard? How many programmers, when they could, bothered to set up their databases this way? I myself was once my own victim... until I changed my attitude.
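For reference, a minimal sketch of what such a setup could look like when creating a database by hand; the database name is hypothetical, and it assumes the pt_BR.UTF-8 locale is installed on the server:

    -- hypothetical database created with the Brazilian UTF-8 settings;
    -- TEMPLATE template0 is required whenever the requested locale differs from template1's
    CREATE DATABASE minha_base
        ENCODING   'UTF8'
        LC_COLLATE 'pt_BR.UTF-8'
        LC_CTYPE   'pt_BR.UTF-8'
        TEMPLATE   template0;

A hosting provider could make this the default for every new database simply by initializing the cluster with initdb --encoding=UTF8 --locale=pt_BR.UTF-8.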
How about changing one small detail of the question: change it to "Why do we still allow the use of encodings other than UTF-8?"
An opposite stance: assuming ourselves to be part of the environment
In general we position ourselves as "victims" of our environment: the environment as a given fact, driven by decisions we take no part in.
But the "environment" in this case, is something where, for example, the Stackoverflow-Portuguese community, can act, can have some effect, even if small. If we choose to conform to this "small change", the questions we should ask ourselves totally change (!).
Why can we, analysts and programmers, not demand from our work environment that the UTF-8 standard be adopted? Why can software and IT companies not require their customers and suppliers to exchange data in UTF-8?
Of course, you cannot demand it from those who cannot comply, but we know that in some 90% of cases the default configuration of a national, nationalized, or "localized" product could adopt UTF-8. Moreover, when it comes to data exchange, that is, formats such as XML and HTML, fully open and in a completely standardized environment (e.g. IETF and W3C recommendations), we can estimate that 99% could be UTF-8.
Of course, the second requirement, in exceptional contexts where UTF-8 cannot be offered, is ISO 8859-1. There are still a number of contexts, wisely laid out in @utluiz's answer, where the difficulty of using UTF-8 is "explained" a little better, and is justified by our weak culture of adopting good practices, as well as by a culture and history, as a Portuguese-speaking public and consumers, of not demanding our rights.
This answer is, in part, a reminder that standards are useful and necessary, and that, especially in Brazil, we waste a great deal of our lives performing conversions, fixing data, adjusting settings, and adapting libraries. Analysts and programmers waste time, and users are subjected to products and "services" without the cedilla.
Contextualizing
When it comes to computer science, computing, and digital media, even Portugal counts as a "colonized country". Foreign conditions have always been imposed on speakers of the Portuguese language (e.g. putting up with text without accents).
Gradually, European standards were adopted, and the minimum requirements for expressing the Portuguese alphabet in a standardized way came to be accepted by manufacturers of machines, software, and other resources. The consolidation of the ISO 8859-1 standard (known as ISO Latin-1) was of great importance.
In Brazil, however, there was a potpourri of encodings... And with the emergence of Unicode, and of the "recommendations for the use of UTF-8", the diversity (of this potpourri) only increased.
Recalling the facts
As already stated in this and other answers, UTF-8 has for years been the standard, both de facto and de jure.
The W3C has been recommending the use of UTF-8 (see RFC 3629) in all of its recommendations. Likewise the Brazilian government, with the e-PING recommendation.
All operating systems in use, on desktop or mobile, support UTF-8. Even QR Codes offer UTF-8...
On the Web, UTF-8 has been the most widely used encoding (the de facto default) since 2007:
article "Moving to Unicode 5.1", 2008, shows in graphs and with Google data, that in December 2007 UTF-8 encoding became the most frequent encoding on web pages, passing ASCII and ISO 8859-1.
A 2012 article about the blogosphere re-evaluates the data and shows that UTF-8 remains predominant, even on "technologically casual" pages such as blogs, where only 6% of pages with an explicitly declared encoding were detected using something other than UTF-8.
Examples of questions related to problems with UTF-8:
Clearly, even today, this is a "headache" for Brazilian analysts and programmers, in installations, in configurations, and above all in data exchange.
EDIT (in response to @Bacco's comments)
On the question of "freedom of choice". Two examples:
We are free to choose between Java, PHP, Python, etc. They are all "standard languages"; it is a matter of taste, context, etc., and each programmer "adopts their standard". There is no need for "one language for all", since there are no relevant coordination problems: the benefits of "one for all" do not exceed the benefits of diversity, and the existence of a handful of large communities is enough to curb excessive diversity.
We are not free to choose the (street) number of our house: it must follow a standard, tied to the distance along the street or to its block. If we make up a number by numerology or by taste, we create confusion in the street and make it harder for letters to be delivered even to our own house. In this case the general benefit of "adopting the standard" exceeds the personal benefits of diversity.
In the case of character encoding we are not free to choose: the W3C, the Brazilian government, etc. have already chosen for us. It is UTF-8. The benefits of adopting the standard (rather than diversity) are much greater: interoperability emerges, simplicity... and as programmers we waste much less of our lives (freeing ourselves from encoding checks, conversions, and risks of error).
NOTE: these "benefits" (global vs. individual) can be measured; the game of possibilities, of scenarios with more or less diversity, is known as a coordination game. A standard is the only thing that resolves a coordination dilemma.
I choose to use Windows-1252 in some of my software without having any accentuation problems, including software that interacts with UTF-8 and UTF-16. I save a lot of trouble by avoiding unnecessary normalization, and I know exactly how much storage I need for my strings. I only see encoding cause problems when the programmer does not understand the subject. UTF-8 is not 100% trouble-free either; that is a legend. The same character can be represented with and without combining characters, and whoever does not know what they are doing will, every now and then, also get into trouble with pure UTF-8.
– Bacco
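(The point about combined vs. precomposed forms is easy to reproduce in the database itself; a minimal sketch, assuming PostgreSQL 13 or later, which provides a normalize() function, and a UTF-8 database:)

    -- "é" precomposed (U+00E9) and "e" + combining acute (U+0065 U+0301) are both valid UTF-8,
    -- look identical on screen, yet compare as different strings until normalized
    SELECT U&'\00E9' = U&'\0065\0301'                                 AS equal_raw,        -- false
           normalize(U&'\00E9', NFC) = normalize(U&'\0065\0301', NFC) AS equal_normalized; -- true

Windows-1252 has only the precomposed form, so this particular ambiguity cannot arise there, which is part of the point above; under UTF-8 it has to be handled with normalization.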
Windows-1252 is not a standard, not for the Brazilian government, nor for the government of Portugal, nor for the W3C... No national or international democratic body recommends the use of Windows-1252. What we call a "standard" here is precisely the "consensual recommendation" that has already been decided. Unfortunately you do not have the option of using Windows-1252, as you put it, only the option of complying or not complying with the recommendation to use UTF-8. Perhaps there is some general confusion on this point...
– Peter Krauss
@PK As I posted in your answer, UTF-8 is not a universal solution. I agree that it is good for the vast majority of cases, but the question talks about abandoning everything in favor of UTF-8. The "disadvantages" section illustrates some points that show that UTF-8 also has problems: https://en.wikipedia.org/wiki/UTF-8 . And I do not usually see problems with accents in applications that use other encodings, which are not few. I see problems with people who have trouble understanding the limits of each encoding, which is understandable (in addition to the poor documentation of many languages in this regard).
– Bacco
In this other question there was an answer based on facts: http://answall.com/a/30220/70
– Bacco
Relevant: http://xkcd.com/927/
– Oralista de Sistemas
UTF-8 does not represent all Unicode chars... if you saw that statement somewhere, it is wrong...
– Daniel Omine
@Danielomine, give me a hand: can you give me an example of UTF-8 that is not Unicode?
– JJoao
Open a new question... @Jjoao
– Daniel Omine