Is there a difference between "accuracy" and "precision" in computing contexts?

Recently, a question appeared on the site dealing with the "inaccuracy" of the value returned by Java's Math.toRadians method. However, in other places (such as in the answers to that question), the term used is "precision".

I am not claiming that any of these texts used the terms incorrectly; I only cite them to show that both terms appear frequently in this area. As far as I know their meanings are very similar, but perhaps there is a difference.

I would like to know whether, in the context of computer science, there is a difference between the meaning of accuracy and that of precision. If so, what is it? Or can these terms be (formally) used interchangeably?
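For context, a minimal Java sketch of the kind of call being discussed; the comment about how the value is computed reflects typical implementations, not anything stated in the question:

```java
public class ToRadiansDemo {
    public static void main(String[] args) {
        // Math.toRadians(30) is typically computed as 30 / 180.0 * Math.PI using doubles,
        // so the result is a binary approximation of the mathematical pi/6, not the real number itself.
        double rad = Math.toRadians(30);
        System.out.println(rad);          // a value very close to pi/6
        System.out.println(Math.PI / 6);  // may or may not be bit-for-bit identical to the line above
    }
}
```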

  • Accuracy: "3. absolute rigor in the determination of a measure, weight, value, etc.; exactness." I always think in terms of syntax and semantics. These two words, with their syntax in Portuguese, carry the semantics quoted above. In the context of Computer Science, in theory, you can use them in whatever context suits you; in practice, at the root, all syntax is converted to binary and the semantics is implicit in the instructions, regardless of what they may mean at a high level.

    (non-canonical) Accuracy is absolute precision. The more precision you have, the more you TEND toward accuracy. Example: an (ideal) part that measures exactly 6.332214 mm. A caliper will give greater accuracy in the measurement than a ruler, and a micrometer even more, but perhaps none of them gives the exact value. Still, you clearly know which one is the most precise for measuring. Each instrument exists to combine the required accuracy with ease of use. The appropriate level of accuracy differs according to each use, and there are cases that demand exactness. No float is used, however accurate it is, for monetary values; the value has to be exact.

    Related: https://stackoverflow.com/a/8270869

2 answers


I have never seen anything that applies to all of computing. At specific points in computing the term may have some specificity (in fact, a definition), or people may have used it in a certain way even though that is not the general consensus. I will try to show this using the context provided in the question, which I believe is the most important one.

Formal definition

Wikipedia has an entry dedicated to this difference in general, which shows the importance of the topic.

The terms cannot be formally used interchangeably, because they have different and clear definitions.

Accuracy is the characteristic of something being close to reality.

[Image: arrows in the center of a target]

Precision is the characteristic of something having enough detail for the information to be understood correctly.

[Image: person repairing a small, sophisticated mechanism with a magnifying glass]

I am putting this in my own words, which I consider simpler than what can be read in the article above. There it is stated more exactly. And, as always, Wikipedia is not necessarily the most canonical and accurate answer possible.

Accuracy and precision have degrees. There is absolutely accurate and there is accurate only to a low degree. But I don't think this helps us much.

Informally, or how we actually use the terms

My interpretation, which may not be the formal one, is that if something is not absolutely exact it is inaccurate. But from the definitions I found, it seems it's not quite like that: something of low accuracy can still be called accurate because of these degrees.

If you take this strictly, some of my answers are not quite right, because I firmly treat whatever has low accuracy as inaccurate, even when that level of accuracy may be all that is needed. In some places I used both terms.

I used them a lot when talking about monetary values. For me, and I have seen this stated in many places, a monetary value must be exact, must be absolute.

Binary floating-point types cannot be exact, because they are approximations, yet they can have the necessary detail (whether the value is positive or negative, the magnitude for all possible values, or the cents or fractions of them that are expected).
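As a rough Java illustration of that point (BigDecimal is just one common way to keep a monetary value exact; the choice of example is mine, not something from the question):

```java
import java.math.BigDecimal;

public class MoneyDemo {
    public static void main(String[] args) {
        // Binary floating point: 0.10 has no exact binary representation,
        // so repeated sums drift away from the value we mean.
        double sum = 0.0;
        for (int i = 0; i < 10; i++) {
            sum += 0.10;                  // ten payments of 10 cents
        }
        System.out.println(sum == 1.0);   // false: the stored value is only close to 1.0

        // A decimal type built from a String holds exactly 0.10,
        // so the total is exactly 1.00 -- what a monetary value requires.
        BigDecimal exactSum = BigDecimal.ZERO;
        for (int i = 0; i < 10; i++) {
            exactSum = exactSum.add(new BigDecimal("0.10"));
        }
        System.out.println(exactSum);     // 1.00
    }
}
```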

For me, what should have degrees is precision: something is more or less precise. Being more or less exact, I think, is like being more or less pregnant; either you are or you aren't. At least that is how it works in the scenarios from the question's examples.

Informally they end up being interchanged; I myself must have done it here and there, because I only became attentive to the real definition after I started using SOpt (one of the reasons to use it was precisely to force myself to seek more exactness (which is the same as accuracy :D)).

This question made me look at the subject a little more closely, to give the most accurate answer possible :) It made me see that even the interpretation I was using is not perfect, although I like it; I think it gives a better idea, and it may be what the OP wants to know.

Trying to compare uses in a context

[Image: comparison of the 4 situations on the precision and accuracy axes]

By the general formal definition adopted, these binary floating-point types can be considered precise, i.e., they may not give the actual value, but every time, under the same conditions, they give the same result. And they do. So technically they should be called precise, at least on some level. I don't like that; I think it's an imprecise definition :)

By my definition these types are inexact but precise, since they give an approximation that is sufficient for most cases. By the formal definition they are better than precise. They would only be imprecise if you got a slightly different value each time (or an equal one just by coincidence), but that is not the case. Every time you have a real-world 0.3 on the computer, using a binary floating-point type, in the same implementation, it will give a value very close to that, not exactly it, but it will always be that same value.
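A minimal Java sketch of this repeatability argument (my own illustration, not code from the question):

```java
public class RepeatableDemo {
    public static void main(String[] args) {
        // The same operation, under the same conditions, always yields the
        // same bits: the result is not exactly 0.3, but it never varies.
        double first = 0.1 + 0.2;
        boolean alwaysEqual = true;
        for (int i = 0; i < 1_000; i++) {
            alwaysEqual &= (0.1 + 0.2) == first;
        }
        System.out.println(alwaysEqual);     // true: perfectly repeatable
        System.out.println(first == 0.3);    // false: not the exact real value
    }
}
```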

So let's try to interpret the image above in a way that makes the definitions converge.

Let us understand the circle as something microscopic, where whatever is inside the circle cannot be distinguished by anyone or anything. If it is inside, it is all the same thing. Then it is easy to see that if something falls outside the circle it is outside of reality and is inaccurate.

But precision complicates things, because it only makes sense if we look at multiple samples, and in the context we care about we look at just one. It assumes there is a certain randomness each time we take a value.

Lack of contextual canonical information

I am critical of some of the definitions in this area, or of the lack of more formal and universal definitions. There is a lot that I wish had a better, more exact definition :P

I think this leaves room for the terms to keep being used the way I use them in basic computing, just as I have seen other people do. The exact is that which is not inaccurate; it is absolute, it is perfect. What is not absolutely exact, to me, can only be (or not be) precise, but one should never say that it has some degree of exactness. At least in computing. But I can't impose my will; I can't argue this as well as I did in What is the difference between attribute and field in classes?.

I think we need a more formal definition for our reality, because we don't run experiments and check whether we get the same result; the experimental definition doesn't serve us well (for computing itself, that is; it may serve the domain of some application we write).

So I use the term accuracy for something that matches reality in an absolutely exact way; I do not measure degrees of accuracy.

And I use the term precision to talk about the granularity of the measurement, that is, how fine-grained I can get. I may be able to see only millions of units, or thousands, or hundreds, or single units (very common), or I may see fractions of a unit: to the first decimal place, to the second (the most common for monetary values, i.e., cents), to the third, and so on.
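A sketch of that idea, under my own assumption of one common way to code it: storing monetary values as whole cents, that is, fixing the precision at two decimal places:

```java
public class CentsDemo {
    public static void main(String[] args) {
        // Precision chosen up front: hundredths of a unit (cents).
        // The long holds an exact integer count of cents, so within that
        // precision the value is exact; anything finer simply cannot be expressed.
        long priceInCents = 1999;               // 19.99
        long totalInCents = priceInCents * 3;   // exact integer arithmetic
        System.out.printf("%d.%02d%n", totalInCents / 100, totalInCents % 100); // 59.97
    }
}
```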

Note that I am using the terms in a specific context of computing and not for all of it.

I can find material that talks this way, but it's intuitive usage; I haven't found a formal definition that says the same as I do (I haven't given up yet).

I found something in the book that is considered "the Bible" of computing; it is not a definition, but it is a usage that shows this is the way the term is employed.

The examples from the question

The first example from the question is correct; that is how we use it and how we document it. The way we understand it, that question uses the correct term, which even came from the documentation. The result falling outside the very "narrow" circle gives a number that is not what we want to quantify; it does not show the purest reality, only something close to it. Nothing to do with precision there.

Strictly speaking, and also by the definition I see people using in computing, especially in the context used there, the second example is wrong in some way.

The term "inaccurate", which is the correct one to use there by any adopted definition, was not used, and that is already an error in the question/answer. It uses "imprecise" instead, which is not the case, also by any definition.

By the definition we find on Wikipedia, which is corroborated by several sources, it has precision because the error is the same every time; it stays within what is expected, it does not vary. You don't add 0.1 to 0.2 and get a different value each time; it just doesn't give exactly 0.3.

But this does not serve our analysis, because in computing only what you have no control over is non-deterministic, and this kind of operation is always under control.

And by the definition that I and other people use in computing, it has a very good number of digits in the fractional part, it has enough detail, maybe more than necessary, so it has precision: it is a precise representation of an inexact value.
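To see how much detail the binary value actually carries, a small Java sketch (the exact digits printed depend on the double representation; the point is only that there are many of them):

```java
import java.math.BigDecimal;

public class DetailDemo {
    public static void main(String[] args) {
        double d = 0.1 + 0.2;
        // The default String form already shows the value is not exactly 0.3 ...
        System.out.println(d);                   // something like 0.30000000000000004
        // ... and converting the double itself to BigDecimal reveals every digit
        // of the binary value that is really stored.
        System.out.println(new BigDecimal(d));   // the full decimal expansion of that binary value
    }
}
```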

You can argue that, by the general definition, the term "precision" does not fit here and should not be analyzed at all, nor should you look at whether the value has enough detail, especially since that definition of detail is not formal.

The right term for that error is not "imprecise", it is "inaccurate". By the general formal definition, which I do not like, that result is even "precise" (always the same), which does not help us, besides being "accurate" (it is close enough).

But I think people understood what was meant in the question/answer, even if the term was wrong. It just also consolidated the wrong understanding of the term.

However, when the error uses a term for its opposite, it should be considered serious. But the confusion is so ingrained that it ends up going unnoticed.

I read in several places that even in other contexts people confuse the terms.

Conclusion

Note that I am answering the question: by any definition the terms are not interchangeable and they are different, but the way in which they differ can change depending on whether you use the formal definition found in the general context or observe how they are used in some computing context.

I am not answering in the sense of giving a canonical answer that everyone can reference to settle unequivocally how to use each term in our context. It can be referenced as an argument in favor of one view.

So, what do you have there: a value that does not represent reality, or one that does, but gives a different result each time?

If it is precise, does it have all the detail you need? That is, does this example have all the decimal places that are useful? This part causes controversy because there is no formal definition.

The question wasn't about Machine Learning or anything like that; that is another story.

If anyone has another view, post it; maybe I can revise mine.
