Why are floating-point primitive values, when divided or multiplied by certain multiples of 10, displayed in scientific notation?

In this answer we can see that Java has a peculiarity in how it displays the result of certain operations on floating-point primitive types, such as division by multiples of 10, as shown in the example taken from the linked question:

int num1 = 5;
float num2 = num1 / 10000f;
System.out.println(num2);

As can be seen, the result is 5.0E-4 and not 0.0005.

I believe this is scientific notation, meaning 5.0 * 10^-4, which represents the same value, but I don't understand why Java changes the display like this.

In the linked answer, there is an excerpt from the documentation that says:

(...) when the magnitude of the value is less than 10^-3 or greater than 10^7 the value will be displayed in scientific notation.

Is there any convention or official reason for the language to adopt this kind of display in the case mentioned in the quote? Or, as stated in the answer, is it only for readability?

Note: this behavior does not occur with the types int and long, as can be seen here.
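
For concreteness, a small sketch of those boundaries (the chosen values are only illustrative; the strings in the comments are what the default formatting prints):

// Magnitudes from 10^-3 up to 10^7 are printed in plain decimal form
System.out.println(0.001f);      // 0.001
System.out.println(9999999f);    // 9999999.0

// Outside that range the default formatting switches to E notation
System.out.println(0.0001f);     // 1.0E-4
System.out.println(5 / 10000f);  // 5.0E-4 (the example from the question)
System.out.println(12345678f);   // 1.2345678E7

// int and long are never printed this way
System.out.println(12345678);    // 12345678
System.out.println(12345678L);   // 12345678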

  • 4

I lean purely towards readability. 0.0000000523 or 100000000 are quite difficult to read and to get a sense of magnitude from, whereas 5.23 × 10^-8 and 1 × 10^8 are much easier.

  • 8

It seems to me that this is inherited from C's printf("%g", f). There, it is specified that the shorter representation is used, between the plain decimal form and the exponent form. And the turning point is precisely 0.001, which has 4 characters, versus 1e-3, also with four. Therefore, any smaller decimal number would need the e notation in order to be the shortest possible string.

  • 3

In the documentation of Float.toString there is this excerpt: "How many digits must be printed for the fractional part of m or a? There must be at least one digit to represent the fractional part, and beyond that as many, but only as many, more digits as are needed to uniquely distinguish the argument value from adjacent values of type float." (adapted); which, in a way, supports Jefferson's comment about always displaying as few characters as possible - the excerpt talks about the number of decimal digits, but the idea is the same: display what matters without introducing noise (see the sketch after these comments).

  • @Jeffersonquesado Still, there is something inconsistent in this logic, because 100000.0 is written as 100000.0 instead of 1.0E5, which would be shorter.

  • Maybe it’s in order to save memory... https://dealunoparaaluno.wordpress.com/2013/04/03/os-8-datatype/
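
A rough sketch of the points raised in the comments above - the "just enough digits" rule and the 100000.0 inconsistency (the values are only illustrative; the strings in the comments next to each line are what the default formatter prints):

// Float.toString prints just enough digits to uniquely identify the float
System.out.println(0.1f);        // 0.1 (even though the stored value is not exactly 0.1)
System.out.println(1.0f / 3.0f); // 0.33333334

// The switch to E notation is not driven purely by string length, though
System.out.println(100000.0f);   // 100000.0 (not 1.0E5, which would be shorter)
System.out.println(0.0001f);     // 1.0E-4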

2 answers

2

It is purely arbitrary; each language has its own criterion, and the goal is to prevent a very large or very small number from generating a very long string (imagine 10^300, which is 1 followed by 300 zeros).

Now, this is the default formatter, which you never use in a "real" program except for debugging. When showing the value to the user you should always use a specific formatter; then you take on the responsibility yourself, e.g. in an accounting program you might allow the display of very large numbers (up to trillions, maybe) but always with 2 fixed decimal places.
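
For instance, a minimal sketch of that idea with java.text.DecimalFormat - the pattern and the values are illustrative, not something prescribed by the answer:

import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;
import java.util.Locale;

// Thousands grouping and always 2 decimal places, never scientific notation
// (symbols pinned to Locale.US so the separators in the output below are stable)
DecimalFormat money = new DecimalFormat("#,##0.00", DecimalFormatSymbols.getInstance(Locale.US));
System.out.println(money.format(5 / 10000f));        // 0.00 (no more 5.0E-4)
System.out.println(money.format(1234567890123.456)); // 1,234,567,890,123.46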

0

It must be a legacy from the design of the C++ language, where an integer can take up even more memory space than a float.

  • 1

I would like an official source for this information; in fact, I cited it in the question.

  • In that case, I recommend the book Concepts of Programming Languages, by Robert Sebesta. It will clear up this and several other questions.
