How to raise a number to a fractional power?


I would like to know how to raise a number to a fractional power, for example 2^2.5 or 3^0.7.

For positive integer exponents it works more or less like this:

#include <iostream>
using namespace std;

int main()
{
    int exp = 4, num = 2, soma = 1;

    // multiply the base (num) by itself exp times
    for (int i = 1; i <= exp; i++) {
        soma = soma * num;
    }
    cout << "the result is " << soma << endl;
    return 0;
}

But how would it work for fractional exponents?

2 answers



Don’t want to use the math library and its pow function? Well then, let’s go to the raw definition.

@Phelipe commented about a very interesting source on how to calculate exponentiation. Let’s start from the definitions seen in high school and then get into the subject of convergent series, functions formed by the infinite sum of polynomials.

Natural exponentiation

From now on, we will write the operation in text form as b^n, which means:

  • b is the base of the exponentiation
  • ^ is the exponentiation operator
  • n is the exponent

For simplicity, and to avoid complex numbers, let’s keep b strictly positive and different from 1.

ATTENTION

Below I will skip several formal steps to focus on a slightly more intuitive understanding of the issue. So, if you miss some formalism, or if I jump quickly to a conclusion without having all the necessary tools for it, don’t crucify me. I will try to include references to proper demonstrations.

Natural exponentiation is straightforward (disregard 0 as a natural number for now). The exponent indicates how many times you should multiply the base, following this algorithm:

  1. if n == 1, return b
  2. otherwise, return b * b^(n-1)

This definition, however, only provides results for natural n. For your case, with n as a floating point number (a subset of the rationals), we need to go further. We need to handle some cases:

  1. negative numbers
  2. n equal to zero
  3. fractional numbers

Shall we include the integers <= 0?

Integer exponentiation

What happens when we know the power of a given base and need to get the power just below it?

By definition, we know the value of b^(n+1) and we need to get b^n. We can totally ignore the value of b^(n+1) and perform n multiplications or, if we are smart, perform a single division. How? By analyzing the definition, of course.

We know from the recursive definition that b^(n+1) = b*b^n for all natural n. So the way to get b^n with the fewest operations is b^n = b^(n+1)/b.

That makes sense, because it is as if we removed one factor of b from the product. So a generalization is to say that 1/b = b^-1. What, then, would (1/b)^4 be? It would be 1/b * 1/b * 1/b * 1/b by the definition of natural exponentiation, which is a sign that we are on the right path.

This covers the negative integers. And n == 0? Well, we can get to it with the following reasoning:

We can extrapolate that b^n * b == b^(n+1) is always true. So, for n == 0, we have that b^0 * b == b^(0+1) == b^1. By definition, b^1 == b. Treating b^0 as an unknown, therefore, b^0 == b/b == 1.

We’ve managed to extend natural exponentiation to integer exponentiation. Before extending it to the rationals, I would like to present the inverse operation.

Logarithm

The logarithm is the inverse operation of exponentiation. That means, given b and b^n, it is possible to obtain n. How is the logarithm defined? Well, here is the notation:

log_b(x) = n

Here we are asking what exponent n is such that the base b raised to n results in the operand x, therefore x = b^n. There is a very special base in logarithms, which is Euler’s number e. It is so important that it even has its own notation:

ln(x) = log_e(x)

Some sources say that ln is the natural logarithm, others that it is the Napierian logarithm (Napierian here relates to Napier). I like the second name because it recalls the mathematician Napier, who made the first reference to this constant, even if in a roundabout way.

With this notation, and with the fact that ln(x) is the inverse of e^x, we have that e^(ln(x)) == x.

Another property of the logarithm is that ln(b^c) == c*ln(b), given that:

  • c is any number
  • b is a positive real other than 1.

But how does that help us? Because both unary functions, exponentiation of e and the Napierian logarithm, have mathematical definitions we can compute.

e^x

The exponential function is defined by a limit of a sum:

e^x = Σ (n = 0 → ∞) x^n / n!

Can we use infinite sums on a computer? Oddly enough, yes, we can, as long as the sum is convergent.

Even so, we risk losing precision. I give an example of how precision is lost when calculating a 30% sales tax on 72 items at 574.75 in that reply. In another note I talk about loss of precision depending on the method used.

To begin to answer, first we need to check whether this infinite series converges. The formal definition is very tedious, but we can use something derived from it that is easier to handle:

Take an infinite series S. Let Sn be the nth term of the series. Let f(n) = |Sn| be a function from the integers to the reals. If f is decreasing for all n >= N (and decreasing fast enough, as here, where the ratio between consecutive terms shrinks), then S converges.

Okay, what is the term Sn for the exponential? It is x^n/n! (non-negative for x >= 0; otherwise take its absolute value). So we need to find some value of n such that S_(n+1) < Sn, and then show that this holds for every value above n+1. Easy, huh?

As a first step, let’s see what happens with Sx (for an integer x). It is exactly x^x/x!. By the definition of factorial we have that x! == x * (x-1)!. By the definition of exponentiation, we have that x^x == x^(x-1) * x. Hence:

x^x / x!
= (x * x^(x-1)) / (x * (x-1)!)
= (x/x) * (x^(x-1) / (x-1)!)
= 1 * x^(x-1) / (x-1)!

Look at that coincidence! Sx == 1*S_(x-1). And what if we go up instead? Say, to S_(x+1):

x^(x+1) / (x+1)!
= (x * x^x) / ((x+1) * x!)
= (x^x / x!) * (x / (x+1))
= Sx * (x / (x+1))

Since x/(x+1) is less than 1, we are multiplying by a value less than 1. For x+2, we take S_(x+1) and multiply it by x/(x+2), which is smaller than x/(x+1). So we have that, for n >= x, the function f(n) = Sn becomes decreasing. Hence the series converges.

And if x is not an integer? A real x? Well, in that case we can extrapolate the same reasoning, taking y as the largest integer smaller than x. Our cutoff, from which f(n) becomes decreasing, is y (because x/y > 1 and x/(y+1) < 1, so the terms decrease from that point on).

Now that we have a convergent series, how do we compute the sum? Roughly speaking: for as long as each new term still matters in our floating point representation.

Relevant sums with floating point

Let’s take any number a. It belongs to a set of numbers with a finite mantissa of n digits (just like IEEE 754 single precision, with its 24-bit mantissa).

I’m going to assume here that the mantissa has 4 digits and that we’re talking about digits in base 10, but it’s not hard to extrapolate.

It means that a is in the following format:

x.yzk * 10^e

A possible value for a would then be:

6.473 * 10^7

What happens if we add a number with exponent 6 (7-1)? Let’s take an example where no carry occurs, for simplicity’s sake:

6.473 * 10^7   +
5.112 * 10^6
-----------------
6.473 * 10^7   +
0.5112 * 10^7
-----------------
6.9842 * 10^7

However, since the 2 is the fifth digit, it is no longer representable with a fixed 4-digit mantissa. Therefore it is not possible to distinguish the sum above from this other sum:

6.473 * 10^7   +
5.110 * 10^6
-----------------
6.473 * 10^7   +
0.5110 * 10^7
-----------------
6.9840 * 10^7

This means that the value of the least significant digit has become irrelevant. In fact, if we take a number y = 0.009 * 10^6 and compute s = a+y, the value of s will be s == a.

Since we’re no longer talking about our everyday arithmetic, there are some things worth mentioning. Addition, for example, is not associative. Shall we prove it?

We know that a+y == a because of the previous step. This also means that:

a == a+y
a == (a+y)+y
a == ((a+y)+y)+y
...

And so on. But what if we group the y terms differently? For example, like this:

a + (y+y+y+y+y+y+y+y+y+y)

Why, here we add a to y*10. What does that mean? That the digit of y now becomes relevant. So a + (y+y+y+y+y+y+y+y+y+y) > a.

In an infinite convergent series, we are interested in adding Sn for increasing indices. This means that while adding the next term still changes the partial sum, it is relevant and we must continue. However, when sum + S_(n+1) == sum, we have reached the point where new terms become irrelevant.

To add only the terms of f(n) that are relevant, we can do this:

int n = 0;
double sum = 0;
// stop as soon as the next term no longer changes the sum
while (sum + f(n) > sum) {
    sum += f(n);
    n++;
}

This algorithm can be optimized, especially if we know how f(n+1) relates to f(n) (though it can also be heavily optimized without knowing that), but the general idea is this.

ln(x)

This is the function that defines the natural logarithm of a number:

ln(x) = ∫ (from 1 to x) dt/t

We can use several distinct numerical methods to calculate the integral; my favorite is the trapezoid rule. See here for a JavaScript implementation.

Putting it all together

To calculate everything, according to the reply indicated by @Phelipe, just compute e^(ln(b)*n).

So, first, we calculate the value of ln(b) using the trapezoid rule to compute the integral. Then we multiply this result by n, for any number n. We then feed the result of this multiplication into the infinite series that describes the exponential, summing its terms individually until we find an index i such that adding Si makes no difference at the precision of the fixed mantissa used: somatorio_ateh_i + Si == somatorio_ateh_i.

It is worth noting that the trapezoid rule and the summation of the infinite series, as presented, are extremely inefficient.

Conclusion

It was a fun mathematical exercise, but if you need to raise numbers to non-integer powers... use the standard library operations; they will save you from performance problems, as @Phelipe and @epx commented.

  • Great answer. I wanted to understand how this calculation works with floating point.

  • I will replace it with a concrete example; maybe it will help. In any case, one of the answers I quoted has references to some videos on YT (in English) that help in understanding how these operations take place.

  • @Beto, I changed the floating point sum example to something concrete. It may help with understanding. I am also handling the possible rounding there, rounding the number before adding, which I had disregarded.

  • Okay. It’s easier to understand now, thank you.

  • And some people think that pow(x, y) only adds the cost of an extra function call to the code :D

  • @Beto, a further reference on the subject of floating point number manipulation: https://answall.com/a/38140/64969

  • Using the library pow() might be good because it might use optimized processor instructions.

  • @epx, might be good? I’m sure it will be better. The idea of the answer was to start from that premise (already stated in its first line): "Don’t want to use the math library and its pow function?" I think I’ll add a conclusion at the end of the answer noting that pow is the better choice in almost every case.

  • True, I only mentioned it in case anyone thought you had reinvented the wheel by doing it "by hand".



You are already including the math library, which brings several functions for mathematical calculations. To do exponentiation you can use the function pow.

Here’s an example of how to do this:

#include <iostream>
#include <math.h>
using namespace std;

int main()
{
    // "expoente" rather than "exp" avoids shadowing exp() from math.h
    float expoente = 4.9, num = 2.0;
    cout << "the result is " << pow(num, expoente) << endl;
    return 0;
}

If you want to know more you can access this link:

  • I wanted to know how to do it by hand, to understand how it works in programming.

  • You can look here then. It goes from the algorithm to an explanation of how the pow function works. https://answall.com/questions/5250/como-elevar-um-n%C3%Bamero-a-pot%C3%Aancia-sem-usar-a-biblioteca-Math-h

  • Remember that it is not recommended to write your own functions for this kind of calculation. The existing functions are as optimized as possible and have been tested by a large number of people.
