
Floating-Point Calculator

Created by Rijk de Wet
Reviewed by Steven Wooding
Based on research by
Institute of Electrical and Electronics Engineers. IEEE Standard for Floating-Point Arithmetic; IEEE Std 754-2019 (Revision of IEEE Std 754-2008); 2019.
Institute of Electrical and Electronics Engineers. IEEE Standard for Floating-Point Arithmetic; IEEE Std 754-2008; 2008.
Last updated: Jan 18, 2024


The floating-point calculator is here to help you understand the IEEE754 standard for the floating-point format. It acts as a converter for floating-point numbers — it converts 32-bit floats and 64-bit floats from binary representations to real decimal numbers and vice versa.

💡 Are you looking to convert binary numbers to the decimal system? Perhaps you'd be interested in our binary calculator!

How to use the floating-point calculator

Before we get into the bits and bytes of the float32 and float64 number formats, let's learn how the floating-point calculator works. Just follow these easy steps:

  1. If you want to convert the binary encoding of a floating-point number to the decimal number it represents, select floating-point to number at the top of the calculator. Then:

    • Select the precision used. This determines how your binary representation will be interpreted.

    • Enter the floating-point number's binary digits. You can enter the sign, exponent, and fraction separately, or you can enter the entire bit-string in one go — select your preference in the Bit input method dropdown menu.

    • The value stored in your float will be shown at the bottom of the calculator.

  2. If you want to convert a value into its floating-point representation, select number to floating-point at the top of the calculator. Then:

    • Enter your number in the field below that.

    • The IEEE754 floating-point binary and hexadecimal representations of both single- and double-precision floats will be shown below.

    • The floating-point calculator will also show you the actual value stored due to the loss of precision inherent to the floating-point format. The error due to this loss of precision will also be reported.

What is an IEEE754 floating-point number?

In computing, a floating-point number is a data format used to store fractional numbers in a digital machine. A floating-point number is represented by a series of bits (1s and 0s). Computers perform mathematical operations directly on these bits, rather than working with decimal digits the way a human would. When a human wants to read the floating-point number, a formula reconstructs the bits into a decimal number.

The IEEE754 standard (established in 1985 by the Institute of Electrical and Electronics Engineers) standardized the floating-point format and also defined how the format can be operated upon to perform math and other tasks. It's been widely adopted by hardware and software designers and has become the de facto standard for representing floating-point numbers in computers. When someone mentions "floating-point numbers" in the context of computing, they generally mean the IEEE754 data format.

💡 A computer's performance (i.e., speed) can be measured by the number of floating-point operations it can perform per second. This metric is called FLOPS and is crucial in scientific computing.

The most well-known IEEE754 floating-point format (single-precision, or "32-bit") is used in almost all modern computer applications. The format is highly flexible: float32s can encode numbers as small as $1.4\times10^{-45}$ and as large as $3.4\times10^{38}$ (both positive and negative).

Besides single-precision, the IEEE754 standard also codifies double-precision ("64-bit" or float64), which can store numbers from $5\times10^{-324}$ to $1.7\times10^{308}$. Less commonly used IEEE754 formats include:

  • Half-precision ("16-bit");
  • Quadruple-precision ("128-bit"); and
  • Octuple-precision ("256-bit").

💡 Although IEEE754 defines only these formats, any precision is technically possible — many older computers used 24-bit floating-point numbers!

However, a floating-point number is not just a number converted to the binary number system — it's much more complicated than that! Let's learn how the IEEE754 floating-point standard works.

How are real numbers stored with floating-point representation?

Any floating-point binary representation consists of three segments of bits: sign, exponent, and fraction.

  • The sign (S) indicates positive or negative;
  • The exponent (E) raises 2 to some power to scale the number; and
  • The fraction (F) determines the number's exact digits.

The segments' lengths and the exact formula applied to S, E, and F to recreate the number depend on the format's precision.

When stored in memory, S, E, and F are laid end-to-end to create the full binary representation of the floating-point number. In computer memory, it might look like this:

The bits of a floating point number stored in memory.

These bits are what the computer manipulates when arithmetic and other operations are performed. The computer never sees a number as its decimal digits — it only sees and works with these bits.
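Want to peek at these bits yourself? Here's a minimal Python sketch (the helper name float_to_bits is our own, just for illustration) that uses the standard struct module to reinterpret a float's bytes as an integer:

```python
import struct

def float_to_bits(x: float) -> str:
    """Show the IEEE754 single-precision bit pattern of a number."""
    # '>f' packs x as a big-endian 32-bit float; '>I' reads those same
    # four bytes back as an unsigned 32-bit integer.
    (n,) = struct.unpack(">I", struct.pack(">f", x))
    return f"{n:032b}"

print(float_to_bits(27.75))  # 01000001110111100000000000000000
```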

💡 The choice of precision depends on what its application requires. More precision means more bits and higher accuracy, but also a bigger storage footprint and longer computation times.

Let's look at the two most commonly used floating-point formats: float32 and float64.

The single-precision 32-bit float format

float32 is the most commonly used of the IEEE754 formats. As suggested by the term "32-bit float", its underlying binary representation is 32 bits long. These are segmented as follows:

  • 1 bit for the sign (S);
  • 8 bits for the exponent (E); and
  • 23 bits for the fraction (F).

When it's stored in memory, the bits look like this:

The bits of a 32-bit floating-point number.

We can rewrite it as a binary string:

$$b_1\ b_2\ b_3\ \dots\ b_{30}\ b_{31}\ b_{32}$$

The real value that this 32-bit float stores can be calculated as:

$$(-1)^{b_1} \times 2^{(b_2\ \dots\ b_9)_2 - 127} \times (1.b_{10} \dots b_{32})_2$$

where:

  • 127 is called the exponent bias and is inherent to the single-precision format;
  • $x_2$ means that x must be interpreted in base 2, i.e., binary; and
  • $(1.b_{10} \dots b_{32})_2$ means to take the binary bits $b_{10}$ to $b_{32}$ and append them after the leading 1 to form a binary fraction. See our binary fraction calculator for help on that.

We can rewrite the formula more compactly using S, E, and F:

$$(-1)^S \times 2^{E-127} \times (1.F)_2$$

where:

  • $S = b_1$;
  • $E = (b_2\ \dots\ b_9)_2$; and
  • $F = (b_{10}\ \dots\ b_{32})_2$.

To let floating-point formats store really small numbers with high precision, the combination $E = 0$ and $F > 0$ activates a separate formula. For float32, that formula is

$$(-1)^S \times 2^{-126} \times (0.F)_2$$

Numbers created by this formula are called "subnormal numbers", while "normal numbers" are created using the previous formula.
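For a sense of scale: the smallest positive subnormal float32 has a single 1 in the last fraction bit, so its value works out to

$$2^{-126} \times 2^{-23} = 2^{-149} \approx 1.4\times10^{-45}$$

which is exactly the lower limit quoted earlier.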

There are other special cases, encoded by specific values for E and F:

| E | F | Value |
| --- | --- | --- |
| $0$ | $0$ | $\pm 0$ |
| $0$ | $>0$ | $(-1)^S \times 2^{-126} \times (0.F)_2$ |
| $0 < E < 255$ | Any | $(-1)^S \times 2^{E-127} \times (1.F)_2$ |
| $255$ | $0$ | $\pm \infty$ |
| $255$ | $>0$ | $\text{NaN}$ |

NaN means "not a number", which is returned by mathematically undefined operations such as dividing zero by zero or subtracting infinity from infinity. For the cases of $\pm 0$ and $\pm \infty$, the sign bit determines whether the number is positive or negative. And yes, negative zero is a thing!
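To make the table concrete, here's a short Python sketch of a float32 decoder that walks through exactly these cases (the function name decode_float32 is our own):

```python
def decode_float32(bits: str) -> float:
    """Decode a 32-bit IEEE754 bit string by the case table above."""
    assert len(bits) == 32, "expected exactly 32 binary digits"
    S = int(bits[0], 2)    # 1 sign bit
    E = int(bits[1:9], 2)  # 8 exponent bits
    F = int(bits[9:], 2)   # 23 fraction bits
    sign = (-1) ** S
    if E == 255:           # all-ones exponent: infinity or NaN
        return float("nan") if F > 0 else sign * float("inf")
    if E == 0:             # zero exponent: +/-0 or a subnormal number
        return sign * 2**-126 * (F / 2**23)
    return sign * 2**(E - 127) * (1 + F / 2**23)  # normal numbers

print(decode_float32("01000001110111100000000000000000"))  # 27.75
```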

An example

Let's convert the binary floating-point representation 01000001110111100000000000000000 to the real number it represents.

  1. First, let's segment it into S, E, and F:

    • $S = 0_2 = 0$

    • $E = 10000011_2 = 131$

    • $F = 10111100000000000000000_2$

  2. Since $0 < E < 255$, we use the normal number formula:

    $$(-1)^S \times 2^{E-127} \times (1.F)_2$$

    • $(-1)^S = 1$ (so the number is positive!)

    • $2^{E-127} = 2^{4} = 16$

    • $(1.F)_2 = 1.101111_2 = 1.734375$

  3. Combine the three multiplicands:

    $$1 \times 16 \times 1.734375 = \mathbf{27.75}$$

Want to see for yourself? Try this value in our floating-point calculator to see it in action!

Note that $01000001110111100000000000000000_2$ converted directly from binary to decimal is not 27.75, but 1,105,068,032. Quite the difference, wouldn't you say?
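You can check both readings of those 32 bits with a couple of lines of Python (a quick sketch using the standard struct module):

```python
import struct

bits = "01000001110111100000000000000000"
n = int(bits, 2)                                  # the bits read as a plain integer
(x,) = struct.unpack(">f", struct.pack(">I", n))  # the bits read as a float32
print(n)  # 1105068032
print(x)  # 27.75
```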

💡 The floating-point formula can be seen as a form of scientific notation where the exponential part uses base 2. We can rewrite the example above as $1.734375 \times 2^{4}$. See our scientific notation calculator for more information.

The double-precision 64-bit float format

The inner workings of the float64 data type are much the same as those of float32. The only differences are:

  • The lengths of the exponent and fraction segments of the binary representation — in 64-bit floats, E takes up 11 bits, and F takes up 52.
  • The exponent bias — in 64-bit floats, it's 1023 (whereas in 32-bit floats, it's 127).

The formulas for the reconstruction of 64-bit normal and subnormal floating-points are, therefore, respectively:

$$(-1)^S \times 2^{E-1023} \times (1.F)_2$$

and

$$(-1)^S \times 2^{-1022} \times (0.F)_2$$

Because of the additional exponent and fraction bits compared to float32s, float64s can store much larger numbers at much higher accuracy.
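The same bit-peeking trick from the float32 section works here too; a minimal Python sketch, assuming the 64-bit layout above:

```python
import struct

def double_to_bits(x: float) -> str:
    """Show the IEEE754 double-precision bit pattern of a number."""
    # '>d' packs x as a big-endian 64-bit float; '>Q' reads those same
    # eight bytes back as an unsigned 64-bit integer.
    (n,) = struct.unpack(">Q", struct.pack(">d", x))
    return f"{n:064b}"

bits = double_to_bits(27.75)
print(bits[0], bits[1:12], bits[12:], sep=" | ")  # sign | exponent | fraction
```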

💡 All IEEE754 floating-point formats follow this pattern, with the biggest differences being the bias and the lengths of the segments.

How do I convert a number to floating-point?

To convert a decimal number to a floating-point representation, follow these steps:

  1. Convert the entire number to binary and then normalize it, i.e., write it in scientific notation using base 2.
  2. Truncate the fraction after the decimal point to the allowable fraction length.
  3. Extract the sign, exponent, and fraction segments.

Refer to the IEEE754 standard for more detailed instructions.
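As a rough illustration of steps 1–3, here's a simplified Python encoder. It's a sketch, not the standard's actual algorithm: it only handles positive normal values, and it truncates the fraction rather than applying IEEE754 rounding rules.

```python
def encode_float32(x: float) -> str:
    """Naively encode a positive normal value as a float32 bit string."""
    S = 0 if x > 0 else 1
    x = abs(x)
    E = 0
    while x >= 2:   # normalize: halve until x is in [1, 2)...
        x /= 2
        E += 1
    while x < 1:    # ...or double, for values below 1
        x *= 2
        E -= 1
    F = int((x - 1) * 2**23)  # keep 23 fraction bits (truncated)
    return f"{S:01b}{E + 127:08b}{F:023b}"

print(encode_float32(27.75))  # 01000001110111100000000000000000
```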

An example

Let's convert 27.75 back to its floating-point representation.

  1. Our integer part is 27, which in binary is $11011_2$. Our fraction part is 0.75. Let's convert it to a binary fraction (we sketch this doubling procedure in code at the end of this example):

    • $0.75 \times 2 = 1.50 = \mathbf{1} + 0.5$

    • $0.5 \times 2 = 1.00 = \mathbf{1} + 0$

    So $0.75 = 0.11_2$, and then $27.75 = 11011.11_2$. To normalize it, we rewrite it as:

$$\begin{aligned} & 27.75 \\ =&\ 11011.11_2 \\ =&\ (1.\textcolor{red}{101111} \times 2^{\textcolor{blue}{4}})_2 \end{aligned}$$
  2. The digits after the decimal point are already a suitable length — for float32, we'd be limited to 23 bits, but we have only six.

From the normalized rewrite, we can extract that:

  • $S = 0$, since the number is positive;
  • $E = \textcolor{blue}{4} + 127 = 10000011_2$ (we add 127 because $E - 127$ must be $\textcolor{blue}{4}$); and
  • $F = \textcolor{red}{101111}00000000000000000$ (zeroes get padded on the right because we're dealing with fractional digits and not an integer).

Therefore, our floating-point representation is $0\textcolor{blue}{10000011}\textcolor{red}{101111}00000000000000000$.

You can verify this result with our floating-point calculator!
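The repeated-doubling procedure from step 1 is easy to automate. A minimal Python sketch (the helper name frac_to_binary is ours):

```python
def frac_to_binary(frac: float, max_bits: int = 23) -> str:
    """Convert a fraction in [0, 1) to binary digits by repeated doubling."""
    bits = ""
    while frac > 0 and len(bits) < max_bits:
        frac *= 2
        bits += str(int(frac))  # the integer part is the next binary digit
        frac -= int(frac)
    return bits

print(frac_to_binary(0.75))  # 11, i.e., 0.75 = 0.11 in binary
```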

Floating-point accuracy

Floating-point numbers cannot represent all possible numbers with complete accuracy. This makes intuitive sense for sufficiently large numbers and for numbers with an infinite number of decimals, such as pi ($\pi$) and Euler's number ($e$). But did you know that computers cannot store a value as simple as 0.1 with 100% accuracy? Let's look into this claim!

When you ask a computer to store 0.1 as a float32, it will store this binary string:

00111101110011001100110011001101

If we convert that back to decimal with the floating-point formulas we learned above, we get the following:

  • $S = 0$
  • $E = 01111011_2 = 123$
  • $F = 10011001100110011001101_2$

$$\begin{aligned} &\ (-1)^S \times 2^{E-127} \times (1.F)_2 \\ =&\ (-1)^0 \times 2^{123-127} \times (1.1001...1)_2 \\ =&\ 1 \times 0.0625 \times 1.6000000238418579101 \\ =&\ 0.10000000149011611938477 \end{aligned}$$

That's a little bit more than 0.1! The error (how far away the stored floating-point value is from the "correct" value of 0.1) is $1.49 \times 10^{-9}$.
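You can reproduce this round trip in Python; forcing 0.1 through single precision with struct shows the stored value directly:

```python
import struct

packed = struct.pack(">f", 0.1)          # store 0.1 as a float32
(n,) = struct.unpack(">I", packed)
print(f"{n:032b}")   # 00111101110011001100110011001101
(stored,) = struct.unpack(">f", packed)  # read the float32 back
print(stored)        # 0.10000000149011612
```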


Let's try to rectify this mistake and make the smallest change possible. Our stored number is a little too big, so let's change that last bit from a 1 to a 0 to make our float a little bit smaller. We're now converting:

00111101110011001100110011001100

  • $S = 0$
  • $E = 01111011_2 = 123$
  • $F = 10011001100110011001100_2$

$$\begin{aligned} &\ (-1)^S \times 2^{E-127} \times (1.F)_2 \\ =&\ (-1)^0 \times 2^{123-127} \times (1.1001...0)_2 \\ =&\ 1 \times 0.0625 \times 1.5999999046325683594 \\ =&\ 0.09999999403953552246094 \end{aligned}$$

And we've missed our mark of 0.1 again! This time, the error is a little larger ($5.96\times 10^{-9}$). The first binary string, which ended in 1, was closer to the true value than this one!

You may think, "what if we used more bits?" Well, if we were to do the same with the float64 format instead, we'd find the same problem, although less severe. 0.1 converted to a 64-bit floating-point number and back to decimal is 0.100000000000000006, and the error here is $5.55 \times 10^{-18}$. This is the higher accuracy of float64 in action, but the error is still not 0 — the conversion is still not lossless.

This is, unfortunately, the drawback of the ubiquitous floating-point format — it's not 100% precise. Small bits of information get lost, and they can wreak havoc if not accounted for. The only numbers a float can store perfectly are integer multiples of powers of 2 (within the format's range and precision), because that's how the format builds its values. All other numbers are merely approximated when stored as a floating-point number. But it's still the best we've got!
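The classic symptom of this precision loss shows up in everyday code. A quick Python demonstration:

```python
# Ten copies of the inexactly-stored 0.1 don't quite add up to 1.0.
total = sum(0.1 for _ in range(10))
print(total == 1.0)     # False
print(f"{total:.17f}")  # 0.99999999999999989
```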

💡 Some operations are more resilient against precision loss. Try our condition number calculator to see how severely a loss of accuracy will affect matrix operations.

FAQ

Why do we use floating-point numbers?

The IEEE754 floating-point format standard enables efficient storage and processing of numbers in computers.

  • From a hardware perspective, many simplifications of floating-point operations can be made to significantly speed up arithmetic, thanks to the IEEE754 standard's specifications.

  • For software, floats are very precise, typically losing only tiny fractions (around one part in ten million for float32, and far less for float64) per operation, which enables high-precision scientific and engineering applications.

What is the floating-point representation of 12.25?

The floating-point representation of 12.25 is 01000001010001000000000000000000. Its sign is 0, its exponent is $10000010_2 = 130$, and its fraction is $10001000...0_2$. Reconstructed to decimal, we get:

$$(-1)^0 \times 2^{(130-127)} \times (1.10001)_2 = 1 \times 2^3 \times 1.53125 = 12.25$$
