Floating Point Math

Your language isn't broken, it's doing floating point math. Computers can only natively store integers, so they need some way of representing decimal numbers. This representation is not perfectly accurate. This is why, more often than not, 0.1 + 0.2 != 0.3.

Why does this happen?

It's actually rather interesting. A base-10 system can cleanly express only those fractions whose denominators are built from prime factors of the base. The prime factors of 10 are 2 and 5, so 1/2, 1/4, 1/5, 1/8, and 1/10 can all be expressed cleanly because their denominators use only 2s and 5s. In contrast, 1/3, 1/6, 1/7, and 1/9 are all repeating decimals because their denominators include a prime factor of 3 or 7.

In binary (base-2), the only prime factor is 2, so you can only cleanly express fractions whose denominator has only 2 as a prime factor. In binary, 1/2, 1/4, and 1/8 would all be expressed cleanly, while 1/5 or 1/10 would be repeating. So 0.1 and 0.2 (1/10 and 1/5), while clean decimals in a base-10 system, are repeating fractions in the base-2 system the computer uses. When you perform math on these repeating fractions, you end up with leftovers which carry over when you convert the computer's base-2 (binary) number into a more human-readable base-10 representation.
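You can see those leftovers directly by asking for the exact value a double actually stores. A short Python sketch (the decimal module can print the stored binary value exactly):

from decimal import Decimal

# Decimal(float) shows the exact value of the nearest double.
print(Decimal(0.1))        # 0.1000000000000000055511151231257827021181583404541015625
print(Decimal(0.2))        # 0.200000000000000011102230246251565404236316680908203125
print(Decimal(0.1 + 0.2))  # 0.3000000000000000444089209850062616169452667236328125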

Below are some examples of sending .1 + .2 to standard output in a variety of languages.

PowerShell

PowerShell uses the double type by default, but because it runs on .NET it has access to the same types as C#. Thanks to that, the Decimal type can be used, either directly by providing the type name [decimal] or via the suffix d.

More about that in the C# section.
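A minimal sketch, with results as I'd expect them (the display of plain doubles changed with .NET Core 3.0, so PowerShell 7 shows the full error while Windows PowerShell 5.1 rounds the display):

0.1 + 0.2
# 0.3 in Windows PowerShell 5.1; 0.30000000000000004 in PowerShell 7
[decimal]0.1 + [decimal]0.2
# 0.3, via an explicit cast to System.Decimal
0.1d + 0.2d
# 0.3, using decimal literals with the d suffix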

ABAP

WRITE / CONV f( '.1' + '.2' ).
and
WRITE / CONV decfloat16( '.1' + '.2' ).
0.30000000000000004
and
0.3

APL

0.1 + 0.2
and
⎕PP ← 17
0.1 + 0.2
and
0.3 = 0.1 + 0.2
and
⎕CT←0
0.3 = 0.1 + 0.2
and
⎕FR ← 1287
⎕PP ← 34
0.1 + 0.2
and
⎕FR ← 1287
⎕DCT ← 0
0.3 = 0.1 + 0.2
0.3
and
0.30000000000000004
and
1
and
0
and
0.3
and
1

APL has a default printing precision of 10 significant digits. Setting ⎕PP to 17 shows the error; however, 0.3 = 0.1 + 0.2 is still true (1) because there is a default comparison tolerance of about 1e-14. Setting ⎕CT to 0 exposes the inequality. Dyalog APL also supports 128-bit decimal numbers (activated by setting the float representation, ⎕FR, to 1287, i.e. 128-bit decimal), where the equation holds true even with the decimal comparison tolerance (⎕DCT) set to zero. Try it online! Multi-precision floats, unlimited-precision rationals, and ball arithmetic are available in NARS2000.

Ada

with Ada.Text_IO; use Ada.Text_IO;
procedure Sum is
  A : Float := 0.1;
  B : Float := 0.2;
  C : Float := A + B;
begin
  Put_Line(Float'Image(C));
  Put_Line(Float'Image(0.1 + 0.2));
end Sum;
3.00000E-01  
3.00000E-01

AutoHotkey

MsgBox, % 0.1 + 0.2
0.3

AutoIt

ConsoleWrite(0.1 + 0.2)
0.3

C

#include <stdio.h>

int main(int argc, char** argv) {
  printf("%.17f\n", .1 + .2);
  return 0;
}
0.30000000000000004

C#

Console.WriteLine("{0:R}", .1 + .2);
and
Console.WriteLine("{0:R}", .1f + .2f);
and
Console.WriteLine("{0:R}", .1m + .2m);
0.30000000000000004
and
0.3
and
0.3

C# has support for 128-bit decimal numbers, with 28-29 significant digits of precision. Their range, however, is smaller than that of both the single and double precision floating point types. Decimal literals are denoted with the m suffix.
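For a concrete sense of that range trade-off, the documented .NET type limits can be printed directly:

Console.WriteLine(decimal.MaxValue); // 79228162514264337593543950335 (about 7.9e28)
Console.WriteLine(double.MaxValue);  // 1.7976931348623157E+308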

C++

#include <iomanip>
#include <iostream>

int main() {
  std::cout << std::setprecision(17) << 0.1 + 0.2;
}
0.30000000000000004

Clojure

(+ 0.1 0.2)
0.30000000000000004

Clojure supports arbitrary precision and ratios. (+ 0.1M 0.2M) returns 0.3M, while (+ 1/10 2/10) returns 3/10.

ColdFusion

<cfset foo = .1 + .2>
<cfoutput>#foo#</cfoutput>
0.3

Common Lisp

(+ .1 .2)
and
(+ 1/10 2/10)
and
(+ 0.1d0 0.2d0)
and
(- 1.2 1.0)
0.3
and
3/10
and
0.30000000000000004d0
and
0.20000005

CL's spec doesn't actually require radix-2 floats (let alone specifically 32-bit singles and 64-bit doubles), but the high-performance implementations all seem to use IEEE floats with the usual sizes. This was tested on SBCL and ECL in particular.

Crystal

puts 0.1 + 0.2
and
puts 0.1_f32 + 0.2_f32
0.30000000000000004
and
0.3

D

import std.stdio;

void main(string[] args) {
  writefln("%.17f", .1+.2);
  writefln("%.17f", .1f+.2f);
  writefln("%.17f", .1L+.2L);
}
0.29999999999999999  
0.30000001192092896  
0.30000000000000000

Dart

print(.1 + .2);
0.30000000000000004

Delphi XE5

writeln(0.1 + 0.2);
0.3

Elixir

IO.puts(0.1 + 0.2)
0.30000000000000004

Elm

0.1 + 0.2
0.30000000000000004

Elvish

+ .1 .2
0.30000000000000004

Elvish uses Go's float64 (double precision) for numerical operations.

Emacs Lisp

(+ .1 .2)
0.30000000000000004

Erlang

io:format("~w~n", [0.1 + 0.2]).
io:format("~f~n", [0.1 + 0.2]).
io:format("~e~n", [0.1 + 0.2]).
io_lib:format("~.1f~n", [0.1 + 0.2]).
io_lib:format("~.2f~n", [0.1 + 0.2]).
0.30000000000000004
0.300000
3.00000e-1
"0.3\n"
"0.30\n"

FORTRAN

program FLOATMATHTEST
  real(kind=4) :: x4, y4
  real(kind=8) :: x8, y8
  real(kind=16) :: x16, y16
  ! REAL literals are single precision, use _8 or _16
  ! if the literal should be wider.
  x4 = .1; x8 = .1_8; x16 = .1_16
  y4 = .2; y8 = .2_8; y16 = .2_16
  write (*,*) x4 + y4, x8 + y8, x16 + y16
end
0.300000012  
0.30000000000000004  
0.300000000000000000000000000000000039

Fish

math .1 + .2
0.3

GHC (Haskell)

0.1 + 0.2 :: Double
and
0.1 + 0.2 :: Float
and
0.1 + 0.2 :: Rational
0.30000000000000004
and
0.3
and
3 % 10

If you need exact real arithmetic, packages like exact-real give you the correct answer.

GNU Octave

0.1 + 0.2
and
single(0.1)+single(0.2)
and
double(0.1)+double(0.2)
and
0.1+single(0.2)
and
0.1+double(0.2)
and
sprintf('%.17f',0.1+0.2)
0.3
and
0.3
and
0.3
and
0.3
and
0.3
and
0.30000000000000004

Gforth

0.1e 0.2e f+ f.
and
0.1e 0.2e f+ 0.3e f= .
and
0.3e 0.3e f= .
0.3
and
0
and
-1

In Gforth, 0 means false and -1 means true. The first example prints 0.3, but as the second shows, the sum is not actually equal to 0.3.

Go

package main
import "fmt"

func main() {
  fmt.Println(.1 + .2)
  var a float64 = .1
  var b float64 = .2
  fmt.Println(a + b)
  fmt.Printf("%.54f\n", .1 + .2)
}
0.3  
0.30000000000000004  
0.299999999999999988897769753748434595763683319091796875

Go's untyped numeric constants have arbitrary precision; rounding to float64 happens only when a constant is used in a typed context.
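A short sketch of what that means: constant expressions are evaluated exactly at compile time and only rounded once they take on a concrete type.

package main

import "fmt"

func main() {
	const c = 0.1 + 0.2 // exact untyped constant arithmetic: c is exactly 3/10
	fmt.Println(c == 0.3)                         // true: compared before any rounding
	fmt.Println(float64(0.1)+float64(0.2) == 0.3) // false: each operand rounded to float64
}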

Groovy

println 0.1 + 0.2
0.3

Literal decimal values in Groovy are instances of java.math.BigDecimal.

Guile

(+ 0.1 0.2)
and
(+ 1/10 2/10)
0.30000000000000004
and
3/10

Hugs (Haskell)

0.1 + 0.2
0.3

Io

(0.1 + 0.2) print
0.3

Java

System.out.println(.1 + .2);
and
System.out.println(.1F + .2F);
0.30000000000000004
and
0.3

Java has built-in support for arbitrary-precision numbers using the BigDecimal class.
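A minimal sketch (runnable as-is in jshell); constructing BigDecimal from strings keeps the decimal inputs exact rather than inheriting a double's binary rounding:

import java.math.BigDecimal;

System.out.println(new BigDecimal("0.1").add(new BigDecimal("0.2"))); // 0.3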

JavaScript

console.log(.1 + .2);
0.30000000000000004

The decimal.js library provides an arbitrary-precision Decimal type for JavaScript.
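A minimal sketch with decimal.js, assuming the package is installed (e.g. via npm):

const Decimal = require("decimal.js");

// String inputs keep the decimal values exact, so the sum is exactly 0.3.
console.log(new Decimal("0.1").plus("0.2").toString()); // "0.3"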

Julia

.1 + .2
0.30000000000000004

Julia has built-in rational numbers support and also a built-in arbitrary-precision BigFloat data type. To get the math right, 1//10 + 2//10 returns 3//10.

K (Kona)

0.1 + 0.2
0.3

Kotlin

println(.1 + .2)
and
println(.1F + .2F)
0.30000000000000004
and
0.3

See the Kotlin reference documentation.

Lua

print(.1 + .2)
and
print(string.format("%0.17f", 0.1 + 0.2))
0.3
and
0.30000000000000004

MATLAB

0.1 + 0.2
and
sprintf('%.17f', 0.1 + 0.2)
0.3
and
0.30000000000000004

MIT/GNU Scheme

(+ 0.1 0.2)
and
(+ #e0.1 #e0.2)
0.30000000000000004
and
3/10

The Scheme specification has a concept of exactness.

Mathematica

0.1 + 0.2
0.3

Mathematica has a fairly thorough internal mechanism for dealing with numerical precision and supports arbitrary precision.

By default, the inputs 0.1 and 0.2 in the example are taken to have MachinePrecision. At a common MachinePrecision of 15.9546 digits, 0.1 + 0.2 actually has a FullForm of 0.30000000000000004, but is printed as 0.3.

Mathematica supports rational numbers: 1/10 + 2/10 is 3/10 (which has a FullForm of Rational[3, 10]).

MySQL

SELECT .1 + .2;
0.3

Nim

echo(0.1 + 0.2)
0.3

OCaml

0.1 +. 0.2;;
- : float = 0.300000000000000044

Objective-C

#import <Foundation/Foundation.h>

int main(int argc, const char * argv[]) {
  @autoreleasepool {
    NSLog(@"%.17f\n", .1+.2);
  }
  return 0;
}
0.30000000000000004

PHP

echo .1 + .2;
and
var_dump(.1 + .2);
and
var_dump(bcadd(.1, .2, 1));
0.3
and
float(0.30000000000000004441)
and
string(3) "0.3"

PHP echo converts 0.30000000000000004441 to a string and shortens it to "0.3". To achieve the desired floating-point result, adjust the precision setting: ini_set("precision", 17).
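A minimal sketch of that setting:

ini_set("precision", "17");
echo 0.1 + 0.2; // 0.30000000000000004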

Perl

perl -E 'say 0.1+0.2'
and
perl -e 'printf q{%.17f}, 0.1+0.2'
and
perl -MMath::BigFloat -E 'say Math::BigFloat->new(q{0.1}) + Math::BigFloat->new(q{0.2})'
0.3
and
0.30000000000000004
and
0.3

The addition of float primitives only appears to print correctly because not all of the 17 digits are printed by default. The core module Math::BigFloat allows true arbitrary-precision floating-point operations by never using numeric primitives.

PicoLisp

[load "frac.min.l"]
[println (+ (/ 1 10) (/ 2 10))]
(/ 3 10)

You must load the file "frac.min.l".

PostgreSQL

SELECT 0.1::float + 0.2::float;
and
SELECT 0.1 + 0.2;
0.30000000000000004
and
0.3

PostgreSQL treats decimal literals as arbitrary precision numbers with fixed point. Explicit type casts are required to get floating-point numbers.

PostgreSQL 11 and earlier output 0.3 as the result of the query SELECT 0.1::float + 0.2::float;, but the result is rounded only for display, and under the hood it is still good old 0.30000000000000004.

In PostgreSQL 12, the default textual output of floats was changed from a more human-readable rounded format to the shortest-precise format. The output can be customized with the extra_float_digits configuration parameter.
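A sketch of that knob on PostgreSQL 11 and earlier (setting extra_float_digits to its maximum of 3 requests shortest-precise output):

SET extra_float_digits = 3;
SELECT 0.1::float + 0.2::float;
-- 0.30000000000000004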

Prolog (SWI-Prolog)

?- X is 0.1 + 0.2.
X = 0.30000000000000004.

Pyret

0.1 + 0.2
and
~0.1 + ~0.2
0.3
and
~0.30000000000000004

Pyret has built-in support for both rational numbers and floating points. Numbers written normally are assumed to be exact. In contrast, RoughNums are represented by floating points, and are written prefixed with a ~, indicating that they are not precise answers; the ~ is meant to visually evoke hand-waving. A user who sees a computation produce ~0.30000000000000004 knows to treat the value with skepticism. RoughNums cannot be compared directly for equality; they can only be compared up to a given tolerance.

Python 2

print .1 + .2
and
.1 + .2
and
float(decimal.Decimal(".1") + decimal.Decimal(".2"))
and
float(fractions.Fraction('0.1') + fractions.Fraction('0.2'))
0.3
and
0.30000000000000004
and
0.3
and
0.3

Python 2's print statement converts 0.30000000000000004 to a string and shortens it to "0.3". To achieve the desired floating point result, use print repr(.1 + .2). This was fixed in Python 3 (see below).

Python 3

print(.1 + .2)
and
.1 + .2
and
float(decimal.Decimal('.1') + decimal.Decimal('.2'))
and
float(fractions.Fraction('0.1') + fractions.Fraction('0.2'))
0.30000000000000004
and
0.30000000000000004
and
0.3
and
0.3

Python (both 2 and 3) supports decimal arithmetic with the decimal module, and true rational numbers with the fractions module.

R

print(.1 + .2)
and
print(.1 + .2, digits=18)
0.3
and
0.30000000000000004

Racket (PLT Scheme)

(+ .1 .2)
and
(+ 1/10 2/10)
0.30000000000000004
and
3/10

Raku

raku -e 'say 0.1 + 0.2'
and
raku -e 'say (0.1 + 0.2).fmt("%.17f")'
and
raku -e 'say 1/10 + 2/10'
and
raku -e 'say 0.1e0 + 0.2e0'
0.3
and
0.30000000000000000
and
0.3
and
0.30000000000000004

Raku uses rationals by default, so .1 is stored something like { numerator => 1, denominator => 10 }. To actually trigger the behavior, you must force the numbers to be of type Num (double in C terms) and use the base function instead of the sprintf or fmt functions (since those functions have a bug that limits the precision of the output).

Regina REXX

say .1+.2
0.3

Ruby

puts 0.1 + 0.2
and
puts 1/10r + 2/10r
0.30000000000000004
and
3/10

Ruby 2.1 and newer support rational literals directly (the r suffix); for older versions, use Rational. Ruby also has a library specifically for decimals: BigDecimal.
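A minimal BigDecimal sketch; string inputs keep the decimals exact, and the default display is scientific notation:

require "bigdecimal"

puts BigDecimal("0.1") + BigDecimal("0.2") # 0.3e0 (0.3E0 on older Rubies)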

Rust

extern crate num;
use num::rational::Ratio;

fn main() {
  println!("{}", 0.1 + 0.2);
  println!("{}", 0.1_f32 + 0.2_f32);
  println!("1/10 + 2/10 = {}", Ratio::new(1, 10) + Ratio::new(2, 10));
}
0.30000000000000004
0.3
1/10 + 2/10 = 3/10

Rust has rational number support from the num crate.

SageMath

.1 + .2
and
RDF(.1) + RDF(.2)
and
RBF('.1') + RBF('.2')
and
QQ('1/10') + QQ('2/10')
0.3
and
0.30000000000000004
and
["0.300000000000000 +/- 1.64e-16"]
and
3/10

SageMath supports various fields for arithmetic: Arbitrary Precision Real Numbers, RealDoubleField, Ball Arithmetic, Rational Numbers, etc.

Scala

scala -e 'println(0.1 + 0.2)'
and
scala -e 'println(0.1F + 0.2F)'
and
scala -e 'println(BigDecimal("0.1") + BigDecimal("0.2"))'
0.30000000000000004
and
0.3
and
0.3

Smalltalk

(1/10) + (2/10).
and
0.1 + 0.2.
and
0.1s17 + 0.2s17.
(3/10)
and
0.30000000000000004
and
0.30000000000000000s17

Smalltalk uses fractions by default in most operations; in fact, standard division results in fractions, not floating-point numbers. Squeak and similar Smalltalks provide "scaled decimals" that allow fixed-point real numbers (the s suffix indicating the number of decimal places).

Swift

0.1 + 0.2
and
Decimal(0.1) + Decimal(0.2)
0.30000000000000004
and
0.3

Swift supports decimal arithmetic with the Foundation module.

TCL

puts [expr .1 + .2]
0.30000000000000004

Turbo Pascal 7.0

writeln(0.1 + 0.2);
0.3

Vala

static int main(string[] args) {
  stdout.printf("%.17f\n", 0.1 + 0.2);
  return 0;
}
0.30000000000000004

Visual Basic 6

a# = 0.1 + 0.2: b# = 0.3
Debug.Print Format(a - b, "0." & String(16, "0"))
Debug.Print a = b
0.0000000000000001  
False

Appending the identifier type character # to any identifier forces it to Double.

WebAssembly (WAST)

(func $add_f32 (result f32)
  f32.const 0.1
  f32.const 0.2
  f32.add)
(export "add_f32" (func $add_f32))
and
(func $add_f64 (result f64)
  f64.const 0.1
  f64.const 0.2
  f64.add)
(export "add_f64" (func $add_f64))
0.30000001192092896
and
0.30000000000000004

awk

awk 'BEGIN { print 0.1 + 0.2 }'
0.3

bc

0.1 + 0.2
0.3

dc

0.1 0.2 + p
0.3

ivy

0.1 + 0.2
and
0.1 + sqrt(0.04)
3/10
and
0.3

Ivy is an interpreter for an APL-like language. It uses exact rational arithmetic so it can handle arbitrary precision. When ivy evaluates an irrational function, the result is stored in a high-precision floating-point number (default 256 bits of mantissa).

zsh

echo "$((.1 + .2))"
0.30000000000000004