# Floating Point Math

Your language isn’t broken; it’s doing floating point math. Computers can only natively store integers, so they need some way of representing decimal numbers. This representation is not perfectly accurate. This is why, more often than not, `0.1 + 0.2 != 0.3`.

## Why does this happen?

It’s actually rather interesting. A base-10 system (like ours) can cleanly express only those fractions whose denominators use the prime factors of the base. The prime factors of 10 are 2 and 5, so 1/2, 1/4, 1/5, 1/8, and 1/10 can all be expressed cleanly because their denominators use only 2 and 5. In contrast, 1/3, 1/6, and 1/7 are all repeating decimals because their denominators use 3 or 7 as a prime factor.

In binary (or base-2), the only prime factor is 2, so only fractions whose denominators are powers of 2 can be expressed cleanly. In binary, 1/2, 1/4, and 1/8 are all expressed cleanly, while 1/5 and 1/10 are repeating. So 0.1 and 0.2 (1/10 and 1/5), while clean decimals in a base-10 system, are repeating fractions in the base-2 system the computer uses. When you perform math on these repeating fractions, you end up with leftovers, which carry over when the computer’s base-2 (binary) number is converted into a more human-readable base-10 representation.
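You can see the mismatch directly by asking for the exact value a 64-bit double actually stores (0.1 in binary is 0.000110011001100…, with the pattern 0011 repeating forever, so it has to be cut off somewhere). A minimal sketch in Python; any language using IEEE 754 doubles stores the same values. It relies on the fact that `decimal.Decimal` constructed from a float preserves that float’s exact binary value:

``````
from decimal import Decimal

# Constructing Decimal from a float performs no rounding,
# so it reveals the exact base-2 value behind each literal.
print(Decimal(0.1))        # 0.1000000000000000055511151231257827021181583404541015625
print(Decimal(0.2))        # 0.200000000000000011102230246251565404236316680908203125
print(Decimal(0.1 + 0.2))  # 0.3000000000000000444089209850062616169452667236328125
print(Decimal(0.3))        # 0.299999999999999988897769753748434595763683319091796875
``````

The sum lands on a different double than the literal `0.3`, which is why a direct equality comparison fails.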

Below are some examples of sending `.1 + .2` to standard output in a variety of languages.

### 🔗ABAP

``WRITE / CONV f( '.1' + '.2' ).``
and
``WRITE / CONV decfloat16( '.1' + '.2' ).``
``0.30000000000000004``
and
``0.3``

### 🔗APL

``0.1 + 0.2``
``0.30000000000000004``

### 🔗Ada

``````with Ada.Text_IO; use Ada.Text_IO;
procedure Sum is
A : Float := 0.1;
B : Float := 0.2;
C : Float := A + B;
begin
Put_Line(Float'Image(C));
Put_Line(Float'Image(0.1 + 0.2));
end Sum;``````
``````3.00000E-01
3.00000E-01``````

### 🔗AutoHotkey

``MsgBox, % 0.1 + 0.2``
``0.3``

### 🔗C

``````#include <stdio.h>

int main(int argc, char** argv) {
printf("%.17f\n", .1 + .2);
return 0;
}``````
``0.30000000000000004``

### 🔗C#

``Console.WriteLine("{0:R}", .1 + .2);``
and
``Console.WriteLine("{0:R}", .1f + .2f);``
and
``Console.WriteLine("{0:R}", .1m + .2m);``
``0.30000000000000004``
and
``0.3``
and
``0.3``

C# has support for 128-bit decimal numbers, with 28-29 significant digits of precision. Their range, however, is smaller than that of both the single and double precision floating point types. Decimal literals are denoted with the `m` suffix.

### 🔗C++

``````#include <iomanip>
#include <iostream>

int main() {
std::cout << std::setprecision(17) << 0.1 + 0.2;
}``````
``0.30000000000000004``

### 🔗Clojure

``(+ 0.1 0.2)``
``0.30000000000000004``

Clojure supports arbitrary precision and ratios. `(+ 0.1M 0.2M)` returns `0.3M`, while `(+ 1/10 2/10)` returns `3/10`.

### 🔗ColdFusion

``````<cfset foo = .1 + .2>
<cfoutput>#foo#</cfoutput>``````
``0.3``

### 🔗Common Lisp

``(+ .1 .2)``
and
``(+ 1/10 2/10)``
and
``(+ 0.1d0 0.2d0)``
and
``(- 1.2 1.0)``
``0.3``
and
``3/10``
and
``0.30000000000000004d0``
and
``0.20000005``

CL’s spec doesn’t actually even require radix-2 floats (let alone specifically 32-bit singles and 64-bit doubles), but the high-performance implementations all seem to use IEEE floats with the usual sizes. This was tested on SBCL and ECL in particular.

### 🔗Crystal

``puts 0.1 + 0.2``
and
``puts 0.1_f32 + 0.2_f32``
``0.30000000000000004``
and
``0.3``

### 🔗D

``````import std.stdio;

void main(string[] args) {
writefln("%.17f", .1+.2);
writefln("%.17f", .1f+.2f);
writefln("%.17f", .1L+.2L);
}``````
``````0.29999999999999999
0.30000001192092896
0.30000000000000000``````

### 🔗Dart

``print(.1 + .2);``
``0.30000000000000004``

### 🔗Delphi XE5

``writeln(0.1 + 0.2);``
``0.3``

### 🔗Elixir

``IO.puts(0.1 + 0.2)``
``0.30000000000000004``

### 🔗Elm

``0.1 + 0.2``
``0.30000000000000004``

### 🔗Elvish

``+ .1 .2``
``0.30000000000000004``

Elvish uses Go’s `float64` for numerical operations.

### 🔗Emacs Lisp

``(+ .1 .2)``
``0.30000000000000004``

### 🔗Erlang

``````io:format("~w~n", [0.1 + 0.2]).
io:format("~f~n", [0.1 + 0.2]).
io:format("~e~n", [0.1 + 0.2]).
io_lib:format("~.1f~n", [0.1 + 0.2]).
io_lib:format("~.2f~n", [0.1 + 0.2]).``````
``````0.30000000000000004
0.300000
3.00000e-1
"0.3\n"
"0.30\n"``````

### 🔗FORTRAN

``````program FLOATMATHTEST
real(kind=4) :: x4, y4
real(kind=8) :: x8, y8
real(kind=16) :: x16, y16
! REAL literals are single precision, use _8 or _16
! if the literal should be wider.
x4 = .1; x8 = .1_8; x16 = .1_16
y4 = .2; y8 = .2_8; y16 = .2_16
write (*,*) x4 + y4, x8 + y8, x16 + y16
end``````
``````0.300000012
0.30000000000000004
0.300000000000000000000000000000000039``````

### 🔗Haskell

``0.1 + 0.2 :: Double``
and
``0.1 + 0.2 :: Float``
and
``0.1 + 0.2 :: Rational``
``0.30000000000000004``
and
``0.3``
and
``3 % 10``

If you need true real-number arithmetic, packages like exact-real give you the correct answer.

### 🔗GNU Octave

``0.1 + 0.2``
and
``single(0.1)+single(0.2)``
and
``double(0.1)+double(0.2)``
and
``0.1+single(0.2)``
and
``0.1+double(0.2)``
and
``sprintf('%.17f',0.1+0.2)``
``0.3``
and
``0.3``
and
``0.3``
and
``0.3``
and
``0.3``
and
``0.30000000000000004``

### 🔗Gforth

``0.1e 0.2e f+ f.``
``0.3``

### 🔗Go

``````package main
import "fmt"

func main() {
fmt.Println(.1 + .2)
var a float64 = .1
var b float64 = .2
fmt.Println(a + b)
fmt.Printf("%.54f\n", .1 + .2)
}``````
``````0.3
0.30000000000000004
0.299999999999999988897769753748434595763683319091796875``````

Go numeric constants have arbitrary precision: the constant expression `.1 + .2` is evaluated exactly at compile time and only rounded to the nearest `float64` when it is passed to `Println`, so it prints as 0.3. Adding the `float64` variables `a` and `b` gives the familiar result.

### 🔗Groovy

``println 0.1 + 0.2``
``0.3``

Literal decimal values in Groovy are instances of java.math.BigDecimal.

``0.1 + 0.2``
``0.3``

### 🔗Io

``(0.1 + 0.2) print``
``0.3``

### 🔗Java

``System.out.println(.1 + .2);``
and
``System.out.println(.1F + .2F);``
``0.30000000000000004``
and
``0.3``

Java has built-in support for arbitrary-precision numbers using the BigDecimal class.

### 🔗JavaScript

``console.log(.1 + .2);``
``0.30000000000000004``

The decimal.js library provides an arbitrary-precision Decimal type for JavaScript.

### 🔗Julia

``.1 + .2``
``0.30000000000000004``

Julia has built-in support for rational numbers and an arbitrary-precision BigFloat type. To get the math right, use rationals: `1//10 + 2//10` returns `3//10`.

### 🔗K (Kona)

``0.1 + 0.2``
``0.3``

### 🔗Kotlin

``println(.1 + .2)``
and
``println(.1F + .2F)``
``0.30000000000000004``
and
``0.3``

### 🔗Lua

``print(.1 + .2)``
and
``print(string.format("%0.17f", 0.1 + 0.2))``
``0.3``
and
``0.30000000000000004``

### 🔗MATLAB

``0.1 + 0.2``
and
``sprintf('%.17f', 0.1 + 0.2)``
``0.3``
and
``0.30000000000000004``

### 🔗Mathematica

``0.1 + 0.2``
``0.3``

Mathematica has a fairly thorough internal mechanism for dealing with numerical precision and supports arbitrary precision.

### 🔗MySQL

``SELECT .1 + .2;``
``0.3``

### 🔗Nim

``echo(0.1 + 0.2)``
``0.3``

### 🔗OCaml

``0.1 +. 0.2;;``
``float = 0.300000000000000044``

### 🔗Objective-C

``````#import <Foundation/Foundation.h>

int main(int argc, const char * argv[]) {
@autoreleasepool {
NSLog(@"%.17f\n", .1+.2);
}
return 0;
}``````
``0.30000000000000004``

### 🔗PHP

``echo .1 + .2;``
and
``var_dump(.1 + .2);``
and
``var_dump(bcadd(.1, .2, 1));``
``0.3``
and
``float(0.30000000000000004441)``
and
``string(3) "0.3"``

PHP `echo` converts `0.30000000000000004441` to a string and shortens it to “0.3”. To achieve the desired floating-point result, adjust the precision setting: `ini_set("precision", 17)`.

### 🔗Perl

``perl -E 'say 0.1+0.2'``
and
``perl -e 'printf q{%.17f}, 0.1+0.2'``
and
``perl -MMath::BigFloat -E 'say Math::BigFloat->new(q{0.1}) + Math::BigFloat->new(q{0.2})'``
``0.3``
and
``0.30000000000000004``
and
``0.3``

The addition of float primitives only appears to print correctly because not all 17 digits are printed by default. The core Math::BigFloat module allows true arbitrary-precision floating point operations by never using numeric primitives.

### 🔗PicoLisp

``````[load "frac.min.l"]
[println (+ (/ 1 10) (/ 2 10))]``````
``(/ 3 10)``

### 🔗PostgreSQL

``SELECT 0.1::float + 0.2::float;``
``0.3``

### 🔗PowerShell

``0.1 + 0.2``
``0.3``

### 🔗Prolog (SWI-Prolog)

``?- X is 0.1 + 0.2.``
``X = 0.30000000000000004.``

### 🔗Pyret

``0.1 + 0.2``
and
``~0.1 + ~0.2``
``0.3``
and
``~0.30000000000000004``

Pyret has built-in support for both rational numbers and floating points. Numbers written normally are assumed to be exact. In contrast, RoughNums are represented by floating points, and are written prefixed with a `~`, indicating that they are not precise answers – the `~` is meant to visually evoke hand-waving. A user who sees a computation produce `~0.30000000000000004` knows to treat the value with skepticism. RoughNums cannot be compared directly for equality; they can only be compared up to a given tolerance.

### 🔗Python 2

``print(.1 + .2)``
and
``.1 + .2``
and
``float(decimal.Decimal(".1") + decimal.Decimal(".2"))``
and
``float(fractions.Fraction('0.1') + fractions.Fraction('0.2'))``
``0.3``
and
``0.30000000000000004``
and
``0.3``
and
``0.3``

Python 2’s `print` statement converts `0.30000000000000004` to a string and shortens it to “0.3”. To achieve the desired floating point result, use `print(repr(.1 + .2))`. This was fixed in Python 3 (see below).

### 🔗Python 3

``print(.1 + .2)``
and
``.1 + .2``
and
``float(decimal.Decimal('.1') + decimal.Decimal('.2'))``
and
``float(fractions.Fraction('0.1') + fractions.Fraction('0.2'))``
``0.30000000000000004``
and
``0.30000000000000004``
and
``0.3``
and
``0.3``

Python (both 2 and 3) supports decimal arithmetic with the decimal module, and true rational numbers with the fractions module.
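
For reference, here is a minimal, runnable version of the snippets above, with the imports the one-liners assume:

``````
from decimal import Decimal
from fractions import Fraction

print(0.1 + 0.2)                          # 0.30000000000000004
print(Decimal('0.1') + Decimal('0.2'))    # 0.3
print(Fraction('0.1') + Fraction('0.2'))  # 3/10
``````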

### 🔗R

``print(.1 + .2)``
and
``print(.1 + .2, digits=18)``
``0.3``
and
``0.30000000000000004``

### 🔗Racket (PLT Scheme)

``(+ .1 .2)``
and
``(+ 1/10 2/10)``
``0.30000000000000004``
and
``3/10``

### 🔗Raku

``raku -e 'say 0.1 + 0.2'``
and
``raku -e 'say (0.1 + 0.2).fmt("%.17f")'``
and
``raku -e 'say 1/10 + 2/10'``
and
``raku -e 'say 0.1e0 + 0.2e0'``
``0.3``
and
``0.3``
and
``0.3``
and
``0.30000000000000004``

Raku uses rationals by default, so `.1` is stored something like `{ numerator => 1, denominator => 10 }`. To actually trigger the behavior, you must force the numbers to be of type Num (double in C terms) and use the base function instead of the `sprintf` or `fmt` functions (since those functions have a bug that limits the precision of the output).

### 🔗Ruby

``puts 0.1 + 0.2``
and
``puts 1/10r + 2/10r``
``0.30000000000000004``
and
``3/10``

Ruby supports rational number literals directly in version 2.1 and newer; for older versions, use Rational explicitly. Ruby also has a library specifically for decimals: BigDecimal.

### 🔗Rust

``````extern crate num;
use num::rational::Ratio;

fn main() {
println!("{}", 0.1 + 0.2);
println!("1/10 + 2/10 = {}", Ratio::new(1, 10) + Ratio::new(2, 10));
}``````
``````0.30000000000000004
1/10 + 2/10 = 3/10``````

Rust has rational number support from the num crate.

### 🔗SageMath

``.1 + .2``
and
``RDF(.1) + RDF(.2)``
and
``RBF('.1') + RBF('.2')``
and
``QQ('1/10') + QQ('2/10')``
``0.3``
and
``0.30000000000000004``
and
``["0.300000000000000 +/- 1.64e-16"]``
and
``3/10``

SageMath supports various fields for arithmetic: Arbitrary Precision Real Numbers, RealDoubleField, Ball Arithmetic, Rational Numbers, etc.

### 🔗Scala

``scala -e 'println(0.1 + 0.2)'``
and
``scala -e 'println(0.1F + 0.2F)'``
and
``scala -e 'println(BigDecimal("0.1") + BigDecimal("0.2"))'``
``0.30000000000000004``
and
``0.3``
and
``0.3``

### 🔗Smalltalk

``0.1 + 0.2.``
``0.30000000000000004``

### 🔗Swift

``0.1 + 0.2``
and
``Decimal(0.1) + Decimal(0.2)``
``0.30000000000000004``
and
``0.3``

Swift supports decimal arithmetic with the Foundation module.

### 🔗TCL

``puts [expr .1 + .2]``
``0.30000000000000004``

### 🔗Turbo Pascal 7.0

``writeln(0.1 + 0.2);``
``0.3``

### 🔗Vala

``````static int main(string[] args) {
stdout.printf("%.17f\n", 0.1 + 0.2);
return 0;
}``````
``0.30000000000000004``

### 🔗Visual Basic 6

``````a# = 0.1 + 0.2: b# = 0.3
Debug.Print Format(a - b, "0." & String(16, "0"))
Debug.Print a = b``````
``````0.0000000000000001
False``````

Appending the identifier type character `#` to any identifier forces it to Double.

### 🔗WebAssembly (WAST)

``````(func $add_f32 (result f32)
f32.const 0.1
f32.const 0.2
f32.add)``````
and
``````(func $add_f64 (result f64)
f64.const 0.1
f64.const 0.2
f64.add)``````
``0.30000001192092896``
and
``0.30000000000000004``

### 🔗awk

``awk 'BEGIN { print 0.1 + 0.2 }'``
``0.3``

### 🔗bc

``0.1 + 0.2``
``0.3``

### 🔗dc

``0.1 0.2 + p``
``0.3``

### 🔗zsh

``echo "\$((.1 + .2))"``
``0.30000000000000004``