Your language isn't broken; it's doing floating point math.
Computers can only natively store integers,
so they need some way of representing decimal numbers.
This representation comes with some degree of inaccuracy.
That's why, more often than not, .1 + .2 != .3.
It's actually pretty simple. A positional number system can cleanly express only those fractions whose denominators are built from the prime factors of the base. The prime factors of 10 are 2 and 5, so in base 10, 1/2, 1/4, 1/5, 1/8, and 1/10 can all be expressed cleanly, because their denominators use only the prime factors 2 and 5. In contrast, 1/3, 1/6, and 1/7 are all repeating decimals, because their denominators contain the prime factor 3 or 7, which 10 lacks.

In binary (base 2), the only prime factor is 2, so only fractions whose denominator is a power of 2 can be expressed cleanly. In binary, 1/2, 1/4, and 1/8 would all be expressed cleanly, while 1/5 or 1/10 would be repeating fractions. So 0.1 and 0.2 (1/10 and 1/5), while clean decimals in a base 10 system, are repeating fractions in the base 2 system the computer is operating in. Since the computer must round those repeating fractions to a fixed number of bits, you end up with tiny leftovers, which carry over when you do math on them and convert the computer's base 2 (binary) number into a more human-readable base 10 number.
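A quick way to see this for yourself is Python (chosen here just for illustration): the standard decimal module can reveal the exact binary value a double actually stores, and the fractions module does exact rational arithmetic with no leftovers.

```python
# 0.1 cannot be stored exactly as a binary fraction, so the nearest
# representable double is used instead.
from decimal import Decimal
from fractions import Fraction

# Decimal(float) shows the exact value the double actually holds:
# it is slightly more than 1/10.
print(Decimal(0.1))
print(Decimal(0.1) == Decimal(1) / Decimal(10))  # False

# The leftover rounding error surfaces when we add:
print(0.1 + 0.2 == 0.3)    # False
print(repr(0.1 + 0.2))     # 0.30000000000000004

# Exact rational arithmetic has no such leftovers:
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True
```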
Below are some examples of sending .1 + .2
to standard output in a variety of languages.
Read more: Wikipedia, IEEE 754, Stack Overflow, and "What Every Computer Scientist Should Know About Floating-Point Arithmetic".
Results of printing .1 + .2, by language:

ABAP: 0.30000000000000004, 0.3

Ada: 3.00000E-01

APL: 0.30000000000000004

AutoHotkey: 0.300000

awk: 0.3

bc: 0.3

C: 0.30000000000000004

Clojure: 0.30000000000000004
  Clojure supports arbitrary precision and ratios.

ColdFusion: 0.3

Common Lisp: 0.3, 3/10, 0.20000005

C++: 0.30000000000000004

Crystal: 0.30000000000000004

C#: 0.30000000000000004, 0.3
  C# has support for 128-bit decimal numbers, with 28-29 significant digits of precision. Their range, however, is smaller than that of both the single and double precision floating point types. Decimal literals are denoted with the m suffix.

D: 0.29999999999999999

Dart: 0.30000000000000004

dc: .3

Delphi XE5: 3.00000000000000E-0001

Elixir: 0.30000000000000004

Elm: 0.30000000000000004

elvish: 0.30000000000000004
  elvish uses Go's float64 for arithmetic.

Emacs Lisp: 0.30000000000000004

Erlang: 0.30000000000000004

FORTRAN: 0.300000012

Gforth: 0.3

GHC (Haskell): 0.30000000000000004
  Haskell supports rational numbers. To get the math right, use the Rational type from Data.Ratio.

Go: 0.3

Groovy: 0.3
  Literal decimal values in Groovy are instances of java.math.BigDecimal.

Hugs (Haskell): 0.3

Io: 0.3

Java: 0.30000000000000004, 0.3
  Java has built-in support for arbitrary precision numbers using the BigDecimal class.

JavaScript: 0.30000000000000004
  The decimal.js library provides an arbitrary-precision Decimal type for JavaScript.

Julia: 0.30000000000000004
  Julia has built-in rational number support and also a built-in arbitrary-precision BigFloat data type. To get the math right, use rationals: 1//10 + 2//10 equals 3//10.

K (Kona): 0.3

Lua: 0.3, 0.30000000000000004

Mathematica: 0.3
  Mathematica has a fairly thorough internal mechanism for dealing with numerical precision and supports arbitrary precision.

Matlab: 0.3, 0.30000000000000004

MySQL: 0.3

Nim: 0.3

Objective-C: 0.30000000000000004

OCaml: float = 0.300000000000000044

Perl 5: 0.3, 0.30000000000000004

Perl 6: 0.3, 0.3, 0.3, 0.30000000000000004
  Perl 6, unlike Perl 5, uses rationals by default, so .1 is stored something like { numerator => 1, denominator => 10 }. To actually trigger the floating point behavior, you must force the numbers to be of type Num (double in C terms) and use the base function instead of the sprintf or fmt functions (since those functions have a bug that limits the precision of the output).

PHP: 0.3, float(0.30000000000000004441)
  PHP's echo shortens the value to 0.3; var_dump prints the full precision.

PicoLisp: (/ 3 10)
  You must load the file "frac.min.l".

Postgres: 0.3

PowerShell: 0.3

Prolog (SWI-Prolog): X = 0.30000000000000004.

Pyret: 0.3, ~0.30000000000000004
  Pyret has built-in support for both rational numbers and floating points. Numbers written normally are assumed to be exact. In contrast, RoughNums are represented by floating points, and are written with a leading ~.

Python 2: 0.3, 0.3, 0.30000000000000004
  Python 2's print statement converts 0.30000000000000004 to a string and shortens it to "0.3". To see the actual floating point result, use print(repr(.1 + .2)). This was fixed in Python 3 (see below).

Python 3: 0.30000000000000004, 0.30000000000000004
  Python (both 2 and 3) supports decimal arithmetic with the decimal module, and true rational numbers with the fractions module.

R: 0.3, 0.30000000000000004

Racket (PLT Scheme): 0.30000000000000004, 3/10

Ruby: 0.30000000000000004, 3/10
  Ruby supports rational numbers directly in syntax since version 2.1. For older versions, use Rational.

Rust: 0.30000000000000004, 1/10 + 2/10 = 3/10
  Rust has rational number support via the num crate.

SageMath: 0.3, 0.30000000000000004, [0.300000000000000 +/- 1.64e-16], 3/10
  SageMath supports various fields for arithmetic: arbitrary precision real numbers, RealDoubleField, ball arithmetic, rational numbers, etc.

Scala: 0.30000000000000004, 0.3, 0.3

Smalltalk: 0.30000000000000004

Swift: 0.3, 0.30000000000000004

TCL: 0.30000000000000004

Turbo Pascal 7.0: 3.0000000000E-01

Vala: 0.30000000000000004

Visual Basic 6: 0.0000000000000001
  Appending the identifier type character # to a literal forces it to be a Double.

zsh: 0.30000000000000004
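Many of the 0.3 results above come from output rounding rather than exact arithmetic: the stored double is the same everywhere, and only the number of digits printed differs. A small Python sketch of that effect:

```python
# The same IEEE 754 double, printed at different precisions. Languages
# that show "0.3" above are typically rounding at print time, not
# actually storing 0.3.
x = 0.1 + 0.2

print("%.6f" % x)    # 0.300000 (a short default precision hides the error)
print("%.17g" % x)   # 0.30000000000000004 (enough digits to round-trip)
print("%.20f" % x)   # even more digits than the double actually carries
```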
I am Erik Wiffin. You can contact me at: erik.wiffin.com or erik.wiffin@gmail.com.
This project is on GitHub. If you think this page could be improved, send me a pull request.