Your language isn't broken, it's doing floating point math.
Computers can only natively store integers,
so they need some way of representing decimal numbers.
This representation comes with some degree of inaccuracy.
That's why, more often than not, 0.1 + 0.2 != 0.3.
It's actually pretty simple. A base-10 system (like ours) can cleanly express only fractions whose denominators contain prime factors of the base. The prime factors of 10 are 2 and 5, so 1/2, 1/4, 1/5, 1/8, and 1/10 can all be expressed cleanly. In contrast, 1/3, 1/6, and 1/7 are all repeating decimals because their denominators use a prime factor of 3 or 7.

In binary (base 2), the only prime factor is 2, so only fractions whose denominators contain no prime factor other than 2 can be expressed cleanly. In binary, 1/2, 1/4, and 1/8 would all be expressed cleanly, while 1/5 and 1/10 would be repeating fractions. So 0.1 and 0.2 (1/10 and 1/5), while clean decimals in a base-10 system, are repeating fractions in the base-2 system the computer is operating in.

When you do math on these repeating fractions, you end up with leftovers, which carry over when you convert the computer's base-2 (binary) number into a more human-readable base-10 number.
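You can see this directly in Python: converting a float to decimal.Decimal reveals the exact base-2 value the computer actually stored, leftovers and all.

```python
from decimal import Decimal

# 0.1 and 0.2 cannot be stored exactly in binary; Decimal(float)
# shows the exact value each one rounds to.
print(Decimal(0.1))  # 0.1000000000000000055511151231257827021181583404541015625
print(Decimal(0.2))  # 0.200000000000000011102230246251565404236316680908203125

# The leftovers add up, so the sum is not the float closest to 0.3:
print(0.1 + 0.2 == 0.3)  # False
print(repr(0.1 + 0.2))   # 0.30000000000000004
```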
Below are some examples of sending .1 + .2
to standard output in a variety of languages.
Read more: Wikipedia · IEEE 754 · Stack Overflow
Language  Code  Result 

C 

0.30000000000000004 
C++ 

0.30000000000000004 
PHP  echo .1 + .2; 
0.3 
PHP converts 0.30000000000000004 to a string and shortens it to "0.3". To achieve the desired floating point result, adjust the precision ini setting: ini_set("precision", 17).  
MySQL  SELECT .1 + .2; 
0.3 
Postgres  SELECT 0.1::float + 0.2::float; 
0.3 
Delphi XE5  writeln(0.1 + 0.2); 
3.00000000000000E-0001 
Erlang  io:format("~w~n", [0.1 + 0.2]). 
0.30000000000000004 
Elixir  IO.puts(0.1 + 0.2) 
0.30000000000000004 
Ruby 
puts 0.1 + 0.2 and puts 1/10r + 2/10r

0.30000000000000004 and 3/10 
Ruby 2.1 and newer support rational numbers in syntax directly. For older versions, use Rational.
Ruby also has a library specifically for decimals: BigDecimal. 

Python 2 
print(.1 + .2) and float(decimal.Decimal(".1") + decimal.Decimal(".2")) and .1 + .2

0.3 and 0.3 and 0.30000000000000004 
Python 2's "print" statement converts 0.30000000000000004 to a string and shortens it to "0.3". To achieve the desired floating point result, use print(repr(.1 + .2)). This was fixed in Python 3 (see below).  
Python 3 
print(.1 + .2) and .1 + .2

0.30000000000000004 and 0.30000000000000004 
Lua  print(.1 + .2) 
0.3 and 0.30000000000000004 
JavaScript  document.writeln(.1 + .2); 
0.30000000000000004 
Java 
System.out.println(.1 + .2); and System.out.println(.1F + .2F);

0.30000000000000004 and 0.3 
Julia  .1 + .2 
0.30000000000000004 
Julia has built-in rational number support and also a built-in arbitrary-precision BigFloat data type. To get the math right, 1//10 + 2//10 returns 3//10.


Clojure  (+ 0.1 0.2) 
0.30000000000000004 
Clojure supports arbitrary precision and ratios. (+ 0.1M 0.2M) returns 0.3M, while (+ 1/10 2/10) returns 3/10.


C#  Console.WriteLine("{0:R}", .1 + .2); 
0.30000000000000004 
GHC (Haskell) 
0.1 + 0.2

0.30000000000000004 
Haskell supports rational numbers. To get the math right, (1 % 10) + (2 % 10) returns 3 % 10.


Hugs (Haskell)  0.1 + 0.2 
0.3 
bc  0.1 + 0.2 
0.3 
Nim  echo(0.1 + 0.2) 
0.3 
Gforth  0.1e 0.2e f+ f. 
0.3 
dc  0.1 0.2 + p 
.3 
Racket (PLT Scheme) 
(+ .1 .2) and (+ 1/10 2/10)

0.30000000000000004 and 3/10 
Rust 

0.30000000000000004 and 3/10 
Rust has rational number support from the num crate.  
Emacs Lisp  (+ .1 .2) 
0.30000000000000004 
Turbo Pascal 7.0  writeln(0.1 + 0.2); 
3.0000000000E-01 
Common Lisp 
* (+ .1 .2) and * (+ 1/10 2/10)

0.3 and 3/10 
Go 

0.3 and 0.30000000000000004 and 0.299999999999999988897769753748434595763683319091796875 
Go numeric constants have arbitrary precision.  
Objective-C  0.1 + 0.2; 
0.300000012 
OCaml 
0.1 +. 0.2;;
- : float = 0.300000000000000044 
PowerShell  PS C:\>0.1 + 0.2 
0.3 
Prolog (SWI-Prolog)  ?- X is 0.1 + 0.2. 
X = 0.30000000000000004. 
Perl 5 
perl -E 'say 0.1+0.2' and perl -e 'printf q{%.17f}, 0.1+0.2'

0.3 and 0.30000000000000004 
Perl 6 
perl6 -e 'say 0.1+0.2' and perl6 -e 'say sprintf(q{%.17f}, 0.1+0.2)' and perl6 -e 'say 1/10+2/10'

0.3 and 0.30000000000000000 and 0.3 
Perl 6, unlike Perl 5, uses rationals by default, so .1 is stored something like { numerator => 1, denominator => 10 }.  
R  print(.1+.2) 
0.3 and 0.300000000000000044 
Scala 
scala -e 'println(0.1 + 0.2)' and scala -e 'println(0.1F + 0.2F)' and scala -e 'println(BigDecimal("0.1") + BigDecimal("0.2"))'

0.30000000000000004 and 0.3 and 0.3 
Smalltalk  0.1 + 0.2. 
0.30000000000000004 
Swift 
0.1 + 0.2 and NSString(format: "%.17f", 0.1 + 0.2)

0.3 and 0.30000000000000004 
D 

0.29999999999999999 and 0.30000001192092896 and 0.30000000000000000 
ABAP 
WRITE / CONV f( '.1' + '.2' ). and WRITE / CONV decfloat16( '.1' + '.2' ).

3.0000000000000004E-01 and 0.3 
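Many of the workarounds in the table above boil down to the same idea: use a decimal or rational type instead of binary floats. As a minimal sketch of that pattern, Python's standard library provides both:

```python
from decimal import Decimal
from fractions import Fraction

# Exact decimal arithmetic (construct from strings, not floats,
# so no binary rounding ever happens):
print(Decimal("0.1") + Decimal("0.2"))    # 0.3

# Exact rational arithmetic, analogous to Ruby's 1/10r,
# Julia's 1//10, or Clojure's 1/10:
print(Fraction(1, 10) + Fraction(2, 10))  # 3/10
```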
I am Erik Wiffin. You can contact me at erik.wiffin.com or erik.wiffin@gmail.com.
This project is on GitHub. If you think this page could be improved, send me a pull request.