How do computers deal with decimal points? This question was asked by a student during a recent class on binary calculations.

Computers by and large use a "floating point" system, which represents a number as a string of digits multiplied by a given base raised to a power; mathematically,

*x* = *f* × *b*^*e*, where *f* is a real number and *e* is an integer.

Using this system, 110.5 in decimal becomes 1.105 × 10^2. If the base is decided in advance and we express the real-number part (called the "mantissa") as less than 1, then we can represent the number as a string of digits with the last digit being the exponent: 110.5 becomes 0.1105 × 10^3, stored as 11053. Of course, computers operate in binary, but the logic of the notation is the same. This type of "floating point" system (and its many variations) facilitates the very fast calculations performed by modern computers. Fixed-point systems, which operate with a predetermined position of the decimal or binary point, are far less common. Floating point does present problems with rounding errors and with representing irrational numbers (which have to be approximated by finite strings of digits), but it is still a powerful method of storing and handling large sets of numbers.
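As a rough illustration (a sketch of my own, not anything from the original post), Python exposes exactly this decomposition for its binary floats: `math.frexp` splits a number into a mantissa with magnitude less than 1 and a base-2 exponent, matching the "mantissa < 1" convention above, and the famous `0.1 + 0.2` result shows the rounding errors mentioned:

```python
import math

# Decompose a float into x = f * 2**e, with 0.5 <= |f| < 1.
f, e = math.frexp(110.5)
print(f, e)        # 0.86328125 7  (0.86328125 * 2**7 == 110.5)
print(f * 2**e)    # 110.5, recovered exactly

# Rounding error in action: 0.1 and 0.2 have no exact binary
# representation, so their sum is not exactly 0.3.
print(0.1 + 0.2)   # 0.30000000000000004
```

The exponent here is 7 rather than 2 because the machine works in base 2, not base 10, but the scheme is the same: a fraction below 1 scaled by a power of the base.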

This isn't something I've ever looked into.

How would taking two different numbers (say, 110.5 and 1208.3) to 1.105x10^2 and 1.2083x10^3, and then to 11053 (shouldn't it be 11052?) and 120833 respectively, enable faster calculations?

I'd imagine having them with a different power of 10 as their base would make it harder?

(again, not something I've looked into. I have a reasonably good idea of how the internals work, but not of the maths side)

-Chris H