Why I'm speaking decimal

Purists may object to the way that I use ordinary decimal arithmetic when talking about the design of an arithmetic chip that evidently (being a chip) works in binary.  The reason is that I have been brought up decimal, I think in decimal, and I do sums in decimal, and so have all my human readers.  All the techniques that I have described work just as well in binary as in decimal, so there is no point in making them harder to understand by launching into an unfamiliar and impractical notation.

The only natural way to write numbers: base 1

If you want to be really pedantic, there is only one completely natural way to write numbers, and that is unary (or base 1).  In unary there is only one possible digit, which you could use any symbol you like for, but which I'll write as a dot.  So here are a few numbers written in unary:

Zero
One       .
Two       ..
Three     ...
Four      ....
Five      .....
Ten       ..........
Twelve    ............
One hundred and twenty-three    (a row of 123 dots)
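
If you wanted a machine to write numbers this way, it would take almost no code at all.  Here is a tiny sketch in Python (the function name is my own, purely for illustration):

    def unary(n):
        # In unary, a number is just that many dots in a row.
        return "." * n

    print(unary(5))     # .....
    print(unary(12))    # ............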

Unary is simple and fundamental.  In cartoons, prisoners always use it when chalking up how many days they have been in their cell.  Roman numerals use it at the beginning: I, II, III, IIII.

I'm not seriously proposing using unary for anything, but I want to point out that snobbery about which number system is more fundamental, binary or decimal, is pointless because neither of them is.

Counting on your fingers: base 10

The Romans got bored after IIII: they didn't represent five as IIIII but as V.  This was probably shorthand for IIII with a line struck through it, which is also what the cartoon prisoner writes when he gets to five.  The basic principle is that once the things you are counting get too many, you group them into clumps and count the clumps instead.  A prisoner seeing IIII IIII III on the wall (each completed group of four crossed through by a fifth stroke) will know that he's been inside for two fives and a three, or thirteen days.

For historical and biological reasons, we like to group things in tens.  So here we go.  Starting with:

One hundred and twenty-three

we make as many clumps of ten as we can. We can make twelve of them, and there are three dots left over:

Twelve         Three
............   ...

Twelve is still rather a big number, so we make as many clumps as we can out of it: one, actually.  There are two dots left over:

One     Two     Three
.       ..      ...

The only remaining thing to do is to give separate symbols to these numbers, quicker to write than dots and less confusing when written next to each other, and there we are:

One     Two     Three
1       2       3
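
For anyone who prefers code to dots, here is the same clumping process sketched in Python (the function name is mine; this is only an illustration of the idea):

    def clumps_of_ten(n):
        # Keep making clumps of ten and noting what is left over;
        # the leftovers, read in reverse, are the decimal digits.
        digits = []
        while n > 0:
            digits.append(n % 10)   # dots left over at this step
            n = n // 10             # the clumps, to be clumped again
        return list(reversed(digits)) or [0]

    print(clumps_of_ten(123))   # [1, 2, 3]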

The reason why people sometimes find this difficult to understand is that it's too simple: our language has also evolved using tens (although not entirely: think of the English "threescore years and ten" or the French "quatre-vingts"), so we never really pay attention to what lies behind the way we write numbers.

Decimal notation fits our brains quite well.  It requires us to invent ten distinct symbols and to be able to tell them apart easily.  It requires us to memorise a table with a hundred entries for addition and another table with a hundred entries for multiplication, which is something that most of us can manage.  And it is compact: we can represent numbers up to a million with a sequence of just six digits.

On and off: base 2

When you set about doing things digitally, the rule is to make them as simple as you possibly can, because then you can build a circuit to do these simple things very fast.  For this purpose, the fewer symbols you have, the better.  If we stuck to ten possible symbols, every element in a chip would have to distinguish reliably between them (just try to identify ten distinct brightnesses of light and you'll see that this isn't trivial) and arithmetic would be horrible, because every adder would have to have its own hundred-entry table and we would soon run out of space on the chip.

This is why digital devices work with the smallest possible number of states, which is two.  Thus the numbers that they manipulate are expressed in binary, or base 2.

We can convert unary to base 2 in just the same way that we converted it to base 10.  I'll show just a few of the steps:

One hundred and twenty-three

Sixty-one                           One
                                    .

Thirty                        One   One
                              .     .

Fifteen                 Zero  One   One
                              .     .

One   One   One   One   Zero  One   One
.     .     .     .           .     .

1     1     1     1     0     1     1
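
It is exactly the same clumping sketch as before, only with clumps of two instead of clumps of ten (again, the function name is just mine):

    def clumps_of_two(n):
        # Keep making clumps of two and noting what is left over;
        # the leftovers, read in reverse, are the binary digits.
        bits = []
        while n > 0:
            bits.append(n % 2)   # the dot (or no dot) left over
            n = n // 2           # the clumps, to be clumped again
        return list(reversed(bits)) or [0]

    print(clumps_of_two(123))   # [1, 1, 1, 1, 0, 1, 1]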

We call the symbols ones and zeroes but they could be anything.  On a chip they could be voltages above or below a certain level, or the presence or absence of a magnetic field; on a CD they could be the presence or absence of an indentation in the plastic.

Binary is very good for calculations: both addition and multiplication of single digits are so simple that they don't even need tables.
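
To show how little there is to remember, here is the whole of single-digit binary arithmetic written out in Python (compare this with two hundred-entry tables in decimal):

    # Addition of two bits: a sum bit and a carry bit, four cases in all.
    for a in (0, 1):
        for b in (0, 1):
            print(a, "+", b, "=", (a + b) % 2, "carry", (a + b) // 2)

    # Multiplication of two bits: no table needed at all.
    for a in (0, 1):
        for b in (0, 1):
            print(a, "x", b, "=", a * b)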

Moreover, multiplying any binary number by a single digit is very easy: either the digit is 0, in which case the result is 0, or the digit is 1, in which case the result is the same as the original number.  This is why, when describing long multiplication, I kept quiet about the work involved in making the single-digit multiples of the number being multiplied: in binary, this is no work at all.
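
A sketch of binary long multiplication in Python makes the point: the only single-digit multiples that ever turn up are nothing and the number itself, so the whole thing is shifting and adding (this is an illustration of the idea, not a description of the chip's circuitry):

    def long_multiply(x, y):
        # Binary long multiplication by shift-and-add: each bit of y
        # contributes either a shifted copy of x or nothing at all.
        result = 0
        while y > 0:
            if y % 2 == 1:    # lowest bit of y is 1: add the shifted x in
                result += x
            x = x * 2         # shift x one place to the left
            y = y // 2        # move on to the next bit of y
        return result

    print(long_multiply(123, 45))   # 5535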

Another good thing is that when you are doing long division in binary, you don't have to choose which multiple of the divisor to subtract, because the only possible multiples are 0 and 1.
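
In the same spirit, here is a sketch of binary long division: at each position the only question is whether the divisor goes or not, never how many times it goes (again, an illustration only, with names of my own choosing):

    def long_divide(n, d):
        # Binary long division, assuming d > 0: at each position the
        # shifted divisor either goes once (quotient bit 1) or not at all.
        shift = 1
        while d * shift * 2 <= n:   # line the divisor up under the dividend
            shift = shift * 2
        quotient = 0
        while shift >= 1:
            quotient = quotient * 2
            if d * shift <= n:
                n = n - d * shift   # it goes: subtract and record a 1
                quotient = quotient + 1
            shift = shift // 2
        return quotient, n          # quotient and remainder

    print(long_divide(123, 10))     # (12, 3)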

Binary has one drawback: it is ridiculously cumbersome.  To represent numbers up to a million, you would need twenty binary digits ("bits") whereas with decimal digits you only needed six.  This is why people don't use binary.

A simple phrasebook

In terms of expressive power, a decimal digit is worth 3.3 bits (or a bit is worth 0.301 decimal digits).  For example, a 1024-bit number is about 308 decimal digits long.  In my descriptions, I've said "300 digits" because that's easier to pronounce.
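
The 3.3 and the 0.301 are just logarithms, as a couple of lines of Python will confirm:

    import math

    print(math.log2(10))            # about 3.32 bits per decimal digit
    print(math.log10(2))            # about 0.301 decimal digits per bit
    print(1024 * math.log10(2))     # about 308: digits in a 1024-bit number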

When you subtract using the nines complement of a number, the equivalent in binary is the ones complement: 0 becomes 1 and 1 becomes 0.
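
Here is a sketch of the binary version, for a fixed number of bits: subtract by adding the ones complement and then bringing the carried-out 1 back round to the bottom (the end-around carry).  The function and its parameters are my own, just to illustrate the idea:

    def ones_complement_subtract(a, b, bits=8):
        # Computes a - b, assuming a >= b and both fit in the given width.
        mask = (1 << bits) - 1
        complement = b ^ mask           # flip every bit: 0 <-> 1
        total = a + complement          # add instead of subtracting
        return (total & mask) + (total >> bits)   # end-around carry

    print(ones_complement_subtract(123, 45))   # 78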

Shifting a number to the left one place (as we did when doing modular multiplication) multiplies it by 10 when you are working in decimal but doubles it when you are working in binary.  Shifting it to the right halves it.
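
In Python the two kinds of shift look like this (on a chip, of course, a shift is hardly a calculation at all; it is just wiring):

    n = 123
    print(n * 10, n // 10)   # decimal shifts: 1230 and 12
    print(n << 1, n >> 1)    # binary shifts: 246 and 61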

In decimal arithmetic, delayed-carry integers use the eleven symbols 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 and X.  In binary arithmetic, they use the three symbols 0, 1, and 2: on an actual chip, two bits are used to represent each digit of a delayed-carry integer: 00, 01, and 10.
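
To illustrate the binary version, here is a small Python sketch of adding an ordinary binary number into a delayed-carry number: every digit stays at 0, 1 or 2, and a carry only ever moves one place before being absorbed, so no carry has to ripple along the whole number.  (This is my own illustration of the representation, not a description of the chip's actual circuitry.)

    def delayed_carry_add(acc, b):
        # acc: delayed-carry digits 0, 1 or 2, least significant first.
        # b:   ordinary bits 0 or 1, of the same length.
        # Each position splits its total into a sum bit and a carry bit;
        # the carry moves exactly one place left and stops there.
        # (Any carry out of the very top is ignored in this sketch.)
        sums = [(d + bit) % 2 for d, bit in zip(acc, b)]
        carries = [(d + bit) // 2 for d, bit in zip(acc, b)]
        return [s + c for s, c in zip(sums, [0] + carries[:-1])]

    def value(digits):
        # The ordinary value of a delayed-carry number.
        return sum(d * 2 ** i for i, d in enumerate(digits))

    a = [1, 0, 1, 1, 0, 0, 0, 0]   # thirteen, held as delayed-carry digits
    b = [1, 1, 0, 1, 0, 0, 0, 0]   # eleven, as ordinary bits
    a = delayed_carry_add(a, b)
    print(a, value(a))             # digits all 0, 1 or 2; value is 24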