First of all, let’s see how computers store numbers. We’ll assume that our machine has a word size of 4 bits. In that case, possible combinations of the ON/OFF states in a word are:
BBBB 
BBBA 
BBAB 
BBAA 
BABB 
BABA 
BAAB 
BAAA 
ABBB 
ABBA 
ABAB 
ABAA 
AABB 
AABA 
AAAB 
AAAA 
Here, A represents ON and B represents OFF. Right now, this is just the collection of all possible states of a four-bit word. You can count that there are 16 of them. Let’s assign the numbers 0 through 15 to them.
How do we decide which state should get which number?
It’s up to you. But remember that our final goal is to use these states to represent integers. Hence, it would make circuit designers’ lives much easier if our mapping matched the binary number system. If we do that, the states (and their numeric counterparts) become:
Note: This transformation also imposes an ordering on the set of states.
0000  0 
0001  1 
0010  2 
0011  3 
0100  4 
0101  5 
0110  6 
0111  7 
1000  8 
1001  9 
1010  10 
1011  11 
1100  12 
1101  13 
1110  14 
1111  15 
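If you want to play with this mapping yourself, here is a minimal sketch (in Python, purely for illustration) that prints every 4-bit state next to the unsigned number it maps to:

```python
# Print every state of a 4-bit word alongside the unsigned number it maps to.
for n in range(16):
    print(format(n, "04b"), n)  # e.g. "0000 0" ... "1111 15"
```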
This makes the implementation of arithmetic operations in digital circuits a piece of cake. The integer addition circuit in a computer follows a method similar to the one we were taught in primary school, the one that involves carrying over.
What would happen if we tried to add 1 to the number at the higher end of our ordering (i.e. 1111)?
All the digits will become 0, and the carry of 1 out of the last position will be ignored by the adder circuit, so we get 0000 as the result. The sum overflows and wraps around our range.
Note: The overflow bit isn’t exactly ignored by the CPU. If any form of overflow or underflow occurs during an arithmetic operation, the ALU sets the corresponding bit in the status register. Compilers and languages provide facilities to check the status of that bit and reset it if needed.
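Both the wrap-around and the status bit can be modelled in a few lines. This is only a sketch in Python; the names `add4_with_flag`, `WORD_BITS`, and `MASK` are made up for illustration:

```python
WORD_BITS = 4
MASK = (1 << WORD_BITS) - 1  # 0b1111

def add4_with_flag(a, b):
    """4-bit addition: return the wrapped sum and the carry out of the top bit.

    The carry out stands in for the bit the ALU would set in the status register.
    """
    total = a + b
    return total & MASK, total >> WORD_BITS

print(add4_with_flag(0b1111, 0b0001))  # (0, 1): 1111 + 1 wraps to 0000 and sets the flag
```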
Now with that out of our way, let’s talk about signed numbers, i.e. numbers with a sign. How can we represent negative numbers in our system? These are some of the methods that could be used:
 Use the most significant bit as the sign bit, and the rest to store the magnitude (0001 will be 1 and 1001 will be -1).
In that case, our mapping (from bit states to numbers) will become:
0000  0 
0001  1 
0010  2 
0011  3 
0100  4 
0101  5 
0110  6 
0111  7 
1000  -0 
1001  -1 
1010  -2 
1011  -3 
1100  -4 
1101  -5 
1110  -6 
1111  -7 
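This interpretation can be sketched in a couple of lines (Python for illustration; `from_sign_magnitude` is a made-up name):

```python
def from_sign_magnitude(bits):
    """Interpret a 4-bit pattern as sign-magnitude: MSB is the sign, low 3 bits the magnitude."""
    sign = -1 if bits & 0b1000 else 1
    return sign * (bits & 0b0111)

print(from_sign_magnitude(0b0001))  # 1
print(from_sign_magnitude(0b1001))  # -1
```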
 This one is just horrible (You’ll know why by the end of this post)
 Use one’s complement, i.e. negate a number by flipping every bit (1110 will map to -1).
Mapping:
0000  0 
0001  1 
0010  2 
0011  3 
0100  4 
0101  5 
0110  6 
0111  7 
1000  -7 
1001  -6 
1010  -5 
1011  -4 
1100  -3 
1101  -2 
1110  -1 
1111  -0 
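Again, a short sketch of this interpretation (Python for illustration; `from_ones_complement` is a made-up name):

```python
def from_ones_complement(bits):
    """Interpret a 4-bit pattern as one's complement."""
    if bits & 0b1000:                 # sign bit set -> negative
        return -((~bits) & 0b1111)    # invert within 4 bits to recover the magnitude
    return bits

print(from_ones_complement(0b1110))  # -1
```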
This one is a bit better, but not perfect. What if we want to add two numbers in this system? We could try the circuit we used for unsigned addition; it works perfectly for unsigned numbers (except when the result is out of range, which no encoding can avoid). Wouldn’t it be great if we could just use it for signed arithmetic as well?
Note: Remember that the computer only understands the representation on the left column of the table. The one on the right is just our mapping of integers onto the bit states.
 3+2: 2 steps ahead of 3 is 5, the correct answer.
 -3+2: 2 steps ahead of -3 (1100) is -1 (1110), correct.
 -3+4: what do we get if we move 4 steps ahead of -3? 3 steps ahead of -3 take us to -0 (1111), and one more step takes us to 0 (0000). Wrong answer!
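The failing case can be reproduced by feeding one’s-complement patterns through the plain unsigned adder (a Python sketch; the helper name is made up):

```python
MASK = 0b1111

def from_ones_complement(bits):
    # Sign bit set -> negate by inverting all four bits.
    return -((~bits) & MASK) if bits & 0b1000 else bits

# -3 is 1100 in one's complement; add 4 (0100) with the plain unsigned adder:
result = (0b1100 + 0b0100) & MASK
print(format(result, "04b"), from_ones_complement(result))  # 0000 0, but -3 + 4 should be 1
```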
Why did we get the wrong answer?
The sum wrapped past the end of the ordering when we added 4 to -3, and the numeric mapping introduced by one’s complement doesn’t change linearly across the ends. Because of how the ALU does arithmetic, the two ends of this ordering are connected, and one’s complement places a second zero (-0) right at that seam, throwing the count off by one.
How do we fix this problem?
I don’t know. Why not just offset the negative numbers by 1? That might make the two zeros overlap.
Doing that would change our mapping to this:
0000  0 
0001  1 
0010  2 
0011  3 
0100  4 
0101  5 
0110  6 
0111  7 
1000  -8 
1001  -7 
1010  -6 
1011  -5 
1100  -4 
1101  -3 
1110  -2 
1111  -1 
There you have it: a mapping from bit states to signed integers that lets the arithmetic circuits for unsigned numbers be used for signed numbers as well. And how do you get this mapping? You offset the one’s complement by 1. Oh wait, that’s two’s complement. What a coincidence!
The reason this system works so nicely is that the integer mapping changes continuously and linearly as we move step by step along the ordering of bit states, except at the numeric endpoints (7 and -8).
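Under the same illustrative assumptions as before, a sketch of the two’s-complement interpretation shows the plain unsigned adder now producing correct signed results:

```python
MASK = 0b1111

def from_twos_complement(bits):
    # If the sign bit is set, the 4-bit pattern represents bits - 16.
    return bits - 16 if bits & 0b1000 else bits

# -3 is 1101 in two's complement; the unsigned adder handles -3 + 4 correctly:
print(from_twos_complement((0b1101 + 0b0100) & MASK))  # 1
```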
Corollary:

Negative Integers Don’t Exist (in the CPU)
They’re basically a compile-time construct. The negative numbers you type in your source file are converted to binary form using two’s-complement rules, and after that they’re treated just like unsigned numbers. Thanks to the elegance of two’s complement, the computer hardware doesn’t need to treat negative numbers specially.
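You can glimpse this even from a high-level language. This Python sketch masks a negative literal down to 4 bits, mimicking what the machine would actually store:

```python
# Masking -3 to 4 bits reveals the two's-complement pattern the hardware stores.
print(format(-3 & 0b1111, "04b"))  # 1101
```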
In the future I might do a similar analysis of the floating-point representation. Subscribe to receive email notifications of future posts.
See you later