What is the absolute minimum that a programmer needs to know about binary numbers and arithmetic?


Although I know the basic concepts of binary representation, I have never really written any code that uses binary arithmetic and operations.

I want to know

  • What are the basic concepts any programmer should know about binary numbers and arithmetic, and

  • In what "practical" ways can binary operations be used in programming? I have seen some "cool" uses of shift operators, XOR, etc., but are there typical problems where using binary operations is the obvious choice?

Please give pointers to some good reference material.


If you are developing lower-level code, it is critical that you understand the binary representation of various types. You will find this particularly useful if you are developing embedded applications or if you are dealing with low-level transmission or storage of data.
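
To make that concrete, here is a minimal C sketch of a typical transmission task (the function names are my own, not a standard API): packing a 16-bit value into a byte buffer in big-endian "network" order and reading it back, independent of the host machine's endianness.

    #include <stdint.h>
    #include <stdio.h>

    /* Pack a 16-bit value into a buffer in big-endian (network) byte order. */
    void put_u16_be(uint8_t *buf, uint16_t value)
    {
        buf[0] = (uint8_t)(value >> 8);    /* high byte first */
        buf[1] = (uint8_t)(value & 0xFF);  /* then low byte   */
    }

    /* Read it back; the result is the same on any host, big- or little-endian. */
    uint16_t get_u16_be(const uint8_t *buf)
    {
        return (uint16_t)((buf[0] << 8) | buf[1]);
    }

    int main(void)
    {
        uint8_t buf[2];
        put_u16_be(buf, 0xBEEF);
        printf("bytes: %02X %02X, value: %04X\n", buf[0], buf[1], get_u16_be(buf));
        return 0;
    }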

That being said, I also believe that understanding how things work at a low level is useful even if you are working at much higher levels of abstraction. I have found, for example, that my ability to develop efficient code is improved by understanding how things are represented and manipulated at a low level. I have also found such understanding useful in working with debuggers.
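
One small example of the kind of efficiency win this buys (my illustration, not from any particular codebase): when the divisor is a power of two, a modulo reduces to a mask, and the expression x & (x - 1), which clears the lowest set bit, gives a cheap power-of-two test.

    #include <stdio.h>

    /* A nonzero value is a power of two iff exactly one bit is set;
       clearing the lowest set bit (x & (x - 1)) then leaves zero. */
    int is_power_of_two(unsigned x)
    {
        return x != 0 && (x & (x - 1)) == 0;
    }

    int main(void)
    {
        unsigned x = 37;
        /* For a power-of-two modulus, masking is equivalent to %. */
        printf("%u %% 8 = %u, %u & 7 = %u\n", x, x % 8, x, x & 7);
        printf("64 power of two? %d; 48? %d\n",
               is_power_of_two(64), is_power_of_two(48));
        return 0;
    }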

Here is a short list of binary representation topics for study:

  • numbering systems (binary, hex, octal, decimal, ...)
  • binary data organization (bits, nibbles, bytes, words, ...)
  • binary arithmetic
  • other binary operations (AND, OR, XOR, NOT, SHL, SHR, ROL, ROR, ...); see the first sketch after this list
  • type representation (boolean, integer, float, struct, ...)
  • bit fields and packed data; see the second sketch after this list
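
To make the operators in the list concrete, here is a short C sketch (the flag names are invented for illustration) of the standard idioms: OR sets a bit, AND with NOT clears one, XOR toggles one, AND tests one, shifts build the masks, and ROL is composed from two shifts because C has no rotate operator.

    #include <stdio.h>

    /* Illustrative flag bits, each built with a shift so it
       occupies exactly one bit position. */
    #define FLAG_READ   (1u << 0)   /* binary 001 */
    #define FLAG_WRITE  (1u << 1)   /* binary 010 */
    #define FLAG_EXEC   (1u << 2)   /* binary 100 */

    /* Rotate a byte left by n: two shifts ORed together. */
    unsigned char rotl8(unsigned char x, unsigned n)
    {
        n &= 7;  /* keep the shift count in range 0..7 */
        return (unsigned char)((x << n) | (x >> (8 - n)));
    }

    int main(void)
    {
        unsigned flags = 0;

        flags |= FLAG_READ | FLAG_WRITE;   /* OR sets bits            */
        flags &= ~FLAG_WRITE;              /* AND with NOT clears one */
        flags ^= FLAG_EXEC;                /* XOR toggles a bit       */

        if (flags & FLAG_READ)             /* AND tests a bit         */
            printf("readable\n");

        printf("flags = 0x%X\n", flags);                      /* 0x5  */
        printf("rotl8(0x81, 1) = 0x%02X\n", rotl8(0x81, 1));  /* 0x03 */
        return 0;
    }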
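
And a sketch of bit fields and packed data, again with invented names: a struct with bit fields lets the compiler do the packing, while manual shift-and-mask packing produces a layout you control exactly (RGB565 here), which is what you want for wire and file formats because bit-field layout is implementation-defined.

    #include <stdint.h>
    #include <stdio.h>

    /* Compiler-managed packing: convenient in memory, but the layout
       is implementation-defined, so avoid it for wire formats. */
    struct color_bf {
        unsigned red   : 5;
        unsigned green : 6;
        unsigned blue  : 5;
    };

    /* Manual packing of the same RGB565 layout into a well-defined
       16-bit value: masks bound each field, shifts position it. */
    uint16_t pack_rgb565(unsigned r, unsigned g, unsigned b)
    {
        return (uint16_t)(((r & 0x1F) << 11) | ((g & 0x3F) << 5) | (b & 0x1F));
    }

    int main(void)
    {
        struct color_bf c = { 31, 63, 31 };       /* white in RGB565 */
        uint16_t packed = pack_rgb565(31, 63, 31);
        printf("bit-field green = %u, packed = 0x%04X\n",
               (unsigned)c.green, packed);         /* packed = 0xFFFF */
        return 0;
    }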

Finally... here is a nice set of Bit Twiddling Hacks you might find useful.