When a machine deals with numbers, it uses the binary system. Binary (BIN) literally means consisting of two things or parts. In computing, these two things are false and true, usually represented as 0 and 1; in this context, they are called bits. The binary system has a base of 2, so each bit can hold one of two possible values: 0 or 1.
Bits form another unit of digital information known as a Byte. A Byte contains 8 bits and can represent Decimal (DEC) values from 0 to 255. But why do we use Bytes, and what is their purpose? Since computers work with long sequences of ones and zeroes, it’s crucial to have a fixed unit that indicates where one value starts and ends. Without such a unit, it would be impossible to differentiate between the various values in a sequence.
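A quick Python sketch makes the range of a Byte concrete. Python’s `int(..., 2)` parses a string of bits in base 2 (the variable names below are just illustrative):

```python
# A Byte is 8 bits; each bit is 0 or 1, so there are 2**8 = 256
# possible patterns, covering the Decimal values 0 through 255.
bits_per_byte = 8
pattern_count = 2 ** bits_per_byte   # 256

smallest = int("00000000", 2)        # all bits off -> 0
largest = int("11111111", 2)         # all bits on  -> 255

print(pattern_count, smallest, largest)  # 256 0 255
```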
Let’s take a number from the previous lesson and see how it appears in both Decimal and Binary forms.
Binary to Decimal Conversion
Converting a Binary number to a Decimal number is straightforward, especially when we deal with Bytes. Before showing you the “typical” conversion method, let’s first look at the representation of a Decimal number. The first row shows the place values of an 8-bit Byte as powers of 2 (our base), and the second row shows their Decimal equivalents:

2⁷ | 2⁶ | 2⁵ | 2⁴ | 2³ | 2² | 2¹ | 2⁰
128 | 64 | 32 | 16 | 8 | 4 | 2 | 1
To convert from Binary to Decimal, sum up all the Decimal place values corresponding to the bits that have a value of 1 (true). The bits set to 0 are ignored. For example, 1101 has 1s in the 8, 4, and 1 places, so its Decimal value is 8 + 4 + 1 = 13.
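This summation can be sketched in Python (`binary_to_decimal` is a hypothetical helper name, not something from the lesson):

```python
def binary_to_decimal(bits: str) -> int:
    """Sum the powers of 2 for every bit that is set to 1."""
    total = 0
    # Walk the bits right to left, so position 0 is the 2**0 place.
    for position, bit in enumerate(reversed(bits)):
        if bit == "1":
            total += 2 ** position  # bits set to 0 contribute nothing
    return total

print(binary_to_decimal("1101"))      # 13
print(binary_to_decimal("11111111"))  # 255
```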
Decimal to Binary Conversion
We’ve just converted a Binary number to a Decimal number, but what about the reverse? Let’s look at the “typical” method for converting from Decimal to Binary. Let’s take the number 13 as an example.
The conversion is simple: we repeatedly divide the number by 2 until the quotient reaches 0, writing down the remainder of each division:

13 ÷ 2 = 6, remainder 1
6 ÷ 2 = 3, remainder 0
3 ÷ 2 = 1, remainder 1
1 ÷ 2 = 0, remainder 1

Now, read the remainders from bottom to top to get the Binary representation of 13, which is 1101. That wasn’t too hard!
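The repeated-division method can be sketched as a small Python function (`decimal_to_binary` is an illustrative name, not from the lesson):

```python
def decimal_to_binary(value: int) -> str:
    """Repeatedly divide by 2, collecting the remainders."""
    if value == 0:
        return "0"
    remainders = []
    while value > 0:
        remainders.append(str(value % 2))  # remainder of this division
        value //= 2                        # the quotient feeds the next step
    # The remainders come out "top to bottom", so reverse them.
    return "".join(reversed(remainders))

print(decimal_to_binary(13))   # 1101
print(decimal_to_binary(255))  # 11111111
```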
Bit Numbering
When you look at the Byte representation, you’ll notice that the place values are arranged in ascending order from right to left. The right-most bit is called the Least Significant Bit (LSB) and carries the smallest place value (2⁰ = 1).

On the other hand, the left-most bit is called the Most Significant Bit (MSB) and carries the highest place value (2⁷ = 128 for an 8-bit Byte).
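In Python, the LSB and MSB of a Byte can be read with shifts and masks, a common idiom (the value below is arbitrary, chosen for illustration):

```python
value = 0b10010011  # 147: MSB and LSB both happen to be 1 here

lsb = value & 1          # right-most bit, place value 2**0 = 1
msb = (value >> 7) & 1   # left-most bit of an 8-bit Byte, place value 2**7 = 128

print(lsb, msb)  # 1 1
```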
These terms, MSB and LSB, also indicate the order in which the bits of a Byte are transmitted over a network. Bits are transmitted in one of two modes:
- Most Significant Bit first: the highest-valued bit is sent first.
- Least Significant Bit first: the lowest-valued bit is sent first.
It’s important to remember the order of bits during transmission, as different systems may use different conventions. This can vary depending on the hardware or vendor.
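To see why the convention matters, here is a sketch of how the same Byte looks when its bit order is mirrored (`reverse_bits` is a hypothetical helper, not part of any protocol shown in the lesson):

```python
def reverse_bits(byte: int) -> int:
    """Mirror the 8 bits of a Byte (an MSB-first vs. LSB-first view)."""
    result = 0
    for _ in range(8):
        result = (result << 1) | (byte & 1)  # move the current LSB into result
        byte >>= 1
    return result

# The Byte 00000001 sent LSB-first is read as 10000000 by an MSB-first receiver.
print(bin(reverse_bits(0b00000001)))  # 0b10000000
```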
Bigger Numbers
So far, we’ve been working with relatively small numbers, no bigger than 255. But what about really large numbers, like:
- 4,294,967,295 (maximum value of an unsigned 32-bit integer)
- 1,701,411,735,336,854,404,000 (a value too large even for a 64-bit integer)
Binary representation for these large numbers becomes difficult to manage and understand. Converting such large values between different numbering systems would be challenging. But fear not, there are better ways to represent large integers than a long sequence of 0s and 1s. We’ll explore this in future lessons.
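Python can illustrate how unwieldy the Binary form gets. The built-in `bin()` returns a number’s Binary representation with a `0b` prefix:

```python
# The 32-bit maximum already needs 32 binary digits...
max_u32 = 2 ** 32 - 1
print(max_u32)                 # 4294967295
print(len(bin(max_u32)) - 2)   # 32 digits after the "0b" prefix

# ...which is far harder to read than the Decimal form.
print(bin(max_u32))  # 0b11111111111111111111111111111111
```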
Summary
Now you understand how computers interpret numbers, convert Binary to Decimal, and vice versa. With this foundational knowledge, you can proceed with exploring other interesting aspects of computer science. See you in the next lesson!