
Using our Hamming codes calculator, you can encode, decode and detect errors in binary messages! Since the beginning of the information age, the communication of data at a machine level has required particular attention. Errors can and will happen, and this can cause a program to fail or your Windows computer to show a blue screen of death. The detection and correction of errors required the introduction of linear codes. What are those? Keep reading!

Here you will discover

  • What errors are, and why they matter;
  • What error correction is;
  • What we can do about errors with linear codes;
  • What a Hamming code is;
  • Why Hamming codes are so special;
  • How to encode, correct, and decode a message;
  • How Hamming codes work: the Hamming code algorithm;
  • How to use our Hamming codes calculator; and
  • A lot of math. But if you are here, you are likely ready for that!

What are errors?

In every form of communication, you can find errors. Imagine talking to your friend while driving with your windows open or talking on the phone with poor signal: "can you repeat that, please?" This is the most basic form of error correction: repetition codes!

Errors appear as a corruption of a message. Let's think in terms of letters: it will be easier. An accidental substitution of one character in a word for another is an instance of a single error. Think of something like "I like math" becoming "I like meth".

Computers suffer from errors too, all the time. The alphabet of a computer is the simplest possible, with size 2 (the number of symbols needed to build every word). We are talking about binary code, and even if you don't notice it, it's of fundamental importance. Every message that computers, smartphones, and everything digital use to speak, both internally and over networks, is made of 0s and 1s.

Whether through interference from the outside world, bad data, or plain bad luck, you don't need much to make a bit flip! We need a solution to this problem.

🙋 A bit is the smallest unit of information in a binary message. The term is a contraction of binary digit. Be careful not to confuse a bit with a byte: the latter is a set of 8 bits, used to encode characters in your computer!

What are error correction codes?

Machines are pretty rigid! When they meet an error, they don't bother to try to solve it by themselves: they either skip it or crash after a short while. Have you ever had the pleasure of getting a blue screen of death?

At the dawn of computation, engineers made communication between computers reliable by introducing techniques that allowed them to detect and correct errors, at the price of a few extra bits in every message.

🔎 Error correction routines were introduced when computers still used punched cards. That was only about 70 years ago; since then, technology has made truly giant leaps. However, the working principles have remained the same: the mathematics of computation fits both vacuum tubes and i9 CPUs.

The mathematical theory behind error correction calls messages in which you can identify and correct bit flips codes. Many error correcting codes exist, but we will focus on just two types: parity codes and Hamming codes.

What is parity?

The simplest (and the first adopted) way to possibly detect an error in a binary message is to add an extra bit for any given amount of data bits. Then, according to the quantity of 1s in the message, we assign a value to that bit.

Two types of parity exist out there:

  • Even parity: the parity bit is chosen so that the total number of 1s, including the parity bit itself, is even. If the number of ones in the message is even, append 0; if it is odd, append 1. For example:

    1100 → 11000

  • Odd parity: the parity bit makes the total number of 1s odd. If the number of ones in the message is even, append 1. For example:

    1100 → 11001

🔎 In mathematical terms, the parity checks correspond to a binary addition: the even parity bit is the sum of the bits modulo 2. In the case of odd parity, it is additionally necessary to apply the NOT logic gate to the result. You can learn more from our dedicated tool, the parity bit calculator.

The method is not foolproof. A parity bit only reveals whether an odd number of bit flips happened during the transmission. Double bit errors go undetected, and in any case there is no indication of the position of the flipped bit. That's where Hamming codes come in.
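The parity check is easy to sketch in code. A minimal Python example, assuming the standard convention (the even parity bit makes the total count of 1s even; invert it for odd parity):

```python
def parity_bit(bits: str, odd: bool = False) -> str:
    """Return the parity bit for a string of 0s and 1s.

    Even parity: the bit makes the total number of 1s even.
    Odd parity: the bit makes the total number of 1s odd.
    """
    ones = bits.count("1")
    bit = ones % 2      # 0 if the count of ones is already even
    if odd:
        bit ^= 1        # flip for odd parity
    return str(bit)

print("1100" + parity_bit("1100"))            # even parity: 11000
print("1100" + parity_bit("1100", odd=True))  # odd parity:  11001
```

A flipped bit changes the count of 1s by one, so recomputing the parity on arrival exposes any single error.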

What are Hamming codes?

Hamming codes are advanced error correction codes that allow for detecting and correcting single bit errors in messages of various lengths.

Hamming codes use combinations of parity bits to identify the position of the errors, and they do so in the most efficient way possible.

In layman's words, Hamming codes have the highest possible ratio of data bits to total message length among codes with a distance of 3 (the number of bit flips needed to turn one valid message into another). That's pretty high compared to many other error correction codes.

We explored the concept of "distance" in depth in our Hamming distance calculator!

🔎 The scientist Richard Hamming came up with Hamming codes in 1950. He launched a computation on an early computer, then left for the weekend. The program failed because of an error shortly after, only to be discovered on Monday (a memory shared by many developers). He then decided to develop a mechanism that would allow computers to correct errors independently.

Hamming codes come in many sizes. As we will see in more detail later, there are two sets of bits in such a code: parity bits and code bits. Two numbers identify Hamming codes:

  • The number of parity bits + number of code bits; and
  • The number of code bits.

The first one corresponds to the length of the encoded message, while the second one to the decoded message (the bits effectively containing your data).

The simplest Hamming code is the 3–1 code, while the most common is the 7–4 Hamming code. Larger sizes are less common; the largest typically used is the 255–247, which has an impressive rate!
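These sizes follow a simple pattern: r parity bits cover a block of 2^r - 1 total bits, leaving 2^r - 1 - r for data. A quick Python sketch of the family:

```python
# Hamming code sizes: r parity bits give (2^r - 1) total bits,
# of which (2^r - 1 - r) carry data.
for r in range(2, 9):
    total = 2**r - 1
    data = total - r
    print(f"{total}-{data} code: {r} parity bits, rate {data / total:.3f}")
```

The loop prints 3-1, 7-4, 15-11, and so on up to 255-247; notice how the rate climbs toward 1 as the block grows.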

How do Hamming codes work?

The language of error correction is linear algebra: all of the processes of encoding, detection, correction, and decoding happen through a set of different matrices. The good news is that it's possible to explain the process without dwelling on dot products and identities, but we will analyze both approaches.

In the next section, we will discover how Hamming codes work. Let's go!

⚠️ The mathematical foundations of the error correction theory are not the easiest ones! We are going to try to keep our language simple and understandable. The symbol ☑ (dangerous bend symbol) will warn you of incoming difficult passages!

First, we need to analyze how to create the code.

  • Write down a series of numbers, let's say:

    1 2 3 4 5 6 7 8 9 10 11 12

    We truncated it for convenience, but you can extend it as you like.

  • Translate them in binary code (learn how to do it from the binary converter!):

    0001 0010 0011 0100 0101 0110 0111 1000 1001 1010 1011 1100

  • Assign the parity bits to the positions equaling powers of 2 (1, 2, 4, 8..., or, in binary, 1, 10, 100, 1000...):

    [0001] [0010] 0011 [0100] 0101 0110 0111 [1000] 1001 1010 1011 1100

    (parity positions in brackets)

  • Breathe! We now have two sets of bits: parity bits and code bits:

    • The parity bits are 1, 2, 4, and 8;
    • The code bits are 3, 5, 6, 7, 9, 10, 11, and 12.

Now it's time to introduce Hamming's ingenious idea: assign to every parity bit a specific set of code bits in such a way that the assignment is unique. This mechanism will allow us to cross-check the values of the parity bits and pinpoint the position where an error occurs. Let's do it!

☑ General assignment rule:
Every code bit whose n-th binary digit (counting from the right, starting at n = 0) equals 1 is assigned to the parity bit at position 2^n.

We are going to illustrate this step by step! The relevant parity bit will be marked in [brackets] (with the other parity bits left plain), and every assigned code bit in (parentheses).

Step 1: The code bits whose last binary digit (n = 0) equals 1 are assigned to the first parity bit, at position 2^0 = 1 (in binary, 1):

[0001] 0010 (0011) 0100 (0101) 0110 (0111) 1000 (1001) 1010 (1011) 1100

thus encoding the bits numbered 3, 5, 7, 9, and 11.

Step 2: The code bits whose second-to-last binary digit (n = 1) equals 1 are assigned to the second parity bit, at position 2^1 = 2:

0001 [0010] (0011) 0100 0101 (0110) (0111) 1000 1001 (1010) (1011) 1100

which allows it to encode the bits 3, 6, 7, 10, and 11.

Step 3: The code bits whose third-to-last binary digit (n = 2) equals 1 are assigned to the third parity bit, at position 2^2 = 4:

0001 0010 0011 [0100] (0101) (0110) (0111) 1000 1001 1010 1011 (1100)

encoding the bits 5, 6, 7, and 12.

Step 4: The code bits whose fourth-to-last binary digit (n = 3) equals 1 are assigned to the fourth parity bit, at position 2^3 = 8:

0001 0010 0011 0100 0101 0110 0111 [1000] (1001) (1010) (1011) (1100)

which is associated with bits 9, 10, 11, and 12.

We can write it from the perspective of the data bit too:

  • Bit #3 (0011 in binary) is encoded by parity bits #1 and #2.
  • Bit #5 (0101 in binary) is encoded by parity bits #1 and #3.
  • Bit #6 (0110 in binary) is encoded by parity bits #2 and #3.
  • Bit #7 (0111 in binary) is encoded by parity bits #1, #2, and #3.
  • Bit #9 (1001 in binary) is encoded by parity bits #1 and #4.

We can go on, but you can trust us: no combination of parity bits repeats!

The fully encoded message is 1010101001.

🔎 The number of parity bits covering bit #7 is three (and the same goes for bit #11), while most other message bits get just two. Mistake? No! We chose a number of bits not used in any Hamming code for our example (12): if we had used 7 or 15, every code bit would have been covered by at least two parity bits, with no combination repeating.
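The assignment rule can be stated compactly in code: position n is covered by the parity bit at position 2^k exactly when the k-th binary digit of n is 1. A short Python sketch:

```python
def covering_parity_bits(n: int) -> list[int]:
    """Positions (powers of two) of the parity bits covering position n."""
    return [2**k for k in range(n.bit_length()) if n >> k & 1]

# Reproduce the per-bit table above:
for n in [3, 5, 6, 7, 9]:
    print(n, "->", covering_parity_bits(n))
# 3 -> [1, 2]
# 5 -> [1, 4]
# 6 -> [2, 4]
# 7 -> [1, 2, 4]
# 9 -> [1, 8]
```

Because every position has a distinct binary representation, every code bit gets a distinct set of parity bits, which is exactly what makes errors locatable.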

Now that we have the rules for the encoding, the following steps are pretty straightforward. We can try with a simple example:

  • Take the message 11010110.
  • Make space for the parity bits: ??1?101?0110.
  • Apply the rules: each parity bit equals the sum of the corresponding code bits, modulo 2 (so that each parity group contains an even number of 1s).
    • First parity bit:
      (1+1+1+0+1) mod 2 = 4 mod 2 = 0

    • Second parity bit:
      (1+0+1+1+1) mod 2 = 4 mod 2 = 0

    • Third parity bit:
      (1+0+1+0) mod 2 = 2 mod 2 = 0

    • Fourth parity bit:
      (0+1+1+0) mod 2 = 2 mod 2 = 0

  • The encoded message is then 001010100110.
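The hand computation above can be automated. A hypothetical Python helper for the 12-bit layout used here (data bits in the non-power-of-two positions, each parity bit the modulo-2 sum of the positions it covers):

```python
def hamming_encode(data: str, total: int = 12) -> str:
    """Place data bits in non-power-of-two positions, then fill parity bits."""
    code = [0] * (total + 1)            # 1-indexed; code[0] is unused
    bits = iter(data)
    for pos in range(1, total + 1):
        if pos & (pos - 1):             # not a power of two: a data position
            code[pos] = int(next(bits))
    p = 1
    while p <= total:
        # parity bit at position p covers every position whose binary
        # representation has the p bit set (including p itself, still 0 here)
        code[p] = sum(code[i] for i in range(1, total + 1) if i & p) % 2
        p *= 2
    return "".join(map(str, code[1:]))

print(hamming_encode("11010110"))   # -> 001010100110
```

Running it on the example message reproduces the encoded string computed by hand above.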

Now it's time for some linear algebra (we are sorry).

We can also write the relations between data and parity bits in matrix form. To build the matrices we need, we add some leading zeros (in front of the numbers) to uniform their lengths: for example, if we have four parity bits, every number must always have four digits. 11 then becomes 0011.

☑ We then simply write down the binary numbers corresponding to the positions of the code bits as the columns of the matrix. The resulting rows are the encoding rules for every parity bit:

A=\begin{bmatrix} 1&1&0&1\\ 1&0&1&1\\ 0&1&1&1\\ \end{bmatrix}

🔎 Take a second look at the matrix. The first column is 11 (that is, 3) if you read it from below (011). That number is encoded by parity bits #1 and #2. The second column is 101, encoded by bits #1 and #3... and so on! Writing this matrix is not hard at all.

What about the parity bits? They are encoded in an identity matrix I, and you can decide where to insert them when building the other matrices.

  • If you keep the two blocks apart, appending the identity matrix as a whole (so the data bits pass through unchanged), you are performing a systematic encoding.
  • If you insert the parity bits back in their original power-of-two positions, you will use a non-systematic encoding.

We used the systematic encoding in our Hamming codes calculator, but it's not hard to do it the other way around.

The matrix A is used to find the matrix G, which is also called the generator matrix. We write G = (I_n ∥ A^T), with n being the number of code bits and A^T being the transpose of the matrix A.

G=\begin{bmatrix} 1&0&0&0&1&1&0\\ 0&1&0&0&1&0&1\\ 0&0&1&0&0&1&1\\ 0&0&0&1&1&1&1\\ \end{bmatrix}

Take a look at the dimensions of the matrix. The number of rows equals the number of data bits, while the number of columns is the total: the number of code bits plus the number of parity bits.

It's a minor spoiler (we are going to cover it in the next section), but right now we are developing the matrices for the 7–4 Hamming code. 7 is the number of columns (thus, the total number of bits in a complete message) and 4 is the number of rows (the data bits, in positions 3, 5, 6, and 7 of the non-systematic layout).

We can now build the matrix H, also called the parity check matrix. The rule for its construction is H = (A ∥ I_p), where p is the number of parity bits.

H=\begin{bmatrix} 1&1&0&1&1&0&0\\ 1&0&1&1&0&1&0\\ 0&1&1&1&0&0&1\\ \end{bmatrix}
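With the matrix A in hand, both G and H are mechanical to build. A minimal pure-Python sketch of the construction G = (I_n ∥ A^T), H = (A ∥ I_p), with no libraries assumed:

```python
# Columns of A are the data-bit positions (3, 5, 6, 7) written in binary.
A = [[1, 1, 0, 1],
     [1, 0, 1, 1],
     [0, 1, 1, 1]]

p, n = len(A), len(A[0])                 # 3 parity bits, 4 data bits

def identity(k):
    """k x k identity matrix as a list of rows."""
    return [[int(i == j) for j in range(k)] for i in range(k)]

At = [list(col) for col in zip(*A)]      # transpose of A (4x3)
G = [identity(n)[i] + At[i] for i in range(n)]   # G = (I_n | A^T), 4x7
H = [A[i] + identity(p)[i] for i in range(p)]    # H = (A | I_p), 3x7

for row in G:
    print(row)
for row in H:
    print(row)
```

A useful sanity check (which you can add yourself): G·H^T must be the zero matrix modulo 2, since every codeword must produce a null syndrome.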

Encoding, error detection and correction: the Hamming code algorithm

Here we will analyze how, through some basic operations, Hamming codes allow for the encoding, error detection and correction, and eventually decoding of a message. We will guide you through the various steps of the Hamming code algorithm. Buckle up, because it will be quite an adventure. 😃

The initial step is the encoding of a message. First, let's choose a binary message a⃗, which we consider to be a vector. Its length should equal the number of code bits in the chosen Hamming code.

In order to encode it, we simply apply the matrix multiplication rules to the message and the G matrix. The resulting message x⃗ is x⃗ = a⃗G.

The resulting message's length is equal to the number of columns of the G matrix. Notice how the dimensions of G are the numbers identifying a Hamming code!

The error detection routine involves the H matrix. Given an encoded message x⃗, we calculate its product with the parity check matrix:

s⃗ = x⃗H^T

The result is a vector with length equal to the number of parity bits used in the code. We call this vector the syndrome vector, and it is the key to the process of detection of errors in Hamming codes.

The syndrome vector contains information on the parity of the message. If no error occurred, the syndrome vector is equal to the null vector ([0,0,0,...]), since the parity of the message and the information carried by the parity bits coincide.

However, if bad luck occurred and the message suffered from bit flips, we would no longer get a null syndrome vector.

We have two possible situations:

  • The error occurs on a code bit. The resulting syndrome vector contains two or more bits with value 1, and each such combination corresponds uniquely to the position where the error occurred.
  • The error occurs on a parity bit. In this case, the syndrome vector contains a single 1, in the position corresponding to the affected parity bit.

⚠️ Hamming codes work perfectly for single bit errors. However, if two bits flip in the same block, the syndrome points at the wrong position, and the "correction" yields a different valid codeword: the double error goes unnoticed. You can try this in our calculator. It can't tell whether a double error occurred in the transmission, so... don't trust it blindly! Adding an overall parity bit allows for the detection, but not the correction, of double bit errors.

We are now at the error correction routine. We need to invoke the parity check matrix:

H=\begin{bmatrix} 1&1&0&1&1&0&0\\ 1&0&1&1&0&1&0\\ 0&1&1&1&0&0&1\\ \end{bmatrix}

Every column of the H matrix corresponds to a possible syndrome vector. By checking which one is identical to s⃗, it is possible to read off the position of the error.

Once we know the position of the error, we only need to switch the value of the affected bit from 0 to 1, or vice versa.
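Detection and correction can be sketched in a few lines of Python for the 7–4 code, with H as defined above (a single-error decoder only; double errors will be mis-corrected, as the warning box explains):

```python
H = [[1, 1, 0, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]

def syndrome(x):
    """s = x H^T (mod 2): one parity check per row of H."""
    return [sum(xi * hi for xi, hi in zip(x, row)) % 2 for row in H]

def correct(x):
    """Flip the bit whose H column matches the syndrome (single errors only)."""
    s = syndrome(x)
    if any(s):
        pos = list(zip(*H)).index(tuple(s))   # matching column of H
        x = x.copy()
        x[pos] ^= 1
    return x

received = [1, 1, 1, 1, 1, 0, 0]      # third bit flipped in transit
print(syndrome(received))             # -> [0, 1, 1]
print(correct(received))              # -> [1, 1, 0, 1, 1, 0, 0]
```

Note how the nonzero syndrome [0, 1, 1] matches the third column of H, so the third bit is the one flipped back.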

And we finally get to decoding! 😁 Now that we have an error-free message (unless multiple errors occurred), we simply need to recover the encoded string. To do so, we use a weird matrix that will erase the parity bits. How do we do that?

We take an identity matrix with size equal to the number of code bits and, in the positions of the parity bits chosen when building the H matrix, we add null columns containing only zeros.

Here is an example:

R=\begin{bmatrix} 1&0&0&0&0&0&0\\ 0&1&0&0&0&0&0\\ 0&0&1&0&0&0&0\\ 0&0&0&1&0&0&0\\ \end{bmatrix}

Now, multiply the encoded and corrected message by the transpose of the R matrix (transposing makes the dimensions match):

a⃗ = x⃗R^T

And it's done!

The 7–4 Hamming code in action

The 7–4 Hamming code uses three parity bits for every 7 bits of the message, which leaves space for four data bits. We will use systematic encoding, thus placing the parity bits at the end of the encoded message.

Let's apply the Hamming code algorithm! Assume we want to encode the message a⃗ = [1,1,0,1]. We need to use the generator matrix, hence:

x⃗ = a⃗G

where:

a⃗ = [1,1,0,1]

and:

G=\begin{bmatrix} 1&0&0&0&1&1&0\\ 0&1&0&0&1&0&1\\ 0&0&1&0&0&1&1\\ 0&0&0&1&1&1&1\\ \end{bmatrix}

Thus we get:

x⃗ = [1,1,0,1,1,0,0]

The parity is easily computed. The first parity bit encodes the code bits #3, #5, and #7 (in the non-systematic numbering): the sum of their values is 3, so, modulo 2, the first parity bit is set to 1.

The second parity bit encodes the bits #3, #6, and #7, which together sum to 2, and 2 modulo 2 equals 0.

Finally, the third parity bit encodes #5, #6, and #7, with sum 2: the third parity bit has value 0.

Now let's assume a transmission without errors. Take the encoded message [1,1,0,1,1,0,0] and multiply it by the transpose of the matrix H, like this:

s⃗ = x⃗H^T

with:

x⃗=[1,1,0,1,1,0,0]\small \vec{x} = [1,1,0,1,1,0,0]

and:

H^T=\begin{bmatrix} 1&1&0\\ 1&0&1\\ 0&1&1\\ 1&1&1\\ 1&0&0\\ 0&1&0\\ 0&0&1 \end{bmatrix}

This results in:

s⃗ = [0,0,0]

See? The syndrome vector is null: no error has been detected.

What if an error occurred instead? Let's take the same message of the example before, and flip the third bit:

[1,1,0,1,1,0,0] → [1,1,1,1,1,0,0]

Now let's apply the same procedure of error detection:

The resulting syndrome vector is:

s⃗ = [0,1,1]

To correct the error, we first have to identify the column of H corresponding to the vector s⃗ = [0,1,1].

It's the third column! This means that the error is in the third bit.
Let's correct it.

[1,1,1,1,1,0,0] → [1,1,0,1,1,0,0]

And what about decoding? We have to apply the last matrix, R (transposed, so the dimensions match), to the corrected vector x⃗:

a⃗ = x⃗R^T

where x⃗ and R are, respectively:

x⃗ = [1,1,0,1,1,0,0]
R=\begin{bmatrix} 1&0&0&0&0&0&0\\ 0&1&0&0&0&0&0\\ 0&0&1&0&0&0&0\\ 0&0&0&1&0&0&0\\ \end{bmatrix}

The result is:

a⃗ = [1,1,0,1]

As you can see, the resulting vector is exactly the original message. Aren't you happy? We encoded it, corrupted it, corrected the error without any difficulty, and then got it back just as easily. That's the usefulness of Hamming codes in action!
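The whole worked example can be verified in code. A pure-Python sketch that encodes, corrupts, corrects, and decodes exactly as above (decoding reduces to keeping the first four bits in this systematic layout):

```python
G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
H = [[1, 1, 0, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]

def matvec(v, M):
    """Row vector times matrix, modulo 2 (M given as a list of rows)."""
    return [sum(vi * mi for vi, mi in zip(v, col)) % 2 for col in zip(*M)]

a = [1, 1, 0, 1]
x = matvec(a, G)                      # encode: x = aG
print(x)                              # -> [1, 1, 0, 1, 1, 0, 0]

x[2] ^= 1                             # corrupt the third bit
s = matvec(x, list(zip(*H)))          # syndrome: s = x H^T
print(s)                              # -> [0, 1, 1]

pos = list(zip(*H)).index(tuple(s))   # which column of H matches s?
x[pos] ^= 1                           # flip the erroneous bit back
print(x[:4])                          # decode: keep the data bits -> [1, 1, 0, 1]
```

Every printed value matches the hand calculation in this section, which is a reassuring check that the matrices were built correctly.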

How to use our Hamming codes calculator?

With our Hamming codes calculator, you will be able to perform all the possible operations on a linear code: encoding, error detection, correction, and decoding.

The first choice you have to make is the size of the code you want to use. Just select it; we give you the 7–4 as a default.

🔎 Remember to choose the size of your code wisely! The second number is the one to look at: choose a value of which your message's length is an integer multiple. If your message is 11 digits long, choose the 15–11 code; if it's 20 bits long, a good choice would be the 7–4 code, which will repeat five times over the entire length.

Then choose whether you want to encode, decode, or perform error correction/detection. Every message box has a maximum length; if needed, more will appear 😉. If you choose error correction or decoding, remember to use the correct message length: for example, 7 in a 7–4 code.

💡 If you chose decoding, you will see a third choice. If you have already inserted a message in the error detection routine, our calculator will ask you if you want to decode it. If yes, just choose copy. If you want to start decoding from scratch, then choose insert.

Insert the message! A new field will appear as soon as you type in the current one. If you are writing a longer message, break it into sections and gradually fill the fields.

⚠️ You can only insert 0s and 1s: any other character will stop the Hamming codes calculator!

A final word

If you made it here, it means that you know a lot about Hamming codes already! We could have said more, but we thought this was already too much!

Error correcting codes are highly specialized tools, but also a beautiful application of deeply theoretical math to a problem that affects everyday life, even if it goes unnoticed. If you are interested in other small things that help your computer run, check our LFSR calculator and our bit shift calculator!

Explore our Hamming codes calculator to learn more about this relatively obscure mathematical discipline or to get some help if you meet this type of error correction code on your way.

We don't know if you have messages to encode or errors to detect, but we are sure you enjoyed this journey in the space of codes! 😉

FAQ

What is a Hamming code?

A Hamming code is an error correction code that allows detecting and correcting single bit errors in a binary message.

What is error correction?

Error correction is the branch of mathematics and computer science that studies the occurrence and prevention of errors in a message. In the last decades, researchers developed an entire theory around it, allowing for the almost flawless operation of electronic devices.

How do Hamming codes work?

A Hamming code assigns a unique combination of parity bits to each bit in a message. Each time a single bit error occurs in a message, the parity of the message will show a pattern that is unequivocally traceable to a specific position in the message.

How do I calculate Hamming codes?

Hamming codes use a set of matrices to perform the various steps of the Hamming code algorithm: encoding, error detection and correction, and decoding.

Each code size has a defined set of matrices, and by simple multiplication, it is possible to go through the entire process with a vector. However, a calculator makes things easier. Try the Hamming codes calculator at omnicalculator.com!

Davide Borchia