Using multiple redundancy is expensive. Can we adapt the idea of parity bits (which was cheaper) not only to detect but also to correct errors? Suppose we arrange the stream of bits we want to send in a two-dimensional M x N array.

                   N
          __________________
         | 1 1 0 1 .... 0 | 1
         | 0 0 0 1 .... 1 | 1
    M    |      ...       | .
         |      ...       | .
         |________________|___
         | 1 1 0 0 .... 0 | 1

We make the same assumption as before, namely that errors are rare, so that the chance of two errors occurring in the same row or column is extremely low. In this scheme, we add a parity bit for each row and for each column. Now, if a single error occurs, we detect a parity failure in both a row and a column. The intersection of that row and column localizes the bit which is in error, using far fewer bits than multiple redundancy. Since it is only a single bit which is in error, we can simply flip the bit to correct the error.
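The encode/locate/flip procedure can be sketched as follows. This is an illustrative implementation, not from any standard library; the function names and the choice of even parity are assumptions.

```python
def parity(bits):
    """Even parity bit: 1 if the number of 1s is odd, else 0."""
    return sum(bits) % 2

def encode(rows):
    """Append a parity bit to each row, then a parity row for the columns.

    `rows` is an M x N list of 0/1 lists. The bottom-right corner bit is
    the parity of the row-parity column.
    """
    coded = [row + [parity(row)] for row in rows]
    coded.append([parity(col) for col in zip(*coded)])
    return coded

def correct(coded):
    """Locate a single-bit error via the row/column parity checks and flip it."""
    bad_rows = [i for i, row in enumerate(coded[:-1])
                if parity(row[:-1]) != row[-1]]
    bad_cols = [j for j in range(len(coded[0]) - 1)
                if parity([row[j] for row in coded[:-1]]) != coded[-1][j]]
    # Exactly one failing row and one failing column pinpoint the bad bit.
    if len(bad_rows) == 1 and len(bad_cols) == 1:
        coded[bad_rows[0]][bad_cols[0]] ^= 1
    return coded
```

For example, encoding a 4 x 5 block, flipping one bit, and running `correct` recovers the original coded block, since the single error fails exactly one row check and one column check.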

This suffers from the same problem as ordinary parity, i.e. we cannot detect an even number of errors in a row or column. On the other hand, it has a huge advantage in efficiency compared to the multiple redundancy approach. If we define the redundancy by

         number of bits used in full
    R = -----------------------------
         number of bits in message

then, as the length of the rows increases, the redundancy approaches 1, i.e. the overhead becomes small. But we can't make the rows too large, or the chance of errors within a block increases.
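The redundancy is easy to compute. The sketch below assumes the scheme above with one corner bit (the parity of the parity bits), which the text's formula does not spell out, so treat the exact count as an assumption.

```python
def redundancy(m, n):
    """R for an m x n parity array: total bits sent over message bits.

    Total = m*n data bits + m row-parity bits + n column-parity bits
    + 1 corner bit (assumed; parity of the parity bits).
    """
    return (m * n + m + n + 1) / (m * n)
```

For a 4 x 4 block this gives R = 25/16 ~ 1.56, while a 32 x 32 block gives R ~ 1.06, showing how the overhead shrinks as the block grows.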