Turbo code

Turbo codes are a class of recently developed high-performance error-correction codes finding use in deep-space satellite communications and other applications where designers seek to achieve maximal information transfer over a limited-bandwidth communication link in the presence of data-corrupting noise. Of all practical error-correction methods known to date, turbo codes, together with low-density parity-check (LDPC) codes, come closest to approaching the Shannon limit, the theoretical limit of maximum information transfer rate over a noisy channel.
The method was introduced by Berrou, Glavieux, and Thitimajshima in their 1993 paper "Near Shannon Limit error-correcting coding and decoding: Turbo-codes", published in the Proceedings of the IEEE International Conference on Communications [1] (http://www-elec.enst-bretagne.fr/equipe/berrou/Near%20Shannon%20Limit%20Error.pdf). Turbo code refinement and implementation are an area of active research at a number of universities.
Turbo codes make it possible to increase the achievable data rate without increasing the power of a transmission, or, equivalently, to decrease the power needed to transmit at a given data rate. Their main drawback is a relatively high decoding latency, which makes them unsuitable for some applications. For satellite use this is of little concern, since the transmission distance itself introduces latency owing to the finite speed of light. Turbo codes are used extensively in 3G mobile telephony standards.
Prior to turbo codes, the best-known technique combined a Reed–Solomon error-correcting block code with a convolutional code decoded by the Viterbi algorithm.
How turbo codes work
There are two related features of turbo codes that distinguish them from the more traditional error-correcting codes of the 20th century:
 The key insight is the realization that instead of producing a stream of binary digits from the signal it receives, the front end of the decoder can be designed to produce a likelihood measure for each bit.
 The nitty-gritty of turbo codes is the design of the decoder (and the encoder) so that it can exploit this additional information.
The encoder
The encoder sends three sub-blocks of bits. The first sub-block is the m-bit block of payload data. The second sub-block consists of n/2 parity bits for the payload data, computed using a convolutional code. The third sub-block consists of n/2 parity bits for a known permutation of the payload data, again computed using a convolutional code. That is, two redundant but different sub-blocks of parity bits are sent for the payload. The complete block has m+n bits of data with a code rate of m/(m+n).
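This structure can be sketched in a few lines of Python. The component choices here are illustrative assumptions, not from the source: a memory-2 recursive systematic convolutional encoder with octal generators 7 (feedback) and 5 (feedforward), a tiny 8-bit payload, and a fixed example permutation serving as the interleaver.

```python
def rsc_parity(bits, state=(0, 0)):
    """Parity stream of a small recursive systematic convolutional
    encoder: feedback 1+D+D^2 (octal 7), feedforward 1+D^2 (octal 5)."""
    s1, s2 = state
    parity = []
    for u in bits:
        a = u ^ s1 ^ s2          # feedback bit
        parity.append(a ^ s2)    # feedforward output
        s1, s2 = a, s1           # shift-register update
    return parity

def turbo_encode(payload, interleaver):
    """Return the three sub-blocks: systematic (payload) bits, parity
    of the payload, and parity of the permuted payload."""
    permuted = [payload[i] for i in interleaver]
    return payload, rsc_parity(payload), rsc_parity(permuted)

payload = [1, 0, 1, 1, 0, 0, 1, 0]       # m = 8 payload bits
interleaver = [3, 7, 0, 5, 2, 6, 1, 4]   # fixed example permutation
sys_bits, parity1, parity2 = turbo_encode(payload, interleaver)
```

Here m = 8 and each parity sub-block also has 8 bits (no puncturing), so 24 bits are sent for 8 payload bits: a rate-1/3 code, the classic unpunctured turbo-code configuration.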
The decoder
The decoder front end produces an integer for each bit in the data stream. This integer is a measure of how likely it is that the bit is a 0 or 1, and is also called a soft bit. The integer could be drawn from the range [−127, 127], where:
 −127 means "certainly 0"
 −100 means "very likely 0"
 0 means "it could be either 0 or 1"
 100 means "very likely 1"
 127 means "certainly 1"
This introduces a probabilistic aspect to the data stream from the front end, but it conveys more information about each bit than just 0 or 1.
 For example, for each bit, the front end of a traditional wireless receiver has to decide whether an internal analog voltage is above or below a given threshold voltage level. For a turbo-code decoder, the front end would instead provide an integer measure of how far the internal voltage is from the given threshold.
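A minimal sketch of such a soft-decision front end, assuming the [−127, 127] range above; the threshold, the scaling factor of 64 per volt, and the clipping are illustrative choices, not part of any standard:

```python
def soft_bit(voltage, threshold=0.0, scale=64.0):
    """Quantize how far an analog voltage is from the decision
    threshold into a soft bit in [-127, 127]: the sign indicates the
    likely bit value, the magnitude indicates confidence."""
    level = round((voltage - threshold) * scale)
    return max(-127, min(127, level))     # clip to the soft-bit range

# A hard-decision front end would reduce each voltage to just 0 or 1;
# the soft bit keeps the confidence information as well.
strong_one = soft_bit(1.9)     # far above threshold: high confidence
weak_one = soft_bit(0.05)     # barely above threshold: low confidence
likely_zero = soft_bit(-0.7)  # below threshold: negative soft bit
```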
To decode the (m+n)-bit block of data, the decoder front end creates a block of likelihood measures, with one likelihood measure for each bit in the data stream. There are two parallel decoders, one for each of the n/2-bit parity sub-blocks. Both decoders use the sub-block of m likelihoods for the payload data. The decoder working on the second parity sub-block knows the permutation that the encoder used for this sub-block.
Solving hypotheses to find bits
The nitty-gritty of turbo codes is how they use the likelihood data to reconcile differences between the two decoders. Each of the two convolutional decoders generates a hypothesis (with derived likelihoods) for the pattern of m bits in the payload sub-block. The hypothesis bit patterns are compared, and if they differ, the decoders exchange the derived likelihoods they have for each bit in the hypotheses. Each decoder incorporates the derived likelihood estimates from the other decoder to generate a new hypothesis for the bits in the payload. Then they compare these new hypotheses. This iterative process continues until the two decoders converge on the same hypothesis for the m-bit pattern of the payload, typically in 4 to 10 cycles.
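The iteration just described can be sketched as the following control loop. Only the exchange structure is shown: the two soft-in/soft-out (SISO) decoders are replaced by a toy stub, whereas a real turbo decoder would run a MAP/BCJR (or SOVA) pass over each parity trellis in their place; the function names and the log-likelihood-ratio (LLR) sign convention (positive means "likely 1") are assumptions for illustration.

```python
def iterate_llrs(channel_llr, siso1, siso2, perm, max_iters=10):
    """Exchange extrinsic likelihood information between two SISO
    decoders until their hard decisions on the payload agree."""
    m = len(channel_llr)
    extrinsic1 = [0.0] * m
    extrinsic2 = [0.0] * m            # a priori input for decoder 1
    for _ in range(max_iters):
        extrinsic1 = siso1(channel_llr, extrinsic2)
        # Decoder 2 sees the payload in interleaved order.
        perm_ext = siso2([channel_llr[i] for i in perm],
                         [extrinsic1[i] for i in perm])
        for j, i in enumerate(perm):  # de-interleave its output
            extrinsic2[i] = perm_ext[j]
        hyp1 = [llr + e > 0 for llr, e in zip(channel_llr, extrinsic1)]
        hyp2 = [llr + e > 0 for llr, e in zip(channel_llr, extrinsic2)]
        if hyp1 == hyp2:              # hypotheses agree: stop iterating
            break
    # Final decision fuses channel evidence with both extrinsic terms.
    return [1 if llr + e1 + e2 > 0 else 0
            for llr, e1, e2 in zip(channel_llr, extrinsic1, extrinsic2)]

def toy_siso(llrs, apriori):
    """Stand-in for a real SISO decoder: just scales the channel
    evidence. A real decoder would derive extrinsic LLRs from the
    parity bits via the BCJR algorithm."""
    return [0.5 * l for l in llrs]

bits = iterate_llrs([2.0, -1.0, 0.5, -3.0], toy_siso, toy_siso,
                    perm=[2, 0, 3, 1])
```

The key design point visible even in this sketch is that each decoder passes on only its *extrinsic* information (what it learned beyond its inputs), which keeps the two decoders from simply amplifying each other's existing beliefs across iterations.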
External links
 Jet Propulsion Laboratory webpage on Turbo codes (http://www331.jpl.nasa.gov/public/JPLtcodes.html)
 Major IEEE article: Closing in on the perfect code (http://www.spectrum.ieee.org/WEBONLY/publicfeature/mar04/0304code.html)