Error Correction Laboratory

N. Raveendran, D. Declercq, and B. Vasić, "A Sub-Graph Expansion-Contraction Method for Error Floor Computation," IEEE Transactions on Communications, 2020.

We developed a computationally efficient method for estimating the error floors of low-density parity-check (LDPC) codes over the binary symmetric channel (BSC) without any prior knowledge of the code's trapping sets. In some cases our method provides a million-fold reduction in computational complexity compared with standard Monte-Carlo simulation.

X. Xiao, B. Vasić, R. Tandon, and S. Lin, "Designing Finite Alphabet Iterative Decoders of LDPC Codes via Recurrent Quantized Neural Networks," IEEE Transactions on Communications, 2020.

We proposed an approach to designing finite alphabet iterative decoders (FAIDs) for low-density parity-check (LDPC) codes over the binary symmetric channel (BSC) via recurrent quantized neural networks (RQNNs). We use RQNNs to optimize the message-update look-up tables by jointly training the message levels and the QNN parameters. Our method automatically designs low-precision linear FAIDs with superior performance, lower complexity, and faster convergence than floating-point belief propagation in the waterfall region. We exploit the symmetry of the look-up tables and the channel-output symmetry to improve the numerical efficiency of training. Our method requires training on error patterns only, not on the entire code space.
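
As an illustration of the quantized-training idea, here is a minimal sketch, assuming PyTorch, of a straight-through quantizer with trainable message levels; the class and its parameters are our own simplification for illustration, not the paper's code.

    import torch

    class STEQuantizer(torch.nn.Module):
        """Snap messages to a finite alphabet whose level values are trainable."""
        def __init__(self, num_levels=7):
            super().__init__()
            # trainable message levels, initialized symmetrically around zero
            self.levels = torch.nn.Parameter(torch.linspace(-1.0, 1.0, num_levels))

        def forward(self, x):
            # hard assignment: index of the nearest level for each message
            idx = torch.argmin((x.unsqueeze(-1) - self.levels).abs(), dim=-1)
            q = self.levels[idx]
            # forward pass outputs the quantized value q; the straight-through
            # trick passes an identity gradient to x while q trains the levels
            return q + x - x.detach()

In an unrolled recurrent decoder, a quantizer like this would sit inside every iteration, so the training loss reaches both the neural message-update weights and the message alphabet itself.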

D. V. Nguyen and B. Vasić, "Two-Bit Bit Flipping Algorithms for LDPC Codes and Collective Error Correction," IEEE Transactions on Communications, 2014.

We introduced a new class of bit flipping algorithms for low-density parity-check codes over the binary symmetric channel. An additional bit at each variable node, representing the strength of its current estimate, makes the algorithm more powerful with a negligible increase in complexity. The two-bit message from a check node is generated by a finite state machine that tracks whether that check node was satisfied in the current and previous iterations. The algorithm permits a systematic search procedure to determine all uncorrectable error patterns, i.e., trapping sets. We believe that this algorithm is now widely used in flash memory systems.
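
To convey the flavor of the algorithm, here is a toy sketch assuming NumPy; the strength/flip rules below are illustrative stand-ins, not the optimized update rules derived in the paper.

    import numpy as np

    def two_bit_flip(H, y, max_iter=30):
        """H: m x n parity-check matrix over {0,1}; y: received hard decisions."""
        x = y.copy()                       # current bit estimates
        strength = np.ones_like(x)         # second bit: 1 = strong, 0 = weak
        prev_unsat = np.zeros(H.shape[0], dtype=int)
        for _ in range(max_iter):
            unsat = (H @ x) % 2            # current syndrome
            if not unsat.any():
                return x                   # all checks satisfied: done
            # two-bit check message via a two-state memory: weight 2 if the
            # check is unsatisfied now and was unsatisfied before, else 1
            w = unsat * (1 + prev_unsat)
            energy = H.T @ w               # evidence against each variable
            for v in np.flatnonzero(energy == energy.max()):
                if strength[v] == 1:
                    strength[v] = 0        # first weaken the estimate ...
                else:
                    x[v] ^= 1              # ... then flip it and reset
                    strength[v] = 1
            prev_unsat = unsat
        return x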

S. K. Planjery, D. Declercq, L. Danjean, and B. Vasić, "Finite Alphabet Iterative Decoders, Part I: Decoding Beyond Belief Propagation on the Binary Symmetric Channel," IEEE Transactions on Communications, 2013.

We introduced a new paradigm for iterative decoding of low-density parity-check codes using finite-precision messages. The message-update functions are designed using knowledge of the harmful subgraphs present in a given code, thereby rendering these decoders capable of outperforming belief propagation and other existing algorithms in the error floor region. These are among the most powerful yet hardware-friendly iterative decoders available today.
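
For concreteness, a minimal sketch, assuming NumPy, of one finite-alphabet update on a 7-level alphabet {-3, ..., +3}; the clipped-sum ("linear") variable-node rule below is a simple stand-in for the optimized look-up tables the paper designs.

    import numpy as np

    LEVELS = 3  # messages take integer values in {-3, ..., +3}

    def check_update(msgs):
        """Check node: sign is the product of signs, magnitude the minimum."""
        sign = np.prod(np.sign(msgs)) or 1
        return int(sign * np.min(np.abs(msgs)))

    def variable_update(channel_val, msgs):
        """Variable node: clipped sum of channel value and incoming messages."""
        return int(np.clip(channel_val + sum(msgs), -LEVELS, LEVELS))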

D. Declercq, B. Vasić, S. K. Planjery, and E. Li, "Guaranteed Error Correction of LDPC Codes via Iterative Decoder Diversity," IEEE Transactions on Communications, 2013.

We introduced a framework of decoder diversity for decoding low-density parity-check codes. Our scheme uses a plurality of iterative decoders which collectively correct more error patterns than any single decoder on a given code. The decoders used in the scheme are judiciously chosen to ensure that they have different decoding dynamics and correct different sets of error patterns. Switching from one decoder to another in a sequential manner greatly improves frame error rate performance in both the waterfall and error floor regions. Depending on the code, the error floor improvement can be several orders of magnitude.
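
The control flow of sequential diversity is simple; here is a minimal sketch assuming NumPy, with the set and ordering of decoders left abstract, since choosing them is precisely the design problem the paper addresses.

    import numpy as np

    def diversity_decode(H, y, decoders):
        """Run decoders in sequence until one returns a valid codeword."""
        for decode in decoders:            # decoders: functions (H, y) -> word
            x = decode(H, y)
            if not ((H @ x) % 2).any():    # zero syndrome: codeword found
                return x
        return None                        # every decoder in the set failed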

S. K. Chilappagari, D. V. Nguyen, B. Vasić, and M. W. Marcellin, "On Trapping Sets and Guaranteed Error Correction Capability of LDPC Codes and GLDPC Codes," IEEE Transactions on Information Theory, 2010.

We established lower and upper bounds on the guaranteed error correction capability of iterative bit-flipping decoding algorithms for low-density parity-check (LDPC) codes and generalized low-density parity-check (GLDPC) codes by bounding the number of variable nodes that have the required expansion. The problem of determining guaranteed error correction capability is hard in general, and for LDPC codes it is further complicated by the complex dynamics of iterative decoding algorithms.
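
The classical expansion argument that bounds of this type build on is the Sipser-Spielman condition; stated loosely (our paraphrase, for a (d_v, d_c)-regular code of length n):

    \[
      |N(S)| > \tfrac{3}{4}\, d_v |S| \quad \text{for every variable-node set } S \text{ with } |S| \le \alpha n
    \]
    \[
      \Longrightarrow \quad \text{bit flipping corrects every error pattern of weight at most } \tfrac{\alpha n}{2}.
    \]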

S. K. Chilappagari and B. Vasić, "Error-Correction Capability of Column-Weight-Three LDPC Codes," IEEE Transactions on Information Theory, 2008.

We derived bounds on the error-correction capability of the ensemble of column-weight-three (variable-degree-three) regular low-density parity-check (LDPC) codes over the binary symmetric channel (BSC) when decoded using the Gallager A algorithm. We analyzed decoding failures using the notions of trapping sets and inducing sets, and proved that a code whose Tanner graph has girth g ≥ 10 cannot correct all error patterns of weight up to g/2. Using this result, we proved that for any positive crossover probability of the BSC and sufficiently large block length, no code in this ensemble can correct a fraction of errors equal to the crossover probability. This is in contrast to the case of higher column weights, for which such error correction capability can be proven using expander-graph arguments. This result settles the problem of the error-correction capability of column-weight-three LDPC codes.
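
In symbols (our paraphrase of the two statements above, with g the girth, n the block length, and α the crossover probability):

    \[
      g \ge 10 \;\Longrightarrow\; \exists \text{ an error pattern of weight at most } g/2 \text{ that Gallager A fails to correct,}
    \]
    \[
      \forall \alpha > 0 \;\exists n_0 : \; n \ge n_0 \;\Longrightarrow\; \text{no code in the ensemble corrects all patterns of } \lceil \alpha n \rceil \text{ errors.}
    \]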

B. Vasić and S. K. Chilappagari, "An Information Theoretical Framework for Analysis and Design of Nano-Scale Fault-Tolerant Memories Based on Low-Density Parity-Check Codes," IEEE Transactions on Circuits and Systems, 2007.

We developed a theoretical framework for the analysis and design of fault-tolerant memory architectures. In the seventies, Taylor and Kuznetsov showed that memory systems have nonzero computational (storage) capacity, i.e., that the redundancy necessary to ensure memory reliability grows asymptotically only linearly with the memory size. The equivalence of the restoration phase in the Taylor-Kuznetsov scheme and the faulty Gallager B algorithm enabled us to establish a systematic method for analyzing memories built from unreliable components, drawing on the large body of knowledge on codes on graphs and iterative decoding.

B. Vasić and P. Ivaniš, "Error Errore Eicitur: A Stochastic Resonance Paradigm for Reliable Storage of Information on Unreliable Media," IEEE Transactions on Communications, 2016.

We presented an architecture for a storage system consisting of a storage medium made entirely of noisy memory elements and an error correction circuit made of a combination of noisy and noiseless logic gates. Surprisingly, our architecture is capable of retaining the stored information with a lower probability of error than a storage system whose correction circuit is made entirely of noiseless logic gates. The correction circuit is based on iterative decoding of low-density parity-check codes, and uses the positive effect of errors in logic gates to correct errors in memory elements. The randomness present in the logic gates helps the decoders escape from trapping sets and makes this class of decoders superior to their noiseless counterparts. Moreover, the random perturbations do not require any additional computational resources, as they are inherent to the noisy hardware itself.
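
A toy sketch of the mechanism, assuming NumPy: a Gallager B-style decoder whose check-node outputs are flipped with a small probability p_gate, standing in for faulty XOR gates. The rules and parameters are our simplification, not the paper's architecture.

    import numpy as np

    rng = np.random.default_rng(0)

    def noisy_gallager_b(H, y, p_gate=0.01, max_iter=50):
        m, n = H.shape
        V = [np.flatnonzero(H[:, v]) for v in range(n)]  # checks per variable
        C = [np.flatnonzero(H[c, :]) for c in range(m)]  # variables per check
        msg_vc = {(v, c): y[v] for v in range(n) for c in V[v]}
        for _ in range(max_iter):
            # check-to-variable: XOR of the other incoming messages; a random
            # flip with probability p_gate models an unreliable gate
            msg_cv = {}
            for c in range(m):
                for v in C[c]:
                    val = np.bitwise_xor.reduce([msg_vc[u, c] for u in C[c] if u != v])
                    msg_cv[c, v] = int(val) ^ (rng.random() < p_gate)
            x = np.empty(n, dtype=int)
            for v in range(n):
                inbox = [msg_cv[c, v] for c in V[v]]
                # decision: majority vote of channel value and all messages
                x[v] = int(sum(inbox) + y[v] > (len(inbox) + 1) / 2)
                for c in V[v]:
                    others = [msg_cv[d, v] for d in V[v] if d != c]
                    s = sum(others)
                    # majority of the other messages, ties resolved to the
                    # channel value (Gallager-A-like for column weight three)
                    msg_vc[v, c] = y[v] if 2 * s == len(others) else int(2 * s > len(others))
            if not ((H @ x) % 2).any():
                return x                   # valid codeword reached
        return x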

B. Vasić and O. Milenković, "Combinatorial Constructions of Low-Density Parity-Check Codes for Iterative Decoding," IEEE Transactions on Information Theory, 2004.

This was the first paper to use combinatorial design theory to construct well-structured and hardware-friendly low-density parity-check (LDPC) codes. While it was well known at the time that random, irregular LDPC codes could in principle approach fundamental limits, at finite block lengths such codes exhibit error floors, and their irregularity makes efficient hardware decoder implementation very challenging. Resolvability of a combinatorial design imposes a quasi-cyclic structure on the resulting code, and yields a class of codes that are both powerful and well-structured, thus lending themselves to low-complexity implementations. Codes later developed for standards across a variety of communication systems relied on exactly these features.
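
To illustrate the quasi-cyclic structure, here is a minimal sketch, assuming NumPy, of a parity-check matrix assembled from circulant permutation matrices; the (i·j mod p) exponent rule in the example is the classic array-code construction, used generically here rather than as the paper's specific designs.

    import numpy as np

    def qc_ldpc(P, p):
        """Expand an exponent matrix P into a quasi-cyclic parity-check matrix:
        block (i, j) is the p x p identity cyclically shifted by P[i][j]."""
        I = np.eye(p, dtype=int)
        blocks = [[np.roll(I, shift, axis=1) for shift in row] for row in P]
        return np.block(blocks)

    # Example: a small (3, 5)-regular quasi-cyclic code with circulant size 7
    H = qc_ldpc([[(i * j) % 7 for j in range(5)] for i in range(3)], 7)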

B. Vasić, S. K. Chilappagari, D. V. Nguyen, and S. K. Planjery, "Trapping Set Ontology," Allerton Conference 2009.

The failures of iterative message passing decoders for low-density parity-check codes on the additive white Gaussian noise channel and the binary symmetric channel can be understood in terms of combinatorial objects known as trapping sets. We derived a systematic method to identify the most relevant trapping sets for decoding over the BSC in the error floor region. We also elaborated on the notion of the critical number of a trapping set as the most relevant factor determining the effect of a trapping set on the frame error rate in the error floor regime. We derived a classification of trapping sets based on their topological structure, and developed the trapping set ontology, a database of trapping sets that summarizes the topological relations and hierarchy among them.
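
For reference, the notation used above, in our paraphrase of its common usage in this line of work:

    \[
      \mathcal{T}(a,b):\ \text{a set of } a \text{ variable nodes whose induced subgraph contains exactly } b \text{ odd-degree check nodes;}
    \]
    \[
      m(\mathcal{T}) \;=\; \text{the minimum number of variable nodes that must be initially in error for the decoder to fail on } \mathcal{T}.
    \]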