Refined Belief Propagation Decoding of Sparse-Graph Quantum Codes

Quantum stabilizer codes constructed from sparse matrices have good performance and can be efficiently decoded by belief propagation (BP). A conventional BP decoder treats a binary stabilizer code as an additive code over GF(4); handling the check-node messages in this setting is relatively involved, which raises the decoding complexity. Moreover, BP decoding of a stabilizer code usually suffers a performance loss because of the many short cycles in the underlying Tanner graph.
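
As a rough illustration of the kind of message passing involved, here is a minimal sketch of syndrome-based binary BP on a Tanner graph (not the refined decoder proposed in the paper, and not the GF(4) formulation; the parity-check matrix, prior LLRs, and stopping rule are illustrative assumptions):

```python
import numpy as np

def bp_syndrome_decode(H, syndrome, llr, max_iter=50):
    """Binary BP: look for an error pattern e with H @ e = syndrome (mod 2)."""
    m, n = H.shape
    msg_vc = np.where(H == 1, llr[None, :], 0.0)   # variable-to-check messages
    e_hat = np.zeros(n, dtype=int)
    for _ in range(max_iter):
        # Check-node update: the syndrome bit flips the sign of the message.
        tanh_half = np.where(H == 1, np.tanh(msg_vc / 2), 1.0)
        ext = np.prod(tanh_half, axis=1, keepdims=True) / tanh_half
        ext = np.clip(ext, -0.999999, 0.999999)
        msg_cv = np.where(H == 1,
                          (-1.0) ** syndrome[:, None] * 2 * np.arctanh(ext),
                          0.0)
        # Variable-node update and tentative hard decision.
        total = llr + msg_cv.sum(axis=0)
        e_hat = (total < 0).astype(int)
        if np.array_equal(H @ e_hat % 2, syndrome):
            break
        msg_vc = np.where(H == 1, total[None, :] - msg_cv, 0.0)
    return e_hat

# Tiny toy example (illustrative 3x6 parity-check matrix, single error).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
e_true = np.array([0, 0, 1, 0, 0, 0])
syndrome = H @ e_true % 2
llr = np.full(6, 2.0)        # prior log-likelihood ratios: errors are unlikely
print(bp_syndrome_decode(H, syndrome, llr))   # expected: [0 0 1 0 0 0]
```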

Welcome to the IEEE Journal on Selected Areas in Information Theory (JSAIT)

I would like to warmly welcome our readers to this inaugural special issue of JSAIT, the Information Theory Society’s first new journal since the IRE Transactions on Information Theory launched in 1953. The society’s desire to expand its technical scope, incubate new research directions, catalyze connections with other disciplines, and highlight new and emerging applications formed the impetus for the new journal.

Guest Editorial

Welcome to the first issue of the Journal on Selected Areas in Information Theory (JSAIT), focusing on Deep Learning: Mathematical Foundations and Applications to Information Science.

Functional Error Correction for Robust Neural Networks

When neural networks (NeuralNets) are implemented in hardware, their weights need to be stored in memory devices. As noise accumulates in the stored weights, the NeuralNet's performance degrades. This paper studies how to use error-correcting codes (ECCs) to protect the weights. Unlike classic error correction in data storage, the objective is to optimize the NeuralNet's performance after error correction rather than to minimize the uncorrectable bit error rate (UBER) of the protected bits.
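
As a toy illustration of why this functional objective differs from minimizing raw bit errors, the sketch below (my own example under an assumed 8-bit fixed-point storage format, not the paper's coding scheme) flips one bit position in every stored weight of a small linear layer: a most-significant-bit flip distorts the layer's output far more than a least-significant-bit flip, even though both give the same bit error rate.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(w, bits=8, scale=0.05):
    """Store weights as unsigned 8-bit fixed-point integers (assumed format)."""
    return np.clip(np.round(w / scale) + 2 ** (bits - 1),
                   0, 2 ** bits - 1).astype(np.uint8)

def dequantize(q, bits=8, scale=0.05):
    return (q.astype(float) - 2 ** (bits - 1)) * scale

def flip_bit(q, position):
    """Flip the same bit position in every stored weight."""
    return q ^ np.uint8(1 << position)

w = rng.normal(scale=1.0, size=(16, 8))   # weights of a toy linear layer
x = rng.normal(size=(100, 16))            # toy input batch
clean_out = x @ dequantize(quantize(w))

for pos in (0, 7):                        # LSB flip vs. MSB flip
    noisy_out = x @ dequantize(flip_bit(quantize(w), pos))
    mse = np.mean((noisy_out - clean_out) ** 2)
    print(f"bit {pos} flipped in all weights -> output MSE {mse:.4f}")
```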

Extracting Robust and Accurate Features via a Robust Information Bottleneck

We propose a novel strategy for extracting features in supervised learning that can be used to construct a classifier which is more robust to small perturbations in the input space. Our method builds on the information bottleneck by introducing an additional penalty term that encourages the Fisher information of the extracted features, parametrized by the inputs, to be small. We present two formulations in which the relevance of the features to the output labels is measured by either mutual information or the minimum mean-squared error (MMSE).
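
A hedged sketch of the kind of objective described above, in illustrative notation (the trade-off weights beta and gamma and the exact form of the Fisher-information penalty are assumptions, not necessarily the paper's formulation):

```latex
% Information-bottleneck trade-off augmented with a Fisher-information penalty
% on features Z extracted from input X with label Y (illustrative notation).
\min_{p(z \mid x)} \; -\,I(Z;Y) \;+\; \beta\, I(Z;X)
  \;+\; \gamma\, \mathbb{E}_{X}\!\left[ \int p(z \mid x)\,
      \bigl\lVert \nabla_{x} \log p(z \mid x) \bigr\rVert^{2} \, dz \right]
```

Here the last term is the Fisher information of the feature distribution with respect to the input; in the MMSE variant, the relevance term -I(Z;Y) would be replaced by an MMSE-based measure.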

Physical Layer Communication via Deep Learning

Reliable digital communication is a primary workhorse of the modern information age. The disciplines of communication, coding, and information theory drive this innovation by designing efficient codes that allow transmissions to be decoded robustly and efficiently. Progress toward near-optimal codes has been driven by individual human ingenuity, and breakthroughs have been, befittingly, sporadic and spread over several decades. Deep learning, by contrast, has become part of daily life, and its successes have largely come in settings that lack a (mathematical) generative model.