Addressing GAN Training Instabilities via Tunable Classification Losses

Submitted by admin on Thu, 06/20/2024 - 08:45
Generative adversarial networks (GANs), modeled as a zero-sum game between a generator (G) and a discriminator (D), allow generating synthetic data with formal guarantees. Noting that D is a classifier, we begin by reformulating the GAN value function using class probability estimation (CPE) losses. We prove a two-way correspondence between CPE-loss GANs and f-GANs, which minimize f-divergences. We also show that all symmetric f-divergences are equivalent in convergence.
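For reference (our addition, not part of the abstract), the vanilla GAN value function and a CPE-loss reformulation can be sketched as follows, where \(\ell(y, \hat{y})\) is a CPE loss and \(P_r\), \(P_Z\) denote the data and latent distributions; the paper's exact parameterization may differ:

\[
V(G, D) = \mathbb{E}_{X \sim P_r}[\log D(X)] + \mathbb{E}_{Z \sim P_Z}[\log(1 - D(G(Z)))],
\]
\[
V_\ell(G, D) = \mathbb{E}_{X \sim P_r}[-\ell(1, D(X))] + \mathbb{E}_{Z \sim P_Z}[-\ell(0, D(G(Z)))].
\]

With binary cross-entropy, \(\ell(1, p) = -\log p\) and \(\ell(0, p) = -\log(1 - p)\), the second form recovers the first.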

Information Velocity of Cascaded Gaussian Channels With Feedback

Submitted by admin on Wed, 06/19/2024 - 08:45
We consider a line network of nodes, connected by additive white Gaussian noise channels and equipped with local feedback. We study the velocity at which information spreads over this network. For transmission of a data packet, we give an explicit positive lower bound on the velocity for any packet size. Furthermore, we consider streaming, that is, transmission of data packets generated at a given average arrival rate. We show that a positive velocity exists as long as the arrival rate is below the individual Gaussian channel capacity, and we provide an explicit lower bound on it.
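For context (our addition, not part of the abstract): the capacity of a single Gaussian channel with signal power \(P\) and noise power \(N\) is

\[
C = \tfrac{1}{2}\log_2\left(1 + \frac{P}{N}\right) \text{ bits per channel use},
\]

so the streaming result guarantees a positive information velocity whenever the average arrival rate \(R\) satisfies \(R < C\).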

Long-Term Fairness in Sequential Multi-Agent Selection with Positive Reinforcement

Submitted by admin on Wed, 06/19/2024 - 08:45

While much of the rapidly growing literature on fair decision-making focuses on metrics for one-shot decisions, recent work has raised the intriguing possibility of designing sequential decision-making to positively impact long-term social fairness. In selection processes such as college admissions or hiring, slightly biasing selection toward applicants from under-represented groups is hypothesized to provide positive feedback that increases the pool of under-represented applicants in future selection rounds, thus enhancing fairness in the long term.
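To make the hypothesized feedback loop concrete, here is a minimal, hypothetical simulation (our illustration, not the paper's model): each round selects a fixed number of applicants with a small bias toward the under-represented group, and every successful selection enlarges that group's future applicant pool. All parameter names and values are assumptions for illustration.

import random

def simulate(rounds=20, pool_a=900, pool_b=100, slots=50, bias=0.1):
    """Toy positive-reinforcement dynamic (illustrative only).

    pool_a / pool_b: applicant counts for the majority and
    under-represented groups; bias: extra selection weight given
    to group B; each selected B applicant is assumed to inspire
    one future B applicant (the hypothesized feedback)."""
    for t in range(rounds):
        # Selection probability proportional to pool size, with a
        # small boost for the under-represented group B.
        weight_b = pool_b * (1 + bias)
        p_b = weight_b / (pool_a + weight_b)
        selected_b = sum(random.random() < p_b for _ in range(slots))
        # Positive reinforcement: visibility of selected B applicants
        # grows the future B applicant pool.
        pool_b += selected_b
        print(f"round {t:2d}: B pool = {pool_b:4d}, "
              f"B selected = {selected_b:2d}/{slots}")

simulate()

Under this toy dynamic, the under-represented pool, and hence its selection share, grows over rounds, which is the long-term effect the paper studies.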

Controlled Privacy Leakage Propagation Throughout Overlapping Grouped Learning

Submitted by admin on Wed, 06/19/2024 - 08:45
Federated Learning (FL) is a standard protocol for collaborative learning. In FL, multiple workers jointly train a shared model by exchanging model updates computed on their local data, while keeping the raw data itself local. Since workers naturally form groups based on common interests and privacy policies, we are motivated to extend standard FL to a setting with multiple, potentially overlapping groups.
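As background, here is a minimal sketch of the update exchange in standard FL, using a federated-averaging-style aggregation rule (our choice for illustration; the toy least-squares objective and all names are assumptions, and the paper's grouped protocol extends this picture to overlapping groups):

import numpy as np

def local_update(model, data, lr=0.1):
    """One local gradient step on a worker's private data
    (toy least-squares objective); only the update leaves the worker."""
    X, y = data
    grad = X.T @ (X @ model - y) / len(y)
    return model - lr * grad

def fedavg_round(model, workers):
    """Server averages the locally updated models (FedAvg-style)."""
    updates = [local_update(model, data) for data in workers]
    return np.mean(updates, axis=0)

# Hypothetical usage: three workers, shared 2-parameter linear model.
rng = np.random.default_rng(0)
workers = [(rng.normal(size=(20, 2)), rng.normal(size=20)) for _ in range(3)]
model = np.zeros(2)
for _ in range(10):
    model = fedavg_round(model, workers)
print(model)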

Neural Distributed Source Coding

Submitted by admin on Sat, 06/15/2024 - 08:45
We consider the Distributed Source Coding (DSC) problem: the task of encoding an input when correlated side information is available only to the decoder. Remarkably, Slepian and Wolf showed in 1973 that an encoder without access to the side information can asymptotically achieve the same compression rate as when the side information is available to it. This seminal result was later extended to lossy compression of distributed sources by Wyner, Ziv, Berger, and Tung.
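In symbols (our restatement of the classical result, not part of the abstract): for jointly distributed sources \((X, Y)\) with \(Y\) available only at the decoder, lossless compression of \(X\) is asymptotically possible at any rate

\[
R_X > H(X \mid Y),
\]

exactly the rate achievable when the encoder also observes \(Y\); the Wyner-Ziv rate-distortion function plays the analogous role in the lossy setting.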

Exploring the Symbiotic Relationship Between Information Theory and Machine Learning

Submitted by admin on Tue, 06/11/2024 - 04:46

In the vast realm of artificial intelligence, two pillars stand prominently: Information Theory and Machine Learning. At first glance, they might seem like distinct fields with little in common, but upon closer inspection, their connection runs deep, forming a symbiotic relationship that underpins many modern AI advancements.