Multi-Group Fairness Evaluation via Conditional Value-at-Risk Testing

Machine learning (ML) models used in prediction and classification tasks may display performance disparities across population groups determined by sensitive attributes (e.g., race, sex, age). We consider the problem of evaluating the performance of a fixed ML model across population groups defined by multiple sensitive attributes (e.g., race and sex and age).
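
For a sense of the quantity involved, the sketch below computes an empirical conditional value-at-risk (CVaR) over per-group losses, i.e., the average loss among the worst-off fraction of groups; the 0-1 loss, the synthetic labels, and the alpha level are placeholder assumptions, not the paper's actual testing procedure.

```python
import numpy as np

def group_losses(y_true, y_pred, groups):
    """Average 0-1 loss within each (intersectional) group."""
    return np.array([np.mean(y_true[groups == g] != y_pred[groups == g])
                     for g in np.unique(groups)])

def cvar(losses, alpha=0.25):
    """Empirical CVaR: mean loss over the worst alpha-fraction of groups."""
    k = max(1, int(np.ceil(alpha * len(losses))))
    return np.sort(losses)[-k:].mean()

# Toy example: 8 groups standing in for combinations of sensitive attributes.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
y_pred = rng.integers(0, 2, size=1000)
groups = rng.integers(0, 8, size=1000)
print(cvar(group_losses(y_true, y_pred, groups), alpha=0.25))
```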

Contraction of Locally Differentially Private Mechanisms

We investigate the contraction properties of locally differentially private mechanisms. More specifically, we derive tight upper bounds on the divergence between the output distributions $P\mathsf{K}$ and $Q\mathsf{K}$ of an $\varepsilon$-LDP mechanism $\mathsf{K}$ in terms of a divergence between the corresponding input distributions $P$ and $Q$, respectively. Our first main technical result presents a sharp upper bound on the $\chi^{2}$-divergence $\chi^{2}(P\mathsf{K}\|Q\mathsf{K})$ in terms of $\chi^{2}(P\|Q)$ and $\varepsilon$. We also show that the same result holds for a large family of divergences, including KL-divergence and squared Hellinger distance.
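
To see the contraction phenomenon numerically, the sketch below pushes two input distributions through binary randomized response (a standard $\varepsilon$-LDP mechanism) and compares the input and output $\chi^{2}$-divergences; the particular distributions and the value of $\varepsilon$ are arbitrary placeholders, and the sharp bound itself is the paper's contribution, not derived here.

```python
import numpy as np

def chi2_divergence(p, q):
    """chi^2(P || Q) = sum_x (p(x) - q(x))^2 / q(x)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.sum((p - q) ** 2 / q)

def randomized_response(dist, eps):
    """Push a Bernoulli distribution through binary randomized response (eps-LDP)."""
    keep = np.exp(eps) / (1 + np.exp(eps))  # probability of reporting the true bit
    p1 = dist[1] * keep + dist[0] * (1 - keep)
    return np.array([1 - p1, p1])

P, Q, eps = np.array([0.7, 0.3]), np.array([0.4, 0.6]), 1.0
print("input  chi2:", chi2_divergence(P, Q))
print("output chi2:", chi2_divergence(randomized_response(P, eps),
                                      randomized_response(Q, eps)))
```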

Detection of Sparse Mixtures With Differential Privacy

Detection of sparse signals arises in many modern applications such as signal processing, bioinformatics, finance, and disease surveillance. However, in many of these applications the data may contain sensitive personal information, which should be protected during the analysis. In this article, we consider the problem of $(\varepsilon, \delta)$-differentially private detection of a general sparse mixture, with a focus on how privacy affects the detection power.
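
As a minimal illustration of the setting (not the article's test), the sketch below releases a clipped-mean statistic through the Laplace mechanism and thresholds it, which yields an $\varepsilon$-DP detection rule with $\delta = 0$; the choice of statistic, clipping level, and threshold are placeholder assumptions.

```python
import numpy as np

def private_mean_test(x, eps, clip=3.0, threshold=0.1, seed=None):
    """Release a clipped-mean statistic via the Laplace mechanism and threshold it.

    The clipped mean has sensitivity 2*clip/n under replace-one neighbors, so
    Laplace noise with scale 2*clip/(n*eps) gives eps-differential privacy.
    """
    rng = np.random.default_rng(seed)
    n = len(x)
    stat = np.mean(np.clip(x, -clip, clip))
    noisy_stat = stat + rng.laplace(scale=2 * clip / (n * eps))
    return noisy_stat, noisy_stat > threshold  # reject "no signal" if large

# Sparse mixture: a small fraction of samples carries an elevated mean.
rng = np.random.default_rng(1)
n, frac, mu = 10_000, 0.01, 2.0
signal = rng.random(n) < frac
x = rng.normal(loc=mu * signal)
print(private_mean_test(x, eps=1.0, seed=1))
```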

Efficient and Robust Classification for Sparse Attacks

Over the past two decades, the adoption of neural networks has surged in parallel with their performance. Concurrently, we have observed the inherent fragility of these prediction models: small changes to the inputs can induce classification errors across entire datasets. In the following study, we examine perturbations constrained by the $\ell_{0}$-norm, a potent attack model in the domains of computer vision, malware detection, and natural language processing.
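
To make the threat model concrete, the sketch below crafts an $\ell_{0}$-bounded perturbation of a toy linear classifier by modifying only the k most influential coordinates; it illustrates $\ell_{0}$-constrained attacks in general and is not the attack or defense studied in this work.

```python
import numpy as np

def l0_attack_linear(x, w, b, k=3, step=1.0):
    """Perturb at most k coordinates of x to shrink the margin of a linear score w.x + b."""
    score = w @ x + b
    grad = w * np.sign(score)            # direction that decreases the margin
    idx = np.argsort(-np.abs(grad))[:k]  # the k most influential coordinates
    x_adv = x.copy()
    x_adv[idx] -= step * np.sign(grad[idx])
    return x_adv

rng = np.random.default_rng(0)
w, b = rng.normal(size=20), 0.0
x = rng.normal(size=20)
x_adv = l0_attack_linear(x, w, b, k=3)
print("clean score:", w @ x + b,
      "| adversarial score:", w @ x_adv + b,
      "| coordinates changed:", int(np.sum(x != x_adv)))
```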

Noisy Computing of the OR and MAX Functions

We consider the problem of computing a function of n variables using noisy queries, where each query is incorrect with some fixed and known probability $p \in (0,1/2)$. Specifically, we consider the computation of the $\textsf{OR}$ function of n bits (where queries correspond to noisy readings of the bits) and the $\textsf{MAX}$ function of n real numbers (where queries correspond to noisy pairwise comparisons).
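
As a simple repetition-based baseline for this setting (not the query-efficient algorithms analyzed in the paper), the sketch below estimates the OR of n bits by majority-voting repeated noisy readings of each bit; the repetition count is a placeholder.

```python
import random

def noisy_query(bit, p, rng):
    """Return the bit, flipped with probability p."""
    return bit ^ (rng.random() < p)

def noisy_or(bits, p, repeats=15, rng=None):
    """Majority-vote each bit over repeated noisy readings, then OR the estimates."""
    rng = rng or random.Random(0)
    result = 0
    for b in bits:
        votes = sum(noisy_query(b, p, rng) for _ in range(repeats))
        result |= int(votes > repeats / 2)
    return result

bits = [0] * 99 + [1]
print(noisy_or(bits, p=0.2))  # prints 1 with high probability
```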

LightVeriFL: A Lightweight and Verifiable Secure Aggregation for Federated Learning

Secure aggregation protects the local models of the users in federated learning by preventing the server from obtaining any information beyond the aggregate model at each iteration. However, naive secure aggregation does not protect the integrity of the aggregate model against a malicious server that forges the aggregation result, which motivates verifiable aggregation in federated learning.
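
As a toy illustration of the verification idea only, and not LightVeriFL's actual lightweight protocol, the sketch below has clients publish a random linear sketch of their updates so that a forged aggregate is inconsistent with the sum of the sketches; the sketch dimension and the trust assumptions are deliberate simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_clients = 16, 5

# Each client holds a local model update; the server should return their sum.
updates = [rng.normal(size=d) for _ in range(n_clients)]
true_aggregate = np.sum(updates, axis=0)

# Clients agree on a random linear sketch A and publish A @ update.
# Linearity means the published sketches must sum to the sketch of the aggregate.
A = rng.normal(size=(4, d))
client_sketches = [A @ u for u in updates]

def verify(aggregate, sketches, A, tol=1e-8):
    return np.allclose(A @ aggregate, np.sum(sketches, axis=0), atol=tol)

print(verify(true_aggregate, client_sketches, A))  # True: honest server
forged = true_aggregate + rng.normal(size=d)        # malicious forgery
print(verify(forged, client_sketches, A))           # False with high probability
```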

Learning Algorithm Generalization Error Bounds via Auxiliary Distributions

Generalization error bounds are essential for understanding how well machine learning models perform on unseen data. In this work, we propose a novel approach, the Auxiliary Distribution Method, that yields new upper bounds on the expected generalization error suited to supervised learning scenarios.
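
For context, the quantity being bounded and the classical mutual-information bound that this line of work builds on can be written as follows; the bound shown is the standard one of Xu and Raginsky (2017), not the new auxiliary-distribution bounds of this paper.

```latex
% Expected generalization error of a learning algorithm P_{W|S} trained on an
% i.i.d. sample S = (Z_1, ..., Z_n) ~ mu^n with loss ell:
\[
  \overline{\mathrm{gen}}(\mu, P_{W\mid S})
    = \mathbb{E}\bigl[L_\mu(W) - L_S(W)\bigr],
  \qquad
  L_\mu(w) = \mathbb{E}_{Z\sim\mu}\,\ell(w,Z),
  \quad
  L_S(w) = \tfrac{1}{n}\textstyle\sum_{i=1}^{n}\ell(w,Z_i).
\]
% Classical mutual-information bound, assuming ell(w, Z) is sigma-sub-Gaussian
% under Z ~ mu for every w:
\[
  \bigl|\overline{\mathrm{gen}}(\mu, P_{W\mid S})\bigr|
    \le \sqrt{\frac{2\sigma^{2}}{n}\, I(S;W)}.
\]
```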

Neural Distributed Compressor Discovers Binning

We consider lossy compression of an information source when the decoder has lossless access to a correlated one. This setup, also known as the Wyner-Ziv problem, is a special case of distributed source coding. To this day, practical approaches for the Wyner-Ziv problem have neither been fully developed nor heavily investigated. We propose a data-driven method based on machine learning that leverages the universal function approximation capability of artificial neural networks.
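
A minimal sketch, assuming PyTorch, of the kind of learned Wyner-Ziv codec this describes: an encoder that sees only the source and a decoder that combines the coarsely quantized code with the correlated side information; the architecture, quantizer, and distortion-only loss are placeholder choices, not the authors' design.

```python
import torch
import torch.nn as nn

class WynerZivCodec(nn.Module):
    """Toy learned codec: the encoder sees only X, the decoder sees the code and side info Y."""
    def __init__(self, dim=8, code_dim=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, code_dim))
        self.decoder = nn.Sequential(nn.Linear(code_dim + dim, 32), nn.ReLU(), nn.Linear(32, dim))

    def forward(self, x, y):
        code = self.encoder(x)
        # Straight-through rounding as a stand-in for quantization to a finite rate.
        code_q = code + (torch.round(code) - code).detach()
        return self.decoder(torch.cat([code_q, y], dim=-1))

# Correlated source and side information: Y is a noisy copy of X.
torch.manual_seed(0)
x = torch.randn(256, 8)
y = x + 0.1 * torch.randn(256, 8)
model = WynerZivCodec()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = ((model(x, y) - x) ** 2).mean()  # distortion only; a rate term is omitted
    loss.backward()
    opt.step()
print(float(loss))
```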

Differentially Private Stochastic Linear Bandits: (Almost) for Free

In this paper, we propose differentially private algorithms for the problem of stochastic linear bandits in the central, local, and shuffled models. In the central model, we achieve almost the same regret as the optimal non-private algorithms, which means we get privacy for free. In particular, we achieve a regret of $\tilde{O}\left(\sqrt{T}+\frac{1}{\varepsilon}\right)$, matching the known lower bound for private linear bandits, while the best previously known algorithm achieves $\tilde{O}\left(\frac{1}{\varepsilon}\sqrt{T}\right)$.
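
As a rough template rather than the paper's algorithm, the sketch below follows the common noisy-statistics recipe for private LinUCB in the central model: perturb the Gram matrix and reward vector before computing the estimate and confidence widths. The noise scale and exploration parameter are placeholders and are not calibrated to any formal privacy guarantee.

```python
import numpy as np

def private_linucb_step(contexts, A, b, noise_scale, beta, rng):
    """One round of a noisy-statistics LinUCB template.

    A and b are the running Gram matrix and reward vector; fresh Gaussian noise of
    a placeholder scale stands in for the privacy mechanism (a real calibration
    depends on the privacy model and the time horizon).
    """
    A_priv = A + rng.normal(scale=noise_scale, size=A.shape)
    A_priv = (A_priv + A_priv.T) / 2 + 1e-3 * np.eye(A.shape[0])  # keep it well-conditioned
    b_priv = b + rng.normal(scale=noise_scale, size=b.shape)
    A_inv = np.linalg.inv(A_priv)
    theta_hat = A_inv @ b_priv
    widths = np.sqrt(np.maximum(np.einsum('ij,jk,ik->i', contexts, A_inv, contexts), 0.0))
    return int(np.argmax(contexts @ theta_hat + beta * widths))

rng = np.random.default_rng(0)
d, K = 5, 10
theta_star = rng.normal(size=d)
A, b = np.eye(d), np.zeros(d)
for t in range(500):
    contexts = rng.normal(size=(K, d))
    arm = private_linucb_step(contexts, A, b, noise_scale=0.5, beta=1.0, rng=rng)
    reward = contexts[arm] @ theta_star + 0.1 * rng.normal()
    A += np.outer(contexts[arm], contexts[arm])
    b += reward * contexts[arm]
print("estimated theta:", np.round(np.linalg.solve(A, b), 2))
print("true theta:     ", np.round(theta_star, 2))
```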