2024
Information-Theoretic Methods for Trustworthy and Reliable Machine Learning
Guest editors
Lalitha Sankar
Oliver Kosut
Alon Orlitsky
Flavio Calmon
Lele Wang
Ayfer Ozgur
Ofer Shayevitz

Antonious M. Girgis, Suhas Diggavi

We study the distributed mean estimation (DME) problem under privacy and communication constraints in the local differential privacy (LDP) and multi-message shuffled (MMS) privacy frameworks. DME has wide applications in both federated learning and analytics. We propose communication-efficient and differentially private algorithms for DME of bounded $\ell_2$-norm and $\ell_\infty$-norm vectors, and we show that they achieve order-optimal privacy-communication-performance trade-offs. Our algorithms are designed by assigning unequal privacy budgets to different resolutions of the vector (through its binary expansion) and appropriately combining this with coordinate sampling. These results directly yield guarantees for private federated learning algorithms. We also numerically evaluate the performance of our private DME algorithms.
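
As a rough illustration of how coordinate sampling and per-coordinate noise interact in private DME, the sketch below implements a basic $\varepsilon$-LDP baseline: each client releases $k$ sampled coordinates with Laplace noise calibrated by basic composition. This is a minimal sketch under assumed coordinate-wise boundedness, not the paper's unequal-privacy binary-expansion scheme; the function name `ldp_dme` and all parameters are illustrative assumptions.

```python
# Minimal eps-LDP baseline for distributed mean estimation via coordinate
# sampling + Laplace noise. Illustrative only: NOT the paper's unequal
# privacy-assignment / binary-expansion scheme.
import numpy as np

def ldp_dme(vectors, k, eps, c_inf, rng):
    """vectors: (n, d) client vectors with |x_j| <= c_inf per coordinate;
    each client releases k sampled coordinates under an eps LDP budget."""
    d = vectors.shape[1]
    est, counts = np.zeros(d), np.zeros(d)
    for x in vectors:
        idx = rng.choice(d, size=k, replace=False)        # coordinate sampling
        # Each released coordinate has sensitivity 2*c_inf; basic composition
        # splits eps across the k releases, hence the k/eps noise scale.
        noise = rng.laplace(scale=2.0 * c_inf * k / eps, size=k)
        est[idx] += x[idx] + noise
        counts[idx] += 1
    return est / np.maximum(counts, 1)                    # per-coordinate average

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(1000, 32))
print(np.linalg.norm(ldp_dme(X, k=8, eps=1.0, c_inf=1.0, rng=rng) - X.mean(axis=0)))
```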

Anand Jerry George, Lekshmi Ramesh, Aditya Vikram Singh, Himanshu Tyagi

We consider the problem of continually releasing an estimate of the population mean of a stream of samples that is user-level differentially private (DP). At each time instant, a user contributes a sample, and users can arrive in arbitrary order. Until now, the requirements of continual release and user-level privacy have been considered in isolation, but in practice they arise together: users often contribute data repeatedly, and multiple queries are made. We provide an algorithm that outputs a mean estimate at every time instant $t$ such that the overall release is user-level $\varepsilon$-DP and has the following error guarantee: denoting by $m_t$ the maximum number of samples contributed by a user, as long as $\tilde{\Omega}(1/\varepsilon)$ users have $m_t/2$ samples each, the error at time $t$ is $\tilde{O}(1/\sqrt{t}+\sqrt{m_t}/(t\varepsilon))$. This is a universal error guarantee, valid for all arrival patterns of the users. Furthermore, it (almost) matches the existing lower bounds for the single-release setting at all time instants when users have contributed an equal number of samples.
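
To make the interplay between continual release and user-level sensitivity concrete, here is a hedged baseline sketch, not the paper's algorithm: it releases only at exponentially spaced times, averages per-user means of samples assumed to lie in $[0,1]$, and splits the budget across the $O(\log T)$ releases by basic composition. The function name and the boundedness assumption are illustrative.

```python
# Hedged baseline for user-level DP continual mean release: release at
# t = 1, 2, 4, ... only, so basic composition is over O(log T) releases.
# Samples are assumed to lie in [0, 1]. NOT the paper's algorithm.
import math
import numpy as np

def continual_user_level_mean(stream, eps, T, rng):
    """stream yields (user_id, sample); returns [(t, private_estimate), ...]."""
    releases = int(math.log2(T)) + 1
    eps_per_release = eps / releases           # basic composition across releases
    sums, counts, out, next_release = {}, {}, [], 1
    for t, (uid, x) in enumerate(stream, start=1):
        sums[uid] = sums.get(uid, 0.0) + x
        counts[uid] = counts.get(uid, 0) + 1
        if t == next_release:
            user_means = np.array([sums[u] / counts[u] for u in sums])
            # Changing one user's data moves the average of per-user means
            # by at most 1/n_users when samples lie in [0, 1].
            noise = rng.laplace(scale=1.0 / (len(user_means) * eps_per_release))
            out.append((t, float(user_means.mean() + noise)))
            next_release *= 2
    return out

rng = np.random.default_rng(0)
stream = [(int(rng.integers(10)), float(rng.uniform())) for _ in range(64)]
print(continual_user_level_mean(stream, eps=1.0, T=64, rng=rng))
```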

Matteo Zecchin, Sangwoo Park, Osvaldo Simeone

In many real-world problems, predictions are leveraged to monitor and control cyber-physical systems, demanding guarantees on the satisfaction of reliability and safety requirements. However, predictions are inherently uncertain, and managing prediction uncertainty presents significant challenges in environments characterized by complex dynamics and forking trajectories. In this work, we assume access to a pre-designed probabilistic sequence model, implicit or explicit, which may have been obtained using model-based or model-free methods. We introduce probabilistic time series-conformal risk prediction (PTS-CRC), a novel post-hoc calibration procedure that operates on the predictions produced by any pre-designed probabilistic forecaster to yield reliable error bars. In contrast to prior art, PTS-CRC produces predictive sets based on an ensemble of multiple prototype trajectories sampled from the sequence model, supporting the efficient representation of forking uncertainties. Furthermore, unlike the state of the art, PTS-CRC can satisfy reliability definitions beyond coverage. This property is leveraged to devise a novel model predictive control (MPC) framework that addresses open-loop and closed-loop control problems under general average constraints on the quality or safety of the control policy. We experimentally validate the performance of PTS-CRC prediction and control in a number of use cases in the context of wireless networking. Across all the considered tasks, PTS-CRC predictors are shown to provide more informative predictive sets, as well as safe control policies with larger returns.
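
The multi-prototype idea can be illustrated with a coverage-only split-conformal sketch: sample $K$ trajectories from the forecaster, score the true trajectory by its distance to the nearest prototype, and calibrate a tube radius on held-out data. The `sampler` interface and `toy_sampler` below are assumptions for illustration; PTS-CRC itself handles general risks beyond coverage.

```python
# Coverage-only sketch in the spirit of PTS-CRC: predictive sets are unions
# of tubes around K trajectories sampled from an assumed forecaster
# interface sampler(x, K, rng). Illustrative, not the authors' code.
import numpy as np

def calibrate_radius(sampler, X_cal, Y_cal, K, alpha, rng):
    """Split-conformal radius so the union of tubes covers Y w.p. >= 1 - alpha."""
    scores = []
    for x, y in zip(X_cal, Y_cal):
        trajs = sampler(x, K, rng)                    # (K, horizon) prototypes
        # Worst deviation over the horizon to the nearest prototype.
        scores.append(min(float(np.max(np.abs(t - y))) for t in trajs))
    n = len(scores)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)   # conformal correction
    return float(np.quantile(scores, level))

def toy_sampler(x, K, rng):                           # stand-in sequence model
    return x + rng.normal(scale=0.1, size=(K, len(x)))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
Y = X + rng.normal(scale=0.1, size=X.shape)
radius = calibrate_radius(toy_sampler, X, Y, K=8, alpha=0.1, rng=rng)
print(radius)  # predictive set at x: union of radius-tubes around sampler(x, K)
```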

Osama Hanna, Antonious M. Girgis, Christina Fragouli, Suhas Diggavi

In this paper, we propose differentially private algorithms for the problem of stochastic linear bandits in the central, local, and shuffled models. In the central model, we achieve almost the same regret as the optimal non-private algorithms, which means we get privacy for free. In particular, we achieve a regret of $\tilde{O}\left(\sqrt{T}+\frac{1}{\varepsilon}\right)$, matching the known lower bound for private linear bandits, while the best previously known algorithm achieves $\tilde{O}\left(\frac{1}{\varepsilon}\sqrt{T}\right)$. In the local model, we achieve a regret of $\tilde{O}\left(\frac{1}{\varepsilon}\sqrt{T}\right)$, which matches the non-private regret for constant $\varepsilon$ but suffers a regret penalty when $\varepsilon$ is small. In the shuffled model, we also achieve a regret of $\tilde{O}\left(\sqrt{T}+\frac{1}{\varepsilon}\right)$, while the best previously known algorithm suffers a regret of $\tilde{O}\left(\frac{1}{\varepsilon}T^{3/5}\right)$. Our numerical evaluation validates our theoretical results. Our results generalize to contextual linear bandits with known context distributions.
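
A common way to privatize a linear bandit in the central model, and a rough picture of the statistics such algorithms protect, is to run LinUCB on noisy sufficient statistics released only at doubling times, so that only $O(\log T)$ noisy releases are composed. The sketch below follows that recipe with Gaussian noise; the noise scales and names are assumptions, and it is not the paper's regret-optimal scheme.

```python
# Sketch of a central-model private linear bandit: LinUCB on sufficient
# statistics (Gram matrix V, reward vector u) privatized once per doubling
# epoch. Noise scales are rough assumptions; NOT the paper's scheme.
import numpy as np

def private_linucb(arms, reward_fn, T, eps, delta, rng, lam=1.0, beta=2.0):
    d = arms.shape[1]
    V, u = lam * np.eye(d), np.zeros(d)
    epochs = int(np.log2(T)) + 1
    sigma = np.sqrt(2 * epochs * np.log(1.25 / delta)) / eps  # Gaussian mechanism
    V_priv, u_priv, next_update = V.copy(), u.copy(), 1
    for t in range(1, T + 1):
        theta = np.linalg.solve(V_priv, u_priv)
        Vinv = np.linalg.inv(V_priv)
        widths = np.sqrt(np.maximum(np.einsum('ij,jk,ik->i', arms, Vinv, arms), 0))
        a = arms[int(np.argmax(arms @ theta + beta * widths))]
        V += np.outer(a, a)
        u += reward_fn(a) * a
        if t == next_update:                 # O(log T) noisy releases in total
            N = rng.normal(scale=sigma, size=(d, d))
            V_priv = V + (N + N.T) / 2
            # Shift to keep the released matrix positive definite
            # (post-processing of the private output, so privacy-safe).
            shift = max(0.0, lam - float(np.linalg.eigvalsh(V_priv).min()))
            V_priv += shift * np.eye(d)
            u_priv = u + rng.normal(scale=sigma, size=d)
            next_update *= 2
    return theta

rng = np.random.default_rng(0)
arms = rng.normal(size=(20, 4))
arms /= np.linalg.norm(arms, axis=1, keepdims=True)
theta_star = np.full(4, 0.5)
reward = lambda a: float(a @ theta_star + rng.normal(scale=0.1))
print(private_linucb(arms, reward, T=1024, eps=1.0, delta=1e-5, rng=rng))
```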

Bhagyashree Puranik, Ozgur Guldogan, Upamanyu Madhow, Ramtin Pedarsani

While much of the rapidly growing literature on fair decision-making focuses on metrics for one-shot decisions, recent work has raised the intriguing possibility of designing sequential decision-making to positively impact long-term social fairness. In selection processes such as college admissions or hiring, biasing slightly towards applicants from under-represented groups is hypothesized to provide positive feedback that increases the pool of under-represented applicants in future selection rounds, thus enhancing fairness in the long term. In this paper, we examine this hypothesis and its consequences in a setting in which multiple agents select from a common pool of applicants. We propose the Multi-agent Fair-Greedy policy, which balances greedy score maximization and fairness. Under this policy, we prove that the resource pool and the admissions converge to a long-term fairness target set by the agents when the score distributions across the groups in the population are identical. We provide empirical evidence of the existence of equilibria under non-identical score distributions through synthetic and adapted real-world datasets. We then sound a cautionary note for more complex applicant-pool evolution models, under which uncoordinated behavior by the agents can cause negative reinforcement, leading to a reduction in the fraction of under-represented applicants. Our results indicate that, while positive reinforcement is a promising mechanism for long-term fairness, policies must be designed carefully to be robust to variations in the evolution model, with a number of open issues that remain to be explored by algorithm designers, social scientists, and policymakers.
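
The selection step of a fairness-steering greedy policy can be illustrated with a toy single-round version: reserve roughly an $\alpha$ fraction of the capacity for the under-represented group's top scorers and fill the remainder greedily by score. This is only a stand-in for intuition; the paper's Multi-agent Fair-Greedy policy and its pool-evolution dynamics are more involved, and all names here are assumptions.

```python
# Toy single-round sketch of fairness-steering greedy selection. Illustrative
# only; NOT the paper's Multi-agent Fair-Greedy policy or its dynamics.
import numpy as np

def fair_greedy_select(scores, groups, capacity, alpha):
    """Admit `capacity` applicants: reserve ~alpha*capacity slots for group 1,
    filled by its top scorers, then fill the rest greedily by score."""
    order = np.argsort(-scores)                       # best scores first
    reserved = int(round(alpha * capacity))
    g1 = [i for i in order if groups[i] == 1][:reserved]
    taken = set(g1)
    rest = [i for i in order if i not in taken][:capacity - len(g1)]
    return np.array(g1 + rest)

rng = np.random.default_rng(0)
groups = rng.integers(0, 2, size=200)
scores = rng.normal(size=200)                         # identical score distributions
admitted = fair_greedy_select(scores, groups, capacity=20, alpha=0.5)
print((groups[admitted] == 1).mean())                 # admitted group-1 fraction
```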