Strategic Successive Refinement With Interdependent Decoders Cost Functions

In decentralized and decision-oriented communication paradigms, autonomous devices strategically implement information compression policies. In this work, we study a strategic communication game between an encoder and two decoders. An i.i.d. information source, observed by the encoder, is transmitted to the decoders via two perfect links, one reaching the first decoder only and the other reaching both decoders, as in the successive refinement setup.
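
To make the two-link structure concrete, the following toy sketch (a minimal illustration in Python, not the paper's coding scheme, and with the strategic encoding policies omitted) sends a coarse description over the link that reaches both decoders and a refinement of the residual over the link that reaches the first decoder only; all step sizes are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(x, step):
    """Uniform scalar quantizer with the given step size."""
    return step * np.round(x / step)

# i.i.d. Gaussian source observed by the encoder
x = rng.normal(size=10_000)

# Link reaching both decoders: coarse description of the source
coarse = quantize(x, step=1.0)

# Link reaching the first decoder only: refinement of the residual
refine = quantize(x - coarse, step=0.25)

# Decoder 2 sees only the common message; decoder 1 also sees the refinement
x_hat2 = coarse
x_hat1 = coarse + refine

print("decoder 2 MSE:", np.mean((x - x_hat2) ** 2))
print("decoder 1 MSE:", np.mean((x - x_hat1) ** 2))
```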

Universal Gaussian Quantization With Side-Information Using Polar Lattices

We consider universal quantization with side information for Gaussian observations, where the side information is a noisy version of the sender's observation with noise variance unknown to the sender. In this paper, we propose a universally rate-optimal and practical quantization scheme for all values of the unknown noise variance.
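
The paper's construction uses polar lattices, which a short snippet cannot reproduce; the toy scalar Wyner-Ziv-style binning sketch below (all parameters assumed) only illustrates how a decoder can combine a coarsely described index with its noisy side information.

```python
import numpy as np

rng = np.random.default_rng(1)

n, q, M = 10_000, 0.5, 8          # samples, fine step, bin size (modulo)
x = rng.normal(size=n)            # sender's Gaussian observation
y = x + 0.3 * rng.normal(size=n)  # side information; its noise variance is
                                  # unknown to the sender in the paper's setting

# Encoder: fine quantization, then binning (send the index modulo M,
# i.e. only log2(M) = 3 bits per sample)
idx = np.round(x / q).astype(int)
sent = np.mod(idx, M)

# Decoder: within the announced bin, pick the codeword closest to y
k = np.round((y / q - sent) / M)
x_hat = q * (sent + M * k)

print("MSE:", np.mean((x - x_hat) ** 2))
```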

Compression for Multi-Arm Bandits

The multi-armed bandit (MAB) problem is one of the most well-known active learning frameworks. The aim is to select the best among a set of actions by sequentially observing rewards that come from an unknown distribution. Recently, a number of distributed bandit applications have become popular over wireless networks, where agents geographically separated from a learner collect and communicate the observed rewards. In this paper, we propose a compression scheme that compresses the rewards collected by the distributed agents.
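
As a toy illustration of reward compression in a bandit loop (not the scheme proposed in the paper; the arm means, horizon, and 1-bit quantizer are all assumed), each agent below sends a single unbiased, stochastically quantized bit per observed reward, and the learner runs standard UCB on those bits.

```python
import numpy as np

rng = np.random.default_rng(2)
means = np.array([0.3, 0.5, 0.7])   # unknown arm means (assumed for the demo)
T, K = 5000, len(means)

def agent_reward(arm):
    """Reward observed at a remote agent, clipped to [0, 1]."""
    return float(np.clip(means[arm] + 0.1 * rng.normal(), 0.0, 1.0))

def compress_1bit(r):
    """Unbiased stochastic 1-bit quantization of a reward in [0, 1]."""
    return float(rng.random() < r)

counts = np.zeros(K)
sums = np.zeros(K)
for t in range(1, T + 1):
    if t <= K:
        arm = t - 1                                  # pull each arm once
    else:
        ucb = sums / counts + np.sqrt(2 * np.log(t) / counts)
        arm = int(np.argmax(ucb))
    counts[arm] += 1
    sums[arm] += compress_1bit(agent_reward(arm))    # agent sends one bit

print("empirical means from 1-bit rewards:", np.round(sums / counts, 3))
print("most-pulled arm:", int(np.argmax(counts)))
```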

Universal and Succinct Source Coding of Deep Neural Networks

Deep neural networks have shown incredible performance for inference tasks in a variety of domains, but require significant storage space, which limits scaling and use for on-device intelligence. This paper is concerned with finding universal lossless compressed representations of deep feedforward networks with synaptic weights drawn from discrete sets, and directly performing inference without full decompression.
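
The universal coding and succinct structures of the paper go beyond a short snippet, but the sketch below (with an assumed 5-level weight alphabet) conveys the flavor of inference without full decompression: a layer is stored as bit-packed codebook indices, and the matrix-vector product decodes one row at a time, never materializing the full floating-point weight matrix. Bit-packing is a fixed-rate stand-in for the paper's entropy coding.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synaptic weights drawn from a small discrete set, as in the paper's setting
levels = np.array([-0.5, -0.25, 0.0, 0.25, 0.5], dtype=np.float32)
rows, cols, bits = 256, 128, 3          # 3 bits suffice for 5 levels
W = rng.integers(0, len(levels), size=(rows, cols))

# Pack each 3-bit index, least-significant bit first, into a byte buffer
bit_planes = ((W[..., None] >> np.arange(bits)) & 1).astype(np.uint8)
packed = np.packbits(bit_planes.reshape(-1))

def matvec_compressed(packed, x):
    """y = W @ x, decoding one packed row at a time (no full decompression)."""
    bytes_per_row = cols * bits // 8    # byte-aligned for these dimensions
    y = np.empty(rows, dtype=np.float32)
    for r in range(rows):
        row = packed[r * bytes_per_row:(r + 1) * bytes_per_row]
        b = np.unpackbits(row).reshape(cols, bits).astype(np.int64)
        idx = (b << np.arange(bits)).sum(axis=1)
        y[r] = levels[idx] @ x
    return y

x = rng.normal(size=cols).astype(np.float32)
assert np.allclose(matvec_compressed(packed, x), levels[W] @ x, atol=1e-4)
print("compressed bytes:", packed.nbytes, "vs float32:", rows * cols * 4)
```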

Lossy Compression of Noisy Data for Private and Data-Efficient Learning

Storage-efficient privacy-preserving learning is crucial due to the increasing amounts of sensitive user data required for modern learning tasks. We propose a framework for reducing the storage cost of user data while simultaneously providing privacy guarantees, without essential loss in the utility of the data for learning. Our method comprises noise injection followed by lossy compression.
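
A minimal sketch of this noise-then-compress pipeline is given below (all parameters are assumed, and the illustrative Gaussian noise is not calibrated to a formal differential-privacy guarantee): noise is injected into the data, the noisy values are uniformly quantized, and only the quantized values are stored.

```python
import numpy as np

rng = np.random.default_rng(4)

def privatize_and_compress(x, sigma, step):
    """Noise injection followed by lossy (uniform scalar) compression."""
    noisy = x + sigma * rng.normal(size=x.shape)   # privacy noise
    return step * np.round(noisy / step)           # quantized values to store

# Sensitive user data (here: standard Gaussian features)
x = rng.normal(size=10_000)
stored = privatize_and_compress(x, sigma=0.5, step=0.25)

# Storage cost: empirical entropy of the quantized values vs. raw float64
vals, counts = np.unique(stored, return_counts=True)
p = counts / counts.sum()
print("bits/sample ~", -(p * np.log2(p)).sum(), "vs 64 for raw float64")
print("distortion (MSE):", np.mean((x - stored) ** 2))
```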