Secure MISO Broadcast Channel: An Interplay Between CSIT and Network Topology

Submitted by admin on Mon, 10/28/2024 - 01:24

We study the problem of secure transmission over a Gaussian two-user multiple-input single-output (MISO) broadcast channel (BC) under the assumption that the links connecting the transmitter to the two receivers may have statistically unequal strength. In addition, the state of the channel to each receiver is conveyed to the transmitter with delay. We focus on a two-state topological setting of strong vs. weak links.

Editorial


Welcome to the third issue of the Journal on Selected Areas in Information Theory (JSAIT), focusing on "Estimation and Inference" in modern information sciences.

Recovering Data Permutations From Noisy Observations: The Linear Regime


This article considers a noisy data structure recovery problem. The goal is to investigate the following question: given a noisy observation of a permuted data set, according to which permutation was the original data sorted? The focus is on scenarios where data is generated according to an isotropic Gaussian distribution, and the noise is additive Gaussian with an arbitrary covariance matrix. This problem is posed within a hypothesis testing framework.
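As a toy illustration of the recovery question, consider the special case of isotropic (identity-covariance) additive noise: by a rearrangement argument, the likelihood-maximizing guess for the permutation is simply the ranking of the noisy observation. The sketch below uses fixed values rather than random Gaussian draws so the outcome is deterministic; the specific numbers are illustrative assumptions, not from the article.

```python
import numpy as np

# Sorted "original" data (a stand-in for sorted isotropic Gaussian samples).
x_sorted = np.array([-1.5, -0.4, 0.3, 1.2, 2.6])

# Unknown permutation applied to the data before observation.
pi = np.array([3, 0, 4, 1, 2])

# Noisy permuted observation: small additive noise on each entry.
noise = np.array([0.05, -0.03, 0.02, -0.04, 0.01])
y = x_sorted[pi] + noise

# In the isotropic-noise case, the optimal estimate of the permutation
# is the vector of ranks of the observation: argsort of argsort.
pi_hat = np.argsort(np.argsort(y))

print(pi_hat.tolist())  # -> [3, 0, 4, 1, 2], matching pi
```

With noise large relative to the gaps between data points, the ranks of `y` start to swap and recovery fails, which is exactly the regime the hypothesis-testing analysis characterizes.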

Global Multiclass Classification and Dataset Construction via Heterogeneous Local Experts


In the domains of dataset construction and crowdsourcing, a notable challenge is to aggregate labels from a heterogeneous set of labelers, each of whom is potentially an expert in some subset of tasks (and less reliable in others). To reduce costs of hiring human labelers or training automated labeling systems, it is of interest to minimize the number of labelers while ensuring the reliability of the resulting dataset.
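One simple baseline for aggregating labels from heterogeneous labelers is a reliability-weighted vote: each labeler's vote counts in proportion to an estimated per-labeler reliability. This is a hedged sketch of that generic idea, not the aggregation scheme proposed in the paper; the labeler names and reliability values are hypothetical.

```python
from collections import defaultdict

def aggregate(labels, reliability):
    """Weighted plurality vote.

    labels:      dict mapping labeler -> proposed label for one task
    reliability: dict mapping labeler -> weight in [0, 1]
                 (hypothetical reliability estimates; unknown labelers
                 default to 0.5, i.e., an uninformative prior)
    """
    scores = defaultdict(float)
    for labeler, label in labels.items():
        scores[label] += reliability.get(labeler, 0.5)
    return max(scores, key=scores.get)

votes = {"ann": "cat", "bob": "dog", "eve": "cat"}
rel = {"ann": 0.9, "bob": 0.6, "eve": 0.7}
print(aggregate(votes, rel))  # -> "cat"  (score 1.6 vs. 0.6)
```

The cost/reliability trade-off in the abstract then becomes: how few labelers (and how rough a reliability estimate) can one get away with while the weighted vote still returns the correct label with high probability?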

Generalization Bounds via Information Density and Conditional Information Density


We present a general approach, based on an exponential inequality, to derive bounds on the generalization error of randomized learning algorithms. Using this approach, we provide bounds on the average generalization error as well as bounds on its tail probability, for both the PAC-Bayesian and single-draw scenarios. Specifically, for the case of sub-Gaussian loss functions, we obtain novel bounds that depend on the information density between the training data and the output hypothesis.
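For orientation, the classical average-case bound in this family (due to Xu and Raginsky) has the following shape; the paper's bounds refine and extend results of this type, and the display below is that standard bound, not the paper's own.

```latex
% For a loss that is \sigma-sub-Gaussian under the data distribution,
% a training set S of n i.i.d. samples, and output hypothesis W:
\mathbb{E}\!\left[\operatorname{gen}(W, S)\right]
  \;\le\; \sqrt{\frac{2\sigma^{2}\, I(W; S)}{n}},
```

where $I(W;S)$ is the mutual information between the training data and the learned hypothesis, i.e., the expectation of the information density the abstract refers to. Tail (single-draw and PAC-Bayesian) versions replace the mutual information with the information density itself, evaluated at the realized $(W, S)$ pair.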