To be announced
As part of the internet of things (IoT), the number of sensor nodes that wish to communicate with each other has exploded and is expected to grow dramatically further. This increase in communicating devices inherently leads to more involved communication and hypothesis testing scenarios, and thus calls for new coding and testing strategies. The talk presents new strategies and corresponding error exponents for different network scenarios, and proves information-theoretic optimality of the proposed strategies in some cases. Special attention is given to scenarios where information collected at a sensor is desired at multiple decision centres, and where communication is multi-hop with sensor nodes acting as relays. In these networks, sensors generally compete for network resources, and relay sensors can either combine received information with their own sensed information or forward intermediate decisions to other nodes. Depending on the studied error exponents, some of these intermediate decisions require special protection mechanisms when sent over the network. The talk is based on joint work with Sadaf Salehkalaibar, Roy Timo, and Ligong Wang.
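As a toy baseline for the error exponents in question (a sketch of our own, not code or results from the talk): in the centralized single-sensor setting, Stein's lemma says that under a fixed type-I error constraint, the best type-II error probability decays as 2^(-n D(P0||P1)). The sketch below, with all function names hypothetical, computes the exact Neyman-Pearson type-II error for testing Bernoulli(p0) against Bernoulli(p1) from n i.i.d. samples and compares its empirical exponent to D(P0||P1):

```python
import math

def kl_bern(p, q):
    """Binary relative entropy D(Bern(p) || Bern(q)) in bits."""
    return p * math.log2(p / q) + (1 - p) * math.log2((1 - p) / (1 - q))

def log_binom_pmf(n, k, p):
    """Natural-log pmf of Binomial(n, p) at k, via lgamma to avoid overflow."""
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + k * math.log(p) + (n - k) * math.log(1 - p))

def stein_exponent_estimate(n, p0, p1, eps=0.05):
    """Exact Neyman-Pearson type-II exponent for Bern(p0) vs Bern(p1), p0 < p1.

    The likelihood ratio is monotone in k (the number of ones), so the
    optimal acceptance region for H0 has the form {k <= thr}.
    """
    p0_pmf = [math.exp(log_binom_pmf(n, k, p0)) for k in range(n + 1)]
    p1_pmf = [math.exp(log_binom_pmf(n, k, p1)) for k in range(n + 1)]
    # smallest threshold whose type-I error P0(k > thr) is at most eps
    tail, thr = 1.0, n
    for k in range(n + 1):
        tail -= p0_pmf[k]
        if tail <= eps:
            thr = k
            break
    beta = sum(p1_pmf[: thr + 1])  # type-II error P1(k <= thr)
    return -math.log2(beta) / n
```

For growing n the estimate approaches the Stein exponent D(P0||P1); the multi-hop and multi-centre settings of the talk replace this single-observer quantity with more intricate exponents.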
The downlink of a cloud radio access network (C-RAN) architecture can be modeled as a diamond network. The baseband unit (BBU) is connected to remote radio heads (RRHs) via fiber links that are modeled as rate-limited bit pipes. Bounds on the rates for reliable communication are evaluated for single-antenna RRHs. A lower bound is based on Marton's coding, which facilitates dependence across the RRH signals. An upper bound uses Ozarow's technique of augmenting the system with an auxiliary random variable. The bounds are studied over scalar Gaussian C-RANs and are shown to meet, characterizing the capacity in interesting regimes of operation. The bounds are also evaluated for an abstract model: a noise-free binary adder channel (BAC). The capacity of the BAC is established for all ranges of bit-pipe capacities, which appears to yield a new combinatorial result on sum sets. The talk is based on joint work with Shirin Saeedi Bidokhti and Shlomo Shamai.
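To give a flavor of the abstract model (a toy sketch of our own, not code from the talk): in a noise-free binary adder channel the output is Y = X1 + X2 over the integers, so each channel use delivers exactly H(Y) bits. For independent Bernoulli inputs this is easy to tabulate; uniform inputs give 1.5 bits per use:

```python
import numpy as np
from itertools import product

def bac_output_entropy(p1, p2):
    """Entropy H(Y) in bits of Y = X1 + X2 for independent
    X1 ~ Bern(p1), X2 ~ Bern(p2); Y takes values in {0, 1, 2}."""
    py = np.zeros(3)
    for x1, x2 in product([0, 1], repeat=2):
        prob = (p1 if x1 else 1 - p1) * (p2 if x2 else 1 - p2)
        py[x1 + x2] += prob
    nz = py[py > 0]
    return float(-(nz * np.log2(nz)).sum())
```

Since the channel is noise-free, H(Y) equals the information delivered per use with independent inputs; the talk's diamond-network result further constrains the inputs through the finite bit-pipe capacities feeding the RRHs.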
We connect the information flow in a neural network to sufficient statistics, and show how techniques rooted in information theory, such as the source-coding-based information bottleneck method, can lead to improved architectures as well as a better understanding of the theoretical foundations of neural networks, viewed as cascade compression networks. We illustrate these results and this viewpoint with numerical examples.
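A minimal sketch of the discrete information bottleneck iterations (the self-consistent equations of Tishby, Pereira, and Bialek; the implementation choices below are our own assumptions, not the talk's code): the encoder update p(t|x) ∝ p(t) 2^(-β KL(p(y|x) || p(y|t))) alternates with recomputing the marginal p(t) and decoder p(y|t):

```python
import numpy as np

def mutual_information(pxy):
    """I(X;Y) in bits for a joint distribution given as a 2-D array."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    outer = px @ py
    mask = pxy > 0
    return float((pxy[mask] * np.log2(pxy[mask] / outer[mask])).sum())

def information_bottleneck(pxy, n_t, beta, iters=300, seed=0):
    """Self-consistent IB iterations; returns the soft encoder p(t|x)."""
    rng = np.random.default_rng(seed)
    nx, ny = pxy.shape
    px = pxy.sum(axis=1)                      # p(x)
    py_x = pxy / px[:, None]                  # p(y|x)
    pt_x = rng.random((nx, n_t))
    pt_x /= pt_x.sum(axis=1, keepdims=True)   # random initial encoder
    eps = 1e-12
    for _ in range(iters):
        pt = px @ pt_x                        # p(t)
        # decoder: p(y|t) = sum_x p(t|x) p(x) p(y|x) / p(t)
        py_t = (pt_x * px[:, None]).T @ py_x / (pt[:, None] + eps)
        # KL(p(y|x) || p(y|t)) in bits, for every (x, t) pair
        kl = np.zeros((nx, n_t))
        for x in range(nx):
            for t in range(n_t):
                m = py_x[x] > 0
                kl[x, t] = np.sum(py_x[x][m] * np.log2(py_x[x][m] / (py_t[t][m] + eps)))
        pt_x = pt[None, :] * np.exp2(-beta * kl)   # encoder update
        pt_x /= pt_x.sum(axis=1, keepdims=True) + eps
    return pt_x
```

Because T is generated from X alone, the Markov chain T - X - Y forces I(T;Y) <= I(X;Y), mirroring the view of each layer of a network as a lossy compression stage of its input.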