Article

The rate-distortion function for source coding with side information at the decoder

Authors: A. D. Wyner and J. Ziv

Abstract

The main result of this paper is the determination of R*(d), for d ≥ 0, in the general case, where R*(d) is defined as the infimum of rates R such that (with ε > 0 arbitrarily small and with suitably large block length n) communication is possible in this setting at an average distortion level not exceeding d + ε.
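For concreteness, R*(d) admits a closed form in the jointly Gaussian case with squared-error distortion, where it coincides with the conditional rate-distortion function (there is no rate loss from the encoder not observing the side information). The following minimal Python sketch evaluates that closed form; the unit source variance and the correlation coefficient rho are illustrative assumptions.

import numpy as np

def wz_rate_gaussian(d, var_x=1.0, rho=0.8):
    # Quadratic-Gaussian Wyner-Ziv function: R*(d) = max(0, 0.5*log2(Var(X|Z)/d)),
    # where Var(X|Z) = var_x * (1 - rho^2) for jointly Gaussian (X, Z).
    var_cond = var_x * (1.0 - rho ** 2)
    return max(0.0, 0.5 * np.log2(var_cond / d))

for d in (0.05, 0.10, 0.20, 0.36):
    print(f"R*({d:.2f}) = {wz_rate_gaussian(d):.3f} bits/sample")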


... Given that the received signals at the relay and destination are correlated, the relay can leverage distributed compression techniques to reduce the compression rate without requiring explicit knowledge of the received signal at the destination. As such, it can utilize Wyner-Ziv (WZ) source coding [10], also known as source coding with decoder-only side information, to efficiently describe its received signal. Unlike DF, CF relaying consistently outperforms direct transmission since the relay always aids in communication, even when the source-to-relay channel is poor. ...
... where the maximization is with respect to the input distribution and the conditional distribution of the relay's compressed description given its received signal. Here, the auxiliary variable corresponds to the relay's compressed description of its received signal, and the rate constraint in (3) coincides with the one that emerges in the WZ rate-distortion function [10]. Recall that in CF, the relay regards its received signal as an unstructured random process jointly distributed with the signal received at the destination. ...
... Recall that in CF, the relay regards its received signal as an unstructured random process jointly distributed with the signal received at the destination. This enables the relay to exploit WZ compression [10] to efficiently describe its received signal. We note that the capacity of the PRC without the oblivious relaying constraint is still not fully characterized [15]. ...
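To make the role of the WZ rate constraint in CF concrete, the sketch below (a numerical illustration under standard textbook assumptions, not taken from the cited preprint) evaluates the compress-and-forward rate for a Gaussian primitive relay channel with an out-of-band relay-to-destination link of capacity C: the compression-noise variance is chosen so that I(Y_r; Ŷ_r | Y_d) = C, and the achievable rate is I(X; Y_d, Ŷ_r).

import numpy as np

def cf_rate(P, N_r, N_d, C):
    # Y_r = X + Z_r, Y_d = X + Z_d, compressed description Yhat = Y_r + Q.
    var_yr_given_yd = P + N_r - P ** 2 / (P + N_d)    # Var(Y_r | Y_d)
    sigma_q = var_yr_given_yd / (2 ** (2 * C) - 1)    # sets I(Y_r; Yhat | Y_d) = C
    # Achievable rate I(X; Y_d, Yhat) with a Gaussian codebook:
    return 0.5 * np.log2(1 + P / N_d + P / (N_r + sigma_q))

print(f"{cf_rate(P=1.0, N_r=1.0, N_d=1.0, C=1.0):.3f} bits/channel use")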
Preprint
The relay channel, consisting of a source-destination pair along with a relay, is a fundamental component of cooperative communications. While the capacity of a general relay channel remains unknown, various relaying strategies, including compress-and-forward (CF), have been proposed. In CF, the relay forwards a quantized version of its received signal to the destination. Given the correlated signals at the relay and destination, distributed compression techniques, such as Wyner--Ziv coding, can be harnessed to utilize the relay-to-destination link more efficiently. Leveraging recent advances in neural network-based distributed compression, we revisit the relay channel problem and integrate a learned task-aware Wyner--Ziv compressor into a primitive relay channel with a finite-capacity out-of-band relay-to-destination link. The resulting neural CF scheme demonstrates that our compressor recovers binning of the quantized indices at the relay, mimicking the optimal asymptotic CF strategy, although no structure exploiting the knowledge of source statistics was imposed on the design. The proposed neural CF, employing finite-order modulation, operates close to the rate achievable in a primitive relay channel with a Gaussian codebook. We showcase the advantages of exploiting the correlated destination signal for relay compression through various neural CF architectures that involve end-to-end training of the compressor and the demodulator components. Our learned task-oriented compressors provide the first proof-of-concept work toward interpretable and practical neural CF relaying schemes.
... Distributed multi-terminal lossy coding is an extension of the lossless case; the rate-distortion (RD) functions in this case are analyzed in [23]-[25]. Wyner and Ziv derived the rate-distortion region for the distributed lossy source coding problem with side information at the decoder [23]. ...
... Distributed multi-terminal lossy coding is an extension of the lossless case; the rate-distortion (RD) functions in this case are analyzed in [23]-[25]. Wyner and Ziv derived the rate-distortion region for the distributed lossy source coding problem with side information at the decoder [23]. Berger [24] and Tung [25] determined inner and outer bounds of the achievable RD region of multi-terminal source coding with two sources. ...
... Ref. [27] proposed a modified Berger-Tung coding by changing the constraints of codebook design for Federated Learning applications, where data sharing is replaced with model sharing. As in the lossless case, the information-theoretic results [23]-[26] are utilized in [28], where the LF technique is used in E2E lossy cases. The trade-off between the tolerable maximum distortion, rate constraints, and outage probability of the system is analyzed in [28]. ...
Article
Full-text available
This paper presents in-depth rate-distortion and outage probability analyses for two-stage successive Wyner-Ziv (WZ) wireless communication networks. The system model assumes Lossy Forward (LF) cooperative communication where lossless reconstruction is not necessarily required at the relay. This paper aims to quantitatively derive the relationship in distortions between the Source-to-Destination and the Source-to-Relay links. Hence, the design parameters are the distortion levels at the relay and destination. The admissible rate-distortion regions are first analyzed for the two stages separately, where the relay is referred to as Helper. The rate constraints with the links involved in the end-to-end (E2E) communications are then derived. Distortion Transfer Function (DTF) is introduced as a mathematical tool for analyzing the distortions of networks having multiple stages. It is shown that the higher the correlation between the Source and Helper observations, as well as the larger the E2E tolerable distortion, the larger the admissible rate region. The outage probability of the two-stage successive WZ system is evaluated, assuming that the second stage suffers from block Rayleigh fading while the first stage performs over a static wireless channel. The E2E outage probability is also analyzed with the distortion requirements at Helper and Destination as parameters in independent and correlated fading variations. It is demonstrated that the decay of the outage probability curve exhibits a second-order diversity in a low-to-medium value range of average signal-to-noise ratios (SNRs) when the helper distortion is relatively low. It is shown, however, that as long as the reconstruction at Helper is lossy, the outage probability curve asymptotically converges to the decay corresponding to the first-order diversity at high average SNRs.
... Shannon characterized the optimal rate-distortion trade-off under an additive distortion measure, i.e., d(x^n, y^n) = (1/n) ∑_{i=1}^{n} d(x_i, y_i). In [1], Wyner and Ziv generalized this result to the case where side information Z^n, correlated with X^n, is available either only at the decoder or at both the encoder and the decoder. Recently, there has been renewed interest in compression algorithms since methods based on deep neural networks (DNNs) have been shown to outperform traditional image and video compression codecs [2]-[14] under different distortion measures. ...
... 2) Construction of Q^(2) and comparison to Q^(1): Using definition (16) of Q^(1) and the Markov property X − (Z, V) − Y from (4), we get: ...
... With this in mind, for every positive integer n we define the following distribution, which differs from Q^(1) in that Y^n is sampled using M̂ instead of M. As a consequence, and by the construction of Q^(2) and its similarity to that of Q^(1), we have by Lemma 12 with L = Y^n: ...
Preprint
Full-text available
In image compression, with recent advances in generative modeling, the existence of a trade-off between the rate and the perceptual quality has been brought to light, where the perception is measured by the closeness of the output distribution to the source. This leads to the question: how does a perception constraint impact the trade-off between the rate and traditional distortion constraints, typically quantified by a single-letter distortion measure? We consider the compression of a memoryless source $X$ in the presence of memoryless side information $Z,$ studied by Wyner and Ziv, but elucidate the impact of a perfect realism constraint, which requires the output distribution to match the source distribution. We consider two cases: when $Z$ is available only at the decoder or at both the encoder and the decoder. The rate-distortion trade-off with perfect realism is characterized for sources on general alphabets when infinite common randomness is available between the encoder and the decoder. We show that, similarly to traditional source coding with side information, the two cases are equivalent when $X$ and $Z$ are jointly Gaussian under the squared error distortion measure. We also provide a general inner bound in the case of limited common randomness.
... Here, we investigate the setup characterized by Wyner and Ziv [2] (WZ), which is both more general than SW, as it encompasses lossy compression, and a simpler special case, as it assumes the decoder has lossless access to a correlated source, the side information. For WZ coding, there has been vast prior work considering synthetic setups and specific ...
... Since our choice of objective functions is inspired by the rate-distortion function of the case where side information is only available at the decoder, we briefly recap the WZ theorem and the accompanying information-theoretic concepts. For the complete proof, refer to the original paper [2] and to [12]. ...
... This choice keeps the parametric families as general as possible and does not unnecessarily impose any structure. Specifically, this allows the model p_θ(u|x) to learn, if needed, quantization schemes that involve discontiguous bins, akin to the random binning operation in the achievability part of the WZ theorem [2], and resembling the systematic partitioning of the quantized source space with cosets in DISCUS [5]. ...
Preprint
We consider lossy compression of an information source when the decoder has lossless access to a correlated one. This setup, also known as the Wyner-Ziv problem, is a special case of distributed source coding. To this day, real-world applications of this problem have neither been fully developed nor heavily investigated. We propose a data-driven method based on machine learning that leverages the universal function approximation capability of artificial neural networks. We find that our neural network-based compression scheme re-discovers some principles of the optimum theoretical solution of the Wyner-Ziv setup, such as binning in the source space as well as linear decoder behavior within each quantization index, for the quadratic-Gaussian case. These behaviors emerge although no structure exploiting knowledge of the source distributions was imposed. Binning is a widely used tool in information theoretic proofs and methods, and to our knowledge, this is the first time it has been explicitly observed to emerge from data-driven learning.
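As a companion to this description, the following deliberately small PyTorch-style sketch shows the ingredients such a learned Wyner-Ziv scheme combines: a parametric encoder that soft-assigns each source sample to one of K indices, a decoder that consumes the index together with the side information, and a loss trading distortion against a rate proxy. The architecture, the Gaussian toy data, and the entropy-of-the-index-marginal rate surrogate are illustrative assumptions, not the authors' design.

import torch
import torch.nn as nn

torch.manual_seed(0)
K = 8                                        # number of quantization indices (assumed)
enc = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, K))      # x -> index logits
dec = nn.Sequential(nn.Linear(K + 1, 32), nn.ReLU(), nn.Linear(32, 1))  # (index, z) -> xhat
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
lam = 0.05                                   # rate-distortion trade-off weight (assumed)

for step in range(2000):
    z = torch.randn(512, 1)                  # side information, seen only by the decoder
    x = z + 0.5 * torch.randn(512, 1)        # source correlated with z
    u = torch.softmax(enc(x), dim=-1)        # soft assignment p(u|x)
    xhat = dec(torch.cat([u, z], dim=-1))    # decoder combines index and side information
    distortion = ((x - xhat) ** 2).mean()
    pu = u.mean(dim=0)                       # empirical index marginal over the batch
    rate_proxy = -(pu * (pu + 1e-9).log()).sum()   # H(U), a crude rate surrogate
    loss = distortion + lam * rate_proxy
    opt.zero_grad(); loss.backward(); opt.step()

Binning as in the WZ achievability proof would show up here as discontiguous sets of x values mapped to the same index u, which is exactly the behavior the paper reports emerging from training.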
... It remains to explain the compression and state estimation in more detail. In our scheme, the index J*_{k,(b−1)} is obtained by means of a Wyner-Ziv compression [32] that lossily compresses the tuple ( ...
... ). In order for the decoder to be able to correctly reconstruct the compression codeword, the Wyner-Ziv codes need to be of rates at least [32] ...
... Wyner-Ziv coding [32]. Instead, decoder side-information is taken into account via the joint typicality check in (59). ...
Preprint
This paper considers information-theoretic models for integrated sensing and communication (ISAC) over multi-access channels (MAC) and device-to-device (D2D) communication. The models are general and include as special cases scenarios with and without perfect or imperfect state-information at the MAC receiver as well as causal state-information at the D2D terminals. For both setups, we propose collaborative sensing ISAC schemes where terminals not only convey data to the other terminals but also state-information that they extract from their previous observations. This state-information can be exploited at the other terminals to improve their sensing performances. Indeed, as we show through examples, our schemes improve over previous non-collaborative schemes in terms of their achievable rate-distortion tradeoffs. For D2D we propose two schemes, one where compression of state information is separated from channel coding and one where it is integrated via a hybrid coding approach.
... Compression and processing of large amounts of data is a challenge in various applications. From an information theory perspective, there are asymptotically optimal approaches to the distributed source compression problem that can achieve arbitrarily small decoding error probability for large blocklengths, such as noiseless distributed coding of correlated sources as proposed by Slepian and Wolf [2], and their extensions [3], [4], [5], which are based on orthogonal binning of typical sequences. Practical Slepian-Wolf encoding schemes include coset codes [4] and turbo codes [6]. ...
... Despite these approaches, the exact achievable rate region for the function compression problem is, in general, an open problem. To the best of our knowledge, it is only solved for special scenarios, including general tree networks [10], linear functions [14], the identity function [2], and the rate-distortion characterization with decoder side information [3]. However, there do not exist tractable approaches that approximate the information-theoretic limits to perform functional compression in general topologies. ...
... The technique differs from traditional vector quantization for data compression and brings together techniques from information theory, such as distributed source encoding, functional compression, and optimization of mutual information, to the area of signal processing via function quantization inspired by hyperplane-based vector quantizers. Hyper binning does not rely on the NP-hard nature of graph coloring [10] or on the asymptotically optimal information-theory-based models [2], [3], which are impractical for finite blocklengths. Hyper binning is an intuitive generalization using linear hyperplanes for encoding continuous functions through a vector quantization of the high-dimensional codebook space. ...
Article
We design a distributed function-aware quantization scheme for distributed functional compression. We consider two correlated sources X₁ and X₂ and a destination that seeks an estimate $\hat{f}$ for the outcome of a continuous function f(X₁, X₂). We develop a compression scheme called hyper binning in order to quantize f via minimizing the entropy of joint source partitioning. Hyper binning is a natural generalization of Cover's random code construction for the asymptotically optimal Slepian-Wolf encoding scheme, which makes use of orthogonal binning. The key idea behind this approach is to use linear discriminant analysis in order to characterize different source feature combinations. This scheme captures the correlation between the sources and the function's structure as a means of dimensionality reduction. We investigate the performance of hyper binning for different source distributions and identify which classes of sources entail more partitioning to achieve better function approximation. Our approach brings an information theory perspective to the traditional vector quantization technique from signal processing.
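The core mechanism, binning each source with the sign patterns of linear hyperplanes and letting the decoder map bin pairs to function estimates, can be caricatured in a few lines of NumPy. The 1-D thresholds standing in for hyperplanes, the example function f(x1, x2) = x1 + x2, and the sample-mean decoder are illustrative assumptions, not the paper's LDA-based construction.

import numpy as np

rng = np.random.default_rng(0)
n, bits = 100_000, 3                         # 3 hyperplanes -> 8 bins per source
x1 = rng.normal(size=n)
x2 = 0.9 * x1 + np.sqrt(1 - 0.9 ** 2) * rng.normal(size=n)   # correlated source
f = x1 + x2                                  # target function (assumed example)

def bin_index(x, a, b):
    # sign pattern of affine hyperplanes a_i * x + b_i (points, in the 1-D case)
    signs = (np.outer(x, a) + b > 0).astype(int)
    return signs @ (1 << np.arange(len(a)))

a1, b1 = rng.normal(size=bits), rng.normal(size=bits)
a2, b2 = rng.normal(size=bits), rng.normal(size=bits)
u1, u2 = bin_index(x1, a1, b1), bin_index(x2, a2, b2)

# decoder: estimate f by the centroid of each joint bin, learned from samples
cell = u1 * (1 << bits) + u2
table = np.zeros(1 << (2 * bits))
for c in range(table.size):
    mask = cell == c
    if mask.any():
        table[c] = f[mask].mean()
print("MSE:", np.mean((table[cell] - f) ** 2))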
... Instead, we rely on arguments showing that certain Markov chains hold in an asymptotic regime of infinite blocklengths. Notice that our method to circumvent variational characterizations, or hypercontractivity, or blowing-up arguments [38], seems to extend also to other converse proofs, see for example the simplified proof of the well-known strong converses for lossless and lossy compression with side-information at the decoder [39], [40] presented in [41]. ...
... Remark 5 (Rate-boosts when ϵ₁ ≠ ϵ₂): Notice that in our two-hop system with expected-rate constraints, the exponents θ_{1,max} and θ_{2,max} defined in (39) and (37) are the largest possible exponents achievable at the two decision centers, irrespective of the ordering of ϵ₁ and ϵ₂. By Theorem 3, they coincide with the optimal exponents under maximum-rate constraints R₁/(1 − ϵ₁) and R₂/(1 − ϵ₁) for the two links in the case of (39), and maximum-rate constraints R₁/(1 − ϵ₂) and R₂/(1 − ϵ₂) in the case of (37). We thus observe that whenever ϵ₁ ≠ ϵ₂, the rate-boosts that expected-rate constraints allow over maximum-rate constraints depend on the permissible type-I error probabilities and also on the tradeoff between the two exponents θ₁ and θ₂. ...
Article
Full-text available
We consider a multi-hop distributed hypothesis testing problem with multiple decision centers (DCs) for testing against independence and where the observations obey some Markov chain. For this system, we characterize the fundamental type-II error exponents region, i.e., the type-II error exponents that the various DCs can achieve simultaneously, under expected rate-constraints. Our results show that this fundamental exponents region is boosted compared to the region under maximum-rate constraints, and that it depends on the permissible type-I error probabilities. When all DCs have equal permissible type-I error probabilities, the exponents region is rectangular and all DCs can simultaneously achieve their optimal type-II error exponents. When the DCs have different permissible type-I error probabilities, a tradeoff between the type-II error exponents at the different DCs arises. New achievability and converse proofs are presented. For the achievability, a new multiplexing and rate-sharing strategy is proposed. The converse proof is based on applying different change of measure arguments in parallel and on proving asymptotic Markov chains. For the special cases K ∈ {2, 3}, and for arbitrary K ≥ 2 when all permissible type-I error probabilities at the various DCs are equal, we provide simplified expressions for the exponents region; a similar simplification is conjectured for the general case.
... There has been a volume of representative publications related to lossy cooperative wireless communications, which are summarized in Table I. Among them, this paper specifically utilizes the landmark results of the fundamental limit analyses established by Wyner and Ziv [15]. ...
... However, the analysis with CCC is out of the scope of this paper and left as future work. From the aforementioned literature, we can identify that the problem, performance analysis of lossy cooperative wireless communications over MACs with a practical helper structure, is worth solving. To the best of the authors' knowledge, though, neither the achievable rate-distortion region nor the outage probability has been analyzed for cooperative lossy communications with a bit-flipping (BF) helper, which can also be interpreted as a lossy-forward (LF) [28] helper. ...
Article
Full-text available
The primary objective of this paper is to establish an analytical framework for evaluating the rate-distortion and outage probability performance in Internet-of-Things (IoT) systems based on lossy cooperative wireless communications. Two correlated sources transmit information through fading multiple access channels (MACs) with the assistance of a bit-flipping (BF) helper. To begin with, we derive a closed-form expression of the inner bound on the achievable rate-distortion region. To reduce computational complexity, we then propose an approximate method for calculating the outage probability based on the lossy source-channel separation theorem. Moreover, Monte-Carlo methods are adopted to evaluate the outage probability in MAC and orthogonal transmission schemes. The approximate theoretical results are also compared with the exact results obtained by Monte-Carlo methods. It is shown that the gap between the approximate and exact performance curves decreases as the distortion requirement becomes smaller. In particular, when the distortion requirement reduces to zero, the approximated outage probability is exactly the same as the results obtained by Monte-Carlo methods. In addition, we present performance comparisons in terms of the outage probability between Rayleigh and Nakagami-m fading, and between the BF helper and an optimal helper.
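As a sanity check on this kind of Monte-Carlo evaluation, the snippet below estimates the outage probability of a single Rayleigh-fading link and compares it with the textbook closed form P_out = 1 − exp(−(2^R − 1)/SNR); the rate target and the SNR grid are illustrative assumptions, and this is not the paper's helper-assisted MAC system.

import numpy as np

rng = np.random.default_rng(1)
R = 1.0                                      # target rate in bits per channel use (assumed)
for snr_db in (0, 10, 20):
    snr = 10 ** (snr_db / 10)
    h2 = rng.exponential(size=1_000_000)     # |h|^2 is Exp(1) under Rayleigh fading
    p_mc = np.mean(np.log2(1 + snr * h2) < R)
    p_cf = 1 - np.exp(-(2 ** R - 1) / snr)
    print(f"{snr_db:2d} dB: Monte-Carlo {p_mc:.4f}  closed form {p_cf:.4f}")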
... Drawing on ideas from the distributed quantization problem in information theory, specifically the Wyner-Ziv problem (Wyner and Ziv, 1976), we present Wyner-Ziv estimators for distributed mean estimation. In the known-∆ setting, for a fixed ∆ and in the low-precision setting of r ≤ d, we propose an r-bit SMP protocol π*_k which satisfies ...
... In the classic information theoretic setting, related problems of quantization with side information at the decoder have been considered in rate-distortion theory starting with the seminal work of Wyner and Ziv (Wyner and Ziv, 1976). Practical codes for settings where the observations are generated from known distributions have been constructed using channel codes; see, for instance, Korada and Urbanke (2010); Ling et al. (2012); Liu and Ling (2015); Pradhan and Ramchandran (2003); Zamir et al. (2002). ...
Conference Paper
Full-text available
Communication efficient distributed mean estimation is an important primitive that arises in many distributed learning and optimization scenarios such as federated learning. Without any probabilistic assumptions on the underlying data, we study the problem of distributed mean estimation where the server has access to side information. We propose Wyner-Ziv estimators, which are communication and computationally efficient and near-optimal when an upper bound for the distance between the side information and the data is known. In a different direction, when there is no knowledge assumed about the distance between side information and the data, we present an alternative Wyner-Ziv estimator that uses correlated sampling. This latter setting offers universal recovery guarantees, and perhaps will be of interest in practice when the number of users is large and keeping track of the distances between the data and the side information may not be possible.
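The flavor of such an estimator in the known-∆ setting can be conveyed by a scalar modulo quantizer: the encoder sends only the index of x within one period of a uniform grid, and the decoder resolves the ambiguity with its side information y, for which |x − y| ≤ ∆ is assumed. This is a minimal sketch with an illustrative step-size choice, not the protocol π*_k from the paper.

import math

def wz_encode(x, step, M):
    return math.floor(x / step) % M          # sends r = log2(M) bits

def wz_decode(k, y, step, M):
    # among grid cells congruent to k (mod M), pick the one nearest y
    j = math.floor(y / step)
    u = j + ((k - j) % M)                    # smallest candidate index >= j
    best = min((u, u - M), key=lambda t: abs((t + 0.5) * step - y))
    return (best + 0.5) * step               # cell midpoint

delta, r = 1.0, 4                            # |x - y| <= delta, r bits (assumed)
M = 2 ** r
step = 2 * delta / (M - 1)                   # period M*step covers the 2*delta ambiguity
x, y = 3.37, 2.90
xhat = wz_decode(wz_encode(x, step, M), y, step, M)
print(xhat, abs(xhat - x) <= step)           # reconstruction error bounded by the step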
... This inspires the development of distributed compression with side information, where the sensors encode their data independently, and the decoders recover the data with the help of side information from another source that is only available at the decoder. The Distributed Source Coding (DSC) theorem (Slepian and Wolf 1973; Cover 1975; Wyner and Ziv 1976) reveals that the same compression rate can be asymptotically achieved by using side information (SI) only ...
... Evaluation Metrics: We use bits per pixel (bpp) to measure the compression ratio. Both peak signal-to-noise ratio (PSNR) and multi-scale structural similarity (MS-SSIM) (Wang, Simoncelli, and Bovik 2003) are used to measure image quality/signal distortion. Moreover, we use BD-Rate, where negative numbers indicate the percentage of average bit savings at the same image quality across the rate-distortion curve, compared with some chosen baselines. ...
Article
Beyond achieving higher compression efficiency than classical image compression codecs, deep image compression is expected to be improved with additional side information, e.g., another image from a different perspective of the same scene. To better utilize the side information under the distributed compression scenario, the existing method only implements patch matching in the image domain to solve the parallax problem caused by the difference in viewing points. However, patch matching in the image domain is not robust to the variance of scale, shape, and illumination caused by the different viewing angles, and cannot make full use of the rich texture information of the side information image. To resolve this issue, we propose Multi-Scale Feature Domain Patch Matching (MSFDPM) to fully utilize side information at the decoder of the distributed image compression model. Specifically, MSFDPM consists of a side information feature extractor, a multi-scale feature domain patch matching module, and a multi-scale feature fusion network. Furthermore, we reuse inter-patch correlation from the shallow layer to accelerate the patch matching of the deep layer. Finally, we find that our patch matching in a multi-scale feature domain further improves compression rate by about 20% compared with the patch matching method in the image domain.
... Han improves upon Ahlswede-Csiszár by adding a typicality check by the sender: if (x^n, u^n) are not jointly typical, then the sender tells the receiver to ignore y^n and simply declare H₁. When the communication rate R is not sufficient to describe u^n, SHA use random binning as in Wyner-Ziv coding [4]. Rahman and Wagner [5] show that SHA's scheme is optimal for a scenario called "testing against conditional independence"; see their work for more details. ...
... If, for the chosen P_{U|X}, (7) is not satisfied, then the sender cannot send the exact index of the chosen u^n to the receiver. However, one can think of y^n as side information for the receiver and use binning as in Wyner-Ziv coding [4]. Indeed, in order for the receiver to recover the correct u^n with high probability under H₀, a rate of I(U; X|Y) computed under P_{UXY} would suffice. ...
Preprint
Shimokawa, Han, and Amari proposed a "quantization and binning" scheme for distributed binary hypothesis testing. We propose a simple improvement on the receiver's guessing rule in this scheme. This attains a better exponent of the error probability of the second type.
... These functions reflect the fundamental trade-offs between communication resources and other considerations. Important special cases include the Gelfand-Pinsker channel problem [4] and the Wyner-Ziv lossy compression problem [5]. ...
... As an extension of [5], some work [6] considered a lossy computing problem with decoder side information and obtained the rate-distortion function. However, the expression is in terms of an auxiliary random variable whose intuitive meaning is not clear. ...
Preprint
We consider the point-to-point lossy coding for computing and channel coding problems with two-sided information. We first unify these problems by considering a new generalized problem. Then we develop graph-based characterizations and derive interesting reductions through explicit graph operations, which reduce the number of decision variables. After that, we design alternating optimization algorithms for the unified problems, so that numerical computations for both the source and channel problems are covered. With the help of extra root-finding techniques, proper multiplier update strategies are developed. Thus our algorithms can compute the problems for a given distortion or cost constraint, and convergence can be proved. Also, extra heuristic deflation techniques are introduced, which greatly reduce the computational time. Numerical results show the accuracy and efficiency of our algorithms.
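For readers unfamiliar with such alternating schemes, the classical Blahut-Arimoto recursion for the plain rate-distortion function R(D), which algorithms of this kind generalize to settings with side information, fits in a dozen lines; the binary-Hamming example is an illustrative assumption, not the paper's generalized algorithm.

import numpy as np

def blahut_arimoto(px, d, beta, iters=500):
    # Alternate between the optimal test channel w = p(y|x) and the output marginal q.
    m, n = d.shape
    q = np.full(n, 1.0 / n)
    for _ in range(iters):
        w = q[None, :] * np.exp(-beta * d)       # tilt the marginal by the distortion
        w /= w.sum(axis=1, keepdims=True)
        q = px @ w                               # re-estimate the output marginal
    R = np.sum(px[:, None] * w * np.log2((w + 1e-300) / (q[None, :] + 1e-300)))
    D = np.sum(px[:, None] * w * d)
    return R, D

px = np.array([0.5, 0.5])                        # uniform binary source
d = 1.0 - np.eye(2)                              # Hamming distortion
R, D = blahut_arimoto(px, d, beta=2.0)
print(R, D)                                      # point lies on R(D) = 1 - h(D) for this source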
... A distributed video coding (DVC) scheme is based on two important theorems, Slepian-Wolf [1] and Wyner-Ziv [2]. This video coding paradigm follows the principle of distributed source coding (DSC) and is becoming a prominent video coding paradigm due to shifting the high computational complexity to the decoder. ...
... The QUATRID makes use of the quantization metric deployed in [35] for R frame quantization. However, the quantization step (W_q) is computed differently and is presented in Equation (2). In Equation (2), the absolute-value term denotes the maximum absolute coefficient value of the corresponding transform band, and the remaining factor defines the quantization level of the transformed band. ...
Article
Full-text available
Distributed video coding (DVC) is based on distributed source coding (DSC) concepts in which video statistics are used partially or completely at the decoder rather than the encoder. The rate-distortion (RD) performance of distributed video codecs substantially lags behind that of conventional predictive video coding. Several techniques and methods are employed in DVC to overcome this performance gap and achieve high coding efficiency while maintaining low encoder computational complexity. However, it is still challenging to achieve coding efficiency and limit the computational complexity of the encoding and decoding process. The deployment of distributed residual video coding (DRVC) improves coding efficiency, but significant enhancements are still required to reduce these gaps. This paper proposes the QUAntized Transform ResIdual Decision (QUATRID) scheme that improves coding efficiency by deploying the Quantized Transform Decision Mode (QUAM) at the encoder. The proposed QUATRID scheme's main contribution is the design and integration of a novel QUAM method into DRVC that effectively skips the zero quantized transform (QT) blocks, thus limiting the number of input bit planes to be channel encoded and consequently reducing both the channel encoding and decoding computational complexity. Moreover, an online correlation noise model (CNM) is specifically designed for the QUATRID scheme and implemented at its decoder. This online CNM improves the channel decoding process and contributes to the bit rate reduction. Finally, a methodology for the reconstruction of the residual frame (R̂) is developed that utilizes the decision mode information passed by the encoder, the decoded quantized bins, and the transformed estimated residual frame. The Bjøntegaard delta analysis of the experimental results shows that the QUATRID achieves better performance than DISCOVER, attaining PSNR gains between 0.06 dB and 0.32 dB and coding-efficiency improvements varying from 5.4 to 10.48 percent. In addition, the results show that, for all types of motion videos, the proposed QUATRID scheme outperforms DISCOVER in terms of reducing both the number of input bit planes to be channel encoded and the entire encoder's computational complexity. The reduction in the number of bit planes exceeds 97%, while the computational complexity of the entire Wyner-Ziv encoder and of the channel coding is reduced more than nine-fold and 34-fold, respectively.
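The economics of the skip decision, dropping any block whose quantized transform is all zeros, can be illustrated with a few lines on a synthetic residual frame; the 8x8 DCT, the uniform quantizer, and the sparsity of the residual are illustrative assumptions, not the QUATRID codec itself.

import numpy as np
from scipy.fft import dctn

rng = np.random.default_rng(0)
frame = np.zeros((64, 64))                       # synthetic residual frame
frame[16:24, 40:48] = rng.normal(scale=8.0, size=(8, 8))   # one active region
W_q = 4.0                                        # quantization step (assumed)

coded = skipped = 0
for i in range(0, 64, 8):
    for j in range(0, 64, 8):
        blk = dctn(frame[i:i + 8, j:j + 8], norm="ortho")
        q = np.round(blk / W_q).astype(int)
        if np.any(q):
            coded += 1                           # block proceeds to channel encoding
        else:
            skipped += 1                         # all-zero quantized block is skipped
print(coded, "blocks coded,", skipped, "skipped")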
... If the changing time is not measured at all and the state transition only triggers the generation and transmission of a new packet, then we call it measurement-free TSI. In particular, we borrow the ideas of source coding with side information [14], [15], [16] at the encoder and decoder, or at the decoder only, to derive the minimum source data rate for lossless compression and the rate-distortion function for lossy compression with full or partial TSI, respectively. For measurement-free TSI, we apply techniques of statistical signal processing [17] to obtain optimal estimates of the changing or holding time. ...
... Proof: The lossy compression of T based on future partial TSI can be modeled as distributed source coding with noncausal side information available at the decoder. Thereby, this lossy compression problem can be formulated as the Wyner-Ziv problem [15], where the side information takes the form T + D and is available at the decoder only. The rate-distortion function is given by ...
Article
Full-text available
Real-time monitoring plays a pivotal role in the Industrial Internet of Things (IIoT) with potential applications in factory automation, automated driving, and telesurgery, thereby attracting considerable recent attention in anticipation of the development of the sixth-generation (6G) of wireless networks. In this paper, we present a paradigm-shift data compression method that makes use of timing side information (TSI) obtained by observing two synchronized clocks at a remote sensor and a monitor. In particular, the TSI is found to allow the transmitter to send fewer bits consumed in characterizing the changing or holding time of a piecewise-constant stochastic process. We borrow the idea of source coding with side information to reveal the performance limits of both TSI-based lossless and lossy compression, and to develop practical low-complexity source coding schemes. To further reduce the implementation complexity and the hardware cost, we also present a real-time monitoring scheme where the sensor does not necessarily measure the state transition time. A statistical signal processing algorithm is adopted to estimate the changing time accurately. Our theoretical and numerical results show that the compression gain owing to the TSI is quite substantial, especially when the communication latency and the delay jitter are limited.
... How can the Wyner-Ziv theorem [1] be implemented in practical coding design? Academia and industry have struggled with this question for several decades, although the Wyner-Ziv theorem offers the prospect of enhancing information quality by providing side information at the decoder only. ...
Article
Full-text available
This letter proposes a novel anti-interference technique, semantic interference cancellation (SemantIC), for enhancing information quality towards the sixth-generation (6G) wireless networks. SemantIC only requires the receiver to concatenate the channel decoder with a semantic auto-encoder. This constructs a turbo loop which iteratively and alternately eliminates noise in the signal domain and the semantic domain. From the viewpoint of network information theory, the neural network of the semantic auto-encoder stores side information by training, and provides side information in iterative decoding, as an implementation of the Wyner-Ziv theorem. Simulation results verify the performance improvement by SemantIC without extra channel resource cost.
... In the successive processing scheme, the second choice regarding the set of imposed constraints, side information is used from a compression perspective when handling the signal y_j, unlike in the parallel scheme. The main idea behind this scheme is fully aligned with the well-known Wyner-Ziv setup [27], [28] for source coding, where a statistically correlated signal is used as side information at the decoder. The design problem is formulated in [26] for the optimal set of compressors, P* = {p*(z₁|y₁), . . . ...
Conference Paper
Full-text available
Consider a user equipment in a Cell-Free massive Multiple-Input Multiple-Output (CF-mMIMO) system that is served by several Radio Access Points (RAPs). In the uplink of this setup, these RAPs receive noisy observations of the user/source signal and must locally compress their signals before forwarding them to the Central Processing Unit (CPU) through multiple rate-limited fronthaul channels. To retrieve the source signal at the CPU, we are interested in maximizing the Mutual Information (MI) between the received signals at the CPU and the user/source signal, and purposefully choose Information Bottleneck (IB)-based compression techniques to design the quantizers at the RAPs. We consider both separate and joint designs of the local compressors by establishing basic trade-offs between the informativity and compactness of the outcomes. For the joint design, two different schemes are presented, based on whether to leverage the side information at the CPU. Finally, the effectiveness of both compression schemes is shown by means of numerical investigations over typical digital data transmission scenarios.
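For reference, the generic discrete Information Bottleneck iteration (the self-consistent updates that IB-based compressor designs build on) is compact; the toy joint distribution and β below are illustrative assumptions, and this is not the paper's CF-mMIMO algorithm.

import numpy as np

def ib_iterate(pxy, T, beta, iters=300, seed=0):
    # pxy: joint p(x, y) of shape (X, Y); compress Y into T cluster labels t.
    rng = np.random.default_rng(seed)
    eps = 1e-12
    py = pxy.sum(axis=0)
    pxgy = pxy / py[None, :]                              # p(x|y)
    ptgy = rng.random((T, pxy.shape[1]))
    ptgy /= ptgy.sum(axis=0, keepdims=True)               # random initial p(t|y)
    for _ in range(iters):
        pt = ptgy @ py                                    # p(t)
        pygt = ptgy * py[None, :] / (pt[:, None] + eps)   # p(y|t)
        pxgt = pygt @ pxgy.T                              # p(x|t), shape (T, X)
        kl = (pxgy.T[None, :, :] * (np.log(pxgy.T[None, :, :] + eps)
              - np.log(pxgt[:, None, :] + eps))).sum(axis=-1)   # KL(p(x|y) || p(x|t))
        ptgy = pt[:, None] * np.exp(-beta * kl)           # self-consistent update
        ptgy /= ptgy.sum(axis=0, keepdims=True)
    return ptgy

pxy = np.array([[0.30, 0.05, 0.05],
                [0.05, 0.05, 0.50]])                      # toy p(x, y) (assumed)
print(np.round(ib_iterate(pxy, T=2, beta=5.0), 3))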
... Similarly, the authors of [11] characterized the remote rate-distortion trade-off when correlated side information is available both at the encoder and decoder. Our problem for M = 1 can be solved by combining the remote rate-distortion problem with the classical Wyner-Ziv rate-distortion function [12], [13]. ...
Preprint
This paper studies a variant of the rate-distortion problem motivated by task-oriented semantic communication and distributed learning problems, where $M$ correlated sources are independently encoded for a central decoder. The decoder has access to a correlated side information in addition to the messages received from the encoders, and aims to recover a latent random variable correlated with the sources observed by the encoders within a given distortion constraint rather than recovering the sources themselves. We provide bounds on the rate-distortion region for this scenario in general, and characterize the rate-distortion function exactly when the sources are conditionally independent given the side information.
... For example, if both the encoder and the decoder know the side information, the problem becomes Gray's conditional RD problem [9], [10]. If only the decoder knows the side information, the problem becomes the one studied by Wyner and Ziv in [11]. If the side information is chosen by some adversary, the problem becomes Berger's source coding game [3, Section 6.1.2] ...
Preprint
Full-text available
A composite source, consisting of multiple subsources and a memoryless switch, outputs one symbol at a time from the subsource selected by the switch. If some data should be encoded more accurately than other data from an information source, the composite source model is suitable because in this model different distortion constraints can be put on the subsources. In this context, we propose subsource-dependent fidelity criteria for composite sources and use them to formulate a rate-distortion problem. We solve the problem and obtain a single-letter expression for the rate-distortion function. Further rate-distortion analysis characterizes the performance of classify-then-compress (CTC) coding, which is frequently used in practice when subsource-dependent fidelity criteria are considered. Our analysis shows that CTC coding generally has performance loss relative to optimal coding, even if the classification is perfect. We also identify the cause of the performance loss, that is, class labels have to be reproduced in CTC coding. Last but not least, we show that the performance loss is negligible for asymptotically small distortion if CTC coding is appropriately designed and some mild conditions are satisfied.
... Moreover, Distributed Source Coding (DSC) in information theory is used to reduce the data loss in image compression for computer vision applications [156]. In DSC, side information is used either at both the encoder and decoder parts of the compression algorithm (the conditional rate-distortion setting, R_{X|Y}(D) [157]) or only at the decoder for reconstructing the data (Wyner-Ziv coding, R_{WZ}(D) [158]). Theoretically, side information can improve the compression performance and attain lower bitrates with less data loss [159]. ...
Thesis
In the era of unprecedented climatic, geomorphologic, environmental, and anthropogenic changes on Earth, global-scale, long-term, and continuous monitoring via Earth Observation (EO) sensors is imperative. Among EO sensors, Synthetic Aperture Radar (SAR) systems stand out due to their day-and-night observation capability and immunity to atmospheric conditions, and they play an essential role in ensuring uninterrupted worldwide monitoring. However, SAR data have a high degree of complexity; they are Complex-Valued (CV) multidimensional signals with particular properties induced by the coherent imaging mode, the scattering process of the observed scene, and the inherent adversarial effect. Deep learning has emerged as a remarkably potent and widely adopted technique across diverse fields, showcasing its unparalleled effectiveness in tackling complex challenges, including remote sensing. This thesis is dedicated to exploring novel deep learning-based solutions for SAR applications, considering the unique characteristics of SAR data and the capability of deep networks to learn and model the data distribution of SAR data, to unveil new perspectives in this field. We delve into CV deep architectures for this purpose to fully exploit the amplitude and phase components of SAR data. The research presented in this thesis can be classified into three parts: In the first part, we investigate the Bayesian generative model, Latent Dirichlet Allocation, for big EO data mining of the semantic content and generate a CV semantically annotated dataset from Sentinel-1 (S1) Single Look Complex (SLC) StripMap (SM) mode products, called S1SLC_CVDL, for training CV deep networks. Moving forward, the second part of the thesis is dedicated to the implementation of CV networks and comprehensive analyses of these models with respect to the particular characteristics of CV-SAR data. In this part, a wide range of operators, layers, and functions are converted into the complex domain for the CV networks' implementation. Later, extensive investigations are carried out on CV deep architectures for various SAR applications, illuminating the supremacy of CV models for semantic land cover classification, data distribution modelling, complex coherence preservation, and physical attribute interpretation and retrieval from SAR data. Acknowledging their enormous potential, in the last part, we venture into practical and more complicated applications of CV networks. We employ CV networks to engineer a novel data compression approach utilizing CV autoencoders, tailored for compressing SAR raw data. The capabilities of CV deep architectures demonstrated in this thesis unravel new perspectives in the field of CV deep architectures for SAR applications and pave the way for the future development of physics-aware CV deep networks with data distribution modelling capability for various remote sensing applications.
... Actually, quantization using LDGM codes and noiseless binning using LDPC codes play an important role in these problems; if these parts are efficiently designed, then the overall structure will be efficient too. The scenario of source coding with side information available at the decoder, known as the asymmetric Slepian-Wolf problem in the lossless case [5] and the Wyner-Ziv problem in the lossy case, has its own theoretical bounds [6], which determine the lowest achievable rates. These theoretical bounds depend on the structure of the problem and its parameters. ...
... Furthermore, considering that the side information is available at the joint DEC but unavailable at ENC 1, the structure is a Wyner-Ziv problem [11], because the side information provided by the semantic knowledge is noncausal. Hence, the rate is reduced by conditioning on V, as ...
Article
Full-text available
This letter proposes a novel relaying framework, semantic-forward (SF), for cooperative communications towards the sixth-generation (6G) wireless networks. The SF relay extracts and transmits the semantic features, which reduces forwarding payload, and also improves the network robustness against intra-link errors. Based on the theoretical basis for cooperative communications with side information and the turbo principle, we design a joint source-channel coding algorithm to iteratively exchange the extrinsic information for enhancing the decoding gains at the destination. Surprisingly, simulation results indicate that even in bad channel conditions, SF relaying can still effectively improve the recovered information quality.
... In particular, there is much more focus on bounded-round communication, and significantly less focus on techniques for obtaining specific lower bounds on the communication complexity of specific functions such as the disjointness function. The most relevant work to our current discussion is a more recent line of work by Ishwar and Ma, which studied interactive amortized communication and obtained characterizations closely related to the ones discussed below [73,74], building on earlier works of Wyner and Ziv [104] from the 1970s. ...
... [46,47,49]). Similar to the Wyner-Ziv protocol [58]-[61], one could also consider the rate-distortion scenario in future work. ...
Article
Full-text available
In this paper, we consider the classical Slepian–Wolf coding with quantum side information, corresponding to the compression of two correlated classical parts, using their quantum parts as side information at the decoder. By quantum Feinstein’s lemma, we give the achievable rate region. We then extend to the multiple classical–quantum sources case. We also consider the classical Slepian–Wolf coding with full (local) quantum helper. Using the measure compression theorem, we get the achievable rate region and extend to the multiple classical–quantum sources case.
... The scheme of Mittal and Phamdo was subsequently improved by Reznic et al. [6] (see also [13,14], [15] (Chapter 11.1)) by replacing the successive refinement layers with lattice-based Wyner-Ziv coding [16,17], [4] (Chapter 11.3) which, in contrast to the digital layers of the scheme of Mittal and Phamdo, enjoys an improvement in each of the layers with the ENR. ...
Article
Full-text available
We consider the problem of transmitting a Gaussian source with minimum mean square error distortion over an infinite-bandwidth additive white Gaussian noise channel with an unknown noise level and under an input energy constraint. We construct a universal joint source–channel coding scheme with respect to the noise level, that uses modulo-lattice modulation with multiple layers. For each layer, we employ either analog linear modulation or analog pulse-position modulation (PPM). We show that the designed scheme with linear layers requires less energy compared to existing solutions to achieve the same quadratically increasing distortion profile with the noise level; replacing the linear layers with PPM layers offers an additional improvement.
... The assistance produced in Block j is t^(j) = h(s^(j), z^(j)). A v-sequence, v^(j), is then chosen based on t^(j), and is binned as in Wyner-Ziv coding [14], treating the outputs y^(j) as side information that is available to the decoder. The bin index determines the cloud center in Block (j + 1). ...
Article
Full-text available
A memoryless state sequence governing the behavior of a memoryless state-dependent channel is to be described causally to an encoder wishing to communicate over said channel. Given the maximal-allowed description rate, we seek the description that maximizes the Shannon capacity. It is shown that the maximum need not be achieved by a memoryless (symbol-by-symbol) description. Such descriptions are, however, optimal when the receiver is cognizant of the state sequence or when the description is allowed to depend on the message. For other cases, a block-Markov scheme with backward decoding is proposed.
... Distributed source and functional compression. Other attempts have been inspired by the seminal work of Slepian and Wolf [19] on distributed source compression and the rate-distortion coding models of Wyner and Ziv with side information [20] and for lossy source coding [21], toward function computation. These works include [22]-[25], which consider function computation over networks, as well as [25] and [26], considering the generalization to functional rate-distortion, and [27] and [28], focusing on hypergraph-based source coding and function approximation under maximal distortion. ...
Preprint
We consider the problem of distributed lossless computation of a function of two sources by one common user. To do so, we first build a bipartite graph, where two disjoint parts denote the individual source outcomes. We then project the bipartite graph onto each source to obtain an edge-weighted characteristic graph (EWCG), where edge weights capture the function's structure, by how much the source outcomes are to be distinguished, generalizing the classical notion of characteristic graphs. By exploiting the notions of characteristic graphs, the fractional coloring of such graphs, and edge weights, the sources separately build multi-fold graphs that capture vector-valued source sequences, determine vertex colorings for such graphs, encode these colorings, and send them to the user, which performs minimum-entropy decoding on its received information to recover the desired function in an asymptotically lossless manner. For the proposed EWCG compression setup, we characterize the fundamental limits of distributed compression, verify the communication complexity through an example, contrast it with traditional coloring schemes, and demonstrate that we can attain compression gains higher than 30% over traditional coloring.
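A minimal unweighted instance of the characteristic-graph machinery that EWCGs generalize is easy to set up with NetworkX: connect two symbols of the first source whenever some jointly possible symbol of the second source forces their function values apart, then color the graph and transmit colors instead of symbols. The alphabets, the full support, and f below are illustrative assumptions, not the paper's construction.

import itertools
import networkx as nx

X1, X2 = range(4), range(4)
support = {(a, b) for a in X1 for b in X2}       # all pairs jointly possible (assumed)
f = lambda a, b: (a + b) % 3                     # function to be computed (assumed)

G = nx.Graph()
G.add_nodes_from(X1)
for a, c in itertools.combinations(X1, 2):
    # edge iff some x2 is jointly possible with both and distinguishes them
    if any((a, b) in support and (c, b) in support and f(a, b) != f(c, b) for b in X2):
        G.add_edge(a, c)

coloring = nx.coloring.greedy_color(G, strategy="largest_first")
print(coloring)   # symbols 0 and 3 share a color: 4 symbols compress to 3 colors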
... Note that in Block b the decoder already knows the indices decoded in Block b + 1, so it tries to find a unique codeword in the corresponding codebook that is jointly typical with its observation. By the Wyner-Ziv theorem [41], it follows that ...
Preprint
Strong secrecy communication over a discrete memoryless state-dependent multiple access channel (SD-MAC) with an external eavesdropper is investigated. The channel is governed by discrete memoryless and i.i.d. channel states, and the channel state information (CSI) is revealed to the encoders in a causal manner. Inner and outer bounds are provided. To establish the inner bound, we investigate coding schemes incorporating wiretap coding and secret-key agreement between the sender and the legitimate receiver. Two kinds of block Markov coding schemes are proposed. The first one is a new coding scheme using backward decoding and Wyner-Ziv coding, where the secret key is constructed from a lossy description of the CSI. The other one is an extended version of the existing coding scheme for point-to-point wiretap channels with causal CSI. A numerical example shows that the achievable region given by the first coding scheme can be strictly larger than that of the second one. However, these two schemes do not outperform each other in general, and there exist numerical examples where, in different channel models, each coding scheme achieves some rate pairs that cannot be achieved by the other. Our established inner bound reduces to some best-known results in the literature as special cases. We further investigate some capacity-achieving cases for state-dependent multiple access wiretap channels (SD-MAWCs) with degraded message sets. It turns out that the two coding schemes are both optimal in these cases.
... There have also been recent works in designing distributed neural compression schemes [22]- [25]. These works are inspired by the information-theoretic results on compression with side information [26], [27], which say that if one has side information on a source to be compressed, an encoder that does not observe side information can perform just as well as one that does (in both cases the decoder observes the side information). However, this is slightly different from the federated setting. ...
Preprint
Full-text available
We discuss a federated learned compression problem, where the goal is to learn a compressor from real-world data which is scattered across clients and may be statistically heterogeneous, yet share a common underlying representation. We propose a distributed source model that encompasses both characteristics, and naturally suggests a compressor architecture that uses analysis and synthesis transforms shared by clients. Inspired by personalized federated learning methods, we employ an entropy model that is personalized to each client. This allows for a global latent space to be learned across clients, and personalized entropy models that adapt to the clients' latent distributions. We show empirically that this strategy outperforms solely local methods, which indicates that learned compression also benefits from a shared global representation in statistically heterogeneous federated settings.
... With the help of the two techniques, for the non-entangled and the entangled side information settings, we have obtained the coding rate limit in the asymptotic regime, which is given by the maximal conditional entropy between the classical source and the side information. A natural question is whether one can consider the rate-distortion problem for quantum Sgarro's coding following the Wyner-Ziv protocol [37]. ...
Article
Full-text available
We consider the task of classical source coding with quantum side information at several decoders. This is a quantum generalization of classical Sgarro's three correlated information sources. We focus on classical–quantum sources, which involve a classical coding part, using the quantum part as side information at the decoder. We consider two models: non-entangled and entangled side information. To obtain the optimal coding rate, we develop a quantum version of maximal simultaneous codes. The achievable rate is found to be determined by the maximal quantum conditional entropy between the classical source and the side information. In particular, our result shows that the larger the size of the quantum part, the smaller the coding rate.
... Here, the compression capacity is defined as the maximum average number of times that the function f can be compressed with zero error for one use of the system, which measures the efficiency of using the system, analogous to the Shannon zero-error capacity [10]-[13], network coding [22]-[30], and network function computation [14]-[20]. In the current paper, we focus on this compression capacity, rather than the notion of compression capacity considered in many previously studied source coding models, which investigate how to efficiently establish a system, e.g., lossless source coding models [1]-[6], zero-error source coding models [7]-[9], and lossy source coding models [31]-[36]. ...
Preprint
In this paper, we put forward the model of a zero-error distributed function compression system of two binary memoryless sources X and Y, where there are two encoders En1 and En2 and one decoder De, connected by two channels (En1, De) and (En2, De) with capacity constraints C1 and C2, respectively. The encoder En1 can observe X or (X,Y) and the encoder En2 can observe Y or (X,Y), according to whether the two switches s1 and s2 are open or closed (corresponding to taking values 0 or 1). The decoder De is required to compress the binary arithmetic sum f(X,Y) = X + Y with zero error by using the system multiple times. We use (s1s2; C1, C2; f) to denote the model, in which it is assumed that C1 ≥ C2 by symmetry. The compression capacity for the model is defined as the maximum average number of times that the function f can be compressed with zero error for one use of the system, which measures the efficiency of using the system. We fully characterize the compression capacities for all four cases of the model (s1s2; C1, C2; f) for s1s2 = 00, 01, 10, 11. Here, the characterization of the compression capacity for the case (01; C1, C2; f) with C1 > C2 is highly nontrivial, and a novel graph coloring approach is developed for it. Furthermore, we apply the compression capacity for (01; C1, C2; f) to an open problem in network function computation: whether the best known upper bound of Guang et al. on the computing capacity is in general tight.
... The assistance produced in Block j is t^(j) = h(s^(j), z^(j)). A v-sequence, v^(j), is then chosen based on t^(j), and is binned as in Wyner-Ziv coding [14], treating the outputs y^(j) as side information that is available to the decoder. The bin index determines the cloud center in Block (j + 1). ...
Preprint
A memoryless state sequence governing the behavior of a memoryless state-dependent channel is to be described causally to an encoder wishing to communicate over said channel. Given the maximal-allowed description rate, we seek the description that maximizes the Shannon capacity. It is shown that the maximum need not be achieved by a memoryless (symbol-by-symbol) description. Such descriptions are, however, optimal when the receiver is cognizant of the state sequence or when the description is allowed to depend on the message. For other cases, a block-Markov scheme with backward decoding is proposed.
... The first alternative involves decoding the information issued from the other node and then re-encoding it, while the second is based on the joint statistics of the data at the cooperating nodes through the use of coding with side information, such as DPC [6] or Wyner-Ziv coding [7]. ...
Preprint
Full-text available
Modulation recognition (MR) is well known as one of the enabling tools for efficient and secure communications in the cognitive radio (CR) context for 5G and beyond networks. In this paper, we propose a robust MR approach designed for a two-way MIMO cooperative relaying network, specifically taking into account two crucial constraints: hardware impairments at the relay transceiver and co-channel interference. To mitigate the effect of these impairments, a feature-based artificial neural network (ANN) recognizer, combined with a nonlinear design of the relay processing matrix based on dirty paper coding (DPC), is investigated in this work. Simulations using two receiver criteria, namely DPC-ZF and DPC-MMSE, have been carried out to validate the effectiveness of the proposed approach. Both DPC-ZF and DPC-MMSE are shown to provide high modulation recognition accuracy regardless of the impairment level at the relay transceiver.
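As a rough sketch of what a feature-based ANN recognizer can look like, here is a generic illustration assuming higher-order-cumulant features and a small MLP; this is not the paper's exact architecture, channel model, or feature set:

# Minimal feature-based modulation recognizer: higher-order cumulants
# of a complex baseband burst fed to a small MLP. Generic illustration,
# not the paper's recognizer.
import numpy as np
from sklearn.neural_network import MLPClassifier

def cumulant_features(x: np.ndarray) -> np.ndarray:
    """Standard C20/C40/C42-style cumulant magnitudes of a burst x."""
    x = x / np.sqrt(np.mean(np.abs(x) ** 2))  # power-normalize
    m20 = np.mean(x ** 2)
    m21 = np.mean(np.abs(x) ** 2)
    m40 = np.mean(x ** 4)
    m42 = np.mean(np.abs(x) ** 4)
    c40 = m40 - 3 * m20 ** 2
    c42 = m42 - np.abs(m20) ** 2 - 2 * m21 ** 2
    return np.array([np.abs(m20), np.abs(c40), np.abs(c42)])

def burst(mod: str, n: int, snr_db: float, rng) -> np.ndarray:
    """Synthesize one noisy burst of BPSK or QPSK symbols (toy AWGN channel)."""
    if mod == "bpsk":
        s = rng.choice([-1.0, 1.0], n).astype(complex)
    else:  # qpsk
        s = (rng.choice([-1, 1], n) + 1j * rng.choice([-1, 1], n)) / np.sqrt(2)
    noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    return s + noise * 10 ** (-snr_db / 20)

rng = np.random.default_rng(0)
mods = ["bpsk", "qpsk"]
X = np.array([cumulant_features(burst(m, 1024, 10.0, rng))
              for _ in range(200) for m in mods])
y = np.array([m for _ in range(200) for m in mods])
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)
print(clf.score(X, y))  # near 1.0: |C40| separates BPSK (~2) from QPSK (~1)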
... One-way reconciliation, a.k.a. source coding (or compression) with side information, has been studied since the 1970s [273, 274]. While related to error-correcting codes, the idea here is that the data is transmitted over a noisy channel without adding any redundancy. ...
Thesis
Full-text available
Cryptography is considered the strongest technical control to protect data, but state-of-the-art methods suffer from shortcomings when applied in cloud computing or the Internet of Things. New approaches are needed that enable more agile data handling, support computations on encrypted data, and provide long-term security, even against quantum computer attacks. In this thesis, we present research results in building long-term secure but practically efficient protocols and systems based on cryptography with information-theoretic security (ITS) for modern cloud-based applications. It brings together old and new technologies from the world of information-theoretic cryptography to overcome limitations of standard cryptographic approaches and enable end-to-end security in modern application scenarios. The focus is on secret sharing, multiparty computation, and quantum key distribution, which are well known to the cryptographic community but not broadly applied in practice. In essence, we explored the possibilities of building ITS solutions for data storage, data processing, and communication. Nevertheless, pure ITS is not always necessary or possible, so we also study combinations with computational (but quantum-safe) symmetric primitives where appropriate for better efficiency. This thesis comprises three main parts, each containing individual contributions. Firstly, the problem of secure cloud storage and data sharing is addressed. A novel architecture for secure distributed multi-cloud storage is presented, based on the combination of secret sharing with a Byzantine fault-tolerant (BFT) protocol. To cope with performance problems encountered in the first proof-of-concept, a performance model was developed, and results from extensive simulations of the networking layer are presented. We also explored and optimized encoding performance for secret sharing in software and showed the potential for hardware acceleration. Additionally, to support data integrity monitoring, we present an easy-to-realize and low-cost auditing approach for the developed storage system. The technique is based on batching, which has also been extended to generic batch-verifiable secret sharing. Secondly, we present efficient solutions for privacy-preserving data processing based on ITS flavors of secure multiparty computation (MPC). Fortunately, ITS-MPC relies on secret sharing for encoding and thus nicely extends the previous work on secure storage. We compared the most relevant software frameworks and performed intensive performance testing, revealing only limited scalability of the technology for more advanced computations, especially with respect to the number of MPC nodes. Therefore, we propose the use of verifiable MPC to build privacy-preserving data markets. By combining MPC with compatible zero-knowledge protocols (ZKP), we were able to demonstrate an end-to-end verifiable yet privacy-preserving market platform for smart manufacturing that can efficiently perform auctions with a large number of participants. We also explored the possibility of running more elaborate market mechanisms based on optimization and achieved very favorable results for a use case in air traffic management. Thirdly, regarding secure communication, this thesis presents results on particular aspects of quantum key distribution (QKD). A very efficient algorithmic approach for timing synchronization between QKD peers is presented, which helped to free an optical channel in a QKD system developed at AIT.
To overcome the problem of expensive compute hardware needed to run QKD post-processing on the device, we introduce the novel idea of securely offloading post-processing from the device and prove that it is possible to securely outsource information reconciliation to a single server in the case of direct reconciliation. Additionally, we show a negative result for an efficient QKD authentication protocol proposed in 2004, which we were able to fully break with the method presented in this thesis. Finally, we discuss possibilities to integrate QKD with communication systems and report a real-world demonstration combining secure storage with QKD to achieve end-to-end information-theoretic security in a medical use case.
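Since secret sharing underpins both the storage and MPC parts of the thesis, a minimal sketch of Shamir's $(t, n)$ scheme over a prime field may help fix ideas; this is the textbook construction, not the thesis's optimized encoder:

# Minimal Shamir (t-of-n) secret sharing over GF(p); textbook construction.
import random

P = 2**61 - 1  # a Mersenne prime, large enough for 61-bit secrets

def share(secret: int, t: int, n: int):
    """Split secret into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):  # evaluate the degree-(t-1) polynomial at x
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(secret=123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789
assert reconstruct(shares[1:4]) == 123456789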
... This calls for a radical change in the video coding architecture. Inspired by the Slepian-Wolf (SW) [13] and Wyner-Ziv (WZ) [14] theorems on distributed source coding developed in the 1970s, distributed video coding, also known as WZ video coding [15], has emerged as a promising solution to complement existing video compression methods for multimedia applications. ...
Preprint
Full-text available
Prevalent predictive coding-based video compression methods rely on a heavy encoder to reduce temporal redundancy, which makes it challenging to deploy them on resource-constrained devices. Meanwhile, as early as the 1970s, distributed source coding theory indicated that independent encoding and joint decoding with side information (SI) can achieve highly efficient compression of correlated sources. This has inspired a distributed coding architecture aimed at reducing encoding complexity. However, traditional distributed coding methods suffer from a substantial performance gap to predictive coding ones. Inspired by the great success of learning-based compression, we propose the first end-to-end distributed deep video compression framework to improve the rate-distortion performance. A key ingredient is an effective SI generation module at the decoder, which helps to exploit inter-frame correlations without computation-intensive encoder-side motion estimation and compensation. Experiments show that our method significantly outperforms conventional distributed video coding and H.264. Meanwhile, it enjoys a 6-7x encoding speedup over DVC [1] with comparable compression performance. Code is released at https://github.com/Xinjie-Q/Distributed-DVC.
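To see the decoder-side-information idea in its simplest form, the following toy, which is unrelated to the paper's learned modules, bins a 4-bit source down to 2 bits and lets correlated side information at the decoder resolve the bin; decoding is error-free whenever $|X - Y|$ is small enough:

# Toy Wyner-Ziv-style binning: send only the bin index of X and let the
# decoder's correlated side information Y pick the right element of the bin.
# Didactic sketch, not the paper's learned compressor.
import random

NUM_BINS = 4  # 2 bits on the wire instead of 4

def encode(x: int) -> int:
    return x % NUM_BINS  # bin index (coset of 4Z in {0, ..., 15})

def decode(bin_idx: int, y: int) -> int:
    # Candidates in this bin are spaced NUM_BINS apart; pick the closest to y.
    candidates = range(bin_idx, 16, NUM_BINS)
    return min(candidates, key=lambda c: abs(c - y))

random.seed(1)
errors = 0
for _ in range(10_000):
    x = random.randrange(16)
    y = min(15, max(0, x + random.choice([-1, 0, 1])))  # |X - Y| <= 1
    errors += decode(encode(x), y) != x
print(errors)  # 0: exact recovery because |X - Y| < NUM_BINS / 2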
... One-way reconciliation (i.e., source coding with side information) has been studied since the 1970s [23,24]. While related to error-correcting codes, the idea here is that the data are transmitted over a noisy channel without adding any redundancy. ...
Article
Full-text available
Quantum key distribution (QKD) has been researched for almost four decades and is currently making its way to commercial applications. However, deployment of the technology at scale is challenging because of the very particular nature of QKD and its physical limitations. Among other issues, QKD is computationally intensive in the post-processing phase, and devices are therefore complex and power-hungry, which leads to problems in certain application scenarios. In this work, we study the possibility of offloading computationally intensive parts of the QKD post-processing stack in a secure way to untrusted hardware. We show how error correction can be securely offloaded for discrete-variable QKD to a single untrusted server, and that the same method cannot be used for long-distance continuous-variable QKD. Furthermore, we analyze possibilities for multi-server protocols to be used for error correction and privacy amplification. Even in cases where it is not possible to offload to an external server, being able to delegate computation to untrusted hardware components on the device itself could improve the cost and certification effort for device manufacturers.
... The scheme of Mittal and Phamdo was subsequently improved by Reznic et al. [10] (see also [11], [12], [13, Ch. 11.1]) by replacing the successive refinement layers with lattice-based Wyner-Ziv coding [14], [15], [2, Ch. 11.3], in which, in contrast to the digital layers of the scheme of Mittal and Phamdo, each layer improves with the SNR. ...
Preprint
Full-text available
We consider the problem of transmitting a source over an infinite-bandwidth additive white Gaussian noise channel with unknown noise level under an input energy constraint. We construct a universal scheme that uses modulo-lattice modulation with multiple layers; for each layer, we employ either analog linear modulation or analog pulse position modulation (PPM). We show that the designed scheme with linear layers requires less energy than existing solutions to achieve the same distortion profile, which increases quadratically with the noise level; replacing the linear layers with PPM layers offers an additional improvement.
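The modulo-lattice operation at the heart of such layered schemes has a standard definition (background, not specific to this preprint), where $Q_\Lambda$ denotes the nearest-neighbor quantizer of the lattice $\Lambda$:

x \bmod \Lambda \;=\; x - Q_\Lambda(x), \qquad Q_\Lambda(x) \;=\; \operatorname*{arg\,min}_{\lambda \in \Lambda} \lVert x - \lambda \rVert .

In one dimension with $\Lambda = \Delta \mathbb{Z}$, this is a centered modulo-$\Delta$ reduction, which keeps the transmitted signal bounded regardless of the source value.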
... For instance, an FL server requests only an averaging aggregation of the gradients calculated and sent by distributed ML nodes. Under these scenarios, the communication rate region established by existing distributed source coding theorems, e.g., the Slepian-Wolf theorem [294] and the Wyner-Ziv coding [295], sometimes becomes much larger than the minimum communication rate needed for these computing tasks. To date, theoretical rate limits are available only for a few types of computing tasks in very special use cases, e.g., the "µ-sum" computing task of two distributed Gaussian sources in a multiple access channel [296]. ...
Article
To process and transfer large amounts of data in emerging wireless services, it has become increasingly appealing to exploit distributed data communication and learning. Specifically, edge learning (EL) enables local model training on geographically dispersed edge nodes and minimizes the need for frequent data exchange. However, the current design of separating EL deployment and communication optimization does not yet reap the promised benefits of distributed signal processing, and sometimes suffers from excessive signalling overhead, long processing delay, and unstable learning convergence. In this paper, we provide an overview of practical distributed EL techniques and their interplay with advanced communication optimization designs. In particular, typical performance metrics for dual-functional learning and communication networks are discussed. Also, recent achievements of enabling techniques for the dual-functional design are surveyed with exemplifications from the mutual perspectives of "communications for learning" and "learning for communications." The application of EL techniques within a variety of future communication systems is also envisioned for beyond 5G (B5G) wireless networks. For the application in goal-oriented semantic communication, we present a first mathematical model of the goal-oriented source entropy as an optimization problem. In addition, from the viewpoint of information theory, we identify fundamental open problems of characterizing rate regions for communication networks supporting distributed learning-and-computing tasks. We also present technical challenges as well as emerging application opportunities in this field, with the aim of inspiring future research and promoting widespread developments of EL in B5G.
Article
Full-text available
The work here studies the communication cost for a multi-server multi-task distributed computation framework, as well as for a broad class of functions and data statistics. Considering the framework where a user seeks the computation of multiple complex (conceivably non-linear) tasks from a set of distributed servers, we establish communication cost upper bounds for a variety of data statistics, function classes, and data placements across the servers. To do so, we apply, for the first time here, Körner's characteristic graph approach, which is known to capture the structural properties of data and functions, to the promising framework of multi-server multi-task distributed computing. Going beyond the general expressions, and in order to offer clearer insight, we also consider the well-known scenario of cyclic dataset placement and linearly separable functions over the binary field, in which case our approach exhibits considerable gains over the state of the art. Similar gains are identified for the case of multi-linear functions.
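A minimal sketch of the characteristic-graph idea for computing $f(X, Y)$ with $Y$ at the decoder: two source symbols must receive different codewords iff some jointly possible $y$ makes $f$ disagree on them, and any proper coloring of the resulting graph yields a zero-error one-shot code. The toy support, the toy $f$, and the greedy coloring below are illustrative assumptions, not the paper's construction:

# Körner's characteristic graph for computing f(X, Y) with Y at the decoder;
# greedy coloring gives a (not necessarily optimal) zero-error one-shot code.
import networkx as nx

X_VALS = range(4)
Y_VALS = range(4)

def joint_possible(x: int, y: int) -> bool:
    """Toy support of p(x, y): x and y never differ by more than 1."""
    return abs(x - y) <= 1

def f(x: int, y: int) -> int:
    """Toy function the decoder must compute."""
    return (x + y) % 2

# Connect x1, x2 if some jointly possible y makes f(x1, y) != f(x2, y),
# i.e. the pair is confusable and must not share a codeword.
G = nx.Graph()
G.add_nodes_from(X_VALS)
for x1 in X_VALS:
    for x2 in X_VALS:
        if x1 < x2 and any(
            joint_possible(x1, y) and joint_possible(x2, y) and f(x1, y) != f(x2, y)
            for y in Y_VALS
        ):
            G.add_edge(x1, x2)

coloring = nx.greedy_color(G, strategy="largest_first")
print(coloring, len(set(coloring.values())))  # 2 codewords suffice here, not 4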
Article
Let $X, Y$ be a pair of discrete random variables with a given joint probability distribution. For $0 \leq x \leq H(X)$, where $H(X)$ is the entropy of $X$, define the function $F(x)$ as the infimum of $H(Y \mid W)$, the conditional entropy of $Y$ given $W$, with respect to all discrete random variables $W$ such that a) $H(X \mid W) = x$, and b) $W$ and $Y$ are conditionally independent given $X$. This paper concerns the function $F$, its properties, its calculation, and its applications to several problems in information theory.
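Restated compactly in the notation above, with condition b) written as the Markov chain $W \to X \to Y$:

F(x) \;=\; \inf \bigl\{ H(Y \mid W) \,:\, H(X \mid W) = x, \ W \to X \to Y \bigr\}, \qquad 0 \leq x \leq H(X) .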
Article
Let $\{(X_k, Y_k, V_k)\}_{k=1}^{\infty}$ be a sequence of independent copies of the triple $(X, Y, V)$ of discrete random variables. We consider the following source coding problem with a side information network. This network has three encoders numbered 0, 1, and 2, the inputs of which are the sequences $\{V_k\}$, $\{X_k\}$, and $\{Y_k\}$, respectively. The output of encoder $i$ is a binary sequence of rate $R_i$, $i = 0, 1, 2$. There are two decoders, numbered 1 and 2, whose task is to deliver essentially perfect reproductions of the sequences $\{X_k\}$ and $\{Y_k\}$, respectively, to two distinct destinations. Decoder 1 observes the outputs of encoders 0 and 1, and decoder 2 observes the outputs of encoders 0 and 2. The sequence $\{V_k\}$ and its binary encoding (by encoder 0) play the role of side information, which is available to the decoders only. We study the characterization of the family of rate triples $(R_0, R_1, R_2)$ for which this system can deliver essentially perfect reproductions (in the usual Shannon sense) of $\{X_k\}$ and $\{Y_k\}$. The principal result is a characterization of this family via an information-theoretic minimization. Two special cases are of interest. In the first, $V = (X, Y)$, so that the encoding of $\{V_k\}$ contains common information. In the second, $Y \equiv 0$, so that our problem becomes a generalization of the source coding problem with side information studied by Slepian and Wolf [3].
Article
Correlated information sequences $\cdots, X_{-1}, X_0, X_1, \cdots$ and $\cdots, Y_{-1}, Y_0, Y_1, \cdots$ are generated by repeated independent drawings of a pair of discrete random variables $X, Y$ from a given bivariate distribution $P_{XY}(x, y)$. We determine the minimum number of bits per character $R_X$ and $R_Y$ needed to encode these sequences so that they can be faithfully reproduced under a variety of assumptions regarding the encoders and decoders. The results, some of which are not at all obvious, are presented as an admissible rate region $\mathcal{R}$ in the $R_X$-$R_Y$ plane. They generalize a similar and well-known result for a single information sequence, namely $R_X \geq H(X)$ for faithful reproduction.
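For reference, the admissible rate region established by this paper (the Slepian-Wolf theorem) is

\mathcal{R} \;=\; \bigl\{ (R_X, R_Y) \,:\, R_X \geq H(X \mid Y), \ R_Y \geq H(Y \mid X), \ R_X + R_Y \geq H(X, Y) \bigr\},

so the sum rate can be as small as the joint entropy $H(X, Y)$ even though the two encoders operate separately.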