ABSTRACT We study a source-channel coding scheme in which source messages are assigned to different classes and encoded using a channel code that depends on the class index. The performance of this scheme is studied by means of random-coding error exponents and validated by simulation of a low-complexity implementation using existing source and channel codes. While each class code can be seen as a concatenation of a source code and a channel code, the overall performance improves on that of separate source-channel coding and approaches that of joint source-channel coding when the number of classes increases.
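The class assignment can be illustrated with a toy numeric sketch. This is not the paper's optimal construction: it assumes a binary memoryless source and a hypothetical two-class split by empirical entropy, with an arbitrary threshold, just to show messages being routed to per-class codes.

```python
import numpy as np

# Toy sketch (not the paper's exact construction): partition length-n strings
# of a binary memoryless source into two classes by empirical entropy, so that
# each class could then be handled by its own channel code.
rng = np.random.default_rng(0)
p = 0.1          # source P(X=1); illustrative value
n = 20           # block length
threshold = 0.5  # hypothetical class boundary on empirical entropy (bits)

def empirical_entropy(x):
    """Binary entropy (bits) of the fraction of ones in string x."""
    q = np.mean(x)
    if q in (0.0, 1.0):
        return 0.0
    return -q * np.log2(q) - (1 - q) * np.log2(1 - q)

msgs = rng.random((1000, n)) < p          # 1000 sample source strings
classes = np.array([empirical_entropy(m) > threshold for m in msgs])
print("fraction in high-entropy class:", classes.mean())
```

With these parameters most strings are typical (empirical entropy near the source entropy of about 0.47 bits), so the low-entropy class captures the bulk of the probability mass, which is what lets a short code serve it.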
EUROPEAN COOPERATION IN THE FIELD OF SCIENTIFIC AND TECHNICAL RESEARCH (EURO-COST), COST 2100 TD(08)620, Lille, France, Oct 6-8, 2008. SOURCE: 1Forschungszentrum ...
We study the achievable error exponents in joint source-channel coding by deriving an upper bound on the average error probability using Gallager's techniques. The bound is based on a construction for which source messages are assigned to disjoint subsets (referred to as classes), and codewords are independently generated according to a distribution that depends on the class of the source message. Particularizing the bound to discrete memoryless systems, we show that two optimally chosen classes and product distributions are necessary and sufficient to attain the sphere-packing exponent in those cases where it is tight. Finally, we prove that the very same results extend to lossy joint source-channel coding for sources and distortion measures that make the source reliability function convex.
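The channel-coding side of this machinery can be made concrete. The sketch below evaluates Gallager's E_0 function and the random-coding exponent E_r(R) = max over 0 <= rho <= 1 of [E_0(rho, Q) - rho R] for a binary symmetric channel with uniform input; the channel, crossover probability, and rate are illustrative choices, not values from the paper.

```python
import numpy as np

# Gallager's E_0 function and the random-coding exponent
#   E_r(R) = max_{0 <= rho <= 1} [E_0(rho, Q) - rho * R]
# for a binary symmetric channel with a uniform input distribution Q.
delta = 0.1                          # BSC crossover probability (illustrative)
W = np.array([[1 - delta, delta],
              [delta, 1 - delta]])   # W[x, y] = P(y | x)
Q = np.array([0.5, 0.5])             # uniform input distribution

def E0(rho):
    """Gallager's E_0(rho, Q) in bits."""
    s = 1.0 / (1.0 + rho)
    inner = (Q[:, None] * W ** s).sum(axis=0)   # sum over x, for each y
    return -np.log2((inner ** (1.0 + rho)).sum())

def Er(R, grid=np.linspace(0.0, 1.0, 1001)):
    """Random-coding exponent at rate R (bits/channel use), grid search."""
    return max(E0(rho) - rho * R for rho in grid)

# Capacity of the BSC, for reference: E_r(R) > 0 exactly when R < C.
C = 1 + delta * np.log2(delta) + (1 - delta) * np.log2(1 - delta)
print(f"capacity ~ {C:.3f} bits, E_r(0.3) ~ {Er(0.3):.4f}")
```

Note that E_0(0) = 0, so the exponent vanishes at rates above capacity, where the maximizing rho collapses to zero.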
We prove two alternative expressions for the error probability of Bayesian M-ary hypothesis testing. The first expression is related to the error probability of binary hypothesis testing, and the second to a generalization of the Verdú-Han lower bound. This result is used to characterize the error probability of the main problems in information theory and to identify the steps where previous converse results are loose with respect to the actual probability of error.
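The quantity being characterized is a standard one: for discrete observations, the minimum average error probability of Bayesian M-ary hypothesis testing is attained by the MAP test and equals 1 - sum over y of max over i of P(i) P(y | i). A minimal numeric check, with made-up priors and likelihoods:

```python
import numpy as np

# Minimum error probability of Bayesian M-ary hypothesis testing over a
# discrete observation, attained by the MAP (maximum a posteriori) test:
#   eps = 1 - sum_y max_i P(i) P(y | i).
priors = np.array([0.5, 0.3, 0.2])            # P(i), M = 3 hypotheses
likes = np.array([[0.7, 0.2, 0.1],            # likes[i, y] = P(y | i),
                  [0.1, 0.8, 0.1],            # each row sums to 1
                  [0.3, 0.3, 0.4]])

joint = priors[:, None] * likes               # joint[i, y] = P(i, y)
eps = 1.0 - joint.max(axis=0).sum()           # MAP error probability
print(f"minimum error probability = {eps:.4f}")
```

The paper's contribution is to rewrite this exact quantity in two alternative forms, not to bound it; the expression above is just the baseline definition.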
50th Annual Allerton Conference on Communication, Control, and Computing (Allerton), 2012
ABSTRACT Based on the hypothesis-testing method, we derive lower bounds on the average error probability of finite-length joint source-channel coding. The extension of the meta-converse bound of channel coding to joint source-channel coding depends on the codebook and the decoding rule, and is thus a priori computationally challenging. Weaker versions of this general bound recover known converses in the literature and provide computationally feasible expressions.
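The hypothesis-testing primitive underlying the meta-converse is the quantity beta_alpha(P, Q): the smallest type-II error (Q-probability of deciding P) among tests whose P-probability of deciding P is at least alpha. By the Neyman-Pearson lemma it is achieved by thresholding the likelihood ratio P/Q, randomizing at the boundary. A minimal sketch for discrete distributions (assuming Q > 0 everywhere and 0 < alpha <= 1); the distributions below are illustrative, not from the paper.

```python
import numpy as np

def beta(alpha, P, Q):
    """Neyman-Pearson beta_alpha(P, Q) for discrete P, Q (assumes Q > 0)."""
    order = np.argsort(-(P / Q))          # outcomes by decreasing P/Q
    p, q = P[order], Q[order]
    cp = np.cumsum(p)
    k = np.searchsorted(cp, alpha)        # first prefix with P-mass >= alpha
    b = q[:k].sum()
    # randomized inclusion of the boundary outcome, to hit alpha exactly
    need = alpha - (cp[k - 1] if k > 0 else 0.0)
    if need > 0:
        b += q[k] * need / p[k]
    return b

P = np.array([0.5, 0.3, 0.2])
Q = np.array([0.2, 0.3, 0.5])
print(f"beta_0.9(P, Q) = {beta(0.9, P, Q):.4f}")
```

Sanity checks on this function: beta_alpha(P, P) = alpha for any alpha, and beta_1(P, Q) = 1. In the meta-converse, P is the true joint distribution induced by the code and Q an auxiliary distribution, which is where the dependence on codebook and decoder enters.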
Papers by Gonzalo Vazquez-Vilar