
    Gonzalo Vazquez-Vilar

    ABSTRACT We study a source-channel coding scheme in which source messages are assigned to different classes and encoded using a channel code that depends on the class index. The performance of this scheme is studied by means of random-coding error exponents and validated by simulation of a low-complexity implementation using existing source and channel codes. While each class code can be seen as a concatenation of a source code and a channel code, the overall performance improves on that of separate source-channel coding and approaches that of joint source-channel coding when the number of classes increases.
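    As a point of reference for the random-coding analysis, a minimal sketch of the standard Gallager channel functions (textbook definitions, not results specific to this work):

    \[ E_0(\rho, P_X) = -\log \sum_{y} \Big( \sum_{x} P_X(x)\, W(y\mid x)^{\frac{1}{1+\rho}} \Big)^{1+\rho}, \qquad E_r(R) = \max_{0 \le \rho \le 1} \max_{P_X} \big[ E_0(\rho, P_X) - \rho R \big]. \]

    Informally, if class k contains |A_k| source messages and is served by a length-n channel code, that code operates at rate R_k \approx \frac{1}{n}\log|A_k|, and random coding makes its conditional error probability decay with exponent at least E_r(R_k); assigning the likely messages to small classes therefore trades a lower rate, and hence a larger exponent, against the probability of the source producing a message of that class.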
    COST 2100 TD(08)620, European Cooperation in the Field of Scientific and Technical Research (EURO-COST), Lille, France, Oct. 6-8, 2008. Source: Forschungszentrum ...
    We study the achievable error exponents in joint source-channel coding by deriving an upper bound on the average error probability using Gallager's techniques. The bound is based on a construction for which source messages are assigned to disjoint subsets (referred to as classes), and codewords are independently generated according to a distribution that depends on the class of the source message. Particularizing the bound to discrete memoryless systems, we show that two optimally chosen classes and product distributions are necessary and sufficient to attain the sphere-packing exponent in those cases where it is tight. Finally, we prove that the very same results extend to lossy joint source-channel coding for sources and distortion measures that make the source reliability function convex.
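    For context, the two standard exponents this abstract refers to, in Gallager's notation and for one source symbol per channel use (these are the textbook forms, not the paper's refined class-based bound): with E_0(\rho, P_X) as above and the source function E_s(\rho),

    \[ E_s(\rho) = (1+\rho)\,\log \sum_{v} P_V(v)^{\frac{1}{1+\rho}}, \qquad P_e \le \exp\Big\{ -n \max_{0\le\rho\le1} \big[ E_0(\rho, P_X) - E_s(\rho) \big] \Big\}, \]

    while the sphere-packing exponent of the channel at rate R is \( E_{\mathrm{sp}}(R) = \sup_{\rho \ge 0} \max_{P_X} \big[ E_0(\rho, P_X) - \rho R \big] \).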
    We prove two alternative expressions for the error probability of Bayesian M-ary hypothesis testing. The first expression is related to the error probability of binary hypothesis testing, and the second one to a generalization of the Verdú-Han lower bound. This result is used to characterize the error probability of the main problems in information theory and identify the steps where previous converse results are loose with respect to the actual probability of error.
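    Two standard reference points (textbook channel-coding forms, not the paper's generalized expressions): the minimum (MAP) error probability of Bayesian M-ary hypothesis testing with prior P_M and observation Y is

    \[ \epsilon = 1 - \sum_{y} \max_{m} P_M(m)\, P_{Y\mid M}(y \mid m), \]

    and the Verdú-Han lower bound for a channel code with M equiprobable messages reads, for every \gamma > 0 (natural logarithms),

    \[ \epsilon \ \ge\ \Pr\big[\, i(X;Y) \le \log M - \gamma \,\big] - e^{-\gamma}, \qquad i(x;y) = \log \frac{P_{Y\mid X}(y\mid x)}{P_Y(y)}. \]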
    ABSTRACT Based on the hypothesis-testing method, we derive lower bounds on the average error probability of finite-length joint source-channel coding. The extension of the meta-converse bound of channel coding to joint source-channel coding depends on the codebook and the decoding rule and is thus, a priori, computationally challenging. Weaker versions of this general bound recover known converses in the literature and provide computationally feasible expressions.
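    For reference, the binary hypothesis-testing quantity that underlies bounds of this family (standard definition): for distributions P and Q on a common space,

    \[ \beta_\alpha(P, Q) = \min_{T:\ \Pr_P[T(Z)=1]\,\ge\,\alpha} \ \Pr_Q[T(Z)=1], \]

    i.e., the smallest type-II error over (possibly randomized) tests meeting the type-I constraint. Meta-converse-type bounds compare the true distribution of source, channel input and channel output with an auxiliary distribution through \beta_\alpha; the choice of the auxiliary distribution and the relaxations used to make \beta_\alpha computable determine how tight the resulting converse is.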
    ABSTRACT We derive a random-coding upper bound on the average probability of error of joint source-channel coding that recovers Csiszár's error exponent when used with product distributions over the channel inputs. Our proof... more
    ABSTRACT We derive a random-coding upper bound on the average probability of error of joint source-channel coding that recovers Csiszár's error exponent when used with product distributions over the channel inputs. Our proof technique for the error probability analysis employs a code construction for which source messages are assigned to subsets and codewords are generated with a distribution that depends on the subset.
    ABSTRACT We study a source-channel coding scheme in which source messages are assigned to classes and encoded using a channel code that depends on the class index. While each class code can be seen as a concatenation of a source code and a channel code, the overall performance improves on that of separate source-channel coding and approaches that of joint source-channel coding as the number of classes increases. The performance of this scheme is studied by means of random-coding bounds and validated by simulation of a low-complexity implementation using existing source and channel codes.
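    A minimal illustrative sketch of the class-assignment idea. The partition rule (contiguous groups of roughly equal total probability), the number of classes, and the per-class rate accounting below are hypothetical choices for illustration, not the authors' implementation:

import numpy as np

def partition_into_classes(probs, num_classes):
    """Sort source messages by decreasing probability and split them into
    num_classes contiguous groups of roughly equal total probability."""
    order = np.argsort(probs)[::-1]            # most likely messages first
    cum = np.cumsum(probs[order])
    # class k collects the messages whose cumulative probability falls in
    # the k-th subinterval of [0, 1]
    edges = np.linspace(0.0, 1.0, num_classes + 1)[1:-1]
    labels_sorted = np.searchsorted(edges, cum, side='right')
    labels = np.empty_like(labels_sorted)
    labels[order] = labels_sorted              # map back to original indices
    return labels

def per_class_rates(labels, block_length):
    """Channel-code rate (bits per channel use) needed to index the
    messages of each class over a block of block_length channel uses."""
    rates = {}
    for k in np.unique(labels):
        class_size = int(np.sum(labels == k))
        rates[int(k)] = np.log2(max(class_size, 1)) / block_length
    return rates

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    probs = rng.dirichlet(np.ones(1024) * 0.1)  # a skewed toy source
    labels = partition_into_classes(probs, num_classes=4)
    print(per_class_rates(labels, block_length=128))

    With a skewed source, the first class contains only a handful of highly likely messages and therefore needs a low-rate (strongly protected) channel code, while the long tail of unlikely messages lands in larger classes served by higher-rate codes; this unequal error protection is what lets the scheme improve on a single separate source-channel code as the number of classes grows.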
    ABSTRACT We show that the meta-converse bound derived by Polyanskiy et al. provides the exact error probability for a fixed joint source-channel code and an appropriate choice of the bound parameters. While the resulting expression is not computable in general, it identifies where known converse bounds on the minimum achievable error probability are loose.
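    For reference, the channel-coding form of the meta-converse bound of Polyanskiy et al. (the standard statement; the paper's contribution concerns the joint source-channel setting and the parameter choice that turns the inequality into an equality for a fixed code): any code with M equiprobable messages and average error probability \epsilon over P_{Y\mid X} satisfies, for every auxiliary output distribution Q_Y,

    \[ \beta_{1-\epsilon}\big( P_X P_{Y\mid X},\ P_X \times Q_Y \big) \ \le\ \frac{1}{M}, \]

    with \beta_\alpha as defined above and P_X the input distribution induced by the codebook.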