Zellner (1988) modeled statistical inference in terms of information processing and postulated the Information Conservation Principle (ICP) between the input and output of the information processing block, showing that this yielded Bayesian inference as the optimum information processing rule. Recently, Alemi (2019) reviewed Zellner's work in the context of machine learning and showed that the ICP could be seen as a special case of a more general optimum information processing criterion, namely the Predictive Information Bottleneck Objective. However, Alemi modeled the model training step in machine learning as using training and test data sets only, and did not account for the use of a validation data set during training. The present note is an attempt to extend Alemi's information processing formulation of machine learning, and the predictive information bottleneck objective for model training, to the widely used scenario where training utilizes not only a training but also a validation data set.
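Zellner's conservation argument can be checked numerically on a discrete parameter space. The sketch below (with made-up prior and likelihood values) computes the gap between output information and input information for a candidate output density q; Zellner showed this gap equals KL(q || posterior), so it vanishes exactly when q is the Bayes posterior:

```python
import numpy as np

# Illustrative discrete setup: three parameter values, one fixed observation x.
prior = np.array([0.5, 0.3, 0.2])          # p(theta)
lik   = np.array([0.7, 0.2, 0.4])          # l(x | theta)
m     = np.dot(prior, lik)                 # marginal m(x)
posterior = prior * lik / m                # Bayes' rule

def info_gap(q):
    """Output information minus input information for output density q.
    This equals KL(q || posterior) >= 0, so Bayes' rule is the
    information-conserving (100%-efficient) processing rule."""
    out_info = np.sum(q * np.log(q)) + np.log(m)
    in_info  = np.sum(q * np.log(prior)) + np.sum(q * np.log(lik))
    return out_info - in_info

print(info_gap(posterior))                  # ~0: information is conserved
print(info_gap(np.array([1/3, 1/3, 1/3]))) # > 0: any other rule loses information
```

Any non-Bayesian output density strictly increases the gap, which is the sense in which Bayesian inference is the optimum information processing rule.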
Communications standards are designed via committees of humans holding repeated meetings over months or even years until consensus is achieved. This includes decisions regarding the modulation and coding schemes to be supported over an air interface. We propose a way to “automate” the selection of the set of modulation and coding schemes to be supported over a given air interface and thereby streamline both the standards design process and the ease of extending the standard to support new modulation schemes applicable to new higher-level applications and services. Our scheme involves machine learning, whereby a constructor entity submits proposals to an evaluator entity, which returns a score for the proposal. The constructor employs reinforcement learning to iterate on its submitted proposals until a score is achieved that was previously agreed upon by both constructor and evaluator to be indicative of satisfying the required design criteria (including performance metrics for trans...
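The constructor/evaluator loop can be sketched as follows. Everything here is a hypothetical stand-in (the candidate scheme list, the scoring function, and the agreed target score are illustrative, and a simple greedy random search replaces the reinforcement learner), meant only to show the shape of the iteration:

```python
import random

# Hypothetical candidate modulation-and-coding schemes (MCS).
CANDIDATE_MCS = ["BPSK-1/2", "QPSK-1/2", "QPSK-3/4", "16QAM-1/2",
                 "16QAM-3/4", "64QAM-2/3", "64QAM-5/6"]

def evaluator_score(proposal):
    """Stand-in evaluator: rewards covering a wide range of spectral
    efficiencies while penalizing overly large scheme sets."""
    if not proposal:
        return 0.0
    eff = {"BPSK-1/2": 0.5, "QPSK-1/2": 1.0, "QPSK-3/4": 1.5, "16QAM-1/2": 2.0,
           "16QAM-3/4": 3.0, "64QAM-2/3": 4.0, "64QAM-5/6": 5.0}
    effs = sorted(eff[s] for s in proposal)
    coverage = effs[-1] - effs[0]                 # efficiency range served
    return coverage - 0.3 * max(0, len(proposal) - 4)

def constructor(target=4.0, iterations=200, seed=0):
    """Toy 'constructor': mutate the proposal, keep it if the evaluator's
    score improves, stop once the pre-agreed target score is reached."""
    rng = random.Random(seed)
    proposal = {rng.choice(CANDIDATE_MCS)}
    best = evaluator_score(proposal)
    for _ in range(iterations):
        trial = set(proposal)
        trial.symmetric_difference_update({rng.choice(CANDIDATE_MCS)})  # add/drop one scheme
        s = evaluator_score(trial)
        if s > best:
            proposal, best = trial, s
        if best >= target:
            break
    return sorted(proposal), best

schemes, score = constructor()
print(schemes, score)
```

In the actual proposal the constructor would be a reinforcement learner and the evaluator would embody the committee's agreed performance criteria; the loop structure, however, is the same.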
We view the Information Bottleneck Principle (IBP: Tishby et al., 1999; Schwartz-Ziv and Tishby, 2017) and Predictive Information Bottleneck Principle (PIBP: Still et al., 2007; Alemi, 2019) as special cases of a family of general information bottleneck objectives (IBOs). Each IBO corresponds to a particular constrained optimization problem where the constraints apply to: (a) the mutual information between the training data and the learned model parameters or extracted representation of the data, and (b) the mutual information between the learned model parameters or extracted representation of the data and the test data (if any). The heuristics behind the IBP and PIBP are shown to yield different constraints in the corresponding constrained optimization problem formulations. We show how other heuristics lead to a new IBO, different from both the IBP and PIBP, and use the techniques from (Alemi, 2019) to derive and optimize a variational upper bound on the new IBO. We then apply the ...
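For intuition, the two mutual-information quantities that the constraints act on can be computed directly in a small discrete example. The sketch below (with an illustrative joint p(x, y) and stochastic encoder p(z | x)) evaluates I(X; Z) and I(Z; Y) and the classic Lagrangian trade-off between them, which the IBP and PIBP specialize with different constraints:

```python
import numpy as np

def mutual_information(pxy):
    """I(X;Y) in nats for a joint pmf given as a 2-D array."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return float(np.sum(pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])))

# Illustrative joint p(x, y) and stochastic encoder p(z | x).
p_xy = np.array([[0.30, 0.10],
                 [0.05, 0.25],
                 [0.10, 0.20]])            # rows: x, cols: y
p_z_given_x = np.array([[0.9, 0.1],
                        [0.2, 0.8],
                        [0.1, 0.9]])       # rows: x, cols: z

p_x = p_xy.sum(axis=1)
p_xz = p_x[:, None] * p_z_given_x          # joint p(x, z)
p_zy = p_z_given_x.T @ p_xy                # joint p(z, y) under the chain Z - X - Y

beta = 2.0
I_xz = mutual_information(p_xz)            # compression cost
I_zy = mutual_information(p_zy)            # predictive value
print(I_xz, I_zy, I_xz - beta * I_zy)      # IB-style Lagrangian
```

By the data processing inequality, I(Z; Y) can exceed neither I(X; Z) nor I(X; Y); the different IBOs in the family amount to different choices of which such quantities are constrained and how.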
We investigate an ad hoc network where node locations are distributed according to a homogeneous Poisson process with intensity λn. We assume that all the nodes are equipped with an identical wireless transceiver capable of operating satisfactorily up to a certain maximal link loss. Our link model depends on the length of the link and on random lognormal fading. Each node functions as a source and destination of data packets, and may also serve as a repeater to transport packets over multi-hop routes as determined by the network router. We focus on the probability distribution of the minimum number of hops between a source and a destination node known to be at distance D from the source. When the distribution of source-to-destination distances is known, the distribution of the minimal number of hops between any arbitrary pair of nodes can also be found. Many variations of this same problem have been studied in the literature. However, as far as we know, no exact closed-form analytic results for fading environments have been presented before.
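The quantity studied here is easy to estimate by Monte Carlo, which also serves as a sanity check on any closed-form result. The sketch below (with illustrative values for the intensity, path-loss exponent, shadowing standard deviation, and maximal link loss, none taken from the paper) drops Poisson nodes in a square, declares a link usable when path loss plus lognormal shadowing stays within the maximal link loss, and finds the minimum hop count between a source and a destination at distance D by breadth-first search:

```python
import numpy as np
from collections import deque

def simulate_min_hops(D=500.0, lam=2e-4, side=1500.0,
                      alpha=3.5, sigma_db=8.0, max_loss_db=110.0, rng=None):
    """One realization: PPP nodes plus source/destination at separation D.
    Returns the minimum hop count, or None if they are disconnected."""
    rng = rng or np.random.default_rng(0)
    n = rng.poisson(lam * side * side)
    pts = rng.uniform(-side / 2, side / 2, size=(n, 2))
    nodes = np.vstack([[-D / 2, 0.0], [D / 2, 0.0], pts])  # indices 0, 1 = src, dst
    d = np.linalg.norm(nodes[:, None, :] - nodes[None, :, :], axis=2)
    shadow = rng.normal(0.0, sigma_db, size=d.shape)
    shadow = (shadow + shadow.T) / np.sqrt(2)       # symmetric per-link shadowing
    loss = 40.0 + 10.0 * alpha * np.log10(np.maximum(d, 1e-3)) + shadow
    adj = (loss <= max_loss_db) & (d > 0)           # usable links
    hops = {0: 0}                                   # BFS gives minimum hop counts
    q = deque([0])
    while q:
        u = q.popleft()
        if u == 1:
            return hops[1]
        for v in np.nonzero(adj[u])[0]:
            if v not in hops:
                hops[v] = hops[u] + 1
                q.append(int(v))
    return None

# Empirical hop-count distribution over a few realizations.
rng = np.random.default_rng(1)
counts = {}
for _ in range(20):
    h = simulate_min_hops(rng=rng)
    key = h if h is not None else "disconnected"
    counts[key] = counts.get(key, 0) + 1
print(counts)
```

Averaging over many such realizations approximates the hop-count distribution at a fixed D; integrating over a distribution of source-to-destination distances then gives the distribution for arbitrary node pairs, as noted in the abstract.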
We study the asymptotic properties of the sequence of iterates of weight-vector estimates obtained by training a feedforward neural network with a basic gradient-descent method using a fixed learning rate and no batch-processing. Earlier results based on stochastic approximation techniques (Kuan and Hornik 1991; Finnoff 1993; Bucklew et al. 1993) have established the existence of a Gaussian limiting distribution for...
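The phenomenon in question is easy to observe on a toy problem. In the sketch below (a one-dimensional quadratic loss with additive Gaussian gradient noise, not the paper's setting), fixed-learning-rate gradient descent does not converge to a point: the iterates settle into an approximately Gaussian stationary distribution around the minimizer, whose variance matches the prediction for the induced linear recursion:

```python
import numpy as np

rng = np.random.default_rng(0)
w_star = 0.5            # minimizer of the quadratic loss
a = 2.0                 # curvature: grad L(w) = a * (w - w_star)
eta = 0.05              # fixed learning rate, no batching
sigma = 1.0             # per-sample gradient-noise standard deviation

w = 0.0
burn_in, steps = 5_000, 200_000
samples = np.empty(steps)
for t in range(burn_in + steps):
    grad = a * (w - w_star) + sigma * rng.standard_normal()
    w -= eta * grad
    if t >= burn_in:
        samples[t - burn_in] = w

# Stationary variance of the linear recursion e_{t+1} = (1 - eta*a) e_t - eta*xi_t.
theory_var = eta**2 * sigma**2 / (1.0 - (1.0 - eta * a) ** 2)
print(samples.mean(), samples.var(), theory_var)
```

Shrinking the learning rate shrinks the stationary variance (roughly linearly in eta here), which is the intuition behind the Gaussian limiting distributions established by the stochastic approximation results cited above.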
Sayandev Mukherjee, "Distribution of Downlink SINR in Heterogeneous Cellular Networks," IEEE Journal on Selected Areas in Communications, vol. 30, no. 3, April 2012, p. 575.
Additional contributions from: Daniel Schultz, Patrick Herhold, Halim Yanikomeroglu, Sayandev Mukherjee, Harish Viswanathan, Matthias Lott, Wolfgang Zirwas, Mischa Dohler, Hamid Aghvami, David D. Falconer, Gerhard P. Fettweis.