Abstract
Obtaining the stable throughput region of a wireless network, and a policy that achieves this throughput, has attracted the interest of the research community in recent years. A major simplifying assumption in this line of research has been that the network control policy has full access to the current channel conditions each time a decision is made. In practice, however, one can only estimate the actual conditions of the wireless channel process, and hence the network control policy has access, at most, to an estimate of the channel, which may in fact be highly inaccurate. In this work we determine a stationary joint link-activation and routing policy, based on a weighted version of the “back-pressure” algorithm, that maximizes the stable throughput region of time-varying wireless networks with multiple commodities while having access only to a possibly inaccurate estimate of the true channel state. We further show optimality of this policy within a broad class of stationary, non-stationary, and even anticipative policies under certain mild conditions. The only restriction is that policies in this class have no knowledge of the current true channel state beyond what is available through its estimate.
References
Tassiulas, L. (1997). Scheduling and performance limits of networks with constantly changing topology. IEEE Transactions on Information Theory, 43(3), 1067–1073.
Tassiulas, L., & Ephremides, A. (1992). Stability properties of constrained queueing systems and scheduling policies for maximum throughput in multihop radio networks. IEEE Transactions on Automatic Control, 37(12), 1936–1948.
Pantelidou, A., Ephremides, A., & Tits, A. (2005). Maximum throughput scheduling in time-varying-topology wireless ad-hoc networks. In Conference on Information Sciences and Systems (CISS), Baltimore, March 2005.
Neely, M. J., Modiano, E., & Rohrs, C. E. (2005). Dynamic power allocation and routing for time-varying wireless networks. IEEE Journal on Selected Areas in Communications, 23(1), 89–103.
Lapidoth, A., & Narayan, P. (1998). Reliable communication under channel uncertainty. IEEE Transactions on Information Theory, 44(6), 2148–2177.
Xia, P., Zhou, S., & Giannakis, G. B. (2004). Adaptive MIMO OFDM based on partial channel state information. IEEE Transactions on Signal Processing, 52(1), 202–213.
Baran, P. (1964). On distributed communication networks. IEEE Transactions on Communications, 12(1), 1–9.
Brémaud, P. (1999). Markov chains: Gibbs fields, Monte Carlo simulation and queues. Springer-Verlag.
Appendix: Proof of Theorem 1
In this appendix we prove each of the inclusions of Theorem 1. The third inclusion, namely \(\widetilde{{\mathbf{C}}}_{\varvec{\pi}_0^{{\mathbf{w}}}}^{1} \subseteq \widetilde{{\mathbf{C}}}_{{\mathcal E}}^{p},\) follows trivially from the definitions of the sets \(\widetilde{{\mathbf{C}}}_{\varvec{\pi}_0^{{\mathbf{w}}}}^{1}\) and \(\widetilde{{\mathbf{C}}}_{{\mathcal E}}^{p}.\) We prove the three remaining inclusions, namely that (i) \(\hbox{ri}(\varvec{\Uplambda}) \subseteq {\mathbf{C}}_{\varvec{\pi}_{0}^{{\mathbf{w}}}},\) (ii) \({\mathbf{C}}_{\varvec{\pi}_{0}^{{\mathbf{w}}}} \subseteq\widetilde{{\mathbf{C}}}_{\varvec{\pi}_0^{{\mathbf{w}}}}^{1},\) and (iii) \(\widetilde{{\mathbf{C}}}_{{\mathcal E}}^{p} \subseteq \varvec{\Uplambda}.\)
(i) Proof of \(\hbox{ri}(\varvec{\Uplambda}) \subseteq {\mathbf{C}}_{\varvec{\pi}_{0}^{{\mathbf{w}}}}\)
Consider a rate \(\varvec{\lambda}\in\hbox{ri}(\varvec{\Uplambda}).\) We show that \(\varvec{\lambda} \in {\mathbf{C}}_{\varvec{\pi}_{0}^{{\mathbf{w}}}},\) i.e., that this rate is stabilized by our proposed policy \(\varvec{\pi}_0^{{\mathbf{w}}}.\) We make use of the Extended Foster’s Theorem [2], which provides a sufficient condition for stability.
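Before verifying the drift condition, it is worth recalling how \(\varvec{\pi}_0^{{\mathbf{w}}}\) acts in each slot: roughly, given the current queue sizes and only the estimated channel state, it selects a feasible link activation, and a commodity per activated link, with maximal weighted differential backlog scaled by the expected transmission success under the estimate (cf. Eqs. 9–13 and 17). The following Python sketch is an illustrative rendering of such a weighted back-pressure rule; the data layout and the names (`links`, `feasible_activations`, `exp_success`) are assumptions made for the example, not the paper's notation.

```python
def weighted_backpressure(x, w, links, feasible_activations, exp_success):
    """Illustrative weighted back-pressure decision driven by a channel estimate.

    x: dict (node, commodity) -> queue size (0 for a commodity at its destination)
    w: dict commodity -> positive weight w_j
    links: dict link_id -> (source_node, destination_node)
    feasible_activations: iterable of tuples of link_ids that may be activated
        together under the current channel estimate (the set T_k)
    exp_success: dict link_id -> probability of transmission success given the estimate
    Returns the chosen activation and a per-link commodity assignment.
    """
    commodities = sorted({j for (_, j) in x})
    best_val, best_act, best_assign = 0.0, (), {}   # empty activation is always feasible
    for activation in feasible_activations:
        val, assign = 0.0, {}
        for l in activation:
            s, d = links[l]
            # Pick the commodity with the largest weighted backlog difference on this link.
            j_star = max(commodities, key=lambda j: w[j] * (x[(s, j)] - x[(d, j)]))
            gain = w[j_star] * (x[(s, j_star)] - x[(d, j_star)])
            if gain > 0:                             # only serve if it reduces backlog
                val += exp_success[l] * gain
                assign[l] = j_star
        if val > best_val:
            best_val, best_act, best_assign = val, activation, assign
    return best_act, best_assign
```

Initializing the search with the empty activation is consistent with the convention \({\mathbf{0}} \in {\mathcal{T}}_k\) for every k, used in case 1 of part (iii) below.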
Theorem 2 (Extended Foster's Theorem)
Consider a Homogeneous Markov Chain \(\{Y(t)\}_{t=0}^{\infty}\) with state space \({\mathcal{Y}}.\) Suppose there exists a real-valued function \(V\hbox{:} {\mathcal{Y}} \rightarrow {\mathbb{R}},\) bounded from below, such that
$$ {\mathbb{E}}[V(Y(t+1)) \mid Y(t)=y] < \infty, \quad \forall\, y \in {\mathcal{Y}}, $$
and such that, for some ε > 0, and some finite subset \({\mathcal{Y}}_{0}\) of \({\mathcal{Y}},\)
$$ {\mathbb{E}}[V(Y(t+1)) - V(Y(t)) \mid Y(t)=y] \le -\varepsilon, \quad \forall\, y \in {\mathcal{Y}} \setminus {\mathcal{Y}}_{0}. $$(26)
Then, \(\{Y(t)\}_{t=0}^{\infty}\) is stable in the sense of Definition 1.
We will show that the process of the queue sizes \(\{{\mathbf{X}}(t)\}_{t=0}^{\infty}\) satisfies the conditions of this theorem. For compactness of notation, we use \(t^{+}\) to denote t + 1. Given \({\mathbf{w}} > 0\) and \({\mathbf{x}}\in{\mathcal{X}},\) let \(V({\mathbf{x}}):=\sum_{j=1}^{J} w_j {{\mathbf{x}}^j}^{\top} {\mathbf{x}}^j\) be a candidate Lyapunov function. We show that, with V(·) thus defined, under policy \(\varvec{\pi}_0^{\mathbf{w}}\) and given any process \(\{{\mathbf{A}}(t)\}_{t=1}^{\infty}\) such that \({\mathbb{E}}[{\mathbf{A}}(t)]= \varvec{\lambda},\) the process \(\{{\mathbf{X}}(t)\}_{t=0}^{\infty}\) given by Eq. 6 with \({\mathbf{E}}^j(t)=\varvec{\pi}_0^{{\mathbf{w}} j}({\mathbf{X}}(t-1),{\hat{\mathbf{S}}}(t))\) for all \(j \in {\mathcal{J}}\) satisfies the conditions of Theorem 2.
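The role of V(·) can also be checked computationally. The short sketch below (a toy illustration under assumed interfaces, not the network model of Eq. 6) evaluates the weighted quadratic Lyapunov function and estimates the one-step conditional drift by Monte Carlo, given any user-supplied transition sampler `step` that hides the arrivals, the channel estimate, and the scheduling decision.

```python
import numpy as np

def lyapunov(x, w):
    """V(x) = sum_j w_j * (x^j)^T x^j for a queue matrix x of shape (N, J)."""
    return float(np.sum(w * np.sum(x ** 2, axis=0)))

def estimate_drift(x, w, step, n_samples=10_000, seed=0):
    """Monte Carlo estimate of E[V(X(t+1)) - V(X(t)) | X(t) = x].

    `step(x, rng)` is assumed to return one random next queue matrix; it
    encapsulates arrivals, the estimated channel state, and the policy.
    """
    rng = np.random.default_rng(seed)
    base = lyapunov(x, w)
    return float(np.mean([lyapunov(step(x, rng), w) - base
                          for _ in range(n_samples)]))
```

A drift estimate below −ε outside a finite set of queue matrices is exactly the behaviour that the remainder of this part establishes analytically for \(\varvec{\pi}_0^{\mathbf{w}}.\)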
First, it is immediate that \({\mathbb{E}}[V({\mathbf{X}}(t^+))|{\mathbf{X}}(t)= {\mathbf{x}}] < \infty ,\) \(\forall {\mathbf{x}} \in {\mathcal{X}}.\) To see this, let \({\mathbf{x}} \in {\mathcal{X}},\) and let
Note that, for every t, the matrix \({\mathbf{Q}}^{\varvec{\pi}({\mathbf{x}},{\hat{\mathbf{S}}}(t))}(t)\) is a function of S(t) and \({\hat{\mathbf{S}}}(t).\) Since, by Proposition 1, the variables \({\mathbf{S}}(t^+),\) \({\hat{\mathbf{S}}}(t^+),\) and \({\mathbf{A}}(t^+)\) are independent of X(t), Eq. 6 yields
which is finite for all x, since by Assumption 1(b) the process \(\{{\mathbf{A}}(t)\}_{t=1}^{\infty}\) has finite second moments, and both the policy \(\varvec{\pi}^j({\mathbf{x}},{\hat{\mathbf{S}}}(t^+))\) and the process \(\{{\mathbf{Q}}^{\varvec{\pi}({\mathbf{x}},{\hat{\mathbf{S}}}(t))}(t)\}_{t=1}^{\infty}\) take values in finite sets. This holds independently of the choice of stationary policy \(\varvec{\pi}\) and of the arrival rate \(\varvec{\lambda}.\) To complete the proof, we show that, when policy \(\varvec{\pi}_0^{\mathbf{w}}\) is used, there exists a finite set \({\mathcal{X}}_0\) such that Eq. 26 holds. For compactness of notation, we define
We first prove two lemmas that will be useful in proving the desired result.
Lemma 1
Given any policy \(\varvec{\pi},\) arrival rate \(\varvec{\lambda},\) and queue size matrix \({\mathbf{x}} \in {{\mathcal{X}}},\) the Markov Chain \(\{{\mathbf{X}}(t)\}_{t=0}^{\infty}\) given by Eq. 6 satisfies
where B does not depend on x.
Proof
From Eq. 28, and the definition of our candidate Lyapunov function we have
By using Eq. 6 we obtain
Since \(\{{\mathbf{A}}(t)\}_{t=1}^{\infty}\) is stationary with finite first and second moments, and since the policy \(\varvec{\pi}^j({\mathbf{x}},{\hat{\mathbf{S}}}(t^+))\) and the process \(\{{\mathbf{Q}}^{\varvec{\pi}({\mathbf{x}}, {\hat{\mathbf{S}}}(t))}(t)\}_{t=1}^{\infty},\) where \(\varvec{\pi}({\mathbf{x}},{\hat{\mathbf{S}}}(t)) = \sum_{j=1}^{J}\varvec{\pi}^{j}({\mathbf{x}},{\hat{\mathbf{S}}}(t)),\) take values in finite sets, the second term is finite and bounded, for every \(j \in {\mathcal{J}},\) by a quantity independent of the queue size matrix x and of the time slot t. Hence, for every \({\mathbf{x}} \in {\mathcal{X}},\)
for some B independent of x and t. Further, by Proposition 1, \({\mathbf{A}}(t^+)\) is independent of X(t); using conditional expectations, it follows that
Using Eq. 10 and the fact that \({\mathbf{Q}}^{\varvec{\pi}({\mathbf{x}},{{\mathbf{S}}}^{(k)})}(t^+)\) and \({\hat{\mathbf{S}}}(t^+)\) are independent of \({\mathbf{X}}(t),\) we obtain
Finally, by using Eq. 9, the above equation becomes
which completes the proof. \(\square\)
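The mechanics behind Lemma 1 are those of the standard quadratic drift calculation. As a generic illustration (a single queue with non-negative arrivals a and bounded service d, not the exact multi-commodity dynamics of Eq. 6), if \(x(t^+) = \max\{x(t) - d(t^+),\, 0\} + a(t^+),\) then
$$ x(t^+)^2 - x(t)^2 \;\le\; d(t^+)^2 + a(t^+)^2 + 2\, x(t)\bigl(a(t^+) - d(t^+)\bigr), $$
so that taking conditional expectations splits the drift into a bounded term, playing the role of B, and a term linear in the queue size, which parallels the structure isolated in Eq. 29.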
When an arrival rate \(\varvec{\lambda}\) belongs to \(\hbox{ri}(\varvec{\Uplambda}),\) a useful upper bound can be obtained on the first term in the parenthesis of Eq. 29, by means of the following lemma.
Lemma 2
Let \(\varvec{\lambda} \in \hbox{ri}(\varvec{\Uplambda}).\) Then there exist nonnegative scalars \({\mu^{\prime}}_k^{\mathbf{c}},\) for all \({\mathbf{c}}\in{{\mathcal{T}}}_k,\) \(k \in {\mathcal{K}},\) with \(\sum_{{\mathbf{c}}\in{{\mathcal{T}}}_k}{\mu^{\prime}}_k^{\mathbf{c}} < 1,\) such that, for all \({\mathbf{x}}\in{{\mathcal{X}}},\)
Proof
Let rate \(\varvec{\lambda} \in \hbox{ri}(\varvec{\Uplambda}).\) Then \(\varvec{\lambda} \in \varvec{\Uplambda},\) as \(\hbox{ri}(\varvec{\Uplambda}) \subseteq \varvec{\Uplambda}.\) Hence, with reference to Eq. 23, there exist a scalar δ > 1 and non-negative flow vectors \({\mathbf{f}}^{j}_{k}\in {\mathbb{R}}_+^{L}\) such that
and where \(\delta \sum_{j=1}^J {\mathbf{f}}_k^j \in \hbox{co}(\tilde{{\mathcal{Q}}}_k),\) i.e., for some \(\mu_k^{\mathbf{c}} \ge 0\) with \(\sum_{{\mathbf{c}} \in {\mathcal{T}}_k} \mu_k^{\mathbf{c}} =1,\) we have
Note that from Eq. 33 it follows that, for all \(j \in {\mathcal{J}},\) and \(k \in {\mathcal{K}},\) we have
Using Eq. 32 and the fact that each of the vectors \({\mathbf{f}}_{k}^{j}\) is component-wise non-negative, we can write
where Eq. 35 follows by making use of Eq. 33. Let \(\mu^{\prime {\mathbf{c}}}_k:=\frac{\mu_k^{{\mathbf{c}}}}{\delta}.\) By definition, \({\mu^{\prime}}_k^{{\mathbf{c}}} \ge 0 .\) Also, since \(\sum_{{\mathbf{c}}\in{\mathcal{T}}_k} \mu_k^{{\mathbf{c}}} =1\) and δ > 1, it follows that \(\sum_{{\mathbf{c}} \in {\mathcal{T}}_k}\mu^{\prime {\mathbf{c}}}_k < 1.\) Further, Eq. 35 can be written as
where Eq. 36 follows by making use of Eqs. 9, 12, and 13. This completes the proof of Lemma 2. \(\square\)
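As a concrete numerical illustration of this scaling step (with hypothetical numbers, not taken from the paper): if δ = 1.25 and the convex weights for some k are \(\mu_k = (0.4,\, 0.6),\) then
$$ \mu^{\prime}_k = \left(\tfrac{0.4}{1.25},\, \tfrac{0.6}{1.25}\right) = (0.32,\, 0.48), \qquad \sum_{{\mathbf{c}} \in {\mathcal{T}}_k}\mu^{\prime\,{\mathbf{c}}}_k = 0.8 = \tfrac{1}{\delta} < 1. $$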
We proceed to finalize the proof of the claim that \(\hbox{ri}(\varvec{\Uplambda}) \subseteq {\mathbf{C}}_{\varvec{\pi}_{0}^{{\mathbf{w}}}}.\) From Lemmas 1 and 2 we conclude that, given \(\varvec{\lambda} \in \hbox{ri}(\varvec{\Uplambda}),\) there exist nonnegative scalars \({\mu^{\prime}}_k^{\mathbf{c}},\) for all \({\mathbf{c}}\in{{\mathcal{T}}}_k ,\) and \(k \in {\mathcal{K}},\) with \(\sum_{{\mathbf{c}}\in{{\mathcal{T}}}_k}{\mu^{\prime}}_k^{\mathbf{c}} < 1,\) such that, for all \({\mathbf{x}}\in{{\mathcal{X}}},\) and all stationary policies \(\varvec{\pi},\)
So far \(\varvec{\pi}\) was an arbitrary stationary policy. We now focus on the policy \(\varvec{\pi}_0^{\mathbf{w}}.\) In view of the fact that \(\varvec{\pi}({\mathbf{x}}, {\mathbf{S}}^{(k)}) = \sum_{j=1}^J \varvec{\pi}^j ({\mathbf{x}}, {\mathbf{S}}^{(k)}) \in {\mathcal{T}}_k\) (from Eq. 17), and of the definition of \(\varvec{\pi}_0^{{\mathbf{w}}},\) we obtain
By substituting into Eq. 37, we get
where from Eq. 7, and the fact that \(\sum_{{\mathbf{c}}\in{{\mathcal{T}}}_k}{\mu^{\prime}}_k^{\mathbf{c}} < 1\)
Now, let \({\mathbf{x}}\in{\mathcal{X}},\) with \({\mathbf{x}}\ne {\mathbf 0},\) and suppose X(t) = x. Choose a node n, and a commodity j such that
The Markov property of \(\{{\mathbf{X}}(t)\}_{t=0}^\infty\) implies that
Hence, without loss of generality, assume that the queue size process at time slot 0 satisfies X(0) = 0. Since \(X_{nj}(t)=x_{nj} > 0\) and \(X_{nj}(0) = 0,\) there must exist a sequence of links in \({\mathcal{L}}\) from some node n′, with \(\lambda_{n^{\prime}j} > 0,\) to node n that satisfy Assumption 2. Further, Assumption 2 then implies that there exist links \(\ell_{i}\in {\mathcal{L}},\) \(i=1,\ldots, z,\) for some z satisfying 0 < z < N, such that \(n=s(\ell_{1}),\) and nodes \(n_1, \ldots, n_z,\) such that \(d(\ell_1)=n_1,\) \(s(\ell_{i+1})=n_i,\) \(d(\ell_{i+1})= n_{i+1},\) \(i=1,\ldots, z-1,\) and \(n_z \in V_{j}.\) For notational simplicity, also let \(n_0:=n.\) Since \(x_{{n_z}j}=0\) whenever \(n_{z} \in V_{j},\) we can write
$$ x_{nj} = x_{n_0 j} - x_{n_z j} = \sum_{i=1}^{z} \left( x_{n_{i-1} j} - x_{n_i j} \right). $$(38)
It follows that there exists a link \(\ell_{i^{\scriptstyle{\star}}}\) across which the queue size difference is maximized, for some commodity \(j^{\scriptstyle{\star}}\in{\mathcal{J}}.\) Let \(n_{i^{\scriptstyle{\star}}-1}=s(\ell_{i^{\scriptstyle{\star}}}),\) and \(n_{i^{\scriptstyle{\star}}}=d(\ell_{i^{\scriptstyle{\star}}}).\) Then, from Eq. 38 we have
Recall that \(\ell_{i}\in {\mathcal{L}}\) for all \(i=1,\ldots, z .\) Further, let \(k^{\scriptstyle{\star}}\) be such that \(\ell_{i^{\scriptstyle{\star}}}\) satisfies Eq. 1 under the estimated channel state \({\hat{\mathbf{S}}}(t)={\mathbf{S}}^{(k^{\scriptstyle{\star}})}.\) Let \(\hbox{e}_{\ell_{i^{\scriptstyle{\star}}}}\in{\mathbb{R}}^{L}\) be a vector with its \({\ell_{i^{\scriptstyle{\star}}}}\hbox{th}\) component equal to 1, and with all other components equal to 0. Then, from the property of the constraint set it follows that \(\hbox{e}_{\ell_{i^{\scriptstyle{\star}}}}\in{\mathcal{T}}_{k^{\scriptstyle{\star}}}.\) Also, it follows from Eqs. 12 and 13 that
where \(\left({\mathbf{D}}_{k^{\scriptstyle{\star}}{\rm e}_{\ell_{i^{\scriptstyle{\star}}}}}^{{\mathbf{w}} j^{\scriptstyle{\star}}}({\mathbf{x}}) \right)_{\ell_{i^{\scriptstyle{\star}}}}\) is the \({\ell_{i^{\scriptstyle{\star}}}}\hbox{th}\) entry of the vector \({\mathbf{D}}_{k^{\scriptstyle{\star}}{\rm e}_{\ell_{i^{\scriptstyle{\star}}}}}^{{\mathbf{w}} j^{\scriptstyle{\star}}}({\mathbf{x}})\). In view of Eqs. 11 and 39, it follows that
where \(({\tilde{\mathbf{Q}}}_{k^{\scriptstyle{\star}}}^{{\rm e}_{\ell_{i^{\scriptstyle{\star}}}}})_{\ell_{i^{\scriptstyle{\star}}}}\) is the \(\ell_{i^{\scriptstyle{\star}}}\hbox{th}\) diagonal entry of the matrix \({\tilde{\mathbf{Q}}}_{k^{{\scriptstyle{\star}}}}^{{\rm e}_{\ell_{i^{\scriptstyle{\star}}}}},\) while
and, in view of Assumption 2,
Note that the entries \(w_{\rm min}\) and \(\tilde{q}_{\rm min}\) do not depend on x. Overall, we have
so that, given any ε > 0,
Since vectors in \({\mathcal{X}}\) have integer components, the set \({\mathcal{X}}_0\) is finite, and the proof is complete. \(\square\)
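In schematic terms (the display below summarizes the chain of bounds above rather than restating the paper's exact constants), the argument produces a drift bound of the form
$$ {\mathbb{E}}\bigl[V({\mathbf{X}}(t^+)) - V({\mathbf{X}}(t)) \,\big|\, {\mathbf{X}}(t)={\mathbf{x}}\bigr] \;\le\; B - c \max_{n,j} x_{nj}, $$
for some c > 0 independent of x, so that the finite set \({\mathcal{X}}_0\) can be taken to contain all integer-valued queue matrices with \(\max_{n,j} x_{nj} \le (B+\varepsilon)/c,\) outside of which the drift is at most −ε, as required by Theorem 2.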
(ii) Proof of \({\mathbf{C}}_{\varvec{\pi}_{0}^{{\mathbf{w}}}} \subseteq \widetilde{{\mathbf{C}}}_{\varvec{\pi}_0^{{\mathbf{w}}}}^{1}\)
Consider an arrival rate \(\varvec{\lambda} \in {\mathbf{C}}_{\varvec{\pi}_{0}^{{\mathbf{w}}}}.\) In order to prove that \(\varvec{\lambda} \in \widetilde{{\mathbf{C}}}_{\varvec{\pi}_0^{{\mathbf{w}}}}^{1},\) we need to show that stability according to Definition 1 implies intermittent boundedness with probability 1. We proceed by stating a theorem that provides a sufficient condition for intermittent boundedness of a Markov Chain.
Theorem 3
Let \(\{Y(t)\}_{t=0}^{\infty}\) be a Markov Chain, and let \({\mathcal{Y}}\) be its, possibly empty, set of transient states. If \(\{Y(t)\}_{t=0}^{\infty}\) almost surely exits the set of transient states in finite time, i.e. if
$$ P\left[\,\exists\, t \ge 0 \hbox{ such that } Y(t) \notin {\mathcal{Y}} \,\right] = 1 $$(40)
(which holds vacuously when \({\mathcal{Y}}\) is empty), then \(\{Y(t)\}_{t=0}^{\infty}\) is intermittently bounded with probability 1.
Proof
Consider a Markov Chain \(\{Y(t)\}_{t=0}^{\infty}\) that satisfies Eq. 40. Then, with probability 1, the Markov Chain \(\{Y(t)\}_{t=0}^{\infty}\) is eventually confined within a single recurrent class. It follows (e.g. from Theorem 7.3 in Chapter 2 of [8]) that, with probability 1, some (recurrent) state is visited infinitely many times. Hence, there exists a set W, with \(W \subseteq \Upomega\) and P[W] = 1, where \(\Upomega\) denotes the sample space, such that for every outcome \({\omega} \in W\) there exist a state y and a sequence \(\{t_i \}_{i=1}^{\infty}\) such that, in the sample path ω, the process satisfies
$$ Y(\omega, t_i) = y, \quad i=1,2,\ldots. $$
Hence, by Definition 2 it follows that \(\{Y(t)\}_{t=0}^{\infty}\) is intermittently bounded with probability 1. \(\square\)
A direct consequence of Theorem 3 is Corollary 1, that we state next.
Corollary 1
Let \(\{Y(t)\}_{t=0}^{\infty}\) be a stable Markov Chain. Then, \(\{Y(t)\}_{t=0}^{\infty}\) is intermittently bounded with probability 1.
From Corollary 1, the desired result follows.
(iii) Proof of \(\widetilde{{\mathbf{C}}}_{{\mathcal E}}^{p} \subseteq \varvec{\Uplambda}\)
We need to show that if \(\varvec{\lambda} \in \widetilde{{\mathbf{C}}}_{{\mathcal E}}^{p}\) then \(\varvec{\lambda} \in \varvec{\Uplambda}.\) We start by introducing the notation required for our proof. We define the random variable \(n_{{\hat{\mathbf{S}}}}(t;k)\) to be the number of time slots τ in the interval [0, t] during which \({\hat{\mathbf{S}}}(\tau)\) takes the value \({\mathbf{S}}^{(k)}.\) Moreover, we denote by \(\{n_{{\hat{\mathbf{S}}}}(\omega,t;k)\}_{t =1}^{\infty},\) \(\{n_{{\hat{\mathbf{S}}}{\mathbf{E}}}(\omega, t;k, {\mathbf{c}})\}_{t=1}^{\infty},\) and \(\{n_{{\hat{\mathbf{S}}}{\mathbf{E}}{\mathbf{Q}}}(\omega,t; k,{\mathbf{c}}, {\mathbf{Q}})\}_{t=1}^{\infty}\) the sample paths ω of the corresponding processes (recall that the processes \(\{n_{{\hat{\mathbf{S}}}{\mathbf{E}}}(t;k, {\mathbf{c}})\}_{t=1}^{\infty}\) and \(\{n_{{\hat{\mathbf{S}}}{\mathbf{E}}{\mathbf{Q}}}(t; k,{\mathbf{c}}, {\mathbf{Q}})\}_{t=1}^{\infty}\) are defined in Sect. 4). Finally, by \(\{{\mathbf{A}}(\omega,t)\}_{t=1}^{\infty},\) \(\{{\hat{\mathbf{S}}}(\omega,t)\}_{t=1}^{\infty},\) \(\{{\mathbf{E}}(\omega,t)\}_{t=1}^{\infty},\) \(\{{\mathbf{Q}}^{\mathbf{c}}(\omega,t)\}_{t=1}^{\infty},\) and \(\{{\mathbf{X}}(\omega,t)\}_{t=1}^{\infty}\) we denote the sample paths ω of the respective processes.
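The counters above simply record empirical frequencies along a single sample path. The following sketch (illustrative container and method names, assumed for the example) shows how \(n_{{\hat{\mathbf{S}}}},\) \(n_{{\hat{\mathbf{S}}}{\mathbf{E}}},\) and \(n_{{\hat{\mathbf{S}}}{\mathbf{E}}{\mathbf{Q}}}\) accumulate, and how the corresponding time-average served-rate vectors are formed (here for the aggregate activation; the per-commodity version, whose limit points yield the flow vectors \({\mathbf{f}}_k^j\) below, is analogous).

```python
from collections import defaultdict

class SamplePathCounters:
    """Empirical counts along one sample path: how often the estimate was S^(k),
    how often activation c was chosen under that estimate, and how often the
    diagonal 0/1 success matrix Q was realized on top of that."""

    def __init__(self):
        self.n_s = defaultdict(int)     # key: k
        self.n_se = defaultdict(int)    # key: (k, c)
        self.n_seq = defaultdict(int)   # key: (k, c, Q)

    def record(self, k, c, Q):
        # c is the activation vector and Q the diagonal of the success matrix,
        # both passed as tuples so they can serve as dictionary keys.
        self.n_s[k] += 1
        self.n_se[(k, c)] += 1
        self.n_seq[(k, c, Q)] += 1

    def time_average_flow(self, k):
        """Empirical analogue of (1/n_S(t;k)) * sum_{c,Q} n_SEQ(t;k,c,Q) * (Q c)."""
        if self.n_s[k] == 0:
            return None
        acc = None
        for (kk, c, Q), cnt in self.n_seq.items():
            if kk != k:
                continue
            served = [cnt * q * ci for q, ci in zip(Q, c)]      # cnt * (Q c)
            acc = served if acc is None else [a + s for a, s in zip(acc, served)]
        return [a / self.n_s[k] for a in acc]
```

The last method mirrors the third line of Eq. 58 below, where the same empirical average is rewritten in terms of \(n_{{\hat{\mathbf{S}}}{\mathbf{E}}{\mathbf{Q}}}\) and \(n_{{\hat{\mathbf{S}}}}.\)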
Since \(\varvec{\lambda}\in \widetilde{{\mathbf{C}}}_{{\mathcal E}}^{p},\) there exists a policy \(\{{\mathbf{E}}(t)\}_{t=1}^{\infty} \in {\mathcal E},\) and an i.i.d. process \(\{{\mathbf{S}}(t),{\hat{\mathbf{S}}}(t), {\mathbf{A}}(t)\}_{t=1}^{\infty}\) such that \( {\mathbb{E}}[{\mathbf{A}}(t)]=\varvec{\lambda}.\) In particular
Furthermore, from Eq. 22 we have that
Also, since the process \(\{{\mathbf{X}}(t)\}_{t=0}^{\infty}\) is intermittently bounded with positive probability it follows that
Since the events in Eqs. 41–43 have probability 1, and the event in Eq. 44 has positive probability, their intersection has positive probability. Hence, the four events have a non-empty common intersection. We first fix an outcome ω′ that belongs to this common intersection; once ω′ is selected, we identify an \(X_{\rm max}\) and a sequence \(\{t_i\}_{i=1}^{\infty}\) as specified by Eq. 44. We have
We now sum both sides of Eq. 6 from time slot 0 to \(t_i,\) for some \(i=1,2,\ldots,\) and cancel the identical terms. Then, by dividing both sides of the resulting equation by \(t_i,\) we obtain
From Eq. 48, we have
and
Taking the limit in Eq. 49 as \(i \rightarrow \infty ,\) and by using Eqs. 45, 50 and 51 we obtain
where
Thus, for \(k \in \tilde{{\mathcal{K}}},\) and for i large enough it follows that \(n_{{\hat{\mathbf{S}}}}(\omega^{\prime},t_i; k) > 0.\) Without loss of generality (by redefining the sequence \(\{t_i\}_{i=1}^{\infty}\) if necessary), assume that \(n_{{\hat{\mathbf{S}}}}(\omega^{\prime},t_i; k) > 0\) for all \(k \in \tilde{{\mathcal{K}}},\) and \(i=1,2,\ldots .\) Then, Eq. 52 can be written as
Note that \({\mathbf{E}}^j (\omega^{\prime}, \tau) \in {\mathcal{T}}_k\) whenever \({\hat{\mathbf{S}}}(\omega^{\prime},\tau)= {\mathbf{S}}^{(k)}\). Also, for every time slot τ, the matrix \({\mathbf{Q}}^{{\mathbf{E}}(\omega^{\prime},\tau)}(\omega^{\prime},\tau)\) is a diagonal matrix, whose diagonal entries take values in the set {0,1}. Therefore, it is also true that the product \({\mathbf{Q}}^{{\mathbf{E}}(\omega^{\prime},\tau)}(\omega^{\prime},\tau)\) \({\mathbf{E}}^j (\omega^{\prime}, \tau) \in {\mathcal{T}}_k.\) Also, since
we have that, for every \(i=1,2,\ldots,\) \(j \in {\mathcal{J}},\) and \(k\in \tilde{{\mathcal{K}}},\)
Since \(\tilde{{\mathcal{K}}}\) is a finite set, and since for every k the set \(\hbox{co}({\mathcal{T}}_k)\) is compact, there exist a subsequence \(\{t_{i_\ell} \}_{\ell=1}^{\infty}\) and vectors \({\mathbf{f}}_{k}^{j}\) such that
for all \(j \in {\mathcal{J}}, k \in \tilde{{\mathcal{K}}}.\) Hence from Eqs. 46, 53 and 54 we obtain
Finally, by letting the corresponding L × 1 vector \({\mathbf{f}}_{k}^{j}\) be the 0-vector whenever \(k \in {\mathcal{K}}\setminus \tilde{{\mathcal{K}}},\) we conclude that
Clearly, \({\mathbf{f}}_k^j \in {\mathbb{R}}_{+}^{L}\) for every \(k \in {\mathcal{K}}\) and \(j \in {\mathcal{J}}.\) To complete the proof we need to show that \(\sum_{j=1}^{J}{\mathbf{f}}_{k}^{j} \in \hbox{co}(\tilde{{\mathcal{Q}}}_{k})\) for every \(k \in {\mathcal{K}}.\) We consider two cases.
1. \(k \in {\mathcal{K}} \setminus \tilde{{\mathcal{K}}}\): For every \(k \in {\mathcal{K}} \setminus \tilde{{\mathcal{K}}},\) we have that
$$ \sum_{j=1}^{J}{\mathbf{f}}_{k}^{j} \in \hbox{co}(\tilde{{\mathcal{Q}}}_k), $$(57)
since \({\mathbf{0}} \in {\mathcal{T}}_k\) for every \(k \in {\mathcal{K}}.\)
2. \(k \in \tilde{{\mathcal{K}}}\): From Eq. 54, and since \({\mathbf{E}} (\omega^{\prime}, \tau)= \sum_{j=1}^{J} {\mathbf{E}}^j(\omega^{\prime}, \tau),\) for all \(k \in \tilde{{\mathcal{K}}}\) we have
$$\begin{aligned} \sum_{j=1}^{J} {\mathbf{f}}_k^j &=\lim_{i \rightarrow \infty} \left\{\sum\limits_{\buildrel{\tau \in \{1,\ldots,t_i\}} \over{{\rm s.t.\ }{\hat{\mathbf{S}}}(\omega^{\prime},\tau)= {\mathbf{S}}^{(k)}}} \frac{1}{n_{{\hat{\mathbf{S}}}}(\omega^{\prime},t_i; k)} {\mathbf{Q}}^{{\mathbf{E}}(\omega^{\prime},\tau)}(\omega^{\prime},\tau) {\mathbf{E}} (\omega^{\prime}, \tau)\right\}\\ &=\lim_{i \rightarrow \infty} \left\{\frac{1}{n_{{\hat{\mathbf{S}}}}(\omega^{\prime},t_i; k)} \sum_{{\mathbf{c}} \in {\mathcal{T}}_k}\sum_{{\mathbf{Q}} \in {\mathcal{Q}}}\sum\limits_{\buildrel{\tau \in \{1,\ldots,t_i\}}\over{{\rm s.t.\ }{\hat{\mathbf{S}}}(\omega^{\prime},\tau)= {\mathbf{S}}^{(k)},}_{\buildrel{{\mathbf{E}}(\omega^{\prime}, \tau)= {\mathbf{c}},}\over{{\mathbf{Q}}^{{\mathbf{c}}}(\omega^{\prime}, \tau) = {\mathbf{Q}}}}} {\mathbf{Q}}\;{\mathbf{c}} \right\}\\ &=\lim_{i \rightarrow \infty} \left \{\sum_{{\mathbf{c}} \in {\mathcal{T}}_k}\sum_{{\mathbf{Q}} \in {\mathcal{Q}}} \frac{n_{{\hat{\mathbf{S}}}{\mathbf{E}}{\mathbf{Q}}}(\omega^{\prime},t_i; k,{\mathbf{c}}, {\mathbf{Q}})}{n_{{\hat{\mathbf{S}}}}(\omega^{\prime},t_i; k)} {\mathbf{Q}} \; {\mathbf{c}} \right \} \\ &=\lim_{i \rightarrow \infty} \left \{\sum_{{\mathbf{c}} \in {\mathcal{T}}_k}\sum_{{\mathbf{Q}} \in {\mathcal{Q}}} \frac{n_{{\hat{\mathbf{S}}}{\mathbf{E}}{\mathbf{Q}}}(\omega^{\prime},t_i; k,{\mathbf{c}}, {\mathbf{Q}})}{t_i} \frac{t_i}{n_{{\hat{\mathbf{S}}}}(\omega^{\prime},t_i; k)} {\mathbf{Q}} \; {\mathbf{c}} \right \}. \end{aligned}$$(58)
Since each of the terms involved in the sum is non-negative, and since the outer limit exists, it follows that each of the product terms in the limit is bounded. Further, since \(\frac{n_{{\hat{\mathbf{S}}}}(\omega^{\prime},t_i; k)}{t_i}\) converges to a non-zero value, we may extract a converging subsequence such that \(\lim_{i \rightarrow \infty} \left\{\frac{n_{{\hat{\mathbf{S}}}{\mathbf{E}}{\mathbf{Q}}}(\omega^{\prime},t_i; k,{\mathbf{c}}, {\mathbf{Q}})}{t_{i}}\right\}\) exists, and therefore
Note also that \(\lim_{i \rightarrow \infty} \frac{n_{{\hat{\mathbf{S}}}{\mathbf{E}}}(\omega^{\prime}, t_i; k,{\mathbf{c}})}{t_i}\) exists and can be written as a finite sum of existing limits as
where we made use of the fact that the limit \(\lim_{i \rightarrow \infty} \frac{n_{{\hat{\mathbf{S}}}{\mathbf{E}}{\mathbf{Q}}}(\omega^{\prime}, t_i; k,{\mathbf{c}},{\mathbf{Q}})}{t_i}\) exists. As discussed in Sect. 4, for all \({\mathbf{c}} \in {\mathcal{T}}_k,\) the quantity \(n_{{\hat{\mathbf{S}}}{\mathbf{E}}}(\omega^{\prime}, t_i; k,{\mathbf{c}}) \ne 0\) for i sufficiently large. Hence, we can write
It follows from Eqs. 46 and 60 that
exists. Let this limit be equal to
From Eqs. 46, 47 and 62 it follows that the individual limits in Eq. 61 exist. Hence, it can be written as
By substituting Eq. 63 into Eq. 59 we get
where Eq. 64 follows by employing Eq. 10. Consequently, it follows that
and the proof is complete. \(\square\)