A cross-layer approach for stable throughput maximization under channel state uncertainty

Published in Wireless Networks.

Abstract

Obtaining the stable throughput region of a wireless network, and a policy that achieves this throughput, has attracted the interest of the research community in recent years. A major simplifying assumption in this line of research has been that the network control policy has full access to the current channel conditions each time a decision is made. In practice, however, the actual conditions of the wireless channel process can only be estimated, and hence the network control policy has access at most to an estimate of the channel, which can in fact be highly inaccurate. In this work we determine a stationary joint link activation and routing policy, based on a weighted version of the “back-pressure” algorithm, that maximizes the stable throughput region of time-varying wireless networks with multiple commodities by having access to only a possibly inaccurate estimate of the true channel state. We further show optimality of this policy within a broad class of stationary, non-stationary, and even anticipative policies under certain mild conditions. The only restriction is that policies in this class have no knowledge of the current true channel state, except what is available through its estimate.


Fig. 1
Fig. 2


References

  1. Tassiulas, L. (1997). Scheduling and performance limits of networks with constantly changing topology. IEEE Transactions on Information Theory, 43(3), 1067–1073.


  2. Tassiulas, L., & Ephremides, A. (1992). Stability properties of constrained queueing systems and scheduling policies for maximum throughput in multihop radio networks. IEEE Transactions on Automatic Control, 37(12), 1936–1948.


  3. Pantelidou, A., Ephremides, A., & Tits, A. (2005). Maximum throughput scheduling in time-varying-topology wireless ad-hoc networks. In Conference on Information Sciences and Systems (CISS), Baltimore, March 2005.

  4. Neely, M. J., Modiano, E., & Rohrs, C. E. (2005). Dynamic power allocation and routing for time-varying wireless networks. IEEE Journal on Selected Areas in Communications, 23(1), 89–103.


  5. Lapidoth, A., & Narayan, P. (1998). Reliable communication under channel uncertainty. IEEE Transactions on Information Theory, 44(6), 2148–2177.


  6. Xia, P., Zhou, S., & Giannakis, G. B. (2004). Adaptive MIMO OFDM based on partial channel state information. IEEE Transactions on Signal Processing, 52(1), 202–213.


  7. Baran, P. (1964). On distributed communication networks. IEEE Transactions on Communications, 12(1), 1–9.


  8. Brémaud, P. (1999). Markov chains: Gibbs fields, Monte Carlo simulation and queues. Springer-Verlag.


Author information


Corresponding author

Correspondence to Anna Pantelidou.

Appendix: Proof of Theorem 1


In this section we prove each individual inclusion relationship of Theorem 1. The third inclusion, that is \(\widetilde{{\mathbf{C}}}_{\varvec{\pi}_0^{{\mathbf{w}}}}^{1} \subseteq \widetilde{{\mathbf{C}}}_{{\mathcal E}}^{p},\) follows trivially from the definitions of the sets \(\widetilde{{\mathbf{C}}}_{\varvec{\pi}_0^{{\mathbf{w}}}}^{1}\) and \(\widetilde{{\mathbf{C}}}_{{\mathcal E}}^{p}.\) Next, we prove the three remaining inclusions, namely that (i) \(\hbox{ri}(\varvec{\Uplambda}) \subseteq {\mathbf{C}}_{\varvec{\pi}_{0}^{{\mathbf{w}}}},\) (ii) \({\mathbf{C}}_{\varvec{\pi}_{0}^{{\mathbf{w}}}} \subseteq\widetilde{{\mathbf{C}}}_{\varvec{\pi}_0^{{\mathbf{w}}}}^{1},\) and (iii) \(\widetilde{{\mathbf{C}}}_{{\mathcal E}}^{p} \subseteq \varvec{\Uplambda}.\)

(i) Proof of \(\hbox{ri}(\varvec{\Uplambda}) \subseteq {\mathbf{C}}_{\varvec{\pi}_{0}^{{\mathbf{w}}}}\)

Consider a rate \(\varvec{\lambda}\in\hbox{ri}(\varvec{\Uplambda}).\) We show that \(\varvec{\lambda} \in {\mathbf{C}}_{\varvec{\pi}_{0}^{{\mathbf{w}}}},\) i.e., that this rate is stabilized by our proposed policy \(\varvec{\pi}_0^{{\mathbf{w}}}.\) We make use of the Extended Foster Theorem [2], which provides a sufficient condition for stability.

Theorem 2 (Extended Foster Theorem)

Consider a homogeneous Markov Chain \(\{Y(t)\}_{t=0}^{\infty}\) with state space \({\mathcal{Y}}.\) Suppose there exists a real-valued function \(V\hbox{:} {\mathcal{Y}} \rightarrow {\mathbb{R}},\) bounded from below, such that

$$ {\mathbb{E}}[V(Y(t+1))|Y(t)= y] < \infty,\;\; \forall y \in {\mathcal{Y}}, $$
(25)

and such that for some ε > 0, and some finite subset \({\mathcal{Y}}_{0}\) of \({\mathcal{Y}}\)

$$ {\mathbb{E}}[V(Y(t+1)) - V(Y(t))|Y(t) = y] < -\epsilon,\quad \forall y \notin {\mathcal{Y}}_{0}. $$
(26)

Then, \(\{Y(t)\}_{t=0}^{\infty}\) is stable in the sense of Definition 1.
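As a minimal illustration of how Theorem 2 is typically applied, the following sketch estimates the one-step drift of a toy single queue under a quadratic Lyapunov function; the queue model, rates, and function names are assumptions for illustration only, not part of the paper.

```python
import random

# Toy illustration of Foster's criterion (Eqs. 25-26): a single discrete-time
# queue X(t+1) = max(X(t) - mu, 0) + A(t) with Bernoulli arrivals of mean
# lam < mu, and candidate Lyapunov function V(x) = x^2. The model and names
# are hypothetical, chosen only to make the drift condition concrete.

def drift_estimate(x, lam=0.4, mu=1.0, samples=20000, seed=0):
    """Monte-Carlo estimate of E[V(X(t+1)) - V(X(t)) | X(t) = x], V(x) = x^2."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        a = 1 if rng.random() < lam else 0   # arrival in this slot
        x_next = max(x - mu, 0.0) + a        # one-step queue update
        total += x_next ** 2 - x ** 2
    return total / samples

# For large x the drift behaves like 2x(lam - mu) < 0, so a bound of the
# form (26) holds outside a finite set of small queue sizes.
print(drift_estimate(10.0), drift_estimate(50.0))
```

The estimated drift grows more negative in x, which is exactly the structure exploited below: it suffices to exhibit a finite set outside of which the drift is uniformly negative.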

We will show that the process of the queue sizes \(\{{\mathbf{X}}(t)\}_{t=0}^{\infty}\) satisfies the conditions of this theorem. For compactness of notation, we use \(t^+\) to denote t + 1. Given \({\mathbf{w}} > 0\) and \({\mathbf{x}}\in{\mathcal{X}},\) let \(V({\mathbf{x}}):=\sum_{j=1}^{J} w_j {{\mathbf{x}}^j}^{\top} {\mathbf{x}}^j\) be a candidate Lyapunov function. We show that, with V(·) thus defined, under policy \(\varvec{\pi}_0^{\mathbf{w}}\) and given any process \(\{{\mathbf{A}}(t)\}_{t=1}^{\infty}\) such that \({\mathbb{E}}[{\mathbf{A}}(t)]= \varvec{\lambda},\) the process \(\{{\mathbf{X}}(t)\}_{t=0}^{\infty}\) given by Eq. 6 with \({\mathbf{E}}^j(t)=\varvec{\pi}_0^{{\mathbf{w}} j}({\mathbf{X}}(t-1),{\hat{\mathbf{S}}}(t))\) for all \(j \in {\mathcal{J}}\) satisfies the conditions of Theorem 2.

First, it is immediate that \({\mathbb{E}}[V({\mathbf{X}}(t^+))|{\mathbf{X}}(t)= {\mathbf{x}}] < \infty ,\) \(\forall {\mathbf{x}} \in {\mathcal{X}}.\) To see this, let \({\mathbf{x}} \in {\mathcal{X}},\) and let

$$ {\mathbf{G}}^j(t) := {\mathbf{x}}^j + {\mathbf{R}}^{j}{\mathbf{Q}}^{\varvec{\pi}({\mathbf{x}},{\hat{\mathbf{S}}}(t))}(t)\varvec{\pi}^j({\mathbf{x}},{\hat{\mathbf{S}}}(t)) +{\mathbf{A}}^{j}(t). $$
(27)

Note that for every t the matrix \({\mathbf{Q}}^{\varvec{\pi}({\mathbf{x}},{\hat{\mathbf{S}}}(t))}(t)\) is a function of S(t), and \({\hat{\mathbf{S}}}(t).\) Since, by Proposition 1, the variables \({\mathbf{S}}(t^+),\) \({\hat{\mathbf{S}}}(t^+),\) and \({\mathbf{A}}(t^+)\) are independent of X(t), Eq. 6 yields

$$ {\mathbb{E}}[V({\mathbf{X}}(t^+))|{\mathbf{X}}(t) = {\mathbf{x}}] = \sum_{j=1}^{J} w_j {\mathbb{E}} \left[ {\mathbf{G}}^j(t^+)^\top {\mathbf{G}}^j(t^+)\right], $$
(28)

which is finite for all x since from Assumption 1(b) the process \(\{{\mathbf{A}}(t)\}_{t=1}^{\infty}\) is assumed to have finite second moments, and further the policy \(\varvec{\pi}^j({\mathbf{x}},{\hat{\mathbf{S}}}(t^+))\) as well as the process \(\{{\mathbf{Q}}^{\varvec{\pi}({\mathbf{x}},{\hat{\mathbf{S}}}(t))}(t)\}_{t=1}^{\infty}\) take values in finite sets. This in fact holds independently of the choice of stationary policy \(\varvec{\pi},\) and of the arrival rate \(\varvec{\lambda}.\) To complete the proof, we show that, when policy \(\varvec{\pi}_0^{\mathbf{w}}\) is used, there exists a finite set \({\mathcal{X}}_0\) such that Eq. 26 holds. For compactness of notation, we define

$$ \Updelta V({\mathbf{x}}):={\mathbb{E}}\left[V({\mathbf{X}}(t^+))-V({\mathbf{X}}(t))| {\mathbf{X}}(t)={\mathbf{x}} \right]. $$

We first prove two lemmas that will be useful in proving the desired result.

Lemma 1

Given any policy \(\varvec{\pi},\) arrival rate \(\varvec{\lambda},\) and queue size matrix \({\mathbf{x}} \in {{\mathcal{X}}},\) the Markov Chain \(\{{\mathbf{X}}(t)\}_{t=0}^{\infty}\) given by Eq. 6 satisfies

$$\Updelta V({\mathbf{x}})\le 2 \left( \sum_{j=1}^J w_j {\mathbf{x}}^{j\top} \varvec{\lambda}^j -\sum_{k \in {\mathcal{K}}} p_{{\hat{\mathbf{S}}}}(k) \sum_{j=1}^J {\mathbf{D}}^{{\mathbf{w}} j}_{k \varvec{\pi}({\mathbf{x}},{\mathbf{S}}^{(k)})} ({\mathbf{x}})^{\top} \; \varvec{\pi}^{j}({\mathbf{x}}, {\mathbf{S}}^{(k)}) \right) +B, $$
(29)

where B does not depend on x.

Proof

From Eq. 28, and the definition of our candidate Lyapunov function we have

$$ \begin{aligned} \Updelta V({\mathbf{x}})&= \sum_{j=1}^{J} w_j {\mathbb{E}} \left[ \left( {\mathbf{X}}^j(t^+) - {\mathbf{X}}^j(t) \right)^\top \times \left( {\mathbf{X}}^j(t^+) + {\mathbf{X}}^j(t) \right)|{\mathbf{X}}(t)={\mathbf{x}} \right] \\ & = \sum_{j=1}^{J} w_j {\mathbb{E}} \left[ \left( {\mathbf{X}}^j(t^+) - {\mathbf{X}}^j(t) \right)^\top \times \left( 2{\mathbf{X}}^j(t) +{\mathbf{X}}^j(t^+) - {\mathbf{X}}^j(t) \right)|{\mathbf{X}}(t)={\mathbf{x}} \right] \\ & = 2 \sum_{j=1}^J w_j \left( {{\mathbf{x}}^j}^\top {\mathbb{E}}\left[{\mathbf{X}}^j(t^+)-{\mathbf{X}}^j(t)| {\mathbf{X}}(t)={\mathbf{x}}\right]\right) \\ & \quad +\sum_{j=1}^J w_j {\mathbb{E}} \left[({\mathbf{X}}^j(t^+)-{\mathbf{X}}^j(t))^\top({\mathbf{X}}^j(t^+) -{\mathbf{X}}^j(t))|{\mathbf{X}}(t)={\mathbf{x}}\right]. \end{aligned} $$

By using Eq. 6 we obtain

$$ \begin{aligned} \Updelta V({\mathbf{x}}) = 2\sum_{j=1}^J \left( w_j {\mathbf{x}}^{j\top} {\mathbb{E}} \left[{\mathbf{R}}^j{\mathbf{Q}}^ {\varvec{\pi}({\mathbf{x}},{\hat{\mathbf{S}}}(t^+))}(t^+) \varvec{\pi}^j({\mathbf{x}},{\hat{\mathbf{S}}}(t^+))+ {\mathbf{A}}^j(t^+)|{\mathbf{X}}(t) ={\mathbf{x}} \right] \right) \\ \quad +\sum_{j=1}^{J} w_j {\mathbb{E}}\left[\left({\mathbf{R}}^j{\mathbf{Q}}^{\varvec{\pi}({\mathbf{x}}, {\hat{\mathbf{S}}}(t^+))}(t^+)\varvec{\pi}^j({\mathbf{x}}, {\hat{\mathbf{S}}}(t^+))+{\mathbf{A}}^j(t^+)\right)^\top \times \left({\mathbf{R}}^j{\mathbf{Q}}^{\varvec{\pi}({\mathbf{x}}, {\hat{\mathbf{S}}}(t^+))}(t^+)\varvec{\pi}^j({\mathbf{x}}, {\hat{\mathbf{S}}}(t^+))+ {\mathbf{A}}^j(t^+)\right)|{\mathbf{X}}(t)={\mathbf{x}}\right] \end{aligned} .$$

Since \(\{{\mathbf{A}}(t)\}_{t=1}^{\infty}\) is stationary, and has finite first and second moments, and the policy \(\varvec{\pi}^j({\mathbf{x}},{\hat{\mathbf{S}}}(t^+)),\) as well as the process \(\{{\mathbf{Q}}^{\varvec{\pi}({\mathbf{x}}, {\hat{\mathbf{S}}}(t))}(t)\}_{t=1}^{\infty},\) where \(\varvec{\pi}({\mathbf{x}},{\hat{\mathbf{S}}}(t)) = \sum_{j=1}^{J}\varvec{\pi}^{j}({\mathbf{x}},{\hat{\mathbf{S}}}(t)),\) take values in finite sets, the second term is finite and bounded for every \(j \in {\mathcal{J}}\) by a quantity independent of the queue size matrix x, and time slot t. Hence for every \({\mathbf{x}} \in {\mathcal{X}},\)

$$ \Updelta V({\mathbf{x}}) \le 2\sum_{j=1}^J \left( w_j {\mathbf{x}}^{j\top} {\mathbb{E}} \left[ {\mathbf{R}}^j{\mathbf{Q}}^{\varvec{\pi}({\mathbf{x}},\hat {\mathbf{S}}(t^+))}(t^+)\varvec{\pi}^j({\mathbf{x}},{\hat{\mathbf{S}}}(t^+)) + {\mathbf{A}}^j(t^+)|{\mathbf{X}}(t) ={\mathbf{x}} \right] \right)+B,$$

for some B independent of x, and t. Further by making use of Proposition 1, namely that \({\mathbf{A}}(t^+),\) is independent of X(t), and using conditional expectations it follows that

$$ \begin{aligned} \Updelta V({\mathbf{x}})& \le2 \sum_{j=1}^J w_j {\mathbf{x}}^{j\top} \varvec{\lambda}^j + B \\ & \quad +2 \sum_{j=1}^J w_j {\mathbf{x}}^{j\top} {\mathbf{R}}^j \sum_{k \in {\mathcal{K}}}p_{{\hat{\mathbf{S}}}}(k) {\mathbb{E}}\left[{\mathbf{Q}}^{\varvec{\pi}({\mathbf{x}},{{\mathbf{S}}}^{(k)})}(t^+)| {\mathbf{X}}(t)={\mathbf{x}}, {\hat{\mathbf{S}}}(t^+) = {\mathbf{S}}^{(k)}\right] \varvec{\pi}^{j}({\mathbf{x}},{\mathbf{S}}^{(k)}). \end{aligned} $$

Using Eq. 10, and the fact that \({\mathbf{Q}}^{\varvec{\pi}({\mathbf{x}},{{\mathbf{S}}}^{(k)})}(t^+),\) and \({\hat{\mathbf{S}}}(t^+)\) are independent of \({\mathbf{X}}(t)\) we obtain

$$ \Updelta V({\mathbf{x}})\le2\sum_{j=1}^J w_j {\mathbf{x}}^{j\top} \varvec{\lambda}^j-2 \sum_{j=1}^J \sum_{k \in {\mathcal{K}}} p_{{\hat{\mathbf{S}}}}(k)\left( -w_j {\tilde{\mathbf{Q}}}_{k}^{\varvec{\pi}({\mathbf{x}},{\mathbf{S}}^{(k)})} {\mathbf{R}}^{j\top} {\mathbf{x}}^{j} \right)^{\top} \times \varvec{\pi}^{j}({\mathbf{x}}, {\mathbf{S}}^{(k)}) +B. $$
(30)

Finally, by using Eq. 9, the above equation becomes

$$ \Updelta V({\mathbf{x}})\le 2 \left(\sum_{j=1}^J w_j {\mathbf{x}}^{j\top} \varvec{\lambda}^j-\sum_{k \in {\mathcal{K}}} p_{{\hat{\mathbf{S}}}}(k) \times \sum_{j=1}^J {\mathbf{D}}^{{\mathbf{w}} j}_{k \varvec{\pi}({\mathbf{x}},{\mathbf{S}}^{(k)})} ({\mathbf{x}})^{\top} \; \varvec{\pi}^{j}({\mathbf{x}},{\mathbf{S}}^{(k)}) \right) +B,$$

which completes the proof.   \(\square\)

When an arrival rate \(\varvec{\lambda}\) belongs to \(\hbox{ri}(\varvec{\Uplambda}),\) a useful upper bound on the first term in the parentheses of Eq. 29 can be obtained by means of the following lemma.

Lemma 2

Let \(\varvec{\lambda} \in \hbox{ri}(\varvec{\Uplambda}).\) Then there exist nonnegative scalars \({\mu^{\prime}}_k^{\mathbf{c}},\) for all \({\mathbf{c}}\in{{\mathcal{T}}}_k,\) \(k \in {\mathcal{K}},\) with \(\sum_{{\mathbf{c}}\in{{\mathcal{T}}}_k}{\mu^{\prime}}_k^{\mathbf{c}} < 1,\) such that, for all \({\mathbf{x}}\in{{\mathcal{X}}},\)

$$ \sum_{j=1}^J w_j {\mathbf{x}}^{j\top}\varvec{\lambda}^j \le \sum_{k \in {\mathcal{K}}} p_{{\hat{\mathbf{S}}}}(k) \sum_{{\mathbf{c}}\in{{\mathcal{T}}}_k} {\mu^{\prime}}_k^{{\mathbf{c}}} {\mathbf{D}}_{k {\mathbf{c}}}^{{\mathbf{w}}}({\mathbf{x}})^{\top} {\mathbf{c}}. $$
(31)

Proof

Let rate \(\varvec{\lambda} \in \hbox{ri}(\varvec{\Uplambda}).\) Then \(\varvec{\lambda} \in \varvec{\Uplambda},\) as \(\hbox{ri}(\varvec{\Uplambda}) \subseteq \varvec{\Uplambda}.\) Hence, with reference to Eq. 23, there exist a scalar δ > 1 and non-negative flow vectors \({\mathbf{f}}^{j}_{k}\in {\mathbb{R}}_+^{L}\) such that

$$ \varvec{\lambda}^j=-{\mathbf{R}}^j \sum_{k \in {\mathcal{K}}} p_{{\hat{\mathbf{S}}}}(k) {\mathbf{f}}^j_{k}, $$
(32)

and where \(\delta \sum_{j=1}^J {\mathbf{f}}_k^j \in \hbox{co}(\tilde{{\mathcal{Q}}}_k),\) i.e., for some \(\mu_k^{\mathbf{c}} \ge 0\) such that \(\sum_{{\mathbf{c}} \in {\mathcal{T}}_k} \mu_k^{\mathbf{c}} =1\) we have

$$ \delta \sum_{j=1}^J {\mathbf{f}}_k^j = \sum_{{\mathbf{c}} \in {\mathcal{T}}_k}\mu_k^{\mathbf{c}} {\tilde{\mathbf{Q}}}_k^{{\mathbf{c}}} {\mathbf{c}}. $$
(33)

Note that from Eq. 33 it follows that, for all \(j \in {\mathcal{J}},\) and \(k \in {\mathcal{K}},\) we have

$$ ({\mathbf{f}}_k^j)_\ell = 0\,, \quad\forall \ell\not\in {\mathbf{S}}^{(k)}. $$
(34)

Using Eq. 32, and the fact that each of the vectors \({\mathbf{f}}_k^j\) is non-negative component-wise, we can write

$$ \begin{aligned} \sum_{j=1}^J w_j {\mathbf{x}}^{j \top} \varvec{\lambda}^j&\le \sum_{k \in {\mathcal{K}}} p_{{\hat{\mathbf{S}}}}(k) \sum_{j=1}^{J} \left( \mathop{\hbox{max}}\limits_{j \in {\mathcal{J}}} \left(-w_j {\mathbf{x}}^{j\top}{\mathbf{R}}^{j} \right) {\mathbf{f}}^j_k \right)\\ &=\sum_{k \in {\mathcal{K}}} p_{{\hat{\mathbf{S}}}}(k) \mathop{\hbox{max}}\limits_{j \in {\mathcal{J}}} \left(- w_j {\mathbf{x}}^{j\top}{\mathbf{R}}^{j} \right) \sum_{{\mathbf{c}}\in{{\mathcal{T}}}_k} \frac{\mu_k^{{\mathbf{c}}}}{\delta} {\tilde{\mathbf{Q}}}_k^{{\mathbf{c}}} {\mathbf{c}}, \end{aligned} $$
(35)

where Eq. 35 follows by making use of Eq. 33. Let \(\mu^{\prime {\mathbf{c}}}_k:=\frac{\mu_k^{{\mathbf{c}}}}{\delta}.\) By definition, \({\mu^{\prime}}_k^{{\mathbf{c}}} \ge 0 .\) Also, since \(\sum_{{\mathbf{c}}\in{\mathcal{T}}_k} \mu_k^{{\mathbf{c}}} =1\) and δ > 1, it follows that \(\sum_{{\mathbf{c}} \in {\mathcal{T}}_k}\mu^{\prime {\mathbf{c}}}_k < 1.\) Further, Eq. 35 can be written as

$$ \begin{aligned} \sum_{j=1}^J w_j {\mathbf{x}}^{j \top} \varvec{\lambda}^j&\le\sum_{k \in {\mathcal{K}}} p_{{\hat{\mathbf{S}}}}(k) \sum_{{\mathbf{c}}\in{{\mathcal{T}}}_k} {\mu^{\prime}}_k^{{\mathbf{c}}} \mathop{\hbox{max}}\limits_{j \in {\mathcal{J}}} \left( \left(- w_j {\tilde{\mathbf{Q}}}_k^{{\mathbf{c}}} {\mathbf{R}}^{j\top} {\mathbf{x}}^{j}\right)^{\top} \right){\mathbf{c}} \\ &=\sum_{k \in {\mathcal{K}}} p_{{\hat{\mathbf{S}}}}(k) \sum_{{\mathbf{c}}\in{{\mathcal{T}}}_k} {\mu^{\prime}}_k^{{\mathbf{c}}} {\mathbf{D}}^{{\mathbf{w}}}_{k {\mathbf{c}}} ({\mathbf{x}})^{\top} {\mathbf{c}}, \end{aligned} $$
(36)

where Eq. 36 follows by making use of Eqs. 9, 12, and 13. This completes the proof of Lemma 2.  \(\square\)

We proceed to finalize the proof of the claim that \(\hbox{ri}(\varvec{\Uplambda}) \subseteq {\mathbf{C}}_{\varvec{\pi}_{0}^{{\mathbf{w}}}}.\) From Lemmas 1 and 2 we conclude that, given \(\varvec{\lambda} \in \hbox{ri}(\varvec{\Uplambda}),\) there exist nonnegative scalars \({\mu^{\prime}}_k^{\mathbf{c}},\) for all \({\mathbf{c}}\in{{\mathcal{T}}}_k ,\) and \(k \in {\mathcal{K}},\) with \(\sum_{{\mathbf{c}}\in{{\mathcal{T}}}_k}{\mu^{\prime}}_k^{\mathbf{c}} < 1,\) such that, for all \({\mathbf{x}}\in{{\mathcal{X}}},\) and all stationary policies \(\varvec{\pi},\)

$$ \Updelta V({\mathbf{x}}) \le 2\sum_{k \in {\mathcal{K}}} p_{{\hat{\mathbf{S}}}}(k) \left(\sum_{{\mathbf{c}}\in{\mathcal{T}}_k} {\mu^{\prime}}_k^{\mathbf{c}} {\mathbf{D}}_{k {\mathbf{c}}}^{{\mathbf{w}}}({\mathbf{x}})^{\top} {\mathbf{c}} -\sum_{j=1}^J{\mathbf{D}}_{k\varvec{\pi}({\mathbf{x}},{\mathbf{S}}^{(k)})}^{{\mathbf{w}} j}({\mathbf{x}})^\top \varvec{\pi}^j({\mathbf{x}},{\mathbf{S}}^{(k)})\right) +B. $$
(37)

So far \(\varvec{\pi}\) was an arbitrary stationary policy. We now focus on the policy \(\varvec{\pi}_0^{\mathbf{w}}.\) In view of the fact that \(\varvec{\pi}({\mathbf{x}}, {\mathbf{S}}^{(k)}) = \sum_{j=1}^J \varvec{\pi}^j ({\mathbf{x}}, {\mathbf{S}}^{(k)}) \in {\mathcal{T}}_k\) (from Eq. 17), and of the definition of \(\varvec{\pi}_0^{{\mathbf{w}}},\) we obtain

$$ \begin{aligned} \sum_{j=1}^J {\mathbf{D}}_{k\varvec{\pi}_0^{{\mathbf{w}}}({\mathbf{x}},{\mathbf{S}}^{(k)})}^{{\mathbf{w}} j}({\mathbf{x}})^\top \varvec{\pi}_0^{{\mathbf{w}} j}({\mathbf{x}},{\mathbf{S}}^{(k)}) &= {\mathbf{D}}_{k \varvec{\pi}_0^{{\mathbf{w}}}({\mathbf{x}},{\mathbf{S}}^{(k)})}^{\mathbf{w}}({\mathbf{x}})^\top \sum_{j=1}^J \varvec{\pi}_0^{{\mathbf{w}}j}({\mathbf{x}},{\mathbf{S}}^{(k)}) \\ & = {\mathbf{D}}_{k \varvec{\pi}_0^{{\mathbf{w}}}({\mathbf{x}},{\mathbf{S}}^{(k)})}^{\mathbf{w}}({\mathbf{x}})^\top \varvec{\pi}_0^{\mathbf{w}}({\mathbf{x}},{\mathbf{S}}^{(k)}) \\ & = \mathop{\hbox{max}}\limits_{{\mathbf{c}}\in {{\mathcal{T}}}_{k}} \{{{{\mathbf{D}}_{k{\mathbf{c}}}^{{\mathbf{w}}}}({\mathbf{x}})}^{\top} {\mathbf{c}}\}. \end{aligned} $$

By substituting into Eq. 37, we get

$$ \begin{aligned} \Updelta V({\mathbf{x}})&\le B +2\sum_{k \in {\mathcal{K}}} p_{{\hat{\mathbf{S}}}}(k) \left( \sum_{{\mathbf{c}}\in{{\mathcal{T}}}_k} {\mu^{\prime}}_k^{\mathbf{c}} {\mathbf{D}}_{k{\mathbf{c}}}^{\mathbf{w}}({\mathbf{x}})^{\top} {\mathbf{c}} - \mathop{\hbox{max}}\limits_{{\mathbf{c}} \in {{\mathcal{T}}}_{k}} \{{{{\mathbf{D}}_{k{\mathbf{c}}}^{{\mathbf{w}}}}({\mathbf{x}})}^{\top} {\mathbf{c}} \}\right) \\ &\le B -2\sum_{k \in {\mathcal{K}}} p_{{\hat{\mathbf{S}}}}(k) \mathop{\hbox{max}}\limits_{{\mathbf{c}} \in {{\mathcal{T}}}_{k}} \{{{{\mathbf{D}}_{k{\mathbf{c}}}^{{\mathbf{w}}}}({\mathbf{x}})}^{\top} {\mathbf{c}} \} \left( 1 - \sum_{{\mathbf{c}} \in{{\mathcal{T}}}_k} {\mu^{\prime}}_k^{\mathbf{c}} \right) \\ &\le B - \rho \mathop{\hbox{max}}_{k \in {\mathcal{K}}} \mathop{\hbox{max}}_{{\mathbf{c}}\in{\mathcal{T}}_k} \{{\mathbf{D}}_{k {\mathbf{c}}}^{{\mathbf{w}}}({\mathbf{x}})^\top {\mathbf{c}} \}, \end{aligned} $$

where from Eq. 7, and the fact that \(\sum_{{\mathbf{c}}\in{{\mathcal{T}}}_k}{\mu^{\prime}}_k^{\mathbf{c}} < 1\)

$$ \rho:=2\mathop{\hbox{min}}\limits_{k \in {\mathcal{K}}} \left( p_{{\hat{\mathbf{S}}}}(k) \left(1 - \sum_{{\mathbf{c}}\in{{\mathcal{T}}}_k} {\mu^{\prime}}_k^{\mathbf{c}} \right)\right) > 0. $$
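The max-weight selection underlying \(\varvec{\pi}_0^{\mathbf{w}}\) above — choosing, for each estimated state, the activation vector in \({\mathcal{T}}_k\) that maximizes the weighted differential-backlog inner product — can be sketched as follows. The data layout, function name, sign convention for the incidence matrices, and the toy network are all illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of the max-weight selection behind policy pi_0^w: for
# estimated state S^(k), choose the activation vector c in the finite
# constraint set T_k maximizing D_kc^w(x)^T c, where the j-th differential
# backlog is -w_j * Qtilde_k^c * R_j^T * x_j and D_kc^w is its componentwise
# maximum over commodities j (cf. Eqs. 9, 12, 13). All structures are
# illustrative, not the paper's.

def select_activation(x, w, R, activations, q_tilde):
    """x[j]: queue vector (length N); w[j]: positive weight; R[j]: N x L
    incidence matrix (-1 at a link's source node, +1 at its destination);
    activations: list of 0/1 vectors c of length L (the set T_k);
    q_tilde[i]: length-L success probabilities (diagonal of Qtilde_k^c)."""
    L = len(activations[0])
    best_val, best_c = float("-inf"), None
    for i, c in enumerate(activations):
        # componentwise max over commodities of -w_j * q * (R_j^T x_j)
        D = [max(-w[j] * q_tilde[i][l] *
                 sum(R[j][n][l] * x[j][n] for n in range(len(x[j])))
                 for j in range(len(x)))
             for l in range(L)]
        val = sum(D[l] * c[l] for l in range(L))
        if val > best_val:
            best_val, best_c = val, c
    return best_c, best_val

# Tiny example: one commodity, two nodes, one link from node 0 to node 1.
# The backlog differential is x_0 - x_1 = 4, so activating the link wins.
R = [[[-1], [1]]]                 # R[0] is the 2 x 1 incidence matrix
c_star, v = select_activation(x=[[5.0, 1.0]], w=[1.0], R=R,
                              activations=[[0], [1]], q_tilde=[[1.0], [1.0]])
print(c_star, v)
```

The key point mirrored from the drift bound above is that the selected activation attains \(\max_{{\mathbf{c}}\in{\mathcal{T}}_k}\{{\mathbf{D}}_{k{\mathbf{c}}}^{\mathbf{w}}({\mathbf{x}})^{\top}{\mathbf{c}}\},\) which is what makes the parenthesized term in Eq. 37 nonpositive.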

Now, let \({\mathbf{x}}\in{\mathcal{X}},\) with \({\mathbf{x}}\ne {\mathbf 0},\) and suppose X(t) = x. Choose a node n, and a commodity j such that

$$ x_{n j} > 0. $$

The Markov property of \(\{{\mathbf{X}}(t)\}_{t=0}^\infty\) implies that

$$ \Updelta V({\mathbf{x}}) = {\mathbb{E}}\left[V({\mathbf{X}}(t^+))-V({\mathbf{X}}(t))|{\mathbf{X}}(t) = {\mathbf{x}}, {\mathbf{X}}(0)={\mathbf 0} \right]. $$

Hence, without loss of generality, assume that the queue size process at time slot 0 satisfies X(0) = 0. Since \(X_{n j}(t)=x_{n j} > 0\) and \(X_{n j}(0) = 0,\) there must exist a sequence of links in \({\mathcal{L}}\) from some node n′, with \(\lambda_{n^{\prime}j} > 0,\) to node n that satisfies Assumption 2. Further, Assumption 2 then implies that there exist links \(\ell_{i}\in {\mathcal{L}},\) \(i=1,\ldots, z,\) for some z satisfying 0 < z < N, such that \(n=s(\ell_{1}),\) and nodes \(n_1, \ldots, n_z,\) such that \(d(\ell_1)=n_1,\) \(s(\ell_{i+1})=n_i,\) \(d(\ell_{i+1})= n_{i+1},\) \(i=1,\ldots, z-1,\) and \(n_z \in V_{j}.\) For notational simplicity, also let \(n_0:=n.\) Since \(x_{{n_z}j}=0\) whenever \(n_{z} \in V_{j},\) we can write

$$ x_{n j}=\sum_{i=1}^{z} ( x_{n_{i-1} j} - x_{n_{i} j} ) \le z \mathop{\hbox{max}}\limits_{i,j} ( x_{n_{i-1} j} - x_{n_{i} j} ). $$
(38)

It follows that there exists some link \(\ell_{i^{\scriptstyle{\star}}}\) through which the above queue size difference is maximized, for some commodity \(j^{\scriptstyle{\star}}\in{\mathcal{J}}.\) Let \(n_{i^{\scriptstyle{\star}}-1}=s(\ell_{i^{\scriptstyle{\star}}}),\) and \(n_{i^{\scriptstyle{\star}}}=d(\ell_{i^{\scriptstyle{\star}}}).\) Then, from Eq. 38 we have

$$ x_{n_{i^{\scriptstyle{\star}}-1} j^{\scriptstyle{\star}}} - x_{n_{i^{\scriptstyle{\star}}} j^{\scriptstyle{\star}}} \ge \frac{x_{n j}}{z} \ge \frac{x_{n j}}{N}. $$
(39)
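The pigeonhole step behind Eqs. 38 and 39 can be checked numerically: along a chain of queue sizes that starts at \(x_{nj}\) and ends at 0 after z hops, some consecutive difference is at least \(x_{nj}/z.\) The sample chain below is a made-up example, not data from the paper.

```python
# Illustrative numeric check of the telescoping/pigeonhole step in Eqs. 38-39.

def max_step(chain):
    """Largest consecutive decrease x_{n_{i-1} j} - x_{n_i j} along the chain."""
    return max(a - b for a, b in zip(chain, chain[1:]))

chain = [9.0, 7.5, 7.0, 2.0, 0.0]  # x_{n_0 j}, ..., x_{n_z j}, with x_{n_z j} = 0
z = len(chain) - 1
print(max_step(chain), chain[0] / z)  # the first is always >= the second
```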

Recall that \(\ell_{i}\in {\mathcal{L}}\) for all \(i=1,\ldots, z .\) Further, let \(k^{\scriptstyle{\star}}\) be such that \(\ell_{i^{\scriptstyle{\star}}}\) satisfies Eq. 1 under the estimated channel state \({\hat{\mathbf{S}}}(t)={\mathbf{S}}^{(k^{\scriptstyle{\star}})}.\) Let \(\hbox{e}_{\ell_{i^{\scriptstyle{\star}}}}\in{\mathbb{R}}^{L}\) be a vector with its \({\ell_{i^{\scriptstyle{\star}}}}\hbox{th}\) component equal to 1, and with all other components equal to 0. Then, from the property of the constraint set it follows that \(\hbox{e}_{\ell_{i^{\scriptstyle{\star}}}}\in{\mathcal{T}}_{k^{\scriptstyle{\star}}}.\) Also, it follows from Eqs. 12 and 13 that

$$ \begin{aligned} \mathop{\hbox{max}}\limits_{k \in {\mathcal{K}}} \mathop{\hbox{max}}\limits_{{\mathbf{c}}\in{\mathcal{T}}_k} \{{\mathbf{D}}_{k{\mathbf{c}}}^{{\mathbf{w}}}({\mathbf{x}})^{\top} {\mathbf{c}} \}& \ge \mathop{\hbox{max}}\limits_{{\mathbf{c}}\in{\mathcal{T}}_{k^{\scriptstyle{\star}}}} \{{\mathbf{D}}_{k^{\scriptstyle{\star}}{\mathbf{c}}}^{{\mathbf{w}}}({\mathbf{x}})^{\top} {\mathbf{c}} \} \\ & \ge {\mathbf{D}}_{k^{\scriptstyle{\star}} \hbox{e}_{\ell_{i^{\scriptstyle{\star}}}}}^{{\mathbf{w}}} ({\mathbf{x}})^\top \hbox{e}_{\ell_{i^{\scriptstyle{\star}}}} = \left( {\mathbf{D}}_{k^{\scriptstyle{\star}} \hbox{e}_{\ell_{i^{\scriptstyle{\star}}}}}^{\mathbf{w}} ({\mathbf{x}}) \right)_{\ell_{i^{\scriptstyle{\star}}}} \ge \left( {\mathbf{D}}_{k^{\scriptstyle{\star}}{\rm e}_{\ell_{i^{\scriptstyle{\star}}}}}^{{\mathbf{w}} j^{\scriptstyle{\star}}}({\mathbf{x}}) \right)_{\ell_{i^{\scriptstyle{\star}}}}, \end{aligned} $$

where \(\left({\mathbf{D}}_{k^{\scriptstyle{\star}}{\rm e}_{\ell_{i^{\scriptstyle{\star}}}}}^{{\mathbf{w}} j^{\scriptstyle{\star}}}({\mathbf{x}}) \right)_{\ell_{i^{\scriptstyle{\star}}}}\) is the \({\ell_{i^{\scriptstyle{\star}}}}\hbox{th}\) entry of the vector \({\mathbf{D}}_{k^{\scriptstyle{\star}}{\rm e}_{\ell_{i^{\scriptstyle{\star}}}}}^{{\mathbf{w}} j^{\scriptstyle{\star}}}({\mathbf{x}})\). In view of Eqs. 11 and 39, it follows that

$$ \mathop{\hbox{max}}\limits_{k \in {\mathcal{K}} } \mathop{\hbox{max}}\limits_{{\mathbf{c}}\in{\mathcal{T}}_k} \{{\mathbf{D}}_{k{\mathbf{c}}}^{{\mathbf{w}}}({\mathbf{x}})^{\top} {\mathbf{c}} \} \ge w_{j^{\scriptstyle{\star}}} ({\tilde{\mathbf{Q}}}_{k^{\scriptstyle{\star}}}^{{\rm e}_{\ell_{i^{\scriptstyle{\star}}}}})_{\ell_{i^{\scriptstyle{\star}}}} (x_{n_{i^{\scriptstyle{\star}}-1} j^{\scriptstyle{\star}}} - x_{n_{i^{\scriptstyle{\star}}} j^{\scriptstyle{\star}}}) \ge \frac{{w_{\rm min}}\; \tilde{q}_{\rm min} \; x_{n j}}{N}, $$

where \(({\tilde{\mathbf{Q}}}_{k^{\scriptstyle{\star}}}^{{\rm e}_{\ell_{i^{\scriptstyle{\star}}}}})_{\ell_{i^{\scriptstyle{\star}}}}\) is the \(\ell_{i^{\scriptstyle{\star}}}\hbox{th}\) diagonal entry of the matrix \({\tilde{\mathbf{Q}}}_{k^{{\scriptstyle{\star}}}}^{{\rm e}_{\ell_{i^{\scriptstyle{\star}}}}},\) while

$$ w_{\rm min}:=\mathop{\hbox{min}}\limits_{j \in {\mathcal{J}}}w_j > 0, $$

and, in view of Assumption 2,

$$ \tilde{q}_{\rm min} > 0. $$

Note that the quantities \(w_{\rm min}\) and \(\tilde{q}_{\rm min}\) do not depend on x. Overall, we have

$$ \Updelta V({\mathbf{x}}) \le B - \frac{\rho \; w_{\rm min}\;\tilde{q}_{\rm min}\; x_{n j}}{N} $$

so that, given any ε > 0,

$$ \Updelta V({\mathbf{x}}) < -\epsilon, \quad \forall {\mathbf{x}}\notin{\mathcal{X}}_0 := \left\{{\mathbf{x}}\in{\mathcal{X}}:x_{n j} \le \frac{N(B+\epsilon)}{{\rho \; w_{\rm min}} \; \tilde{q}_{\rm min}}\right\}. $$

Since vectors in \({\mathcal{X}}\) have integer components, the set \({\mathcal{X}}_0\) is finite, and the proof is complete.   \(\square\)

(ii) Proof of \({\mathbf{C}}_{\varvec{\pi}_{0}^{{\mathbf{w}}}} \subseteq \widetilde{{\mathbf{C}}}_{\varvec{\pi}_0^{{\mathbf{w}}}}^{1}\)

Consider an arrival rate \(\varvec{\lambda} \in {\mathbf{C}}_{\varvec{\pi}_{0}^{{\mathbf{w}}}}.\) In order to prove that \(\varvec{\lambda} \in \widetilde{{\mathbf{C}}}_{\varvec{\pi}_0^{{\mathbf{w}}}}^{1},\) we need to show that stability according to Definition 1 implies intermittent boundedness with probability 1. We proceed by stating a theorem that provides a sufficient condition for intermittent boundedness of a Markov Chain.

Theorem 3

Let \(\{Y(t)\}_{t=0}^{\infty}\) be a Markov Chain, with \({\mathcal{Y}}\) the (possibly empty) set of its transient states. If \(\{Y(t)\}_{t=0}^{\infty}\) almost surely exits the set of transient states in finite time, i.e., if

$$ P\left[\hbox{min}\{\tau\ge0:Y(\tau)\notin {\mathcal{Y}}\} < \infty|Y(0)=y \right] =1, \quad \forall y \in {\mathcal{Y}} $$
(40)

(which holds vacuously when \({\mathcal{Y}}\) is empty), then \(\{Y(t)\}_{t=0}^{\infty}\) is intermittently bounded with probability 1.

Proof

Consider the Markov Chain \(\{Y(t)\}_{t=0}^{\infty}\) that satisfies Eq. 40. Then with probability 1, the Markov Chain \(\{Y(t)\}_{t=0}^{\infty}\) will eventually be confined within a single recurrent class. It follows (e.g. from Theorem 7.3 in Chapter 2 of [8]) that, with probability 1, some (recurrent) state will be visited infinitely many times. Hence, there exists a set \(W \subseteq \Upomega,\) where \(\Upomega\) is the sample space, with P[W] = 1, such that for every outcome \(\omega \in W\) there exist a state y and a sequence \(\{t_i \}_{i=1}^{\infty}\) such that in the sample path ω the process satisfies

$$ Y(\omega,t_i)=y, \;\;\forall i=1,2,\ldots. $$

Hence, by Definition 2 it follows that \(\{Y(t)\}_{t=0}^{\infty}\) is intermittently bounded with probability 1.   \(\square\)

A direct consequence of Theorem 3 is Corollary 1, which we state next.

Corollary 1

Let \(\{Y(t)\}_{t=0}^{\infty}\) be a stable Markov Chain. Then, \(\{Y(t)\}_{t=0}^{\infty}\) is intermittently bounded with probability 1.

From Corollary 1, the desired result follows.

(iii) Proof of \(\widetilde{{\mathbf{C}}}_{{\mathcal E}}^{p} \subseteq \varvec{\Uplambda}\)

We need to show that if \(\varvec{\lambda} \in \widetilde{{\mathbf{C}}}_{{\mathcal E}}^{p}\) then \(\varvec{\lambda} \in \varvec{\Uplambda}.\) We start by introducing the notation required for our proof. We define the random variable \(n_{{\hat{\mathbf{S}}}}(t;k)\) to be the number of time slots τ in the interval [0, t] during which \({\hat{\mathbf{S}}}(\tau)\) takes the value \({\mathbf{S}}^{(k)}.\) Moreover, we denote by \(\{n_{{\hat{\mathbf{S}}}}(\omega,t;k)\}_{t =1}^{\infty},\) \(\{n_{{\hat{\mathbf{S}}}{\mathbf{E}}}(\omega, t;k, {\mathbf{c}})\}_{t=1}^{\infty},\) \(\{n_{{\hat{\mathbf{S}}}{\mathbf{E}}{\mathbf{Q}}}(\omega,t; k,{\mathbf{c}}, {\mathbf{Q}})\}_{t=1}^{\infty}\) the sample paths ω of the corresponding processes (recall that the processes \(\{n_{{\hat{\mathbf{S}}}{\mathbf{E}}}(t;k, {\mathbf{c}})\}_{t=1}^{\infty},\) \(\{n_{{\hat{\mathbf{S}}}{\mathbf{E}}{\mathbf{Q}}}(t; k,{\mathbf{c}}, {\mathbf{Q}})\}_{t=1}^{\infty}\) are defined in Sect. 4). Finally, by \(\{{\mathbf{A}}(\omega,t)\}_{t=1}^{\infty},\) \(\{{\hat{\mathbf{S}}}(\omega,t)\}_{t=1}^{\infty},\) \(\{{\mathbf{E}}(\omega,t)\}_{t=1}^{\infty},\) \(\{{\mathbf{Q}}^{\mathbf{c}}(\omega,t)\}_{t=1}^{\infty},\) and \(\{{\mathbf{X}}(\omega,t)\}_{t=1}^{\infty}\) we denote the sample paths ω of the respective processes.

Since \(\varvec{\lambda}\in \widetilde{{\mathbf{C}}}_{{\mathcal E}}^{p},\) there exists a policy \(\{{\mathbf{E}}(t)\}_{t=1}^{\infty} \in {\mathcal E},\) and an i.i.d. process \(\{{\mathbf{S}}(t),{\hat{\mathbf{S}}}(t), {\mathbf{A}}(t)\}_{t=1}^{\infty}\) such that \( {\mathbb{E}}[{\mathbf{A}}(t)]=\varvec{\lambda}.\) In particular

$$ P\left[\omega: \lim_{t \rightarrow \infty} \frac{1}{t} \sum_{\tau=1 }^{t} {\mathbf{A}}^j(\omega,\tau) = \varvec{\lambda}^j \right] = 1, \quad \forall j \in {\mathcal{J}}, $$
(41)
$$ P\left[ \omega: \lim_{t \rightarrow \infty} \frac{n_{{\hat{\mathbf{S}}}}(\omega,t; k)}{t} = p_{{\hat{\mathbf{S}}}}(k)\right]=1, \quad \forall k \in {\mathcal{K}}. $$
(42)
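As a quick illustration (not from the paper) of the almost-sure convergence recorded in Eq. 42, one can simulate an i.i.d. estimated-state process and watch its empirical frequencies approach the underlying distribution; the distribution and function name below are arbitrary choices for the sketch.

```python
import random

# Sketch of the SLLN behind Eq. 42: for an i.i.d. estimated-state process
# taking value S^(k) with probability p(k), the empirical frequency
# n(t; k) / t converges to p(k) almost surely. The distribution here is an
# arbitrary example.

def empirical_freq(p, t, seed=1):
    """Simulate t i.i.d. draws from distribution p; return empirical frequencies."""
    rng = random.Random(seed)
    counts = [0] * len(p)
    for _ in range(t):
        u, acc = rng.random(), 0.0
        for k, pk in enumerate(p):
            acc += pk
            if u < acc:
                counts[k] += 1
                break
        else:                      # guard against floating-point round-off
            counts[-1] += 1
    return [c / t for c in counts]

print(empirical_freq([0.2, 0.5, 0.3], 100000))
```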

Furthermore, from Eq. 22 we have that

$$ P \left[\omega: \lim_{t \rightarrow \infty} \frac{n_{{\hat{\mathbf{S}}}{\mathbf{E}}{\mathbf{Q}}}(\omega, t; k,{\mathbf{c}},{\mathbf{Q}})}{n_{{\hat{\mathbf{S}}}{\mathbf{E}}}(\omega,t; k, {\mathbf{c}})} = P [ {\mathbf{Q}}^{{\mathbf{c}}}(t) = {\mathbf{Q}} | {\hat{\mathbf{S}}}(t)= {\mathbf{S}}^{(k)}] \right] =1. $$
(43)

Also, since the process \(\{{\mathbf{X}}(t)\}_{t=0}^{\infty}\) is intermittently bounded with positive probability, it follows that

$$ P \left[\omega: {\mathbf{X}}(\omega, t_i) < {\mathbf{X}}_{\rm max},\;\; {\hbox{for some finite}}\;{\mathbf{X}}_{\rm max}, {\hbox{and for some sequence}} \{t_i\}_{i=1}^{\infty}\right] > 0. $$
(44)

Since the events in Eqs. 41–43 have probability 1, and the event in Eq. 44 has positive probability, their intersection has positive probability. In particular, the four events have a non-empty common intersection. We first fix an outcome ω′ that belongs to this common intersection, and once ω′ is selected, we identify an \({\mathbf{X}}_{\rm max}\) and a sequence \(\{t_i\}_{i=1}^{\infty}\) as specified by Eq. 44. We have

$$ \lim_{i \rightarrow \infty}\frac{1}{t_i} \sum_{\tau=1}^{t_i}{\mathbf{A}}^j(\omega^{\prime},\tau) = \varvec{\lambda}^j $$
(45)
$$ \lim_{i \rightarrow \infty} \frac{n_{{\hat{\mathbf{S}}}}(\omega^{\prime},t_i; k)}{t_i} = p_{{\hat{\mathbf{S}}}}(k) $$
(46)
$$ \lim_{t \rightarrow \infty} \frac{n_{{\hat{\mathbf{S}}}{\mathbf{E}}{\mathbf{Q}}}(\omega^{\prime}, t; k,{\mathbf{c}},{\mathbf{Q}})}{n_{{\hat{\mathbf{S}}}{\mathbf{E}}}(\omega^{\prime},t; k, {\mathbf{c}})} = P [ {\mathbf{Q}}^{{\mathbf{c}}}(t) = {\mathbf{Q}} | {{\hat{\mathbf{S}}}}(t)= {\mathbf{S}}^{(k)}] $$
(47)
$$ {\mathbf{X}}(\omega^{\prime},t_i) < {\mathbf{X}}_{\rm max}, \quad \hbox{for some} \; {\mathbf{X}}_{\rm max}, \;\; \forall i=1,2,\ldots. $$
(48)

We now sum both sides of Eq. 6 from time slot 0 to \(t_i,\) for some \(i=1,2,\ldots,\) and cancel the identical terms. Dividing both sides of the resulting equation by \(t_i,\) we obtain

$$ \frac{1}{t_i}{\mathbf{X}}^j(\omega^{\prime}, t_i) = \frac{1}{t_i} {\mathbf{X}}^j(\omega^{\prime},0) + \frac{1}{t_i} \sum_{\tau=1}^{t_i} {\mathbf{R}}^j {\mathbf{Q}}^{{\mathbf{E}}(\omega^{\prime},\tau)}(\omega^{\prime},\tau) {\mathbf{E}}^j(\omega^{\prime},\tau) + \frac{1}{t_i}\sum_{\tau=1}^{t_i} {\mathbf{A}}^j(\omega^{\prime},\tau). $$
(49)

From Eq. 48, and since the initial backlog \({\mathbf{X}}^j(\omega^{\prime},0)\) is fixed, we have

$$ \lim_{i \rightarrow \infty} \frac{1}{t_i} {\mathbf{X}}^j(\omega^{\prime}, t_i) = 0, $$
(50)

and

$$ \lim_{i \rightarrow \infty} \frac{1}{t_i} {\mathbf{X}}^j(\omega^{\prime},0) = 0. $$
(51)

Taking the limit in Eq. 49 as \(i \rightarrow \infty ,\) and by using Eqs. 45, 50 and 51 we obtain

$$ \begin{aligned} \varvec{\lambda}^j & = - \lim_{i \rightarrow \infty} \left\{\frac{1}{t_i} \sum_{\tau = 1}^{t_i} {\mathbf{R}}^j {\mathbf{Q}}^{{\mathbf{E}} (\omega^{\prime},\tau)} (\omega^{\prime},\tau) {\mathbf{E}}^j (\omega^{\prime},\tau) \right\} \\ & = -\lim_{i \rightarrow \infty} \left\{{\mathbf{R}}^j \sum_{k \in {\mathcal{K}}} \frac{1}{t_i}\sum\limits_{\buildrel{\tau \in \{1,\ldots, t_i\}}\over{{\rm s.t.\ }{\hat{\mathbf{S}}}(\omega^{\prime},\tau)={\mathbf{S}}^{(k)}}} {\mathbf{Q}}^{{\mathbf{E}}(\omega^{\prime},\tau)} (\omega^{\prime},\tau) {\mathbf{E}}^j (\omega^{\prime},\tau) \right\}\\ & = - \lim_{i \rightarrow \infty} \left\{{\mathbf{R}}^j \sum_{k \in \tilde{{\mathcal{K}}}} \frac{1}{t_i} \sum\limits_{\buildrel{\tau \in \{1,\ldots, t_i\}}\over{{\rm s.t.\ } {\hat{\mathbf{S}}}(\omega^{\prime},\tau)={\mathbf{S}}^{(k)}}}{\mathbf{Q}}^{{\mathbf{E}}(\omega^{\prime},\tau)}(\omega^{\prime},\tau) {\mathbf{E}}^j (\omega^{\prime},\tau) \right\}, \end{aligned} $$
(52)

where

$$ \tilde{{\mathcal{K}}}=\left\{k \in {\mathcal{K}}\; {\rm s.t.\ } {\hat{\mathbf{S}}}(\omega^{\prime}, \tau)= {\mathbf{S}}^{(k)} \hbox{ for some } \tau \geq 1 \right\}. $$

Thus, for every \(k \in \tilde{{\mathcal{K}}}\) and for i large enough, \(n_{{\hat{\mathbf{S}}}}(\omega^{\prime},t_i; k) > 0.\) Without loss of generality (redefining the sequence \(\{t_i\}_{i=1}^{\infty}\) if necessary), assume that \(n_{{\hat{\mathbf{S}}}}(\omega^{\prime},t_i; k) > 0\) for all \(k \in \tilde{{\mathcal{K}}}\) and all \(i=1,2,\ldots .\) Then Eq. 52 can be written as

$$ \varvec{\lambda}^j=- \lim_{i \rightarrow \infty}\left\{{\mathbf{R}}^j \sum_{k \in \tilde{{\mathcal{K}}}} \frac{n_{{\hat{\mathbf{S}}}}(\omega^{\prime},t_i; k)}{t_i} \frac{1}{n_{{\hat{\mathbf{S}}}}(\omega^{\prime},t_i; k)} \times\sum\limits_{\buildrel{ \tau \in \{1,\ldots, t_i\}}\over{{\rm s.t.\ }{\hat{\mathbf{S}}}(\omega^{\prime},\tau)={\mathbf{S}}^{(k)}}} {\mathbf{Q}}^{{\mathbf{E}}(\omega^{\prime},\tau)}(\omega^{\prime},\tau) {\mathbf{E}}^j (\omega^{\prime},\tau) \right\}. $$
(53)
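The per-state splitting used to pass from Eq. 52 to Eq. 53 is purely algebraic: a time average over all slots equals the sum, over states, of the empirical state frequency times the per-state average. A minimal numeric sketch of this identity (the states, scalar values, and dimensions below are illustrative only, not taken from the paper):

```python
import random

# Toy check of the identity behind Eq. 53:
#   (1/T) * sum_t x_t  =  sum_k (n_k/T) * (1/n_k) * sum_{t : s_t = k} x_t,
# where n_k counts the slots whose estimated state is k.
random.seed(0)
T = 1000
states = [random.randrange(3) for _ in range(T)]  # estimated channel state per slot
values = [random.random() for _ in range(T)]      # stands in for a scalar Q^E(t) E^j(t) term

time_average = sum(values) / T

decomposed = 0.0
for k in range(3):
    slots_k = [t for t in range(T) if states[t] == k]
    n_k = len(slots_k)
    if n_k == 0:
        continue  # states that never occur contribute nothing
    per_state_avg = sum(values[t] for t in slots_k) / n_k
    decomposed += (n_k / T) * per_state_avg  # weight by empirical frequency n_k/T

assert abs(time_average - decomposed) < 1e-9
```

As \(i \rightarrow \infty\) the weights \(n_{{\hat{\mathbf{S}}}}(\omega^{\prime},t_i;k)/t_i\) converge to \(p_{{\hat{\mathbf{S}}}}(k)\) by Eq. 46, which is how Eq. 55 is then obtained.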

Note that \({\mathbf{E}}^j (\omega^{\prime}, \tau) \in {\mathcal{T}}_k\) whenever \({\hat{\mathbf{S}}}(\omega^{\prime},\tau)= {\mathbf{S}}^{(k)}\). Also, for every time slot τ, the matrix \({\mathbf{Q}}^{{\mathbf{E}}(\omega^{\prime},\tau)}(\omega^{\prime},\tau)\) is a diagonal matrix, whose diagonal entries take values in the set {0,1}. Therefore, it is also true that the product \({\mathbf{Q}}^{{\mathbf{E}}(\omega^{\prime},\tau)}(\omega^{\prime},\tau)\) \({\mathbf{E}}^j (\omega^{\prime}, \tau) \in {\mathcal{T}}_k.\) Also, since

$$\sum\limits_{\buildrel{\tau \in \{1{,} \ldots {,} t_i \}}\over{{\rm s.t.} {\hat{\mathbf{S}}}(\omega^{\prime},\tau)={\mathbf{S}}^{(k)}}} \frac{1}{n_{{\hat{\mathbf{S}}}}(\omega^{\prime},t_i; k)} = \frac{1}{n_{{\hat{\mathbf{S}}}}(\omega^{\prime},t_i; k)} \sum\limits_{\buildrel{\tau \in \{1{,} \ldots {,} t_i\}}\over{{\rm s.t.}{\hat{\mathbf{S}}}(\omega^{\prime},\tau)={\mathbf{S}}^{(k)}}}1=\;\frac{1}{n_{{\hat{\mathbf{S}}}}(\omega^{\prime},t_i; k)} \; n_{{\hat{\mathbf{S}}}}(\omega^{\prime},t_i; k) =1,$$

we have that for every \(i=1,2,\ldots,\) \(j \in {\mathcal{J}},\) and \(k\in \tilde{{\mathcal{K}}},\)

$$ \frac{1}{n_{{\hat{\mathbf{S}}}}(\omega^{\prime},t_i; k)} \sum\limits_{\buildrel{ \tau \in \{1,\ldots,t_i\}}\over{{\rm s.t.\ }{\hat{\mathbf{S}}}(\omega^{\prime},\tau)={\mathbf{S}}^{(k)}}} {\mathbf{Q}}^{{\mathbf{E}}(\omega^{\prime},\tau)}(\omega^{\prime},\tau) {\mathbf{E}}^j (\omega^{\prime}, \tau) \in \hbox{co}({\mathcal{T}}_k). $$
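The membership claim above is the standard fact that an empirical average of points drawn from a finite set is a convex combination of that set's elements, with the empirical frequencies as convex weights. A toy numeric sketch (the set and the sample sequence below are hypothetical):

```python
# Average of picks from a finite set T_k lies in co(T_k): the convex
# weights are the empirical frequencies of each element of T_k.
T_k = [(0, 0), (1, 0), (0, 1), (1, 1)]     # hypothetical finite set of 2-D rate vectors
samples = [T_k[i % 4] for i in range(10)]  # any sequence of picks from T_k

n = len(samples)
avg = tuple(sum(p[d] for p in samples) / n for d in range(2))

weights = [sum(1 for p in samples if p == q) / n for q in T_k]
assert all(w >= 0 for w in weights)
assert abs(sum(weights) - 1) < 1e-12       # valid convex weights

recombined = tuple(sum(w * q[d] for w, q in zip(weights, T_k)) for d in range(2))
assert all(abs(a - b) < 1e-12 for a, b in zip(avg, recombined))
```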

Since \(\tilde{{\mathcal{K}}}\) is a finite set, and since for every k the set \(\hbox{co}({\mathcal{T}}_k)\) is compact, there exist a subsequence \(\{t_{i_\ell} \}_{\ell=1}^{\infty}\) and vectors \({\mathbf{f}}_k^j\) such that

$$ \lim_{\ell \rightarrow \infty} \left\{\frac{1}{n_{{\hat{\mathbf{S}}}}(\omega^{\prime},t_{i_\ell}; k)} \sum\limits_{\buildrel{ \tau \in \{1,\ldots,t_{i_\ell}\}}\over{{\rm s.t.\ }{\hat{\mathbf{S}}}(\omega^{\prime},\tau)= {\mathbf{S}}^{(k)}}} {\mathbf{Q}}^{{\mathbf{E}}(\omega^{\prime},\tau)}(\omega^{\prime},\tau) {\mathbf{E}}^j (\omega^{\prime}, \tau)\right\} ={\mathbf{f}}_k^j, $$
(54)

for all \(j \in {\mathcal{J}}, k \in \tilde{{\mathcal{K}}}.\) Hence from Eqs. 46, 53 and 54 we obtain

$$ \varvec{\lambda}^j=-{\mathbf{R}}^j \sum_{k \in \tilde{{\mathcal{K}}}} p_{{\hat{\mathbf{S}}}}(k) {\mathbf{f}}_k^j, \quad \forall j \in {\mathcal{J}}. $$
(55)

Finally, by letting the corresponding \(L \times 1\) vector \({\mathbf{f}}_k^j\) be the zero vector whenever \(k \in {\mathcal{K}}\setminus \tilde{{\mathcal{K}}},\) we conclude that

$$ \varvec{\lambda}^j= -{\mathbf{R}}^j \sum_{k \in {\mathcal{K}}} p_{{\hat{\mathbf{S}}}}(k) {\mathbf{f}}_k^j, \quad \forall j \in {\mathcal{J}}. $$
(56)

Clearly, \({\mathbf{f}}_k^j \in {\mathbb{R}}_{+}^{L}\) for every \(k \in {\mathcal{K}}\) and \(j \in {\mathcal{J}}.\) To complete the proof we need to show that \(\sum_{j=1}^{J}{\mathbf{f}}_{k}^{j} \in \hbox{co}(\tilde{{\mathcal{Q}}}_{k})\) for every \(k \in {\mathcal{K}}.\) We consider two cases.

  1.

    \(k \in {\mathcal{K}} \setminus \tilde{{\mathcal{K}}}:\) For every \(k \in {\mathcal{K}} \setminus \tilde{{\mathcal{K}}},\) the vector \({\mathbf{f}}_k^j\) is the zero vector, and hence

    $$ \sum_{j=1}^{J}{\mathbf{f}}_{k}^{j} \in \hbox{co}(\tilde{{\mathcal{Q}}}_k), $$
    (57)

    since \({\mathbf{0}} \in {\mathcal{T}}_k\) for every \(k \in {\mathcal{K}}.\)

  2.

    \(k \in \tilde{{\mathcal{K}}}:\) From Eq. 54, and since \({\mathbf{E}} (\omega^{\prime}, \tau)= \sum_{j=1}^{J} {\mathbf{E}}^j(\omega^{\prime}, \tau),\) for all \(k \in \tilde{{\mathcal{K}}}\) we have

    $$\begin{aligned} \sum_{j=1}^{J} {\mathbf{f}}_k^j &=\lim_{i \rightarrow \infty} \left\{\sum\limits_{\buildrel{\tau \in \{1,\ldots,t_i\}} \over{{\rm s.t.\ }{\hat{\mathbf{S}}}(\omega^{\prime},\tau)= {\mathbf{S}}^{(k)}}} \frac{1}{n_{{\hat{\mathbf{S}}}}(\omega^{\prime},t_i; k)} {\mathbf{Q}}^{{\mathbf{E}}(\omega^{\prime},\tau)}(\omega^{\prime},\tau) {\mathbf{E}} (\omega^{\prime}, \tau)\right\}\\ &=\lim_{i \rightarrow \infty} \left\{\frac{1}{n_{{\hat{\mathbf{S}}}}(\omega^{\prime},t_i; k)} \sum_{{\mathbf{c}} \in {\mathcal{T}}_k}\sum_{{\mathbf{Q}} \in {\mathcal{Q}}}\sum\limits_{\buildrel{\tau \in \{1,\ldots,t_i\}}\over{{\rm s.t.\ }{\hat{\mathbf{S}}}(\omega^{\prime},\tau)= {\mathbf{S}}^{(k)},}_{\buildrel{{\mathbf{E}}(\omega^{\prime}, \tau)= {\mathbf{c}},}\over{{\mathbf{Q}}^{{\mathbf{c}}}(\omega^{\prime}, \tau) = {\mathbf{Q}}}}} {\mathbf{Q}}\;{\mathbf{c}} \right\}\\ &=\lim_{i \rightarrow \infty} \left \{\sum_{{\mathbf{c}} \in {\mathcal{T}}_k}\sum_{{\mathbf{Q}} \in {\mathcal{Q}}} \frac{n_{{\hat{\mathbf{S}}}{\mathbf{E}}{\mathbf{Q}}}(\omega^{\prime},t_i; k,{\mathbf{c}}, {\mathbf{Q}})}{n_{{\hat{\mathbf{S}}}}(\omega^{\prime},t_i; k)} {\mathbf{Q}} \; {\mathbf{c}} \right \} \\ &=\lim_{i \rightarrow \infty} \left \{\sum_{{\mathbf{c}} \in {\mathcal{T}}_k}\sum_{{\mathbf{Q}} \in {\mathcal{Q}}} \frac{n_{{\hat{\mathbf{S}}}{\mathbf{E}}{\mathbf{Q}}}(\omega^{\prime},t_i; k,{\mathbf{c}}, {\mathbf{Q}})}{t_i} \frac{t_i}{n_{{\hat{\mathbf{S}}}}(\omega^{\prime},t_i; k)} {\mathbf{Q}} \; {\mathbf{c}} \right \}. \end{aligned}$$
    (58)

Since each of the terms involved in the sum is non-negative, and since the outer limit exists, each of the product terms in the limit is bounded. Further, since \(\frac{n_{{\hat{\mathbf{S}}}}(\omega^{\prime},t_i; k)}{t_i}\) converges to a non-zero value, we may extract a convergent subsequence (again denoted \(\{t_i\}_{i=1}^{\infty}\)) along which \(\lim_{i \rightarrow \infty} \left\{\frac{n_{{\hat{\mathbf{S}}}{\mathbf{E}}{\mathbf{Q}}}(\omega^{\prime},t_i; k,{\mathbf{c}}, {\mathbf{Q}})}{t_{i}}\right\}\) exists, and therefore

$$ \sum_{j=1}^{J} {\mathbf{f}}_k^j=\sum_{{\mathbf{c}} \in {\mathcal{T}}_k} \sum_{{\mathbf{Q}} \in {\mathcal{Q}}} \lim_{i \rightarrow \infty} \left\{\frac{n_{{\hat{\mathbf{S}}}{\mathbf{E}}{\mathbf{Q}}}(\omega^{\prime},t_i; k,{\mathbf{c}}, {\mathbf{Q}})}{t_{i}}\right\} \frac{1}{p_{{\hat{\mathbf{S}}}}(k)}\; {\mathbf{Q}}\;{\mathbf{c}}. $$
(59)

Note also that \(\lim_{i \rightarrow \infty} \frac{n_{{\hat{\mathbf{S}}}{\mathbf{E}}}(\omega^{\prime}, t_i; k,{\mathbf{c}})}{t_i}\) exists and can be written as a finite sum of existing limits as

$$ \lim_{i \rightarrow \infty} \frac{n_{{\hat{\mathbf{S}}}{\mathbf{E}}}(\omega^{\prime}, t_i; k,{\mathbf{c}})}{t_i} = \lim_{i \rightarrow \infty} \sum_{{\mathbf{Q}} \in {\mathcal{Q}}} \frac{n_{{\hat{\mathbf{S}}}{\mathbf{E}}{\mathbf{Q}}}(\omega^{\prime}, t_i; k,{\mathbf{c}},{\mathbf{Q}})}{t_i} = \sum_{{\mathbf{Q}} \in {\mathcal{Q}}} \lim_{i \rightarrow \infty} \frac{n_{{\hat{\mathbf{S}}}{\mathbf{E}}{\mathbf{Q}}}(\omega^{\prime}, t_i; k,{\mathbf{c}},{\mathbf{Q}})}{t_i}, $$
(60)

where we made use of the fact that each limit \(\lim_{i \rightarrow \infty} \frac{n_{{\hat{\mathbf{S}}}{\mathbf{E}}{\mathbf{Q}}}(\omega^{\prime}, t_i; k,{\mathbf{c}},{\mathbf{Q}})}{t_i}\) exists. As discussed in Sect. 4, for all \({\mathbf{c}} \in {\mathcal{T}}_k,\) the quantity \(n_{{\hat{\mathbf{S}}}{\mathbf{E}}}(\omega^{\prime}, t_i; k,{\mathbf{c}})\) is non-zero for all sufficiently large i. Hence, we can write

$$ \lim_{i \rightarrow \infty} \left\{\frac{n_{{\hat{\mathbf{S}}}{\mathbf{E}}{\mathbf{Q}}}(\omega^{\prime},t_i; k,{\mathbf{c}}, {\mathbf{Q}})}{t_{i}} \right\}=\lim_{i \rightarrow \infty} \left\{\frac{n_{{\hat{\mathbf{S}}}{\mathbf{E}}{\mathbf{Q}}}(\omega^{\prime},t_i; k,{\mathbf{c}}, {\mathbf{Q}})}{n_{{\hat{\mathbf{S}}}{\mathbf{E}}}(\omega^{\prime}, t_i; k,{\mathbf{c}})}\frac{n_{{\hat{\mathbf{S}}}{\mathbf{E}}}(\omega^{\prime}, t_i; k,{\mathbf{c}})}{n_{{\hat{\mathbf{S}}}}(\omega^{\prime},t_i; k)}\frac{n_{{\hat{\mathbf{S}}}}(\omega^{\prime}, t_i; k)}{t_{i}} \right\}. $$
(61)
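Equation 61 is simply the chain rule for empirical counts: because the three events are nested (a slot counted in \(n_{{\hat{\mathbf{S}}}{\mathbf{E}}{\mathbf{Q}}}\) is also counted in \(n_{{\hat{\mathbf{S}}}{\mathbf{E}}},\) and one counted in \(n_{{\hat{\mathbf{S}}}{\mathbf{E}}}\) also in \(n_{{\hat{\mathbf{S}}}}\)), the joint frequency telescopes into a product of conditional frequencies. A quick simulated check (event probabilities and names below are illustrative only):

```python
import random

# Nested slot counts: Q-slots within E-slots within S-slots. The joint
# frequency n_SEQ/t factors exactly as (n_SEQ/n_SE)*(n_SE/n_S)*(n_S/t).
random.seed(1)
t = 5000
n_S = n_SE = n_SEQ = 0
for _ in range(t):
    S = random.random() < 0.5        # slot has estimated state S^(k)
    E = S and random.random() < 0.4  # ...and activation vector c was chosen
    Q = E and random.random() < 0.7  # ...and success matrix Q was realized
    n_S += S
    n_SE += E
    n_SEQ += Q

lhs = n_SEQ / t
rhs = (n_SEQ / n_SE) * (n_SE / n_S) * (n_S / t)
assert abs(lhs - rhs) < 1e-12        # exact algebraic identity, up to rounding
```

Each of the three factors has its own limit (Eqs. 47, 62 and 46, respectively), which is precisely what Eq. 63 records.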

It follows from Eqs. 46 and 60 that

$$ \lim_{i \rightarrow \infty} \frac{n_{{\hat{\mathbf{S}}}{\mathbf{E}}}(\omega^{\prime},t_i; k,{\mathbf{c}})}{n_{{\hat{\mathbf{S}}}}(\omega^{\prime}, t_i; k)} = \frac{\mathop{\lim}\limits_{i \rightarrow \infty} \frac{n_{{\hat{\mathbf{S}}}{\mathbf{E}}}(\omega^{\prime},t_i; k,{\mathbf{c}})}{t_{i}}}{\mathop{\lim}\limits_{i \rightarrow \infty} \frac{n_{{\hat{\mathbf{S}}}}(\omega^{\prime}, t_i; k)}{t_i}} $$

exists. Let this limit be equal to

$$ \gamma_k^{{\mathbf{c}}}:=\lim_{i \rightarrow \infty} \frac{n_{{\hat{\mathbf{S}}}{\mathbf{E}}}(\omega^{\prime},t_i; k,{\mathbf{c}})}{n_{{\hat{\mathbf{S}}}}(\omega^{\prime}, t_i; k)}. $$
(62)

From Eqs. 46, 47 and 62 it follows that each of the individual limits on the right-hand side of Eq. 61 exists. Hence, the limit in Eq. 61 can be written as

$$ \lim_{i \rightarrow \infty} \left\{\frac{n_{{\hat{\mathbf{S}}}{\mathbf{E}}{\mathbf{Q}}}(\omega^{\prime},t_i; k,{\mathbf{c}}, {\mathbf{Q}})}{t_{i}} \right\}= P[{\mathbf{Q}}^{{\mathbf{c}}}(t)= {\mathbf{Q}}|{\hat{\mathbf{S}}}(t) = {\mathbf{S}}^{(k)}] \;\gamma_k^{{\mathbf{c}}}\;p_{{\hat{\mathbf{S}}}}(k). $$
(63)

Substituting Eq. 63 into Eq. 59, we get

$$ \begin{aligned} \sum_{j=1}^{J} {\mathbf{f}}_k^j=&\sum_{{\mathbf{c}} \in {\mathcal{T}}_k} \sum_{{\mathbf{Q}} \in {\mathcal{Q}}} \gamma_k^{{\mathbf{c}}} \; P[{\mathbf{Q}}^{{\mathbf{c}}}(t)= {\mathbf{Q}}|{\hat{\mathbf{S}}}(t) ={\mathbf{S}}^{(k)}] \; {\mathbf{Q}}\; {\mathbf{c}} \\ =&\sum_{{\mathbf{c}} \in {\mathcal{T}}_k} \gamma_k^{{\mathbf{c}}} {\tilde{\mathbf{Q}}}_k^{{\mathbf{c}}} {\mathbf{c}}, \end{aligned} $$
(64)

where the last equality in Eq. 64 follows from Eq. 10. Consequently, it follows that

$$ \sum_{j=1}^{J} {\mathbf{f}}_k^{j} \in \hbox{co}(\tilde{{\mathcal{Q}}}_k), $$

and the proof is complete.   \(\square\)

Pantelidou, A., Ephremides, A. & Tits, A.L. A cross-layer approach for stable throughput maximization under channel state uncertainty. Wireless Netw 15, 555–569 (2009). https://doi.org/10.1007/s11276-007-0089-7
