Article

Fundamental Limitations in Energy Detection for Spectrum Sensing

1 Nanfang College, Sun Yat-Sen University, Guangzhou 510900, China
2 College of Computer Science and Software Engineering, Shenzhen University, Shenzhen 518060, China
3 Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada
4 School of Computer Science and Engineering, Kyungpook National University, Daegu 41566, Korea
* Author to whom correspondence should be addressed.
J. Sens. Actuator Netw. 2018, 7(3), 25; https://doi.org/10.3390/jsan7030025
Submission received: 15 May 2018 / Revised: 27 June 2018 / Accepted: 27 June 2018 / Published: 28 June 2018

Abstract: A key enabler for Cognitive Radio (CR) is spectrum sensing, which is physically implemented by sensor and actuator networks, typically using the popular energy detection method. The threshold of the binary hypothesis test for energy detection is generally determined using the principle of constant false alarm rate (CFAR) or constant detection rate (CDR). The CDR principle guarantees a designated low level of interference to the CR primary users, but is subject to low spectrum usability for the secondary users within a given sensing latency. On the other hand, the CFAR principle ensures the secondary users' spectrum utilization at a designated high level, but may lead to a high level of interference to the primary users. This paper introduces a novel framework of energy detection for CR spectrum sensing, aiming at a graceful compromise between the two reported principles. The proposed framework constrains the sum of the false alarm probability $P_{fa}$ from CFAR and the missed detection probability $(1 - P_d)$ from CDR to remain below a predetermined confidence level. Optimization formulations for determining the key parameters of the proposed framework are developed and analyzed. We identify two fundamental limitations that appear in spectrum sensing, which define the relationship among the sample data size for detection, the detection time, and the signal-to-noise ratio (SNR). We argue that the proposed framework of energy detection offers practical guidance for choosing the detection time and sample rate on specific channels so as to achieve better efficiency and less interference.

1. Introduction

Cognitive Radio (CR) was first introduced as a promising candidate for dealing with spectrum scarcity in future wireless communications [1]. Under-utilized frequency bands originally allocated to licensed users (i.e., primary users) are freed and become accessible to non-licensed users (i.e., secondary users) equipped with CR in an opportunistic manner, so as to maximize spectrum utilization while minimizing interference to the primary users. Despite its obvious advantages, CR technology faces a great challenge in the detection of spectrum holes through spectrum sensing. This is because secondary users generally have very limited knowledge about the whole spectrum, which may leave the spectrum sensing results far from accurate. Existing spectrum sensing methods in the literature include matched filtering, waveform-based sensing [2], cyclostationary-based sensing [3,4], eigenvalue-based methods [5,6], energy detection [7,8,9,10,11,12,13], etc. Among these, energy detection is the most popular owing to its simplicity.
The energy detection method [7,8,9,10,11,12,13] for spectrum sensing measures the average energy of the total received signal during a period of time and compares it with a properly assigned threshold to decide the presence or absence of users. Typically, energy detection is formulated as a binary hypothesis test with a null Hypothesis $H_0$ for the absence of users and an alternative Hypothesis $H_1$ for their presence. The threshold is typically determined based on two standard principles: constant false alarm rate (CFAR) and constant detection rate (CDR) [14,15]. With the emphasis on promoting the usage of spectrum holes, a threshold by CFAR is derived by keeping the probability of false alarm under $H_0$ less than a given confidence level $\alpha$, while, with the emphasis on less interference to users, a threshold by CDR is derived by keeping the probability of missed detection under $H_1$ less than a given confidence level $\alpha$. Each criterion ensures that the error probability of one hypothesis stays at a low level while ignoring the error probability of the other hypothesis. Therefore, under some extreme circumstances, one error probability may be large although the other one is small. It is the purpose of this paper to develop a new criterion that simultaneously keeps the two kinds of error detection probabilities at a low level. To the authors' knowledge, this is the first effort to develop this kind of criterion in energy detection for spectrum sensing.
A simple way to keep the two kinds of error detection probabilities simultaneously small is to restrict the sum of the two probabilities to be less than a confidence level. To describe this in more detail, let u be a test statistic for the binary hypothesis test. For a given small $\alpha \in (0,1)$, say $\alpha = 0.05$, the threshold by the CFAR principle is the smallest $\lambda$ such that
$P[u > \lambda \mid H_0] \le \alpha,$
while the threshold by the CDR principle is the largest $\lambda$ such that
$P[u \le \lambda \mid H_1] \le \alpha.$
The new criterion selects $\lambda$ based on the sum of the two probabilities:
$P[u > \lambda \mid H_0] + P[u \le \lambda \mid H_1] \le \alpha,$
which ensures that the two probabilities are simultaneously smaller than the confidence level $\alpha$. Denote the false alarm probability $P_{fa}(\lambda) = P[u > \lambda \mid H_0]$ for CFAR and the missed detection probability $1 - P_d(\lambda)$ for CDR, with $P_d(\lambda) = P[u > \lambda \mid H_1]$. The summation principle above then becomes
$P_{fa}(\lambda) + (1 - P_d(\lambda)) \le \alpha. \quad (1)$
This is, however, not a well-posed formulation, since the inequality in Equation (1) may have many solutions or no solution in $\lambda$ for a given noise variance $\sigma_n^2$, signal-to-noise ratio (SNR), and data size M.
In this paper, to derive a unique threshold $\lambda$ from Equation (1), two kinds of optimization problems are introduced: (i) finding the minimum data size M with $\sigma_n^2$ and SNR given; and (ii) finding the lowest SNR with $\sigma_n^2$ and M given. Under the first setting, we show that, for a small SNR, the data size M must be larger than a critical value, denoted by $M_{\min}$, to guarantee the existence of a threshold $\lambda$ satisfying the inequality in Equation (1) under a given confidence level $\alpha$. We also derive an asymptotic formula, i.e., Equation (61), for the minimum data size $M_{\min}$ as the SNR goes to zero. Under the second setting, we show that, for a given data size M, the SNR must be greater than a minimum SNR to ensure the existence of a threshold $\lambda$ satisfying the inequality in Equation (1) under a given confidence level $\alpha$. The asymptotic formula for the minimum SNR, i.e., Equation (62), is obtained as $M \to \infty$. Theoretical analysis and simulations are conducted for the two optimization settings, and the obtained asymptotic formulas are verified.
The main contributions of this paper are as follows. (i) A new principle for selecting the threshold of energy detection is proposed, in which the sum of the two kinds of error detection probabilities is kept below a given confidence level; to derive a unique threshold under this constraint, two kinds of optimization frameworks are proposed. (ii) The possible optimal selection of the thresholds under the two proposed optimization frameworks is analyzed in Propositions 1 and 3, irrespective of the constraints. (iii) Lower and upper bounds on the solutions of the two optimization frameworks are established in Theorems 1–4 for the exact distribution and the approximate distribution of the test statistic, respectively. Two asymptotic formulae under the corresponding limit processes for the two optimization problems are also derived to describe the fundamental limitations in using energy detection for spectrum sensing.
The fundamental limitations in energy detection found in this paper based on the constraint in Equation (1) are different from the SNR wall introduced in [16], which is a limitation regarding the robustness of detectors. It is discovered in this paper that, even when the noise variance remains constant, limitations still exist regarding the tradeoff between efficiency and noninterference. For example, when the channel detection time is 2 s [17] and the sample rate of a channel [18] is one sample every 16 $\mu$s (which yields the data size $M = 2/0.000016 = 125{,}000$), by the asymptotic Equation (62), the minimum SNR is approximately 0.0158 (i.e., $-18.0124$ dB) under a confidence level $\alpha = 0.05$. In other words, it is impossible to detect any signal with SNR lower than $-18.0124$ dB under such a setting with a confidence level $\alpha = 0.05$. The analysis and understanding of these limitations not only enable a wise choice of channels in CR spectrum sensing, but also help policymaking in the determination of detection settings, such as detection time and sample rate, for a specific channel under certain requirements on efficiency and interference at a confidence level. These issues are critical in the design of a CR spectrum sensing system, as they have a fundamental impact on the resultant system performance.
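As a quick sanity check of the numbers quoted above, the asymptotic form of Equation (62), derived later in Section 5, can be evaluated directly. The short Python snippet below is ours and not part of the original study; it merely reproduces the quoted figures under the stated settings.

```python
# Sanity check (ours) of the example above using the asymptotic Equation (62):
# SNR_min ~ 2*Q^{-1}(alpha/2) / (sqrt(M/2) - Q^{-1}(alpha/2)) as M -> infinity.
import math
from scipy.stats import norm           # norm.isf(p) = Q^{-1}(p)

alpha = 0.05
M = 2 / 0.000016                       # detection time 2 s, one sample every 16 us -> 125,000 samples
q = norm.isf(alpha / 2)                # Q^{-1}(0.025) ~= 1.96
snr_min = 2 * q / (math.sqrt(M / 2) - q)
print(snr_min, 10 * math.log10(snr_min))   # ~0.0158, i.e., about -18.01 dB
```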
The rest of this paper is organized as follows. The model setting and the CFAR and CDR principles for energy detection are introduced in Section 2, where the thresholds are derived by assuming that all signals are Gaussian. The principle of compromise between CFAR and CDR is introduced in Section 3, where the two optimization formulations are presented and theoretically analyzed. Numerical experiments are conducted in Section 4 to examine the behavior of the solutions of the proposed optimization problems. In Section 5, the fundamental limitations in energy detection are demonstrated, and the asymptotic orders of the critical values are derived via theoretical analysis. Finally, concluding remarks are given in Section 6.

2. Model Setting and Thresholds by CFAR and CDR

A block diagram of typical energy detection for spectrum sensing is shown in Figure 1. The input band-pass filter (BPF), with bandwidth W centered at $f_c$, removes the out-of-band signals. Note that W and $f_c$ must be known to the secondary user so that it can perform spectrum sensing for the corresponding channels. After the signal is digitized by an analog-to-digital converter (ADC), a simple square-and-average device is used to estimate the received signal energy. Assume that the input signal to the energy detector is real. The estimated energy, u, is then compared with a threshold, $\lambda$, to decide whether a signal is present ($H_1$) or not ($H_0$).
Spectrum sensing is to determine whether a licensed band is currently used by its primary user. This can be formulated as a binary hypothesis testing problem [19,20]:
$x(k) = \begin{cases} n(k), & H_0 \ (\text{vacant}), \\ s(k) + n(k), & H_1 \ (\text{occupied}), \end{cases} \quad (2)$
where $s(k)$, $n(k)$, and $x(k)$ represent the primary user's signal, the noise, and the received signal, respectively. The noise is assumed to be an iid Gaussian random process with zero mean and variance $\sigma_n^2$, whereas the signal is also assumed to be an iid Gaussian random process with zero mean and variance $\sigma_s^2$. The signal-to-noise ratio is defined as the ratio of the signal variance to the noise variance:
$\mathrm{SNR} = \sigma_s^2/\sigma_n^2. \quad (3)$
The test statistic generated by the energy detector, as shown in Figure 1, is
$u = \frac{1}{M}\sum_{k=1}^{M} x_k^2. \quad (4)$
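For concreteness, the following minimal simulation sketch (ours, in Python/NumPy, with purely illustrative parameter values) generates the test statistic u under both hypotheses.

```python
# Minimal sketch (ours): simulate the test statistic u = (1/M) * sum(x_k^2)
# under H0 (noise only) and H1 (Gaussian signal plus noise).
import numpy as np

rng = np.random.default_rng(0)
M, sigma_n2, snr = 1000, 1.0, 0.1           # sample size, noise variance, SNR (assumed values)
sigma_s2 = snr * sigma_n2                   # signal variance, since SNR = sigma_s^2 / sigma_n^2

noise = rng.normal(0.0, np.sqrt(sigma_n2), M)
signal = rng.normal(0.0, np.sqrt(sigma_s2), M)

u_H0 = np.mean(noise ** 2)                  # energy estimate under H0 (vacant)
u_H1 = np.mean((signal + noise) ** 2)       # energy estimate under H1 (occupied)
print(u_H0, u_H1)                           # u_H0 ~ sigma_n^2, u_H1 ~ (1 + SNR) * sigma_n^2
```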
The threshold is typically determined based on two standard principles: constant false alarm rate (CFAR) and constant detection rate (CDR) [14,15]. CDR guarantees a designated low level of interference to the primary users, which nonetheless results in low spectrum usability for the secondary users given a fixed sensing time. On the other hand, CFAR protects the secondary users' spectrum utilization at a designated high level, which may lead to a high level of interference to the primary users. Each of the CFAR and CDR principles can therefore keep only one of the two error probabilities at a low level within a limited sensing time, namely the false alarm probability $P_{fa}$ for CFAR and the missed detection probability $(1 - P_d)$ for CDR, respectively.
Under Hypotheses $H_0$ and $H_1$, the test statistic u is a random variable whose probability density function (PDF) is chi-square distributed. Let us denote a chi-square distributed random variable X with M degrees of freedom as $X \sim \chi_M^2$, and recall its PDF:
$f_\chi(x, M) = \begin{cases} \frac{1}{2^{M/2}\,\Gamma(M/2)}\, x^{M/2-1} e^{-x/2}, & x > 0, \\ 0, & \text{otherwise}, \end{cases} \quad (5)$
where $\Gamma(\cdot)$ denotes the Gamma function, given in Equation (A2) in the Appendix.
Clearly, $Mu/\sigma_n^2 \sim \chi_M^2$ under Hypothesis $H_0$, and $Mu/\sigma_t^2 \sim \chi_M^2$ under $H_1$ with $\sigma_t^2 = (1 + \mathrm{SNR})\,\sigma_n^2$. Thus, the PDF of the test statistic u is
$f_u(x) = \begin{cases} \frac{M}{\sigma_n^2}\, f_\chi\!\left(\frac{Mx}{\sigma_n^2}, M\right), & \text{under } H_0; \\ \frac{M}{\sigma_t^2}\, f_\chi\!\left(\frac{Mx}{\sigma_t^2}, M\right), & \text{under } H_1. \end{cases} \quad (6)$
Observing that $\mathbb{E}u = \sigma_n^2$ and $\mathrm{var}(u) = 2\sigma_n^4/M$ under $H_0$, by the central limit theorem [21] the test statistic u asymptotically obeys a Gaussian distribution with mean $\sigma_n^2$ and variance $2\sigma_n^4/M$. A similar distribution can be derived under $H_1$. Therefore, when M is sufficiently large, we can approximate the PDF of u by a Gaussian distribution:
$\tilde{f}_u(x) = \begin{cases} \mathcal{N}(\sigma_n^2,\ 2\sigma_n^4/M), & \text{under } H_0; \\ \mathcal{N}(\sigma_t^2,\ 2\sigma_t^4/M), & \text{under } H_1. \end{cases} \quad (7)$
For a given threshold $\lambda$, the probability of false alarm is given by
$P_{fa}(\lambda) = \mathrm{prob}[u > \lambda \mid H_0] = \Gamma\!\left(\frac{M}{2},\ \frac{M\lambda}{2\sigma_n^2}\right), \quad (8)$
where $\Gamma(a, x)$ is the (regularized) upper incomplete Gamma function in Equation (A3) in the Appendix. The approximate form of $P_{fa}$ corresponding to the distribution in Equation (7) for large M is
$\tilde{P}_{fa}(\lambda) = Q\!\left(\frac{\lambda - \sigma_n^2}{\sigma_n^2/\sqrt{M/2}}\right), \quad (9)$
where $Q(\cdot)$ is defined in Equation (A1) in the Appendix.
In practice, if a high reuse probability of the unused spectrum is required, the probability of false alarm is fixed to a small value (e.g., $P_{fa} = 0.05$) while the detection probability is maximized as much as possible. This is referred to as the constant false alarm rate (CFAR) principle [19,22]. Under the CFAR principle, the probability of false alarm ($P_{fa}$) is predetermined, and the threshold $\lambda_{fa}$ can be set accordingly by
$\lambda_{fa} = \frac{2\sigma_n^2}{M}\,\Gamma^{-1}\!\left(M/2,\ P_{fa}\right), \quad (10)$
where $\Gamma^{-1}(a, x)$ is the inverse function of $\Gamma(a, x)$. For the approximate case corresponding to the distribution in Equation (7) for large M, the threshold is
$\tilde{\lambda}_{fa} = \sigma_n^2\left(1 + \frac{Q^{-1}(P_{fa})}{\sqrt{M/2}}\right), \quad (11)$
where $Q^{-1}(x)$ is the inverse function of $Q(x)$.
Similarly, under Hypothesis $H_1$, for a given threshold $\lambda$, the probability of detection is given by
$P_d(\lambda) = \mathrm{prob}[u > \lambda \mid H_1] = \Gamma\!\left(\frac{M}{2},\ \frac{M\lambda}{2\sigma_t^2}\right), \quad (12)$
where $\Gamma(a, x)$ is the upper incomplete Gamma function. The approximate form of $P_d$ corresponding to the distribution in Equation (7) for large M is
$\tilde{P}_d(\lambda) = Q\!\left(\frac{\lambda - \sigma_t^2}{\sigma_t^2/\sqrt{M/2}}\right). \quad (13)$
In practice, if (nearly) interference-free operation must be guaranteed for the primary users, the probability of detection should be set high (e.g., $P_d = 0.95$) while the probability of false alarm is minimized as much as possible. This is called the constant detection rate (CDR) principle [19,22]. With this, the threshold under the CDR principle to achieve a target probability of detection is given by
$\lambda_d = \frac{2\sigma_n^2(1 + \mathrm{SNR})}{M}\,\Gamma^{-1}\!\left(M/2,\ P_d\right). \quad (14)$
The corresponding approximate case is
$\tilde{\lambda}_d = \sigma_n^2(1 + \mathrm{SNR})\left(1 + \frac{Q^{-1}(P_d)}{\sqrt{M/2}}\right). \quad (15)$
Due to the similarity of Equations (10) and (14), we can expect that the derivation of the threshold values for CFAR and CDR is similar. Thus, it is not surprising that some analytic results derived for CFAR-based detection can be applied to CDR-based detection with minor modifications, and vice versa (see, e.g., [19,22]).
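The four thresholds in Equations (10), (11), (14) and (15) map directly onto standard special functions. The sketch below is ours (with assumed parameter values): SciPy's gammainccinv is the inverse of the regularized upper incomplete Gamma function of Equation (A3), and norm.isf plays the role of $Q^{-1}$.

```python
# Minimal sketch (ours): CFAR and CDR thresholds, exact (Eqs. 10, 14) and
# Gaussian-approximate (Eqs. 11, 15), evaluated with SciPy.
import numpy as np
from scipy.special import gammainccinv    # inverse of the regularized upper incomplete gamma
from scipy.stats import norm              # norm.isf(p) = Q^{-1}(p)

M, sigma_n2, snr = 1000, 1.0, 0.1         # assumed values
P_fa, P_d = 0.05, 0.95

lam_fa = 2 * sigma_n2 / M * gammainccinv(M / 2, P_fa)                       # Eq. (10)
lam_fa_approx = sigma_n2 * (1 + norm.isf(P_fa) / np.sqrt(M / 2))            # Eq. (11)

lam_d = 2 * sigma_n2 * (1 + snr) / M * gammainccinv(M / 2, P_d)             # Eq. (14)
lam_d_approx = sigma_n2 * (1 + snr) * (1 + norm.isf(P_d) / np.sqrt(M / 2))  # Eq. (15)
print(lam_fa, lam_fa_approx, lam_d, lam_d_approx)
```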

3. Thresholds by New Principle

It is clear that the CFAR and CDR principles can guarantee a low $P_{fa}$ and a low $(1 - P_d)$, respectively. In practice, however, we may want both of them to be low. This motivates a new principle in which a threshold is determined so as to keep the sum of the two error probabilities at a designated low level. The problem of interest in this study is to find a threshold, for a given small $\alpha \in (0,1)$, say $\alpha = 0.05$, such that
$P_{fa}(\lambda) + (1 - P_d(\lambda)) \le \alpha, \quad (16)$
where $P_{fa}(\lambda)$ and $P_d(\lambda)$ are given by Equations (8) and (12), respectively. This is, however, not a well-posed formulation, since the inequality in Equation (16) may have many solutions or no solution in $\lambda$ for a given noise variance $\sigma_n^2$, SNR (or $\sigma_t^2$), and data size M. In this section, a suite of well-posed formulations realizing this idea is developed and analyzed, and relevant properties are established for reference. Specifically, the following two scenarios are considered:
(i)
Given $\sigma_n^2$ and SNR (or $\sigma_t^2$), the target is to find the minimum data size M and the corresponding threshold $\lambda$ satisfying the inequality in Equation (16). This results in the following nonlinear optimization problem:
$(\mathrm{NP}_1)\qquad \min\ M \qquad \text{s.t.}\qquad \Gamma\!\left(\frac{M}{2},\ \frac{M\lambda}{2\sigma_n^2}\right) + 1 - \Gamma\!\left(\frac{M}{2},\ \frac{M\lambda}{2\sigma_t^2}\right) \le \alpha.$
(ii)
With $\sigma_n^2$ and M fixed, the target is to find the minimum SNR and the corresponding threshold $\lambda$ satisfying the inequality in Equation (16). This also results in a nonlinear optimization problem:
$(\mathrm{NP}_2)\qquad \min\ \mathrm{SNR} \qquad \text{s.t.}\qquad \Gamma\!\left(\frac{M}{2},\ \frac{M\lambda}{2\sigma_n^2}\right) + 1 - \Gamma\!\left(\frac{M}{2},\ \frac{M\lambda}{2\sigma_t^2}\right) \le \alpha.$
We find in Proposition 1 that the threshold can be unambiguously determined if $\sigma_n^2$ and SNR are given. Based on this theoretical result, the numerical algorithms for solving (NP$_1$) and (NP$_2$) can be significantly simplified.
Proposition 1.
In both nonlinear optimization problems (NP$_1$) and (NP$_2$), if solvable, the solution for $\lambda$ must be
$\lambda_0 = (1+s)\,\sigma_n^2\,\frac{\ln(1+s)}{s}, \quad (17)$
where s denotes the SNR for brevity.
Proof. 
Let us consider the case of (NP$_1$) only, since (NP$_2$) has a very similar form. Consider the Lagrange function of (NP$_1$) with respect to a multiplier $\mu$:
$L(M, \lambda, s_1, \mu) = M + \mu\left[\Gamma\!\left(\frac{M}{2},\ \frac{M\lambda}{2\sigma_n^2}\right) + 1 - \Gamma\!\left(\frac{M}{2},\ \frac{M\lambda}{2\sigma_t^2}\right) + s_1 - \alpha\right],$
where $s_1 \ge 0$ is a slack variable. Differentiating $L(M, \lambda, s_1, \mu)$ with respect to $\lambda$, we have
$\frac{\partial L(M, \lambda, s_1, \mu)}{\partial \lambda} = \frac{\mu}{\Gamma(\frac{M}{2})}\left[\left(\frac{M\lambda}{2\sigma_t^2}\right)^{\frac{M}{2}-1} e^{-\frac{M\lambda}{2\sigma_t^2}}\,\frac{M}{2\sigma_t^2} - \left(\frac{M\lambda}{2\sigma_n^2}\right)^{\frac{M}{2}-1} e^{-\frac{M\lambda}{2\sigma_n^2}}\,\frac{M}{2\sigma_n^2}\right].$
Setting $\frac{\partial L(M, \lambda, s_1, \mu)}{\partial \lambda} = 0$, we obtain the simplified equivalent equation
$\frac{1}{\sigma_t^2}\, e^{-\lambda/\sigma_t^2} = \frac{1}{\sigma_n^2}\, e^{-\lambda/\sigma_n^2},$
which further means
$e^{\left(\frac{1}{\sigma_n^2} - \frac{1}{\sigma_t^2}\right)\lambda} = \frac{\sigma_t^2}{\sigma_n^2} = 1 + s.$
Solving this equation, we derive Equation (17). ☐
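As a small numerical companion (ours, not the authors' code), the sketch below substitutes the candidate threshold $\lambda_0$ of Equation (17) into the left-hand side of the constraint of (NP$_1$) and evaluates it for given M and SNR.

```python
# Minimal sketch (ours): candidate threshold of Equation (17) and the constraint of (NP1).
import numpy as np
from scipy.special import gammaincc       # regularized upper incomplete gamma, Eq. (A3)

def lambda0(sigma_n2, s):
    """Equation (17): candidate optimal threshold."""
    return (1 + s) * sigma_n2 * np.log(1 + s) / s

def np1_constraint(M, s, lam, sigma_n2=1.0):
    """P_fa(lam) + (1 - P_d(lam)) for the exact chi-square model."""
    sigma_t2 = (1 + s) * sigma_n2
    return (gammaincc(M / 2, M * lam / (2 * sigma_n2))
            + 1 - gammaincc(M / 2, M * lam / (2 * sigma_t2)))

s = 0.1
lam = lambda0(1.0, s)
print(lam, np1_constraint(5000, s, lam))  # the constraint value decreases as M grows (Proposition 2)
```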
The following proposition shows that the two nonlinear optimization problems (NP 1 ) and (NP 2 ) are well-posed, i.e., the solutions for (NP 1 ) and (NP 2 ) uniquely exist.
Proposition 2.
Both nonlinear optimization problems (NP$_1$) and (NP$_2$) are well-posed: (i) for any given $\alpha \in (0,1)$, $\sigma_n^2$ and $\mathrm{SNR} > 0$, (NP$_1$) has one and only one solution pair $(M, \lambda)$; and (ii) for any given $\alpha \in (0,1)$, $\sigma_n^2$ and $M > 0$, (NP$_2$) has one and only one solution pair $(\mathrm{SNR}, \lambda)$.
Proof. 
Let the SNR be denoted by s, and let the left-hand side of the constraint of (NP$_1$) be expressed by the function
$\Gamma(M, s, \lambda) = \Gamma\!\left(\frac{M}{2},\ \frac{M\lambda}{2\sigma_n^2}\right) + 1 - \Gamma\!\left(\frac{M}{2},\ \frac{M\lambda}{2\sigma_t^2}\right). \quad (19)$
By the definition of SNR, it follows that $\sigma_t^2 = (1+s)\sigma_n^2$. Thus, we have
$\frac{M\lambda}{2\sigma_n^2} - \frac{M\lambda}{2\sigma_t^2} = \frac{M\lambda\,\sigma_s^2}{2\sigma_t^2\sigma_n^2} = \frac{M s \lambda}{2\sigma_t^2}. \quad (20)$
Note that the threshold $\lambda$ should be located between the two energies $\sigma_n^2$ and $\sigma_t^2$, i.e., $\lambda \in (\sigma_n^2, \sigma_t^2)$. For small SNR (s) and small M, the two quantities $\frac{M\lambda}{2\sigma_n^2}$ and $\frac{M\lambda}{2\sigma_t^2}$ are very close. Hence, the value of $\Gamma(M, s, \lambda)$ given by Equation (19) is close to 1, which means the constraint $\Gamma(M, s, \lambda) \le \alpha$ is likely violated.
Next, we show that the solution of (NP$_1$) is unique if it exists. Clearly, it is sufficient to show that $\frac{\partial \Gamma(M,s,\lambda)}{\partial M} < 0$. For this, by noticing that
$\frac{\partial}{\partial M}\Gamma\!\left(\frac{M}{2},\ \frac{M\lambda}{2\sigma_n^2}\right) = -\frac{1}{\Gamma(\frac{M}{2})}\frac{d\Gamma(\frac{M}{2})}{dM}\,\Gamma\!\left(\frac{M}{2},\ \frac{M\lambda}{2\sigma_n^2}\right) - \frac{1}{\Gamma(\frac{M}{2})}\left(\frac{M\lambda}{2\sigma_n^2}\right)^{\frac{M}{2}-1} e^{-\frac{M\lambda}{2\sigma_n^2}}\,\frac{\lambda}{2\sigma_n^2} + \frac{1}{\Gamma(\frac{M}{2})}\int_{\frac{M\lambda}{2\sigma_n^2}}^{\infty} e^{-t}\, t^{\frac{M}{2}-1}\,\frac{\ln t}{2}\, dt \quad (21)$
and the analogous expression for $\frac{\partial}{\partial M}\Gamma\!\left(\frac{M}{2},\ \frac{M\lambda}{2\sigma_t^2}\right)$, obtained from Equation (21) by replacing $\sigma_n$ with $\sigma_t$, we have
$\left.\frac{\partial \Gamma(M,s,\lambda)}{\partial M}\right|_{\lambda=\lambda_0} = \frac{1}{\Gamma(\frac{M}{2})}\int_{\frac{M\lambda_0}{2\sigma_t^2}}^{\frac{M\lambda_0}{2\sigma_n^2}} e^{-t}\, t^{\frac{M}{2}-1}\left[\frac{1}{\Gamma(\frac{M}{2})}\frac{d\Gamma(\frac{M}{2})}{dM} - \frac{\ln t}{2}\right] dt \stackrel{\Delta}{=} \frac{D(M,s)}{\Gamma(\frac{M}{2})}, \quad (22)$
where $\lambda_0 = (1+s)\,\sigma_n^2\,\frac{\ln(1+s)}{s}$ is given by Equation (17).
Denote
$a \stackrel{\Delta}{=} \frac{M\lambda_0}{2\sigma_t^2} = \frac{M\ln(1+s)}{2s}, \qquad b \stackrel{\Delta}{=} \frac{M\lambda_0}{2\sigma_n^2} = \frac{M(1+s)\ln(1+s)}{2s}.$
Clearly, $b = a(1+s)$. Noticing that $a \to \frac{M}{2}$ and $b \to \frac{M}{2}$ as $s \to 0$, we have $\lim_{s\to 0} D(M,s) = 0$. Similarly, since $a \to 0$ and $b \to \infty$ as $s \to \infty$, it follows that $\lim_{s\to\infty} D(M,s) = 0$. Thus, to determine the sign of Equation (22), we analyze the derivative of $D(M,s)$ with respect to s below:
$\frac{\partial D(M,s)}{\partial s} = e^{-a} a^{\frac{M}{2}-1}\left[ e^{-as}(1+s)^{\frac{M}{2}-1}\left(C_M - \tfrac{1}{2}\ln b\right) b_s - \left(C_M - \tfrac{1}{2}\ln a\right) a_s \right] \stackrel{\Delta}{=} e^{-a} a^{\frac{M}{2}-1} D_1(M,s), \quad (23)$
where $C_M = \frac{1}{\Gamma(M/2)}\frac{d\Gamma(M/2)}{dM}$ and $a_s$, $b_s$ denote the derivatives of a and b with respect to s. Noting that
$e^{-as} = e^{-\frac{M\ln(1+s)}{2}} = (1+s)^{-\frac{M}{2}}, \qquad b_s = (1+s)a_s + a,$
we can rewrite the essential term of Equation (23) as
$D_1(M,s) = \frac{1}{1+s}\left(C_M - \tfrac{1}{2}\ln b\right)b_s - \left(C_M - \tfrac{1}{2}\ln a\right)a_s = a_s\cdot\tfrac{1}{2}\ln\frac{a}{b} + \frac{a}{1+s}\left(C_M - \tfrac{1}{2}\ln b\right) = \frac{M}{2s(1+s)}\left[\frac{\ln^2(1+s)}{2s} + \left(C_M - \tfrac{1}{2}\ln\tfrac{M}{2} - \tfrac{1}{2}\right)\ln(1+s) + \delta(s)\right],$
where $\delta(s) = \frac{1}{2}\ln(1+s)\,\ln\frac{s}{\ln(1+s)}$. Recalling the inequality
$\ln x - \frac{1}{x} < \frac{\Gamma'(x)}{\Gamma(x)} < \ln x - \frac{1}{2x}$
for $x > 1$, we find
$-\frac{2}{M} < C_M - \frac{1}{2}\ln\frac{M}{2} < -\frac{1}{M}.$
Thus, the sign of $D_1(M,s)$ changes from negative to positive as s moves from 0 to $\infty$, and so does that of $\frac{\partial D(M,s)}{\partial s}$. Together with the two limits of $D(M,s)$, we conclude that $D(M,s)$ decreases from $D(M,0) = 0$ to a negative minimum and then increases back to $D(M,\infty) = 0$. Thus, $\left.\frac{\partial\Gamma(M,s,\lambda)}{\partial M}\right|_{\lambda=\lambda_0} < 0$ for $s \in (0,\infty)$, which yields the uniqueness of the solution.
Before deducing the existence of the solution of (NP$_1$) for sufficiently large M, let us recall some basic facts about the Gamma distribution. For a Gamma distribution with density function $\frac{1}{\Gamma(k)} x^{k-1} e^{-x}$, the expectation and standard deviation are $k$ and $\sqrt{k}$, respectively. Let $\xi$ be a Gamma distributed random variable with $k = M/2$. Denote
$\frac{M\lambda}{2\sigma_n^2} - \frac{M}{2} \stackrel{\Delta}{=} \beta_1\frac{M}{2}, \qquad \frac{M}{2} - \frac{M\lambda}{2\sigma_t^2} \stackrel{\Delta}{=} \beta_2\frac{M}{2},$
where $\beta_1 > 0$ and $\beta_2 > 0$ by the fact that $\lambda \in (\sigma_n^2, \sigma_t^2)$. By Equation (A3) (in the Appendix) and Chebyshev's inequality, we have
$\Gamma\!\left(\frac{M}{2},\ \frac{M\lambda}{2\sigma_n^2}\right) = P\!\left[\xi - \mathbb{E}\xi > \beta_1\frac{M}{2}\right] \le P\!\left[|\xi - \mathbb{E}\xi| > \beta_1\frac{M}{2}\right] \le \frac{\mathrm{Var}(\xi)}{\beta_1^2 (M/2)^2} = \frac{2}{\beta_1^2 M} \xrightarrow[M\to\infty]{} 0. \quad (25)$
Similarly,
$1 - \Gamma\!\left(\frac{M}{2},\ \frac{M\lambda}{2\sigma_t^2}\right) = P\!\left[\mathbb{E}\xi - \xi > \beta_2\frac{M}{2}\right] \le P\!\left[|\xi - \mathbb{E}\xi| > \beta_2\frac{M}{2}\right] \le \frac{\mathrm{Var}(\xi)}{\beta_2^2 (M/2)^2} = \frac{2}{\beta_2^2 M} \xrightarrow[M\to\infty]{} 0. \quad (26)$
Hence, by Equation (19), $\Gamma(M, s, \lambda) \to 0$ as $M \to \infty$. This means the constraint of (NP$_1$) can be satisfied if M is sufficiently large, which establishes Assertion (i).
For Assertion (ii), by Equation (20), we also know that $\frac{M\lambda}{2\sigma_n^2}$ and $\frac{M\lambda}{2\sigma_t^2}$ are very close for small SNR (s). Thus, the constraint $\Gamma(M, s, \lambda) \le \alpha$ is likely violated.
Differentiating $\Gamma(M, s, \lambda)$ with respect to s, we have
$\frac{\partial \Gamma(M, s, \lambda)}{\partial s} = -\frac{M\lambda}{2\,\Gamma(\frac{M}{2})}\left(\frac{M\lambda}{2\sigma_t^2}\right)^{\frac{M}{2}-1} e^{-\frac{M\lambda}{2\sigma_t^2}}\,\frac{\sigma_n^2}{\sigma_t^4} < 0.$
Thus, $\Gamma(M, s, \lambda)$ decreases as s increases, which means the solution is unique. By noticing that
$\frac{M\lambda_0}{2\sigma_t^2} = \frac{M\ln(1+s)}{2s} \xrightarrow[s\to\infty]{} 0, \qquad \frac{M\lambda_0}{2\sigma_n^2} = \frac{M(1+s)\ln(1+s)}{2s} \xrightarrow[s\to\infty]{} \infty,$
we conclude that $\Gamma(M, s, \lambda_0) \to 0$ as $s \to \infty$. This proves the existence of the solution, and thus Assertion (ii) follows. ☐
Similarly, by replacing $P_{fa}$ and $P_d$ in Equation (16) with the approximate probabilities $\tilde{P}_{fa}$ and $\tilde{P}_d$ given by Equations (9) and (13), respectively, the corresponding case based on the approximate distribution in Equation (7) is obtained. Precisely, for a given small $\alpha > 0$, a threshold $\lambda$ is sought such that
$\tilde{P}_{fa}(\lambda) + (1 - \tilde{P}_d(\lambda)) \le \alpha, \quad (27)$
where $\tilde{P}_{fa}(\lambda)$ and $\tilde{P}_d(\lambda)$ are given by Equations (9) and (13), respectively. To achieve this, two nonlinear optimization problems are formulated. Firstly, with $\sigma_n^2$ and SNR (or $\sigma_t^2$) given, we seek the minimum data size M and the corresponding threshold $\lambda$ satisfying the inequality in Equation (27):
$(\widetilde{\mathrm{NP}}_1)\qquad \min\ M \qquad \text{s.t.}\qquad Q\!\left(\frac{\lambda - \sigma_n^2}{\sigma_n^2/\sqrt{M/2}}\right) + 1 - Q\!\left(\frac{\lambda - \sigma_t^2}{\sigma_t^2/\sqrt{M/2}}\right) \le \alpha.$
Secondly, with $\sigma_n^2$ and M fixed, we seek the minimum SNR and the corresponding threshold $\lambda$ satisfying the inequality in Equation (27). This is also a nonlinear optimization problem:
$(\widetilde{\mathrm{NP}}_2)\qquad \min\ \mathrm{SNR} \qquad \text{s.t.}\qquad Q\!\left(\frac{\lambda - \sigma_n^2}{\sigma_n^2/\sqrt{M/2}}\right) + 1 - Q\!\left(\frac{\lambda - \sigma_t^2}{\sigma_t^2/\sqrt{M/2}}\right) \le \alpha.$
It is also shown below that the candidate threshold can be determined explicitly when $\sigma_n^2$, SNR, and the data size M are given. By this theoretical result, the numerical algorithms for solving ($\widetilde{\mathrm{NP}}_1$) and ($\widetilde{\mathrm{NP}}_2$) can be largely simplified.
Proposition 3.
In both nonlinear optimization problems ($\widetilde{\mathrm{NP}}_1$) and ($\widetilde{\mathrm{NP}}_2$), if solvable, the solution for $\lambda$ must be
$\tilde{\lambda}_0 = \frac{\sigma_t^2\sigma_n^2(\sigma_t^2 - \sigma_n^2) + \sqrt{\Delta}}{\sigma_t^4 - \sigma_n^4}, \quad (28)$
where
$\Delta = \sigma_t^4\sigma_n^4(\sigma_t^2 - \sigma_n^2)\left[\sigma_t^2 - \sigma_n^2 + \delta(\sigma_t^2 + \sigma_n^2)\right] \quad (29)$
with $\delta = \frac{4}{M}\ln(1 + \mathrm{SNR})$. A simplified form is
$\tilde{\lambda}_0 = \frac{\sigma_n^2(1+s)\left(1 + \sqrt{1 + \delta + \frac{2\delta}{s}}\right)}{2+s}, \quad (30)$
where s denotes the SNR. To ensure $\tilde{\lambda}_0 \in (\sigma_n^2, \sigma_t^2)$, it is required that
$M > \frac{4\ln(1+s)}{s^2}. \quad (31)$
Proof. 
In the following, only the case of ($\widetilde{\mathrm{NP}}_1$) is proven, since the proof for ($\widetilde{\mathrm{NP}}_2$) is similar. We first construct the Lagrange function for ($\widetilde{\mathrm{NP}}_1$) with respect to a multiplier $\mu$:
$\tilde{L}(M, \lambda, s_2, \mu) = M + \mu\left[Q\!\left(\frac{m(\lambda - \sigma_n^2)}{\sigma_n^2}\right) + 1 - Q\!\left(\frac{m(\lambda - \sigma_t^2)}{\sigma_t^2}\right) + s_2 - \alpha\right],$
where $m = \sqrt{M/2}$ and $s_2 \ge 0$ is a slack variable. Differentiating $\tilde{L}(M, \lambda, s_2, \mu)$ with respect to $\lambda$, we have
$\frac{\partial \tilde{L}(M, \lambda, s_2, \mu)}{\partial \lambda} = \frac{\mu}{\sqrt{2\pi}}\left[\frac{m}{\sigma_t^2}\, e^{-\frac{m^2(\lambda - \sigma_t^2)^2}{2\sigma_t^4}} - \frac{m}{\sigma_n^2}\, e^{-\frac{m^2(\lambda - \sigma_n^2)^2}{2\sigma_n^4}}\right].$
Setting $\frac{\partial \tilde{L}(M, \lambda, s_2, \mu)}{\partial \lambda} = 0$, we obtain the simplified equivalent equation
$\frac{(\lambda - \sigma_n^2)^2}{\sigma_n^4} - \frac{(\lambda - \sigma_t^2)^2}{\sigma_t^4} = \frac{2}{m^2}\ln\frac{\sigma_t^2}{\sigma_n^2} = \frac{2}{m^2}\ln(1 + \mathrm{SNR}) \stackrel{\Delta}{=} \delta,$
which further means
$(\sigma_t^4 - \sigma_n^4)\lambda^2 - 2\sigma_t^2\sigma_n^2(\sigma_t^2 - \sigma_n^2)\lambda - \delta\sigma_t^4\sigma_n^4 = 0.$
Thus,
$\lambda = \frac{\sigma_t^2\sigma_n^2(\sigma_t^2 - \sigma_n^2) \pm \sqrt{\Delta}}{\sigma_t^4 - \sigma_n^4},$
where $\Delta$ is given by Equation (29). Noting that $\lambda \in (\sigma_n^2, \sigma_t^2)$, i.e., the threshold should be located between the two expectations of the distributions under Hypotheses $H_0$ and $H_1$, and that
$\sqrt{\Delta} \ge \sigma_t^2\sigma_n^2(\sigma_t^2 - \sigma_n^2),$
we derive Equation (28). Equation (30) follows by substituting $\sigma_t^2 = (1+s)\sigma_n^2$ into Equation (28).
Clearly, $\tilde{\lambda}_0 > \sigma_n^2$. For $\tilde{\lambda}_0 < \sigma_t^2$, it is sufficient to require that
$1 + \sqrt{1 + \delta + \frac{2\delta}{s}} < 2 + s,$
which is equivalent to Equation (31). ☐
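A companion sketch (ours) for the Gaussian-approximation case: it evaluates $\tilde{\lambda}_0$ from Equation (30) and checks the admissibility condition of Equation (31).

```python
# Minimal sketch (ours): candidate threshold of Equation (30) and condition (31).
import numpy as np

def lambda0_tilde(sigma_n2, s, M):
    """Equation (30); valid when M > 4*ln(1+s)/s**2, see Equation (31)."""
    delta = 4.0 / M * np.log(1 + s)
    return sigma_n2 * (1 + s) * (1 + np.sqrt(1 + delta + 2 * delta / s)) / (2 + s)

s, M = 0.1, 5000
assert M > 4 * np.log(1 + s) / s ** 2    # Equation (31): keeps lambda0_tilde inside (sigma_n^2, sigma_t^2)
print(lambda0_tilde(1.0, s, M))
```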
Next, we show that the two nonlinear optimization problems ($\widetilde{\mathrm{NP}}_1$) and ($\widetilde{\mathrm{NP}}_2$) are also well-posed.
Proposition 4.
Both nonlinear optimization problems ($\widetilde{\mathrm{NP}}_1$) and ($\widetilde{\mathrm{NP}}_2$) are well-posed: (i) for any given $\alpha \in (0,1)$, $\sigma_n^2$ and $\mathrm{SNR} > 0$, ($\widetilde{\mathrm{NP}}_1$) has one and only one solution pair $(M, \lambda)$; and (ii) for any given $\alpha \in (0,1)$, $\sigma_n^2$ and $M > 0$, ($\widetilde{\mathrm{NP}}_2$) has one and only one solution pair $(\mathrm{SNR}, \lambda)$.
Proof. 
For convenience, let the SNR be denoted by s, and let the left-hand side of the constraint of ($\widetilde{\mathrm{NP}}_1$) be expressed by the function
$Q(M, s, \lambda) = Q\!\left(\frac{m(\lambda - \sigma_n^2)}{\sigma_n^2}\right) + 1 - Q\!\left(\frac{m(\lambda - \sigma_t^2)}{\sigma_t^2}\right), \quad (33)$
where $m = \sqrt{M/2}$. For small SNR, $\sigma_t^2$ and $\sigma_n^2$ are very close. Thus, for small M, it is impossible to satisfy $Q(M, s, \lambda) \le \alpha$, since $\frac{m(\lambda - \sigma_n^2)}{\sigma_n^2}$ and $\frac{m(\lambda - \sigma_t^2)}{\sigma_t^2}$ are too close.
(i) Differentiating $Q(M, s, \lambda)$ with respect to m, we have
$\frac{\partial Q(M, s, \lambda)}{\partial m} = \frac{1}{\sqrt{2\pi}}\left[\frac{\lambda - \sigma_t^2}{\sigma_t^2}\, e^{-\frac{m^2(\lambda - \sigma_t^2)^2}{2\sigma_t^4}} - \frac{\lambda - \sigma_n^2}{\sigma_n^2}\, e^{-\frac{m^2(\lambda - \sigma_n^2)^2}{2\sigma_n^4}}\right] < 0,$
by noticing that $\lambda \in (\sigma_n^2, \sigma_t^2)$. This establishes the uniqueness of the solution. We now derive the existence of the solution. Denote by $\tilde{\lambda}_0$ the candidate threshold given by Equation (28). Clearly,
$\tilde{\lambda}_0 \xrightarrow[m\to\infty]{} \frac{2\sigma_t^2\sigma_n^2}{\sigma_t^2 + \sigma_n^2} = \frac{2(1+s)\sigma_n^2}{2+s} \stackrel{\Delta}{=} \lambda^*; \quad (34)$
thus, for sufficiently large m, we have
$\tilde{\lambda}_0 - \sigma_n^2 \ge \gamma_1 |\lambda^* - \sigma_n^2|, \qquad \sigma_t^2 - \tilde{\lambda}_0 \ge \gamma_2 |\lambda^* - \sigma_t^2|,$
where $\gamma_1 > 0$ and $\gamma_2 > 0$. Thus, we further have
$Q\!\left(\frac{m(\tilde{\lambda}_0 - \sigma_n^2)}{\sigma_n^2}\right) \le Q\!\left(\frac{m\gamma_1|\lambda^* - \sigma_n^2|}{\sigma_n^2}\right) \xrightarrow[m\to\infty]{} 0, \qquad 1 - Q\!\left(\frac{m(\tilde{\lambda}_0 - \sigma_t^2)}{\sigma_t^2}\right) \le Q\!\left(\frac{m\gamma_2|\lambda^* - \sigma_t^2|}{\sigma_t^2}\right) \xrightarrow[m\to\infty]{} 0.$
These mean that $Q(M, s, \tilde{\lambda}_0) \to 0$ as $m \to \infty$, which implies the existence of a solution.
(ii) Differentiating $Q(M, s, \lambda)$ with respect to s, we have
$\frac{\partial Q(M, s, \lambda)}{\partial s} = -\frac{m\lambda\,\sigma_n^2}{\sigma_t^4\sqrt{2\pi}}\, e^{-\frac{m^2(\lambda - \sigma_t^2)^2}{2\sigma_t^4}} < 0,$
which establishes the uniqueness of the solution in this case. Consider $\tilde{\lambda}_0$ given by Equation (28) as $s \to \infty$; clearly, $\tilde{\lambda}_0 \to \infty$ with order $\sqrt{\ln s}$, while $\sigma_t^2 \to \infty$ with order s. Thus, we have
$\tilde{\lambda}_0 - \sigma_n^2 \xrightarrow[s\to\infty]{} \infty, \qquad \sigma_t^2 - \tilde{\lambda}_0 \xrightarrow[s\to\infty]{} \infty,$
which means that $Q(M, s, \tilde{\lambda}_0) \to 0$ as $s \to \infty$, guaranteeing the existence of a solution in this case. ☐
Based on the above propositions, the proposed principle for threshold selection is well defined provided that the data size M is sufficiently large for a given SNR, or that the SNR is sufficiently large for a given M. Hence, mathematically, some fundamental limitations are identified regarding the data size M when the SNR is given, as well as regarding the SNR when M is given.

4. Numerical Experiments

Two kinds of simulations are conducted in this section. One is to observe the behavior of the solutions of the above four nonlinear optimization problems, and the other is to investigate the accuracy of the two asymptotic equations: Equation (61) for both (NP$_1$) and ($\widetilde{\mathrm{NP}}_1$) as the SNR tends to zero, and Equation (62) for both (NP$_2$) and ($\widetilde{\mathrm{NP}}_2$) as M tends to infinity. The two formulae are established in Theorems 1–4.

4.1. Intuitive Sense of the Solutions for Relevant Optimization Problems

Specifically, we would like to check the evolution of the solutions M of (NP$_1$) and ($\widetilde{\mathrm{NP}}_1$) as the SNR tends to 0, and the evolution of the solutions SNR of (NP$_2$) and ($\widetilde{\mathrm{NP}}_2$) as M tends to $\infty$.
The nonlinear optimization problems (NP$_1$), (NP$_2$) and ($\widetilde{\mathrm{NP}}_1$), ($\widetilde{\mathrm{NP}}_2$) can be solved directly by the Matlab function fmincon, which finds the minimum of an objective function under nonlinear constraints. However, solving them directly with fmincon, say for (NP$_1$), turns out to be quite unstable, most likely due to the nonlinearity in the constraint of (NP$_1$).
Proposition 1 significantly simplifies the solution process for (NP$_1$), and meanwhile stabilizes it, by substituting the candidate optimal threshold given by Equation (17) into the constraint of (NP$_1$); the same holds for (NP$_2$). For ($\widetilde{\mathrm{NP}}_1$) and ($\widetilde{\mathrm{NP}}_2$), Proposition 3 plays the same role.
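The sketch below (ours, in Python rather than the authors' Matlab) illustrates this simplified procedure for (NP$_1$): fix $\lambda = \lambda_0$ from Equation (17) and search for the smallest M satisfying the constraint by doubling followed by bisection, which is valid because the constraint decreases in M (Proposition 2).

```python
# Minimal sketch (ours): solve (NP1) by fixing lambda at lambda_0 of Equation (17)
# and searching for the smallest integer M that meets the constraint.
import numpy as np
from scipy.special import gammaincc            # regularized upper incomplete gamma

def np1_constraint(M, s, sigma_n2=1.0):
    lam = (1 + s) * sigma_n2 * np.log(1 + s) / s          # Equation (17)
    sigma_t2 = (1 + s) * sigma_n2
    return (gammaincc(M / 2, M * lam / (2 * sigma_n2))
            + 1 - gammaincc(M / 2, M * lam / (2 * sigma_t2)))

def solve_np1(s, alpha=0.05):
    hi = 2
    while np1_constraint(hi, s) > alpha:                  # doubling: constraint decreases in M
        hi *= 2
    lo = hi // 2
    while hi - lo > 1:                                    # bisection for the minimum M
        mid = (lo + hi) // 2
        lo, hi = (mid, hi) if np1_constraint(mid, s) > alpha else (lo, mid)
    return hi

print(solve_np1(0.1))   # minimum data size M for SNR = 0.1 and alpha = 0.05
```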
Example 1.
Let $\alpha = 0.05$ and $\sigma_n = 1$. We deal with (NP$_1$) by letting $\mathrm{SNR} = 1/k$ with $k = 1, 2, \ldots, 20$. The solutions for M and the corresponding $\lambda$ are shown in Figure 2a,b, respectively.
Example 2.
Let $\alpha = 0.05$ and $\sigma_n = 1$. We deal with (NP$_2$) by letting $M = 1000k$ with $k = 1, 2, \ldots, 20$. The solutions for SNR and $\lambda$ are shown in Figure 3a,b, respectively.
Example 3.
Let $\alpha = 0.05$ and $\sigma_n = 1$. We deal with ($\widetilde{\mathrm{NP}}_1$) by letting $\mathrm{SNR} = 1/k$ with $k = 1, 2, \ldots, 20$. Since the solutions for M and $\lambda$ are very close to their counterparts in Example 1, we only plot the differences between the two examples (the quantities of Example 3 minus those of Example 1), which are shown to be practically negligible in Figure 4a,b, respectively.
Example 4.
Let $\alpha = 0.05$ and $\sigma_n = 1$. We deal with ($\widetilde{\mathrm{NP}}_2$) by letting $M = 1000k$ with $k = 1, 2, \ldots, 20$. Since the solutions for SNR and $\lambda$ are very close to their counterparts in Example 2, we only plot the differences between the two examples (i.e., the quantities of Example 4 minus those of Example 2), which are shown to be practically negligible in Figure 5a,b, respectively.
In view of Propositions 1 and 3, it is expected that the thresholds $\lambda$ tend to $\sigma_n^2 = 1$ in Examples 1 and 3 as the SNR tends to 0. The thresholds in Examples 2 and 4 tend to $\sigma_n^2 = 1$ as $M \to \infty$ due to the fact that the minimum required SNR tends to 0 as $M \to \infty$.

4.2. Accuracy of the Two Asymptotical Equations (61) and (62)

To investigate the performance of the asymptotic Equation (61) for both (NP$_1$) and ($\widetilde{\mathrm{NP}}_1$) as the SNR tends to zero, and of Equation (62) for both (NP$_2$) and ($\widetilde{\mathrm{NP}}_2$) as M tends to infinity, we design two more simulation examples below.
Example 5.
Let $\alpha = 0.05$ and $\sigma_n = 1$. For both (NP$_1$) and ($\widetilde{\mathrm{NP}}_1$), let $\mathrm{SNR} = 10^{1 - 3(k-1)/19}$, $k = 1, 2, \ldots, 20$, corresponding to the range of 10 dB to $-20$ dB. Let M be defined by Equation (61) and the corresponding $\lambda$ be given by Equation (17) for (NP$_1$) and Equation (30) for ($\widetilde{\mathrm{NP}}_1$). The constraint summation probabilities of (NP$_1$) and ($\widetilde{\mathrm{NP}}_1$), marked by “-o” and “-x”, are plotted in Figure 6a. The bias between the solutions M from either (NP$_1$) or ($\widetilde{\mathrm{NP}}_1$) and the asymptotic Equation (61) is plotted in Figure 6b. It is clear from both figures that the asymptotic Equation (61) provides a quite accurate estimate of the solution for both (NP$_1$) and ($\widetilde{\mathrm{NP}}_1$); meanwhile, the required confidence level $\alpha = 0.05$ is also satisfied.
Example 6.
Let $\alpha = 0.05$ and $\sigma_n = 1$. For both (NP$_2$) and ($\widetilde{\mathrm{NP}}_2$), let $M = 10k$ with $k = 1, 2, \ldots, 20$. Let SNR be defined by Equation (62) and the corresponding $\lambda$ be given by Equation (17) for (NP$_2$) and Equation (30) for ($\widetilde{\mathrm{NP}}_2$). The constraint summation probabilities of (NP$_2$) and ($\widetilde{\mathrm{NP}}_2$), marked by “-o” and “-x”, are plotted in Figure 7a. The bias between the solutions SNR from either (NP$_2$) or ($\widetilde{\mathrm{NP}}_2$) and the asymptotic Equation (62) is plotted in Figure 7b. It is clear from both figures that the asymptotic Equation (62) provides a quite accurate estimate of the solution for both (NP$_2$) and ($\widetilde{\mathrm{NP}}_2$); meanwhile, the required confidence level $\alpha = 0.05$ is also satisfied.
From the simulation Examples 5 and 6, it is clear that the asymptotic Equations (61) and (62) provide quite accurate estimates of the solutions of the four relevant optimization problems while the constraint inequalities remain satisfied. The constraint inequality is tight for Equation (61) when the SNR is sufficiently small, and for Equation (62) when M is sufficiently large. The difference between the two optimization frameworks is insignificant. This shows the potential application value of the two Equations (61) and (62).

5. Fundamental Limits of Detection

Let us denote by $M_{\min}$ the solution of (NP$_1$) or ($\widetilde{\mathrm{NP}}_1$), and by $\mathrm{SNR}_{\min}$ the solution of (NP$_2$) or ($\widetilde{\mathrm{NP}}_2$), respectively. Obviously, for a fixed SNR, it is impossible to find a threshold $\lambda$ satisfying the inequality in Equation (16) or Equation (27) if the data size is smaller than $M_{\min}$. Equivalently, for a fixed data size M, it is not possible to find a threshold $\lambda$ satisfying the inequality in Equation (16) or Equation (27) if the SNR is smaller than $\mathrm{SNR}_{\min}$. Thus, there exist fundamental limitations in the effort of keeping the sum of the two error probabilities smaller than a designated confidence level. It is impossible to solve the four optimization problems introduced in Section 3 explicitly. In this section, we investigate the asymptotic behavior of the solutions of the four nonlinear optimization problems, i.e., we find the order of $M_{\min}$ as the SNR tends to 0, and the order of $\mathrm{SNR}_{\min}$ as M tends to $\infty$.
We first analyze ($\widetilde{\mathrm{NP}}_1$) and ($\widetilde{\mathrm{NP}}_2$), since the Q function is much easier to handle than the incomplete Gamma function; e.g., the Q function possesses the useful property $Q(-x) = 1 - Q(x)$. We then establish a relation between the incomplete Gamma function given by Equation (A3) (in the Appendix) and the Q function in Lemma 1 to facilitate the investigation of (NP$_1$) and (NP$_2$). Throughout this section, the SNR is abbreviated as s. Two functions $\psi(t)$ and $\phi(t)$ are said to be of equivalent order if
$\lim_{t \to t_0} \frac{\psi(t)}{\phi(t)} = 1,$
which is denoted below simply by $\psi(t) \sim \phi(t)$ as $t \to t_0$.
The solution of ($\widetilde{\mathrm{NP}}_1$) is discussed first as a starting point.
Theorem 1.
The solution of the nonlinear optimization problem ($\widetilde{\mathrm{NP}}_1$) with $\alpha \in (0, \frac{1}{2})$, denoted by $M_{\min}$, has bounds
$\frac{2(2+s)^2}{s^2}\, x_1^2 < M_{\min} < 1 + \frac{2(2+s)^2}{s^2}\, x_2^2,$
where $x_1 = Q^{-1}\!\left(\frac{\alpha + \alpha_1}{2}\right)$ and $x_2 = Q^{-1}\!\left(\frac{\alpha - \alpha_2}{2}\right)$ with $\alpha_1 = \frac{s+2}{\sqrt{\pi M_{\min}}}$ and $\alpha_2 = \frac{s+2}{\sqrt{\pi (M_{\min}-1)}}$. Additionally,
$M_{\min} \sim \frac{2(2+s)^2}{s^2}\left[Q^{-1}\!\left(\frac{\alpha}{2}\right)\right]^2 \quad (36)$
as $s \to 0$.
Proof. 
Recalling the notations $\tilde{\lambda}_0$ and $\lambda^*$ given by Equations (30) and (34), respectively, we have
$\sigma_n^2 < \lambda^* < \tilde{\lambda}_0 < \sigma_t^2$
for $M > \frac{4\ln(1+s)}{s^2}$. This clearly indicates $M_{\min} > \frac{4\ln(1+s)}{s^2}$; otherwise, the minimum point $\tilde{\lambda}_0$ of the function $Q(M, s, \lambda)$ given by Equation (33) does not belong to $(\sigma_n^2, \sigma_t^2)$, so the minimum of $Q(M, s, \lambda)$ over $(\sigma_n^2, \sigma_t^2)$ is attained at either $\lambda = \sigma_n^2$ or $\sigma_t^2$, which means $Q(M, s, \lambda) > \frac{1}{2} > \alpha$ over $(\sigma_n^2, \sigma_t^2)$, contradicting the definition of $M_{\min}$. Thus, $M_{\min} \to \infty$ as $s \to 0$.
Clearly, we have
$\tilde{\lambda}_0 - \lambda^* = \frac{(1+s)(\delta_1 - 1)}{2+s}\,\sigma_n^2,$
where $\delta_1 = \sqrt{1 + \delta + \frac{2\delta}{s}}$ and $\delta = \frac{4}{M}\ln(1+s)$.
Applying the Mean Value Theorem to $Q(M, s, \lambda)$, given by Equation (33), with respect to $\lambda$ at $\lambda = \tilde{\lambda}_0, \lambda^*$, we have
$Q(M, s, \tilde{\lambda}_0) = Q(M, s, \lambda^*) + \left.\frac{\partial Q(M, s, \lambda)}{\partial \lambda}\right|_{\lambda = \xi}(\tilde{\lambda}_0 - \lambda^*), \quad (38)$
where $\xi \in (\lambda^*, \tilde{\lambda}_0)$. Note that
$\left|\left.\frac{\partial Q(M, s, \lambda)}{\partial \lambda}\right|_{\lambda = \xi}\right| = \frac{m}{\sqrt{2\pi}}\left|\frac{1}{\sigma_t^2}\, e^{-\frac{m^2(\xi - \sigma_t^2)^2}{2\sigma_t^4}} - \frac{1}{\sigma_n^2}\, e^{-\frac{m^2(\xi - \sigma_n^2)^2}{2\sigma_n^4}}\right| < \frac{m}{\sqrt{2\pi}}\left(\frac{1}{\sigma_t^2} + \frac{1}{\sigma_n^2}\right) = \frac{m}{\sigma_n^2\sqrt{2\pi}}\cdot\frac{2+s}{1+s},$
and
$\delta_1 - 1 = \sqrt{1 + \delta + \frac{2\delta}{s}} - 1 < 1 + \frac{\delta}{2} + \frac{\delta}{s} - 1 = \frac{(s+2)\delta}{2s} = \frac{2(s+2)\ln(1+s)}{Ms} < \frac{2(s+2)}{M},$
so that, by Equation (38),
$\left|Q(M, s, \tilde{\lambda}_0) - Q(M, s, \lambda^*)\right| < \frac{m(\delta_1 - 1)}{\sqrt{2\pi}} < \frac{2m(s+2)}{M\sqrt{2\pi}} = \frac{s+2}{m\sqrt{2\pi}} \stackrel{\Delta}{=} \alpha_1(m, s). \quad (39)$
Clearly, $\alpha_1(m, s) \to 0$ as $m \to \infty$.
Observing that $Q(M, s, \lambda^*) = 2Q\!\left(\frac{ms}{2+s}\right)$, and recalling that $M_{\min}$ is the solution of ($\widetilde{\mathrm{NP}}_1$), by Equation (39) we know that
$2Q\!\left(\frac{m_0 s}{2+s}\right) - \alpha_1(m_0, s) < Q(M_{\min}, s, \tilde{\lambda}_0) \le \alpha, \quad (40)$
$2Q\!\left(\frac{m_1 s}{2+s}\right) + \alpha_1(m_1, s) > Q(M_{\min}-1, s, \tilde{\lambda}_0) > \alpha, \quad (41)$
where $m_0 = \sqrt{M_{\min}/2}$ and $m_1 = \sqrt{(M_{\min}-1)/2}$.
Solving the inequality in Equation (40), we derive a lower bound for $M_{\min}$:
$M_{\min} > \frac{2(2+s)^2}{s^2}\, x_1^2,$
where $x_1 = Q^{-1}\!\left(\frac{\alpha + \alpha_1(m_0, s)}{2}\right)$. Solving the inequality in Equation (41), we derive an upper bound for $M_{\min}$:
$M_{\min} < 1 + \frac{2(2+s)^2}{s^2}\, x_2^2,$
where $x_2 = Q^{-1}\!\left(\frac{\alpha - \alpha_1(m_1, s)}{2}\right)$.
Recalling that $M_{\min} \to \infty$ as $s \to 0$, as guaranteed at the beginning of the proof, Equation (36) follows immediately. ☐
Now, based on the facts revealed in the above proof, the consideration of ( NP ˜ 2 ) is much simplified.
Theorem 2.
The solution of the nonlinear optimization problem ($\widetilde{\mathrm{NP}}_2$) with $\alpha \in (0, \frac{1}{2})$, denoted by $\mathrm{SNR}_{\min}$, has the following bounds:
$\frac{2Q^{-1}\!\left(\frac{\alpha + \alpha_1}{2}\right)}{\sqrt{M/2} - Q^{-1}\!\left(\frac{\alpha + \alpha_1}{2}\right)} < \mathrm{SNR}_{\min} < \epsilon + \frac{2Q^{-1}\!\left(\frac{\alpha - \alpha_2}{2}\right)}{\sqrt{M/2} - Q^{-1}\!\left(\frac{\alpha - \alpha_2}{2}\right)}, \quad (42)$
where $\alpha_1 = \frac{\mathrm{SNR}_{\min} + 2}{\sqrt{\pi M}}$ and $\alpha_2 = \frac{\mathrm{SNR}_{\min} - \epsilon + 2}{\sqrt{\pi M}}$ with $\epsilon > 0$. Additionally,
$\mathrm{SNR}_{\min} \sim \frac{2Q^{-1}\!\left(\frac{\alpha}{2}\right)}{\sqrt{M/2} - Q^{-1}\!\left(\frac{\alpha}{2}\right)} \quad (43)$
as $M \to \infty$.
Proof. 
Recalling that $\mathrm{SNR}_{\min}$, denoted by $s_m$ for brevity, is the solution of ($\widetilde{\mathrm{NP}}_2$), by Equation (39) we have
$2Q\!\left(\frac{m s_m}{2+s_m}\right) - \alpha_1(m, s_m) < Q(M, s_m, \tilde{\lambda}_0) \le \alpha, \quad (44)$
$2Q\!\left(\frac{m(s_m - \epsilon)}{2 + (s_m - \epsilon)}\right) + \alpha_1(m, s_m - \epsilon) > Q(M, s_m - \epsilon, \tilde{\lambda}_0) > \alpha, \quad (45)$
where $\epsilon > 0$ and $\alpha_1(m, s)$ is given by Equation (39). Solving the two inequalities in Equations (44) and (45), we derive Equation (42).
Noting that $\alpha_1 \to 0$ and $\alpha_2 \to 0$ as $M \to \infty$, Equation (43) follows directly. ☐
As already mentioned, we establish a relation between the Q function and the incomplete Gamma function before considering (NP$_1$) and (NP$_2$).
Lemma 1.
Extend the incomplete Gamma function given by Equation (A3) (in the Appendix) by letting $\Gamma(k, x) = 1$ for $x < 0$. Then, for all x,
$\left|\Gamma\!\left(\frac{M}{2},\ \frac{M}{2} + x\sqrt{\frac{M}{2}}\right) - Q(x)\right| \le \frac{14C}{\sqrt{2M}}, \quad (46)$
where $C = 0.7056$.
Proof. 
Let us first recall a simple version of the Berry–Esseen inequality (see, e.g., page 670 of [23]): let $\xi_1, \xi_2, \ldots$ be iid random variables with $\mathbb{E}\xi_1 = \mu$, $\mathbb{E}\xi_1^2 = \sigma^2$, and $\mathbb{E}|\xi_1 - \mathbb{E}\xi_1|^3 = \rho < \infty$. In addition, let $S_n = \sum_{k=1}^n \xi_k$, let the distribution function of
$S_n^* = \frac{S_n - n\mu}{\sqrt{n(\sigma^2 - \mu^2)}}$
be denoted by $F_n(x)$, and let the standard normal distribution function be denoted by $\Phi(x)$. Then there exists a positive constant C such that, for all x and n,
$|F_n(x) - \Phi(x)| \le \frac{C\rho}{\sqrt{n}\,(\sigma^2 - \mu^2)^{3/2}}.$
The best currently known bound for C, namely 0.7056, was established by Shevtsova [24] in 2007.
For M iid standard normal random variables $\eta_k$, $k = 1, 2, \ldots, M$, we know that $\mathbb{E}\eta_k^2 = 1$, $\mathbb{E}\eta_k^4 = 3$, and $\mathbb{E}\eta_k^6 = 15$. On the other hand, by the definition of the chi-square distribution, $S_M = \sum_{k=1}^M \eta_k^2$ is chi-square distributed with M degrees of freedom, i.e., $S_M \sim \chi_M^2$. Applying the Berry–Esseen inequality to $\xi_k = \eta_k^2$, $k = 1, \ldots, M$, and noticing that $\mu = 1$, $\sigma^2 = 3$, and $\rho \le 28$ (by the fact that $|\eta_k^2 - 1| \le \eta_k^2 + 1$) for this case, we have
$\left|\Pr\!\left[\frac{S_M - M}{\sqrt{2M}} > x\right] - Q(x)\right| \le \frac{14C}{\sqrt{2M}}, \quad (47)$
where $C = 0.7056$.
Recalling the PDF of the chi-square distribution given by Equation (5), we have
$\Pr\!\left[\frac{S_M - M}{\sqrt{2M}} > x\right] = \Pr\!\left[S_M > M + x\sqrt{2M}\right] = \frac{\int_{M + x\sqrt{2M}}^{\infty} t^{\frac{M}{2}-1} e^{-\frac{t}{2}}\, dt}{2^{M/2}\,\Gamma(\frac{M}{2})} = \frac{\int_{\frac{M}{2} + x\sqrt{M/2}}^{\infty} u^{\frac{M}{2}-1} e^{-u}\, du}{\Gamma(\frac{M}{2})} = \Gamma\!\left(\frac{M}{2},\ \frac{M}{2} + x\sqrt{\frac{M}{2}}\right). \quad (48)$
Combining Equations (47) and (48), we derive Equation (46). ☐
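Lemma 1 is easy to check numerically. The sketch below (ours) compares the chi-square tail $\Gamma(M/2, M/2 + x\sqrt{M/2})$ with $Q(x)$ and with the bound $14C/\sqrt{2M}$; the clipping at zero implements the extension $\Gamma(k, x) = 1$ for $x < 0$.

```python
# Numerical check of Lemma 1 (ours): |Gamma(M/2, M/2 + x*sqrt(M/2)) - Q(x)| <= 14*C/sqrt(2*M).
import numpy as np
from scipy.special import gammaincc     # regularized upper incomplete gamma; gammaincc(k, 0) = 1
from scipy.stats import norm            # norm.sf(x) = Q(x)

C = 0.7056
for M in (10, 100, 1000, 10000):
    x = np.linspace(-3.0, 3.0, 121)
    arg = np.maximum(M / 2 + x * np.sqrt(M / 2), 0.0)     # extension: Gamma(k, x) = 1 for x < 0
    gap = np.abs(gammaincc(M / 2, arg) - norm.sf(x))
    print(M, gap.max(), 14 * C / np.sqrt(2 * M))          # the observed gap stays below the bound
```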
Now, we are in a position to analyze (NP$_1$).
Theorem 3.
The solution of the nonlinear optimization problem (NP$_1$) with $\alpha \in (0, 1)$, denoted by $M_{\min}$, has the following bounds:
$\frac{2(2+s)^2}{s^2}\, x_1^2 < M_{\min} < 1 + \frac{2(2+s)^2}{s^2}\, x_2^2, \quad (49)$
where $x_1$ and $x_2$ are given in the proof below. Additionally,
$M_{\min} \sim \frac{2(2+s)^2}{s^2}\left[Q^{-1}\!\left(\frac{\alpha}{2}\right)\right]^2 \quad (50)$
as $s \to 0$.
Proof. 
Recalling the functions $\Gamma(M, s, \lambda)$ and $Q(M, s, \lambda)$ given by Equations (19) and (33), respectively, by Equation (46) we have
$|\Gamma(M, s, \lambda) - Q(M, s, \lambda)| \le 14C\sqrt{\frac{2}{M}}, \quad (51)$
where $C = 0.7056$.
Recalling also $\lambda_0$ and $\lambda^*$ given by Equations (17) and (34), respectively, we have
$\sigma_n^2 < \lambda^* < \lambda_0 < \sigma_t^2$
by the fact that $\frac{2s}{2+s} < \ln(1+s)$ for $s > 0$; and by the inequality $\ln(1+s) < s - \frac{s^2}{2} + \frac{s^3}{3}$ for $s > 0$, we further have
$0 < \lambda_0 - \lambda^* = (1+s)\,\sigma_n^2\left[\frac{\ln(1+s)}{s} - \frac{2}{2+s}\right] < \frac{s^2(1+s)(1+2s)\,\sigma_n^2}{6(2+s)}. \quad (52)$
Similarly to the derivation of Equation (39), by Equation (52), we have
$\left|Q(M, s, \lambda_0) - Q(M, s, \lambda^*)\right| < \frac{m s^2(1+2s)}{6\sqrt{2\pi}}. \quad (53)$
Observing that $Q(M, s, \lambda^*) = 2Q\!\left(\frac{ms}{2+s}\right)$, and recalling that $M_{\min}$ is the solution of (NP$_1$), by Equations (51) and (53) we know that
$2Q\!\left(\frac{m_0 s}{2+s}\right) - \alpha_2(m_0, s) < \Gamma(M_{\min}, s, \lambda_0) \le \alpha, \quad (54)$
$2Q\!\left(\frac{m_1 s}{2+s}\right) + \alpha_2(m_1, s) > \Gamma(M_{\min}-1, s, \lambda_0) > \alpha, \quad (55)$
where $m_0 = \sqrt{M_{\min}/2}$, $m_1 = \sqrt{(M_{\min}-1)/2}$, and
$\alpha_2(m, s) = 14C\sqrt{\frac{2}{M}} + \frac{m s^2(1+2s)}{6\sqrt{2\pi}}. \quad (56)$
Solving the inequality in Equation (54), we derive a lower bound for $M_{\min}$:
$M_{\min} > \frac{2(2+s)^2}{s^2}\, x_1^2,$
where $x_1 = Q^{-1}\!\left(\frac{\alpha + \alpha_2(m_0, s)}{2}\right)$, while solving the inequality in Equation (55) yields an upper bound:
$M_{\min} < 1 + \frac{2(2+s)^2}{s^2}\, x_2^2,$
where $x_2 = Q^{-1}\!\left(\frac{\alpha - \alpha_2(m_1, s)}{2}\right)$.
It now suffices, for proving Equation (50), to show that $\alpha_2(m_0, s) \to 0$ and $\alpha_2(m_1, s) \to 0$ as $s \to 0$. For this, we first note that
$0 < \frac{M\lambda_0}{2\sigma_n^2} - \frac{M\lambda_0}{2\sigma_t^2} = \frac{M\ln(1+s)}{2} < \frac{Ms}{2} = m^2 s.$
If $m^2 s \to 0$ as $s \to 0$, then $\Gamma(M, s, \lambda) \to 1$ as $s \to 0$; hence $Ms$ cannot tend to 0. On the other hand, we have
$\frac{M\lambda_0}{2\sigma_n^2} - \frac{M}{2} = \frac{M}{2}\left[\frac{(1+s)\ln(1+s)}{s} - 1\right] > \frac{m^2 s}{2+s}, \qquad \frac{M}{2} - \frac{M\lambda_0}{2\sigma_t^2} = \frac{M}{2}\left[1 - \frac{\ln(1+s)}{s}\right] > \frac{m^2 s(3 - 2s)}{6}$
by the inequalities $\ln(1+s) > \frac{2s}{2+s}$ and $\ln(1+s) < s - \frac{s^2}{2} + \frac{s^3}{3}$ for $s > 0$. Thus, we know that $m_0 s = O(1)$ as $s \to 0$; otherwise, by Chebyshev's inequality, similarly to Equations (25) and (26), we would have $\Gamma(M_{\min}, s, \lambda) \to 0$ as $s \to 0$, which contradicts the fact that $\Gamma(M_{\min}, s, \lambda)$ should remain around the quantity $\alpha > 0$. Hence, the conclusions that $\alpha_2(m_0, s) \to 0$ and $\alpha_2(m_1, s) \to 0$ as $s \to 0$ follow directly. ☐
Based on the foundation laid in the above proof, the analysis of (NP$_2$) is simplified as follows.
Theorem 4.
The solution of the nonlinear optimization problem (NP$_2$) with $\alpha \in (0, 1)$, denoted by $\mathrm{SNR}_{\min}$, has the following bounds:
$\frac{2Q^{-1}\!\left(\frac{\alpha + \alpha_1}{2}\right)}{\sqrt{M/2} - Q^{-1}\!\left(\frac{\alpha + \alpha_1}{2}\right)} < \mathrm{SNR}_{\min} < \epsilon + \frac{2Q^{-1}\!\left(\frac{\alpha - \alpha_2}{2}\right)}{\sqrt{M/2} - Q^{-1}\!\left(\frac{\alpha - \alpha_2}{2}\right)}, \quad (57)$
where $\alpha_1$ and $\alpha_2$ are given in the proof below, with $\epsilon > 0$. Additionally,
$\mathrm{SNR}_{\min} \sim \frac{2Q^{-1}\!\left(\frac{\alpha}{2}\right)}{\sqrt{M/2} - Q^{-1}\!\left(\frac{\alpha}{2}\right)} \quad (58)$
as $M \to \infty$.
Proof. 
Recalling that $\mathrm{SNR}_{\min}$, denoted by $s_m$ for brevity, is the solution of (NP$_2$), and observing that $Q(M, s, \lambda^*) = 2Q\!\left(\frac{ms}{2+s}\right)$, by Equations (51) and (53) we have
$2Q\!\left(\frac{m s_m}{2+s_m}\right) - \alpha_2(m, s_m) < \Gamma(M, s_m, \lambda_0) \le \alpha, \quad (59)$
$2Q\!\left(\frac{m(s_m - \epsilon)}{2 + (s_m - \epsilon)}\right) + \alpha_2(m, s_m - \epsilon) > \Gamma(M, s_m - \epsilon, \lambda_0) > \alpha, \quad (60)$
where $\epsilon > 0$ and $\alpha_2(m, s)$ is defined by Equation (56). Solving the two inequalities in Equations (59) and (60), we derive Equation (57) with $\alpha_1 = \alpha_2(m, s_m)$ and $\alpha_2 = \alpha_2(m, s_m - \epsilon)$.
By the facts pointed out in the last part of the proof of Theorem 3, i.e., that $m^2 s$ is not an infinitesimal quantity (thus $s \to 0$ if $m \to \infty$) and that $ms$ is bounded, and recalling the definition in Equation (56), we have $\alpha_1 \to 0$ and $\alpha_2 \to 0$ as $M \to \infty$. Thus, Equation (58) follows directly. ☐
In summary, $M_{\min}$, the solution of either (NP$_1$) or ($\widetilde{\mathrm{NP}}_1$), has the asymptotic order
$M_{\min} \sim \frac{2(2+s)^2}{s^2}\left[Q^{-1}\!\left(\frac{\alpha}{2}\right)\right]^2 \quad (s \to 0); \quad (61)$
and $\mathrm{SNR}_{\min}$, the solution of either (NP$_2$) or ($\widetilde{\mathrm{NP}}_2$), has the asymptotic order
$\mathrm{SNR}_{\min} \sim \frac{2Q^{-1}\!\left(\frac{\alpha}{2}\right)}{\sqrt{M/2} - Q^{-1}\!\left(\frac{\alpha}{2}\right)} \quad (M \to \infty). \quad (62)$
If the notation “∼” is replaced by “=” in the above two formulas, the two resulting equations are equivalent, ignoring the differences between M and $M_{\min}$ and between SNR and $\mathrm{SNR}_{\min}$, respectively.
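The two asymptotic orders are easy to evaluate, and, as noted above, they invert each other once “∼” is read as “=”. The short sketch below (ours) makes this concrete.

```python
# Evaluate the asymptotic Equations (61) and (62) (ours) and check that they invert each other.
import math
from scipy.stats import norm            # norm.isf(p) = Q^{-1}(p)

def M_min_asym(s, alpha=0.05):
    """Equation (61): M_min ~ 2*(2+s)^2/s^2 * [Q^{-1}(alpha/2)]^2 as s -> 0."""
    return 2 * (2 + s) ** 2 / s ** 2 * norm.isf(alpha / 2) ** 2

def snr_min_asym(M, alpha=0.05):
    """Equation (62): SNR_min ~ 2*Q^{-1}(alpha/2) / (sqrt(M/2) - Q^{-1}(alpha/2)) as M -> infinity."""
    q = norm.isf(alpha / 2)
    return 2 * q / (math.sqrt(M / 2) - q)

s = 0.01
print(M_min_asym(s), snr_min_asym(M_min_asym(s)))   # the second value recovers s = 0.01
```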
By noticing that the candidate threshold $\tilde{\lambda}_0$ tends to $\lambda^*$ as $\mathrm{SNR} \to 0$ (or $M \to \infty$), and that $Q(M, s, \lambda^*) = 2Q\!\left(\frac{ms}{2+s}\right)$, we can propose another principle by replacing the constraint in Equation (27) in ($\widetilde{\mathrm{NP}}_1$) and ($\widetilde{\mathrm{NP}}_2$) with
$\tilde{P}_{fa}(\lambda^*) + (1 - \tilde{P}_d(\lambda^*)) \le \alpha,$
which is equivalent to
$2Q\!\left(\frac{ms}{2+s}\right) \le \alpha.$
Under this principle, the corresponding ($\widetilde{\mathrm{NP}}_1$) and ($\widetilde{\mathrm{NP}}_2$) problems can be solved explicitly. Such an idea of replacement also serves as the key to the proofs presented in this section.
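For instance, the minimum data size under this simplified principle follows at once (a short derivation of ours, consistent with the asymptotic order in Equation (61)):
$2Q\!\left(\frac{ms}{2+s}\right) \le \alpha \;\Longleftrightarrow\; \frac{ms}{2+s} \ge Q^{-1}\!\left(\frac{\alpha}{2}\right) \;\Longleftrightarrow\; M = 2m^2 \ge \frac{2(2+s)^2}{s^2}\left[Q^{-1}\!\left(\frac{\alpha}{2}\right)\right]^2.$
Similarly, solving the same inequality for s with M fixed reproduces the right-hand side of Equation (62).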

6. Conclusions

Spectrum sensing is a key step in enabling the recently emerged CR technologies, detecting the presence or absence of signals so as to exploit the spatial and temporal availability of spectrum resources. Among the possible methods for spectrum sensing, energy detection is the most popular and widely adopted technique, mainly due to its low implementation complexity. Two detection principles, CFAR and CDR, have been reported for setting the threshold of the corresponding binary hypothesis test. CDR protects the primary users at a designated low level of interference, while CFAR ensures a high level of resource utilization for the secondary users. In practice, a graceful tradeoff between these two principles is desirable.
Motivated by this, the paper explored a new principle in which the sum of the false alarm probability $P_{fa}$ from CFAR and the missed detection probability $(1 - P_d)$ from CDR is kept smaller than a predetermined confidence level. Mathematically, for a given small confidence level $\alpha \in (0, 1)$, say $\alpha = 0.05$, the proposed principle aims to identify a threshold $\lambda$ such that
$P_{fa}(\lambda) + (1 - P_d(\lambda)) \le \alpha.$
However, this inequality in the potential threshold $\lambda$ may have too many solutions or no solution for a given noise variance $\sigma_n^2$, SNR, and data size M. To tackle this, the paper first introduced two well-posed optimization formulations: finding the minimum data size M (with $\sigma_n^2$ and SNR given) and finding the minimum SNR (with $\sigma_n^2$ and M given), respectively.
From our analysis, we found that, for a fixed small SNR, the data size M should be larger than a critical value, denoted by $M_{\min}$, to guarantee the existence of a threshold $\lambda$ suggested by the new principle under a given confidence level $\alpha$. An asymptotic explicit relation between $M_{\min}$ and SNR, i.e., Equation (61), is further given in Section 5. On the other hand, it is also discovered that, for a given data size M, the SNR should be greater than a minimum SNR to ensure the existence of a threshold $\lambda$ suggested by the new principle under a given confidence level $\alpha$. An asymptotic explicit relation between the minimum SNR and M, i.e., Equation (62), is further provided in Section 5. In short, if the data size M is fixed, the SNR should be greater than a certain level to perform reliable detection; if the SNR is known to be small and fixed, the data size should be greater than a certain level to detect reliably. These findings are important for policymaking in the settings of a CR sensing system, such as the detection time and sample rate for a specific channel, to achieve efficiency and noninterference at a confidence level. The proposed optimization problems can be solved quickly and effectively by taking the initial solution given by Equation (61) or (62). Therefore, the proposed framework is valuable in both theoretical and operational respects.
It is worth noting that the inequality in Equation (63) can be extended to the more general form
$w_1 P_{fa}(\lambda) + w_2 (1 - P_d(\lambda)) \le \alpha,$
where $w_1 \ge 0$ and $w_2 \ge 0$. Clearly, if $w_1 = 1$ and $w_2 = 0$, Equation (64) reduces to the CFAR principle; if $w_1 = 0$ and $w_2 = 1$, Equation (64) leads to the CDR principle; and, finally, if $w_1 = 1$ and $w_2 = 1$, Equation (64) becomes the new principle introduced in this paper. It is of interest to develop the relevant theory based on the inequality in Equation (64) in this general setting.

Author Contributions

Xiao-Li Hu conceived, analyzed, and wrote the original proposed framework under the supervision of Pin-Han Ho during a visit to Waterloo. Pin-Han Ho and Limei Peng gave advice on this work and helped revise the paper.

Funding

This research was funded by [National Research Foundation of Korea] grant number [2018R1D1A1B07051118].

Conflicts of Interest

The authors declare no conflict of interest.

Appendix

The Appendix provides some preliminaries on a number of commonly used distributions in the paper for easy reading.
The complement of the standard normal distribution function is often denoted by $Q(x)$, i.e.,
$Q(x) = \int_x^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-\frac{t^2}{2}}\, dt, \quad (A1)$
and is sometimes referred to simply as the Q-function, especially in engineering texts. It represents the tail probability of the standard Gaussian distribution. The Gamma function and the regularized upper incomplete Gamma function are defined as
$\Gamma(k) = \int_0^{\infty} t^{k-1} e^{-t}\, dt, \quad (A2)$
$\Gamma(k, x) = \frac{1}{\Gamma(k)}\int_x^{\infty} t^{k-1} e^{-t}\, dt \quad (A3)$
for $k > 0$, respectively. We simply use $Q^{-1}(x)$ and $\Gamma^{-1}(k, x)$ to denote the inverse functions of $Q(x)$ and $\Gamma(k, x)$ (with respect to x), respectively.
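For readers who wish to reproduce the computations, these functions and their inverses correspond to standard SciPy routines; the mapping below is ours and given only for convenience.

```python
# Mapping (ours) between the Appendix functions and SciPy:
#   Q(x)             -> scipy.stats.norm.sf(x)
#   Q^{-1}(x)        -> scipy.stats.norm.isf(x)
#   Gamma(k)         -> scipy.special.gamma(k)
#   Gamma(k, x)      -> scipy.special.gammaincc(k, x)      (regularized upper incomplete)
#   Gamma^{-1}(k, x) -> scipy.special.gammainccinv(k, x)
from scipy.stats import norm
from scipy.special import gamma, gammaincc, gammainccinv

print(norm.sf(1.0), norm.isf(norm.sf(1.0)))                  # Q(1), then recover 1.0
print(gamma(5.0), gammaincc(2.5, gammainccinv(2.5, 0.05)))   # 24.0, then recover 0.05
```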

References

1. Liang, Y.-C.; Chen, K.-C.; Li, G.Y.; Mahonen, P. Cognitive radio networking and communications: An overview. IEEE Trans. Veh. Technol. 2011, 60, 3386–3407.
2. Sahai, A.; Tandra, R.; Mishra, S.M.; Hoven, N. Fundamental design tradeoffs in cognitive radio systems. In Proceedings of the International Workshop on Technology and Policy for Accessing Spectrum, Boston, MA, USA, 5 August 2006.
3. Rahimzadeh, F.; Shahtalebi, K.; Parvaresh, F. Using NLMS algorithms in cyclostationary-based spectrum sensing for cognitive radio networks. Wirel. Pers. Commun. 2017, 97, 2781–2797.
4. Deepa, B.; Iyer, A.P.; Murthy, C.R. Cyclostationary-based architecture for spectrum sensing in IEEE 802.22 WRAN. In Proceedings of the IEEE Global Telecommunications Conference GLOBECOM 2010, Miami, FL, USA, 6–10 December 2010.
5. Tsinos, C.G.; Berberidis, K. Adaptive eigenvalue-based spectrum sensing for multi-antenna cognitive radio systems. In Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, BC, Canada, 26–31 May 2013; pp. 4454–4458.
6. Tsinos, C.G.; Berberidis, K. Decentralized adaptive eigenvalue-based spectrum sensing for multiantenna cognitive radio systems. IEEE Trans. Wirel. Commun. 2015, 14, 1703–1715.
7. Herath, S.P.; Rajatheva, N.; Tellambura, C. Energy detection of unknown signals in fading and diversity reception. IEEE Trans. Commun. 2011, 59, 2443–2453.
8. Sofotasios, P.C.; Mohjazi, L.; Muhaidat, S.; Al-Qutayri, M.; Karagiannidis, G.K. Energy detection of unknown signals over cascaded fading channels. IEEE Antennas Wirel. Propag. Lett. 2016, 15, 135–138.
9. Bagheri, A.; Sofotasios, P.C.; Tsiftsis, T.A.; Ho-Van, K.; Loupis, M.I.; Freear, S.; Valkama, M. Energy detection based spectrum sensing over enriched multipath fading channels. In Proceedings of the 2016 IEEE Wireless Communications and Networking Conference, Doha, Qatar, 3–6 April 2016.
10. Politis, C.; Maleki, S.; Tsinos, C.G.; Liolis, K.P.; Chatzinotas, S.; Ottersten, B. Simultaneous sensing and transmission for cognitive radios with imperfect signal cancellation. IEEE Trans. Wirel. Commun. 2017, 16, 5599–5615.
11. Politis, C.; Maleki, S.; Tsinos, C.; Chatzinotas, S.; Ottersten, B. On-board the satellite interference detection with imperfect signal cancellation. In Proceedings of the 2016 IEEE 17th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), Edinburgh, UK, 3–6 July 2016; pp. 1–5.
12. Ye, F.; Zhang, X.; Li, Y.; Tang, C. Faithworthy collaborative spectrum sensing based on credibility and evidence theory for cognitive radio networks. Symmetry 2017, 9, 36.
13. Liu, P.; Qi, W.; Yuan, E.; Wei, L.; Zhao, Y. Full-duplex cooperative sensing for spectrum-heterogeneous cognitive radio networks. Sensors 2017, 17, 1773.
14. Abraham, D.A. Performance analysis of constant-false-alarm-rate detectors using characteristic functions. IEEE J. Ocean. Eng. 2017.
15. Weinberg, G.V. Constant false alarm rate detection in Pareto Type II clutter. Digit. Signal Process. 2017, 68, 192–198.
16. Tandra, R.; Sahai, A. SNR walls for signal detection. IEEE J. Sel. Top. Signal Process. 2008, 2, 4–17.
17. Carlos, K.C.; Birru, D. IEEE 802.22: An introduction to the first wireless standard based on cognitive radios. J. Commun. 2006, 1, 38–47.
18. Penna, F.; Pastrone, C.; Spirito, M.; Garello, R. An experimental study on spectrum sensing for cognitive WSNs under Wi-Fi interference. In Proceedings of the Wireless World Research Forum (WWRF 21), Stockholm, Sweden, 13–15 October 2008.
19. Ye, Z.; Memik, G.; Grosspietsch, J. Energy detection using estimated noise variance for spectrum sensing in cognitive radio networks. In Proceedings of the IEEE Wireless Communications and Networking Conference, Las Vegas, NV, USA, 31 March–3 April 2008; pp. 711–716.
20. Shellhammer, S.J.; Tandra, R.; Tomcik, J. Performance of power detector sensors of DTV signals in IEEE 802.22 WRANs. In Proceedings of the First International Workshop on Technology and Policy for Accessing Spectrum, Boston, MA, USA, 5 August 2006.
21. Bertsekas, D.P.; Tsitsiklis, J.N. Introduction to Probability; Athena Scientific: Nashua, NH, USA, 2008.
22. Peh, E.C.; Liang, Y.-C.; Guan, Y.-L.; Zeng, Y.-H. Optimization of cooperative sensing in cognitive radio networks: A sensing-throughput tradeoff view. IEEE Trans. Veh. Technol. 2009, 58, 5294–5299.
23. Kuang, J. Applied Inequalities, 3rd ed.; Shandong Science and Technology Press: Jinan, China, 2004. (In Chinese)
24. Shevtsova, I.G. Sharpening of the upper bound of the absolute constant in the Berry–Esseen inequality. Theory Probab. Appl. 2007, 51, 549–553.
Figure 1. Block diagram of an energy detector.
Figure 2. The solutions $(M, \lambda)$ of (NP$_1$) with $\mathrm{SNR} = 1/k$, $k = 1, 2, \ldots, 20$, under the setting $\alpha = 0.05$ and $\sigma_n = 1$. (a) The solutions M of (NP$_1$) vs. 1/SNR. (b) The corresponding $\lambda$ vs. 1/SNR.
Figure 3. The solutions $(\mathrm{SNR}, \lambda)$ of (NP$_2$) with $M = 1000k$, $k = 1, 2, \ldots, 20$, under the setting $\alpha = 0.05$ and $\sigma_n = 1$. (a) SNR vs. M/1000. (b) The corresponding $\lambda$ vs. M/1000.
Figure 4. The differences between the solutions $(M, \lambda)$ of ($\widetilde{\mathrm{NP}}_1$) and (NP$_1$) with $\mathrm{SNR} = 1/k$, $k = 1, 2, \ldots, 20$, under the setting $\alpha = 0.05$ and $\sigma_n = 1$. (a) The differences between the solutions M of ($\widetilde{\mathrm{NP}}_1$) and (NP$_1$) vs. 1/SNR. (b) The differences for the corresponding $\lambda$ vs. 1/SNR.
Figure 5. The differences between the solutions $(\mathrm{SNR}, \lambda)$ of ($\widetilde{\mathrm{NP}}_2$) and (NP$_2$) with $M = 1000k$, $k = 1, 2, \ldots, 20$, under the setting $\alpha = 0.05$ and $\sigma_n = 1$. (a) The differences between the solutions SNR of ($\widetilde{\mathrm{NP}}_2$) and (NP$_2$) vs. M/1000. (b) The differences for the corresponding $\lambda$ vs. M/1000.
Figure 6. The plots for Example 5 under the setting $\mathrm{SNR} = 10^{1 - 3(k-1)/19}$, $k = 1, 2, \ldots, 20$ (corresponding to the range of 10 dB to $-20$ dB), $\alpha = 0.05$, and $\sigma_n = 1$. (a) The constraint summation probabilities of (NP$_1$) and ($\widetilde{\mathrm{NP}}_1$), marked by “-o” and “-x”, respectively. (b) The bias between the solution M from either (NP$_1$) or ($\widetilde{\mathrm{NP}}_1$) and the asymptotic Equation (61), marked by “-o” and “-x”, respectively.
Figure 7. The plots for Example 6 under the setting $M = 10k$, $k = 1, 2, \ldots, 20$, $\alpha = 0.05$, and $\sigma_n = 1$. (a) The constraint summation probabilities of (NP$_2$) and ($\widetilde{\mathrm{NP}}_2$), marked by “-o” and “-x”, respectively. (b) The bias between the SNR solutions from either (NP$_2$) or ($\widetilde{\mathrm{NP}}_2$) and the asymptotic Equation (62), marked by “-o” and “-x”, respectively.
