Q-function
In statistics, the Q-function is the tail probability of the standard normal distribution.[1][2] In other words, $Q(x)$ is the probability that a normal (Gaussian) random variable will take a value larger than $x$ standard deviations above the mean.
If the underlying random variable is $y$ with mean $\mu$ and standard deviation $\sigma$, then the proper argument to the tail probability is
$$x = \frac{y - \mu}{\sigma},$$
which expresses the number of standard deviations that $y$ lies away from the mean. For example, if $y$ has mean 3 and standard deviation 4, the probability of exceeding 7 is $Q\!\left(\tfrac{7-3}{4}\right) = Q(1) \approx 0.159$.
Other definitions of the Q-function, all of which are simple transformations of the normal cumulative distribution function, are also used occasionally.[3]
Because of its relation to the cumulative distribution function of the normal distribution, the Q-function can also be expressed in terms of the error function, which is an important function in applied mathematics and physics.
Definition and basic properties
Formally, the Q-function is defined as
$$Q(x) = \frac{1}{\sqrt{2\pi}} \int_x^\infty \exp\!\left(-\frac{u^2}{2}\right) du.$$
Thus,
$$Q(x) = 1 - Q(-x) = 1 - \Phi(x),$$
where $\Phi(x)$ is the cumulative distribution function of the standard normal (Gaussian) distribution.
The Q-function can be expressed in terms of the error function, or the complementary error function, as[2]
$$Q(x) = \frac{1}{2}\left(1 - \operatorname{erf}\!\left(\frac{x}{\sqrt{2}}\right)\right) = \frac{1}{2}\operatorname{erfc}\!\left(\frac{x}{\sqrt{2}}\right).$$
An alternative form of the Q-function, known as Craig's formula after its discoverer, is[4]
$$Q(x) = \frac{1}{\pi}\int_0^{\pi/2} \exp\!\left(-\frac{x^2}{2\sin^2\theta}\right) d\theta.$$
This expression is valid only for positive values of x, but it can be used in conjunction with Q(x) = 1 − Q(−x) to obtain Q(x) for negative values. This form is advantageous in that the range of integration is fixed and finite.
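As an illustration (not part of the original article), the complementary-error-function form and Craig's formula can be checked against each other numerically. The sketch below assumes NumPy and SciPy are available; the helper names `Q_erfc` and `Q_craig` are chosen here for clarity.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erfc

def Q_erfc(x):
    """Q(x) = (1/2) * erfc(x / sqrt(2))."""
    return 0.5 * erfc(x / np.sqrt(2.0))

def Q_craig(x):
    """Craig's formula, valid for x > 0:
    Q(x) = (1/pi) * integral_0^{pi/2} exp(-x^2 / (2 sin^2(theta))) d(theta)."""
    integrand = lambda theta: np.exp(-x**2 / (2.0 * np.sin(theta)**2))
    value, _ = quad(integrand, 0.0, np.pi / 2.0)
    return value / np.pi

for x in (0.5, 1.0, 2.0, 3.0):
    print(f"x = {x}: erfc form = {Q_erfc(x):.8f}, Craig = {Q_craig(x):.8f}")
```

The two forms agree to numerical precision, which is a convenient sanity check when only one of them is available in a given environment.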
Bounds and approximations
- The Q-function is not an elementary function. However, the bounds
  $$\left(\frac{x}{1+x^2}\right)\varphi(x) < Q(x) < \frac{\varphi(x)}{x}, \qquad x > 0,$$
  where $\varphi(x)$ is the density function of the standard normal distribution, become increasingly tight for large $x$, and are often useful. Using the substitution $v = u^2/2$, the upper bound is derived as follows:
  $$Q(x) = \int_x^\infty \varphi(u)\,du < \int_x^\infty \frac{u}{x}\,\varphi(u)\,du = \int_{x^2/2}^\infty \frac{e^{-v}}{x\sqrt{2\pi}}\,dv = \frac{e^{-x^2/2}}{x\sqrt{2\pi}} = \frac{\varphi(x)}{x}.$$
  Similarly, using $\varphi'(u) = -u\,\varphi(u)$ and the quotient rule,
  $$\left(\frac{\varphi(x)}{x}\right)' = -\varphi(x) - \frac{\varphi(x)}{x^2} = -\left(1 + \frac{1}{x^2}\right)\varphi(x).$$
  Integrating this identity from $x$ to $\infty$ gives
  $$\frac{\varphi(x)}{x} = \int_x^\infty \left(1 + \frac{1}{u^2}\right)\varphi(u)\,du < \left(1 + \frac{1}{x^2}\right)\int_x^\infty \varphi(u)\,du = \left(1 + \frac{1}{x^2}\right)Q(x).$$
  Solving for $Q(x)$ provides the lower bound.
- The Chernoff bound of the Q-function is
  $$Q(x) \le e^{-\frac{x^2}{2}}, \qquad x > 0.$$
- Improved exponential bounds and a pure exponential approximation are[5]
  $$Q(x) \le \tfrac{1}{4}e^{-x^2} + \tfrac{1}{4}e^{-\frac{x^2}{2}} \le \tfrac{1}{2}e^{-\frac{x^2}{2}}, \qquad x > 0,$$
  $$Q(x) \approx \frac{1}{12}e^{-\frac{x^2}{2}} + \frac{1}{4}e^{-\frac{2x^2}{3}}, \qquad x > 0.$$
- A tight approximation of $Q(x)$ for $x \in [0, \infty)$ is given by Karagiannidis & Lioumpas (2007),[6] who showed for the appropriate choice of parameters $\{A, B\}$ that
  $$f(x; A, B) = \frac{\left(1 - e^{-Ax}\right)e^{-x^2}}{B\sqrt{\pi}\,x} \approx \operatorname{erfc}(x).$$
  The absolute error between $f(x; A, B)$ and $\operatorname{erfc}(x)$ over the range $[0, R]$ is minimized by evaluating
  $$\{A, B\} = \arg\min_{\{A, B\}} \frac{1}{R}\int_0^R \left| f(x; A, B) - \operatorname{erfc}(x) \right| dx.$$
  Using $R = 20$ and numerically integrating, they found the minimum error occurred when $\{A, B\} = \{1.98, 1.135\}$, which gave a good approximation for all $x \ge 0$. Substituting these values and using the relationship between $Q(x)$ and $\operatorname{erfc}(x)$ from above gives
  $$Q(x) \approx \frac{\left(1 - e^{-1.98\frac{x}{\sqrt{2}}}\right)e^{-\frac{x^2}{2}}}{1.135\sqrt{2\pi}\,x}, \qquad x \ge 0.$$
  These bounds and approximations are compared numerically in the sketch below.
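The following sketch (not from the original article) tabulates the elementary bounds, the Chernoff bound, the exponential approximation of [5], and the Karagiannidis–Lioumpas approximation of [6] against the exact Q-function; NumPy and SciPy are assumed, and the variable names are chosen here for clarity.

```python
import numpy as np
from scipy.special import erfc

def Q(x):
    # exact Q-function via the complementary error function
    return 0.5 * erfc(x / np.sqrt(2.0))

def phi(x):
    # standard normal density
    return np.exp(-x**2 / 2.0) / np.sqrt(2.0 * np.pi)

x = np.array([0.5, 1.0, 2.0, 4.0])

lower    = x / (1.0 + x**2) * phi(x)               # lower bound, x > 0
upper    = phi(x) / x                              # upper bound, x > 0
chernoff = np.exp(-x**2 / 2.0)                     # Chernoff bound
chiani   = np.exp(-x**2 / 2.0) / 12.0 + np.exp(-2.0 * x**2 / 3.0) / 4.0  # approximation of [5]
karag    = ((1.0 - np.exp(-1.98 * x / np.sqrt(2.0))) * np.exp(-x**2 / 2.0)
            / (1.135 * np.sqrt(2.0 * np.pi) * x))  # approximation of [6]

print("     x     lower   exact Q     upper  Chernoff approx[5] approx[6]")
for row in zip(x, lower, Q(x), upper, chernoff, chiani, karag):
    print(" ".join(f"{v:9.5f}" for v in row))
```

Running this shows the elementary bounds tightening and the two approximations tracking the exact values increasingly well as $x$ grows.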
Inverse Q
The inverse Q-function can be related to the inverse error functions:
$$Q^{-1}(y) = \sqrt{2}\,\operatorname{erf}^{-1}(1 - 2y) = \sqrt{2}\,\operatorname{erfc}^{-1}(2y).$$
The function $Q^{-1}(y)$ finds application in digital communications. It is usually expressed in dB and generally called the Q-factor:
$$\text{Q-factor} = 20\log_{10}\!\left(Q^{-1}(y)\right)\ \text{dB},$$
where $y$ is the bit-error rate (BER) of the digitally modulated signal under analysis. For instance, for QPSK in additive white Gaussian noise, the Q-factor defined above coincides with the value in dB of the signal-to-noise ratio that yields a bit-error rate equal to $y$.
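As a small illustration (not in the original article), the inverse Q-function and the Q-factor can be evaluated with SciPy's `erfcinv`; the helper names below are chosen here for clarity.

```python
import numpy as np
from scipy.special import erfcinv

def Q_inv(y):
    """Inverse Q-function: Q^{-1}(y) = sqrt(2) * erfcinv(2 * y)."""
    return np.sqrt(2.0) * erfcinv(2.0 * y)

def q_factor_db(ber):
    """Q-factor in dB for a given bit-error rate: 20 * log10(Q^{-1}(BER))."""
    return 20.0 * np.log10(Q_inv(ber))

print(Q_inv(1e-9))        # about 6.0
print(q_factor_db(1e-9))  # about 15.6 dB
```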
Values
The Q-function is well tabulated and can be computed directly in most mathematical software packages, such as R and those available in Python, MATLAB, and Mathematica. Some values of the Q-function are given below for reference.
| $x$ | $Q(x)$ | $x$ | $Q(x)$ | $x$ | $Q(x)$ |
|-----|----------|-----|----------|-----|----------|
| 0.0 | 0.500000 | 1.5 | 0.066807 | 3.0 | 0.001350 |
| 0.5 | 0.308538 | 2.0 | 0.022750 | 3.5 | 0.000233 |
| 1.0 | 0.158655 | 2.5 | 0.006210 | 4.0 | 0.000032 |
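As a cross-check (not part of the original article), these values can be reproduced in Python with SciPy, whose survival function `norm.sf` of the standard normal is exactly the Q-function:

```python
from scipy.stats import norm

# norm.sf(x) = 1 - Phi(x) for the standard normal, i.e. Q(x)
for x in [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]:
    print(f"Q({x:.1f}) = {norm.sf(x):.6f}")
```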
Generalization to high dimensions
The Q-function can be generalized to higher dimensions:[7]
$$Q(\mathbf{x}) = \mathbb{P}(\mathbf{X} \ge \mathbf{x}),$$
where $\mathbf{X} \sim \mathcal{N}(\mathbf{0}, \Sigma)$ follows the multivariate normal distribution with covariance $\Sigma$ and the threshold is of the form $\mathbf{x} = \gamma\Sigma\mathbf{l}^*$ for some positive vector $\mathbf{l}^* > \mathbf{0}$ and positive constant $\gamma > 0$. As in the one-dimensional case, there is no simple analytical formula for the Q-function. Nevertheless, it can be approximated arbitrarily well as $\gamma$ becomes larger and larger.[8]
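As a rough illustration (not part of the original article, and not the minimax-tilting method of [8]), the multivariate Q-function can be estimated by plain Monte Carlo for a small example; the covariance $\Sigma$, vector $\mathbf{l}^*$, and constant $\gamma$ below are arbitrary example values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Example problem data (arbitrary, for illustration only)
Sigma  = np.array([[1.0, 0.5],
                   [0.5, 2.0]])      # covariance matrix
l_star = np.array([1.0, 1.0])        # positive vector l*
gamma  = 1.0                         # positive constant
x      = gamma * Sigma @ l_star      # threshold of the form gamma * Sigma * l*

# Plain Monte Carlo estimate of P(X >= x componentwise) for X ~ N(0, Sigma).
# For very small probabilities (large gamma) this estimator becomes unreliable,
# which is what motivates the minimax-tilting approach of [8].
samples  = rng.multivariate_normal(mean=np.zeros(2), cov=Sigma, size=1_000_000)
estimate = np.mean(np.all(samples >= x, axis=1))
print(estimate)
```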
References
1. The Q-function, from cnx.org.
2. Basic properties of the Q-function. Archived March 25, 2009, at the Wayback Machine.
3. Normal Distribution Function, from Wolfram MathWorld.
4. Craig, J. W. (1991). "A new, simple and exact result for calculating the probability of error for two-dimensional signal constellations". Proc. 1991 IEEE Military Communications Conference, vol. 2, pp. 571–575.
5. Chiani, M., Dardari, D., & Simon, M. K. (2003). "New exponential bounds and approximations for the computation of error probability in fading channels". IEEE Transactions on Wireless Communications, 2(4), 840–845. doi:10.1109/TWC.2003.814350.
6. Karagiannidis, G. K., & Lioumpas, A. S. (2007). "An improved approximation for the Gaussian Q-function". IEEE Communications Letters, 11(8), 644–646.
7. Savage, I. R. (1962). "Mills ratio for multivariate normal distributions". Journal of Research of the National Bureau of Standards, Section B, 66: 93–96.
8. Botev, Z. I. (2016). "The normal law under linear restrictions: simulation and estimation via minimax tilting". Journal of the Royal Statistical Society: Series B (Statistical Methodology). doi:10.1111/rssb.12162.