
Multivariate normal distribution

In probability theory and statistics, the multivariate normal distribution, multivariate Gaussian distribution, or joint
normal distribution is a generalization of the one-dimensional (univariate) normal distribution to higher dimensions.
One definition is that a random vector is said to be k-variate normally distributed if every linear combination of its k
components has a univariate normal distribution. Its importance derives mainly from the multivariate central limit
theorem. The multivariate normal distribution is often used to describe, at least approximately, any set of (possibly)
correlated real-valued random variables each of which clusters around a mean value.

[Figure: many sample points from a multivariate normal distribution, shown along with the 3-sigma ellipse, the two
marginal distributions, and the two 1-d histograms.]

Multivariate normal
    Notation:   $\mathbf{X} \sim \mathcal{N}(\boldsymbol\mu, \boldsymbol\Sigma)$
    Parameters: $\boldsymbol\mu \in \mathbb{R}^k$ — location; $\boldsymbol\Sigma \in \mathbb{R}^{k \times k}$ — covariance (positive semi-definite matrix)
    Support:    $\mathbf{x} \in \boldsymbol\mu + \operatorname{span}(\boldsymbol\Sigma) \subseteq \mathbb{R}^k$
    PDF:        $(2\pi)^{-k/2}\det(\boldsymbol\Sigma)^{-1/2}\exp\left(-\tfrac{1}{2}(\mathbf{x}-\boldsymbol\mu)^{\mathrm{T}}\boldsymbol\Sigma^{-1}(\mathbf{x}-\boldsymbol\mu)\right)$; exists only when $\boldsymbol\Sigma$ is positive-definite
    Mean:       $\boldsymbol\mu$
    Mode:       $\boldsymbol\mu$
    Variance:   $\boldsymbol\Sigma$
    Entropy:    $\tfrac{1}{2}\ln\det(2\pi e\boldsymbol\Sigma)$
    MGF:        $\exp\left(\boldsymbol\mu^{\mathrm{T}}\mathbf{t} + \tfrac{1}{2}\mathbf{t}^{\mathrm{T}}\boldsymbol\Sigma\mathbf{t}\right)$
    CF:         $\exp\left(i\boldsymbol\mu^{\mathrm{T}}\mathbf{t} - \tfrac{1}{2}\mathbf{t}^{\mathrm{T}}\boldsymbol\Sigma\mathbf{t}\right)$
    Kullback–Leibler divergence: see below

Contents

Definitions
    Notation and parametrization
    Standard normal random vector
    Centered normal random vector
    Normal random vector
    Equivalent definitions
Density function
    Non-degenerate case
    Bivariate case
    Degenerate case
Cumulative distribution function
    Interval
    Complementary cumulative distribution function (tail distribution)
Properties
    Higher moments
    Likelihood function
    Differential entropy
    Kullback–Leibler divergence
    Mutual information
    Joint normality
        Normally distributed and independent
        Two normally distributed random variables need not be jointly bivariate normal
        Correlations and independence
    Conditional distributions
        Bivariate case
        Bivariate conditional expectation
    Marginal distributions
    Affine transformation
    Geometric interpretation
Statistical Inference
    Parameter estimation
    Bayesian inference
    Multivariate normality tests
Computational methods
    Drawing values from the distribution
See also
References
Literature

Definitions

Notation and parametrization

The multivariate normal distribution of a k-dimensional random vector $\mathbf{X} = (X_1, \ldots, X_k)^{\mathrm{T}}$ can be written in the
following notation:

    $\mathbf{X} \sim \mathcal{N}(\boldsymbol\mu, \boldsymbol\Sigma),$

or to make it explicitly known that X is k-dimensional,

    $\mathbf{X} \sim \mathcal{N}_k(\boldsymbol\mu, \boldsymbol\Sigma),$

with k-dimensional mean vector

    $\boldsymbol\mu = \operatorname{E}[\mathbf{X}] = (\operatorname{E}[X_1], \operatorname{E}[X_2], \ldots, \operatorname{E}[X_k])^{\mathrm{T}}$

and $k \times k$ covariance matrix

    $\Sigma_{i,j} = \operatorname{E}[(X_i - \mu_i)(X_j - \mu_j)] = \operatorname{Cov}[X_i, X_j]$

such that $1 \le i, j \le k$. The inverse of the covariance matrix is called the precision matrix, denoted by $\mathbf{Q} = \boldsymbol\Sigma^{-1}$.

Standard normal random vector

A real random vector $\mathbf{X} = (X_1, \ldots, X_k)^{\mathrm{T}}$ is called a standard normal random vector if all of its components $X_i$
are independent and each is a zero-mean unit-variance normally distributed random variable, i.e. if $X_i \sim \mathcal{N}(0, 1)$ for
all $i$.[1]:p. 454

Centered normal random vector

A real random vector $\mathbf{X} = (X_1, \ldots, X_k)^{\mathrm{T}}$ is called a centered normal random vector if there exists a deterministic
$k \times \ell$ matrix $\mathbf{A}$ such that $\mathbf{A}\mathbf{Z}$ has the same distribution as $\mathbf{X}$, where $\mathbf{Z}$ is a standard normal random vector with $\ell$
components.[1]:p. 454

Normal random vector


A real random vector $\mathbf{X} = (X_1, \ldots, X_k)^{\mathrm{T}}$ is called a normal random vector if there exists a random $\ell$-vector $\mathbf{Z}$,
which is a standard normal random vector, a $k$-vector $\boldsymbol\mu$, and a $k \times \ell$ matrix $\mathbf{A}$, such that
$\mathbf{X} = \mathbf{A}\mathbf{Z} + \boldsymbol\mu$.[2]:p. 454 [1]:p. 455

Formally:

    $\mathbf{X} \sim \mathcal{N}(\boldsymbol\mu, \boldsymbol\Sigma) \iff \text{there exist } \boldsymbol\mu \in \mathbb{R}^k \text{ and } \mathbf{A} \in \mathbb{R}^{k \times \ell} \text{ such that } \mathbf{X} = \mathbf{A}\mathbf{Z} + \boldsymbol\mu \text{ with } Z_n \sim \mathcal{N}(0, 1) \text{ i.i.d.}$

Here the covariance matrix is $\boldsymbol\Sigma = \mathbf{A}\mathbf{A}^{\mathrm{T}}$.

In the degenerate case where the covariance matrix is singular, the corresponding distribution has no density; see the
section below for details. This case arises frequently in statistics; for example, in the distribution of the vector of residuals
in ordinary least squares regression. The $X_i$ are in general not independent; they can be seen as the result of applying
the matrix $\mathbf{A}$ to a collection of independent Gaussian variables $\mathbf{Z}$.

Equivalent definitions

The following definitions are equivalent to the definition given above. A random vector $\mathbf{X} = (X_1, \ldots, X_k)^{\mathrm{T}}$ has a
multivariate normal distribution if it satisfies one of the following equivalent conditions.

    Every linear combination of its components is normally distributed. That is, for
    any constant vector $\mathbf{a} \in \mathbb{R}^k$, the random variable $Y = \mathbf{a}^{\mathrm{T}}\mathbf{X}$ has a univariate normal distribution, where a
    univariate normal distribution with zero variance is a point mass on its mean.
    There is a k-vector $\boldsymbol\mu$ and a symmetric, positive semidefinite $k \times k$ matrix $\boldsymbol\Sigma$, such that the characteristic
    function of $\mathbf{X}$ is

        $\varphi_{\mathbf{X}}(\mathbf{u}) = \exp\left(i\mathbf{u}^{\mathrm{T}}\boldsymbol\mu - \tfrac{1}{2}\mathbf{u}^{\mathrm{T}}\boldsymbol\Sigma\mathbf{u}\right).$

The spherical normal distribution can be characterised as the unique distribution where components are independent in
any orthogonal coordinate system.[3][4]
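
The first condition above can be checked empirically. The following is a minimal sketch (assuming NumPy and SciPy are available; the mean, covariance, and coefficient vector are arbitrary illustrative values, not from the source) that draws samples and applies a univariate normality test to one linear combination of the components:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu = np.array([1.0, -2.0, 0.5])                  # illustrative parameters
Sigma = np.array([[2.0, 0.3, 0.4],
                  [0.3, 1.0, 0.2],
                  [0.4, 0.2, 1.5]])
X = rng.multivariate_normal(mu, Sigma, size=20000)

a = np.array([0.7, -1.2, 2.0])                   # arbitrary constant vector
Y = X @ a                                        # linear combination a^T X
print(Y.mean(), a @ mu)                          # sample mean vs a^T mu
print(Y.var(), a @ Sigma @ a)                    # sample variance vs a^T Sigma a
print(stats.normaltest(Y).pvalue)                # large p-value: consistent with normality
```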

Density function

Non-degenerate case

The multivariate normal distribution is said to be "non-degenerate" when the symmetric covariance matrix $\boldsymbol\Sigma$ is
positive definite. In this case the distribution has density[5]

    $f_{\mathbf{X}}(x_1, \ldots, x_k) = \frac{\exp\left(-\tfrac{1}{2}(\mathbf{x}-\boldsymbol\mu)^{\mathrm{T}}\boldsymbol\Sigma^{-1}(\mathbf{x}-\boldsymbol\mu)\right)}{\sqrt{(2\pi)^k |\boldsymbol\Sigma|}}$

[Figure: bivariate normal joint density.]

where $\mathbf{x}$ is a real k-dimensional column vector and $|\boldsymbol\Sigma| \equiv \det\boldsymbol\Sigma$ is the determinant of $\boldsymbol\Sigma$. The equation above reduces to
that of the univariate normal distribution if $\boldsymbol\Sigma$ is a $1 \times 1$ matrix (i.e. a single real number).

The circularly symmetric version of the complex normal distribution has a slightly different form.
Each iso-density locus—the locus of points in k-dimensional space each of which gives the same particular value of the
density—is an ellipse or its higher-dimensional generalization; hence the multivariate normal is a special case of the
elliptical distributions.

The descriptive statistic $\sqrt{(\mathbf{x}-\boldsymbol\mu)^{\mathrm{T}}\boldsymbol\Sigma^{-1}(\mathbf{x}-\boldsymbol\mu)}$ is known as the Mahalanobis distance, which represents the distance
of the test point $\mathbf{x}$ from the mean $\boldsymbol\mu$. Note that in the case when $k = 1$, the distribution reduces to a univariate normal
distribution and the Mahalanobis distance reduces to the absolute value of the standard score. See also Interval below.
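
The density and the Mahalanobis distance can be evaluated directly from the formula above. A minimal sketch (assuming NumPy and SciPy; the parameters and test point are arbitrary illustrative values), cross-checked against scipy.stats.multivariate_normal:

```python
import numpy as np
from scipy.stats import multivariate_normal

mu = np.array([0.0, 1.0])                      # illustrative parameters
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])
x = np.array([1.0, 2.0])                       # test point

d = x - mu
maha2 = d @ np.linalg.inv(Sigma) @ d           # squared Mahalanobis distance
k = len(mu)
pdf = np.exp(-0.5 * maha2) / np.sqrt((2 * np.pi) ** k * np.linalg.det(Sigma))

print(pdf, multivariate_normal(mu, Sigma).pdf(x))  # the two values should agree
print(np.sqrt(maha2))                              # Mahalanobis distance of x from mu
```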

Bivariate case

In the 2-dimensional nonsingular case ($k = \operatorname{rank}(\boldsymbol\Sigma) = 2$), the probability density function of a vector $(X, Y)^{\mathrm{T}}$ is:

    $f(x, y) = \frac{1}{2\pi\sigma_X\sigma_Y\sqrt{1-\rho^2}} \exp\left(-\frac{1}{2(1-\rho^2)}\left[\frac{(x-\mu_X)^2}{\sigma_X^2} - \frac{2\rho(x-\mu_X)(y-\mu_Y)}{\sigma_X\sigma_Y} + \frac{(y-\mu_Y)^2}{\sigma_Y^2}\right]\right)$

where $\rho$ is the correlation between $X$ and $Y$ and where $\sigma_X > 0$ and $\sigma_Y > 0$. In this case,

    $\boldsymbol\mu = \begin{pmatrix} \mu_X \\ \mu_Y \end{pmatrix}, \qquad \boldsymbol\Sigma = \begin{pmatrix} \sigma_X^2 & \rho\sigma_X\sigma_Y \\ \rho\sigma_X\sigma_Y & \sigma_Y^2 \end{pmatrix}.$

In the bivariate case, the first equivalent condition for multivariate normality can be made less restrictive: it is sufficient to
verify that countably many distinct linear combinations of $X$ and $Y$ are normal in order to conclude that the vector $(X, Y)^{\mathrm{T}}$
is bivariate normal.[6]

The bivariate iso-density loci plotted in the $x,y$-plane are ellipses. As the absolute value of the correlation parameter $\rho$
increases, these loci are squeezed toward the following line:

    $y(x) = \operatorname{sgn}(\rho)\,\frac{\sigma_Y}{\sigma_X}(x - \mu_X) + \mu_Y.$

This is because this expression, with $\operatorname{sgn}(\rho)$ (where sgn is the sign function) replaced by $\rho$, is the best linear unbiased
prediction of $Y$ given a value of $X$.[7]
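
The bivariate density can be coded directly from the formula above. A minimal sketch (assuming NumPy and SciPy; the parameter values are arbitrary), compared against the general k-dimensional density:

```python
import numpy as np
from scipy.stats import multivariate_normal

mu_x, mu_y = 0.0, 1.0                          # illustrative parameters
s_x, s_y, rho = 1.5, 0.8, 0.6

def bivariate_pdf(x, y):
    """Bivariate normal density written directly from the formula above."""
    zx, zy = (x - mu_x) / s_x, (y - mu_y) / s_y
    q = (zx**2 - 2 * rho * zx * zy + zy**2) / (1 - rho**2)
    return np.exp(-0.5 * q) / (2 * np.pi * s_x * s_y * np.sqrt(1 - rho**2))

Sigma = np.array([[s_x**2, rho * s_x * s_y],
                  [rho * s_x * s_y, s_y**2]])
print(bivariate_pdf(1.0, 0.5))
print(multivariate_normal([mu_x, mu_y], Sigma).pdf([1.0, 0.5]))  # same value
```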

Degenerate case

If the covariance matrix is not full rank, then the multivariate normal distribution is degenerate and does not have a
density. More precisely, it does not have a density with respect to k-dimensional Lebesgue measure (which is the usual
measure assumed in calculus-level probability courses). Only random vectors whose distributions are absolutely
continuous with respect to a measure are said to have densities (with respect to that measure). To talk about densities but
avoid dealing with measure-theoretic complications it can be simpler to restrict attention to a subset of $\operatorname{rank}(\boldsymbol\Sigma)$ of the
coordinates of $\mathbf{x}$ such that the covariance matrix for this subset is positive definite; then the other coordinates may be
thought of as an affine function of the selected coordinates.

To talk about densities meaningfully in the singular case, then, we must select a different base measure. Using the
disintegration theorem we can define a restriction of Lebesgue measure to the $\operatorname{rank}(\boldsymbol\Sigma)$-dimensional affine subspace of $\mathbb{R}^k$
where the Gaussian distribution is supported, i.e. $\{\boldsymbol\mu + \boldsymbol\Sigma^{1/2}\mathbf{v} : \mathbf{v} \in \mathbb{R}^k\}$. With respect to this measure the
distribution has density:

    $f(\mathbf{x}) = \frac{\exp\left(-\tfrac{1}{2}(\mathbf{x}-\boldsymbol\mu)^{\mathrm{T}}\boldsymbol\Sigma^{+}(\mathbf{x}-\boldsymbol\mu)\right)}{\sqrt{(2\pi)^{\operatorname{rank}(\boldsymbol\Sigma)}\,{\det}^{*}(\boldsymbol\Sigma)}}$

where $\boldsymbol\Sigma^{+}$ is the generalized inverse and $\det^{*}$ is the pseudo-determinant.[8]
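
For illustration, this singular density can be evaluated with the Moore–Penrose pseudo-inverse and the pseudo-determinant. A minimal sketch (assuming NumPy and SciPy; the rank-1 covariance is an arbitrary example, and SciPy's allow_singular option is used only as a cross-check):

```python
import numpy as np
from scipy.stats import multivariate_normal

mu = np.zeros(2)
a = np.array([[1.0], [2.0]])
Sigma = a @ a.T                                # singular covariance, rank 1

def degenerate_logpdf(x):
    """Log-density on the support, using pseudo-inverse and pseudo-determinant."""
    d = x - mu
    Sigma_plus = np.linalg.pinv(Sigma)         # generalized (Moore-Penrose) inverse
    eig = np.linalg.eigvalsh(Sigma)
    pos = eig[eig > 1e-12]                     # nonzero eigenvalues
    log_pdet = np.sum(np.log(pos))             # log pseudo-determinant det*
    rank = len(pos)
    return -0.5 * (d @ Sigma_plus @ d + rank * np.log(2 * np.pi) + log_pdet)

x = 0.3 * a.ravel()                            # a point on the support mu + span(Sigma)
print(degenerate_logpdf(x))
print(multivariate_normal(mu, Sigma, allow_singular=True).logpdf(x))  # should agree
```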


Cumulative distribution function

The notion of cumulative distribution function (cdf) in dimension 1 can be extended in two ways to the multidimensional
case, based on rectangular and ellipsoidal regions.

The first way is to define the cdf $F(\mathbf{x})$ of a random vector $\mathbf{X}$ as the probability that all components of $\mathbf{X}$ are less than or
equal to the corresponding values in the vector $\mathbf{x}$:[9]

    $F(\mathbf{x}) = \mathbb{P}(\mathbf{X} \le \mathbf{x}), \quad \text{where } \mathbf{X} \sim \mathcal{N}(\boldsymbol\mu, \boldsymbol\Sigma).$

Though there is no closed form for $F(\mathbf{x})$, there are a number of algorithms that estimate it numerically
(https://cran.r-project.org/web/packages/TruncatedNormal/).[9][10]
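
SciPy ships one such numerical estimate. A minimal sketch (assuming NumPy and SciPy; the parameters are arbitrary illustrative values), cross-checked by Monte Carlo:

```python
import numpy as np
from scipy.stats import multivariate_normal

mu = np.array([0.0, 0.0])                      # illustrative parameters
Sigma = np.array([[1.0, 0.5],
                  [0.5, 2.0]])
dist = multivariate_normal(mu, Sigma)

x = np.array([0.5, 1.0])
print(dist.cdf(x))                             # numerical estimate of P(X1 <= 0.5, X2 <= 1.0)

# Monte Carlo cross-check
rng = np.random.default_rng(1)
samples = rng.multivariate_normal(mu, Sigma, size=200000)
print(np.mean(np.all(samples <= x, axis=1)))
```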

Another way is to define the cdf $F(r)$ as the probability that a sample lies inside the ellipsoid determined by its
Mahalanobis distance $r$ from the Gaussian, a direct generalization of the standard deviation.[11] In order to compute the
values of this function, closed analytic formulae exist,[11] as follows.

Interval

The interval for the multivariate normal distribution yields a region consisting of those vectors x satisfying

    $(\mathbf{x}-\boldsymbol\mu)^{\mathrm{T}}\boldsymbol\Sigma^{-1}(\mathbf{x}-\boldsymbol\mu) \le \chi^2_k(p).$

Here $\mathbf{x}$ is a $k$-dimensional vector, $\boldsymbol\mu$ is the known $k$-dimensional mean vector, $\boldsymbol\Sigma$ is the known covariance matrix and
$\chi^2_k(p)$ is the quantile function for probability $p$ of the chi-squared distribution with $k$ degrees of freedom.[12] When
$k = 2$, the expression defines the interior of an ellipse and the chi-squared distribution simplifies to an exponential
distribution with mean equal to two.
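
A minimal sketch of this confidence region (assuming NumPy and SciPy; the parameters and probability level are arbitrary): the chi-squared quantile gives the ellipse radius, and roughly a fraction p of draws should land inside.

```python
import numpy as np
from scipy.stats import chi2

mu = np.array([0.0, 0.0])                      # illustrative parameters
Sigma = np.array([[1.0, 0.3],
                  [0.3, 1.0]])
p = 0.95
r2 = chi2.ppf(p, df=len(mu))                   # chi-squared quantile, k degrees of freedom

rng = np.random.default_rng(2)
X = rng.multivariate_normal(mu, Sigma, size=100000)
d = X - mu
m2 = np.einsum('ij,jk,ik->i', d, np.linalg.inv(Sigma), d)  # squared Mahalanobis distances
print(np.mean(m2 <= r2))                       # approximately 0.95
```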

Complementary cumulative distribution function (tail distribution)

The complementary cumulative distribution function (ccdf) or the tail distribution is defined as
$\bar{F}(\mathbf{x}) = 1 - \mathbb{P}(\mathbf{X} \le \mathbf{x})$. When $\mathbf{X} \sim \mathcal{N}(\boldsymbol\mu, \boldsymbol\Sigma)$, the ccdf can be written as a probability involving the maximum of
dependent Gaussian variables:[13]

    $\bar{F}(\mathbf{x}) = \mathbb{P}\left(\bigcup_i \{X_i > x_i\}\right) = \mathbb{P}\left(\max_i (X_i - x_i) > 0\right).$

While no simple closed formula exists for computing the ccdf, the maximum of dependent Gaussian variables can be
estimated accurately via the Monte Carlo method.[13][14]
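
A minimal Monte Carlo sketch of the ccdf identity above (assuming NumPy; the parameters and threshold vector are arbitrary illustrative values):

```python
import numpy as np

mu = np.array([0.0, 0.0, 0.0])                 # illustrative parameters
Sigma = np.array([[1.0, 0.5, 0.2],
                  [0.5, 1.0, 0.5],
                  [0.2, 0.5, 1.0]])
x = np.array([1.0, 1.0, 1.0])

# ccdf = P(union_i {X_i > x_i}) = P(max_i (X_i - x_i) > 0), estimated by simulation
rng = np.random.default_rng(3)
X = rng.multivariate_normal(mu, Sigma, size=500000)
print(np.mean((X - x).max(axis=1) > 0))
```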

Properties

Higher moments

The kth-order moments of x are given by

    $\mu_{1,\ldots,N}(\mathbf{x}) \;\overset{\mathrm{def}}{=}\; \mu_{r_1,\ldots,r_N}(\mathbf{x}) \;\overset{\mathrm{def}}{=}\; \operatorname{E}\left[\prod_{j=1}^{N} X_j^{r_j}\right]$

where r1 + r2 + ⋯ + rN = k.

The kth-order central moments are as follows

a. If k is odd, $\mu_{1,\ldots,N}(\mathbf{x}-\boldsymbol\mu) = 0$.
b. If k is even with k = 2λ, then

    $\mu_{1,\ldots,2\lambda}(\mathbf{x}-\boldsymbol\mu) = \sum \left(\sigma_{ij}\sigma_{k\ell}\cdots\sigma_{XZ}\right)$

where the sum is taken over all allocations of the set $\{1, \ldots, 2\lambda\}$ into λ (unordered) pairs. That is, for a kth (= 2λ = 6)
central moment, one sums the products of λ = 3 covariances (the expected value μ is taken to be 0 in the interests of
parsimony):

    $\operatorname{E}[X_1 X_2 X_3 X_4 X_5 X_6] = \sigma_{12}\sigma_{34}\sigma_{56} + \sigma_{12}\sigma_{35}\sigma_{46} + \sigma_{12}\sigma_{36}\sigma_{45} + \sigma_{13}\sigma_{24}\sigma_{56} + \cdots \quad (15 \text{ terms in all}).$

This yields $(2\lambda - 1)!! = \tfrac{(2\lambda)!}{2^\lambda \lambda!}$ terms in the sum (15 in the above case), each being the product of λ (in this case 3) covariances.
For fourth-order moments (four variables) there are three terms. For sixth-order moments there are 3 × 5 = 15 terms,
and for eighth-order moments there are 3 × 5 × 7 = 105 terms.

The covariances are then determined by replacing the terms of the list $[1, \ldots, 2\lambda]$ by the corresponding terms of the list
consisting of r1 ones, then r2 twos, etc. To illustrate this, examine the following 4th-order central moment case:

    $\operatorname{E}[X_i^4] = 3\sigma_{ii}^2$
    $\operatorname{E}[X_i^3 X_j] = 3\sigma_{ii}\sigma_{ij}$
    $\operatorname{E}[X_i^2 X_j^2] = \sigma_{ii}\sigma_{jj} + 2\sigma_{ij}^2$
    $\operatorname{E}[X_i^2 X_j X_k] = \sigma_{ii}\sigma_{jk} + 2\sigma_{ij}\sigma_{ik}$
    $\operatorname{E}[X_i X_j X_k X_n] = \sigma_{ij}\sigma_{kn} + \sigma_{ik}\sigma_{jn} + \sigma_{in}\sigma_{jk}$

where $\sigma_{ij}$ is the covariance of $X_i$ and $X_j$. With the above method one first finds the general case for a kth moment with k
different X variables, $\operatorname{E}[X_i X_j X_k X_n]$, and then one simplifies this accordingly. For example, for $\operatorname{E}[X_i^2 X_k X_n]$, one lets
$X_i = X_j$ and one uses the fact that $\sigma_{ii} = \sigma_i^2$.
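
Two of the fourth-order identities above can be checked by simulation. A minimal sketch (assuming NumPy; the covariance matrix is an arbitrary illustrative value):

```python
import numpy as np

Sigma = np.array([[1.0, 0.4],                  # illustrative covariance
                  [0.4, 2.0]])
rng = np.random.default_rng(4)
X = rng.multivariate_normal(np.zeros(2), Sigma, size=2_000_000)

# E[X_i^2 X_j^2] = s_ii * s_jj + 2 * s_ij^2
lhs = np.mean(X[:, 0]**2 * X[:, 1]**2)
rhs = Sigma[0, 0] * Sigma[1, 1] + 2 * Sigma[0, 1]**2
print(lhs, rhs)

# E[X_i^4] = 3 * s_ii^2
print(np.mean(X[:, 0]**4), 3 * Sigma[0, 0]**2)
```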

Likelihood function

If the mean and variance matrix are known, a suitable log likelihood function for a single observation x is

    $\ln L = -\tfrac{1}{2}\left[\ln|\boldsymbol\Sigma| + (\mathbf{x}-\boldsymbol\mu)^{\mathrm{T}}\boldsymbol\Sigma^{-1}(\mathbf{x}-\boldsymbol\mu) + k\ln(2\pi)\right]$

where x is a vector of real numbers (to derive this, simply take the log of the PDF). The circularly symmetric version of
the complex case, where z is a vector of complex numbers, would be

    $\ln L = -\ln|\boldsymbol\Sigma| - (\mathbf{z}-\boldsymbol\mu)^{\dagger}\boldsymbol\Sigma^{-1}(\mathbf{z}-\boldsymbol\mu) - k\ln\pi$

i.e. with the conjugate transpose (indicated by $\dagger$) replacing the normal transpose (indicated by $^{\mathrm{T}}$). This is slightly different
than in the real case, because the circularly symmetric version of the complex normal distribution has a slightly different
form.
A similar notation is used for multiple linear regression.[15]
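
A minimal sketch of the real log-likelihood above (assuming NumPy and SciPy; the parameters and observation are arbitrary illustrative values), compared with SciPy's logpdf:

```python
import numpy as np
from scipy.stats import multivariate_normal

mu = np.array([1.0, -1.0])                     # illustrative parameters
Sigma = np.array([[1.0, 0.2],
                  [0.2, 0.5]])
x = np.array([0.3, -0.7])                      # single observation

d = x - mu
k = len(mu)
sign, logdet = np.linalg.slogdet(Sigma)        # numerically stable log-determinant
loglik = -0.5 * (logdet + d @ np.linalg.inv(Sigma) @ d + k * np.log(2 * np.pi))

print(loglik, multivariate_normal(mu, Sigma).logpdf(x))  # identical values
```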

Differential entropy

The differential entropy of the multivariate normal distribution is[16]

    $h(f) = \tfrac{1}{2}\ln\left|2\pi e\boldsymbol\Sigma\right| = \tfrac{k}{2}\ln(2\pi e) + \tfrac{1}{2}\ln|\boldsymbol\Sigma|$

where the bars denote the matrix determinant and k is the dimensionality of the vector space.
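
A minimal sketch evaluating this formula (assuming NumPy and SciPy; the covariance is an arbitrary illustrative value), compared with SciPy's entropy method:

```python
import numpy as np
from scipy.stats import multivariate_normal

Sigma = np.array([[2.0, 0.3],                  # illustrative covariance
                  [0.3, 1.0]])
k = Sigma.shape[0]

h = 0.5 * np.linalg.slogdet(2 * np.pi * np.e * Sigma)[1]     # (1/2) ln |2*pi*e*Sigma|
print(h, multivariate_normal(np.zeros(k), Sigma).entropy())  # should agree
```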

Kullback–Leibler divergence

The Kullback–Leibler divergence from $\mathcal{N}_1(\boldsymbol\mu_1, \boldsymbol\Sigma_1)$ to $\mathcal{N}_0(\boldsymbol\mu_0, \boldsymbol\Sigma_0)$, for non-singular matrices Σ0 and Σ1, is:[17]

    $D_{\mathrm{KL}}(\mathcal{N}_0 \,\|\, \mathcal{N}_1) = \tfrac{1}{2}\left\{\operatorname{tr}\left(\boldsymbol\Sigma_1^{-1}\boldsymbol\Sigma_0\right) + (\boldsymbol\mu_1-\boldsymbol\mu_0)^{\mathrm{T}}\boldsymbol\Sigma_1^{-1}(\boldsymbol\mu_1-\boldsymbol\mu_0) - k + \ln\frac{|\boldsymbol\Sigma_1|}{|\boldsymbol\Sigma_0|}\right\}$

where $k$ is the dimension of the vector space.

The logarithm must be taken to base e since the two terms following the logarithm are themselves base-e logarithms of
expressions that are either factors of the density function or otherwise arise naturally. The equation therefore gives a result
measured in nats. Dividing the entire expression above by loge 2 yields the divergence in bits.

When $\boldsymbol\mu_1 = \boldsymbol\mu_0$,

    $D_{\mathrm{KL}}(\mathcal{N}_0 \,\|\, \mathcal{N}_1) = \tfrac{1}{2}\left\{\operatorname{tr}\left(\boldsymbol\Sigma_1^{-1}\boldsymbol\Sigma_0\right) - k + \ln\frac{|\boldsymbol\Sigma_1|}{|\boldsymbol\Sigma_0|}\right\}.$
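
The closed form above translates directly into code. A minimal sketch (assuming NumPy; the function name and test parameters are illustrative, not from the source):

```python
import numpy as np

def kl_mvn(mu0, S0, mu1, S1):
    """KL divergence D(N0 || N1) between two Gaussians, from the closed form above (in nats)."""
    k = len(mu0)
    S1_inv = np.linalg.inv(S1)
    d = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0) + d @ S1_inv @ d - k
                  + np.linalg.slogdet(S1)[1] - np.linalg.slogdet(S0)[1])

mu0, S0 = np.zeros(2), np.eye(2)
mu1, S1 = np.array([1.0, 0.0]), np.array([[2.0, 0.3], [0.3, 1.0]])
print(kl_mvn(mu0, S0, mu1, S1))
print(kl_mvn(mu0, S0, mu0, S0))                # divergence of a distribution to itself is 0
```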

Mutual information

The mutual information of a distribution is a special case of the Kullback–Leibler divergence in which $P$ is the full
multivariate distribution and $Q$ is the product of the 1-dimensional marginal distributions. In the notation of the Kullback–
Leibler divergence section of this article, $\boldsymbol\Sigma_1$ is a diagonal matrix with the diagonal entries of $\boldsymbol\Sigma_0$, and $\boldsymbol\mu_1 = \boldsymbol\mu_0$. The
resulting formula for mutual information is:

    $I(\mathbf{X}) = -\tfrac{1}{2}\ln|\boldsymbol\rho_0|$

where $\boldsymbol\rho_0$ is the correlation matrix constructed from $\boldsymbol\Sigma_0$.

In the bivariate case the expression for the mutual information is:

    $I(X; Y) = -\tfrac{1}{2}\ln(1-\rho^2).$
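
A minimal sketch checking that the general and bivariate expressions agree (assuming NumPy; the covariance is an arbitrary illustrative value):

```python
import numpy as np

Sigma = np.array([[1.0, 0.8],                  # illustrative covariance
                  [0.8, 1.0]])
rho = Sigma[0, 1] / np.sqrt(Sigma[0, 0] * Sigma[1, 1])

# bivariate case: I(X; Y) = -(1/2) ln(1 - rho^2), in nats
print(-0.5 * np.log(1 - rho**2))

# general case: -(1/2) ln det(correlation matrix)
s = np.sqrt(np.diag(Sigma))
P = Sigma / np.outer(s, s)                     # correlation matrix from Sigma
print(-0.5 * np.linalg.slogdet(P)[1])          # same value in the bivariate case
```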

Joint normality

Normally distributed and independent


If $X$ and $Y$ are normally distributed and independent, this implies they are "jointly normally distributed", i.e., the pair
$(X, Y)$ must have multivariate normal distribution. However, a pair of jointly normally distributed variables need not be
independent (they would only be so if uncorrelated, $\rho = 0$).

Two normally distributed random variables need not be jointly bivariate normal

The fact that two random variables $X$ and $Y$ both have a normal distribution does not imply that the pair $(X, Y)$ has a
joint normal distribution. A simple example is one in which X has a normal distribution with expected value 0 and
variance 1, and $Y = X$ if $|X| > c$ and $Y = -X$ if $|X| < c$, where $c > 0$. There are similar counterexamples for more
than two random variables. In general, they sum to a mixture model.

Correlations and independence

In general, random variables may be uncorrelated but statistically dependent. But if a random vector has a multivariate
normal distribution then any two or more of its components that are uncorrelated are independent. This implies that any
two or more of its components that are pairwise independent are independent. But, as pointed out just above, it is not true
that two random variables that are (separately, marginally) normally distributed and uncorrelated are independent.

Conditional distributions

If N-dimensional x is partitioned as follows

    $\mathbf{x} = \begin{bmatrix} \mathbf{x}_1 \\ \mathbf{x}_2 \end{bmatrix} \quad \text{with sizes } \begin{bmatrix} q \times 1 \\ (N-q) \times 1 \end{bmatrix}$

and accordingly μ and Σ are partitioned as follows

    $\boldsymbol\mu = \begin{bmatrix} \boldsymbol\mu_1 \\ \boldsymbol\mu_2 \end{bmatrix}, \qquad \boldsymbol\Sigma = \begin{bmatrix} \boldsymbol\Sigma_{11} & \boldsymbol\Sigma_{12} \\ \boldsymbol\Sigma_{21} & \boldsymbol\Sigma_{22} \end{bmatrix}$

then the distribution of x1 conditional on x2 = a is multivariate normal $(\mathbf{x}_1 \mid \mathbf{x}_2 = \mathbf{a}) \sim \mathcal{N}(\bar{\boldsymbol\mu}, \bar{\boldsymbol\Sigma})$ with mean

    $\bar{\boldsymbol\mu} = \boldsymbol\mu_1 + \boldsymbol\Sigma_{12}\boldsymbol\Sigma_{22}^{-1}(\mathbf{a} - \boldsymbol\mu_2)$

and covariance matrix

    $\bar{\boldsymbol\Sigma} = \boldsymbol\Sigma_{11} - \boldsymbol\Sigma_{12}\boldsymbol\Sigma_{22}^{-1}\boldsymbol\Sigma_{21}.$[18]

This matrix is the Schur complement of Σ22 in Σ. This means that to calculate the conditional covariance matrix, one
inverts the overall covariance matrix, drops the rows and columns corresponding to the variables being conditioned upon,
and then inverts back to get the conditional covariance matrix. Here $\boldsymbol\Sigma_{22}^{-1}$ is the generalized inverse of $\boldsymbol\Sigma_{22}$.

Note that knowing that x2 = a alters the variance, though the new variance does not depend on the specific value of a;
perhaps more surprisingly, the mean is shifted by $\boldsymbol\Sigma_{12}\boldsymbol\Sigma_{22}^{-1}(\mathbf{a} - \boldsymbol\mu_2)$; compare this with the situation of not knowing the
value of a, in which case x1 would have distribution $\mathcal{N}_q(\boldsymbol\mu_1, \boldsymbol\Sigma_{11})$.

An interesting fact derived in order to prove this result is that the random vectors $\mathbf{x}_2$ and $\mathbf{y}_1 = \mathbf{x}_1 - \boldsymbol\Sigma_{12}\boldsymbol\Sigma_{22}^{-1}\mathbf{x}_2$ are
independent.

The matrix $\boldsymbol\Sigma_{12}\boldsymbol\Sigma_{22}^{-1}$ is known as the matrix of regression coefficients.
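
The conditional mean and the Schur complement translate directly into code. A minimal sketch (assuming NumPy; the parameters, index partition, and observed value are arbitrary illustrative values):

```python
import numpy as np

mu = np.array([1.0, 2.0, 3.0])                 # illustrative parameters
Sigma = np.array([[2.0, 0.5, 0.3],
                  [0.5, 1.0, 0.2],
                  [0.3, 0.2, 1.5]])

idx1 = [0]                                     # variable(s) x1 whose distribution we want
idx2 = [1, 2]                                  # conditioning variables x2
a = np.array([1.5, 2.5])                       # observed value x2 = a

S11 = Sigma[np.ix_(idx1, idx1)]
S12 = Sigma[np.ix_(idx1, idx2)]
S22 = Sigma[np.ix_(idx2, idx2)]

reg = S12 @ np.linalg.inv(S22)                 # matrix of regression coefficients
mu_cond = mu[idx1] + reg @ (a - mu[idx2])      # conditional mean
Sigma_cond = S11 - reg @ S12.T                 # Schur complement of S22 in Sigma
print(mu_cond, Sigma_cond)
```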


Bivariate case

In the bivariate case where x is partitioned into $X_1$ and $X_2$, the conditional distribution of $X_1$ given $X_2$ is[19]

    $X_1 \mid X_2 = x_2 \;\sim\; \mathcal{N}\left(\mu_1 + \frac{\sigma_1}{\sigma_2}\rho(x_2 - \mu_2),\; (1-\rho^2)\sigma_1^2\right)$

where $\rho$ is the correlation coefficient between $X_1$ and $X_2$.

Bivariate conditional expectation

In the general case

The conditional expectation of X1 given X2 is:

    $\operatorname{E}(X_1 \mid X_2 = x_2) = \mu_1 + \rho\frac{\sigma_1}{\sigma_2}(x_2 - \mu_2)$

Proof: the result is obtained by taking the expectation of the conditional distribution above.

In the centered case with unit variances

The conditional expectation of X1 given X2 is

    $\operatorname{E}(X_1 \mid X_2 = x_2) = \rho x_2$

and the conditional variance is

    $\operatorname{var}(X_1 \mid X_2 = x_2) = 1 - \rho^2;$

thus the conditional variance does not depend on x2.

The conditional expectation of X1 given that X2 is smaller/bigger than z is (Maddala 1983, p. 367[20]):

    $\operatorname{E}(X_1 \mid X_2 < z) = -\rho\,\frac{\varphi(z)}{\Phi(z)},$

    $\operatorname{E}(X_1 \mid X_2 > z) = \rho\,\frac{\varphi(z)}{1-\Phi(z)},$

where the final ratio here is called the inverse Mills ratio.

Proof: the last two results are obtained using the result $X_1 = \rho X_2 + \sqrt{1-\rho^2}\,W$, where $W$ is standard normal and
independent of $X_2$, so that

    $\operatorname{E}(X_1 \mid X_2 > z) = \rho\operatorname{E}(X_2 \mid X_2 > z),$

and then using the properties of the expectation of a truncated normal distribution.
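
The inverse-Mills-ratio expression can be checked by simulation using the same decomposition as in the proof. A minimal sketch (assuming NumPy and SciPy; the values of ρ and z are arbitrary):

```python
import numpy as np
from scipy.stats import norm

rho, z = 0.6, 0.5                              # illustrative values

# closed form: E(X1 | X2 > z) = rho * phi(z) / (1 - Phi(z))
closed_form = rho * norm.pdf(z) / (1 - norm.cdf(z))

# Monte Carlo check using X1 = rho*X2 + sqrt(1-rho^2)*W
rng = np.random.default_rng(5)
x2 = rng.standard_normal(2_000_000)
w = rng.standard_normal(2_000_000)
x1 = rho * x2 + np.sqrt(1 - rho**2) * w
print(closed_form, x1[x2 > z].mean())          # the two values should be close
```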

Marginal distributions
To obtain the marginal distribution over a subset of multivariate normal random variables, one only needs to drop the
irrelevant variables (the variables that one wants to marginalize out) from the mean vector and the covariance matrix. The
proof for this follows from the definitions of multivariate normal distributions and linear algebra.[21]

Example

Let X = [X1, X2, X3] be multivariate normal random variables with mean vector μ = [μ1, μ2, μ3] and covariance matrix Σ
(standard parametrization for multivariate normal distributions). Then the joint distribution of X′ = [X1, X3] is multivariate
normal with mean vector μ′ = [μ1, μ3] and covariance matrix
$\boldsymbol\Sigma' = \begin{bmatrix} \Sigma_{11} & \Sigma_{13} \\ \Sigma_{31} & \Sigma_{33} \end{bmatrix}$.
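
In code, marginalization is plain index slicing of the mean vector and covariance matrix. A minimal sketch of the example above (assuming NumPy; the numerical values are arbitrary):

```python
import numpy as np

mu = np.array([1.0, 2.0, 3.0])                 # illustrative parameters
Sigma = np.array([[2.0, 0.5, 0.3],
                  [0.5, 1.0, 0.2],
                  [0.3, 0.2, 1.5]])

keep = [0, 2]                                  # marginalize out X2, keep X1 and X3
mu_marg = mu[keep]
Sigma_marg = Sigma[np.ix_(keep, keep)]         # drop the irrelevant rows and columns
print(mu_marg)
print(Sigma_marg)
```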

Affine transformation

If Y = c + BX is an affine transformation of $\mathbf{X} \sim \mathcal{N}(\boldsymbol\mu, \boldsymbol\Sigma)$, where c is an $M \times 1$ vector of constants and B is a
constant $M \times N$ matrix, then Y has a multivariate normal distribution with expected value c + Bμ and variance BΣBT,
i.e., $\mathbf{Y} \sim \mathcal{N}(\mathbf{c} + \mathbf{B}\boldsymbol\mu, \mathbf{B}\boldsymbol\Sigma\mathbf{B}^{\mathrm{T}})$. In particular, any subset of the Xi has a marginal distribution that is also multivariate
normal. To see this, consider the following example: to extract the subset (X1, X2, X4)T, use

    $\mathbf{B} = \begin{bmatrix} 1 & 0 & 0 & 0 & \cdots & 0 \\ 0 & 1 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 0 & 1 & \cdots & 0 \end{bmatrix}$

which extracts the desired elements directly.

Another corollary is that the distribution of Z = b · X, where b is a constant vector with the same number of elements as
X and the dot indicates the dot product, is univariate Gaussian with $Z \sim \mathcal{N}(\mathbf{b}\cdot\boldsymbol\mu,\; \mathbf{b}^{\mathrm{T}}\boldsymbol\Sigma\mathbf{b})$. This result follows by
using

    $\mathbf{B} = \begin{bmatrix} b_1 & b_2 & \cdots & b_n \end{bmatrix} = \mathbf{b}^{\mathrm{T}}.$

Observe how the positive-definiteness of Σ implies that the variance of the dot product must be positive.

An affine transformation of X such as 2X is not the same as the sum of two independent realisations of X.
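
The affine transformation property can be verified against sample moments. A minimal sketch (assuming NumPy; the parameters, c, and B are arbitrary illustrative values):

```python
import numpy as np

mu = np.array([0.0, 1.0, -1.0])                # illustrative parameters
Sigma = np.array([[1.0, 0.2, 0.1],
                  [0.2, 2.0, 0.3],
                  [0.1, 0.3, 1.5]])
c = np.array([5.0, -5.0])
B = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, -1.0]])

# Y = c + B X should be N(c + B mu, B Sigma B^T)
rng = np.random.default_rng(6)
X = rng.multivariate_normal(mu, Sigma, size=500000)
Y = c + X @ B.T
print(Y.mean(axis=0), c + B @ mu)              # sample mean vs c + B mu
print(np.cov(Y, rowvar=False))                 # sample covariance ...
print(B @ Sigma @ B.T)                         # ... vs B Sigma B^T
```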

Geometric interpretation

The equidensity contours of a non-singular multivariate normal distribution are ellipsoids (i.e. linear transformations of
hyperspheres) centered at the mean.[22] Hence the multivariate normal distribution is an example of the class of elliptical
distributions. The directions of the principal axes of the ellipsoids are given by the eigenvectors of the covariance matrix
. The squared relative lengths of the principal axes are given by the corresponding eigenvalues.

If Σ = UΛUT = UΛ1/2(UΛ1/2)T is an eigendecomposition where the columns of U are unit eigenvectors and Λ is a
diagonal matrix of the eigenvalues, then we have

    $\mathbf{X} \sim \mathcal{N}(\boldsymbol\mu, \boldsymbol\Sigma) \iff \mathbf{X} \sim \boldsymbol\mu + \mathbf{U}\boldsymbol\Lambda^{1/2}\mathcal{N}(0, \mathbf{I}) \iff \mathbf{X} \sim \boldsymbol\mu + \mathbf{U}\mathcal{N}(0, \boldsymbol\Lambda).$

Moreover, U can be chosen to be a rotation matrix, as inverting an axis does not have any effect on N(0, Λ), but inverting
a column changes the sign of U's determinant. The distribution N(μ, Σ) is in effect N(0, I) scaled by Λ1/2, rotated by U
and translated by μ.

Conversely, any choice of μ, full rank matrix U, and positive diagonal entries Λi yields a non-singular multivariate
normal distribution. If any Λi is zero and U is square, the resulting covariance matrix UΛUT is singular. Geometrically
this means that every contour ellipsoid is infinitely thin and has zero volume in n-dimensional space, as at least one of the
principal axes has length of zero; this is the degenerate case.
"The radius around the true mean in a bivariate normal random variable, re-written in polar coordinates (radius and
angle), follows a Hoyt distribution."[23]

In one dimension the probability of finding a sample of the normal distribution in the interval $\mu \pm \sigma$ is approximately
68.27%; in higher dimensions the probability of finding a sample in the region of the standard deviation ellipse is lower.[24]

Dimensionality Probability
1 0.6827
2 0.3935
3 0.1987
4 0.0902
5 0.0374
6 0.0144
7 0.0052
8 0.0018
9 0.0006
10 0.0002
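
The entries of this table follow from the fact that the squared Mahalanobis distance of a k-variate normal vector is chi-squared distributed with k degrees of freedom, so the mass inside the 1-sigma ellipsoid is $\mathbb{P}(\chi^2_k \le 1)$. A minimal check with SciPy:

```python
from scipy.stats import chi2

# probability mass inside the 1-sigma ellipsoid: P(chi-squared with k dof <= 1)
for k in range(1, 11):
    print(k, round(chi2.cdf(1.0, df=k), 4))    # reproduces the table above
```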

Statistical Inference

Parameter estimation

The derivation of the maximum-likelihood estimator of the covariance matrix of a multivariate normal distribution is
straightforward.

In short, the probability density function (pdf) of a multivariate normal is

    $f(\mathbf{x}) = (2\pi)^{-k/2}\det(\boldsymbol\Sigma)^{-1/2}\exp\left(-\tfrac{1}{2}(\mathbf{x}-\boldsymbol\mu)^{\mathrm{T}}\boldsymbol\Sigma^{-1}(\mathbf{x}-\boldsymbol\mu)\right)$

and the ML estimator of the covariance matrix from a sample of n observations is

    $\widehat{\boldsymbol\Sigma} = \frac{1}{n}\sum_{i=1}^{n}(\mathbf{x}_i - \bar{\mathbf{x}})(\mathbf{x}_i - \bar{\mathbf{x}})^{\mathrm{T}}$

which is simply the sample covariance matrix. This is a biased estimator whose expectation is

    $\operatorname{E}\left[\widehat{\boldsymbol\Sigma}\right] = \frac{n-1}{n}\boldsymbol\Sigma.$

An unbiased sample covariance is

    $\widehat{\boldsymbol\Sigma} = \frac{1}{n-1}\sum_{i=1}^{n}(\mathbf{x}_i - \bar{\mathbf{x}})(\mathbf{x}_i - \bar{\mathbf{x}})^{\mathrm{T}} = \frac{1}{n-1}\mathbf{X}^{\mathrm{T}}\left(\mathbf{I} - \tfrac{1}{n}\mathbf{J}\right)\mathbf{X}$

(matrix form; $\mathbf{X}$ is the $n \times k$ data matrix, $\mathbf{I}$ is the $n \times n$ identity matrix, $\mathbf{J}$ is the $n \times n$ matrix of ones; the term in
parentheses is the centering matrix).

The Fisher information matrix for estimating the parameters of a multivariate normal distribution has a closed form
expression. This can be used, for example, to compute the Cramér–Rao bound for parameter estimation in this setting.
See Fisher information for more details.
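
The two covariance estimators can be compared numerically. A minimal sketch (assuming NumPy; the parameters and sample size are arbitrary illustrative values):

```python
import numpy as np

rng = np.random.default_rng(7)
mu = np.array([0.0, 0.0])                      # illustrative parameters
Sigma = np.array([[1.0, 0.4],
                  [0.4, 2.0]])
X = rng.multivariate_normal(mu, Sigma, size=10)  # n = 10 observations
n = X.shape[0]

xbar = X.mean(axis=0)
D = X - xbar
S_ml = D.T @ D / n                             # ML estimator (biased, expectation (n-1)/n * Sigma)
S_unb = D.T @ D / (n - 1)                      # unbiased sample covariance

print(S_ml)
print(S_unb)
print(np.cov(X, rowvar=False))                 # NumPy's default matches the unbiased estimator
```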
Bayesian inference

In Bayesian statistics, the conjugate prior of the mean vector is another multivariate normal distribution, and the conjugate
prior of the covariance matrix is an inverse-Wishart distribution $\mathcal{W}^{-1}$. Suppose then that n observations have been made

    $\mathbf{X} = \{\mathbf{x}_1, \ldots, \mathbf{x}_n\} \sim \mathcal{N}(\boldsymbol\mu, \boldsymbol\Sigma)$

and that a conjugate prior has been assigned, where

    $p(\boldsymbol\mu, \boldsymbol\Sigma) = p(\boldsymbol\mu \mid \boldsymbol\Sigma)\, p(\boldsymbol\Sigma),$

where

    $p(\boldsymbol\mu \mid \boldsymbol\Sigma) \sim \mathcal{N}(\boldsymbol\mu_0, m^{-1}\boldsymbol\Sigma)$

and

    $p(\boldsymbol\Sigma) \sim \mathcal{W}^{-1}(\boldsymbol\Psi, n_0).$

Then,

    $p(\boldsymbol\mu \mid \boldsymbol\Sigma, \mathbf{X}) \sim \mathcal{N}\left(\frac{n\bar{\mathbf{x}} + m\boldsymbol\mu_0}{n+m},\; \frac{1}{n+m}\boldsymbol\Sigma\right),$
    $p(\boldsymbol\Sigma \mid \mathbf{X}) \sim \mathcal{W}^{-1}\left(\boldsymbol\Psi + n\mathbf{S} + \frac{nm}{n+m}(\bar{\mathbf{x}} - \boldsymbol\mu_0)(\bar{\mathbf{x}} - \boldsymbol\mu_0)^{\mathrm{T}},\; n + n_0\right),$

where

    $\bar{\mathbf{x}} = \frac{1}{n}\sum_{i=1}^{n}\mathbf{x}_i, \qquad \mathbf{S} = \frac{1}{n}\sum_{i=1}^{n}(\mathbf{x}_i - \bar{\mathbf{x}})(\mathbf{x}_i - \bar{\mathbf{x}})^{\mathrm{T}}.$
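
A sketch of the resulting posterior update, assuming the normal-inverse-Wishart parametrization above (the function name and the test values are illustrative, not from the source):

```python
import numpy as np

def niw_posterior(X, mu0, m, Psi, n0):
    """Posterior hyperparameters of the normal-inverse-Wishart conjugate prior,
    following the update rules above (a sketch; notation as in the text)."""
    n, _ = X.shape
    xbar = X.mean(axis=0)
    S = (X - xbar).T @ (X - xbar) / n              # ML covariance of the data
    mu_post = (n * xbar + m * mu0) / (n + m)       # posterior location of the mean
    m_post = n + m                                 # posterior pseudo-count for the mean
    d = (xbar - mu0).reshape(-1, 1)
    Psi_post = Psi + n * S + (n * m / (n + m)) * (d @ d.T)
    n0_post = n + n0                               # posterior degrees of freedom
    return mu_post, m_post, Psi_post, n0_post

rng = np.random.default_rng(10)
X = rng.multivariate_normal([0.0, 0.0], np.eye(2), size=50)   # toy data
print(niw_posterior(X, mu0=np.zeros(2), m=1.0, Psi=np.eye(2), n0=4))
```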

Multivariate normality tests

Multivariate normality tests check a given set of data for similarity to the multivariate normal distribution. The null
hypothesis is that the data set is similar to the normal distribution, therefore a sufficiently small p-value indicates non-
normal data. Multivariate normality tests include the Cox–Small test[25] and Smith and Jain's adaptation[26] of the
Friedman–Rafsky test created by Larry Rafsky and Jerome Friedman.[27]

Mardia's test[28] is based on multivariate extensions of skewness and kurtosis measures. For a sample {x1, ..., xn} of k-
dimensional vectors we compute

    $\widehat{\boldsymbol\Sigma} = \frac{1}{n}\sum_{j=1}^{n}(\mathbf{x}_j - \bar{\mathbf{x}})(\mathbf{x}_j - \bar{\mathbf{x}})^{\mathrm{T}}$

    $A = \frac{1}{6n}\sum_{i=1}^{n}\sum_{j=1}^{n}\left[(\mathbf{x}_i - \bar{\mathbf{x}})^{\mathrm{T}}\,\widehat{\boldsymbol\Sigma}^{-1}(\mathbf{x}_j - \bar{\mathbf{x}})\right]^3$

    $B = \sqrt{\frac{n}{8k(k+2)}}\left\{\frac{1}{n}\sum_{i=1}^{n}\left[(\mathbf{x}_i - \bar{\mathbf{x}})^{\mathrm{T}}\,\widehat{\boldsymbol\Sigma}^{-1}(\mathbf{x}_i - \bar{\mathbf{x}})\right]^2 - k(k+2)\right\}$

Under the null hypothesis of multivariate normality, the statistic A will have approximately a chi-squared distribution with
$\tfrac{1}{6}k(k+1)(k+2)$ degrees of freedom, and B will be approximately standard normal N(0,1).

Mardia's kurtosis statistic is skewed and converges very slowly to the limiting normal distribution. For medium size
samples, the parameters of the asymptotic distribution of the kurtosis statistic are modified.[29] For small
sample tests empirical critical values are used. Tables of critical values for both statistics are given by
Rencher[30] for k = 2, 3, 4.

Mardia's tests are affine invariant but not consistent. For example, the multivariate skewness test is not consistent against
symmetric non-normal alternatives.[31]
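
Mardia's statistics are straightforward to compute from the formulas above. A minimal sketch (assuming NumPy; the helper name and the test data are illustrative). Under the null, A should be roughly chi-squared with k(k+1)(k+2)/6 degrees of freedom and B roughly standard normal:

```python
import numpy as np

def mardia_statistics(X):
    """Mardia's skewness statistic A and kurtosis statistic B, a sketch
    following the formulas above."""
    n, k = X.shape
    xbar = X.mean(axis=0)
    D = X - xbar
    S = D.T @ D / n                            # ML covariance estimate
    S_inv = np.linalg.inv(S)
    G = D @ S_inv @ D.T                        # G[i, j] = (x_i - xbar)^T S^-1 (x_j - xbar)
    A = (G**3).sum() / (6 * n)                 # skewness statistic
    b2 = np.mean(np.diag(G)**2)                # multivariate kurtosis
    B = np.sqrt(n / (8 * k * (k + 2))) * (b2 - k * (k + 2))  # kurtosis statistic
    return A, B

rng = np.random.default_rng(8)
X = rng.multivariate_normal(np.zeros(3), np.eye(3), size=500)  # normal toy data
print(mardia_statistics(X))
```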

The BHEP test[32] computes the norm of the difference between the empirical characteristic function and the theoretical
characteristic function of the normal distribution. Calculation of the norm is performed in the $L^2(\mu)$ space of square-
integrable functions with respect to the Gaussian weighting function $\mu_\beta(\mathbf{t}) = (2\pi\beta^2)^{-k/2}\,e^{-|\mathbf{t}|^2/(2\beta^2)}$. The test statistic is

    $T_\beta = \int_{\mathbb{R}^k}\left|\frac{1}{n}\sum_{j=1}^{n} e^{i\mathbf{t}^{\mathrm{T}}\widehat{\boldsymbol\Sigma}^{-1/2}(\mathbf{x}_j - \bar{\mathbf{x}})} - e^{-|\mathbf{t}|^2/2}\right|^2 \mu_\beta(\mathbf{t})\,d\mathbf{t}$

The limiting distribution of this test statistic is a weighted sum of chi-squared random variables;[32] however, in practice it
is more convenient to compute the sample quantiles using Monte-Carlo simulations.

A detailed survey of these and other test procedures is available.[33]

Computational methods

Drawing values from the distribution

A widely used method for drawing (sampling) a random vector x from the N-dimensional multivariate normal distribution
with mean vector μ and covariance matrix Σ works as follows:[34]

1. Find any real matrix A such that A AT = Σ. When Σ is positive-definite, the Cholesky decomposition is
typically used, and the extended form of this decomposition can always be used (as the covariance
matrix may be only positive semi-definite); in both cases a suitable matrix A is obtained. An alternative is
to use the matrix A = UΛ½ obtained from a spectral decomposition Σ = UΛU−1 of Σ. The former approach
is more computationally straightforward but the matrices A change for different orderings of the elements
of the random vector, while the latter approach gives matrices that are related by simple re-orderings. In
theory both approaches give equally good ways of determining a suitable matrix A, but there are
differences in computation time.
2. Let z = (z1, …, zN)T be a vector whose components are N independent standard normal variates (which
can be generated, for example, by using the Box–Muller transform).
3. Let x be μ + Az. This has the desired distribution due to the affine transformation property; a minimal
code sketch follows this list.
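
A minimal NumPy sketch of the three steps, using the Cholesky factorization for step 1 (the parameters are arbitrary illustrative values):

```python
import numpy as np

def sample_mvn(mu, Sigma, size, rng):
    """Draw `size` samples from N(mu, Sigma) via the three steps above."""
    # Step 1: find A with A A^T = Sigma (Cholesky, valid for positive-definite Sigma)
    A = np.linalg.cholesky(Sigma)
    # Step 2: vectors of independent standard normal variates
    z = rng.standard_normal((size, len(mu)))
    # Step 3: x = mu + A z has the desired distribution
    return mu + z @ A.T

rng = np.random.default_rng(9)
mu = np.array([1.0, -2.0])                     # illustrative parameters
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
X = sample_mvn(mu, Sigma, 100000, rng)
print(X.mean(axis=0))                          # approximately mu
print(np.cov(X, rowvar=False))                 # approximately Sigma
```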

See also
Chi distribution, the pdf of the 2-norm (or Euclidean norm) of a multivariate normally distributed vector
(centered at zero).
Complex normal distribution, an application of bivariate normal distribution
Copula, for the definition of the Gaussian or normal copula model.
Multivariate t-distribution, which is another widely used spherically symmetric multivariate distribution.
Multivariate stable distribution, an extension of the multivariate normal distribution where the index (exponent
in the characteristic function) is between zero and two.
Mahalanobis distance
Wishart distribution
Matrix normal distribution

References
1. Lapidoth, Amos (2009). A Foundation in Digital Communication. Cambridge University Press. ISBN 978-
0-521-19395-5.
2. Gut, Allan (2009). An Intermediate Course in Probability. Springer. ISBN 978-1-441-90161-3.
3. Kac, M. (1939). "On a characterization of the normal distribution". American Journal of Mathematics. 61
(3): 726–728. doi:10.2307/2371328 (https://doi.org/10.2307%2F2371328). JSTOR 2371328 (https://www.
jstor.org/stable/2371328).
4. Sinz, Fabian; Gerwinn, Sebastian; Bethge, Matthias (2009). "Characterization of the p-generalized
normal distribution". Journal of Multivariate Analysis. 100 (5): 817–820. doi:10.1016/j.jmva.2008.07.006
(https://doi.org/10.1016%2Fj.jmva.2008.07.006).
5. UIUC, Lecture 21. The Multivariate Normal Distribution (http://www.math.uiuc.edu/~r-ash/Stat/StatLec21-
25.pdf), 21.5:"Finding the Density".
6. Hamedani, G. G.; Tata, M. N. (1975). "On the determination of the bivariate normal distribution from
distributions of linear combinations of the variables". The American Mathematical Monthly. 82 (9): 913–
915. doi:10.2307/2318494 (https://doi.org/10.2307%2F2318494). JSTOR 2318494 (https://www.jstor.org/
stable/2318494).
7. Wyatt, John. "Linear least mean-squared error estimation" (http://web.mit.edu/6.041/www/LECTURE/lec2
2.pdf) (PDF). Lecture notes course on applied probability. Retrieved 23 January 2012.
8. Rao, C.R. (1973). Linear Statistical Inference and Its Applications. New York: Wiley. pp. 527–528.
9. Botev, Z. I. (2016). "The normal law under linear restrictions: simulation and estimation via minimax
tilting". Journal of the Royal Statistical Society, Series B. 79: 125–148. arXiv:1603.04166 (https://arxiv.or
g/abs/1603.04166). Bibcode:2016arXiv160304166B (https://ui.adsabs.harvard.edu/abs/2016arXiv16030
4166B). doi:10.1111/rssb.12162 (https://doi.org/10.1111%2Frssb.12162).
10. Genz, Alan (2009). Computation of Multivariate Normal and t Probabilities (https://www.springer.com/stati
stics/computational+statistics/book/978-3-642-01688-2). Springer. ISBN 978-3-642-01689-9.
11. Bensimhoun Michael, N-Dimensional Cumulative Function, And Other Useful Facts About Gaussians
and Normal Densities (2006) (https://upload.wikimedia.org/wikipedia/commons/a/a2/Cumulative_functio
n_n_dimensional_Gaussians_12.2013.pdf)
12. Siotani, Minoru (1964). "Tolerance regions for a multivariate normal population" (http://www.ism.ac.jp/edit
sec/aism/pdf/016_1_0135.pdf) (PDF). Annals of the Institute of Statistical Mathematics. 16 (1): 135–153.
doi:10.1007/BF02868568 (https://doi.org/10.1007%2FBF02868568).
13. Botev, Z. I.; Mandjes, M.; Ridder, A. (6–9 December 2015). "Tail distribution of the maximum of correlated
Gaussian random variables". 2015 Winter Simulation Conference (WSC). Huntington Beach, Calif.,
USA: IEEE. pp. 633–642. doi:10.1109/WSC.2015.7408202 (https://doi.org/10.1109%2FWSC.2015.7408
202). ISBN 978-1-4673-9743-8.
14. Adler, R. J.; Blanchet, J.; Liu, J. (7–10 Dec 2008). "Efficient simulation for tail probabilities of Gaussian
random fields". 2008 Winter Simulation Conference (WSC). Miami, Fla., USA: IEEE. pp. 328–336.
doi:10.1109/WSC.2008.473608 (https://doi.org/10.1109%2FWSC.2008.473608). ISBN 978-1-4244-
2707-9.
15. Tong, T. (2010) Multiple Linear Regression : MLE and Its Distributional Results (http://amath.colorado.ed
u/courses/7400/2010Spr/lecture9.pdf) Archived (https://www.webcitation.org/6HPbX5thy?url=http://amat
h.colorado.edu/courses/7400/2010Spr/lecture9.pdf) 2013-06-16 at WebCite, Lecture Notes
16. Gokhale, DV; Ahmed, NA; Res, BC; Piscataway, NJ (May 1989). "Entropy Expressions and Their
Estimators for Multivariate Distributions". IEEE Transactions on Information Theory. 35 (3): 688–692.
doi:10.1109/18.30996 (https://doi.org/10.1109%2F18.30996).
17. J. Duchi, Derivations for Linear Algebra and Optimization [1] (http://stanford.edu/~jduchi/projects/general
_notes.pdf). pp. 13
18. Eaton, Morris L. (1983). Multivariate Statistics: a Vector Space Approach. John Wiley and Sons. pp. 116–
117. ISBN 978-0-471-02776-8.
19. Jensen, J (2000). Statistics for Petroleum Engineers and Geoscientists. Amsterdam: Elsevier. p. 207.
20. Gangadharrao, Maddala (1983). Limited Dependent and Qualitative Variables in Econometrics.
Cambridge University Press.
21. The formal proof for marginal distribution is shown here
http://fourier.eng.hmc.edu/e161/lectures/gaussianprocess/node7.html
22. Nikolaus Hansen (2016). "The CMA Evolution Strategy: A Tutorial" (https://web.archive.org/web/201003
31114258/http://www.lri.fr/~hansen/cmatutorial.pdf) (PDF). arXiv:1604.00772 (https://arxiv.org/abs/1604.0
0772). Bibcode:2016arXiv160400772H (https://ui.adsabs.harvard.edu/abs/2016arXiv160400772H).
Archived from the original (http://www.lri.fr/~hansen/cmatutorial.pdf) (PDF) on 2010-03-31. Retrieved
2012-01-07.
23. Daniel Wollschlaeger. "The Hoyt Distribution (Documentation for R package 'shotGroups' version 0.6.2)"
(http://finzi.psych.upenn.edu/usr/share/doc/library/shotGroups/html/hoyt.html).
24. Wang, Bin; Shi, Wenzhong; Miao, Zelang (2015-03-13). Rocchini, Duccio (ed.). "Confidence Analysis of
Standard Deviational Ellipse and Its Extension into Higher Dimensional Euclidean Space" (https://dx.plo
s.org/10.1371/journal.pone.0118537). PLOS ONE. 10 (3): e0118537. doi:10.1371/journal.pone.0118537
(https://doi.org/10.1371%2Fjournal.pone.0118537). ISSN 1932-6203 (https://www.worldcat.org/issn/1932
-6203). PMC 4358977 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4358977). PMID 25769048 (http
s://pubmed.ncbi.nlm.nih.gov/25769048).
25. Cox, D. R.; Small, N. J. H. (1978). "Testing multivariate normality". Biometrika. 65 (2): 263.
doi:10.1093/biomet/65.2.263 (https://doi.org/10.1093%2Fbiomet%2F65.2.263).
26. Smith, S. P.; Jain, A. K. (1988). "A test to determine the multivariate normality of a data set". IEEE
Transactions on Pattern Analysis and Machine Intelligence. 10 (5): 757. doi:10.1109/34.6789 (https://doi.
org/10.1109%2F34.6789).
27. Friedman, J. H.; Rafsky, L. C. (1979). "Multivariate Generalizations of the Wald–Wolfowitz and Smirnov
Two-Sample Tests". The Annals of Statistics. 7 (4): 697. doi:10.1214/aos/1176344722 (https://doi.org/10.
1214%2Faos%2F1176344722).
28. Mardia, K. V. (1970). "Measures of multivariate skewness and kurtosis with applications". Biometrika. 57
(3): 519–530. doi:10.1093/biomet/57.3.519 (https://doi.org/10.1093%2Fbiomet%2F57.3.519).
29. Rencher (1995), pages 112–113.
30. Rencher (1995), pages 493–495.
31. Baringhaus, L.; Henze, N. (1991). "Limit distributions for measures of multivariate skewness and kurtosis
based on projections". Journal of Multivariate Analysis. 38: 51–69. doi:10.1016/0047-259X(91)90031-V
(https://doi.org/10.1016%2F0047-259X%2891%2990031-V).
32. Baringhaus, L.; Henze, N. (1988). "A consistent test for multivariate normality based on the empirical
characteristic function". Metrika. 35 (1): 339–348. doi:10.1007/BF02613322 (https://doi.org/10.1007%2FB
F02613322).
33. Henze, Norbert (2002). "Invariant tests for multivariate normality: a critical review". Statistical Papers. 43
(4): 467–506. doi:10.1007/s00362-002-0119-6 (https://doi.org/10.1007%2Fs00362-002-0119-6).
34. Gentle, J.E. (2009). Computational Statistics (http://cds.cern.ch/record/1639470). Statistics and
Computing. New York: Springer. pp. 315–316. doi:10.1007/978-0-387-98144-4 (https://doi.org/10.1007%
2F978-0-387-98144-4). ISBN 978-0-387-98143-7.

Literature
Rencher, A.C. (1995). Methods of Multivariate Analysis. New York: Wiley.
Tong, Y. L. (1990). The multivariate normal distribution. Springer Series in Statistics. New York: Springer-
Verlag. doi:10.1007/978-1-4613-9655-0 (https://doi.org/10.1007%2F978-1-4613-9655-0). ISBN 978-1-
4613-9657-4.
