
Probability distribution (https://en.wikipedia.org/wiki/Probability_distribution)

In probability and statistics, a probability distribution assigns a probability to each measurable subset of
the possible outcomes of a random experiment, survey, or procedure of statistical inference. Examples are
found in experiments whose sample space is non-numerical, where the distribution would be a categorical
distribution; experiments whose sample space is encoded by discrete random variables, where the
distribution can be specified by a probability mass function; and experiments with sample spaces encoded
by continuous random variables, where the distribution can be specified by a probability density function.
More complex experiments, such as those involving stochastic processes defined in continuous time, may
demand the use of more general probability measures.
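
As a concrete illustration of the three cases just described, the following sketch shows a categorical distribution specified by listing a probability per outcome, a discrete distribution specified by its probability mass function, and a continuous one specified by its probability density function. It assumes NumPy and SciPy are available; the outcomes and parameter values are made up for illustration.

```python
# A minimal sketch of three ways a distribution can be specified,
# assuming NumPy and SciPy are installed; values are illustrative only.
import numpy as np
from scipy import stats

# Categorical: non-numerical sample space, one probability per outcome.
outcomes = ["red", "green", "blue"]
probs = [0.5, 0.3, 0.2]                          # must sum to 1
rng = np.random.default_rng(0)
print(rng.choice(outcomes, size=5, p=probs))

# Discrete random variable: distribution given by a probability mass function.
print(stats.binom.pmf(3, n=10, p=0.5))           # P(X = 3) for X ~ Binomial(10, 0.5)

# Continuous random variable: distribution given by a probability density function.
print(stats.norm.pdf(0.0, loc=0.0, scale=1.0))   # a density value, not a probability
```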

A probability distribution can either be univariate or multivariate. A univariate distribution gives the
probabilities of a single random variable taking on various alternative values; a multivariate distribution (a
joint probability distribution) gives the probabilities of a random vector—a set of two or more random
variables—taking on various combinations of values. Important and commonly encountered univariate
probability distributions include the binomial distribution, the hypergeometric distribution, and the normal
distribution. The multivariate normal distribution is a commonly encountered multivariate distribution.
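
To make the univariate/multivariate contrast concrete, the sketch below draws from a univariate normal and from a bivariate (multivariate) normal, where each draw from the latter is a random vector of two correlated values. It assumes NumPy; the mean vector and covariance matrix are invented for illustration.

```python
# Illustrative sketch, assuming NumPy: univariate vs. multivariate draws.
import numpy as np

rng = np.random.default_rng(1)

# Univariate: each draw is a single number.
heights = rng.normal(loc=170.0, scale=10.0, size=3)

# Multivariate: each draw is a random vector (here, two correlated values).
mean = [170.0, 70.0]                 # e.g. (height, weight); made-up values
cov = [[100.0, 40.0],
       [40.0, 25.0]]                 # covariance matrix: symmetric, positive semidefinite
pairs = rng.multivariate_normal(mean, cov, size=3)
print(heights.shape, pairs.shape)    # (3,) vs. (3, 2)
```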

Random Variable
In probability and statistics, a random variable, aleatory variable or stochastic variable is
a variable whose value is subject to variations due to chance (i.e. randomness, in a mathematical
sense).[1]:391 A random variable can take on a set of possible different values, similarly to other
mathematical variables; unlike an ordinary mathematical variable, however, each possible value has an
associated probability.
A random variable's possible values might represent the possible outcomes of a yet-to-be-performed
experiment, or the possible outcomes of a past experiment whose already-existing value is uncertain (for
example, due to imprecise measurements or quantum uncertainty). They may also conceptually represent
either the results of an "objectively" random process (such as rolling a die) or the "subjective" randomness
that results from incomplete knowledge of a quantity. The meaning of the probabilities assigned to the
potential values of a random variable is not part of probability theory itself but is instead related to
philosophical arguments over the interpretation of probability. The mathematics works the same regardless
of the particular interpretation in use.
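
As a small illustration of the die-rolling case mentioned above, the sketch below (assuming NumPy; purely illustrative) treats the sum of two fair dice as a random variable and checks that its empirical frequencies approximate the assigned probabilities.

```python
# Illustrative sketch, assuming NumPy: the random variable S = sum of two
# fair dice.  Its empirical frequency approximates the assigned probability,
# e.g. P(S = 7) = 6/36 ≈ 0.167.
import numpy as np

rng = np.random.default_rng(2)
rolls = rng.integers(1, 7, size=(100_000, 2))   # outcomes of the experiment
s = rolls.sum(axis=1)                           # values of the random variable
print((s == 7).mean())                          # ≈ 0.167
```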
Application: The concepts of probability distributions and of the random variables they describe underlie
the mathematical discipline of probability theory and the science of statistics. There is spread or variability in
almost any value that can be measured in a population (e.g. height of people, durability of a metal, sales growth,
traffic flow, etc.); almost all measurements are made with some intrinsic error; in physics many processes are
described probabilistically, from the kinetic properties of gases to the quantum mechanical description
of fundamental particles. For these and many other reasons, simple numbers are often inadequate for describing
a quantity, while probability distributions are often more appropriate.

Binomial Distribution (https://en.wikipedia.org/wiki/Binomial_distribution)

In probability theory and statistics, the binomial distribution with parameters n and p is the discrete probability
distribution of the number of successes in a sequence of n independent yes/no experiments, each of which yields
success with probability p. A success/failure experiment is also called a Bernoulli experiment or Bernoulli trial;
when n = 1, the binomial distribution is a Bernoulli distribution. The binomial distribution is the basis for the
popular binomial test of statistical significance.
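
For reference, though the formula is not reproduced in this extract, the probability of observing exactly k successes in n trials is

P(X = k) = \binom{n}{k} p^k (1-p)^{n-k}, \qquad k = 0, 1, \ldots, n.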
The binomial distribution is frequently used to model the number of successes in a sample of size n drawn with
replacement from a population of size N. If the sampling is carried out without replacement, the draws are not
independent and so the resulting distribution is a hypergeometric distribution, not a binomial one. However,
for N much larger than n, the binomial distribution is a good approximation, and widely used.
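
A quick numerical check of this approximation, as a sketch assuming SciPy (population and sample sizes are made up): when N is much larger than n, the hypergeometric probabilities for sampling without replacement come out close to the binomial ones.

```python
# Sketch, assuming SciPy; sizes are illustrative.  A population of
# N = 10_000 contains K = 2_000 "successes"; we draw a sample of n = 10.
from scipy import stats

N, K, n = 10_000, 2_000, 10
p = K / N                                     # success probability per draw
for k in range(4):
    exact = stats.hypergeom.pmf(k, N, K, n)   # without replacement
    approx = stats.binom.pmf(k, n, p)         # with replacement
    print(k, round(exact, 5), round(approx, 5))
```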

Application: (http://www.statisticshowto.com/what-is-a-binomial-distribution/)

Many instances of binomial distributions can be found in real life. For example, if a new drug is introduced to cure a
disease, it either cures the disease (it’s successful) or it doesn’t cure the disease (it’s a failure). If you purchase a lottery
ticket, you’re either going to win money, or you aren’t. Basically, anything you can think of that can only be a success or
a failure can be represented by a binomial distribution.

Normal distribution (https://en.wikipedia.org/wiki/Normal_distribution)

The normal distribution is remarkably useful because of the central limit theorem. In its most general form, under
some conditions (which include finite variance), it states that averages of random variables independently drawn
from independent distributions converge in distribution to the normal, that is, become normally distributed when
the number of random variables is sufficiently large. Physical quantities that are expected to be the sum of many
independent processes (such as measurement errors) often have distributions that are nearly normal.[3] Moreover,
many results and methods (such as propagation of uncertainty and least squares parameter fitting) can be
derived analytically in explicit form when the relevant variables are normally distributed.
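
A small simulation makes the theorem tangible. The sketch below (assuming NumPy; the sample sizes are arbitrary) averages draws from a decidedly non-normal distribution, the exponential, and shows the means concentrating around the true mean with standard deviation shrinking like 1/√n, as the normal limit predicts.

```python
# Illustrative CLT sketch, assuming NumPy: means of n exponential draws
# (a skewed, non-normal distribution) concentrate around the true mean 1.0
# with standard deviation ~ 1/sqrt(n).
import numpy as np

rng = np.random.default_rng(3)
for n in (1, 10, 100, 1000):
    means = rng.exponential(scale=1.0, size=(50_000, n)).mean(axis=1)
    print(n, round(means.mean(), 3), round(means.std(), 3), round(1 / n**0.5, 3))
```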

The normal distribution is sometimes informally called the bell curve. However, many other distributions are
bell-shaped (such as the Cauchy, Student's t, and logistic distributions). The terms Gaussian function and
Gaussian bell curve are also ambiguous because they sometimes refer to multiples of the normal distribution
that cannot be directly interpreted in terms of probabilities.

The probability density of the normal distribution is:

f(x \mid \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}

Here, μ is the mean or expectation of the distribution (and also its median and mode). The parameter σ is
its standard deviation, with its variance then σ². A random variable with a Gaussian distribution is said to
be normally distributed and is called a normal deviate.

If μ = 0 and σ = 1, the distribution is called the standard normal distribution or the unit normal
distribution, denoted by N(0, 1), and a random variable with that distribution is a standard normal
deviate.
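
The density formula can be checked directly against a library implementation; a minimal sketch assuming SciPy, with arbitrary values of μ, σ, and x:

```python
# Sketch, assuming SciPy: evaluate the normal density from the formula above
# and compare with the library implementation.  mu, sigma, x are arbitrary.
import math
from scipy import stats

mu, sigma, x = 2.0, 1.5, 3.0
by_hand = math.exp(-((x - mu) ** 2) / (2 * sigma**2)) / math.sqrt(2 * math.pi * sigma**2)
by_scipy = stats.norm.pdf(x, loc=mu, scale=sigma)
print(by_hand, by_scipy)     # the two values agree
```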

The normal distribution is the only absolutely continuous distribution whose cumulants beyond the first two
(i.e., other than the mean and variance) are zero. It is also the continuous distribution with the maximum
entropy for a specified mean and variance.[4][5]

The normal distribution is a subclass of the elliptical distributions. The normal distribution is symmetric about
its mean, and is non-zero over the entire real line. As such it may not be a suitable model for variables that
are inherently positive or strongly skewed, such as the weight of a person or the price of a share. Such
variables may be better described by other distributions, such as the log-normal distribution or the Pareto
distribution.
The density of the normal distribution is practically zero when the value x lies more than a few standard
deviations away from the mean. It may therefore not be an appropriate model when one expects a
significant fraction of outliers, i.e. values that lie many standard deviations away from the mean; moreover,
least squares and other statistical inference methods that are optimal for normally distributed variables often
become highly unreliable when applied to such data. In those cases, a more heavy-tailed distribution should
be assumed and the appropriate robust statistical inference methods applied.

The Gaussian distribution belongs to the family of stable distributions, which are the attractors of sums
of independent, identically distributed random variables, whether or not the mean or variance is finite.
Except for the Gaussian, which is a limiting case, all stable distributions have heavy tails and infinite
variance.

Normal Approximation (https://en.wikipedia.org/wiki/Binomial_distribution)

If n is large enough, then the skew of the distribution is not too great. In this case a reasonable approximation to
B(n, p) is given by the normal distribution

\mathcal{N}(np,\ np(1-p)),

and this basic approximation can be improved in a simple way by using a suitable continuity correction. The
basic approximation generally improves as n increases (at least 20) and is better when p is not near to 0 or
1.[9] Various rules of thumb may be used to decide whether n is large enough, and p is far enough from the
extremes of zero or one:

• One rule is that both x = np and n(1 − p) must be greater than 5. However, the specific number varies from
source to source, and depends on how good an approximation one wants; some sources give 10, which
gives virtually the same results as the following rule for large n, until n is very large (ex: x = 11, n = 7752).

• A second rule[9] is that for n > 5 the normal approximation is adequate if

\frac{1}{\sqrt{n}}\left|\sqrt{\frac{1-p}{p}} - \sqrt{\frac{p}{1-p}}\right| < 0.3

• Another commonly used rule holds that the normal approximation is appropriate only if everything
within 3 standard deviations of its mean is within the range of possible values,[citation needed] that is if

\mu \pm 3\sigma = np \pm 3\sqrt{np(1-p)} \in (0, n)
Poisson Distribution (https://en.wikipedia.org/wiki/Poisson_distribution)


In probability theory and statistics, the Poisson distribution (French pronunciation [pwasɔ̃]; in English
usually /ˈpwɑːsɒn/), named after French mathematician Siméon Denis Poisson, is a discrete probability
distribution that expresses the probability of a given number of events occurring in a fixed interval of time and/or
space if these events occur with a known average rate and independently of the time since the last event.[1] The
Poisson distribution can also be used for the number of events in other specified intervals such as distance, area
or volume.
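
For reference, though the formula itself is not reproduced in this extract, the probability of observing k events in an interval when the average rate is λ is

P(X = k) = \frac{\lambda^k e^{-\lambda}}{k!}, \qquad k = 0, 1, 2, \ldots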
Application : Other examples that may follow a Poisson: the number of phone calls received by a call center per
hour, the number of decay events per second from a radioactive source, or the number of pedicabs in queue in a
particular street in a given hour of a day.[3]
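
For the first of these examples, a minimal sketch assuming SciPy (the rate of 4 calls per hour is invented for illustration):

```python
# Sketch, assuming SciPy: calls arriving at a call center at an assumed
# average rate of 4 per hour, modelled as Poisson(4).
from scipy import stats

lam = 4.0                                  # assumed average calls per hour
print(stats.poisson.pmf(0, lam))           # P(no calls in an hour)
print(1 - stats.poisson.cdf(8, lam))       # P(more than 8 calls in an hour)
```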

Poisson approximation (https://en.wikipedia.org/wiki/Binomial_distribution)
The binomial distribution converges towards the Poisson distribution as the number of trials goes to infinity while
the product np remains fixed. Therefore the Poisson distribution with parameter λ = np can be used as an
approximation to the binomial distribution B(n, p) if n is sufficiently large and p is sufficiently small. According to
two rules of thumb, this approximation is good if n ≥ 20 and p ≤ 0.05, or if n ≥ 100 and np ≤ 10.[11]
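
A short check of these rules of thumb, as a sketch assuming SciPy (n = 100, p = 0.05 satisfy both conditions):

```python
# Sketch, assuming SciPy: Poisson(np) as an approximation to B(n, p).
# n = 100, p = 0.05 satisfy both rules of thumb (n >= 20 with p <= 0.05,
# and n >= 100 with np = 5 <= 10).
from scipy import stats

n, p = 100, 0.05
lam = n * p
for k in range(6):
    print(k, round(stats.binom.pmf(k, n, p), 5), round(stats.poisson.pmf(k, lam), 5))
```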
