STAT

The null hypothesis (H0) represents a theory that has not been proven. It is given special consideration in statistical hypothesis testing. The conclusion of a test is always given in terms of rejecting or not rejecting H0, never in terms of the alternative hypothesis (H1). Two types of errors can occur: a type I error, when H0 is wrongly rejected, and a type II error, when H0 is not rejected even though it is false. The probability of each error depends on factors such as the sample size and the significance level of the test.


Null Hypothesis

The null hypothesis, H0, represents a theory that has been put forward, either
because it is believed to be true or because it is to be used as a basis for
argument, but has not been proved. For example, in a clinical trial of a new
drug, the null hypothesis might be that the new drug is no better, on average,
than the current drug. We would write
H0: there is no difference between the two drugs on average.

We give special consideration to the null hypothesis because it relates to the
statement being tested, whereas the alternative hypothesis relates to the
statement to be accepted if the null is rejected.

The final conclusion once the test has been carried out is always given in
terms of the null hypothesis. We either "Reject H0 in favour of H1" or "Do not
reject H0"; we never conclude "Reject H1", or even "Accept H1".

If we conclude "Do not reject H0", this does not necessarily mean that the null
hypothesis is true; it only suggests that there is not sufficient evidence
against H0 in favour of H1. Rejecting the null hypothesis, then, suggests that
the alternative hypothesis may be true.

See also hypothesis test.
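
As an illustration (not part of the original text), here is a minimal Python
sketch of such a test, using SciPy's two-sample t-test on simulated trial
data; the data, sample sizes and 5% significance level are all assumptions
made for the example:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
current_drug = rng.normal(loc=10.0, scale=2.0, size=50)  # hypothetical responses
new_drug = rng.normal(loc=10.0, scale=2.0, size=50)      # hypothetical responses

# Two-sample t-test of H0: no difference between the two drugs on average.
t_stat, p_value = stats.ttest_ind(new_drug, current_drug)

alpha = 0.05  # assumed significance level
if p_value < alpha:
    print(f"Reject H0 in favour of H1 (p = {p_value:.3f})")
else:
    print(f"Do not reject H0 (p = {p_value:.3f})")

Note that the conclusion is printed in terms of H0, never as "Accept H1".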

Alternative Hypothesis
The alternative hypothesis, H1, is a statement of what a statistical hypothesis
test is set up to establish. For example, in a clinical trial of a new drug, the
alternative hypothesis might be that the new drug has a different effect, on
average, compared to that of the current drug. We would write
H1: the two drugs have different effects, on average.
The alternative hypothesis might also be that the new drug is better, on
average, than the current drug. In this case we would write
H1: the new drug is better than the current drug, on average.


Type I Error

In a hypothesis test, a type I error occurs when the null hypothesis is rejected
when it is in fact true; that is, H0 is wrongly rejected.

For example, in a clinical trial of a new drug, the null hypothesis might be that
the new drug is no better, on average, than the current drug; i.e.
H0: there is no difference between the two drugs on average.
A type I error would occur if we concluded that the two drugs produced
different effects when in fact there was no difference between them.
The following table gives a summary of the possible results of any hypothesis
test:

                          Decision
                  Reject H0          Don't reject H0
Truth   H0        Type I error       Right decision
        H1        Right decision     Type II error
A type I error is often considered to be more serious, and therefore more
important to avoid, than a type II error. The hypothesis test procedure is
therefore adjusted so that there is a guaranteed 'low' probability of rejecting
the null hypothesis wrongly; this probability is never 0. The probability of a
type I error can be set precisely, as
P(type I error) = significance level = α

The exact probability of a type II error is generally unknown.

If we do not reject the null hypothesis, it may still be false (a type II
error): the sample may not be large enough to detect the falseness of the null
hypothesis (especially if the truth is very close to the hypothesised value).

For any given set of data, type I and type II errors are inversely related; the
smaller the risk of one, the higher the risk of the other.

A type I error can also be referred to as an error of the first kind.
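
The fact that the significance level fixes the type I error rate can be
checked by simulation. The following sketch (an illustration added here, with
all parameters assumed) repeatedly samples data for which H0 is true and
counts how often a t-test at alpha = 0.05 wrongly rejects:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha = 0.05
n_trials = 10_000
rejections = 0

for _ in range(n_trials):
    # Both samples come from the same distribution, so H0 is true here.
    a = rng.normal(0.0, 1.0, size=30)
    b = rng.normal(0.0, 1.0, size=30)
    _, p = stats.ttest_ind(a, b)
    if p < alpha:
        rejections += 1  # H0 wrongly rejected: a type I error

# The observed rate should be close to alpha, i.e. about 0.05.
print(f"estimated P(type I error) = {rejections / n_trials:.3f}")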


Type II Error
In a hypothesis test, a type II error occurs when the null hypothesis, H0, is
not rejected when it is in fact false. For example, in a clinical trial of a
new drug, the null hypothesis might be that the new drug is no better, on
average, than the current drug; i.e.
H0: there is no difference between the two drugs on average.
A type II error would occur if it was concluded that the two drugs produced
the same effect, i.e. there is no difference between the two drugs on average,
when in fact they produced different effects.

A type II error is frequently due to sample sizes being too small.

The probability of a type II error is generally unknown, but is symbolised by
β and written
P(type II error) = β

A type II error can also be referred to as an error of the second kind.
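
Although β is generally unknown in practice, it can be estimated by simulation
once a particular truth is assumed. The sketch below (illustrative; the effect
size, standard deviation and sample sizes are assumptions) shows how β shrinks
as the sample size grows:

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
alpha = 0.05
true_diff = 0.5   # assumed true difference, so H0 is false by construction
n_trials = 5_000

for n in (20, 100):
    misses = 0
    for _ in range(n_trials):
        a = rng.normal(0.0, 1.0, size=n)
        b = rng.normal(true_diff, 1.0, size=n)
        _, p = stats.ttest_ind(a, b)
        if p >= alpha:
            misses += 1  # failed to reject a false H0: a type II error
    print(f"n = {n:3d}: estimated beta = {misses / n_trials:.3f}")

With these assumed values, the estimated β drops sharply as n rises from 20 to
100, illustrating why type II errors are frequently due to small samples.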

Critical Value(s)

The critical value(s) for a hypothesis test is a threshold to which the value of
the test statistic in a sample is compared to determine whether or not the null
hypothesis is rejected.

The critical value for any hypothesis test depends on the significance level at
which the test is carried out, and whether the test is one-sided or two-sided.
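
As an added illustration of this dependence, the following Python snippet
computes critical values for a z-test from the standard normal distribution;
the 5% significance level is an assumption for the example:

from scipy import stats

alpha = 0.05  # assumed significance level

# One-sided (upper-tail) test: reject H0 if the statistic exceeds z_crit.
z_one_sided = stats.norm.ppf(1 - alpha)      # about 1.645

# Two-sided test: alpha is split between the two tails.
z_two_sided = stats.norm.ppf(1 - alpha / 2)  # about 1.960

print(f"one-sided critical value:  {z_one_sided:.3f}")
print(f"two-sided critical values: +/-{z_two_sided:.3f}")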

P-Value

The probability value (p-value) of a statistical hypothesis test is the
probability of getting a value of the test statistic as extreme as, or more
extreme than, that observed by chance alone, if the null hypothesis, H0, is
true.

It is equal to the significance level of a test for which we would only just
reject the null hypothesis; in that sense it is the probability of wrongly
rejecting the null hypothesis if it is in fact true. The p-value is compared
with the chosen significance level of our test and, if it is smaller, the
result is significant. That is, if the null hypothesis were to be rejected at
the 5% significance level, this would be reported as "p < 0.05".

Small p-values suggest that the null hypothesis is unlikely to be true. The
smaller the p-value, the more convincing is the rejection of the null
hypothesis. It indicates the strength of the evidence for, say, rejecting the
null hypothesis H0, rather than simply concluding "Reject H0" or "Do not
reject H0".
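
A small illustrative computation (not from the original text) of a two-sided
p-value, for an assumed, purely hypothetical observed z-statistic:

from scipy import stats

z_observed = 2.3  # hypothetical observed test statistic
# Two-sided p-value: probability, under H0, of a statistic at least this
# extreme in either tail of the standard normal distribution.
p_value = 2 * stats.norm.sf(abs(z_observed))

alpha = 0.05
print(f"p = {p_value:.4f}")  # about 0.0214, so significant at the 5% level
if p_value < alpha:
    print("Reject H0 at the 5% level; report as p < 0.05.")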

One-sided Test

A one-sided test is a statistical hypothesis test in which the values for
which we can reject the null hypothesis, H0, are located entirely in one tail
of the probability distribution.

In other words, the critical region for a one-sided test is the set of values less
than the critical value of the test, or the set of values greater than the critical
value of the test.

A one-sided test is also referred to as a one-tailed test of significance.

The choice between a one-sided and a two-sided test is determined by the
purpose of the investigation or prior reasons for using a one-sided test.

Example

Suppose we wanted to test a manufacturer's claim that there are, on average,
50 matches in a box. We could set up the following hypotheses:
H0: µ = 50,
against
H1: µ < 50 or H1: µ > 50
Either of these two alternative hypotheses would lead to a one-sided test.
Presumably, we would want to test the null hypothesis against the first
alternative hypothesis, since it would be useful to know whether there are
likely to be fewer than 50 matches, on average, in a box (no one would
complain about getting the correct number of matches or more).
Yet another alternative hypothesis could be tested against the same null,
leading this time to a two-sided test:
H0: µ = 50,
against
H1: µ ≠ 50
Here, nothing specific can be said about the average number of matches in a
box; only that, if we could reject the null hypothesis in our test, we would know
that the average number of matches in a box is likely to be less than or
greater than 50.
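
As an illustration, the matchbox example could be carried out with a
one-sample t-test; the counts below are invented for the sketch, and the
alternative argument (available in SciPy 1.6 and later) selects the one- or
two-sided version of the test:

import numpy as np
from scipy import stats

# Invented match counts for ten boxes, purely for illustration.
counts = np.array([48, 50, 49, 47, 50, 49, 48, 50, 49, 48])

# One-sided test of H0: mu = 50 against H1: mu < 50.
t_one, p_one = stats.ttest_1samp(counts, popmean=50, alternative='less')

# Two-sided test of H0: mu = 50 against H1: mu != 50.
t_two, p_two = stats.ttest_1samp(counts, popmean=50, alternative='two-sided')

print(f"one-sided p = {p_one:.4f}, two-sided p = {p_two:.4f}")

For a sample mean below 50, the two-sided p-value is twice the one-sided one,
reflecting that the significance level is split between the two tails.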

Two-Sided Test

A two-sided test is a statistical hypothesis test in which the values for
which we can reject the null hypothesis, H0, are located in both tails of the
probability distribution.

In other words, the critical region for a two-sided test is the set of values less
than a first critical value of the test and the set of values greater than a second
critical value of the test.

A two-sided test is also referred to as a two-tailed test of significance.

The choice between a one-sided test and a two-sided test is determined by the
purpose of the investigation or prior reasons for using a one-sided test.

Example

Consider again the manufacturer's claim that there are, on average, 50 matches
in a box (see the example under One-sided Test). Testing
H0: µ = 50
against
H1: µ ≠ 50
leads to a two-sided test. Here, nothing specific can be said about the
direction of any difference; if we could reject the null hypothesis in our
test, we would know only that the average number of matches in a box is likely
to be less than or greater than 50.
