Lesson 10 (Simple Tests of Hypothesis)

The document provides an overview of hypothesis testing, including definitions of null and alternative hypotheses, types of errors (Type I and Type II), and decision rules such as P-value and regions of acceptance. It outlines the steps in hypothesis testing and discusses the significance level, degrees of freedom, and the appropriate tests (z-test and t-test) to use based on sample size and standard deviation knowledge. The document emphasizes the importance of statistical evidence in accepting or rejecting hypotheses in research.
Simple Tests of Hypothesis

• Statistical hypothesis – an assumption about a
population parameter.
– an assertion subject to verification
– an assumption used as the basis for action
– a guess or prediction made by the researcher
regarding the possible outcome of the study.
• Hypothesis testing refers to the formal
procedures used by statisticians or researchers
to accept or reject statistical hypotheses.
– The process of making an inference or
generalization on population parameters based on
the results of the study on samples.
– The assertion we hold as true until we have
sufficient statistical evidence to conclude
otherwise.
• In hypothesis testing, one draws conclusions
about the population using sample data.
Two types of statistical hypotheses.
(A) Null hypothesis. The null hypothesis,
denoted by Ho, is usually the hypothesis that
sample observations result purely from
chance.
– Always expresses the idea of non-significance
of a difference or relationship
– The hypothesis to be tested
– Usually the hypothesis the researcher hopes
to reject
• Examples of how to state the null
hypothesis:
– There is no significant relationship
between the respondents’ sex and
academic performance.
– One variable does not depend on the
other variable.
– Two variables are independent of
each other.
(B) Alternative hypothesis. The alternative
hypothesis, denoted by H1 or Ha, is the
hypothesis that sample observations are
influenced by some non-random cause.
– The one that we conclude is true if the
Ho is rejected.
– States that there is an effect, there is a
difference, or there is a relationship.
– Generally represents the idea which the
researcher wants to prove
• Examples of how to state the alternative
hypothesis:
– There is a significant relationship
between the respondents’ sex and
academic performance.
– One variable depends on the other
variable.
– Two variables are dependent on each
other.
Decision Errors
• Two types of errors can result from a hypothesis test.
(1) Type I error (α error). A Type I error occurs when
the researcher rejects a null hypothesis when it is true.
– The error of concluding that there is something (a
difference, or a change, or an effect) when in reality, there
is none.
– Rejecting a true Ho
– Example: Ho: Juan is not guilty. If the judge convicts Juan
when in fact he is not guilty, the court commits a Type I
error.
– α is read as "alpha" and denotes the level of significance
(2) Type II error (β error). A Type II error occurs
when the researcher fails to reject a null
hypothesis that is false.
– The error of concluding that there is nothing (no
difference, or no change, or no effect) when in
reality, there is.
– Accepting a false Ho
– Example: Ho: Juan is not guilty. If the judge acquits
Juan when in fact he is guilty, the court commits a
Type II error.
– β is read as "beta" and denotes the probability of a Type II error
Decision Rules
(1) P-value. The strength of evidence in support
of a null hypothesis is measured by the P-value.
Suppose the test statistic is equal to S. The P-
value is the probability of observing a test
statistic as extreme as S, assuming the null
hypothesis is true. If the P-value is less than the
significance level, we reject the null hypothesis.
• Note: The lower the p-value, the stronger the
evidence that the null hypothesis is false.
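As a quick illustration, the P-value decision rule can be sketched in Python using only the standard library. The observed statistic and the significance level below are made-up values for illustration.

```python
from statistics import NormalDist

# Two-tailed z-test: P-value = P(|Z| >= |z_obs|) assuming Ho is true.
# z_obs and alpha are hypothetical numbers chosen for illustration.
z_obs = 2.10          # observed test statistic
alpha = 0.05          # significance level

p_value = 2 * (1 - NormalDist().cdf(abs(z_obs)))
print(round(p_value, 4))                                 # 0.0357
print("reject Ho" if p_value < alpha else "fail to reject Ho")
```

Since 0.0357 < 0.05, the null hypothesis is rejected at the 5% level; a larger α such as 0.01 would not reject it.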
(2) Region of acceptance. The region of acceptance is a
range of values. If the test statistic falls within the region
of acceptance, the null hypothesis is not rejected. The
region of acceptance is defined so that the chance of
making a Type I error is equal to the significance level.
• The set of values outside the region of acceptance is
called the region of rejection. If the test statistic falls
within the region of rejection, the null hypothesis is
rejected. In such cases, we say that the hypothesis
has been rejected at the α level of significance.
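A minimal sketch of the acceptance/rejection decision for a two-tailed z-test, with the critical value derived so that the Type I error rate equals the significance level (the test statistics passed to `decide` are hypothetical):

```python
from statistics import NormalDist

alpha = 0.05
# Two-tailed test: acceptance region is [-z_crit, +z_crit], chosen so that
# P(Type I error) = alpha when Ho is true.
z_crit = NormalDist().inv_cdf(1 - alpha / 2)   # about 1.96

def decide(test_statistic):
    """Reject Ho iff the statistic falls in the rejection region."""
    return "reject Ho" if abs(test_statistic) > z_crit else "fail to reject Ho"

print(round(z_crit, 2))   # 1.96
print(decide(2.5))        # reject Ho
print(decide(0.8))        # fail to reject Ho
```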
One-Tailed and Two-Tailed Tests
• A test of a statistical hypothesis, where the
region of rejection is on only one side of the
sampling distribution, is called a one-tailed test.
• For example, suppose the null hypothesis states
that the mean is less than or equal to 10. The
alternative hypothesis would be that the mean
is greater than 10. The region of rejection would
consist of a range of numbers located on the
right side of sampling distribution; that is, a set
of numbers greater than 10.
• A test of a statistical hypothesis, where the
region of rejection is on both sides of the
sampling distribution, is called a two-tailed test.
• For example, suppose the null hypothesis states
that the mean is equal to 10. The alternative
hypothesis would be that the mean is less than
10 or greater than 10. The region of rejection
would consist of a range of numbers located on
both sides of sampling distribution; that is, the
region of rejection would consist partly of
numbers that were less than 10 and partly of
numbers that were greater than 10.
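The difference between the two test types shows up in the critical values: a one-tailed test puts all of α in one tail, while a two-tailed test splits it between both tails, pushing the cutoff farther out. A short sketch:

```python
from statistics import NormalDist

alpha = 0.05
nd = NormalDist()

# One-tailed (e.g. Ha: mean > 10): all of alpha sits in the right tail.
one_tailed_crit = nd.inv_cdf(1 - alpha)        # about 1.645
# Two-tailed (e.g. Ha: mean != 10): alpha is split between both tails.
two_tailed_crit = nd.inv_cdf(1 - alpha / 2)    # about 1.96

print(round(one_tailed_crit, 3), round(two_tailed_crit, 2))
```

Because the two-tailed cutoff is larger, the same test statistic can be significant in a one-tailed test yet non-significant in a two-tailed one.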
Level of Significance of a Test
• is the maximum probability of rejecting the null
hypothesis (Ho) when in fact it is true.
• the maximum risk of a Type I error that the
researcher is prepared to take.
• the probability of rejecting a true hypothesis
• a measure of the strength of the evidence that
must be present in your sample before you will
reject the null hypothesis and conclude that the
effect is statistically significant.
• A 5% significance level means that we accept a
5-in-100 chance of rejecting the null hypothesis
when it is actually true. It also implies that we
are 95% confident that we have made the right
decision.
– It indicates a 5% risk of concluding that a difference
exists when there is no actual difference.
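The "5 chances in 100" interpretation can be checked by simulation: if we run many z-tests on data where Ho really is true, roughly 5% of them should (wrongly) reject it. This is an illustrative sketch with simulated data, not part of the lesson's procedure:

```python
import random

# Simulate many z-tests when Ho is actually true (population mean = 0,
# sigma = 1 known): the fraction of wrong rejections should be near 0.05.
random.seed(42)
alpha, z_crit, n, trials = 0.05, 1.96, 30, 5000
rejections = 0
for _ in range(trials):
    sample = [random.gauss(0, 1) for _ in range(n)]
    z = (sum(sample) / n) / (1 / n ** 0.5)   # z = x-bar / (sigma / sqrt(n))
    if abs(z) > z_crit:
        rejections += 1
rate = rejections / trials
print(rate)   # close to 0.05
```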

• Degrees of Freedom
– Measures the number of scores that are free to vary
when computing the sum of squares for sample data.
– refers to the maximum number of logically independent
values, which are values that have the freedom to vary,
in the data sample.
• In testing the difference between two means,
the z-test or t-test may be used.
– The z-test is used when the population standard
deviation is known. It is also used when n ≥ 30.
– The t-test is used when the population standard
deviation is unknown and the sample standard
deviation is used in its place. It is applied when
n < 30.
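The rule of thumb above can be written as a tiny helper (a sketch of the lesson's decision rule, not a universal statistical prescription):

```python
def choose_test(n, population_sd_known):
    """Pick z-test or t-test by sample size and whether sigma is known."""
    if population_sd_known or n >= 30:
        return "z-test"
    return "t-test"

print(choose_test(50, population_sd_known=False))  # z-test (large sample)
print(choose_test(12, population_sd_known=False))  # t-test (small n, sigma unknown)
print(choose_test(12, population_sd_known=True))   # z-test (sigma known)
```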
Steps in Hypothesis Testing
1. Formulate the null hypothesis (Ho) that there
is no significant difference between items
being compared. State the alternative
hypothesis (Ha) which is used in case Ho is
rejected.
2. Set the level of significance, α. Typically the
0.05 or the 0.01 level is used.
3. Determine the test to be used.
4. Determine the degrees of freedom (df) and the tabular
value for the test.
– For a single sample, df = number of items – 1 = n – 1.
– For two samples, df = n1 + n2 – 2, where n1 is the
number of items in the first sample and n2 is the
number of items in the second sample.
– For a z-test, use the table of critical values of z based
on the area of the normal curve.
– For a t-test, look for the tabular value from the table
of t-distribution.
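The two degrees-of-freedom rules from Step 4, written out (the sample sizes are made-up):

```python
def df_single(n):
    """Degrees of freedom for one sample: n - 1."""
    return n - 1

def df_two_samples(n1, n2):
    """Degrees of freedom for two samples: n1 + n2 - 2."""
    return n1 + n2 - 2

print(df_single(25))           # 24
print(df_two_samples(15, 18))  # 31
```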
5. Compute z or t as needed, using the appropriate
formula below.
z-test (population standard deviations known):
z = (x̄1 – x̄2) / sqrt(σ1²/n1 + σ2²/n2)
t-test (sample standard deviations, pooled):
t = (x̄1 – x̄2) / sqrt(sp²(1/n1 + 1/n2)),
where sp² = [(n1 – 1)s1² + (n2 – 1)s2²] / (n1 + n2 – 2)
• Note: In this subject, we will be considering
only the z-test for comparing two sample means,
the t-test for comparing two sample means
(independent data), and the t-test for correlated
or dependent data.
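A worked sketch of the independent-samples t-test using the standard pooled-variance formula, with made-up scores for two hypothetical groups:

```python
from math import sqrt

# Hypothetical scores for two independent groups (made-up data).
a = [12, 14, 11, 15, 13, 12, 14]
b = [10, 11, 12, 10, 13, 11, 10]

n1, n2 = len(a), len(b)
mean1, mean2 = sum(a) / n1, sum(b) / n2
var1 = sum((x - mean1) ** 2 for x in a) / (n1 - 1)   # sample variances
var2 = sum((x - mean2) ** 2 for x in b) / (n2 - 1)

# Pooled variance, then the t statistic (Step 5).
sp2 = ((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2)
t = (mean1 - mean2) / sqrt(sp2 * (1 / n1 + 1 / n2))
df = n1 + n2 - 2

print(df)              # 12
print(round(t, 3))     # about 2.898
# Step 6: compare |t| with the tabular t-value at the chosen alpha and df.
```

With df = 12 and |t| ≈ 2.9 exceeding the 0.05 two-tailed tabular value, Ho (no difference between the means) would be rejected here.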
Data Analysis ToolPak Installation
