AHE - 606 Unit 4


AHE 606 (RESEARCH METHODOLOGY IN VETERINARY AND ANIMAL HUSBANDRY EXTENSION)

Department of Veterinary & Animal Husbandry Extension Education, BVC
Topics covered
• Hypothesis – importance, selection criteria (quality of a workable hypothesis), formulation and testing of hypotheses.
What is Hypothesis Testing?
• A statistical hypothesis is an assumption about a population parameter. This assumption may or may not be true. Hypothesis testing refers to the formal procedures used by statisticians to accept or reject statistical hypotheses.
• There are two types of statistical hypotheses.

• Null hypothesis. The null hypothesis, denoted by H0, is usually the hypothesis that sample observations result purely from chance.
• Alternative hypothesis. The alternative hypothesis, denoted by H1 or Ha, is the hypothesis that sample observations are influenced by some non-random cause.
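For example (hypothetical figures), a test of whether the average daily milk yield of a herd differs from 10 kg would state the pair as:
H0: μ = 10 kg (any deviation seen in the sample is due to chance alone)
Ha: μ ≠ 10 kg (the sample reflects a real difference in yield)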
Hypothesis Tests
Hypothesis testing is a formal process for deciding, on the basis of sample data, whether to reject a null hypothesis. The process consists of four steps.
1. State the hypotheses. This involves stating the null and alternative hypotheses.
The hypotheses are stated in such a way that they are mutually exclusive. That is, if one is
true, the other must be false.
2. Formulate an analysis plan. The analysis plan describes how to use sample data
to evaluate the null hypothesis. The evaluation often focuses on a single test statistic.
3. Analyze sample data. Find the value of the test statistic (mean score, proportion,
t-score, z-score, etc.) described in the analysis plan.
4. Interpret results. Apply the decision rule described in the analysis plan. If the
value of the test statistic is unlikely, based on the null hypothesis, reject the null
hypothesis.
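The four steps can be traced in a minimal Python sketch (hypothetical data; the scipy library is assumed to be available):

# A minimal sketch of the four-step process using a one-sample t-test.
from scipy import stats

# 1. State the hypotheses: H0: mu = 10, Ha: mu != 10 (hypothetical value)
sample = [9.2, 10.5, 8.8, 9.9, 10.1, 9.4, 10.8, 9.0]   # hypothetical observations
mu0 = 10.0

# 2. Formulate an analysis plan: two-sided one-sample t-test at alpha = 0.05
alpha = 0.05

# 3. Analyze sample data: compute the test statistic and its P-value
t_stat, p_value = stats.ttest_1samp(sample, popmean=mu0)

# 4. Interpret results: reject H0 if the P-value falls below alpha
print(f"t = {t_stat:.3f}, P = {p_value:.3f}")
print("Reject H0" if p_value < alpha else "Fail to reject H0")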
Decision Errors
Two types of errors can result from a hypothesis test.
• Type I error. A Type I error occurs when the researcher rejects a null
hypothesis when it is true. The probability of committing a Type I error is called
the significance level. This probability is also called alpha, and is often denoted
by α.
• Type II error. A Type II error occurs when the researcher fails to reject a null
hypothesis that is false. The probability of committing a Type II error is
called beta, and is often denoted by β. The probability of not committing a Type
II error is called the power of the test.
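The meaning of the significance level α can be seen in a rough simulation (assumed setup, using numpy and scipy): when the null hypothesis is in fact true, the test still rejects it in roughly α of repeated samples, and each such rejection is a Type I error.

# Simulate many samples drawn with H0 true and count false rejections.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, mu0, n, trials = 0.05, 10.0, 30, 10_000

rejections = 0
for _ in range(trials):
    sample = rng.normal(loc=mu0, scale=2.0, size=n)    # H0 is true by construction
    _, p = stats.ttest_1samp(sample, popmean=mu0)
    if p < alpha:
        rejections += 1                                # a Type I error

print(f"Observed Type I error rate: {rejections / trials:.3f} (expected about {alpha})")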
Power of a Hypothesis Test
The probability of not committing a Type II error is called the power of a hypothesis test.
• Factors That Affect Power
• The power of a hypothesis test is affected by three factors.
1. Sample size (n). Other things being equal, the greater the sample size,
the greater the power of the test.
2. Significance level (α). The higher the significance level, the higher
the power of the test.
3. The "true" value of the parameter being tested. The greater the
difference between the "true" value of a parameter and the value specified in
the null hypothesis, the greater the power of the test.
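The first factor can be illustrated with a small simulation (hypothetical parameters): holding α and the true difference fixed, the estimated power, i.e. the proportion of samples in which a false H0 is rejected, rises as the sample size grows.

# Estimate power for several sample sizes when the true mean differs from mu0.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha, mu0, true_mu, sd, trials = 0.05, 10.0, 11.0, 2.0, 5_000

for n in (10, 30, 100):
    rejections = sum(
        stats.ttest_1samp(rng.normal(true_mu, sd, n), popmean=mu0).pvalue < alpha
        for _ in range(trials)
    )
    print(f"n = {n:3d}  estimated power = {rejections / trials:.2f}")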
Hypothesis Test for a Mean
• A hypothesis test of a mean can be conducted when the following conditions are met:
1. The sampling method is simple random sampling.
2. The sampling distribution is normal.
• This approach consists of four steps:
(1) state the hypotheses,
(2) formulate an analysis plan,
(3) analyze sample data,
(4) interpret results.
Hypothesis Test for a Mean (contd.)
1. State the Hypotheses
Every hypothesis test requires the analyst to state a
null hypothesis and an alternative hypothesis.
The hypotheses are stated in such a way that they are
mutually exclusive. That is, if one is true, the other must be
false; and vice versa.
Hypothesis Test for a Mean (contd.)
2. Formulate an Analysis Plan
The analysis plan describes how to use sample data to accept or
reject the null hypothesis.
It should specify the following elements.
1. Significance level. Often, researchers choose significance
levels equal to 0.01, 0.05, or 0.10; but any value between 0 and 1 can be
used.
2. Test method. Use the one-sample t-test to determine whether
the hypothesized mean differs significantly from the observed sample
mean.
Hypothesis Test for a Mean (contd.)
3. Analyze Sample Data
Using sample data, conduct a one-sample t-test. This involves finding the standard error, the degrees of freedom, the test statistic, and the P-value associated with the test statistic.
1. Standard error. Compute the standard error (SE) of the sampling distribution:
SE = s * sqrt{ ( 1/n ) * [ ( N - n ) / ( N - 1 ) ] }
where s is the standard deviation of the sample, N is the population size, and n is the sample size. When the population size is much larger (at least 20 times larger) than the sample size, the standard error can be approximated by:
SE = s / sqrt( n )
2. Degrees of freedom. The degrees of freedom (DF) is equal to the sample size (n) minus one. Thus, DF = n - 1.
3. Test statistic. The test statistic is a t-score (t) defined by the following equation:
t = (x̄ - μ) / SE
where x̄ is the sample mean, μ is the hypothesized population mean in the null hypothesis, and SE is the standard error.
4. P-value. The P-value is the probability of observing a sample statistic as extreme as the test statistic. Since the test statistic is a t-score, use the t Distribution Calculator to assess the probability associated with the t-score, given the degrees of freedom computed above. (See sample problems at the end of this lesson for examples of how this is done.)
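These four computations can be written out step by step in Python (hypothetical summary figures; the scipy t distribution is used here in place of the calculator):

# Compute SE (with the finite population correction), DF, t, and the P-value.
import math
from scipy import stats

x_bar, s, n = 9.6, 1.2, 25        # hypothetical sample mean, sample SD, sample size
N, mu0 = 400, 10.0                # hypothetical population size and hypothesized mean

# 1. Standard error of the sampling distribution
se = s * math.sqrt((1 / n) * ((N - n) / (N - 1)))
# When N is at least 20 times n, this is close to the simpler form s / math.sqrt(n)

# 2. Degrees of freedom
df = n - 1

# 3. Test statistic
t = (x_bar - mu0) / se

# 4. Two-sided P-value from the t distribution
p_value = 2 * stats.t.sf(abs(t), df)

print(f"SE = {se:.3f}, DF = {df}, t = {t:.3f}, P = {p_value:.4f}")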
Hypothesis Test for a Mean (contd.)
4. Interpret Results
If the sample findings are unlikely, given the null hypothesis, the
researcher rejects the null hypothesis. Typically, this involves comparing the P-
value to the significance level, and rejecting the null hypothesis when the P-value is
less than the significance level.
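Continuing the sketch from step 3 above (the variables p_value and mu0 come from that block, and α = 0.05 is a hypothetical choice), the decision rule can be written as:

# Compare the P-value with the chosen significance level and decide.
alpha = 0.05
if p_value < alpha:
    print(f"Reject H0: the sample mean differs significantly from {mu0}")
else:
    print(f"Fail to reject H0: the data are consistent with a mean of {mu0}")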
