ASSIGNMENT No. 2

Name: Syed Sami Ul Hassan Rizvi

Roll No: 19GGT00160

Subject: Educational Statistics

Course code: 8614

Program: B.Ed

Semester: Spring 2023

Date: 10/09/2023
Q.1 How do we calculate the median? Also mention its merits and demerits.

Answer:

To start, let's understand what the median is. The median is the value
that separates the data into two equal halves, where half of the values
are above the median, and half are below it. Unlike the mean, which
is the average of all the values, the median is not affected by extreme
outliers in the dataset. This makes it a robust measure of central
tendency, especially in data sets with skewed distributions or extreme
values.

Calculating the median involves the following steps:

1. Arrange the data:

Start by arranging the dataset in ascending (or descending) order. This step is necessary to find the middle value(s) later.

2. Determine the number of data points:

Count the total number of observations in the dataset. This will help
identify whether there is an odd or even number of values.

3. Find the middle value(s):

If there is an odd number of observations, the median will be the middle value directly. For example, in a dataset of 7 values, the median is the fourth value. However, if there is an even number of observations, the median is the average of the two middle values. For instance, if the dataset contains 8 values, the median is the average of the fourth and fifth values.

Now, let's go through some examples to clarify the process:

Example 1: Odd number of values

Consider the dataset: 1, 3, 5, 2, 4, 6, 7

- Arrange the data in ascending order: 1, 2, 3, 4, 5, 6, 7

- As there are 7 values, the median will be the fourth value: 4

Example 2: Even number of values

Consider the dataset: 2, 7, 4, 6, 1, 3

- Arrange the data in ascending order: 1, 2, 3, 4, 6, 7

- As there are 6 values, the median will be the average of the third
and fourth values: (3 + 4) / 2 = 3.5
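
Both examples can be reproduced with a short piece of code. The following is a minimal Python sketch (plain Python, no external libraries) of the arrange, count, and select procedure described above; the function name `median` is chosen here purely for illustration.

```python
# A minimal sketch of the median procedure described above (plain Python, no libraries).
def median(values):
    """Return the median of a list of numbers."""
    ordered = sorted(values)   # step 1: arrange the data
    n = len(ordered)           # step 2: count the observations
    mid = n // 2
    if n % 2 == 1:             # odd count: take the middle value directly
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2   # even count: average the two middle values

print(median([1, 3, 5, 2, 4, 6, 7]))   # Example 1 -> 4
print(median([2, 7, 4, 6, 1, 3]))      # Example 2 -> 3.5
```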

Now that we understand how to calculate the median, let's discuss its
merits and demerits:

Merits of using the median:

1. Robustness:

The median is resistant to extreme values and outliers, making it suitable for datasets with skewed distributions or extreme observations. It is unaffected by these outliers because it only considers the middle value(s); a short numeric demonstration of this follows the list of merits.

2. Appropriate for ordinal data:

The median can be used with ordinal data, where the values have a
specific order but not necessarily an equal interval. It accurately
represents the central tendency in such cases.

3. Less affected by sampling fluctuations:

Unlike the mean, which can be pulled up or down by a few unusually large or small values in a sample, the median remains relatively stable from sample to sample. Thus, it provides a more reliable representation of the data's central position when extreme values are present.

4. Easy to understand and interpret:

The median represents the value that divides the data into two equal
halves, making it intuitive and easy to understand for individuals with
limited statistical knowledge.
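
As a brief illustration of merit 1 (robustness), the following sketch, using only Python's standard statistics module and invented scores, shows that one extreme outlier pulls the mean sharply upward while the median is unchanged.

```python
# Demonstration: an outlier distorts the mean but not the median.
from statistics import mean, median

scores = [4, 5, 5, 6, 6, 7, 7]
scores_with_outlier = [4, 5, 5, 6, 6, 7, 70]   # same data with one extreme value

print(mean(scores), median(scores))                            # about 5.71 and 6
print(mean(scores_with_outlier), median(scores_with_outlier))  # about 14.71 and still 6
```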

Demerits of using the median:

1. Less efficient:

The median discards a lot of information from the dataset, as it only considers the middle value(s) and ignores the rest. This can lead to a loss of precision in estimating the central tendency, especially when the dataset is large and rich in data points.

2. Limited arithmetic properties:

Unlike the mean, which has certain arithmetic properties (e.g., the
sum of deviations from the mean equals zero), the median lacks such
properties. This can make it less desirable in specific statistical
calculations.

3. Limited inferential statistics:

The median has fewer inferential statistical techniques associated with it compared to the mean. This can limit its application in certain analyses and interpretations.

4. Less informative for interval or ratio data:

While the median is well suited to ordinal data, it makes less complete use of interval or ratio data, which have equal intervals between values. In such cases, the mean, which uses every value, is usually the preferred measure of central tendency.

In conclusion, the median is a useful measure of central tendency that
represents the middle value in a dataset. It is relatively unaffected by
extreme values and provides a robust estimation of central position,
making it advantageous in various scenarios. However, it has
limitations when compared to the mean, such as being less efficient
and having fewer arithmetic properties. Understanding the context
and characteristics of the data is crucial in determining whether the
median is the appropriate measure to use.

Q.2 Explain the process and errors in hypothesis testing.

Answer:

Hypothesis testing is a statistical method used to make inferences about population parameters based on sample data. It involves formulating a null hypothesis and an alternative hypothesis, collecting data, conducting statistical tests, and interpreting the results. It is an essential tool in scientific research and decision-making processes. However, like any statistical method, hypothesis testing has its own limitations and potential errors.

The Process of Hypothesis Testing:

1. Formulating the null and alternative hypotheses:

The first step in hypothesis testing is to define the null hypothesis (H0) and the alternative hypothesis (Ha). The null hypothesis represents the status quo or the assumption that there is no significant difference or relationship between variables, while the alternative hypothesis suggests the presence of a significant difference or relationship.

For example, let's consider a study investigating whether a new drug is effective in treating a specific condition. The null hypothesis could be that the drug has no effect (H0: the mean difference between the drug and placebo groups is zero), while the alternative hypothesis could be that the drug is effective (Ha: the mean difference between the drug and placebo groups is not zero).

2. Selecting the appropriate test statistic:

The choice of the test statistic depends on the nature of the variables
being studied and the research question. Common test statistics
include t-tests, chi-square tests, and F-tests. The selection of the test
statistic is crucial as it determines the sampling distribution used to
calculate the p-value, which measures the strength of evidence
against the null hypothesis.

In our example, if the outcome variable is continuous, we may use a t-test to compare the means of the drug and placebo groups. If the outcome variable is categorical, a chi-square test might be more appropriate.

3. Collecting and analyzing the data:

Once the hypotheses and test statistic are defined, data collection can
begin. It is important to ensure that the sample is representative of the
population of interest to make valid inferences.

In our example, participants would be randomly assigned to either the drug or placebo group, and their outcomes would be measured and recorded. The appropriate test statistic (e.g., t-statistic) would then be calculated using the sample data.

4. Calculating the p-value:

The p-value is the probability of observing a test statistic as extreme as the one calculated, assuming the null hypothesis is true. It measures the strength of evidence against the null hypothesis. A low p-value (typically less than 0.05) suggests strong evidence against the null hypothesis, leading to its rejection in favor of the alternative hypothesis.

5. Interpreting the results:

The final step is to interpret the results and draw conclusions. If the
p-value is less than the chosen significance level (usually 0.05), we
reject the null hypothesis in favor of the alternative hypothesis. In this
case, we conclude that there is evidence to support the alternative
hypothesis.

If the p-value is greater than the significance level, we fail to reject the null hypothesis. However, this does not provide evidence for the null hypothesis's truth; it simply means that we do not have sufficient evidence to reject it.
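
To make the five steps concrete, here is a small Python sketch of the drug-versus-placebo example using SciPy's independent-samples t-test (scipy.stats.ttest_ind). The outcome scores are invented purely for illustration; a real study would use the collected data.

```python
# A hedged sketch of the drug-vs-placebo comparison with an independent-samples t-test.
from scipy import stats

drug    = [5.1, 6.2, 5.8, 6.5, 5.9, 6.1, 6.4, 5.7]   # hypothetical outcome scores
placebo = [4.8, 5.0, 5.3, 4.9, 5.2, 5.1, 4.7, 5.4]

t_stat, p_value = stats.ttest_ind(drug, placebo)      # H0: the two group means are equal

alpha = 0.05
if p_value < alpha:
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}: reject H0 (evidence of a drug effect)")
else:
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}: fail to reject H0")
```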

Errors in Hypothesis Testing:

Hypothesis testing is not a foolproof method, and errors can occur. These errors are known as Type I and Type II errors.

1. Type I error (False Positive):

A Type I error occurs when the null hypothesis is rejected, but it is actually true. It is the error of concluding a significant effect or relationship when none exists. The probability of committing a Type I error is denoted by α (alpha), commonly set at 0.05 or 0.01. It represents the maximum allowable probability of rejecting the null hypothesis when it is true.

In our example, a Type I error would occur if we erroneously conclude that the drug is effective when it has no real effect.

2. Type II error (False Negative):

A Type II error occurs when the null hypothesis is not rejected, but it is false. It is the error of failing to detect a significant effect or relationship that truly exists. The probability of committing a Type II error is denoted by β (beta) and is influenced by factors such as sample size, the magnitude of the effect, and the chosen significance level (α). The complement of β is the power of the test (1 - β), which represents the probability of correctly rejecting a false null hypothesis.

In our example, a Type II error would occur if we fail to conclude that the drug is effective when it actually has a real effect.
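
The meaning of α can also be seen in a quick simulation. The sketch below (Python with NumPy and SciPy; the sample sizes and number of repetitions are arbitrary choices) repeatedly tests two samples drawn from the same population, so the null hypothesis is true by construction, and counts how often it is nevertheless rejected. The observed false-positive rate should come out close to α.

```python
# Simulation: when H0 is true, roughly alpha of all tests are still "significant".
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, n_tests, false_positives = 0.05, 2000, 0

for _ in range(n_tests):
    a = rng.normal(0, 1, 30)   # both samples come from the same distribution,
    b = rng.normal(0, 1, 30)   # so the null hypothesis is true by construction
    _, p = stats.ttest_ind(a, b)
    if p < alpha:
        false_positives += 1

print(f"False-positive rate: {false_positives / n_tests:.3f} (expected about {alpha})")
```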

Minimizing Errors in Hypothesis Testing:

While it is impossible to completely eliminate errors in hypothesis testing, there are strategies to minimize them:

1. Increase sample size:

One way to reduce both Type I and Type II errors is to increase the sample size. A larger sample provides more information and reduces the variability of the estimate, improving the power of the test. It allows for more accurate estimation of effect sizes and enhances the ability to detect true differences or relationships.

2. Choose an appropriate significance level:

The significance level (α) determines the threshold at which the null hypothesis is rejected. Choosing a smaller α reduces the probability of Type I errors but increases the chance of Type II errors. Researchers should carefully consider the consequences of each error type and the context of the study when selecting the significance level.

3. Conduct a power analysis:

A power analysis helps determine the sample size required to achieve a desired power level (1 - β) while controlling the Type I error rate. By conducting a power analysis before data collection, researchers can estimate the required sample size to detect a specific effect size with sufficient power. This ensures that the study is adequately powered to detect meaningful effects (a short code sketch of such a calculation follows this list).

4. Use appropriate statistical tests:

Using the wrong statistical test, or relying on assumptions that the data do not meet, can lead to biased and unreliable results. It is crucial to select the appropriate test based on the nature of the variables and the research question. Consulting a statistician or using statistical software can help ensure the correct test is employed.

5. Replicate the study:

Replication of research findings by independent researchers is an essential step in scientific inquiry. Replication provides additional evidence for or against the null hypothesis, reducing the chances of Type I and Type II errors. It also allows for the estimation of effect size variability across different studies.
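
As a concrete illustration of point 3 above, the following sketch uses the statsmodels package to estimate the sample size needed per group to detect an assumed medium effect (Cohen's d = 0.5) with 80% power at α = 0.05. The effect size, power, and significance level are assumptions chosen only for the example.

```python
# A hedged sketch of an a-priori power analysis for a two-group comparison.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5,   # assumed medium effect (Cohen's d)
                                   alpha=0.05,        # Type I error rate
                                   power=0.80)        # desired power (1 - beta)
print(f"Required sample size per group: {n_per_group:.0f}")   # roughly 64 per group
```
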
Conclusion:

Hypothesis testing is a fundamental statistical method used to make inferences about population parameters. It follows a systematic process, from formulating hypotheses to interpreting the results. However, errors can occur in hypothesis testing, namely Type I and Type II errors. These errors can have significant consequences and should be minimized. By increasing sample size, choosing appropriate significance levels, conducting power analyses, using correct statistical tests, and replicating studies, researchers can reduce the likelihood of making errors and draw more accurate conclusions.

Q.3 What do you understand by ‘Pearson Correlation’? Where is it used and how is it interpreted?

Answer:

Pearson correlation is a statistical measure that quantifies the strength and direction of the linear relationship between two variables. It is one of the most widely used correlation coefficients and is denoted by the symbol r. The Pearson correlation coefficient ranges from -1 to +1, where -1 represents a perfect negative correlation, +1 represents a perfect positive correlation, and 0 represents no correlation.

The Pearson correlation coefficient was developed by Karl Pearson, a British mathematician, in the 1890s. It is based on the concept of covariance, which measures the extent to which two variables vary together. Pearson's correlation is a normalized version of the covariance, which makes it easier to interpret and compare across different datasets.

The formula for calculating the Pearson correlation coefficient (r) is as follows:

r = Σ((X - X̄)(Y - Ȳ)) / (√(Σ(X - X̄)²) * √(Σ(Y - Ȳ)²))

Where:

- X and Y are the values of the two variables being correlated

- X̄ and Ȳ are the means of the two variables

- Σ denotes the sum of the calculations
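
As an illustration, the sketch below computes r for a small invented dataset, both by applying the formula directly with NumPy and by calling scipy.stats.pearsonr, which also returns a p-value for testing whether the correlation differs from zero. The study-time and exam-score values are made up for the example.

```python
# A hedged sketch: Pearson's r computed by hand and with SciPy on invented data.
import numpy as np
from scipy import stats

x = np.array([1, 2, 3, 4, 5, 6])        # e.g., hours of study time
y = np.array([52, 57, 61, 68, 72, 79])  # e.g., exam scores

# Manual calculation following the formula above
r_manual = np.sum((x - x.mean()) * (y - y.mean())) / (
    np.sqrt(np.sum((x - x.mean()) ** 2)) * np.sqrt(np.sum((y - y.mean()) ** 2)))

r_scipy, p_value = stats.pearsonr(x, y)  # also tests H0: no correlation
print(f"r (manual) = {r_manual:.3f}, r (scipy) = {r_scipy:.3f}, p = {p_value:.4f}")
```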

The Pearson correlation coefficient can be calculated manually using the formula above, or it can be obtained using statistical software or calculators. It has several important properties that make it a useful tool for data analysis.

First, it is a measure of linear association, meaning it assesses the strength of the linear relationship between two variables. If the relationship is not linear, the Pearson correlation coefficient may not accurately represent the association between the variables.

Second, the Pearson correlation coefficient is symmetric, meaning the order of the variables does not affect the value of the correlation. For example, the correlation between X and Y is the same as the correlation between Y and X.

Third, the Pearson correlation coefficient is dimensionless, which means it is not affected by the scale or units of measurement of the variables. This property allows for easy comparison of correlations across different datasets.

The interpretation of the Pearson correlation coefficient depends on its magnitude. The value of r ranges from -1 to +1, with a value of 0 representing no correlation, -1 representing a perfect negative correlation, and +1 representing a perfect positive correlation.

If the correlation coefficient is close to 0 (around -0.1 to +0.1), it indicates a very weak or no linear relationship between the variables. As the absolute value of r increases, the strength of the linear association becomes stronger. A correlation coefficient of -0.5 or +0.5 is considered moderate, -0.7 or +0.7 is considered strong, and -1 or +1 is considered very strong.

The sign of the correlation coefficient indicates the direction of the relationship. A positive correlation coefficient (+r) indicates a positive relationship, where an increase in one variable is associated with an increase in the other variable. For example, if the correlation coefficient between study time and exam score is +0.8, it means that as study time increases, the exam score tends to increase as well.

On the other hand, a negative correlation coefficient (-r) indicates a negative relationship, where an increase in one variable is associated with a decrease in the other variable. For example, if the correlation coefficient between rainfall and temperature is -0.6, it means that as rainfall increases, the temperature tends to decrease.

It is essential to note that the Pearson correlation coefficient measures only the strength and direction of the linear relationship between two variables. It does not imply causation, meaning that a high correlation does not necessarily imply that one variable causes the other. Correlation coefficients can be affected by confounding factors or other variables that influence both variables independently.

The Pearson correlation coefficient is widely used in various fields, including but not limited to:

1. Social sciences:

It is used to assess the relationships between variables in psychology, sociology, education, and other social science research. For example, it can be used to study the relationship between income and happiness, or between parental education and children's academic performance.

2. Business and economics:

Pearson correlation is employed to analyze relationships between economic variables. It can be used to study the correlation between stock prices of different companies, the relationship between advertising expenditure and sales revenue, or the association between unemployment rate and GDP growth.

3. Medicine and healthcare:

Pearson correlation is utilized to analyze relationships between variables in medical and healthcare research. It can be used to study the correlation between a patient's age and blood pressure, the association between smoking and lung cancer, or the relationship between exercise and heart rate.

4. Environmental sciences:

Pearson correlation is applied to analyze relationships between environmental variables. For example, it can be used to study the correlation between temperature and plant growth, or the association between air pollution and respiratory diseases.

5. Education research:

Pearson correlation is used to assess relationships between variables in educational research. It can be used to study the correlation between instructional strategies and student achievement, the relationship between teacher qualifications and student performance, or the association between class size and student engagement.

6. Market research:

Pearson correlation is employed to analyze relationships between variables in market research. It can be used to study the correlation between customer satisfaction and repeat purchases, the relationship between price sensitivity and brand loyalty, or the association between product quality and sales volume.

To interpret the Pearson correlation coefficient accurately, it is
essential to consider other factors such as the sample size, statistical
significance, and the context of the variables being studied. An
examination of these factors can help determine the practical
relevance and reliability of the correlation.

The sample size is an important consideration because correlations can fluctuate in small samples due to random variability. A larger sample size provides more reliable estimates of the true correlation in the population.

Statistical significance indicates whether the observed correlation is likely to have occurred by chance. Hypothesis testing can be used to determine whether the correlation coefficient is significantly different from zero. If the p-value associated with the correlation is less than a predetermined significance level (e.g., 0.05), it suggests that the correlation is statistically significant, and the null hypothesis of no correlation can be rejected.

However, it is crucial to remember that statistical significance does not necessarily imply practical significance. A statistically significant correlation may have a small effect size, which may not have much practical significance or real-world importance. Thus, it is essential to consider the effect size along with statistical significance when interpreting the correlation.

The context and domain knowledge of the variables being studied are
also crucial in interpreting the correlation. Understanding the nature
of the variables and the underlying theory or previous research in the
field can provide important insights into the meaning and
implications of the correlation coefficients. It is generally
recommended to interpret correlations in conjunction with other
statistical analyses and supporting evidence.

In summary, Pearson correlation is a statistical measure that quantifies the strength and direction of the linear relationship between two variables. It is widely used in various fields to analyze the relationships between different variables. The interpretation of the Pearson correlation coefficient depends on its magnitude and sign, with a value of 0 representing no correlation, -1 representing a perfect negative correlation, and +1 representing a perfect positive correlation. However, it is important to consider sample size, statistical significance, and the context of the variables to accurately interpret and understand the practical relevance of the correlation.

Q.4 Explain ANOVA and its logic.

Answer:

ANOVA, which stands for Analysis of Variance, is a statistical
method used to analyze the differences between two or more means
or groups. It helps to determine whether the differences observed are
statistically significant or simply due to random chance. ANOVA is
an extension of the t-test, which is used to compare the means of two
groups, and it allows for the comparison of multiple groups
simultaneously.

The logic behind ANOVA is based on the idea of partitioning the total variance observed in a dataset into different components. These components represent the variation due to different sources, such as the variation within groups and the variation between groups. By comparing the variation between the groups to the variation within the groups, ANOVA determines whether the observed differences in means are greater than what would be expected by chance.

The null hypothesis in ANOVA is that there is no difference between the means of the groups being compared. The alternative hypothesis is that at least one of the group means is different from the others. ANOVA calculates a test statistic called the F-statistic, which is the ratio of the variation between groups to the variation within groups. If the F-statistic is large enough, it suggests that the group means are significantly different, and the null hypothesis is rejected.

To understand ANOVA in more depth, it is important to understand the different sources of variance that are considered in the analysis. ANOVA decomposes the total sum of squares (SST), which measures the total variability in the data, into two components: the between-groups sum of squares (SSB) and the within-groups sum of squares (SSW).

The between-groups sum of squares (SSB) quantifies the variation between the means of the different groups being compared. It is calculated by taking the sum of the squared differences between each group mean and the overall mean, weighted by the sample sizes of the groups. The between-groups sum of squares represents the variability that is explained by the different group means.

The within-groups sum of squares (SSW) quantifies the variation within each group. It is calculated by taking the sum of the squared differences between each individual observation and the mean of its group. The within-groups sum of squares represents the variability that is not accounted for by the group means.

The total sum of squares (SST) is the sum of the squared differences between each individual observation and the overall mean. It represents the total variability in the dataset, and SST = SSB + SSW. Based on these components, ANOVA calculates the F-statistic as the ratio of the between-groups mean square to the within-groups mean square, where each sum of squares is divided by its degrees of freedom:
F = (SSB / dfB) / (SSW / dfW)

where dfB is the degrees of freedom associated with the between-groups sum of squares, and dfW is the degrees of freedom associated with the within-groups sum of squares.

The degrees of freedom for the between-groups sum of squares (dfB) is equal to the number of groups minus one. The degrees of freedom for the within-groups sum of squares (dfW) is equal to the total sample size minus the number of groups.

Once the F-statistic is calculated, it is compared to a critical value from the F-distribution. This critical value depends on the chosen significance level (usually 0.05) and the degrees of freedom. If the calculated F-statistic is greater than the critical value, the null hypothesis is rejected, and it can be concluded that there is a significant difference between at least two of the group means.
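
The logic described above can be traced in code. The sketch below computes SSB, SSW, the degrees of freedom, and the F-statistic by hand for three invented groups of scores, and checks the result against SciPy's scipy.stats.f_oneway; all of the data are hypothetical.

```python
# A hedged sketch of one-way ANOVA: manual sums of squares checked against SciPy.
import numpy as np
from scipy import stats

groups = [np.array([72, 75, 78, 71, 74]),
          np.array([80, 82, 79, 85, 83]),
          np.array([68, 70, 65, 72, 69])]

all_obs = np.concatenate(groups)
grand_mean = all_obs.mean()

ssb = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)   # between-groups SS
ssw = sum(((g - g.mean()) ** 2).sum() for g in groups)             # within-groups SS
df_b = len(groups) - 1                                             # number of groups - 1
df_w = len(all_obs) - len(groups)                                  # total N - number of groups

f_manual = (ssb / df_b) / (ssw / df_w)
f_scipy, p_value = stats.f_oneway(*groups)
print(f"F (manual) = {f_manual:.2f}, F (scipy) = {f_scipy:.2f}, p = {p_value:.4f}")
# If p < 0.05, reject H0 that all group means are equal; a post-hoc test such as
# Tukey's HSD would then identify which specific pairs of groups differ.
```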

If ANOVA indicates that there is a significant difference between the group means, it does not, by itself, identify which specific groups differ from each other. In such cases, post-hoc tests can be conducted to compare the means of specific groups. These post-hoc tests, such as Tukey's Honestly Significant Difference (HSD) test or the Bonferroni correction, help to determine pairwise differences among the groups.

ANOVA can be extended to analyze more complex designs, such as factorial designs with multiple independent variables. In these cases, ANOVA allows for the examination of main effects (the overall effect of each independent variable) and interaction effects (the combined effect of two or more independent variables).

Additionally, ANOVA assumptions should be considered when interpreting the results. These assumptions include the normality of the data within groups, the homogeneity of variances, and the independence of observations. Violations of these assumptions may affect the validity of the ANOVA results, and alternative tests or transformations may need to be considered.

In conclusion, ANOVA is a powerful statistical method used to analyze the differences between two or more means or groups. It is based on the partitioning of the total variance observed in a dataset into different components, and it determines whether the observed differences are statistically significant or due to chance. ANOVA calculations involve the comparison of the variation between and within groups, and the resulting F-statistic is compared to a critical value to reach a conclusion. ANOVA provides valuable insights into the differences among groups and is widely used in various fields of research and data analysis.

Q.5 Explain Chi-Square. Also discuss it as a test of independence.

Answer:

Chi-square is a statistical test that is used to determine whether there is a significant association between two categorical variables in a population. It measures the difference between the observed frequencies and the expected frequencies under a null hypothesis of independence. The chi-square test is widely used in various fields such as social sciences, biology, business, and psychology to analyze data and infer relationships between variables.

The basic principle behind the chi-square test is comparing the observed frequencies of the categories of the two variables with the frequencies that would be expected if the variables were independent. If the observed frequencies significantly differ from the expected frequencies, it suggests that there is an association between the variables.

The chi-square test can be applied to both small and large sample sizes, as it is based on comparing frequencies rather than means. There are two common types of chi-square tests: the chi-square goodness-of-fit test and the chi-square test of independence.

1. Chi-Square Goodness-of-Fit Test:

The chi-square goodness-of-fit test assesses whether the observed frequencies of a single categorical variable differ significantly from the frequencies that would be expected under a specific theoretical distribution or assumption. This test is useful when we want to determine if the observed data matches our expectations.

Let's consider an example to understand the chi-square goodness-of-fit test. Suppose we are interested in assessing whether the observed frequencies of four eye colors in a sample of 100 people are consistent with an equal distribution across the four categories. The test involves the following steps:

1. Determine the expected frequencies. In our example, if the total sample size is 100 and we assume an equal distribution, we would expect the frequency of each eye color to be 25 (100/4).

2. Calculate the chi-square statistic using the formula:

χ² = Σ( (Observed frequency - Expected frequency)² / Expected frequency )

This formula calculates the sum of the squared differences between observed and expected frequencies divided by the expected frequency for each category.

3. Determine the degrees of freedom (df). For the goodness-of-fit test, the degrees of freedom are calculated as the number of categories minus one (df = k - 1, where k is the number of categories). In our example, df = 4 - 1 = 3.

4. Compare the calculated chi-square value with the critical chi-square value at the desired significance level (α), usually 0.05 or 0.01. If the calculated chi-square value exceeds the critical value, we reject the null hypothesis and conclude that there is a significant difference between the observed and expected frequencies.

The chi-square test statistic follows a chi-square distribution, with the shape of the distribution depending on the degrees of freedom. By comparing the calculated chi-square value with the critical value, we can determine the significance of the difference between the observed and expected frequencies.
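
As a concrete illustration of the goodness-of-fit procedure, the sketch below applies scipy.stats.chisquare to the eye-color example: 100 observations across four categories tested against an equal expected distribution of 25 per category. The observed counts are invented for illustration.

```python
# A hedged sketch of the chi-square goodness-of-fit test for the eye-color example.
from scipy import stats

observed = [30, 22, 28, 20]   # hypothetical counts for the four eye colors (total 100)
expected = [25, 25, 25, 25]   # equal distribution under H0

chi2, p_value = stats.chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {chi2:.2f}, df = {len(observed) - 1}, p = {p_value:.4f}")
# If p < 0.05, reject H0 that the observed distribution matches the expected one.
```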

2. Chi-Square Test of Independence:

The chi-square test of independence is used to examine whether there is a significant association between two categorical variables. It determines if the observed frequencies differ significantly from the frequencies expected under the assumption of independence between the variables.
Let's consider an example to understand the chi-square test of independence. Suppose we want to investigate the relationship between gender (male or female) and smoking habits (smoker or non-smoker). We collect data from a sample of individuals, categorizing them into four groups: male smokers, male non-smokers, female smokers, and female non-smokers. We want to determine if there is an association between gender and smoking habits.

The chi-square test of independence involves the following steps:

1. State the null hypothesis (H0) and the alternative hypothesis (Ha) based on the research question. In this case, the null hypothesis would be that there is no association between gender and smoking habits.

2. Construct a contingency table, also known as a cross-tabulation or a two-way table, which shows the frequencies of the categories for both variables. The table will have rows for gender (male and female) and columns for smoking habits (smoker and non-smoker). The cells of the table will contain the observed frequencies for each combination.

              | Smoker | Non-smoker |
     Male     |   a    |     b      |
     Female   |   c    |     d      |

3. Calculate the expected frequencies for each combination assuming the variables are independent. This is done by multiplying the row total by the column total and dividing by the grand total. For example, the expected frequency for male smokers (cell a) would be (Row 1 total * Column 1 total) / Grand total.

4. Calculate the chi-square statistic using the formula:

χ² = Σ( (Observed frequency - Expected frequency)² / Expected frequency )

This formula calculates the sum of the squared differences between observed and expected frequencies divided by the expected frequency for each combination.

5. Determine the degrees of freedom (df). For the chi-square test of independence, the degrees of freedom are calculated as (number of rows - 1) multiplied by (number of columns - 1) (df = (r - 1) * (c - 1), where r is the number of rows and c is the number of columns).

6. Compare the calculated chi-square value with the critical chi-square value at the desired significance level (α), typically 0.05 or 0.01. If the calculated chi-square value exceeds the critical value, we reject the null hypothesis and conclude that there is a significant association between the variables.

The chi-square test of independence thus assesses whether there is an association between two categorical variables: it determines whether the observed frequencies of the combinations are significantly different from what would be expected if the variables were independent.
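
The whole procedure can be carried out in one call with scipy.stats.chi2_contingency, which computes the expected frequencies, the chi-square statistic, the degrees of freedom, and the p-value directly from the contingency table. The cell counts below are invented and stand in for the hypothetical cells a, b, c, and d above.

```python
# A hedged sketch of the gender-by-smoking example as a 2x2 test of independence.
from scipy.stats import chi2_contingency

#            smoker  non-smoker
table = [[40, 60],   # male    (cells a, b)
         [25, 75]]   # female  (cells c, d)

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.4f}")
# If p < 0.05, reject H0 of independence and conclude that gender and smoking
# habits are associated in this (hypothetical) sample.
```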

The chi-square test has some important assumptions:

1. The data used for the test should be categorical or counts in nature.

2. The observations should be independent of each other.

3. The expected frequency in each category should be at least 5. If the expected frequency in any category is less than 5, the chi-square test may not be valid, and alternative tests like Fisher's exact test may be more appropriate.

In summary, the chi-square test is a statistical analysis that determines the presence or absence of an association between two categorical variables. It can be used as a goodness-of-fit test to compare observed and expected frequencies under a specific assumption or theoretical distribution, or as a test of independence to examine the association between two categorical variables. By comparing the calculated chi-square value with the critical value at a defined significance level, we can determine if the observed differences are statistically significant.
