
Stat PPT 1


Significant Difference of 2 or more Categories

Parametric Statistics
• It is based on the assumption that the data are drawn from a normally distributed population.
• It may also be based on the assumption that the two sets of data have the same mean, median, or variance.

Non-parametric Statistics
• It does not require many assumptions, such as normality of the data.
• These distribution-free techniques yield conclusions regardless of the shape of the population distributions.

Tools for Inferential Statistics

When to use a Parametric Test?

TWO MAIN ASSUMPTIONS
• Normality. How do we check or test for normality? Graphically, with a statistical test, or with skewness and kurtosis.
• Homogeneity. How do we test for homogeneity? Levene's Test.

To run a Test for Normality in SPSS
1. Click Analyze > Descriptive Statistics > Explore.
2. Select variables and send them to the Dependent List box.
3. Click on the Plots options. Tick Normality plots with tests.
4. Click Continue and then click Ok.

To run a Test for Normality in Jamovi
1. Click Exploration > Descriptives.
2. Select the dependent variable and send it to Variables.
3. Click on the Statistics options. Tick Shapiro-Wilk tests.
4. The result is shown in the right panel.

Skewness and Kurtosis should be somewhere in the span of -1.96 to +1.96.
The Shapiro-Wilk / Kolmogorov-Smirnov p-value should be above 0.05.
Histograms, Normal Q-Q plots and Box plots should visually indicate that our data are approximately normally distributed.

Skewness and Kurtosis
Divide the value of skewness by the standard error of skewness, and divide the value of kurtosis by the standard error of kurtosis. If either or both of the resulting values are within -1.96 and +1.96, then the data are normally distributed. Otherwise, the assumption of normality is rejected.

Normality Test
Kolmogorov-Smirnov and Shapiro-Wilk are the two tests of normality that SPSS displays. The Kolmogorov-Smirnov test is considered if the sample size is greater than or equal to 30; the Shapiro-Wilk test is used if the sample size is less than 30. The data are normally distributed if the p-value (sig. value) is greater than 0.05.
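These checks can also be reproduced outside SPSS and Jamovi. Below is a minimal Python sketch using scipy.stats; the sample scores are invented, and the standard errors of skewness and kurtosis use the common large-sample approximations sqrt(6/n) and sqrt(24/n), which are an assumption rather than something stated in the slides.

import numpy as np
from scipy import stats

scores = np.array([12, 15, 14, 10, 18, 16, 13, 15, 17, 11])  # hypothetical sample

# Skewness and kurtosis divided by their (approximate) standard errors
n = len(scores)
z_skew = stats.skew(scores) / np.sqrt(6 / n)
z_kurt = stats.kurtosis(scores) / np.sqrt(24 / n)   # excess kurtosis by default

# Shapiro-Wilk test: a p-value above 0.05 suggests approximate normality
w_stat, p_value = stats.shapiro(scores)

print(z_skew, z_kurt)
print(w_stat, p_value)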







When to use a Parametric Test? OTHER ASSUMPTIONS
• Data: normally distributed
• Scale of Measurement: at least interval
• Sampling Technique: probability sampling
• Sample Size: ≥ 30

Significant Difference for 2 Categories

t-test for Independent Samples
A parametric statistical procedure for comparing two samples that are independent, or not related.
Ho: There is no significant difference in the mean (dependent variable) when respondents (or subjects) were grouped by (independent variable).
• The dependent variable should be numerical (interval or ratio data).
• The independent variable should be categorical and have 2 categories.
• The data follow the normal probability distribution.
• The samples are randomly taken.

To run an Independent Samples t Test in Excel
1. Click on the "Data" menu, and then choose the "Data Analysis" tab.
2. Select either t-Test: Two-Sample Assuming Unequal Variances or t-Test: Two-Sample Assuming Equal Variances and click "OK".
3. Click in the Variable 1 Range box and select the range.
4. Click in the Variable 2 Range box and select the range.
5. Select any of the three in "Output option."
6. Click "OK".

To run an Independent Samples t Test in SPSS
1. Click Analyze > Compare Means > Independent-Samples T Test.
2. Transfer the dependent variable into the Test Variable(s) box, and transfer the independent variable into the Grouping Variable.
3. Click on Define Groups. Enter "1" into the Group 1 box and enter "2" into the Group 2 box.
4. Click Continue and then click Ok.
To run an Independent Samples t Test in Jamovi
1. Click Analyses > T-Tests > Independent-Samples T Test.
2. Drag and drop your outcome variable to Dependent Variables, and transfer the independent variable into the Grouping Variable.
3. The result is shown in the right panel.

Degrees of Freedom
Degrees of freedom are the number of "observations" in the data that are free to vary when estimating statistical parameters. Degrees of freedom are a combination of how much data you have and how many parameters you need to estimate.

Decision Rule
If the p-value (significance value) is less than or equal to α, reject the null hypothesis. Otherwise, do not reject the null hypothesis.
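For comparison, here is a minimal Python sketch of the same test with scipy.stats, using made-up scores for two hypothetical groups; Levene's test chooses between the pooled and Welch versions, and the decision rule above is applied to the p-value.

from scipy import stats

group1 = [78, 85, 90, 72, 88, 76, 81]   # hypothetical scores, group 1
group2 = [70, 75, 82, 68, 74, 79, 71]   # hypothetical scores, group 2

# Levene's test for homogeneity of variances
lev_stat, lev_p = stats.levene(group1, group2)

# Pooled t-test if the variances look equal, Welch's t-test otherwise
t_stat, p_value = stats.ttest_ind(group1, group2, equal_var=(lev_p > 0.05))

alpha = 0.05
print("Reject Ho" if p_value <= alpha else "Do not reject Ho", p_value)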

Mann Whitney U Test
A nonparametric statistical procedure for comparing two samples that are independent, or not related.
Ho: There is no significant difference in the mean (dependent variable) when respondents (or subjects) were grouped by (independent variable).
• The dependent variable should be numerical (interval or ratio data).
• The independent variable should be categorical and have 2 categories.
• The data are not normally distributed.
• The samples are not randomly taken.

To run a Mann Whitney U Test in SPSS
1. Click Analyze > Nonparametric Tests > Legacy Dialogs > 2 Independent Samples.
2. Transfer the dependent variable into the Test Variable List box, and the independent variable into the Grouping Variable box.
3. Click on the Define Groups button. Enter "1" into the Group 1 box and enter "2" into the Group 2 box.
4. Click Continue and then click Ok.

To run a Mann-Whitney U Test in Jamovi
1. Click Analyses > T-Tests > Independent-Samples T Test.
2. Drag and drop your outcome variable to Dependent Variables, and transfer the independent variable into the Grouping Variable.
3. Untick Student's in the Tests options. Tick Mann-Whitney U tests.
4. The result is shown in the right panel.
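The equivalent computation sketched in Python with scipy.stats, again with invented data for two independent groups:

from scipy import stats

group1 = [3, 4, 2, 5, 4, 3, 2]   # hypothetical skewed or ordinal scores
group2 = [5, 6, 4, 6, 5, 7, 5]

u_stat, p_value = stats.mannwhitneyu(group1, group2, alternative="two-sided")
print(u_stat, p_value)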
Paired Sample t-test
A parametric statistical procedure for comparing two samples that are paired or related.
Ho: There is no significant difference in the mean (dependent variable) of the respondents (or subjects) before and after (independent variables).
• The dependent variables should be numerical (interval or ratio data).
• The independent variable should consist of two categorical "related groups" or "matched pairs".
• The data follow the normal probability distribution.
• The samples are randomly taken.

To run a Paired Samples t Test in Excel
1. Click on the "Data" menu, and then choose the "Data Analysis" tab.
2. Select t-Test: Paired Two Sample for Means and click "OK".
3. Click in the Variable 1 Range box and select the range.
4. Click in the Variable 2 Range box and select the range.
5. Select any of the three in "Output option."
6. Click "OK".

To run a Paired Samples t Test in Jamovi
1. Click Analyses > T-Tests > Paired-Samples T Test.
2. Select the pair of variables and move them to "Paired Variables".
3. The result is shown in the right panel.
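A minimal Python sketch of the paired comparison, using hypothetical before-and-after scores for the same subjects:

from scipy import stats

before = [60, 65, 58, 70, 62, 68]   # hypothetical pre-test scores
after  = [66, 70, 61, 75, 65, 72]   # post-test scores of the same subjects

t_stat, p_value = stats.ttest_rel(before, after)
print(t_stat, p_value)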

Wilcoxon Signed-Rank Test
A nonparametric statistical procedure for comparing two samples that are paired or related.
Ho: There is no significant difference in the mean (dependent variable) of the respondents (or subjects) before and after (independent variables).
• The dependent variables should be numerical (ordinal, interval or ratio data).
• The independent variable should consist of two categorical "related groups" or "matched pairs".
• The data are not normally distributed.
• The samples are not randomly taken.

To run a Wilcoxon Signed-Rank Test in SPSS
1. Click Analyze > Nonparametric Tests > Legacy Dialogs > 2 Related Samples.
2. Select each pair of variables and move it to "Paired Variables".
3. Click Ok.

To run a Wilcoxon Signed-Rank Test in Jamovi
1. Click Analyses > T-Tests > Paired-Samples T Test.
2. Select the pair of variables and move them to "Paired Variables".
3. Untick Student's in the Tests options. Tick Wilcoxon rank test.
4. The result is shown in the right panel.
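The nonparametric paired test sketched in Python, with invented paired scores:

from scipy import stats

before = [12, 15, 11, 18, 14, 16, 13]
after  = [14, 16, 12, 20, 15, 18, 14]

w_stat, p_value = stats.wilcoxon(before, after)
print(w_stat, p_value)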
Significant Difference for 3 or more Categories

Independent Groups: One-way ANOVA (parametric) or Kruskal-Wallis H-Test (nonparametric)

• These tests are statistics that check whether three or more means are reliably different from each other.
• These tests find out whether a significant difference exists between three or more groups of data.

One-way Analysis of Variance (One-way ANOVA)
A parametric statistical procedure for comparing three or more samples that are independent, or not related.
Ho: There is no significant difference in the mean (dependent variable) when respondents (or subjects) were grouped by (independent variable).
• The dependent variable should be numerical (interval or ratio data).
• The independent variable should be categorical and have 3 or more categories.
• The data follow the normal probability distribution.
• The samples are randomly taken.

To run a One-way ANOVA in Excel
1. Click on the "Data" menu, and then choose the "Data Analysis" tab.
2. Select Anova: Single Factor and click OK. Next to Input Range, click the up arrow.
3. Select the data and click the down arrow. Select any of the three in "Output option."
4. Click "OK" to run the analysis.

To run a One-way ANOVA in SPSS
1. Click Analyze > Compare Means > One-Way ANOVA.
2. Transfer the dependent variable into the Dependent List box, and add the independent variable into the Factor box.
3. Click Options. Check the box for Means plot.
4. Click Continue and then click Ok.

To run a One-way ANOVA in Jamovi
1. Click Analyses > ANOVA > One-way ANOVA.
2. Drag and drop your outcome variable to Dependent Variables, and transfer the independent variable into the Grouping Variable.
3. Select whether your variances are equal or unequal. To test for equality of variances using Levene's test, tick the box Equality of variances.
4. The result is shown in the right panel.
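For reference, a one-way ANOVA can be sketched in Python with scipy.stats; the three groups below are hypothetical:

from scipy import stats

group_a = [80, 85, 78, 90, 84]   # hypothetical scores per group
group_b = [70, 72, 68, 75, 71]
group_c = [88, 92, 85, 90, 87]

f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f_stat, p_value)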
How about if the assumptions of the parametric test are not satisfied?

Kruskal-Wallis H-Test
A nonparametric statistical procedure for comparing three or more samples that are independent, or not related.
Ho: There is no significant difference in the mean (dependent variable) when respondents (or subjects) were grouped by (independent variable).
• The dependent variable should be numerical (interval or ratio data).
• The independent variable should be categorical and have 3 or more categories.
• The data are not normally distributed.
• The samples are not randomly taken.

To run a Kruskal-Wallis H Test in SPSS
1. Click Analyze > Nonparametric Tests > Legacy Dialogs > K Independent Samples.
2. Transfer the dependent variable into the Test Variable List box, and the independent variable into the Grouping Variable box.
3. Click on the Define Groups button. Enter the smallest value into the Minimum box and the largest value into the Maximum box.
4. Click Continue and then click Ok.

To run a Kruskal-Wallis H Test in Jamovi
1. Click Analyses > ANOVA > One-way ANOVA Kruskal-Wallis.
2. Drag and drop your outcome variable to Dependent Variables, and transfer the independent variable into the Grouping Variable.
3. The result is shown in the right panel.
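The same three-group comparison, run as a Kruskal-Wallis test in Python on invented data:

from scipy import stats

group_a = [3, 4, 2, 5, 4]
group_b = [6, 5, 7, 6, 5]
group_c = [8, 9, 7, 8, 9]

h_stat, p_value = stats.kruskal(group_a, group_b, group_c)
print(h_stat, p_value)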


What will you do if the result is significant?
• Use a post hoc test to explore the mean differences between pairs of groups.
• Post hoc tests attempt to control the experiment-wise error rate (usually alpha = 0.05) in the same manner that the one-way ANOVA is used instead of multiple t-tests.
• Post hoc tests are termed a posteriori tests; that is, performed after the event (the event in this case being a study).

Post Hoc Analysis
• Post hoc ("after this" in Latin) tests are used to uncover specific differences between three or more group means when an analysis of variance (ANOVA) F test is significant.
• Post hoc tests allow researchers to locate those specific differences and are calculated only if the omnibus F test is significant.

Source: http://methods.sagepub.com
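The slides do not name a particular post hoc procedure. As one common example, Tukey's HSD can be sketched in Python with statsmodels; the scores and group labels below are made up.

import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

scores = np.array([80, 85, 78, 90, 84,    # hypothetical group A
                   70, 72, 68, 75, 71,    # hypothetical group B
                   88, 92, 85, 90, 87])   # hypothetical group C
groups = np.array(["A"] * 5 + ["B"] * 5 + ["C"] * 5)

# Compares every pair of group means while holding the
# experiment-wise error rate at alpha = 0.05
result = pairwise_tukeyhsd(endog=scores, groups=groups, alpha=0.05)
print(result.summary())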
Friedman Test
A nonparametric statistical procedure for comparing more than two samples that are paired or related.
Ho: There is no significant difference in the mean (dependent variable) of the respondents (or subjects) at different points in time (independent variables).
• The dependent variables should be numerical (ordinal, interval or ratio data).
• The independent variable should consist of three or more categorical "related groups" or "matched pairs".
• The data are not normally distributed.
• The samples are not randomly taken.
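A minimal Python sketch with invented ratings of the same subjects at three points in time:

from scipy import stats

time1 = [5, 6, 4, 7, 5, 6]
time2 = [6, 7, 5, 8, 6, 7]
time3 = [7, 8, 6, 9, 7, 8]

chi_sq, p_value = stats.friedmanchisquare(time1, time2, time3)
print(chi_sq, p_value)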
Hypothesis Testing Concerning Significant Relationship

Correlation
A correlation exists between two variables when the values of one variable are somehow associated with the values of the other variable. It is a statistical technique that is used to measure and describe a relationship between two variables (X and Y).
• Correlation analysis attempts to measure the strength of such relationships between two variables by means of a single number called a correlation coefficient.
• A linear correlation exists between two variables when there is a correlation and the plotted points of paired data result in a pattern that can be approximated by a straight line.

Parametric Test: Pearson product-moment correlation (Pearson-r)
Nonparametric Test: Spearman's Rank Order Correlation (Spearman Rho)
• These tests are statistics that determine whether a significant relationship exists between two quantitative variables.
• These tests give information about the magnitude of the association, or correlation, as well as the direction of the relationship.

Pearson product-moment correlation (Pearson-r)
A parametric test statistic that measures the statistical relationship, or association, between two continuous variables.
Ho: There is no significant relationship between variable 1 and variable 2.
• The variables should be numerical (interval or ratio data).
• The data follow the normal probability distribution.
• The samples are randomly taken.

Degree of Correlation (r)
• Perfect: If the coefficient value is ±1, then it is said to be a perfect correlation: as one variable increases, the other variable tends to also increase (if positive) or decrease (if negative).
• High degree: If the coefficient value lies between ±0.50 and ±1, then it is said to be a strong correlation.
• Moderate degree: If the value lies between ±0.30 and ±0.49, then it is said to be a medium correlation.
• Low degree: When the value lies below ±0.29, then it is said to be a small correlation.
• No correlation: When the value is zero.

To run a Pearson Correlation in SPSS
1. Click Analyze > Correlate > Bivariate.
2. Move the two variables you want to test over to the Variables box on the right.
3. Make sure Pearson is checked under Correlation Coefficients.
4. Click Ok.

To run a Pearson Correlation in Jamovi
1. Click Analyses > Regression > Correlation Matrix.
2. Drag all the variables over to the right-hand side.
3. The result is shown in the right panel.
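In Python, the coefficient and its p-value come from a single scipy.stats call; the X and Y values below are hypothetical:

from scipy import stats

hours_studied = [2, 4, 6, 8, 10, 12]       # hypothetical X
exam_score    = [55, 60, 68, 72, 80, 85]   # hypothetical Y

r, p_value = stats.pearsonr(hours_studied, exam_score)
print(r, p_value)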
Spearman Rank Order Correlation (Spearman Rho)
A nonparametric test statistic that measures the statistical relationship, or association, between two variables.
Ho: There is no significant relationship between variable 1 and variable 2.
• The variables should be numerical (ordinal, interval or ratio data).
• The data are not normally distributed.
• The samples are not randomly taken.

Degree of Correlation (ρ)
• 0: no association between the variables.
• 0 to ±0.29: a negligible or very small association.
• ±0.30 to ±0.49: a moderate association between the variables.
• ±0.50 to ±0.69: a substantial association between the variables.
• greater than +0.70 or less than -0.70: a very strong association.
• ±1: a perfect association between the variables.

To run a Spearman's Correlation in SPSS
1. Click Analyze > Correlate > Bivariate.
2. Move the two variables you want to test over to the Variables box on the right.
3. Make sure Spearman is checked under Correlation Coefficients.
4. Click Ok.

To run a Spearman's Correlation in Jamovi
1. Click Analyses > Regression > Correlation Matrix.
2. Drag all the variables over to the right-hand side.
3. Untick Pearson in the Correlation Coefficients options. Tick Spearman.
4. The result is shown in the right panel.
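The rank-based counterpart sketched in Python, with invented ordinal data:

from scipy import stats

rank_x = [1, 2, 3, 4, 5, 6]   # hypothetical ranks or ordinal scores
rank_y = [2, 1, 4, 3, 6, 5]

rho, p_value = stats.spearmanr(rank_x, rank_y)
print(rho, p_value)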








Chi-square, Gamma, or Cramer's V
A statistical procedure for determining significant association between categorical variables.
Ho: There is no significant relationship between variable 1 and variable 2.
• The two variables should be categorical (nominal or ordinal data).
• The two variables should consist of two or more categorical, independent groups.
• Chi-square is a measure of association for nominal and ordinal variables.
• Gamma is a measure of association for ordinal variables.
• Cramer's V is a measure of association for nominal variables.

To run a Chi-square in SPSS
1. Click Analyze > Descriptive Statistics > Crosstabs.
2. Drag and drop (at least) one variable into the Row(s) box, and (at least) one into the Column(s) box.
3. Click on Statistics, and select Chi-square.
4. Click Continue, and then Ok.

To run a X^2 test for association in Jamovi
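As a cross-check outside SPSS and Jamovi, the chi-square test of association can be sketched in Python; Cramer's V is then computed from the chi-square statistic with the usual formula. The contingency table below is made up.

import numpy as np
from scipy import stats

# hypothetical 2x3 contingency table: rows = groups, columns = response categories
table = np.array([[20, 15, 25],
                  [30, 10, 20]])

chi2, p_value, dof, expected = stats.chi2_contingency(table)

# Cramer's V derived from the chi-square statistic
n = table.sum()
cramers_v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))

print(chi2, p_value, dof, cramers_v)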

Generating Descriptive Statistics

How to run Descriptive Statistics in Excel
1. Arrange the data in columns.
2. Click on the "Data" menu, and then choose the "Data Analysis" tab. (If you do not see the "Data Analysis" option, you need to install the add-in.)
3. Select "Descriptive Statistics", and click "OK".
4. Input the cells containing the data.
5. Select any of the three in "Output option."
6. Make sure "Summary statistics" is checked.
7. Click "OK".

To open your Excel file in SPSS
1. Click File, Open, Data from the SPSS menu.
2. Select the type of file you want to open: Excel (*.xls, *.xlsx, *.xlsm).
3. Select the file name.
4. Click "Read variable names" if the first row of the spreadsheet contains column headings.
5. Click Open.

To run Descriptive Statistics in SPSS
1. Click Analyze.
2. Add variables to the Variables box.
3. Click "OK".

To run Descriptive Statistics in SPSS
1. Click Analyze > Compare Means > Means.
2. Add variables to the Dependent List and Independent List.
3. Click "OK".

To open your Excel file in Jamovi
1. Click Open from the Jamovi menu.
2. Select This PC / Data Files.
3. Click Browse.
4. Click Open.

To run Descriptive Statistics in Jamovi
1. Select Analyses > Exploration > Descriptives.
2. Add variables to Variables and Split by.
3. The result is shown in the right panel.
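The same workflow (open a spreadsheet, summarize, optionally split by a grouping variable) can be sketched in Python with pandas; the file name and column names here are hypothetical.

import pandas as pd

df = pd.read_excel("scores.xlsx")   # hypothetical file; .xlsx files need the openpyxl engine

print(df.describe())                # count, mean, std, min, quartiles, max

# descriptives split by a grouping column, similar to Compare Means > Means
print(df.groupby("section")["score"].describe())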




