Epre 412 Chapter 6 and 7

Analysing and presenting quantitative data:
issues of interpretation and quality
Chapters 6 and 7

Analysing and presenting quantitative data
• Focus on how "statistics" is used to present data and provide evidence
• In common usage, refers to numerical data
• What statistics is: procedures and rules for reducing large masses of data to manageable proportions
• Refers to the methodology for the collection, presentation, analysis and interpretation of data, allowing us to draw conclusions from those data
• Page 138, prescribed textbook
Two types of statistics
1. Descriptive statistics
• Tools for organising, tabulating, depicting, describing, summarising and reducing a mass of data
• Transform or summarise a set of data into an overview, such as a table or a graph
2. Inferential statistics
• Builds upon descriptive statistics
• Predicts or estimates the properties of a population from knowledge of the properties of only a sample of that population
• Aims to make predictions or inferences about the similarity of a sample to the population from which it is drawn
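The distinction can be illustrated with a short Python sketch. The test scores are invented, and the rough interval formula (mean plus or minus two standard errors) is a standard rule of thumb, not something from the textbook:

```python
import math
import statistics

# Hypothetical sample of 10 test scores (out of 100)
scores = [56, 61, 48, 72, 65, 59, 70, 55, 63, 51]

# Descriptive statistics: summarise THIS sample
mean = statistics.mean(scores)        # 60
spread = statistics.stdev(scores)     # sample standard deviation

# Inferential statistics: estimate the POPULATION mean from the
# sample, using a rough 95% interval (mean +/- 2 standard errors)
se = spread / math.sqrt(len(scores))
low, high = mean - 2 * se, mean + 2 * se

print(f"sample mean = {mean}")
print(f"approx. 95% interval for the population mean: {low:.1f} to {high:.1f}")
```

The first two lines of output describe only the sample; the interval is the inferential step, a statement about the wider population the sample came from.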
Terminology (page 139)
1. Categorical data: tell how many observations there are in a particular category
• Can be sorted into groups or categories
• Example: 23 girls and 17 boys
2. Measurement or numerical data: the result of measurement
• Example: a person is 173 cm tall
• Test score, weight, speed
3. Variables
• A variable is a property of an object, event or person that can take on different values
Terminology cont.
• Variable: a characteristic which differs among the observational units on which it is defined
• Variables can be numerical, e.g. kilometres travelled
• May be non-numerical, e.g. language preference, occupation
• Example: test scores
• Activity 6.2 (textbook)
Think of children in your class or in your school. What are some of the things that make them different from one another? Write down these variables.
Organising and presenting data (page 141)
1. Tabulate data
2. Organise data: arrange scores in descending order
3. Draw up a frequency distribution: present data as the number of learners/people in a particular category
Examples of frequency distributions: textbook figures 6.1-6.4
Example: frequency distribution

Table: number of learners who wrote and passed the English test

Race       Wrote   Achieved 50% or higher
Black      56      25
Coloured   70      45
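A frequency distribution like the one above can be produced from raw categorical data with Python's `collections.Counter`. The learner list below is made up purely for illustration:

```python
from collections import Counter

# Invented raw data: home language of 12 learners
languages = ["Zulu", "English", "Zulu", "Xhosa", "Zulu",
             "English", "Xhosa", "Zulu", "English", "Zulu",
             "Xhosa", "English"]

# Counter tallies how many observations fall in each category
freq = Counter(languages)

for category, count in freq.most_common():
    print(f"{category:8} {count}")
# Zulu     5
# English  4
# Xhosa    3
```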
Graphic representation of data
1. Histogram: bar graph
2. Frequency polygon: line graph
3. Pictogram: uses a stick figure to represent a number of people or any variable (page 141, prescribed book)
4. Pie chart

[Figure slides: histogram, frequency polygon, pie graph]
Graphing relative frequencies (percentages)
• Relative frequency says what the frequency of a category is relative to the whole data set; in everyday terms, what percentage of the whole that category comprises
• Represents the percentage of the whole (the relative frequency) rather than the number in each category (the frequency)
• Example
Total number of learners in grade 4: 130
Number of girls who passed the test: 30
Total number of girls: 60
Percentage of girls who passed: 50%
Percentage of girls who passed in grade 4: ?
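A relative frequency is just a count divided by the relevant total. The calculation for the girls-in-grade-4 example is a one-liner in Python:

```python
girls_passed = 30
total_girls = 60
total_learners = 130

# Percentage of girls who passed (relative to all girls)
pct_of_girls = girls_passed / total_girls * 100      # 50.0

# Percentage of all grade 4 learners who are girls that passed
pct_of_grade = girls_passed / total_learners * 100   # about 23.1

print(f"{pct_of_girls:.0f}% of girls passed")
print(f"{pct_of_grade:.1f}% of all grade 4 learners are girls who passed")
```

Note how the same count (30) gives a different percentage depending on which total it is taken relative to.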
Measures of central tendency
• Also called averages (see prescribed book, page 153)
Measures of central tendency summarise data by quoting a "typical" or representative score for the whole set.
Three (3) measures of central tendency
a) Mode: the score which occurs most frequently in a collection/distribution. It is the most frequent score. When all the scores in a group occur with the same frequency, there is no mode.
When two scores share the highest frequency, there are two modes (bimodal).
Example: 7, 9, 10, 20, 6, 2, 6, 17, 6, 3
Mode = 6
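The example can be checked with Python's `statistics` module; `multimode` also handles the bimodal case mentioned above (the bimodal list is invented):

```python
from statistics import mode, multimode

scores = [7, 9, 10, 20, 6, 2, 6, 17, 6, 3]
print(mode(scores))        # 6, the most frequent score

# Invented bimodal example: 4 and 9 both occur twice
bimodal = [4, 9, 4, 9, 1]
print(multimode(bimodal))  # [4, 9]
```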
Measures of central tendency cont.
b) The mean
The mean of a set of observations/distribution is their sum divided by the number of observations.
It is the result we would have if we could share the data out evenly.
Example: if all learners were to share the total walking distance to school between them, each learner would walk approximately …… km.
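The definition "sum divided by the number of observations" translates directly into code. The walking distances below are invented for illustration:

```python
# Invented walking distances to school, in km, for 5 learners
distances = [1.2, 3.4, 0.8, 2.6, 2.0]

# Mean = sum of the observations divided by how many there are
mean = sum(distances) / len(distances)
print(f"each learner would walk approximately {mean} km")  # 2.0 km
```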
Measures of central tendency cont.
(c) Median: separates the top half of a data set from the bottom half
E.g. half of learners scored less than ….. and half scored more than …
If there is an even number of scores, add the two middle numbers and divide by 2.
(d) Standard deviation: a measure of how much the data deviate from the mean, or how far the data on average are from the mean.
One way of measuring the spread of data.
If the standard deviation is high, the data are very spread out.

(e) Range: the difference between the highest and the lowest value in the data.
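Median, standard deviation and range can all be computed with the standard library. The scores are invented, and the even-length list shows the "add the two middle numbers and divide by 2" rule:

```python
import statistics

scores = [40, 55, 60, 70, 85, 90]  # invented scores, even count

# Median: the two middle values (60 and 70) are averaged
median = statistics.median(scores)       # 65.0

# Standard deviation: average distance of scores from the mean
# (pstdev treats the list as a whole population)
sd = statistics.pstdev(scores)

# Range: highest value minus lowest value
value_range = max(scores) - min(scores)  # 50

print(median, round(sd, 1), value_range)
```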
Correlation: linear relationship
• The relationship between two different sets of scores
• E.g. we want to know whether high IQ scores are associated with high scores in academic attainment
• Do not think that a negative correlation is undesirable or that it indicates a lack of relationship
• What correlation means: an association or covariation
• Two variables are correlated if they tend to "go together"
• A coefficient of correlation is a statistical summary of the degree of relationship or association between two variables
Kinds of correlation
• Positive correlation
The concomitant variation is in the same direction.
An increase in one variable is accompanied by an increase in the other variable.
Perfect positive correlation is +1.
Example: an increase in intelligence being accompanied by an increase in scholastic achievement.
• Negative correlation
With some variables the concomitant change or variation is in the opposite direction, e.g. fatness and speed.
An increase in one variable is followed by a decrease in the other.
Perfect negative correlation is -1.
Correlation cont.
• Scatter plots are used to graphically show or explore whether correlations exist. Example of a scatterplot: page 164.
• The strength of a relationship is expressed on a scale between +1 and -1
• No relationship is expressed by 0
• Example: 0.77 = fairly strong relationship
• Which one is stronger: -0.6, +0.7 or -0.9?
Answer: -0.9
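Strength is judged by the distance from 0 regardless of sign, which is why -0.9 wins above. A coefficient can be computed from its definition in a few lines of Python; the paired IQ and attainment scores are invented:

```python
import math

# Invented paired scores: IQ and attainment for 5 learners
iq = [95, 100, 105, 110, 120]
attainment = [50, 55, 60, 68, 75]

# Pearson correlation coefficient, computed from the definition:
# covariance divided by the product of the spreads
n = len(iq)
mean_x = sum(iq) / n
mean_y = sum(attainment) / n
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(iq, attainment))
sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in iq))
sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in attainment))
r = cov / (sd_x * sd_y)

print(f"r = {r:.2f}")  # close to +1: a strong positive correlation
```

With both variables rising together, r here comes out very close to +1; reversing one list would flip the sign but not the strength.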
Validity and reliability
• Validity: in terms of a measurement procedure, validity is the ability of an instrument to measure what it is designed to measure.
• Reliability: a research instrument is reliable when it provides similar results when used repeatedly under similar conditions.
• Reliability therefore indicates the accuracy, stability and predictability of a research instrument.
• The higher the reliability, the higher the accuracy.
Inferential statistics
• Used to make predictions about the similarity of the sample to the population from which the sample is drawn
Tests run to test predictions:
• Probability (p-value): a mathematical way of stating the degree of confidence we have in predicting something.
• Inferential statistics can tell us the probability that the results we have obtained in a particular study occurred by chance. Page 167.
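The idea of a result "occurring by chance" can be made concrete with a small simulation. The scenario is entirely hypothetical: 14 of 20 learners answered a yes/no item correctly, and we ask how often blind guessing would do at least that well:

```python
import random

random.seed(1)  # make the simulation repeatable

# How often would pure chance (fair coin flips) give 14 or more
# correct out of 20?
trials = 10_000
extreme = 0
for _ in range(trials):
    correct = sum(random.randint(0, 1) for _ in range(20))
    if correct >= 14:
        extreme += 1

p_value = extreme / trials
print(f"estimated p = {p_value:.3f}")
# A small p (conventionally below 0.05) suggests the result is
# unlikely to have occurred by chance alone.
```

Here the estimate lands near 0.06, so chance alone is a plausible explanation for this invented result.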
Chapter 7

Interpretation of results and quality

Issues of interpretation and quality
A fact is a thing certainly known to have occurred or to be true.
What is considered relevant data varies with the research paradigm.
In the post-positivist paradigm, data must be factual, or at least as close to being factual as it is practical to get.
What post-positivists call facts are merely a way of identifying observations of the phenomena around us.
In interpretivist and critical research, the notion of "facts" is generally not used, precisely because it is no longer so clear what is trustworthy or verifiable.
Sources of error in data collection
• There are three sources of error:
1. The researcher (attributes of the researcher, affiliation of the researcher)
2. The participants (they can change their behaviour)
3. The context (be sensitive to the research setting)
Sources of error in data collection cont.
A number of issues influence the soundness or validity of data:
1. The researcher: attributes, understanding of the field, experience and training. The beliefs and the intellectual and social biases of the researcher can influence the data collection process.
2. Participants: the Hawthorne Effect (the presence of the researcher in class could automatically cause the teacher and learners to behave differently); they may not remember certain things; they may try to give answers that they think are socially acceptable.
JUDGING THE QUALITY OF A RESEARCH STUDY
1. Quality of the research question and design: for example, to what extent is the researcher using key concepts in ways which are similar to the generally accepted meaning of the construct?
2. Validity in the data collection (post-positivist paradigm):
• Construct validity: the extent to which the instrument and data collection methods measure the construct they are intended to measure
• Reliability: the extent to which the test, measure or instrument can be repeated with the same or a similar group of respondents and still produce the same results
Validity
1. Validity in data analysis (post-positivism)
Internal validity:
Researchers try to be objective and distant, separating themselves from the study. They try to control variables.
2. Extending conclusions from the research (post-positivism)
Researchers tend to work with large samples.
To what extent can the findings be generalised?
Generalisability is the extent to which the conclusions of the study may be applied beyond the sample to the whole population of the study (mostly called external validity).
Validity in the interpretive paradigm
• Measurement is not an issue in qualitative approaches
• Guba and Lincoln (1994) suggested:
Trustworthiness, which includes
1. Credibility: do the findings reflect the "reality" and lived experiences of the participants?
2. Transferability: to what extent can the research be transferred to another context?
3. Dependability: the researcher can account for why there may be variation in the study.
4. Confirmability: the degree to which the analysis of the researcher can be confirmed by someone else.
• Validity in data collection (interpretivism)
• Achieved through credibility
• Credibility can be enhanced by recording interviews, to ensure accuracy in transcription
• Having a research assistant in the same interview
• These lead to construct validity: the extent to which observations/interviews capture what the researcher is after
• Triangulation can also increase validity: collecting data from a number of different sources
Validity in the critical paradigm
• Criterion for validity: the extent to which the enquiry brings transformation and change
• Validity in data (critical paradigm): credibility and reflection of the political positioning of the participants
• Collection of data: must be respectful to participants
• Validity in data analysis:
Self-reflexivity (the researcher's awareness of his or her own sociocultural or economic positioning in relation to the participants), dependability and confirmability
Reference
• Bertram, C. & Christiansen, I. (2015)
