Econometrics
This information then tells us which elements of the sessions are
being well received, and where we need to focus attention so that attendees
are more satisfied in the future.
Let’s continue using our application training example. In this case, we’d want
to measure the historical levels of satisfaction with the events from the past
three years or so (or however long a period gives you a sample you consider
adequate), as well as whatever data are available on the independent variables.
Perhaps we’re particularly curious about how the price of a ticket to the event
has impacted levels of satisfaction.
Simple linear regression is used to estimate the relationship between two quantitative
variables. You can use simple linear regression when you want to know:
1. How strong the relationship is between two variables (e.g. the relationship between rainfall and
soil erosion).
2. The value of the dependent variable at a certain value of the independent variable (e.g. the
amount of soil erosion at a certain level of rainfall).
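Both questions can be answered from the least-squares slope and intercept. As a minimal sketch using the rainfall/erosion example (the data points below are invented purely for illustration):

```python
# Hypothetical data: rainfall (mm) and observed soil erosion (kg/ha).
rainfall = [10, 20, 30, 40, 50, 60]                  # independent variable x
erosion = [5.1, 9.8, 15.2, 19.9, 25.3, 29.8]         # dependent variable y

n = len(rainfall)
mean_x = sum(rainfall) / n
mean_y = sum(erosion) / n

# Least-squares slope = Cov(x, y) / Var(x); the intercept then places the
# fitted line through the point (mean_x, mean_y).
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(rainfall, erosion))
sxx = sum((x - mean_x) ** 2 for x in rainfall)
slope = sxy / sxx
intercept = mean_y - slope * mean_x

# Question (2): predicted erosion at a rainfall of 45 mm.
pred = intercept + slope * 45
```

Here the sign and size of `slope` speak to question (1), and `pred` answers question (2) for one chosen rainfall value.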
Multiple linear regression is used to estimate the relationship between two or more
independent variables and one dependent variable. You can use multiple linear regression
when you want to know:
1. How strong the relationship is between two or more independent variables and one dependent
variable (e.g. how rainfall, temperature, and amount of fertilizer added affect crop growth).
2. The value of the dependent variable at a certain value of the independent variables (e.g. the
expected yield of a crop at certain levels of rainfall, temperature, and fertilizer addition).
Example: You are a public health researcher interested in social factors that influence
heart disease. You survey 500 towns and gather data on the percentage of people in
each town who smoke, the percentage of people in each town who bike to work, and
the percentage of people in each town who have heart disease.
Because you have two independent variables and one dependent variable, and all your
variables are quantitative, you can use multiple linear regression to analyze the
relationship between them.
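A minimal sketch of that analysis, with a handful of invented town-level percentages standing in for the 500-town survey, fits the two-predictor model by ordinary least squares:

```python
import numpy as np

# Invented town-level percentages standing in for the survey described above.
smoking = np.array([25.0, 30.0, 15.0, 20.0, 35.0, 10.0])   # % who smoke
biking = np.array([30.0, 40.0, 50.0, 10.0, 20.0, 60.0])    # % who bike to work
heart = np.array([8.85, 9.3, 4.75, 9.2, 12.15, 2.7])       # % with heart disease

# Design matrix: a column of ones for the intercept, then the two predictors.
X = np.column_stack([np.ones_like(smoking), smoking, biking])
coefs, *_ = np.linalg.lstsq(X, heart, rcond=None)
b0, b_smoking, b_biking = coefs
# b_smoking and b_biking estimate how the heart-disease rate moves with each
# predictor while the other is held fixed.
```

With these made-up numbers the fit recovers a positive smoking coefficient and a negative biking coefficient, matching the intuition that smoking raises and biking lowers heart-disease rates.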
In multiple linear regression, some of the independent variables may actually be
correlated with one another, so it is important to check for this before developing
the regression model. If two independent variables are too highly correlated (r² > ~0.6),
then only one of them should be used in the regression model.
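One simple way to run that check is to compute the pairwise correlation matrix of the predictors before fitting anything; the numbers below are invented for illustration:

```python
import numpy as np

# Invented predictor values; substitute your own independent variables.
rainfall = np.array([100, 120, 90, 150, 110, 130])
humidity = np.array([60, 70, 55, 85, 63, 76])        # tends to track rainfall
fertilizer = np.array([3, 1, 4, 2, 5, 2])

r = np.corrcoef([rainfall, humidity, fertilizer])    # pairwise correlations
r_squared = r ** 2

# Flag any predictor pair exceeding the ~0.6 r-squared rule of thumb.
names = ["rainfall", "humidity", "fertilizer"]
flagged = [(names[i], names[j])
           for i in range(3) for j in range(i + 1, 3)
           if r_squared[i, j] > 0.6]
```

With these made-up numbers, rainfall and humidity would be flagged, so only one of the two should enter the model.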
Linearity: the line of best fit through the data points is a straight line, rather than a curve
or some sort of grouping factor.
To find the best-fit line for each independent variable, multiple linear regression
calculates three things:
• The regression coefficients that lead to the smallest overall model error.
• The t-statistic of the overall model.
• The associated p-value (how likely it is that the t-statistic would have occurred by
chance if the null hypothesis of no relationship between the independent and
dependent variables were true).
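Those quantities can be sketched by hand with NumPy on invented data. Note that the code below computes one t statistic per coefficient, which is the form most regression software reports; the matching p values would come from a t distribution with the residual degrees of freedom:

```python
import numpy as np

# Invented data: heart-disease % regressed on smoking % and biking %.
smoking = np.array([25.0, 30.0, 15.0, 20.0, 35.0, 10.0])
biking = np.array([30.0, 40.0, 50.0, 10.0, 20.0, 60.0])
heart = np.array([9.0, 9.1, 5.0, 9.4, 12.0, 2.9])

X = np.column_stack([np.ones(6), smoking, biking])
beta, *_ = np.linalg.lstsq(X, heart, rcond=None)      # least-squares coefficients

resid = heart - X @ beta
df = len(heart) - X.shape[1]                          # residual degrees of freedom
sigma2 = resid @ resid / df                           # estimated error variance
cov = sigma2 * np.linalg.inv(X.T @ X)                 # covariance of the estimates
se = np.sqrt(np.diag(cov))                            # standard errors
t_stats = beta / se                                   # one t statistic per coefficient
# p value: twice the upper tail of a t distribution with df degrees of freedom
# at |t| (e.g. scipy.stats.t.sf); not computed here to stay dependency-free.
```

The coefficients minimize the squared residuals, and each t statistic measures how many standard errors an estimate sits from zero.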
As an informal example, imagine that you have been dieting for a month. Your
clothes seem to be fitting more loosely, and several friends have asked if you
have lost weight. If at this point your bathroom scale indicated that you had
lost 10 pounds, this would make sense and you would continue to use the
scale. But if it indicated that you had gained 10 pounds, you would rightly
conclude that it was broken and either fix it or get rid of it. In evaluating a
measurement method, psychologists consider two general dimensions:
reliability and validity.
RELIABILITY
Test-Retest Reliability
High test-retest correlations make sense when the construct being
measured is assumed to be consistent over time, which is the case for
intelligence, self-esteem, and the Big Five personality dimensions. But other
constructs are not assumed to be stable over time. The very nature of mood,
for example, is that it changes. So a measure of mood that produced a low test-
retest correlation over a period of a month would not be a cause for concern.
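Assessing test-retest reliability amounts to correlating scores from two administrations of the same measure; a small sketch with invented scores:

```python
import numpy as np

# Invented self-esteem scores for eight people, measured twice a month apart.
time1 = np.array([32, 25, 40, 28, 35, 22, 30, 38])
time2 = np.array([30, 27, 41, 26, 36, 24, 29, 37])

# Test-retest reliability is the Pearson correlation between administrations;
# values near +1 indicate the measure yields consistent scores over time.
r = np.corrcoef(time1, time2)[0, 1]
```

For a construct assumed stable, like self-esteem, a high r supports reliability; for mood, a low r over a month would be expected and unproblematic.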
Interrater Reliability
VALIDITY
Validity is the extent to which the scores from a measure represent the variable they are
intended to. But how do researchers make this judgment? We have already considered
one factor that they take into account—reliability. When a measure has good test-retest
reliability and internal consistency, researchers should be more confident that the scores
represent what they are supposed to. There has to be more to it, however, because a
measure can be extremely reliable but have no validity whatsoever. As an absurd example,
imagine someone who believes that people’s index finger length reflects their self-esteem
and therefore tries to measure self-esteem by holding a ruler up to people’s index fingers.
Although this measure would have extremely good test-retest reliability, it would have
absolutely no validity. The fact that one person’s index finger is a centimeter longer than
another’s would indicate nothing about which one had higher self-esteem.
Discussions of validity usually divide it into several distinct “types.” But a good way to
interpret these types is that they are other kinds of evidence—in addition to reliability—
that should be taken into account when judging the validity of a measure. Here we
consider three basic kinds: face validity, content validity, and criterion validity.
Face validity is the extent to which a measurement method appears “on its face” to
measure the construct of interest. Most people would expect a self-esteem questionnaire
to include items about whether they see themselves as a person of worth and whether they
think they have good qualities. So a questionnaire that included these kinds of items
would have good face validity. The finger-length method of measuring self-esteem, on the
other hand, seems to have nothing to do with self-esteem and therefore has poor face
validity. Although face validity can be assessed quantitatively—for example, by having a
large sample of people rate a measure in terms of whether it appears to measure what it
is intended to—it is usually assessed informally.
Face validity is at best a very weak kind of evidence that a measurement method is
measuring what it is supposed to. One reason is that it is based on people’s intuitions
about human behavior, which are frequently wrong. It is also the case that many
established measures in psychology work quite well despite lacking face validity. The
Minnesota Multiphasic Personality Inventory-2 (MMPI-2) measures many personality
characteristics and disorders by having people decide whether each of 567 different
statements applies to them—where many of the statements do not have any obvious
relationship to the construct that they measure. For example, the items “I enjoy detective
or mystery stories” and “The sight of blood doesn’t frighten me or make me sick” both
measure the suppression of aggression. In this case, it is not the participants’ literal
answers to these questions that are of interest, but rather whether the pattern of the
participants’ responses to a series of questions matches those of individuals who tend to
suppress their aggression.
Content validity is the extent to which a measure “covers” the construct of interest. For
example, if a researcher conceptually defines test anxiety as involving both sympathetic
nervous system activation (leading to nervous feelings) and negative thoughts, then his
measure of test anxiety should include items about both nervous feelings and negative
thoughts. Or consider that attitudes are usually defined as involving thoughts, feelings,
and actions toward something. By this conceptual definition, a person has a positive
attitude toward exercise to the extent that he or she thinks positive thoughts about
exercising, feels good about exercising, and actually exercises. So to have good content
validity, a measure of people’s attitudes toward exercise would have to reflect all three of
these aspects. Like face validity, content validity is not usually assessed quantitatively.
Instead, it is assessed by carefully checking the measurement method against the
conceptual definition of the construct.
Criterion validity is the extent to which people’s scores on a measure are correlated
with other variables (known as criteria) that one would expect them to be correlated
with. For example, people’s scores on a new measure of test anxiety should be negatively
correlated with their performance on an important school exam. If it were found that
people’s scores were in fact negatively correlated with their exam performance, then this
would be a piece of evidence that these scores really represent people’s test anxiety. But if
it were found that people scored equally well on the exam regardless of their test anxiety
scores, then this would cast doubt on the validity of the measure.
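The criterion-validity check described above is just a correlation between measure scores and the criterion, sketched here with invented data:

```python
import numpy as np

# Invented data: test-anxiety scores and marks on an important exam.
anxiety = np.array([12, 30, 22, 8, 35, 18, 27, 15])
exam = np.array([88, 60, 72, 95, 55, 78, 65, 84])

# A clearly negative correlation is evidence that the scores really track
# test anxiety; an r near zero would cast doubt on the measure.
r = np.corrcoef(anxiety, exam)[0, 1]
```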
A criterion can be any variable that one has reason to think should be correlated with the
construct being measured, and there will usually be many of them. For example, one
would expect test anxiety scores to be negatively correlated with exam performance and
course grades and positively correlated with general anxiety and with blood pressure
during an exam. Or imagine that a researcher develops a new measure of physical risk
taking. People’s scores on this measure should be correlated with their participation in
“extreme” activities such as snowboarding and rock climbing, the number of speeding
tickets they have received, and even the number of broken bones they have had over the
years. When the criterion is measured at the same time as the construct, criterion validity
is referred to as concurrent validity; however, when the criterion is measured at some
point in the future (after the construct has been measured), it is referred to as predictive
validity (because scores on the measure have “predicted” a future outcome).
Criteria can also include other measures of the same construct. For example, one would
expect new measures of test anxiety or physical risk taking to be positively correlated with
existing established measures of the same constructs. This is known as convergent
validity.
Assessing convergent validity requires collecting data using the measure. Researchers
John Cacioppo and Richard Petty did this when they created their self-report Need for
Cognition Scale to measure how much people value and engage in thinking (Cacioppo &
Petty, 1982).[1] In a series of studies, they showed that people’s scores were positively
correlated with their scores on a standardized academic achievement test, and that their
scores were negatively correlated with their scores on a measure of dogmatism (which
represents a tendency toward obedience). In the years since it was created, the Need for
Cognition Scale has been used in literally hundreds of studies and has been shown to be
correlated with a wide variety of other variables, including the effectiveness of an
advertisement, interest in politics, and juror decisions (Petty, Briñol, Loersch, &
McCaslin, 2009).[2]
Discriminant validity, on the other hand, is the extent to which scores on a measure
are not correlated with measures of variables that are conceptually distinct. For example,
self-esteem is a general attitude toward the self that is fairly stable over time. It is not the
same as mood, which is how good or bad one happens to be feeling right now. So people’s
scores on a new measure of self-esteem should not be very highly correlated with their
moods. If the new measure of self-esteem were highly correlated with a measure of mood,
it could be argued that the new measure is not really measuring self-esteem; it is
measuring mood instead.
When they created the Need for Cognition Scale, Cacioppo and Petty also provided
evidence of discriminant validity by showing that people’s scores were not correlated with
certain other variables. For example, they found only a weak correlation between people’s
need for cognition and a measure of their cognitive style—the extent to which they tend
to think analytically by breaking ideas into smaller parts or holistically in terms of “the
big picture.” They also found no correlation between people’s need for cognition and
measures of their test anxiety and their tendency to respond in socially desirable ways.
All these low correlations provide evidence that the measure is reflecting a conceptually
distinct construct.
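Convergent and discriminant validity can be sketched the same way: correlate a new measure with an established measure of the same construct (expecting a high r) and with a conceptually distinct measure (expecting an r near zero). The scores below are invented:

```python
import numpy as np

# Invented scores for ten people on a new thinking-engagement scale,
# an established measure of the same construct, and an unrelated mood scale.
new_scale = np.array([55, 42, 60, 48, 70, 38, 65, 50, 58, 45])
established = np.array([52, 45, 62, 46, 68, 40, 63, 49, 60, 44])
mood = np.array([5, 4, 6, 5, 3, 6, 4, 7, 5, 4])

convergent = np.corrcoef(new_scale, established)[0, 1]   # should be high
discriminant = np.corrcoef(new_scale, mood)[0, 1]        # should be near zero
```

Together, a high convergent correlation and a low discriminant correlation support the claim that the new scale measures its intended construct and not something else.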