BRM Unit 2
RESEARCH DESIGN
Decisions regarding what, when, where, how much, and by what means concerning an enquiry or a research study constitute a research design.
A research design is the arrangement of conditions for collection and analysis of data in a manner that aims to combine relevance to the research purpose with economy in procedure.
Research design is the conceptual structure within which the research is conducted. It is simply the framework or plan for a study that is used as a guide in collecting and analysing the data. It constitutes the blueprint for the collection, measurement and analysis of data.
According to Green and Tull, "A research design is the specification of methods and procedures for acquiring the information needed. It is the over-all operational pattern or framework of the project that stipulates what information is to be collected, from which sources, by what procedures."
2. EXTRANEOUS VARIABLES:
Independent variables that are not related to the purpose of the study but may affect the dependent variable are termed extraneous variables.
Suppose the researcher wants to test the hypothesis that there is a relationship between children's gains in social studies achievement and their self-concept.
In this case self-concept is an independent variable and social studies achievement is a dependent variable.
Intelligence may also affect social studies achievement, but since it is not related to the purpose of the study undertaken by the researcher, it is treated as an extraneous variable.
Whatever effect is noticed on the dependent variable as a result of an extraneous variable is technically described as "experimental error".
A study should be so designed that the effect on the dependent variable is
attributed entirely to the independent variable and not to some extraneous
variable or variables.
3. CONTROL:
A good research design should minimize the effect of extraneous variables.
The technical term "control" is used to describe this.
In experimental research, the term control is used to refer to restrained experimental conditions.
4. CONFOUNDED RELATIONSHIP:
When the dependent variable is not free from the influence of extraneous
variables, the relationship between the dependent and independent variables
is said to be confounded by an extraneous variable.
5. RESEARCH HYPOTHESIS:
When a prediction or hypothesized relationship is to be tested by scientific methods, it is termed a research hypothesis.
A research hypothesis is a predictive statement that relates an independent variable to a dependent variable.
The research hypothesis should contain one independent and one dependent variable.
8. TREATMENTS:
The different conditions under which experimental and control groups are put are usually referred to as treatments.
E.g. - for groups A and B, the two treatments are the usual studies programme and the special studies programme.
If we want to determine through an experiment the comparative impact of three varieties of fertilizers on the yield of wheat, the three varieties of fertilizers are treated as three treatments.
9. EXPERIMENT:
The process of examining the truth of a statistical hypothesis, relating to some research problem, is known as an experiment.
Experiments can be of two types - absolute experiment and comparative experiment.
E.g. - if we want to determine the impact of a fertilizer on the yield of a crop, it is a case of absolute experiment.
E.g. - if we want to determine the impact of one fertilizer compared to that of another fertilizer, then the experiment is treated as a comparative experiment.
10.EXPERIMENTAL UNITS:
The pre-determined plots or the blocks where different treatments are used are known as experimental units.
Experimental units should be selected very carefully.
EXPLORATORY RESEARCH
Literature search / study of secondary data
Experience survey
Case study
Focus groups
Two stage design
Projective techniques.
CONCLUSIVE RESEARCH
Descriptive Research (longitudinal study, cross-sectional study) &
Experimental or Causal Research
a. PRINCIPLE OF REPLICATION
The experiment is repeated more than once: each treatment is applied to more than one experimental unit instead of only one. The statistical accuracy of the results is thereby increased (e.g., when comparing the yields of two varieties of rice).
Though replication is not conceptually difficult, it is computationally demanding.
Its main aim is to increase the precision with which the main effects and interactions can be estimated.
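As a rough illustration (the yield figures below are hypothetical, not real data), the following Python sketch shows why replication increases statistical accuracy: each variety is applied to several plots, and the standard error of its mean yield shrinks as the number of replications grows.

# Illustrative sketch (hypothetical yields): replicating each rice variety over
# several plots gives a mean yield plus a standard error that shrinks as the
# number of replications grows.
import statistics

yields = {
    "variety_A": [42.0, 45.5, 43.8, 44.1],   # yield per plot, 4 replications
    "variety_B": [47.2, 46.1, 48.3, 47.9],
}

for variety, plots in yields.items():
    n = len(plots)
    mean = statistics.mean(plots)
    se = statistics.stdev(plots) / n ** 0.5   # standard error of the mean
    print(f"{variety}: mean yield = {mean:.2f}, SE = {se:.3f} over {n} replications")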
b. PRINCIPLE OF RANDOMISATION
This principle provides protection, when we conduct an experiment, against the effects of extraneous factors through randomization.
It indicates that we should design or plan the experiment in such a way that the variations caused by extraneous factors can all be combined under the general heading "chance".
A random sampling technique can be used, which leads to a better estimate of the experimental error.
The main difficulty with such a design is that, with the passage of time, considerable extraneous variation may creep into the treatment effect.
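A minimal Python sketch of the randomization idea, using made-up plot labels and fertilizer names: experimental units are shuffled and assigned to treatments at random, so extraneous variation falls under the heading "chance" rather than systematically favouring any one treatment.

# Illustrative sketch: randomly assigning experimental units (here, plots) to
# treatments so that extraneous variation is spread by "chance".
import random

plots = [f"plot_{i}" for i in range(1, 13)]
treatments = ["fertilizer_A", "fertilizer_B", "fertilizer_C"]

random.shuffle(plots)                                             # random order of units
assignment = {t: plots[i::3] for i, t in enumerate(treatments)}   # 4 plots per treatment

for treatment, assigned in assignment.items():
    print(treatment, assigned)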
FACTORIAL DESIGN
Factorial designs are used in experiments where the effects of varying more than one factor are to be determined.
They are specially important in several economic and social phenomena where usually a large number of factors affect a particular problem.
Factorial designs can be of two types - simple factorial designs and complex factorial designs.
Simple factorial design - here we consider the effects of varying two factors on the dependent variable; it is also termed a two-factor factorial design. It may be a 2x2 simple factorial design, or, say, a 3x4 or 5x3 design, etc.
Complex factorial design - used when an experiment is done with more than two factors. It is also called a multi-factor factorial design: a design which considers three or more independent variables simultaneously.
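The following sketch uses hypothetical cell means for a 2x2 simple factorial design (the factor names and numbers are invented for illustration) to show how the two main effects and the interaction can be read off from the four treatment combinations.

# Hypothetical 2x2 simple factorial sketch: two factors (fertilizer level and
# irrigation level), each at two levels; the cell means give the main effects
# and the interaction.
cell_means = {                      # mean yield per treatment combination
    ("low_fert", "low_irrig"): 40.0,
    ("low_fert", "high_irrig"): 46.0,
    ("high_fert", "low_irrig"): 44.0,
    ("high_fert", "high_irrig"): 56.0,
}

fert_effect = ((cell_means[("high_fert", "low_irrig")] + cell_means[("high_fert", "high_irrig")])
               - (cell_means[("low_fert", "low_irrig")] + cell_means[("low_fert", "high_irrig")])) / 2
irrig_effect = ((cell_means[("low_fert", "high_irrig")] + cell_means[("high_fert", "high_irrig")])
                - (cell_means[("low_fert", "low_irrig")] + cell_means[("high_fert", "low_irrig")])) / 2
interaction = ((cell_means[("high_fert", "high_irrig")] - cell_means[("high_fert", "low_irrig")])
               - (cell_means[("low_fert", "high_irrig")] - cell_means[("low_fert", "low_irrig")])) / 2

print("main effect of fertilizer :", fert_effect)
print("main effect of irrigation :", irrig_effect)
print("interaction effect        :", interaction)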
INTERNAL VALIDITY
Internal validity is the measure of accuracy of an experiment. It measures
whether the manipulation of the independent variables or treatments actually
caused the effects on the dependent variable
Internal validity examines whether the observed effects on the test units
could have been caused by variables other than the treatment
If the observed effects are influenced or caused by extraneous variables it is
difficult to draw valid inferences about the causal relationship between the
independent and dependent variables
Internal validity is the basic minimum that must be present in an experiment
before any conclusion about treatment effects can be made
Without internal validity the experimental results are confounded
Control of extraneous variables is a necessary condition for establishing
internal validity
Internal validity is the approximate truth about inferences regarding cause-
effect or causal relationships
MATURATION
Maturation effects are effects that are a function of time and the naturally
occurring effects that coincide with growth and experience
Experiments that take place over longer time spans may have lower internal validity as subjects simply grow older or more experienced.
Suppose an experiment were designed to test the impact of a new compensation program on sales productivity. The salespeople's productivity might improve because of their growing knowledge and experience rather than the compensation program.
TESTING
Testing effects are also called pre-testing effects because the initial measurement or test alerts the subjects and affects their response to the experimental treatments.
Testing effects generally occur in a before-and-after study
Before-and-after experiments are a special case of repeated measures design
For e.g., students taking standardized achievement and intelligence tests for the second time usually do better than those taking the tests for the first time. The effect of testing may increase awareness of socially approved answers, increase attention to experimental conditions, etc.
INSTRUMENTATION
The threat to validity may arise due to the observer or the instrumentation
Using different observers may affect the validity of the study because they
may be a source of extraneous variation
If the same observer is used for a longer period of time, validity may be affected by the observer's experience (they may acquire new skills or decide to reword the questionnaire in their own terms), boredom, fatigue and anticipation of results. If the same interviewers are used to ask questions before and after, measurement problems may arise.
SELECTION
Differential selection of subjects for the experimental and control groups affects the validity.
Validity considerations require the groups to be equivalent in every respect.
The problem may be overcome by randomly assigning the subjects to
experimental and control groups. In addition matching can be done.
Matching is a control procedure to ensure that experimental and control
groups are equated in one or more variables before the experiment
MORTALITY
If an experiment is conducted for a few weeks or more, some sample bias may occur due to the mortality effect (sample attrition).
Sample attrition occurs when some subjects withdraw from the experiment before it is completed.
Mortality effects may occur if subjects drop out of one experimental treatment group disproportionately compared with the other groups.
E.g. - sales training experiment investigating the effects of close supervision
of sales people versus low supervision
STATISTICAL REGRESSION
Statistical regression operates when groups have been selected on the basis of their extreme scores on the dependent variable.
If a manager wants to test whether he can increase the salesmanship qualities of the sales personnel through a training programme, he should not choose those with extremely low or extremely high abilities for the experiment.
This is because those with low current sales ability have a greater probability of showing improvement and scoring closer to the mean on the post-test after being exposed to the treatment.
Likewise, those with high sales ability would also have a greater tendency to regress towards the mean - they will score lower on the post-test than on the pre-test.
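A small simulation with entirely made-up numbers illustrates the point: subjects selected for extremely low pre-test scores improve on the post-test even when no treatment effect exists, purely because of regression toward the mean.

# Illustrative simulation (hypothetical data): salespeople are picked for
# "extremely low" pre-test scores; with no real training effect their post-test
# average still drifts back toward the overall mean.
import random

random.seed(1)
true_ability = [random.gauss(50, 10) for _ in range(500)]    # stable ability
pre_test = [a + random.gauss(0, 8) for a in true_ability]    # ability + measurement noise
post_test = [a + random.gauss(0, 8) for a in true_ability]   # fresh noise, no treatment

extreme_low = [i for i, score in enumerate(pre_test) if score < 35]
pre_mean = sum(pre_test[i] for i in extreme_low) / len(extreme_low)
post_mean = sum(post_test[i] for i in extreme_low) / len(extreme_low)

print(f"extreme-low group: pre-test mean = {pre_mean:.1f}, post-test mean = {post_mean:.1f}")
# The post-test mean is higher purely because of regression, not any treatment.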
EXTERNAL VALIDITY
External validity is the accuracy with which the experimental results can be
generalized beyond the experimental subjects
External validity is increased when the subjects comprising the sample truly represent some population and when the results extend to other groups of people.
The higher the external validity, the more researchers and managers can count on the fact that any results observed in an experiment will also be seen in the real world (market place, workplace, sales floor, etc.) and in other market segments.
LABORATORY EXPERIMENTS
They have the greatest external validity problem because of the artificiality
of the setting and arrangements
The exposure of an experimental treatment such as the mockup of a new
product in a laboratory can be so different from the conditions in the real
world that projections become very difficult and risky
In addition to the problem of artificiality in laboratory experiments, most of the internal validity threats also apply to external validity. In fact, selectivity bias can be very serious.
In field experiments the test market site, the stores close to test and the
people interviewed as part of the experiment are not representative of the
entire market or population
If the subjects know that they are participating in an experiment, they may not behave in a normal way.
RATIO SCALES
Ratio scales have an absolute or a true zero of measurement. For e.g., the
zero point on a centimeter scale indicates the complete absence of length or
height
With ratio scales involved one can make statements like "Jyoti's" typing
performance was twice as good as that of Reetu". The ratio involved does
have significance and facilitates a kind of comparison which is not possible
in case of an interval scale .
Ratio scale represents the actual amounts of variables. Measures of physical
dimensions such as height, weight, distance etc are examples
All statistical techniques are usable with ratio scales. Geometric and
harmonic means can be used as measures of central tendency and
coefficients of variation may also be calculated
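A brief sketch with hypothetical weight data (Python 3.8+ assumed for geometric_mean): because ratio data have a true zero, geometric and harmonic means and the coefficient of variation are all legitimate summaries.

# Sketch with hypothetical ratio-scale data (weights in kg): a true zero exists,
# so geometric and harmonic means and the coefficient of variation are meaningful.
import statistics

weights_kg = [52.0, 61.5, 70.2, 58.4, 66.1]

arith = statistics.mean(weights_kg)
geom = statistics.geometric_mean(weights_kg)
harm = statistics.harmonic_mean(weights_kg)
cv = statistics.stdev(weights_kg) / arith          # coefficient of variation

print(f"arithmetic mean = {arith:.2f} kg")
print(f"geometric mean  = {geom:.2f} kg")
print(f"harmonic mean   = {harm:.2f} kg")
print(f"coefficient of variation = {cv:.3f}")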
INTERVAL SCALES
In case of interval scale, the intervals are adjusted based on some rule that
has been established as a basis for making the units equal
The distance between the various categories unlike in Nominal, or numbers
unlike in Ordinal, are equal in case of Interval Scales.
The Interval Scales are also termed as Rating Scales.
An Interval Scale has an arbitrary Zero point with further numbers placed at
equal intervals.
A very good example of an interval scale is a thermometer; the Fahrenheit scale is an example of an interval scale. One can say that an increase in temperature from 30 degrees to 40 degrees involves the same increase in temperature as an increase from 60 degrees to 70 degrees. But one cannot say that 60 degrees is twice as warm as 30 degrees, because both numbers depend on an arbitrarily chosen zero point rather than a true absence of temperature.
Such a scale permits the researcher to say that position 5 on the scale is above position 4, and also that the distance from 5 to 4 is the same as the distance from 4 to 3.
Such a scale, however, does not permit the conclusion that position 4 is twice as strong as position 2, because no true zero position has been established.
The data obtained from an interval scale can be used to calculate the mean scores of each attribute over all respondents. The standard deviation (a measure of dispersion) can also be calculated.
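A short sketch with hypothetical Fahrenheit readings: means and standard deviations are valid for interval data, and equal intervals are meaningful, but ratio statements are not, since the ratio changes when the arbitrary zero is shifted (e.g., by converting to Celsius).

# Sketch with hypothetical interval-scale data (temperatures in Fahrenheit):
# means and standard deviations are valid, but ratio statements are not,
# because the zero point is arbitrary.
import statistics

temps_f = [30, 40, 60, 70, 55]

print("mean  =", statistics.mean(temps_f))
print("stdev =", round(statistics.stdev(temps_f), 2))

# Differences are meaningful: 40 - 30 equals 70 - 60.
print("equal intervals:", (40 - 30) == (70 - 60))

# But 60 is NOT "twice as warm" as 30; converting to Celsius changes the ratio.
def to_celsius(f):
    return (f - 32) * 5 / 9

print("ratio in F:", 60 / 30, " ratio in C:", round(to_celsius(60) / to_celsius(30), 2))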
TEST OF SOUND MEASUREMENT
Sound measurement must meet the tests of validity, reliability and
practicality
Validity refers to the extent to which a test measures what we actually wish
to measure
Reliability has to do with the accuracy and precision of a measurement
procedure
Practicality is concerned with a wide range of factors of economy, convenience and interpretability
TESTS OF VALIDITY
Validity indicates the degree to which an instrument measures what it is
supposed to measure
Validity can also be thought of as utility
One can certainly consider three types of validity. They are
Content validity
Criterion-related validity
Construct validity
Content Validity - It is the extent to which a measuring instrument provides
adequate coverage of the topic under study.
If the instrument contains a representative sample of the universe then the
content validity is good.
Its determination is primarily judgmental and intuitive. It can also be
determined by a panel of persons who shall judge how well the measuring
instrument meets the standards but there is no numerical value to express it
Criterion-related validity - It refers to the ability to predict some outcome or
estimate the existence of some current condition
The concerned criterion must possess qualities such as relevance, freedom from bias, reliability and availability.
Criterion-related validity can be classified into predictive validity and concurrent validity.
Predictive validity refers to the usefulness of a test in predicting some future performance, whereas concurrent validity refers to the usefulness of a test in closely relating to other measures of known validity.
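As a hedged illustration (the scores below are invented, and statistics.correlation requires Python 3.10+), predictive validity is commonly summarised as the correlation between the test score and a criterion measured later, such as subsequent sales performance.

# Sketch (hypothetical scores): predictive validity summarised as the
# correlation between a selection test and a later criterion measure.
import statistics

selection_test = [62, 71, 55, 80, 68, 74, 59, 85]     # scores at hiring time
later_sales = [48, 60, 42, 70, 55, 63, 45, 75]        # criterion measured later

r = statistics.correlation(selection_test, later_sales)   # Pearson's r (Python 3.10+)
print(f"predictive validity coefficient r = {r:.2f}")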
TESTS OF RELIABILITY
It is an important test of sound measurement. A measuring instrument is
reliable if it provides consistent results
Reliable measuring instrument does contribute to validity but a reliable
instrument need not be a valid instrument. For e.g., a scale that consistently
overweighs objects by 5 kgs is a reliable scale but it does not give valid
measure of weight
It is easier to assess reliability in comparison with validity
Two aspects of reliability, viz. stability and equivalence, deserve special attention
The stability aspect is concerned with securing consistent results with
repeated measurements of the same person and with the same instrument
The equivalence aspect considers how much error may get introduced by
different investigators or different samples of the item being studied. A good
way to test the equivalence of measurements by two investigators is to
compare their observations of the same events
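A minimal sketch of the stability aspect (test-retest reliability), using hypothetical scores and the same Python 3.10+ correlation helper: the same instrument is administered twice to the same people and the two sets of scores are correlated.

# Sketch (hypothetical data): the stability aspect can be checked by correlating
# two administrations of the same instrument to the same respondents.
import statistics

first_administration = [14, 18, 11, 20, 16, 13, 19, 15]
second_administration = [15, 17, 12, 19, 16, 14, 18, 16]

r = statistics.correlation(first_administration, second_administration)  # Python 3.10+
print(f"test-retest reliability estimate = {r:.2f}")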
TESTS OF PRACTICALITY
The practicality characteristic of a measuring instrument can be judged in
terms of economy, convenience and interpretability
Economy consideration suggests that some trade-off is needed between the
ideal research project and that which the budget can afford. Data collection
methods to be used are also dependent at times upon economic factors
The convenience test suggests that the measuring instrument should be easy to administer. For this purpose one should give due attention to the proper layout of the measuring instrument.
Interpretability consideration is especially important when persons other
than the designers of the test are to interpret the results.
SCALES
Scaling describes the procedures of assigning numbers to various degrees of
opinion, attitude and other concepts. This can be done in two ways viz., (i) making
a judgement about some characteristic of an individual and then placing him
directly on a scale that has been defined in terms of that characteristic and (ii)
constructing questionnaires in such a way that the score of an individual's responses assigns him a place on a scale.
Scaling has been defined as a "procedure for the assignment of numbers (or other symbols) to a property of objects in order to impart some of the characteristics of numbers to the properties in question".
Scale construction techniques: Following are the five main techniques by which
scales can be developed.
(i) Arbitrary approach: It is an approach where scale is developed on ad hoc
basis. This is the most widely used approach. It is presumed that such
scales measure the concepts for which they have been designed, although
there is little evidence to support such an assumption.
(ii) Consensus approach: Here a panel of judges evaluate the items chosen
for inclusion in the instrument in terms of whether they are relevant to
the topic area and unambiguous in implication.
(iii) Item analysis approach: Under it a number of individual items are developed into a test which is given to a group of respondents. After administering the test, the total scores are calculated for everyone. Individual items are then analysed to determine which items discriminate between persons or objects with high total scores and those with low scores (a sketch of this discrimination check appears after this list).
(iv) Cumulative scales are chosen on the basis of their conforming to some
ranking of items with ascending and descending discriminating power.
For instance, in such a scale the endorsement of an item representing an
extreme position should also result in the endorsement of all items
indicating a less extreme position.
(v) Factor scales may be constructed on the basis of intercorrelations of
items which indicate that a common factor accounts for the relationship
between items. This relationship is typically measured through factor
analysis method
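A rough sketch of the item analysis (discrimination) idea referred to in (iii) above, with invented responses: respondents are split into high-total and low-total groups, and items whose mean scores differ sharply between the two groups discriminate well.

# Sketch of item analysis with hypothetical responses: compare how high-total
# and low-total respondents score each item; a large gap means good discrimination.
item_scores = {                      # respondent -> score per item (e.g. 1-5)
    "r1": [5, 4, 2], "r2": [4, 5, 3], "r3": [2, 1, 3],
    "r4": [1, 2, 2], "r5": [5, 5, 3], "r6": [2, 2, 2],
}

totals = {r: sum(s) for r, s in item_scores.items()}
ranked = sorted(totals, key=totals.get, reverse=True)
high, low = ranked[:3], ranked[-3:]                 # top and bottom groups

n_items = len(next(iter(item_scores.values())))
for i in range(n_items):
    high_mean = sum(item_scores[r][i] for r in high) / len(high)
    low_mean = sum(item_scores[r][i] for r in low) / len(low)
    print(f"item {i + 1}: discrimination index = {high_mean - low_mean:.2f}")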
Scaling Techniques
Rating scales: The rating scale involves qualitative description of a limited number
of aspects of a thing or of traits of a person. When we use rating scales (or
categorical scales), we judge an object in absolute terms against some specified
criteria i.e., we judge properties of objects without reference to other similar
objects.
These ratings may be in such forms as "like-dislike", "above average, average, below average", or other classifications with more categories such as "like very much—like somewhat—neutral—dislike somewhat—dislike very much"; "excellent—good—average—below average—poor"; "always—often—occasionally—rarely—never", and so on.
1) Dichotomous Scale
The dichotomous scale is used to elicit a Yes or No answer, as in the
example below. Note that a nominal scale is used to elicit the response.
Eg: Do you own a car? Yes No
Category Scale
The category scale uses multiple items to elicit a single response as per the following example. This also uses the nominal scale.
Eg: Where in northern California do you reside?
North Bay South Bay East Bay Peninsula Other
2) Likert scale
Summated scales (or Likert-type scales) are developed by utilizing the item analysis approach, wherein a particular item is evaluated on the basis of how well it discriminates between those persons whose total score is high and those whose score is low.
Summated scales consist of a number of statements which express either a favourable or unfavourable attitude towards the given object to which the respondent is asked to react. The respondent indicates his agreement or disagreement with each statement in the instrument. Each response is given a numerical score, indicating its favourableness or unfavourableness, and the scores are totalled to measure the respondent's attitude.
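A minimal scoring sketch (the statement wording, respondents and the choice of which item is reverse-scored are all hypothetical): each response is converted to 1-5 points, unfavourably worded statements are reverse-scored, and the points are totalled into the respondent's attitude score.

# Sketch of summated (Likert-type) scoring with hypothetical statements:
# responses score 1-5, negatively worded statements are reverse-scored,
# and the per-respondent total is the attitude score.
SCORE = {"strongly agree": 5, "agree": 4, "undecided": 3,
         "disagree": 2, "strongly disagree": 1}
NEGATIVE_ITEMS = {2}                 # statement 2 is unfavourably worded (assumed)

responses = {                        # respondent -> answer per statement
    "resp_1": ["agree", "disagree", "strongly agree"],
    "resp_2": ["undecided", "agree", "disagree"],
}

for person, answers in responses.items():
    total = 0
    for item_no, answer in enumerate(answers, start=1):
        points = SCORE[answer]
        if item_no in NEGATIVE_ITEMS:
            points = 6 - points      # reverse-score unfavourable statements
        total += points
    print(f"{person}: summated attitude score = {total}")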
Advantages:
(a) It is relatively easy to construct the Likert-type scale in comparison to
Thurstone-type scale because Likert-type scale can be performed without a panel
of judges.
(b) Likert-type scale is considered more reliable because under it respondents answer each statement included in the instrument.
(c) Each statement, included in the Likert-type scale, is given an empirical test for
discriminating ability and as such, unlike Thurstone-type scale, the Likert-type
scale permits the use of statements that are not manifestly related (to have a direct
relationship) to the attitude being studied.
(d) Likert-type scale can easily be used in respondent-centred and stimulus-centred studies i.e., through it we can study how responses differ between people and how responses differ between stimuli.
(e) Since the Likert-type scale takes much less time to construct, it is frequently used by students of opinion research.
Limitations: There are several limitations of the Likert-type scale as well.
One important limitation is that, with this scale, we can simply examine
whether respondents are more or less favourable to a topic, but we cannot
tell how much more or less they are.
There is no basis for belief that the five positions indicated on the scale are
equally spaced. The interval between ‘strongly agree’ and ‘agree’, may not
be equal to the interval between “agree” and “undecided”. This means that
Likert scale does not rise to a stature more than that of an ordinal scale,
whereas the designers of Thurstone scale claim the Thurstone scale to be an
interval scale.
5) Factor Scales
Factor scales are developed through factor analysis or on
the basis of intercorrelations of items which indicate that a
common factor accounts for the relationships between items.
Factor scales are particularly “useful in uncovering latent
attitude dimensions and approach scaling through the
concept of multiple-dimension attribute space.”
More specifically, the two problems viz., how to deal appropriately with the universe of content which is multi-dimensional, and how to uncover underlying (latent) dimensions which have not been identified, are dealt with through factor scales. An important factor scale based on factor analysis is the Semantic Differential (S.D.), and the other one is Multidimensional Scaling.
The semantic differential consists of a set of bipolar rating scales, usually of 7 points, by which one or more respondents rate one or more concepts on each scale item. For instance, the S.D. scale items for analysing candidates for a leadership position may be shown as under
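Separately from the illustrative leadership items referred to above, here is a hedged sketch (the adjective pairs and ratings are invented) of how 7-point bipolar ratings might be averaged into a semantic differential profile for a concept.

# Sketch (hypothetical adjective pairs): each respondent rates a concept on
# 7-point bipolar scales; averaging per scale item gives the S.D. profile.
bipolar_items = ["weak-strong", "passive-active", "unfair-fair"]

ratings = {                          # respondent -> rating (1-7) per bipolar item
    "resp_1": [6, 5, 7],
    "resp_2": [5, 6, 6],
    "resp_3": [7, 6, 5],
}

for idx, item in enumerate(bipolar_items):
    profile = sum(r[idx] for r in ratings.values()) / len(ratings)
    print(f"{item}: mean rating = {profile:.2f}")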