
Psychology

Psychology is a discipline derived from philosophy, its parent discipline. The word psychology comes from two Greek words: 'psyche', meaning mind, and 'logos', meaning study. According to the American Psychological Association, psychology is defined as the scientific study of mind and behavior. Behavior stands for responses and actions that are directly observable, while mind refers to the internal states and processes that cannot be observed directly yet can be inferred from measurable responses.

Psychology helps us to comprehend how our brain and body are connected. As a scientific discipline, it provides a systematic approach that guards against biases which could otherwise lead to faulty observations.

Goals of psychology
As a science, psychology has four central goals: description, explanation, control, and prediction.
Description: Description is the most basic goal; psychologists seek to describe how people behave, think, and feel. Simply describing the behavior of humans and other animals helps psychologists understand the motivations behind it. Such descriptions also serve as behavioral benchmarks that help psychologists gauge what is considered normal and abnormal.
Explanation: Psychologists strive to explain—to understand—why people act as they do.
Explanations typically take the form of hypotheses and theories that specify the causes of
behavior. The goal of explaining is to provide answers to questions about why people react
to certain stimuli in certain ways, how various factors impact personalities and mental health,
and so on. Psychologists often use experiments, which measure the impacts of variables upon
behaviors, to help formulate theories that explain aspects of human and animal behaviors.
Control: Psychology aims to change, influence, or control behavior in order to make positive, constructive, meaningful, and lasting changes in people's lives. Psychologists exert control by designing experiments or other types of research to test whether their proposed explanations are accurate.
Prediction: Making predictions about how humans and animals will think and act is the fourth goal of psychology. By looking at past observed behaviour (describing and explaining), psychologists aim to predict how that behaviour may appear again in the future, as well as whether others might exhibit the same behaviour.
Consider Kruger et al.’s (2005) research on the first instinct fallacy. They first
determined that although students are more likely to change multiple-choice answers from
wrong to right than from right to wrong, they still erroneously believe that the best exam
strategy is to “go with your first instinct” (description). Next, they proposed a model
to explain why many students hold this belief (explanation). To test each part of their
model, Kruger et al. conducted additional studies in which they carefully controlled the
situations and questions to which participants were exposed (control). Their findings
supported the model. The knowledge gained has already led other psychologists and
educators to offer correct advice to students about answer-switching strategies (Social
Psychology Network, 2010). Now you can apply this knowledge to your own test-taking
behaviour as well.

Psychology as a science
Despite the differences in their interests, areas of study, and approaches, all psychologists
have one thing in common: They rely on scientific methods. Research psychologists use
scientific methods to create new knowledge about the causes of behavior,
whereas psychologist-practitioners, such as clinical, counselling, industrial-organizational,
and school psychologists, use existing research to enhance the everyday life of others. The
science of psychology is important for both researchers and practitioners.

The empirical methods used by scientists have developed over many years and provide a
basis for collecting, analysing, and interpreting data within a common framework in which
information can be shared. We can label the scientific method as the set of assumptions,
rules, and procedures that scientists use to conduct empirical research.
For example, if we want to know how people’s intellectual abilities change as they age, we
don’t rely on intuition, pure reasoning, or folk wisdom to obtain an answer. Rather, we
collect empirical data by exposing people to intellectual tasks and observing how they
perform. Moreover, in science these observations need to be systematic (i.e., performed
according to a system of rules or conditions) so that they will be as objective and precise as
possible (Shaughnessy et al., 2010).
The study of psychology spans many different topics at many different levels of
explanation which are the perspectives that are used to understand behavior. Lower levels of
explanation are more closely tied to biological influences, such as genes, neurons,
neurotransmitters, and hormones, whereas the middle levels of explanation refer to the
abilities and characteristics of individual people, and the highest levels of explanation relate
to social groups, organizations, and cultures (Cacioppo, Berntsen, Sheridan, & McClintock,
2000).

Different types of research methods


In psychology, descriptive research seeks to identify how humans and other animals behave,
particularly in natural settings. Such research provides valuable information about the
diversity of behavior and may yield clues about potential cause-effect relations that are later
tested experimentally. Case studies, naturalistic observation, and surveys are research
methods commonly used to describe behavior.
Case studies: A case study is an in-depth analysis of an individual, group, or event. By
studying a single case in detail, researchers typically hope to discover principles that hold
true for people or situations in general. Data may be gathered through observation,
interviews, psychological tests, physiological recordings, or task performance. The
advantage of the case study is the tremendous amount of detail it provides. It may also be the
only way to get certain kinds of information. For example, one famous case study was the
story of Phineas Gage, who, in an accident, had a large metal rod driven through his head and
survived but experienced major personality and behavioral changes during the time
immediately following the accident (Damasio et al., 1994; Ratiu et al., 2004; Van Horn et al.,
2012).
The major limitation of a case study is that it is a poor method for determining cause-effect
relations. In most case studies, explanations of behavior occur after the fact and there is little
opportunity to rule out alternative explanations. A second potential drawback concerns the
generalizability of the findings. A third drawback is the possible lack of objectivity in the
way data are gathered and interpreted. Such bias can occur in any type of research, but it is a particular concern in case studies because they are often based largely on the researcher's subjective impressions.

Naturalistic observations: In naturalistic observation, the researcher observes behavior as it occurs in a natural setting, and attempts to avoid influencing that behavior. The best way to
look at the behavior of animals or people is to watch them behave in their normal
environment. That’s why animal researchers go to where the animals live and watch them eat,
play, mate, and sleep in their own natural surroundings. With people, researchers might want
to observe them in their workplaces, in their homes, or on playgrounds. For example, if
someone wanted to know how adolescents behave with members of the opposite sex in a
social setting, that researcher might go to the mall on a weekend night.
However, naturalistic observation does not permit clear causal conclusions. In the real world, many variables simultaneously influence behavior, and they cannot be disentangled with this research technique. There also is the possibility of bias in how researchers interpret what they observe (observer bias). Finally, even the mere presence of an observer may disrupt a person's or animal's behavior (the observer effect). Thus, researchers may disguise their presence so that participants are not aware of being observed.

Survey: In the survey method, researchers will ask a series of questions about the topic they
are studying. In survey research, information about a topic is obtained by administering
questionnaires or interviews to many people. Political polls are a well-known example, but
surveys also ask about participants’ behaviors, experiences, and attitudes on wide-ranging
issues. For example, in a carefully conducted national survey of 12- to 17-year-old
Americans, 40 percent of these adolescents reported that they had witnessed violence either at
home or in their community, and 8 percent and 23 percent, respectively, indicated that they
had personally been the victims of sexual and physical assault (Hanson et al., 2006).
To draw valid conclusions about a population from a survey, the sample must be
representative: a representative sample is one that reflects the important characteristics of the
population. To obtain a representative sample, survey researchers typically use a procedure
called random sampling, in which every member of the population has an equal probability of
being chosen to participate in the survey. A common variation of this procedure, called
stratified random sampling, is to divide the population into subgroups based on
characteristics such as gender or ethnic identity. Suppose the population is 55 percent female.
In this case, 55 percent of the spaces in the sample would be allocated to women and 45
percent to men. Random sampling is then used to select the individual women and men who
will be in the survey.
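As a rough sketch of how the stratified allocation described above could be carried out in practice, the Python code below (using entirely hypothetical population lists and sample size) allocates sample slots in proportion to each stratum and then draws individuals at random within each stratum:

import random

# Hypothetical sampling frame split into strata (names are illustrative only)
population = {
    "female": [f"F{i}" for i in range(550)],  # 55% of a population of 1,000
    "male":   [f"M{i}" for i in range(450)],  # 45% of the population
}

sample_size = 100
total = sum(len(members) for members in population.values())

sample = []
for stratum, members in population.items():
    # Allocate sample slots in proportion to the stratum's share of the population
    n_stratum = round(sample_size * len(members) / total)
    # Simple random sampling within the stratum
    sample.extend(random.sample(members, n_stratum))

print(len(sample), "participants selected")
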
There also are several major drawbacks to surveys. First, survey data cannot be used to draw
conclusions about cause and effect. Second, surveys rely on participants’ self-reports, which
can be distorted by social desirability bias, by interviewer bias, by people’s inaccurate
perceptions of their own behavior, and by misinterpreting survey questions. Third,
unrepresentative samples can lead to faulty generalizations about how an entire population
would respond. And finally, even when surveys use proper random sampling procedures,
once in a while—simply by chance—a sample that is randomly chosen will turn out not to be
representative of the larger population.

Laboratory observation: Sometimes observing behavior in animals or people is just not practical in a natural setting. For example, a researcher might want to observe the reactions of
infants to a mirror image of themselves and to record the reactions with a camera mounted
behind a one-way mirror. That kind of equipment might be difficult to set up in a natural
setting. In a laboratory observation, the researcher would bring the infant to the equipment,
controlling the number of infants and their ages, as well as everything else that goes on in the
laboratory. As mentioned previously, laboratory settings have the disadvantage of being an
artificial situation that might result in artificial behavior—both animals and people often react
differently in the laboratory than they would in the real world.
The main advantage of this method is the degree of control that it gives to the observer. Both
naturalistic and laboratory observations can lead to the formation of hypotheses that can later
be tested.

Experimental Psychology
Experimental psychology is the study of psychological issues that uses experimental
procedures. The concern of experimental psychology is discovering the processes underlying
behaviour and cognition. Experimental psychologists conduct research with the help of
experimental methods.
According to Eysenck (1996) “An experiment is the planned manipulation of variables in
which at least one of the variables that is the independent variable is altered under the
predetermined conditions during the experiment.” According to Jahoda, “Experiment is a
method of testing hypothesis.” According to Festinger and Katz, “The essence of experiment
may be described as observing the effect of dependent variable after the manipulation of
independent variable.” According to Bootzin (1991), “An experiment is a research method
designed to control the factors that might affect a variable under study, thus allowing
scientists to establish cause and effect.”

Importance of experimental psychology:


In our daily lives we all collect and use psychological data to understand others as well as our own behaviour. The kind of everyday, non-scientific data that shapes our expectations and beliefs and directs our behavior towards others is called common sense psychology. As common sense psychologists, our results are affected by two factors, i.e., the source of psychological information and our inferential strategies. Thus the inferences we make are based on our own life experiences and suffer from personal biases and schemas. This is where experimental psychology comes into play. Experimental psychology is the aspect of psychological science that explores the human mind and its perceptions and behaviors through experimental methodologies and subsequent interpretation of the obtained results. Again, evidence-based practice in psychology is the integration of the best available research with clinical expertise in the context of patient characteristics, culture, and preferences (American Psychologist, 2006). This definition is in line with the one advocated by the Institute of Medicine (2001), which says, "Evidence-based practice is the integration of best research evidence with clinical expertise and patient values" (Sackett, Straus, Richardson, Rosenberg, & Haynes, 2000).

History of experimental psychology:


Experimental Psychology can well be understood by studying the history of those who were
the forerunners in this field.
WILHELM WUNDT
Wilhelm Wundt was a psychologist, a physiologist, and a psychophysicist. He is called the Founder or Father of Modern Psychology and the first man who, without reservation, is properly called a psychologist (Boring, 1969). In his world-recognised experimental laboratory, established at Leipzig (Germany) in 1879, he performed experiments on sensations, emotions, reaction time, feelings, ideas, psychophysics, etc. His main contribution to psychology was the recognition of psychology as a science; he worked scientifically in his laboratory and studied several psychological problems experimentally. He wrote the first psychology textbook, "Principles of Physiological Psychology", in 1874.
ERNST HEINRICH WEBER
Ernst Heinrich Weber was a German anatomist, physiologist, and psychologist. He is considered a founder of experimental psychology and is also called the founder of sensation physiology and psychophysics. Weber is best known for his work on the sensory response to weight, temperature, and pressure. In 1834, he conducted research on the lifting of weights. From his research, he discovered that the experience of differences in the intensity of sensations depends on percentage differences in the stimuli rather than on absolute differences. This is known as the just-noticeable difference (JND), difference threshold, or limen. He formulated the first principle of psychophysics and named it the "Just Noticeable Difference (JND)". He described the quantitative relationship between stimulus and response, known as Weber's law.
GUSTAV THEODOR FECHNER
Gustav Theodor Fechner (April 19, 1801-November 28, 1887) was a German experimental
psychologist. An early pioneer in experimental psychology and founder of psychophysics, he
inspired many 20th century scientists and philosophers. He is also credited with
demonstrating the nonlinear relationship between psychological sensation and the physical
intensity of a stimulus. He determined how much physical energy is required to produce sensations of different intensities. Fechner did excellent work in the field of psychophysics. He modified Weber's experiments and "rediscovered" Weber's notion of the differential threshold. He formalised Weber's law and saw it as a way to unite body and mind (sensation and perception), bringing together the day view and the night view and reconciling them.
HERMANN VON HELMHOLTZ
Hermann Von Helmholtz was a German physicist and a physiologist who made significant
contributions to several widely varied areas of modern science. His most prestigious work was in the field of the physiological psychology of sensation. In physiology and psychology, he is known for his mathematics of the eye, theories of vision, ideas on the visual perception of space, colour vision research, and work on the sensation of tone, the perception of sound, and empiricism. He modified Thomas Young's theory of colour vision, which is today known as the "Young-Helmholtz" theory of colour vision. Thomas Young, an English physicist, proposed a theory of colour vision in 1801 called the trichromatic theory. According to this theory, there are basically three colours: Red, Green, and Blue. Young concluded that mixing three lights, Red, Green, and Blue, is enough to produce all combinations of colours visible to a normal human eye.
JAMES MCKEEN CATTELL
James McKeen Cattell conducted research in the fields of reaction time and association. For the measurement of perception, he developed an instrument called the tachistoscope. He constructed several tests for the measurement of individual differences (personality, intelligence, creativity, aptitudes, attitudes, and level of aspiration) and mental abilities. He also worked in the field of sensation and psychophysics.
HERMANN EBBINGHAUS
Hermann Ebbinghaus was the first social scientist to conduct experimental studies of memory and the learning process. He performed several experiments on himself using nonsense syllables; in fact, he was the first to introduce nonsense syllables into memory experiments. Even today, the "Ebbinghaus curve of forgetting" is widely cited.

Experimental psychology in Indian context


Indian psychology is a school of psychology and not a system of philosophy. As a science, it
uses observation as the main tool for its study and research. Speculations and models have
their place, but in the final analysis, it is the discovery and application of facts that matter.
Concepts should relate to observations, and categories help in their analysis and organization.
It has long been realized that science is not the same in all disciplines. It takes different
forms depending on the problems to be investigated (Demos 1960). The usual methods of
natural sciences are not sufficient to do justice to “the full range of behaviour and experience
of man as a person” (Giorgi 1970). Therefore, a pluralistic methodology is called for in
dealing with the complexity of human nature. Experimental methods are important and
necessary in the field of psychology, but their “putative supremacy” over others is
questionable. It would not be wrong to say that Indian psychology is not averse to using a hypothetico-deductive methodology where applicable. In fact, a number of theories emanating from Indian thought are quite amenable to such a methodology. One example is the work of Naidu and Pandey (1992), which involved testing hypotheses derived from the theory of niṣkāma karma in the Bhagavad Gītā. Most schools of Indian thought uphold pratyakṣa (observation) and inference as valid methods for truth-testing (pramāṇas).
Again, repeatability presupposes observation under identical conditions. The difficulty of
providing identical conditions in human functioning rules out the possibility of precise
replicability. Human behavior is often shaped by events that are not easy to replicate. Further,
the requirement of replicability tends to exclude a significant amount of information relating
to human functioning (Buck 1976). Despite all these qualifications, we may note that some
areas of Indian psychology are eminently suitable for experimentation and laboratory
research. This is an already recognized fact. Meditation research is a case in point. Meditation
is an important research topic in Indian psychology. There is voluminous experimental data
collected under careful conditions, exploring a variety of variables. These include
investigation of neurophysiological correlates of meditative states, the influence of
meditation on cognitive and other kinds of skills, and the benefits of meditation for human
health and wellness. At the same time, we may also note some of the limitations of such research. For example, there is a need for greater conceptual clarity and operationalization than is currently the case. Similarly, experimental control and replication deserve special attention (Rao 2011b). Equally important is the recognition that some areas of research are beyond experimental manipulation and are not open to objectification.

Types of experiments:
1) On the basis of the setting:
a) Lab experiments: A laboratory experiment is an experiment conducted under highly
controlled conditions (not necessarily a laboratory), where accurate measurements are
possible.
The researcher decides where the experiment will take place, at what time, with which
participants, in what circumstances and using a standardized procedure.
Participants are randomly allocated to each independent variable group. It is easier to
replicate (i.e. copy) a laboratory experiment. This is because a standardized procedure is
used. They allow for precise control of extraneous and independent variables, which allows a cause-and-effect relationship to be established. However, the artificiality of the setting may produce unnatural behavior that does not reflect real life, i.e. low ecological validity. This means it may not be possible to generalize the findings to a real-life setting. Demand characteristics or experimenter effects may also bias the results and become confounding variables.
b) Field experiments: Field experiments are done in the everyday (i.e. real life) environment
of the participants. The experimenter still manipulates the independent variable, but in a real-
life setting (so cannot really control extraneous variables). Behavior in a field experiment is
more likely to reflect real life because of its natural setting, i.e. higher ecological validity than
a lab experiment. There is less likelihood of demand characteristics affecting the results, as
participants may not know they are being studied. This occurs when the study is covert. The
limitation is that there is less control over extraneous variables that might bias the results.
This makes it difficult for another researcher to replicate the study in exactly the same way.
2) On the basis of degree of control:
a) True experiments: In true experiments, the researcher randomly assigns subjects to control and treatment groups. The researcher usually has control over the design of the treatment, and the method requires the use of control and treatment groups. For example: to run a true experiment, you randomly assign half the patients in a mental health clinic to receive the new treatment. The other half, the control group, receives the standard course of treatment for depression. Every few months, patients fill out a sheet describing their symptoms to see if the new treatment produces significantly better (or worse) effects than the standard one.
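A minimal Python sketch of the random assignment step described in this example; the patient identifiers and group sizes are hypothetical:

import random

# Hypothetical pool of patients at the clinic
patients = [f"patient_{i:02d}" for i in range(20)]

random.shuffle(patients)                  # randomize the order
midpoint = len(patients) // 2
treatment_group = patients[:midpoint]     # receives the new treatment
control_group = patients[midpoint:]       # receives the standard treatment

print("Treatment:", treatment_group)
print("Control:  ", control_group)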

b) Quasi experiments: In quasi-experiments, some non-random method is used to assign subjects to the groups. The researcher often does not have control over the treatment, but instead studies pre-existing groups that received different treatments after the fact, and control groups are not required. For example: you discover that a few of the psychotherapists in the clinic have decided to try out the new therapy, while others who treat similar patients have chosen to stick with the normal protocol.
You can use these pre-existing groups to study the symptom progression of the patients
treated with the new therapy versus those receiving the standard course of treatment.

Although the groups were not randomly assigned, if you properly account for any systematic differences between them, you can be reasonably confident that any differences arise from the treatment and not from other confounding variables.

STEPS IN AN EXPERIMENT:
1) Research problem: The first step in the experimental method is raising, or simply finding, a problem. In everyday life, many factors influence our behavior. Sometimes we are puzzled and ask ourselves why people act as they do. These inquiries lead us to raise problems. Experiments can help find answers to such questions. An experiment starts with a problem; without one, there is no question for experimentation to explore and no solution to be found.

2) Hypothesis formulation: A hypothesis states what relationship a researcher expects to find between an independent variable (IV) and a dependent variable (DV) when studying a cause-and-effect relationship. "A hypothesis is a tentative set of beliefs about the nature of the world, a statement about what you expect to happen if certain conditions are true" (Halpern, 1989).
Types of hypotheses:
a) Alternative hypothesis: The alternative hypothesis states that there is a relationship between the two variables being studied (one variable has an effect on the other). An experimental hypothesis predicts what change(s) will take place in the dependent variable when the independent variable is manipulated. It states that the results are not due to chance and that they are significant in terms of supporting the theory being investigated.
b) Null hypothesis: The null hypothesis states that there is no relationship between the two
variables being studied (one variable does not affect the other). There will be no changes in
the dependent variable due to the manipulation of the independent variable.
It states that the results are due to chance and are not significant in terms of supporting the idea being investigated.
c) Directional hypothesis: A directional (one-tailed) hypothesis predicts the nature of the
effect of the independent variable on the dependent variable. It predicts in which direction the
change will take place. (i.e. greater, smaller, less, more)
E.g., adults will correctly recall more words than children.
d) Non-directional hypothesis: A non-directional (two-tailed) hypothesis predicts that the
independent variable will have an effect on the dependent variable, but the direction of the
effect is not specified. It just states that there will be a difference.
E.g., there will be a difference in how many numbers are correctly recalled by children and
adults.
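In statistical terms, a directional hypothesis corresponds to a one-tailed test and a non-directional hypothesis to a two-tailed test. The sketch below illustrates the difference with an independent-samples t-test on invented recall scores; SciPy is assumed to be available:

from scipy import stats

# Invented recall scores (number of words correctly recalled)
adults   = [14, 16, 15, 17, 13, 18, 16, 15]
children = [11, 13, 12, 14, 10, 12, 13, 11]

# Non-directional (two-tailed): is there any difference between the groups?
t_two, p_two = stats.ttest_ind(adults, children, alternative="two-sided")

# Directional (one-tailed): do adults recall MORE words than children?
t_one, p_one = stats.ttest_ind(adults, children, alternative="greater")

print(f"Two-tailed p = {p_two:.4f}")
print(f"One-tailed p = {p_one:.4f}")  # half the two-tailed p when the effect is in the predicted direction
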
3) Identifying Variables:
Variable: According to Postman and Egan (1949), "A variable is a characteristic or attribute that can take a number of values." According to D'Amato (2004), "Any measurable attribute of objects, things, or beings is called a variable." The measured attribute need not be quantitative; it can also be qualitative, such as race, sex, or religion.
Variables are of several types:
1) Independent variable: According to D’Amato (2004), Independent variable is “Any
variable manipulated by experimenter either directly or through selection in order to
determine its effect on a behavioral variable.” According to Townsend, “Independent variable
is that variable which is manipulated by the experimenter in his attempt to ascertain its
relationship to an observed phenomenon.” According to Ghorpade, “Independent variable is
usually the cause whose effects are being studied. The experimenter changes or varies
independent variable to find out what effects such change produced on dependent variable.”
Some experts, depending upon the method of manipulation used, have tried to divide the independent variable into the Type-E independent variable and the Type-S independent variable (D'Amato, 1970). A Type-E independent variable is one which is directly or experimentally manipulated by the experimenter, whereas a Type-S independent variable is one which is not manipulated directly by the experimenter or researcher, because such variables are difficult to manipulate directly, and is instead manipulated through the process of selection only.
2) Dependent Variable: According to D'Amato, "Any measured behavioral variable of interest in the psychological investigation is a dependent variable." According to Townsend, "A dependent variable is that factor which appears, disappears, or varies as the experimenter introduces, removes, or varies the independent variable." According to Postman and Egan, the 'phenomenon' which we wish to explain and predict is the dependent variable.
3) Confounding variable: A confounding variable is any variable, other than the IV, that is not equivalent across all conditions. These are variables that are mistakenly manipulated along with the IV. A confounding variable is related to both the IV and the DV, and confounding variables can lead researchers to draw incorrect conclusions.

4) Extraneous variable: Independent and dependent variables are not the only variables
present in many experiments. In some cases, extraneous variables may also play a role. This
type of variable is one that may have an impact on the relationship between the independent
and dependent variables.

For example, in an experiment on the effects of sleep deprivation on test performance, other factors such as age, gender, and academic background may have an impact on the results. In such cases, the experimenter will note the values of these extraneous variables so that any impact can be controlled for.

5) Intervening variable: Intervening variables, also sometimes called intermediate or mediator variables, are factors that play a role in the relationship between two other variables.

For example: sleep problems in university students are often influenced by factors such as
stress. As a result, stress might be an intervening variable that plays a role in how much sleep
people get, which may then influence how well they perform on exams.

6) Controlled variable: In many cases, extraneous variables are controlled for by the
experimenter. A controlled variable is one that is held constant throughout an experiment.

In the case of participant variables, the experimenter might select participants who are the same in background and temperament to ensure that these factors don't interfere with the results.
Holding these variables constant is important for an experiment because it allows researchers
to be sure that all other variables remain the same across all conditions.

4) Preparing a research design

i) Experimental design: Experimental design refers to the ways participants are assigned to the different conditions of an experiment. Your experimental design can involve subjecting the same group of participants to all levels of your independent variable or to just one level. An experimental design is a way to plan out an experiment so that the results of the research are objective and valid.

ii) Importance of experimental designs: Experimental design methods allow the experimenter
to understand better and evaluate the factors that influence a particular system by means of
statistical approaches. Such approaches combine theoretical knowledge of experimental
designs and a working knowledge of the particular factors to be studied.

iii) Types of experimental designs: There are various types of experimental designs.
a) Within subject designs: Single-group or within-subjects design is a technique in which
each subject serves as his own control. This single group of subjects serves under all values
or levels or conditions of the research or the independent variable, that is the variable under
study. For example, the experimenter wants to determine whether nicotine (found in tobacco)
has a deleterious or harmful or damaging effect on motor coordination. One powerful means
of studying this problem is as follows: the experimenter would choose a group of subjects
and submit them to a series of motor coordination tests, one test daily. Before some of the
tests, the subjects would be given a dose of nicotine, and before others they would receive a
placebo, an innocuous or harmless substance administered in the same way as the drug, thereby creating drug (nicotine) and no-drug (placebo) conditions. If the drug has a harmful effect on motor coordination, the experimenter should observe that, in general, the performance of subjects is poorer when tested under the drug (nicotine) than when tested after receiving the placebo. Because each subject is tested under both conditions, the experimenter need not concern herself or himself with individual differences in motor coordination ability. A single-group or within-subjects design can produce quite representative results that are possible to generalise.
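A minimal sketch of how data from such a within-subjects design might be analysed: because each subject provides a score under both the nicotine and placebo conditions, a paired-samples t-test can be used. The scores below are invented, and SciPy is assumed to be available:

from scipy import stats

# Invented motor-coordination scores: one pair of scores per subject
placebo_scores  = [22, 25, 19, 24, 23, 21, 26, 20]
nicotine_scores = [20, 23, 18, 21, 22, 19, 24, 19]

# Paired t-test: each subject serves as his or her own control
t_stat, p_value = stats.ttest_rel(nicotine_scores, placebo_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")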

b) Between subject designs: With separate-groups (between-subjects) designs, a separate group of subjects serves under each of the conditions of the research. If the only way in which the experimenter could obtain subjects with different amounts of nicotine in them was to choose smokers and non-smokers from the general population, then he would be forced to use a separate-groups (or between-subjects) design; that is, one group of subjects (non-smokers) would be tested under the no-drug condition and the other group (smokers) under the drug condition. By comparing the performance of the two groups of subjects, the experimenter can evaluate the effect of nicotine on motor coordination. The situation with respect to individual differences in motor coordination ability is drastically changed: the experimenter must now be concerned with any dimension of individual differences that might significantly affect motor coordination, such as age, sex, and occupation.

c) Counterbalancing design: Counterbalancing is a technique used to deal with order effects when using a repeated measures design. With counterbalancing, the participant sample is
divided in half, with one half completing the two conditions in one order and the other half
completing the conditions in the reverse order. E.g., the first 10 participants would complete
condition A followed by condition B, and the remaining 10 participants would complete
condition B and then A. Any order effects should be balanced out by this technique.
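A minimal Python sketch of this counterbalancing scheme, using hypothetical participant identifiers:

import random

participants = [f"P{i:02d}" for i in range(20)]
random.shuffle(participants)

half = len(participants) // 2
order_ab = participants[:half]   # complete condition A first, then condition B
order_ba = participants[half:]   # complete condition B first, then condition A

schedule = {p: ("A", "B") for p in order_ab}
schedule.update({p: ("B", "A") for p in order_ba})

for person, order in sorted(schedule.items()):
    print(person, "->", " then ".join(order))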

d) Longitudinal design: A longitudinal study is a correlational research design that examines variables over a lengthy period of time. The investigation may last for several weeks, months, or even years; in some circumstances, longitudinal research can last for several decades. A longitudinal design is used to find correlations between variables that cannot be attributed to differing background variables, because the same group of people is observed repeatedly with this observational research method. Data may be acquired regularly during the course of the study after the first data collection at its beginning. This enables researchers to track the evolution of several factors.

e) Cross sectional design: A cross-sectional study is a type of research design in which you
collect data from many different individuals at a single point in time. In cross-sectional
research, you observe variables without influencing them. For example: you want to know
how many families with children in New York City are currently low-income so you can
estimate how much money is required to fund a free lunch program in public schools.
Because all you need to know is the current number of low-income families, a cross-sectional
study should provide you with all the data you require.
Sometimes a cross-sectional study is the best choice for practical reasons – for instance, if
you only have the time or money to collect cross-sectional data, or if the only data you can
find to answer your research question was gathered at a single point in time.

As cross-sectional studies are cheaper and less time-consuming than many other types of
study, they allow you to easily collect data that can be used as a basis for further research.

f) Quasi experiment design: A quasi-experimental design, like a true experiment, seeks to establish a connection between an independent and a dependent variable. A quasi-experiment, however, does not rely on random assignment, unlike an actual experiment. Instead, non-random criteria are used to assign participants to groups. When true experiments cannot be conducted for ethical or practical reasons, a quasi-experimental design is a helpful tool.

g) Pre test and post test designs: In an experiment with a pretest-posttest design,
measurements are made of subjects both before and after they receive a treatment. Both
experimental and quasi-experimental research can make use of pretest-posttest designs,
which may or may not include control groups.

5) Sampling techniques: It is crucial how we choose a sample of people to participate in our research. The population to which we can generalise our research findings depends on the method we use to select participants (random sampling). The method we use to randomly assign participants to the various treatment conditions determines whether bias exists in our treatment groups. If we do a poor job at the sampling stage of the research process, the integrity of the entire project is at risk.

a) Probability sampling/structured sampling/quantitative sampling: With probability sampling, each member of the population has a chance of being chosen. Every population subset has an equal chance of being represented in the sample since probability sampling uses random selection. Probability samples are more representative, and researchers can more accurately extrapolate their findings to the entire group.

Simple Random Sampling: As the name implies, simple random sampling is the most basic
form of probability sampling. Using a computer programme or random number generator,
psychology researchers randomly choose a sample from among all the members of a
population.

Stratified Random Sampling: For stratified random sampling, the population is first divided into segments (strata), and a simple random sample is then taken from each subgroup. For instance, researchers might segment the population into subgroups according to race, sex, or age before selecting a simple random sample from each of these categories. Stratified random sampling helps guarantee that specific groups are fairly represented in the sample and frequently offers statistical accuracy that is higher than ordinary random sampling.
Cluster sampling: In cluster sampling, a population is divided into smaller clusters, frequently
based on boundaries or location. Then, all of the subjects in a cluster are measured after a
random sample of these clusters is chosen.
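A minimal Python sketch of cluster sampling as described above, using hypothetical schools as clusters:

import random

# Hypothetical population organised into clusters (e.g. schools in a district)
clusters = {
    f"school_{i}": [f"school_{i}_student_{j}" for j in range(30)]
    for i in range(10)
}

# Randomly select a subset of clusters, then measure everyone in those clusters
chosen_clusters = random.sample(list(clusters), k=3)
sample = [student for name in chosen_clusters for student in clusters[name]]

print("Chosen clusters:", chosen_clusters)
print("Sample size:", len(sample))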

b) Non-probability sampling/qualitative sampling: Non-probability sampling entails choosing participants using techniques that do not guarantee that every demographic subset will be represented equally. For instance, a study might use volunteers to fill its participant pool.

One issue with this type of sample is that volunteers and non-volunteers may differ on certain
characteristics, which could make it challenging to generalise the findings to the full
community.

Convenience sampling: Choosing study participants based on their availability and convenience is known as convenience sampling.

Purposive Sampling: In a purposeful sample, people who fit certain requirements are sought
out. For instance, a researcher might be curious to know what college grads between the ages
of 20 and 35 think about a particular subject. They might conduct phone interviews where
they specifically look for and speak with persons who fit their criteria.

Quota sampling: In quota sampling, particular percentages of each segment within a population are purposefully sampled.

6) Data collection: The process of gathering, measuring, and analysing precise insights for research using accepted, established methods is known as data collection. Based on the evidence gathered, a researcher can evaluate their hypothesis. No matter the subject of study, gathering data is typically the first and most crucial phase in the research process. Depending on the type of data needed, different disciplines of research require different approaches to data gathering.

a) Qualitative data collection: Qualitative data collection plays an important role in monitoring and evaluation, as it helps you delve deeper into a particular problem and gain a human perspective on it. It provides in-depth information on some of the more intangible factors like experiences, opinions, motivations, behaviours or descriptions of a process, event or a particular context relevant to your project. In other words, a qualitative approach uses people's stories, experiences and feelings to measure change. Types of qualitative data collection:

• Open-Ended Surveys: allow for a systematic collection of information from a defined population, usually by means of interviews or questionnaires
administered to a sample of units in the population. Qualitative surveys
include a set of open-ended questions that aim to gather information from
people about their characteristics, knowledge, attitudes, values, behaviours,
experiences and opinions on relevant topics. Surveys can be collected via
pen/paper forms or digitally via online/offline data collection apps.
• Open ended interviews: are useful when you want an in-depth understanding
of experiences, opinions or individual descriptions of a process. Can be done
individually or in groups. In groups, you will ask fewer questions than in an
individual interview since everyone has to have the opportunity to answer and
there are limits to how long people are willing to sit still. In-person interviews
can be longer and more in-depth.
• Community interviews/meeting: is a form of public meeting open to all
community members. Interaction is between the participants and the
interviewer, who moderates the meeting and asks questions following a
prepared interview guide. This is ideal for interacting with and gathering
insights from a big group of people.
• Focus group discussions (FGDs): is ideal when you want to interview a
small group of people (6-12 individuals) to informally discuss specific topics
relevant to the issues being examined. A moderator introduces the topic and
uses a prepared interview guide to lead the discussion and extract insights,
opinions and reactions but s/he can improvise with probes or additional
questions as warranted by the situation. The composition of people in an FGD
depends upon the purpose of the research, some are homogenous, others
diverse. FGDs tend to elicit more information than individual interviews
because people express different views, beliefs and opinions and engage in a
dialogue with one another.
• Case study: is an in-depth analysis of individuals, organisations, events,
projects, communities, time periods or a story. As it involves data collection
from multiple sources, a case study is particularly useful in evaluating
complex situations and exploring qualitative impact. A case study can also be
combined with other case studies or methods to illustrate findings and
comparisons. They are usually presented in written forms, but can also be
presented as photographs, films or videos.
• Observation: It is a good technique for collecting data on behavioural
patterns, physical surroundings, activities and processes as it entails recording
what observers see and hear at a specified site. An observation guide is often
used to look for consistent criteria, behaviours, or patterns. Observations can
be obtrusive or unobtrusive. It is ‘obtrusive’ when observations are made with
the participant’s knowledge and ‘unobtrusive’ when observations are done
without the knowledge of the participant.
• Ethnography: Ethnographic research involves observing and studying
research topics in a specific geographic location to understand cultures,
behaviors, trends, patterns and problems in a natural setting. Geographic
location can range from a small entity to a big country. Researchers must
spend a considerable amount of time, usually several weeks or months, with
the group being studied to interact with them as a participant in their
community. This makes it a time-consuming and challenging research method
and cannot be limited to a specific period.
• Visual techniques: in this method, participants are prompted to construct
visual responses to questions posed by the interviewers, the visual content can
be maps, diagrams, calendars, timelines and other visual displays to examine
the study topics. This technique is especially effective where verbal methods
can be problematic due to low-literate or mixed-language target populations,
or in situations where the desired information is not easily expressed in either
words or numbers.
• Literature review and document review: is a review of secondary data
which can be either qualitative or quantitative in nature e.g. project records
and reports, administrative databases, training materials, correspondence,
legislation and policy documents, as well as videos, electronic data or photos
that are relevant to your project. This technique can provide cost-effective and
timely baseline information and a historical perspective of the project or
intervention.
• Oral histories: it’s the process of establishing historical information by
interviewing a select group of informants and drawing on their memories of
the past. Oral history strives to obtain interesting and provoking historic
information from different perspectives, most of which cannot be found in
written sources. The insights from oral history can be discussed, debated, and
utilized in numerous capacities.

b)Quantitative data collection: The quantitative approach uses numbers and statistics to
quantify change and is often expressed in the form of digits, units, ratios, percentages,
proportions, etc. Compared to the qualitative approach, the quantitative approach is more
structured, straightforward and formal. The quantitative approach is utilized to derive answers to the questions ‘how much’ or ‘how many’. Types of quantitative data collection:
• A structured closed-ended interview: this type of interview systematically
follows carefully organised questions that only allow a limited range of
answers, such as “yes/no” or expressed by a rating/number on a scale. For
quantitative interviews to be effective, each question must be asked the same
way to each respondent, with little to no input from the interviewer.
• Closed ended surveys and questionnaires: is an ideal choice when you want
simple, quick feedback which can easily translate into statistics for analysis. In
quantitative research, surveys are structured questionnaires with a limited
number of closed-ended questions and rating scales used to generate numerical
data or data that can be separated under ‘yes’ or ‘no’ categories. These can be
collected and analysed quickly using statistics such as percentages.
• Experimental research: is guided by hypotheses that state an expected relationship between two or more variables, so an experiment is conducted to support or disconfirm the experimental hypothesis. Usually, the independent variable is manipulated for one set of participants (the treatment group) and withheld from another (the control group) so that its effect on the dependent variable can be measured. The effect of the independent variable on the dependent variable is observed and recorded to draw a reasonable conclusion regarding the relationship between the two groups. This type of research is mainly used in the natural sciences.
• Correlational research: is non-experimental research that studies the statistical relationship between two or more interrelated variables, that is, how one variable varies with another, without manipulation and without influence from any extraneous variable. It uses mathematical analysis of the collected data, and the results are presented in a diagram or as statistics.
• Causal-comparative: also known as quasi-experimental research, compares groups on variables that are not manipulated. One variable is dependent and the other independent, and participants are not randomly assigned to groups.
• Statistical data review: entails a review of population censuses, research studies and other sources of statistical data.
• Laboratory testing: precise measurement of a specific objective phenomenon, e.g. infant weight or a water quality test.
7) Analysis of data and drawing conclusions: Once the study is complete and the observations have been made and recorded, the researchers need to analyse the data and draw their conclusions. Typically, data are analysed using both descriptive and inferential statistics. Descriptive statistics are used to summarize the data, and inferential statistics are used to generalize the results from the sample to the population. In turn, inferential statistics are used to draw conclusions about whether a theory has been supported, refuted, or requires modification.
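As a minimal sketch of this step, the code below summarizes two sets of invented scores with descriptive statistics (mean and standard deviation) and then applies an inferential test (an independent-samples t-test) to judge whether the observed difference is likely to hold in the population; SciPy is assumed to be available:

from statistics import mean, stdev
from scipy import stats

# Invented scores for a treatment group and a control group
treatment = [78, 85, 81, 90, 77, 84, 88, 79]
control   = [72, 75, 70, 78, 74, 73, 77, 71]

# Descriptive statistics: summarize each sample
for label, scores in (("Treatment", treatment), ("Control", control)):
    print(f"{label}: mean = {mean(scores):.1f}, sd = {stdev(scores):.1f}")

# Inferential statistics: generalize from the sample to the population
t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("The difference is statistically significant at the .05 level.")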

Evaluation
Advantages of the Experimental Method
(i) The experimental method is the only method that allows the experimenter to infer cause-and-effect relationships.
(ii) In the experimental method, the experimenter can exercise control over confounding variables.
(iii) It helps in conducting a systematic, objective, precise, planned, well-organised, and well-controlled scientific study.
(iv) This method makes any subject a science, because a subject is a science not by “what” it studies but by “how” it studies. This method makes Psychology a science.
Disadvantages of the Experimental Method
(i) Its control is its weakness. It makes the set-up or situation artificial. A situation in which all the variables are carefully controlled is not a normal, natural situation. As a result, the researcher or experimenter may have difficulty generalising the findings from observations in an experiment to the real world (Christensen, 1992). For example, a researcher may not be able to generalise from a study that examined examinees’ memory for nonsense syllables presented on a computer screen in a psychological laboratory and draw conclusions about students’ learning of introductory Psychology in a college classroom. It is also very difficult to know and control all the intervening variables.
(ii) Not all psychological phenomena can be studied by this method.
(iii) The experimental method is costly in terms of money and time. A well-established laboratory and trained personnel are needed to conduct experiments.

Ethical issues
1) Informed consent: Whenever possible, investigators should obtain the consent of participants. In practice, this means it is not sufficient to simply get potential participants to say “Yes”. They also need to know what it is that they are agreeing to. In other words, the psychologist should, so far as is practicable, explain what is involved in advance and obtain the informed consent of participants.
Before the study begins, the researcher must outline to the participants what the research is about, and then ask their consent (i.e. permission) to take part. An adult (18 yrs+) capable of giving permission to participate in a study can provide consent. Parents/legal guardians of minors can also provide consent to allow their children to participate in a study.

2) Debriefing: After the research is over the participant should be able to discuss the
procedure and the findings with the psychologist. They must be given a general idea of what
the researcher was investigating and why, and their part in the research should be explained.
Participants must be told if they have been deceived and given reasons why. They must be
asked if they have any questions and those questions should be answered honestly and as
fully as possible.
Debriefing should take place as soon as possible and be as full as possible; experimenters
should take reasonable steps to ensure that participants understand debriefing.
“The purpose of debriefing is to remove any misconceptions and anxieties that the
participants have about the research and to leave them with a sense of dignity, knowledge,
and a perception of time not wasted” (Harris, 1998).

3)Protection of participants: Researchers must ensure that those taking part in research will
not be caused distress. They must be protected from physical and mental harm. This means
you must not embarrass, frighten, offend or harm participants. Normally, the risk of harm
must be no greater than in ordinary life, i.e. participants should not be exposed to risks
greater than or additional to those encountered in their normal lifestyles.
The researcher must also ensure that if vulnerable groups are to be used (elderly, disabled,
children, etc.), they must receive special care. For example, if studying children, make sure
their participation is brief as they get tired easily and have a limited attention span.
Researchers are not always able to accurately predict the risks of taking part in a study, and in some cases a therapeutic debriefing may be necessary if participants have become disturbed during the research (as happened to some participants in Zimbardo’s prisoner/guard study).

4) Deception: This is where participants are misled or wrongly informed about the aims of
the research. Types of deception include (i) deliberate misleading, e.g. using confederates,
staged manipulations in field settings, deceptive instructions; (ii) deception by omission, e.g.,
failure to disclose full information about the study, or creating ambiguity. The researcher
should avoid deceiving participants about the nature of the research unless there is no
alternative – and even then this would need to be judged acceptable by an independent expert.
However, there are some types of research that cannot be carried out without at least some
element of deception.
For example, in Milgram’s study of obedience the participants thought they were giving electric shocks to a learner when the learner answered a question incorrectly. In reality, no shocks were given and the learners were confederates of Milgram.
This is sometimes necessary in order to avoid demand characteristics (i.e. the clues in an
experiment which lead participants to think they know what the researcher is looking for).
Another common example is when a stooge or confederate of the experimenter is used (this
was the case in both the experiments carried out by Asch).
However, participants must be deceived as little as possible, and any deception must not
cause distress. Researchers can determine whether participants are likely to be distressed
when deception is disclosed, by consulting culturally relevant groups. If the participant is
likely to object or be distressed once they discover the true nature of the research at
debriefing, then the study is unacceptable.
If you have gained participants’ informed consent by deception, then they will have agreed to
take part without actually knowing what they were consenting to. The true nature of the
research should be revealed at the earliest possible opportunity, or at least during debriefing.
Some researchers argue that deception can never be justified and object to this practice as it
(i) violates an individual’s right to choose to participate; (ii) is a questionable basis on which
to build a discipline; and (iii) leads to distrust of psychology in the community.

5) Confidentiality: Participants, and the data gained from them, must be kept anonymous unless they give their full consent. No names must be used in a lab report.
What do we do if we find out something which should be disclosed (e.g. criminal act)?
Researchers have no legal obligation to disclose criminal acts and have to determine which is
the most important consideration: their duty to the participant vs. duty to the wider
community.
Ultimately, decisions to disclose information will have to be set in the context of the aims of
the research.

6) Withdrawal from investigation: Participants should be able to leave a study at any time if they feel uncomfortable. They should also be allowed to withdraw their data. They should be told at the start of the study that they have the right to withdraw. They should not have pressure placed upon them to continue if they do not want to (a guideline flouted in Milgram’s research).
Participants may feel they shouldn’t withdraw as this may ‘spoil’ the study. Many participants are paid or receive course credits, and they may worry they won’t get these if they withdraw. Even at the end of the study, the participant has a final opportunity to withdraw the data they have provided for the research.
