
PSYCHOLOGICAL PROCESS I

I. INTRODUCTION TO THE PSYCHOLOGICAL PROCESS


DEFINITIONS
Psychology is the systematic, scientific study of behaviours and mental processes. Behaviours are observable (overt) actions,
responses, or reactions in both humans and animals, such as eating, speaking, laughing, running, reading, and sleeping. Mental
processes, which are not directly observable (covert), include a wide range of complex internal activities, such as thinking,
imagining, studying, and dreaming.
GOALS OF PSYCHOLOGY
Every science has the common goal of learning how things work. The goals specifically aimed at uncovering the mysteries of
human and animal behaviour are description, explanation, prediction, and control. The scientific approach is a way to accomplish
these goals of psychology.
Describe (what is happening?); the first goal of psychology is to describe the different ways that organisms behave. As
psychologists begin to describe the behaviours and mental processes of autistic children, such as difficulties in learning language,
they begin to understand how autistic children behave. After describing behaviour, psychologists try to explain behaviour, the
second goal.
The first step in understanding anything is to describe it. Description involves observing behaviour and noting everything about
it: what is happening, where it happens, to whom it happens, and under what circumstances it seems to happen. For example,
a psychologist might wonder why so many computer scientists seem to be male. She makes further observations and notes that
many “non-techies” stereotypically perceive the computer scientist’s life and environment as those of someone who lives and breathes
at the computer and surrounds himself with computer games, junk food, and science-fiction gadgets, characteristics that add up
to a very masculine ambiance. That’s what seems to be happening. The psychologist’s observations are a starting place for the
next goal: Why do females seem to avoid going into this environment?
Explain (why is it happening?); the second goal of psychology is to explain the causes of behaviour. The explanation of
autism has changed as psychologists have learned more about this complex problem. In the 1950s, psychologists explained that children
became autistic if they were reared by parents who were cold and rejecting. In the 1990s, researchers discovered that autism is
caused by genetic and biological factors that result in abnormal brain development. Being able to describe and explain behaviour helps
psychologists reach the third goal, which is to predict behaviour.
Based on her observations, the psychologist might try to come up with a tentative explanation, such as “women feel they do
not belong in such stereotypically masculine surroundings.” In other words, she is trying to understand or find an explanation
for the lower proportion of women in this field. Finding explanations for behaviour is a very important step in the process of
forming theories of behaviour. A theory is a general explanation of a set of observations or facts. The goal of description
provides the observations, and the goal of explanation helps build the theory.
Predict (when will it happen again?); the third goal of psychology is to predict how organisms will behave in certain situations.
However, psychologists may have difficulty predicting how autistic children will behave in certain situations unless they have
already described and explained their behaviours. For example, from the first two goals, psychologists know that autistic children
are easily overwhelmed by strange stimuli and have difficulty paying attention. Based on this information, psychologists can
predict that autistic children will have difficulty learning in a school environment because there are too many activities and
stimuli in the classroom. However, if psychologists can predict behaviour, then they can often control behaviour.
Determining what will happen in the future is a prediction. In the computer-science study described above (Cheryan et al.), the prediction is clear: If we
want more women to go into computer science, we must do something to change either the environment or the perception of
the environment typically associated with this field. This is the purpose of the last of the four goals of psychology: changing or
modifying behaviour.
Control (how can it be changed?); for some psychologists, the fourth goal of psychology is to control an organism’s
behaviour. However, the idea of control has both positive and negative sides. The positive side is that psychologists can help
people learn to control undesirable behaviours by teaching better methods of self-control and ways to deal with difficult situations
and relationships. The negative side is the concern that psychologists might control people’s behaviours without their knowledge
or consent.
The focus of control, or the modification of some behaviour, is to change behaviour from an undesirable one (such as women
avoiding a certain academic major) to a desirable one (such as more equality in career choices). Professor Cheryan suggests that
changing the image of computer science may help increase the number of women choosing to go into this field.
HISTORICAL APPROACHES
An approach refers to a focus or perspective, which may use a particular research method or technique. The approaches to
understanding behaviour include biological, cognitive, behavioural, psychoanalytic, humanistic, cross-cultural, and, most
recently, evolutionary.
Structuralism
It really all started to come together in a laboratory in Leipzig, Germany, in 1879. It was here that Wilhelm Wundt, a physiologist,
attempted to apply scientific principles to the study of the human mind.
In his laboratory, students from around the world were taught to study the structure of the human mind. Wundt believed that
consciousness, the state of being aware of external events, could be broken down into thoughts, experiences, emotions, and
other basic elements. In order to inspect these non-physical elements, students had to learn to think objectively about their own
thoughts; after all, they could hardly read someone else’s mind. Wundt called this process objective introspection, the process
of objectively examining and measuring one’s own thoughts and mental activities. This was really the first attempt by anyone to
bring objectivity and measurement to the concept of psychology. This attention to objectivity, together with the establishment
of the first true experimental laboratory in psychology, is why Wundt is known as the father of psychology.
In fact, his laboratory was housed in several rooms in a shabby building that contained rather simple equipment, such as
platforms, various balls, telegraph keys, and metronomes. He would ask subjects to drop balls from a platform or listen to a
metronome and report their own sensations. Wundt and his followers were analysing their sensations, which they thought were
the key to analysing the structure of the mind. For this reason, they were called structuralists and their approach was called
structuralism. Structuralism was the study of the most basic elements, primarily sensations and perceptions, that make
up our conscious mental experiences. This early perspective, associated with Wilhelm Wundt and Edward Titchener, focused
on the structure or basic elements of the mind: it was based on the notion that the task of psychology is to analyse
consciousness into its basic elements and investigate how these elements are related. Just as a person might assemble
hundreds of pieces of a jigsaw puzzle into a completed picture, structuralists tried to
combine hundreds of sensations into a complete conscious experience. Perhaps Wundt’s greatest contribution was his method
of introspection. Introspection was a method of exploring conscious mental processes by asking subjects to look inward and
report their sensations and perceptions. For example, after listening to a beating metronome, the subjects would be asked to
report whether their sensations were pleasant, unpleasant, exciting, or relaxing. However, introspection was heavily criticized
for being an unscientific method because it was solely dependent on subjects’ self-reports, which could be biased, rather than
on objective measurements. Although Wundt’s approach was the first, it had little impact on modern psychology. The modern-
day cognitive approach also studies mental processes but with different scientific methods and much broader interests than
those of Wundt. It wasn’t long before Wundt’s approach was criticized for being too narrow and subjective in primarily studying
sensations. These criticisms resulted in another new approach, called functionalism.
One of Wundt’s students was Edward Titchener (1867–1927), an Englishman who eventually took Wundt’s ideas to Cornell
University in Ithaca, New York. Titchener expanded on Wundt’s original ideas, calling his new viewpoint structuralism because
the focus of the study was the structure of the mind. He believed that every experience could be broken down into its individual
emotions and sensations. Although Titchener agreed with Wundt that consciousness could be broken down into its basic
elements, Titchener also believed that objective introspection could be used on thoughts as well as on physical sensations. In
1894, one of Titchener’s students at Cornell University became famous for becoming the first woman to receive a Ph.D. in
psychology. Her name was Margaret F. Washburn, and she was Titchener’s only graduate student for that year. In 1908 she
published a book on animal behaviour that was considered an important work in that era of psychology, The Animal Mind.
Structuralism was a dominant force in the early days of psychology, but it eventually died out in the early 1900s, as the
structuralists were busily fighting among themselves over just which key elements of experience were the most important. A
competing view arose not long after Wundt’s laboratory was established, shortly before structuralism came to America.
Functionalism
For twelve years, William James laboured over a book called the Principles of Psychology, which was published in 1890 and
included almost every topic that is now part of psychology textbooks: learning, sensation, memory, reasoning, attention, feelings,
consciousness, and a revolutionary theory of emotions. Unlike Wundt, who saw mental activities as composed of basic elements,
James viewed mental activities as having developed through ages of evolution because of their adaptive functions, such as
helping humans survive. James was interested in the goals, purposes, and functions of the mind, an approach called
functionalism. Functionalism was the study of the function, rather than the structure, of consciousness and was interested
in how our minds adapt to a changing environment. This early perspective, associated with William James, focused on how
the mind allows people to adapt, live, work, and play: it was based on the belief that psychology should investigate the
function or purpose of consciousness, rather than its structure.
Functionalism did not last as a unique approach, but many of James’s ideas grew into current areas of study, such as emotions,
attention, and memory. In addition, James suggested ways to apply psychological principles to teaching, which had a great impact
on educational psychology. For all these reasons, James is considered the father of modern psychology.
He was heavily influenced by Charles Darwin’s ideas about natural selection, in which physical traits that help an animal adapt
to its environment and survive are passed on to its offspring. It is interesting to note that one of James’s early students was Mary
Whiton Calkins, who completed every course and requirement for earning a Ph.D. but was denied that degree by Harvard
University because she was a woman. In 1905, she became the first female president of the American Psychological Association.
In the new field of psychology, functionalism offered an alternative viewpoint to structuralists. But like so many of psychology’s
early ideas, it is no longer a major perspective. Instead, one can find elements of functionalism in the modern fields of educational
psychology (studying the application of psychological concepts to education) and industrial/organizational psychology (studying
the application of psychological concepts to businesses, organizations, and industry), as well as other areas in psychology.
Functionalism also played a part in the development of one of the more modern perspectives, evolutionary psychology.
Gestalt
“The whole is greater than the sum of its parts.” Max Wertheimer, like James, objected to the structuralist point of view, but for
different reasons. Wertheimer believed that psychological events such as perceiving and sensing could not be broken down into
any smaller elements and still be properly understood. For example, just as a melody is made up of individual notes that can
only be understood if the notes are in the correct relationship to one another, perception can only be understood as a whole,
entire event. Hence the familiar slogan, “The whole is greater than the sum of its parts.” Wertheimer and others believed that
people naturally seek out patterns (“wholes”) in the sensory information available to them. Wertheimer and others devoted their
efforts to studying sensation and perception in this new perspective, Gestalt psychology. Gestalt is a German word meaning
“an organized whole” or “configuration,” which fits well with the focus on studying whole patterns rather than small pieces of
them. The Gestalt approach emphasized that perception is more than the sum of its parts and studied how sensations are
assembled into meaningful perceptual experiences, or an early perspective in psychology focusing on perception and
sensation, particularly the perception of patterns and whole figures. Today, Gestalt ideas are part of the study of cognitive
psychology, a field focusing not only on perception but also on learning, memory, thought processes, and problem-solving; the
basic Gestalt principles of perception are still taught within this newer field. The Gestalt approach has also been influential in
psychological therapy, becoming the basis for a therapeutic technique called Gestalt therapy.
Behaviourism
One reason psychoanalysis struggled to gain acceptance within psychology was that it conflicted in many basic ways with the
tenets of behaviourism, a new school of thought that gradually became dominant within psychology between 1913 and the late
1920s. Founded by John B. Watson (1878–1958), behaviourism is a theoretical orientation based on the premise that
scientific psychology should study only observable behaviour. It is important to understand what a radical change this
definition represented. Watson (1913, 1919) proposed that psychologists abandon the study of consciousness altogether and
focus exclusively on behaviours that they could observe directly. Behaviour refers to any overt (observable) response or activity
by an organism.
By the early 1900s, psychologist John B. Watson had tired of the arguing among the structuralists; he challenged the functionalist
viewpoint, as well as psychoanalysis, with his own “science of behaviour,” or behaviourism. Watson wanted to bring psychology
back to a focus on scientific inquiry, and he felt that the only way to do that was to ignore the whole consciousness issue and
focus only on observable behaviour, something that could be directly seen and measured. Watson was certainly aware of Freud’s
work and his views on unconscious repression. Freud believed that all behaviour stems from unconscious motivation, whereas
Watson believed that all behaviour is learned. Freud stated that a phobia, an irrational fear, is really a symptom of an underlying,
repressed conflict and cannot be “cured” without years of psychoanalysis to uncover and understand the repressed material.
Watson believed that phobias are learned through the process of conditioning and set out to prove it. Along with his colleague
Rosalie Rayner, he took a baby, known as “Little Albert,” and taught him to fear a white rat by making a loud, scary noise every
time the infant saw the rat until finally just seeing the rat caused the infant to cry and become fearful. Even though “Little
Albert” was not afraid of the rat at the start, the experiment worked very well; in fact, he later appeared to be afraid of other
fuzzy things, including a rabbit, a dog, and a sealskin coat. Behaviourism is still a major perspective in psychology today. It has
also influenced the development of other perspectives, such as cognitive psychology.
MODERN APPROACHES
Biological

The biological approach examines how genes, hormones, and the nervous system interact with the environment to
influence learning, personality, memory, motivation, emotions, and other traits and abilities. Psychobiologists, researchers
who use the biological approach, have shown that genetic factors influence a range of human behaviours. The genes
use a chemical alphabet to write instructions for the development of the brain and body and the manufacture of chemicals that
affect mental health, learning, emotions, and everything people do. For example, it is known that autism runs in families, and
this genetic involvement is supported by the finding that if one identical twin has autism, then there is as high as a 90% chance
the other twin will have signs of autism. Researchers recently identified a number of genes involved in autism and are now using
genetic screening to help identify the causes of autism. Also using the biological approach, researchers found that social problems
associated with autism are linked to less activity in brain cells responsible for human empathy (mirror neurons). These cells
allow us to put ourselves in other people’s shoes and experience how they feel. Reduced activity in these cells helps explain why
children with autism misunderstand verbal and non-verbal cues suggesting different emotions felt by others, including joy,
sadness, and anger, and why they have difficulty empathizing with others. Essentially, psychobiologists study how the brain
affects the mind, and vice versa. They may study an experience that many students are familiar with, called test anxiety.
Cognitive

Cognitive psychology is a modern perspective that focuses on memory, intelligence, perception, problem-solving, and
learning, that is, on how people think, remember, store, and use information. It became a major force in the field in
the 1960s. It wasn’t a new idea, as the Gestalt psychologists had themselves supported the study of mental
processes of learning. The development of computers and discoveries in biological psychology all stimulated interest in studying
the processes of thought. The cognitive perspective with its focus on memory, intelligence, perception, thought processes,
problem-solving, language, and learning has become a major force in psychology. From the cognitive perspective, the relatively
new field of cognitive neuroscience includes the study of the physical workings of the brain and nervous system when engaged
in memory, thinking, and other cognitive processes. Cognitive neuroscientists use tools for imaging the structure and activity of
the living brain, such as magnetic resonance imaging (MRI), functional magnetic resonance imaging (fMRI), and positron
emission tomography (PET). The continually developing field of brain imaging is important in the study of cognitive processes.
Behavioural

The behavioural approach analyses how organisms learn new behaviours or modify existing ones, depending on whether
events in their environments reward or punish these behaviours. Psychologists use behavioural principles to teach people
to be more assertive or less depressed, to toilet-train young children, and to change many other behaviours. Psychologists use
behavioural principles to train animals to press levers, to use symbols to communicate, and to perform behaviours on cue in
movies and television shows. Largely through the creative work and original ideas of B. F. Skinner (1989), the behavioural
approach has grown into a major force in psychology. Skinner’s ideas stress the study of observable behaviours, the importance
of environmental reinforcers (reward and punishment), and the exclusion of mental processes. His ideas, often referred to as
strict behaviourism, continue to have an impact on psychology. However, some behaviourists, such as Albert Bandura (2001a),
disagree with strict behaviourism and have formulated a theory that includes mental or cognitive processes in addition to
observable behaviours. According to Bandura’s social cognitive approach, our behaviours are influenced not only by
environmental events and reinforcers but also by observation, imitation, and thought processes. Behaviourists have developed
a number of techniques for changing behaviours that can be applied to both animals and humans.
Ivan Pavlov, like Freud, was not a psychologist. He was a Russian physiologist who showed that a reflex (an involuntary reaction)
could be caused to occur in response to a formerly unrelated stimulus.
Psychoanalytic

The psychoanalytic approach, a modern version of psychoanalysis that focuses more on the development of a sense
of self and on the motivations behind a person’s behaviour other than sexual ones, is based on the belief that
childhood experiences greatly influence the development of later personality traits and psychological problems. It
also stresses the influence of unconscious fears, desires, and motivations on thoughts and behaviours. Psychoanalysis
itself is an insight therapy based on the theory of Freud, emphasizing the revealing of unconscious conflicts; the term
covers both Freud’s theory of personality and the therapy based on it. It should be clear by now that psychology didn’t start in one place and at
one particular time. People of several different viewpoints were trying to promote their own perspectives on the study of the
human mind and behaviour in different places all over the world. The medical profession took a whole different approach to
psychology. In the late 1800s, Sigmund Freud had become a noted physician in Austria while the structuralists were arguing, the
functionalists were specializing, and the Gestalts were looking at the big picture. Freud was a neurologist, a medical doctor who
specializes in disorders of the nervous system; he and his colleagues had long sought a way to understand the patients who were
coming to them for help. Freud’s patients suffered from nervous disorders for which he and other doctors could find no physical
cause. Therefore, it was thought, the cause must be in the mind, and that is where Freud began to explore, i.e., on the basis of
insights from therapy sessions, Freud proposed some revolutionary ideas about the human mind and personality development.
He proposed that there is an unconscious (unaware) mind into which we push, or repress, all of our threatening urges and
desires. He believed that these repressed urges, in trying to surface, created nervous disorders in his patients. Freud stressed the
importance of early childhood experiences, believing that personality was formed in the first 6 years of life; if there were
significant problems, those problems must have begun in the early years, i.e., one hallmark of Sigmund Freud’s psychoanalytic
approach is the idea that the first five years have a profound effect on later personality development. In addition, Freud reasoned
that thoughts or feelings that make a person feel fearful or guilty, that threaten self-esteem, or that come from unresolved
sexual conflicts are automatically placed deep into the unconscious. In turn, these unconscious, threatening thoughts and feelings
give rise to anxiety, fear, or psychological problems. Because Freud’s patients could not uncover their unconscious fears, he
developed several techniques, such as dream interpretation, to bring hidden fears to the surface. Freud’s belief in an unconscious
force that influenced human thought and behaviour was another of his revolutionary ideas. Many of Freud’s beliefs, such as the
existence of unconscious feelings and fears, have survived, while other ideas, such as the all-importance of a person’s first five
years, have received less support. Many of Freud’s terms, such as id, ego, superego, and libido, have become part of our everyday
language. Unlike the biological, cognitive, and behavioural approaches, the psychoanalytic approach would search for hidden or
unconscious forces underlying test anxiety.
Some of his well-known followers were Alfred Adler, Carl Jung, Karen Horney, and his own daughter, Anna Freud. Anna Freud
began what became known as the ego movement in psychology, which produced one of the best-known psychologists in the
study of personality development, Erik Erikson. Freud’s ideas are still influential today, although in a somewhat modified form.
He had a number of followers in addition to those already named, many of whom became famous by altering Freud’s theory to
fit their own viewpoints, but his basic ideas are still discussed and debated. While some might think that Sigmund Freud was
the first person to deal with people suffering from various mental disorders, the truth is that mental illness has a fairly long (and
not very pretty) history. Freudian psychoanalysis, the theory and therapy based on
Freud’s ideas have been the basis of much modern psychotherapy (a process in which a trained psychological professional helps
a person gain insight into and change his or her behaviour), but another major and competing viewpoint has actually been more
influential in the field of psychology as a whole.
Humanistic
Often called the “third force” in psychology, humanism was really a reaction to both psychoanalytic theory and behaviourism.
In contrast to the psychoanalytic focus on sexual development and behaviourism’s focus on external forces in guiding personality
development, some professionals began to develop a perspective that would allow them to focus on people’s ability to direct
their own lives. Humanists held the view that people have free will, the freedom to choose their own destiny, and strive for self-
actualization, the achievement of one’s full potential. Two of the earliest and most famous founders of this view were Abraham
Maslow (1908-1970) and Carl Rogers (1902-1987). Today, humanism exists as a form of psychotherapy aimed at self-
understanding and self-improvement.
Beginning in the 1950s, the diverse opposition to behaviourism and psychoanalytic theory blended into a loose alliance that
eventually became a new school of thought called “humanism”. The humanistic approach emphasizes that each individual
has great freedom in directing his or her future, a large capacity for achieving personal growth, a considerable amount
of intrinsic worth, and enormous potential for self-fulfilment. In psychology, humanism is a theoretical orientation that
emphasizes the unique qualities of humans, especially their freedom and their potential for personal growth. Humanists
take an optimistic view of human nature. The humanistic approach emphasizes the positive side of human nature, its creative
tendencies, and its inclination to build caring relationships. This concept of human nature, with its emphasis on freedom, potential,
and creativity, is the most distinctive feature of the humanistic approach and sets it far apart from the behavioural and
psychoanalytic approaches.
The humanistic approach officially began in the early 1960s with the publication of the Journal of Humanistic Psychology. One
of the major figures behind establishing the journal and the humanistic approach was Abraham Maslow, who had become
dissatisfied with the behavioural and psychoanalytic approaches. To paraphrase Maslow (1968), the humanistic approach was
to be a new way of perceiving and thinking about the individual’s capacity, freedom, and potential for growth. Many of
humanism’s ideas have been incorporated into approaches for counselling and psychotherapy. Because of its free-will concept
of human nature and lack of experimental methods, many behaviourists regard the humanistic approach as more of a philosophy
of life than a science of human behaviour. The humanistic approach also applies to dealing with a student’s problems, such as
test anxiety and procrastination.
Rogers (1951) argued that human behaviour is governed primarily by each individual’s sense of self, or “self-concept”, which
animals presumably lack. Both he and Maslow (1954) maintained that to fully understand people’s behaviour, psychologists
must take into account the fundamental human drive toward personal growth. They asserted that people have a basic need to
continue to evolve as human beings and to fulfil their potential. In fact, humanists argued that many psychological disturbances
are the result of thwarting these uniquely human needs. Fragmentation and dissent have reduced the influence of humanism in
recent decades. Some advocates, though, have predicted a renaissance for the humanistic movement (Taylor, 1999). To date,
humanists’ greatest contribution to psychology has probably been their innovative treatments for psychological problems and
disorders. For example, Carl Rogers pioneered a new approach to psychotherapy called person-centred therapy that remains
extremely influential today.
Cross-cultural

The cross-cultural approach studies the influence of cultural and ethnic similarities and differences on psychological
and social functioning, or is the study of similarities and differences in behaviour among individuals who have developed
in different cultures. The search for relationships between cultural context and human behaviour is carried out within three
general frames of reference. First, culture-comparative psychology carries out studies of individuals in different cultures in a search for
systematic relationships between features of cultures and behavioural development and expression in all behavioural domains.
It asserts that an important distinction exists between two levels of phenomena: group culture and individual behaviour.
Second, cultural psychology has strong links to cultural anthropology (particularly in the field of “culture and personality”) and has
focused mainly on social behaviour and cognition. It tends to dismiss the distinction between the cultural and behavioural levels
of phenomena, claiming that they are closely intertwined. Third, indigenous psychology examines the close ties between deep-rooted
aspects of cultures (particularly historical, philosophical, and religious beliefs) and behaviour. It has arisen mainly in societies
that are not part of the Western world. Researchers seek to understand their own people in their own cultural terms rather than
through the concepts and methods of Western psychology.
These three perspectives, although originally fairly distinct, are now converging into one coherent discipline (termed “cross-
cultural psychology”) that seeks to portray and interpret similarities and differences in human behaviour across cultures and to
discover general principles that may contribute to the emergence of global psychology.
Evolutionary

The evolutionary perspective focuses on the biological bases of universal mental characteristics that all humans share.
The most recent modern approach to psychology emerges out of evolutionary theory and is called the evolutionary approach.
The evolutionary approach studies how evolutionary ideas, such as adaptation and natural selection, explain human behaviours
and mental processes. Although the evolutionary approach is relatively new, research has already examined how evolution
influences a variety of behaviours and mental processes, such as aggression, mate selection, fears, depression, and decision-
making.
It seeks to explain general mental strategies and traits, such as why people lie, how attractiveness influences mate selection, why the fear of snakes is so common, or why people universally like music and dancing. This approach may also overlap with biopsychology and the sociocultural perspective. In this perspective, the mind is seen as a set of information-processing machines, designed by the same process of natural selection that Darwin (1859) first theorized, allowing human beings to solve the problems faced in the early days of human evolution, the problems of the early hunters and gatherers.
Eclectic
Rather than strictly focusing on one of the seven approaches, most of today’s psychologists use an eclectic approach, which
means they use different approaches to study the same behaviour. By combining information from the biological, cognitive,
behavioural, psychoanalytic, humanistic, cross-cultural, and evolutionary approaches, psychologists stand a better chance of
reaching their four goals of describing, explaining, predicting, and controlling behaviour.
SCOPE AND APPLICATION/AREAS OF SPECIALIZATION
(1) Clinical and counselling psychology includes the assessment and treatment of people with psychological
problems, such as grief, anxiety, or stress. Some clinical and counselling psychologists work with a variety of
populations, whereas others may specialize in specific groups like children or the elderly. They may work in
hospitals, community health centres, private practice, or academic settings.
(2) Developmental psychology examines moral, social, emotional, and cognitive development throughout a
person’s entire life. Some developmental psychologists focus on changes in infancy and childhood, while others
trace changes through adolescence, adulthood, and old age. They work in academic settings and may consult on
day-care or programs for aging.
(3) Social psychology involves the study of social interactions, stereotypes, prejudices, attitudes, conformity,
group behaviours, aggression, and attraction. Many social psychologists work in academic settings, but some
work in hospitals and federal agencies as consultants and in business settings as personnel managers.
(4) Experimental psychology includes the areas of sensation, perception, learning, human performance,
motivation, and emotion. Experimental psychologists conduct much of their research under carefully controlled
laboratory conditions, with both animal and human subjects. Most work in academic settings, but some also work
in business, industry, and government.
(5) Physiological psychologists or psychobiologists study the biological basis of learning and memory; the
effects of brain damage; the causes of sleep and wakefulness; the basis of hunger, thirst, and sex; the effects
of stress on the body; and the ways in which drugs influence behaviour. Biological psychology or
psychobiology involves research on the physical and chemical changes that occur during stress, learning, and
emotions, as well as how our genetic makeup, brain, and nervous system interact with our environments and
influence our behaviours. Psychobiologists work in academic settings, hospitals, and private research laboratories.
(6) Psychometrics involves the construction, administration, and interpretation of psychological tests. It focuses
on the measurement of people’s abilities, skills, intelligence, personality, and abnormal behaviours. To
accomplish their goals, psychologists in this area focus on developing a wide range of psychological tests, which
must be continually updated and checked for usefulness and cultural biases. Some of these tests are used to assess
people’s skills and abilities, as well as to predict their performance in certain careers and situations, such as college
or business.
(7) Cognitive psychology involves how we process, store, and retrieve information and how cognitive processes
influence our behaviours. Cognitive research includes memory, thinking, language, creativity, and decision-
making. A relatively new area that combines cognitive and biological approaches is called cognitive
neuroscience.
(8) Industrial/organizational psychology examines the relationships between people and their work
environments. These psychologists may be involved in personnel selection, help improve employee relationships,
or work to increase employee job satisfaction. Industrial/organizational psychologists usually work in businesses, industry,
and academic settings.
(9) Sport psychology is a proficiency that uses psychological knowledge and skills to address optimal
performance and well-being of athletes, developmental and social aspects of sports participation, and
systemic issues associated with sports settings and organizations. Career prospects for practicing sports
psychologists include educational counselling (where psychologists work with coaches, sports administrators, family
members, and other supporters to build the optimal environment for athletes to excel) and clinical counselling.
(10) Peace psychology is the area of specialization in the study of psychology that seeks to develop theory and
practices that prevent violence and conflict and mitigate their effects on society. It also aims
to study and develop viable methods of promoting peace, and it is now global in scope. It recognizes that violence
can be cultural, which occurs when beliefs are used to justify either direct or structural violence.
(11) Educational psychology is defined as the branch of psychology which studies the behaviour of the learner
in relation to his educational needs and his environment. It is applied in a form centred on the process
of teaching and learning, and it is this which helps the teacher in better teaching and the learner in better learning.
(12) Military psychology is defined as the application of research techniques and principles of psychology to the
resolution of problems to either optimize the behavioural capabilities of one’s own military forces or
minimize the enemies’ behavioural capabilities to conduct war. It is an area of study and application of
psychological principles and methods to the military environment.
(13) Criminal psychology is the branch of psychology which is concerned with the collection, examination, and
presentation of evidence for judicial purposes. It would seem from this explanation that criminal psychology
is concerned with the investigative and court process.
RESEARCH METHODS
Observation
Observational methods in psychological research entail the careful and accurate observation and description of a
subject's behaviour. Researchers utilizing the observational method can exert varying amounts
of control over the environment in which the observation takes place.
(1) Naturalistic observation involves watching the behaviour of animals or people in their normal environment.
It is a systematic study of behaviour, and such studies are often replicated to increase reliability. To replicate in research means repeating a study or experiment
to see if the same results will be obtained, in an effort to demonstrate the reliability of results.
Advantages
i. It allows researchers to get a realistic picture of how behaviour occurs because they are actually watching that
behaviour in its natural setting.
ii. Participant observation is a naturalistic observation in which the observer becomes a participant in the group
being observed.
iii. A blind or blinded experiment is a scientific experiment where some of the people involved are prevented from
knowing certain information that might lead to conscious or subconscious bias on their part, thus invalidating
the results.
Disadvantages
i. Observer effect is the tendency of people or animals to behave differently from normal when they know they
are being observed.
ii. Observer bias is the tendency of observers to see what they expect to see.
iii. Observations made at one time in one setting may not hold true for another time, even if the setting
is similar, because the conditions are not going to be identical time after time; researchers don't have that kind
of control over the natural world.
(2) Laboratory observation refers to observing the behaviour of subjects in a controlled environment.
Advantage
i. The degree of control that it gives to the observer.
Disadvantage
i. In a more controlled, arranged environment like a laboratory, researchers might get behaviour that is contrived or
artificial rather than genuine.
Experiment
An experiment is a method for identifying cause-and-effect relationships by following a set of rules and guidelines that
minimize the possibility of error, bias, and chance occurrences. The only method that will allow researchers to determine
the cause of a behaviour is the experiment. Alternatively, it is the deliberate manipulation of a variable to see whether corresponding changes in
behaviour result, allowing the determination of cause-and-effect relationships.
Advantage
i. It has the greatest potential for identifying cause-and-effect relationships with less error and bias than either surveys
or case studies.
Disadvantages
i. Information obtained in one experimental situation or laboratory setting may not apply to other situations.
ii. Placebo effect, the phenomenon in which the expectations of the participants in a study can influence their
behaviour.
iii. Experimenter effect, the tendency of the experimenter’s expectations for a study to unintentionally influence the
results of the study.
These effects can be reduced with a single-blind study, in which the subjects do not know if they are in the experimental
or the control group, or a double-blind study, in which neither the experimenter nor the subjects know if the subjects
are in the experimental or the control group.
The steps involved in designing an experiment are;
i. Selection, the researchers might start by selecting the children they want to use in the experiment.
ii. Variable, decide on the variable the researchers want to manipulate (which would be the one they think causes
changes in behaviour) and the variable they want to measure to see if there are any changes (this would be the effect
on the behaviour of the manipulation). Often deciding on the variables in the experiment comes before the selection
of the participants or subjects. The independent variable in an experiment is manipulated by the experimenter, whereas
the dependent variable in an experiment represents the measurable response or behaviour of the subjects in the
experiment.
iii. Groups, the best way to control for confounding variables is to have two groups of participants. The group that is
exposed to the independent variable (the violent cartoon in the example) is called the experimental group because it is
the group that receives the experimental manipulation. The other group that gets either no treatment or some kind
of treatment that should have no effect (like the group that watches the non-violent cartoon in the example) is
called the control group because it is used to control for the possibility that other factors might be causing the effect
that is being examined.
iv. Randomization, the random assignment of participants to one or the other condition is the best way to ensure
control over other interfering, or extraneous, variables. Random assignment means that each participant has an
equal chance of being assigned to each condition.
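The randomization step above can be sketched in a few lines of Python. This is an illustrative sketch only; the participant names and the two condition labels are invented for the example:

```python
import random

def random_assignment(participants, conditions=("experimental", "control")):
    """Randomly assign participants to conditions so each participant has an
    equal chance of landing in each group (the randomization step above)."""
    shuffled = list(participants)   # copy so the caller's list is untouched
    random.shuffle(shuffled)
    groups = {condition: [] for condition in conditions}
    for i, person in enumerate(shuffled):
        # deal the shuffled participants out like cards, one condition at a time
        groups[conditions[i % len(conditions)]].append(person)
    return groups

# Hypothetical participants (names invented for the example)
children = ["Ava", "Ben", "Chloe", "Dan", "Eli", "Fay"]
groups = random_assignment(children)
```

Shuffling first and then dealing in round-robin order keeps the group sizes equal while still giving every participant the same chance of ending up in either condition.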
Correlational
A correlation is an association or relationship in the occurrence of two or more events. Correlation is actually a statistical
technique, a particular way of organizing numerical information so that it is easier to look for patterns in the information. For
example, if one twin has hyperactivity, a correlation will tell us the likelihood that the other twin also has hyperactivity. The
likelihood or strength of a relationship between two events is called a correlation coefficient. A correlation coefficient is a
number that indicates the strength of a relationship between two or more events: the closer the number is to –1.00 or
+1.00, the greater the strength of the relationship. A variable is anything that can change or vary: scores on a test, the
temperature in a room, gender, and so on.
Advantage
i. A correlation will tell researchers if there is a relationship between the variables, how strong the relationship is, and
in what direction the relationship goes. If researchers know the value of one variable, they can predict the value of
the other.
Disadvantage
i. The biggest error that people make concerning correlation is to assume that it means one variable is the cause of
the other, i.e., correlation does not prove causation.
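To make the correlation coefficient concrete, the Pearson coefficient can be computed directly from its definition (covariance divided by the product of the standard deviations). The hours-vs-scores data below are hypothetical, invented purely for illustration:

```python
import math

def correlation_coefficient(x, y):
    """Pearson correlation coefficient: ranges from -1.00 to +1.00; the closer
    to either extreme, the stronger the relationship between the two variables."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Hypothetical data: hours studied vs. test score for five students
hours = [1, 2, 3, 4, 5]
scores = [52, 58, 65, 71, 80]
r = correlation_coefficient(hours, scores)   # close to +1.00: strong positive relationship
```

A value near +1.00, as here, lets researchers predict one variable from the other, but it still says nothing about which variable, if either, causes the other.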
Case study
Another descriptive technique is called the case study, in which one individual is studied in great detail or an in-depth
analysis of the thoughts, feelings, beliefs, or behaviours of a single person. In a case study, researchers try to learn everything
they can about that individual. For example, Sigmund Freud based his entire theory of psychoanalysis on case studies of his
patients in which he gathered information about their childhoods and relationships with others from the very beginning of their
lives to the present.
Advantages
i. It provides a tremendous amount of detail.
ii. It may also be the only way to get certain kinds of information. For example, one famous case study was the story
of Phineas Gage, who, in an accident, had a large metal rod driven through his head and survived but experienced
major personality and behavioural changes during the time immediately following the accident. Researchers
couldn’t study that with naturalistic observation, and an experiment is out of the question. Case studies are good
ways to study things that are rare.
Disadvantages
i. Researchers can’t really apply the results to other similar people. In other words, they can’t assume that if another
person had the same kind of experiences growing up, he or she would turn out just like the person in their case
study.
ii. People are unique and have too many complicating factors in their lives to be that predictable. So, what researchers
find in one case won’t necessarily apply or generalize to others.
iii. Another weakness of this method is that case studies are a form of detailed observation and are vulnerable to bias
on the part of the person conducting the case study, just as observer bias can occur in naturalistic or laboratory
observation.
Survey
A survey is a way to obtain information by asking many individuals, either person to person, by telephone, or by mail, to
answer a fixed set of questions about particular subjects. In the survey method, researchers will ask a series of questions
about the topic they are studying. Surveys can be conducted in person in the form of interviews or on the telephone, on the
Internet, or with a questionnaire. The questions used in interviews or on the telephone can vary, but usually, the questions in a
survey are all the same for everyone answering the survey. In this way, researchers can ask lots of questions and survey literally
hundreds of people.
Advantages
i. While guarding against error and bias, surveys can be a useful research tool to quickly and efficiently collect
information on behaviours, beliefs, experiences, and attitudes from a large sample of people and can compare
answers from various ethnic, age, socioeconomic, and cultural groups.
ii. It is an efficient way to obtain much information from a large number of people.
Disadvantages
i. The surveys may get very different results depending on how questions are worded.
ii. The sex or race of the questioner can also affect how people answer the questions.
iii. The surveys can be biased by how questions are worded and by interviewing a group of people who do not represent
the general population.
iv. The information can contain errors or be biased because people may not remember accurately or answer truthfully.
Content analysis
Content analysis is a research method used to identify specific patterns of words and concepts within a text
or set of documents. It is used for analysing various aspects of content and for exploring mental models alongside the cognitive,
linguistic, cultural, social, and historical significance of content. It is both a quantitative and a qualitative method that offers a
relatively objective evaluation of content: it is more accurate than a comparison based on the impressions of
any single listener and more effective than a casual review or evaluation.
Advantages
i. To analyse communication and social interaction without the direct involvement of the participants.
ii. It follows a systematic strategy that can be easily reproduced by other researchers, generating results with high
reliability.
iii. It can be conducted at any time and in any location, at little cost.
Disadvantages
i. It can be reductive.
ii. It is subjective.
iii. It is prone to error.
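A minimal sketch of quantitative (conceptual) content analysis is a keyword-frequency count across a set of documents. The sample documents and the keyword set below are invented for illustration:

```python
import re
from collections import Counter

def keyword_frequencies(documents, keywords):
    """Count how often each keyword (concept) occurs across the documents --
    the simplest quantitative form of content analysis."""
    counts = Counter()
    for doc in documents:
        tokens = re.findall(r"[a-z']+", doc.lower())   # crude word tokenizer
        counts.update(t for t in tokens if t in keywords)
    return counts

docs = [
    "Attention and memory are central to cognition.",
    "Memory errors reveal how attention filters information.",
]
freq = keyword_frequencies(docs, {"attention", "memory", "filters"})
```

Because the procedure is fully specified, another researcher running the same code on the same documents obtains identical counts, which is the reproducibility advantage noted above.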
II. ATTENTION
DEFINITION OF ATTENTION
Attention is the means by which a person actively processes a limited amount of information from the enormous amount
available through the senses, stored memories, and other cognitive processes. It includes both conscious and
unconscious processes. In many cases, conscious processes are relatively easy to study. Unconscious processes are harder to
study, simply because a person isn’t conscious of them.
Consciousness includes both the feeling of awareness and the content of awareness, some of which may be under the focus of
attention. Therefore, attention and consciousness form two partially overlapping sets. At one time, psychologists believed that
attention was the same thing as consciousness. Now, however, they acknowledge that some active attentional processing of
sensory information, and the remembered information, proceeds without the conscious awareness.
Conscious attention serves three purposes
in playing a causal role for cognition.
(1) It helps in monitoring the interactions with the environment. Through such monitoring, a person can maintain the
awareness of how well he's adapting to the situation in which he finds himself.
(2) It assists a person in linking his past (memories) and his present (sensations) to give him a sense of continuity of
experience. Such continuity may even serve as the basis for personal identity.
(3) It helps in controlling and planning for future actions. One can do so based on the information from monitoring
and from the links between past memories and present sensations.
FACTORS AFFECTING ATTENTION
There are determining factors that can affect the functioning of attention and can define which stimulus you will direct your
attention to. These can be external or internal.
External factors (external determiners); these come from the surroundings and make concentration on relevant stimuli easier or
more difficult. They are properties of the situation or stimulus itself, which is what gives them their strong pull on attention.
Some external factors are:
(1) Nature; a picture attracts attention more rapidly than words.
(2) Intensity; it refers to the strength of the stimulus. The more intense a stimulus is, the more likely we are
to give attention resources to it, e.g., a loud noise, bright light, or strong smell.
(3) Size; the bigger a stimulus is the more attention resources it captures.
(4) Location; the more distant a stimulus is, the less likely it is to receive attention resources, i.e., nearer stimuli
are given much more attention than distant ones.
(5) Movement; moving stimuli capture more attention than ones that remain static.
(6) Novelty; newer or strange stimuli attract more of the attention.
(7) Change; if a different stimulus appears that breaks the dynamic, attention will be directed to the new stimulus,
whether the change is in the motion, size, quality, intensity, or extensity of the stimulus.
(8) Colour; colourful stimuli are more attention grabbing than black and white ones.
(9) Contrast; stimuli that contrast against a group attract more of the attention.
(10) Repetition; the more a stimulus is repeated, the more it captures attention, e.g., advertisement.
(11) Emotional burden; both positive and negative stimuli attract attention more than neutral ones.
Internal factors (internal determiners); these come from the individual and, as a result, depend on each person: the individual
himself determines which stimuli are paid attention to. Some internal factors are:
(1) Interests; people concentrate more on stimuli that interest them, as interest and attention go hand in hand.
(2) Motives and needs; the stronger the desire, need, or motivation attached to a stimulus, the more readily it is
attended to, e.g., a hungry person attends readily to food.
(3) Attitude, likes and dislikes; attitudes, beliefs, preferences, etc. determine which stimuli receive attention.
(4) Mood at the moment; the mood, the purpose in hand, or the goal determines attention at the moment.
(5) Emotion; stimuli that provoke stronger emotions attract more attention. However, it must be kept in mind that positive
moods contribute to focusing attention resources, but negative moods make concentration more difficult.
(6) Effort required by the task; people make a prior evaluation of the effort required to do a task and depending on this,
it will attract more or less attention.
(7) Organic state; depends on the physical state that the person is in. So, states of tiredness, discomfort, fever, etc. will
make mobilising attention more difficult. If, on the other hand, a person is in a state relating to survival, for example,
thirst or hunger, stimuli related with the satiation of these needs will attract more attention resources.
(8) Trains of thought; when thoughts follow a determined course, based on concrete ideas, the appearance of stimuli
related to these will capture more of the attention.
(9) Mindset or readiness; the right mindset makes concentration much better and prevents restlessness and deviation.
TYPES OF ATTENTION
Span of attention
Span of attention is the amount of information an observer can grasp from a complex stimulus at a single momentary exposure.
The maximum span of attention of an adult is 20 minutes.
The types of the span of attention include;
(1) Visual span; the number of items of a specified character that can be correctly reproduced or reported immediately
after a single presentation. The items can be single dots, grouped dots, nonsense syllables, meaningful words, numbers, lines,
etc. The maximum number of stimuli observed at a glance is six or seven.
(2) Auditory span; measured with a metronome. The maximum number of sounds one can hear at a time is eight.
Factors influencing the span of attention;
(1) Objective factors; grouping, nature of stimuli, exposure time, arranged and meaningful material, rhythm, practice,
effects of background, etc.
(2) Subjective factors; past experience, familiarity, mental set, age, intelligence, interest, previous experience, etc.
Sustained attention
Sustained attention is concerned with concentration. It refers to the ability to maintain attention on an object or event for
longer durations. It is also known as “vigilance”. Sometimes people have to concentrate on a particular task for many hours. Air
traffic controllers and radar readers provide good examples of this phenomenon. They have to constantly watch and
monitor signals on screens. The occurrence of signals in such situations is usually unpredictable, and errors in detecting signals
may be fatal. Hence, a great deal of vigilance is required in those situations.
Factors influencing sustained attention; several factors can facilitate or inhibit an individual’s performance on tasks of sustained
attention.
(1) Sensory modality; performance is found to be superior when the stimuli (called signals) are auditory than when they
are visual.
(2) Clarity of stimuli; intense and long-lasting stimuli facilitate sustained attention and result in better performance.
(3) Temporal uncertainty; when stimuli appear at regular intervals of time, they are attended better than when they appear
at irregular intervals.
(4) Spatial uncertainty; stimuli that appear at a fixed place are readily attended, whereas those that appear at random
locations are difficult to attend to.
Distraction of attention
Distraction is the process of diverting the attention of an individual or group from a desired area of focus and thereby blocking
or diminishing the reception of desired information. Alternatively, a distraction is any stimulus whose presence interferes with the process
of attention or draws attention away from the object which a person wishes to attend to. The stimuli which cause the distraction are
called distractors. Distractions take attention away from what an operator needs to do when performing a task. Distraction
may be caused by a number of factors, including: the lack of ability to pay attention; lack of interest in the object of attention;
or the intensity of the distractor, novelty or attractiveness of something other than the object of attention.
Distractions come from both external sources, and internal sources. External distractions include factors such as visual triggers,
social interactions, music, text messages, and phone calls. There are also internal distractions such as hunger, fatigue, illness,
worrying, and daydreaming. Both external and internal distractions contribute to the interference of focus.
(1) External factors; noise, improper lighting, unfavourable temperature, inadequate ventilation, defective method of
teaching, improper teaching aid, improper behaviour, etc.
(2) Internal factors; emotional-disturbance, ill-health, boredom, lack of motivation, feelings of fatigue, interesting
thoughts not related to the matter in hand, etc.
Shifting of attention
Shifting of attention is defined as the ability to flexibly shift back and forth between multiple tasks, operations, or mental sets.
It is the attention that passes from one stimulus to another stimulus or from one aspect of the stimulus to another aspect of the
stimulus. It is a usual phenomenon mainly found in children, as children generally lack the ability of concentration. Therefore,
shifts are more frequent. The ability to voluntarily focus or shift attention as needed develops during the early elementary school
years, between 7 and 9 years of age. Attention shifting continues to improve throughout middle childhood and becomes
relatively mature by the beginning of adolescence. It can also be seen in adults. For example; while reading a book, the attention
shifts from word to word, sentence to sentence, paragraph to paragraph and from page to page. Similarly, while playing cards,
the attention shifts from card to card.
Steps involved in the shifting of attention; the shifting of attention could be described as involving three logical steps or
processes associated with visual selective attention:
(1) Dis-engaging
(2) Shifting
(3) Engaging
To shift attention from one location or object to another, attention needs to disengage or reorient from the location or object
where it is currently deployed, shift and then engage it on another location or object.
Division of attention
Divided attention involves the processing of multiple inputs; it is the distribution of attention among two or more tasks.
It is also defined as attending to different stimuli simultaneously.
Early work in the area of divided attention had participants view a videotape in which the display of a basketball game was
superimposed on the display of a hand-slapping game. Participants could successfully monitor one activity and ignore the other.
However, they had great difficulty in monitoring both activities at once, even if the basketball game was viewed by one eye and
the hand-slapping game was watched separately by the other eye. Neisser and Becklen hypothesized that improvements in
performance eventually would have occurred as a result of practice. They also hypothesized that the performance of multiple
tasks was based on skill resulting from practice. They believed it not to be based on special cognitive mechanisms.
The Psychological Refractory Period (PRP) paradigm is a method for investigating dual-task interference. It is a sort of
"double-reaction-time" task in which two different stimuli (signals) are presented in rapid succession, and each stimulus
requires a separate fast response. Either task on its own would be trivially simple, but when the two are paired, responding becomes
quite challenging. The Stimulus Onset Asynchrony (SOA) is the time that elapses between the presentation of the two stimuli.
Later, investigators used a dual-task paradigm to study divided attention during the simultaneous performance of
two activities: reading short stories and writing down dictated words. The researchers compared and contrasted the response
time (latency) and accuracy of performance in each of the three conditions. Of course, higher latencies mean slower responses.
As expected, initial performance was quite poor for the two tasks when the tasks had to be performed at the same time. However,
Spelke and her colleagues had their participants practice to perform these two tasks 5 days a week for many weeks (85 sessions
in all). To the surprise of many, given enough practice, the participants’ performance improved on both tasks. They showed
improvements in their speed of reading and accuracy of reading comprehension, as measured by comprehension tests. They
also showed increases in their recognition memory for words they had written during dictation. Eventually, participants’
performance on both tasks reached the same levels that the participants previously had shown for each task alone.
Selective attention
Many of the early experiments involved the idea of a “filter” that acted on incoming information, keeping some information
out and letting some information in for further processing. These early experiments used mainly auditory stimuli. Later research
also included visual stimuli. The term selective attention refers to the fact that people usually focus their attention on one or a
few tasks or events rather than on many. To say people mentally focus their resources implies that they shut out (or at least
process less information from) other, competing tasks.
Colin Cherry (1953) referred to this phenomenon as the cocktail party problem, the process of tracking one conversation in the
face of the distraction of other conversations. He observed that cocktail parties are often settings in which selective attention is
salient. He studied selective attention in a more carefully controlled experimental setting. He devised a task known as shadowing.
Cherry presented a separate message to each ear, known as dichotic presentation. In a dichotic listening experiment, different
messages are presented to the two ears. In a selective attention experiment, participants are instructed to pay attention to the
message presented to one ear (the attended message), repeating it out loud as they are hearing it, and to ignore the message
presented to the other ear (the unattended message). Participants are usually able to accomplish this task easily, repeating the
message with a delay of a few seconds between hearing a word and saying it. This procedure of repeating a message out loud is
called shadowing. The shadowing procedure is used to ensure that participants are focusing their attention on the attended
message. As Cherry’s participants shadowed the attended message, the other message was stimulating auditory receptors within
the unattended ear. However, when asked what they had heard in the unattended ear, participants could say only that they could
tell there was a message and could identify it as a male or female voice. They could not report the content of the message. Other
dichotic listening experiments have confirmed this lack of awareness of most of the information being presented to the
unattended ear. For example, Neville Moray (1959) showed that participants were unaware of a word that had been repeated 35
times in the unattended ear.
THEORIES OF ATTENTION
Filter theory
Donald Broadbent (1958) created a model of attention to explain how selective attention is achieved. This early selection
model, which introduced the flow diagram to cognitive psychology, proposed that information passes through the following
stages.

(1) Sensory memory holds all of the incoming information for a fraction of a second and then transfers all of it to the
next stage.
(2) The filter identifies the attended message based on its physical characteristics, such as the speaker’s tone of voice,
pitch, speed of talking, and accent, and lets only this message pass through to the detector in the next stage. All other
messages are filtered out.
(3) The detector processes information to determine higher-level characteristics of the message, such as its meaning.
Because only the important, attended information has been let through the filter, the detector processes all of the
information that enters it.
(4) Short-term memory receives the output of the detector. Short-term memory holds information for 10–15 seconds
and also transfers information into long-term memory, which can hold information indefinitely.
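Broadbent’s four stages form a simple serial pipeline, which can be sketched in code. The sketch below is a loose illustration, not Broadbent’s formal model: the `Message` class, the `voice_pitch` attribute, and the example messages are all invented for the demonstration.

```python
# Illustrative sketch of Broadbent's (1958) early-selection pipeline.
# All names and attributes here are hypothetical simplifications.
from dataclasses import dataclass

@dataclass
class Message:
    content: str
    voice_pitch: str  # a physical characteristic, e.g. "low" or "high"

def sensory_memory(messages):
    """Stage 1: briefly holds ALL incoming messages and passes them on."""
    return list(messages)

def attentional_filter(messages, attended_pitch):
    """Stage 2: passes only the message matching the attended physical
    characteristic; all other messages are blocked."""
    return [m for m in messages if m.voice_pitch == attended_pitch]

def detector(messages):
    """Stage 3: determines higher-level characteristics (here, content)
    of whatever the filter lets through."""
    return [m.content for m in messages]

def short_term_memory(meanings):
    """Stage 4: receives and holds the detector's output."""
    return meanings

inputs = [Message("left-ear story", "low"), Message("right-ear story", "high")]
stm = short_term_memory(detector(attentional_filter(sensory_memory(inputs), "low")))
print(stm)  # only the attended (low-pitched) message survives
```

Note how the unattended message is discarded entirely at stage 2; this is exactly the feature of the model that Moray’s name-recognition finding later challenged.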
Broadbent’s model has been called a bottleneck model because the filter restricts information flow, much as the neck of a
bottle restricts the flow of liquid. When one pours liquid from a bottle, the narrow neck restricts the flow, so the liquid escapes
only slowly even though there is a large amount in the bottle. Applying this analogy to information, Broadbent proposed that
the filter restricts the large amount of information available to a person so that only some of this information gets through to
the detector. But unlike the neck of a bottle, which lets through the liquid closest to the neck, Broadbent’s filter lets information
through based on specific physical characteristics of the information, such as the rate of speaking or the pitch of the speaker’s
voice.
Broadbent’s model provided testable predictions about selective attention, some of which turned out not to be correct. For
example, according to Broadbent’s model, information in the unattended message should not be accessible to consciousness.
However, Neville Moray (1959) did an experiment in which his participants shadowed the message presented to one ear and
ignored the message presented to the other ear. But when Moray presented the listener’s name to the other, unattended ear,
about a third of the participants detected it. This phenomenon, in which a person is selectively listening to one message among
many yet hears his or her name or some other distinctive message such as “Fire!” that is not being attended, is called the cocktail
party effect.
Moray’s participants had recognized their names even though, according to Broadbent’s theory, the filter is supposed to let
through only one message, based on its physical characteristics. Clearly, the person’s name had not been filtered out and, most
important, it had been analyzed enough to determine its meaning.
Attenuation theory
Anne Treisman (1964) proposed a modification of Broadbent’s theory. Treisman proposed that selection occurs in two stages,
and she replaced Broadbent’s filter with an attenuator. The attenuator analyzes the incoming message in terms of (1) its physical
characteristics: whether it is high-pitched or low-pitched, fast or slow; (2) its language: how the message groups into
syllables or words; and (3) its meaning: how sequences of words create meaningful phrases.
This is similar to what Broadbent proposed, but in Treisman’s attenuation theory of attention, language and meaning can also
be used to separate the messages. Treisman proposed, however, that the analysis of the message proceeds only as far as is
necessary to identify the attended message. For example, if there are two messages, one in a male voice and one in a female
voice, then analysis at the physical level is adequate to separate the low-pitched male voice from the higher-pitched female voice.
If, however, the voices are similar, then it might be necessary to use meaning to separate the two messages.

Once the attended and unattended messages have been identified, both messages are let through the attenuator, but the attended
message emerges at full strength while the unattended messages are attenuated: they are still present, but weaker than the
attended message. Because at least some of the unattended message gets through the attenuator, Treisman’s model has been
called a leaky filter model. The final output of the system is determined in the second stage, when the message is analyzed by
the dictionary unit. The dictionary unit contains stored words, each of which has a threshold for being activated. A threshold
is the smallest signal strength that can barely be detected. Thus, a word with a low threshold might be detected even when it is
presented softly or is obscured by other words. According to Treisman, words that are common or especially important, such
as the listener’s name, have low thresholds, so even a weak signal in the unattended channel can activate that word. Uncommon
words or words that are unimportant to the listener have higher thresholds, so it takes the strong signal of the attended message
to activate these words. Thus, according to Treisman, the attended message gets through, plus some parts of the weaker
unattended message.
In conclusion, according to Treisman (1964), people process only as much as is necessary to separate the attended from the
unattended message. If the two messages differ in physical characteristics, then a person processes both messages only to this
level and easily rejects the unattended message. If the two messages differ only semantically, a person processes both through the
level of meaning and selects which message to attend to based on this analysis. Processing for meaning takes more effort,
however, so a person does this kind of analysis only when necessary. Messages not attended to are not completely blocked but
rather weakened. Parts of the message with permanently lowered thresholds can still be recovered, even from an unattended
message.
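Treisman’s second stage, the dictionary unit, lends itself to a small sketch. The numeric signal strengths and word thresholds below are invented purely for illustration; the model itself specifies only that attended messages arrive at full strength, unattended ones attenuated, and that important words have low activation thresholds.

```python
# Illustrative sketch of Treisman's (1964) dictionary unit. The specific
# numbers are hypothetical; only their ordering matters for the model.
ATTENDED_STRENGTH = 1.0
ATTENUATED_STRENGTH = 0.2   # unattended messages are weakened, not blocked

# Common or personally important words (e.g., one's own name, "fire")
# have low activation thresholds; unimportant words have high ones.
thresholds = {"name": 0.1, "fire": 0.1, "table": 0.8, "ordinary": 0.8}

def dictionary_unit(word, attended):
    """A word reaches awareness only if its signal exceeds its threshold."""
    strength = ATTENDED_STRENGTH if attended else ATTENUATED_STRENGTH
    return strength >= thresholds.get(word, 0.8)

print(dictionary_unit("table", attended=True))   # True: attended message gets through
print(dictionary_unit("table", attended=False))  # False: attenuated signal below threshold
print(dictionary_unit("name", attended=False))   # True: low threshold, heard even unattended
```

The last line captures the cocktail party effect: even the weakened, unattended signal clears the low threshold attached to one’s own name.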
Theories like Broadbent’s and Treisman’s are sometimes called early selection theories of selective attention because they propose a
filter that operates at an early stage in the flow of information, in many cases eliminating information based only on physical
characteristics of the stimulus.

The contrasts between filter theory and attenuation theory:

(1) Kinds of analysis: filter theory allows for only one kind of analysis, i.e., of the physical characteristics; attenuation
theory allows for many different kinds of analyses of all messages, i.e., of their physical, linguistic, and semantic
characteristics.
(2) Fate of unattended messages: filter theory holds that unattended messages, once processed for physical characteristics,
are discarded and fully blocked; attenuation theory holds that unattended messages are weakened but the information they
contain is still available.
Late-selection theory
Broadbent’s (1958) filter theory holds that no information about the meaning of an unattended message gets through the filter
to be retained for future use. Treisman’s (1964) attenuation theory allows for some information about meaning getting through
to conscious awareness. Deutsch and Deutsch (1963) proposed a theory, called the late-selection theory, that goes even
further. Later elaborated and extended by Norman (1968), this theory holds that all messages are routinely processed for at least
some aspects of meaning, and that selection of which message to respond to thus happens late in processing.
All material is processed up to this point, and information judged to be most important is elaborated more fully. This elaborated
material is more likely to be retained; unelaborated material is forgotten. A message’s importance depends on many factors,
including its context and the personal significance of certain kinds of content (such as one’s own name). Also relevant is the
observer’s level of alertness: At low levels of alertness (such as when asleep), only very important messages (such as the sound
of a newborn’s cry) capture attention. At higher levels of alertness, less important messages (such as the sound of a television program)
can be processed. Generally, the attentional system functions to determine which of the incoming messages is the most
important; this message is the one to which the observer will respond.
Multimode theory
Multimode theory was developed by Johnston and Heinz (1978). This theory holds that attention is a flexible system that
allows selection of a stimulus over others at three stages. At stage one the sensory representations (e.g., visual images) of stimuli
are constructed; at stage two the semantic representations (e.g., names of objects) are constructed; and at stage three the sensory
and semantic representations enter consciousness. It is also suggested that more processing requires more mental effort.
When messages are selected on the basis of stage-one processing (early selection), less mental effort is required than when
the selection is based on stage-three processing (late selection). For example, suppose there are several food items on a table. A
person first selects one item over the other food items (visual image); he then represents it with the word ‘ice cream’ (semantic
representation); finally, he connects the term and the item, recognizing it as an ice cream (the visual and semantic representations
together). At stage one he merely notices the food item, so less mental effort is required, whereas by stage three more mental effort
is required.
Attention-capacity and Mental effort
Broadbent (1958) originally described attention as a bottleneck that squeezed some information out of the processing area. To
understand the analogy, think about the shape of a bottle. The smaller diameter of the bottle’s neck relative to the diameter of
the bottle’s bottom reduces the spillage rate. The wider the neck, the faster the contents can spill. Applying this analogy to
cognitive processes, the wider the bottleneck, the more information can spill through to be processed further at any point in
time. Modern cognitive psychologists often use different metaphors when talking about attention. For example, some compare
attention to a spotlight that highlights whatever information the system is currently focused on (Johnson & Dark, 1986).
Accordingly, psychologists are now concerned less with determining what information can’t be processed than with studying
what kinds of information people choose to focus on. Just as a spotlight’s focal point can be moved from one area of a stage to
another, so can attention be directed and redirected to various kinds of incoming information. Just as a spotlight illuminates
best what is at its center, so too is cognitive processing usually enhanced when attention is directed toward a task.
Attention, like a spotlight, has fuzzy boundaries. Spotlights can highlight more than one object at a time, depending on the size
of the objects. Attention, too, can be directed at more than one task at a time, depending on the capacity demands of each task.
Of course, the spotlight metaphor is not a perfect one, and some researchers think it has many shortcomings. For example, the
spotlight metaphor assumes that attention is always directed at a specific location, which may not be the case.
Daniel Kahneman (1973) presented a slightly different model for what attention is. He viewed attention as a set of cognitive
processes for categorizing and recognizing stimuli. The more complex the stimulus, the harder the processing, and therefore
the more resources are engaged. However, people have some control over where they direct their mental resources: They can
often choose what to focus on and devote their mental effort to.
Essentially, this model depicts the allocation of mental resources to various cognitive tasks. An analogy could be made to an
investor depositing money in one or more of several different bank accounts, here, the individual deposits mental capacity to
one or more of several different tasks. Many factors influence this allocation of capacity, which itself depends on the extent and
type of mental resources available. The availability of mental resources, in turn, is affected by the overall level of arousal, or state
of alertness.
Kahneman (1973) argued that one effect of being aroused is that more cognitive resources are available to devote to various
tasks. Paradoxically, however, the level of arousal also depends on a task’s difficulty. This means people are less aroused while
performing easy tasks, such as adding 2 and 2, than they are when performing more difficult tasks, such as multiplying a Social
Security number by pi. People, therefore, bring fewer cognitive resources to easy tasks, which, fortunately, require fewer
resources to complete. Arousal thus affects the capacity (the sum total of mental resources) available for tasks.
But the model still needs to specify how a person allocates resources to all the cognitive tasks that confront him. The
allocation policy is affected by an individual’s enduring dispositions (for example, the preference for certain kinds of tasks over
others), momentary intentions (the vow to find the meal card right now, before doing anything else), and evaluation of the
demands on one’s capacity (the knowledge that a task one needs to do right now will require a certain amount of attention).
Essentially, this model predicts that a person pays more attention to things he is interested in, is in the mood for, or has judged
important. In Kahneman’s (1973) view, attention is part of what the layperson would call mental effort. The more effort
expended, the more attention we are using.
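Kahneman’s capacity-allocation idea can be made concrete with a toy calculation. The linear arousal rule and the proportional split below are hypothetical simplifications invented for this sketch, not Kahneman’s actual formulation:

```python
# Toy sketch of Kahneman's (1973) capacity-allocation model.
# The numbers and the weighting scheme are invented for illustration.
def available_capacity(arousal):
    """More arousal yields more mental resources (illustrative linear rule)."""
    return 10 * arousal

def allocate(capacity, demands):
    """Split capacity across tasks in proportion to their priority weights.

    demands: dict mapping task name -> weight, standing in for enduring
    dispositions, momentary intentions, and evaluated capacity demands.
    """
    total = sum(demands.values())
    return {task: capacity * w / total for task, w in demands.items()}

shares = allocate(available_capacity(arousal=0.8),
                  {"drive": 3, "converse": 1})
print(shares)  # {'drive': 6.0, 'converse': 2.0}
```

The point of the sketch is only that the same fixed pool is divided according to an allocation policy, so devoting more capacity to one task necessarily leaves less for the others.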
Kahneman’s view raises the question of what limits the ability to do several things at once: arousal, alertness, and effort.
A related factor is alertness as a function of time of day, hours of sleep obtained the night before, and so forth. Sometimes
people can attend to more tasks with greater concentration. At other times, such as when tired and drowsy, focusing is hard.
Effort is only one factor that influences performance on a task. Greater effort or concentration results in better performance of
some tasks, namely those that require resource-limited processing, in which performance is constrained by the mental resources or
capacity allocated to the task (Norman & Bobrow, 1975).
Taking a midterm is one such task. On some other tasks, one cannot do better no matter how hard one tries. An example is
trying to detect a dim light or a soft sound in a bright and noisy room. Even if a person concentrates as hard as he can on such
a task, his vigilance may still not help him detect the stimulus. Performance on this task is said to be data limited, meaning that
it depends entirely on the quality of the incoming data, not on mental effort or concentration. Norman and Bobrow pointed
out that both kinds of limitations affect our ability to perform any cognitive task.
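The resource-limited versus data-limited distinction can be caricatured in a few lines of code. Both functions and all numbers below are hypothetical; they merely illustrate that extra effort changes performance in one case and not the other:

```python
# Illustrative contrast between resource-limited and data-limited tasks
# (after Norman & Bobrow, 1975). Functions and values are invented.
def resource_limited(effort):
    """Performance improves with allocated effort, up to a ceiling."""
    return min(1.0, 0.5 + effort)

def data_limited(effort, signal_quality):
    """Performance is fixed by stimulus quality; extra effort does not help."""
    return signal_quality

print(resource_limited(0.1), resource_limited(0.4))    # effort helps: 0.6 then 0.9
print(data_limited(0.1, 0.3), data_limited(0.9, 0.3))  # effort irrelevant: 0.3 both times
```

A midterm behaves like the first function; detecting a dim light in a bright room behaves like the second.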

It may be surprising to discover that people can show remarkable levels of change blindness, an inability to detect changes in
objects or scenes that are being viewed. Change blindness is a perceptual phenomenon that occurs when a change in a visual
stimulus is introduced and the observer does not notice it. For example, observers often fail to notice major differences
introduced into an image while it flickers off and on again. Closely related to change blindness is inattentional blindness, which
is a phenomenon in which people are not able to see things that are actually there. For example, even though a person thinks he is
paying attention to the road, he may fail to notice a car swerving into his lane of traffic, resulting in a traffic accident. Change and
inattentional blindness are of major importance in traffic situations or during medical screenings, for example, where an
overlooked motorcycle or a mass in the body can have potentially fatal consequences.
Schema theory
Ulric Neisser (1976) offered a completely different conceptualization of attention, called schema theory. He argued that humans
don’t filter, attenuate, or forget unwanted material. Instead, they never acquire it in the first place. Neisser compared attention
to apple picking. The material people attend to is like apples they pick off a tree, i.e., they grasp it. Unattended material is
analogous to the apples they don’t pick. To assume the unpicked apples were “filtered out” of their grasp would be ridiculous;
a better description is that they simply were left on the tree. Likewise, Neisser believes, with unattended information: it is simply
left out of the cognitive processing.
Neisser and Becklen (1975) performed a relevant study of visual attention. They created a selective looking task by having
participants watch one of two visually superimposed films. One film showed a hand game, two pairs of hands playing a familiar
hand-slapping game. The second film showed three people passing or bouncing a basketball, or both. Participants in the study
were asked to shadow (attend to) one of the films and to press a key whenever a target event (such as a hand slap in the first
film or a pass in the second film) occurred.
Neisser and Becklen (1975) found, first, that participants could follow the correct film rather easily, even when the target event
occurred at a rate of 40 per minute in the attended film. Participants ignored occurrences of the target event in the unattended
film. Participants also failed to notice unexpected events in the unattended film. For example, participants monitoring the
ballgame failed to notice that in the hand game film, one of the players stopped hand slapping and began to throw a ball to the
other player. Neisser (1976) believed that skilled perceiving rather than filtered attention explains this pattern of performance.
Neisser and Becklen argued that once picked up, the continuous and coherent motions of the ballgame (or of the hand game)
guide further pickup; what is seen guides further seeing. It is implausible to suppose that special “filters” or “gates”, designed
on the spot for this novel situation, block the irrelevant material from penetrating deeply into the “processing system”. The
ordinary perceptual skills of following visually given events are simply applied to the attended episode and not to the other.
AUTOMATIC & CONTROLLED (ATTENTIONAL) PROCESSING
Automatic processes like writing one’s name involve no conscious control. For the most part, they are performed without
conscious awareness. Nevertheless, a person may be aware that he’s performing them. They demand little or no effort or even
intention. Multiple automatic processes may occur at once, or at least very quickly, and in no particular sequence. Thus, they are
termed parallel processes.
In contrast, controlled processes are accessible to conscious control and even require it. Such processes are performed serially,
for example, when a person wants to compute the total cost of a trip he is about to book online. In other words, controlled
processes occur sequentially, one step at a time. They take a relatively long time to execute, at least as compared with automatic
processes.
Posner and Snyder (1975) postulated three attributes that characterize automatic processes:
(1) They are concealed from consciousness
(2) They are unintentional
(3) They consume few attentional resources
An alternative view of attention suggests a continuum of processes between fully automatic processes and fully controlled
processes. For one thing, the range of controlled processes is so wide and diverse that it would be difficult to characterize all
the controlled processes in the same way (Logan, 1988). Also, some automatic processes are easy to retrieve into consciousness
and can be controlled intentionally, whereas others are not accessible to consciousness and/or cannot be controlled intentionally.
The characteristics of controlled versus automatic processes are as follows:
(1) Amount of intentional effort: controlled processes require intentional effort; automatic processes require little or no
intention or effort (and intentional effort may even be required to avoid automatic behaviors).
(2) Degree of conscious awareness: controlled processes require full conscious awareness; automatic processes generally
occur outside of conscious awareness, although some automatic processes may be available to consciousness.
(3) Use of attentional resources: controlled processes consume many attentional resources; automatic processes consume
negligible attentional resources.
(4) Type of processing: controlled processes are performed serially (one step at a time); automatic processes are performed
by parallel processing (i.e., with many operations occurring simultaneously or at least in no particular sequential order).
(5) Speed of processing: controlled processes are relatively time-consuming to execute compared with automatic processes;
automatic processes are relatively fast.
(6) Relative novelty of tasks: controlled processes serve novel and unpracticed tasks, or tasks with many variable features;
automatic processes serve familiar and highly practiced tasks with largely stable task characteristics.
(7) Level of processing: controlled processes involve relatively high levels of cognitive processing (requiring analysis or
synthesis); automatic processes involve relatively low levels of cognitive processing (minimal analysis or synthesis).
(8) Difficulty of tasks: controlled processes handle usually difficult tasks; automatic processes handle usually relatively easy
tasks, although even relatively complex tasks may be automatized, given sufficient practice.
(9) Process of acquisition: with sufficient practice, many routine and relatively stable procedures may become automatized,
such that highly controlled processes may become partly or even wholly automatic; naturally, the amount of practice
required for automatization increases dramatically for highly complex tasks.
Many tasks that start off as controlled processes eventually become automatic ones as a result of practice (LaBerge, 1975, 1990;
Raz, 2007). This process is called automatization (also termed proceduralization). For example, driving a car is initially a
controlled process. Once a person masters driving, however, it becomes automatic under normal driving conditions. Such
conditions involve familiar roads, fair weather, and little or no traffic. Similarly, when a person first learns to speak a foreign
language, he needs to translate word-for-word from his native tongue. Eventually, however, he begins to think in the second
language. This thinking enables him to bypass the intermediate-translation stage. It also allows the process of speaking to become
automatic. Conscious attention can then revert to the content, rather than the process, of speaking. A similar shift from conscious
control to automatic processing occurs when acquiring the skill of reading. However, when conditions change, the same activity
may again require conscious control. In the driving example, if the roads become icy, one will likely need to pay attention to
when to brake or accelerate. Both tasks usually are automatic when driving.
According to Sternberg’s theory of triarchic intelligence (1999), relatively novel tasks that have not been automatized, such as
visiting a foreign country, mastering a new subject, or acquiring a foreign language, make more demands on intelligence than do
tasks for which automatic procedures have been developed. A completely unfamiliar task may demand so much of the person
as to be overwhelming.
AUTOMATICITY & THE EFFECT OF PRACTICE
As a person becomes well practiced at doing something, that act takes less of his attention to perform. Typing is a good example.
More formally said, an important variable that governs the number of things one can do simultaneously is the capacity a given
task consumes. Adding 2 and 3 consumes little of one’s capacity, leaving some for other tasks (such as planning dinner tonight
and wondering if all the ingredients are at home). The capacity a given task consumes is affected by the difficulty of the task and
one’s familiarity with the task. Practice is thought to decrease the amount of mental effort a task requires.
Stroop Task
A famous demonstration of the effects of practice on the performance of cognitive tasks was given by John Ridley Stroop
(1935). Stroop presented participants with a series of color bars (red, blue, green, brown, purple) or color words (red, blue,
green, brown, purple) printed in conflicting colors (the word red, for example, might be printed in green ink). Participants were
asked to name, as quickly as possible, the ink color of each item in the series. When shown bars, they did so quickly, with few
errors and apparently little effort. Things changed dramatically, however, when the items consisted of words that named colors
other than that of the ink in which the item was printed. Participants stumbled through these lists, finding it difficult not to read
the word formed by the letters.
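The construction of Stroop-style stimuli is easy to sketch in code. The snippet below simply pairs each color word with each ink color and labels the pairing as congruent or incongruent; it is an illustration of the stimulus design, not Stroop’s original materials list:

```python
# Sketch of Stroop-style stimulus construction: every color word is
# paired with every ink color and classified by congruence.
import itertools

COLORS = ["red", "blue", "green", "brown", "purple"]

def make_stimuli():
    """Return (word, ink, condition) triples for every word/ink pairing."""
    stimuli = []
    for word, ink in itertools.product(COLORS, COLORS):
        condition = "congruent" if word == ink else "incongruent"
        stimuli.append((word, ink, condition))
    return stimuli

stimuli = make_stimuli()
incongruent = [s for s in stimuli if s[2] == "incongruent"]
print(len(stimuli), len(incongruent))  # 25 pairings, of which 20 are incongruent
```

In the incongruent condition, the to-be-ignored word (e.g., “red” printed in green ink) conflicts with the to-be-named ink color, which is where the interference described below arises.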
According to Stroop (1935), the difficulty stems from the following: Adult, literate participants have had so much practice
reading that the task requires little attention and is performed rapidly. In fact, according to Stroop, literate adults read so quickly
and effortlessly that not reading words is hard. Thus, when confronted with items consisting of words, participants couldn’t help
reading them. This kind of response, one that takes little attention and effort and is hard to inhibit, is described as automatic.
The actual task given to participants, to name colors, was one they had practiced much less. Participants in one of Stroop’s
(1935) subsequent experiments, given eight days of practice at the naming task, in fact showed less interference in performing
the so-called Stroop task and became faster at naming colors with all stimuli. Moreover, a summary of the literature suggests
that Stroop interference begins when children learn to read, peaking at around second or third grade (when reading skills
develop) and then declining over the adult years until about age 60 (MacLeod, 1991). Virtually everyone who can read fluently
shows a robust Stroop effect from an early age.
NEUROPSYCHOLOGICAL STUDIES OF ATTENTION
Cognitive neuroscientists are interested in examining which areas of the human brain are active when a person is attending to a
stimulus or event. Researchers have long suspected the parietal lobe of the brain is one such location. Clinical neurologists
have documented the phenomenon of sensory neglect (sometimes called hemineglect) in patients who have parietal lobe
damage. These patients often ignore or neglect sensory information located in the visual field opposite to the damaged
hemisphere. Thus, if an area of the right parietal lobe is the damage site (as it often is), the patient overlooks information in the
left visual field. This neglect may include, for example, neglecting to wash one side of the face or body, neglecting to brush the
teeth on one side of the mouth, or eating from only one side of the plate. In clinical studies, patients showing hemineglect have
been studied in more detail. Typically, they are presented with stimuli and asked to copy them. Although the parietal lobe is one
brain region known to be associated with attention, it is not the only one. Areas of the frontal lobe as well play a role in people’s
ability to select motor responses and develop plans.
In 2007, Posner teamed up with Mary Rothbart and they conducted a review of neuroimaging studies in the area of attention to
investigate whether the many diverse results of studies conducted pointed to a common direction. They found that what at first
seemed like an unclear pattern of activation could be effectively organized into areas associated with the three subfunctions of
attention: alerting, orienting, and executive attention. The researchers organized the findings to describe each of these functions
in terms of the brain areas involved, the neurotransmitters that modulate the changes, and the results of dysfunction within this
system.
(1) Alerting; is defined as being prepared to attend to some incoming event, and maintaining this attention. Alerting also
includes the process of getting to this state of preparedness. The brain areas involved in alerting are the right frontal
and parietal cortexes as well as the locus coeruleus. The neurotransmitter norepinephrine is involved in the
maintenance of alertness. If the alerting system does not work properly, people develop symptoms of ADHD; in the
process of regular aging, dysfunctions of the alerting system may develop as well.
(2) Orienting; defined as the selection of stimuli to attend to. This kind of attention is needed when a person performs a
visual search. It may be possible to observe this process by means of a person’s eye movements, but sometimes attention
is covert and cannot be observed from the outside. The orienting network develops during the first year of life. The
brain areas involved in the orienting function are the superior parietal lobe, the temporal parietal junction, the
frontal eye fields, and the superior colliculus. The modulating neurotransmitter for orienting is acetylcholine.
Dysfunction within this system can be associated with autism.
(3) Executive attention; includes processes for monitoring and resolving conflicts that arise among internal processes.
These processes include thoughts, feelings, and responses. The brain areas involved in this final and highest order of
attentional process are the anterior cingulate, lateral ventral, and prefrontal cortex as well as the basal ganglia. The
neurotransmitter most involved in the executive attention process is dopamine. Dysfunction within this system is
associated with Alzheimer’s disease, borderline personality disorder, and schizophrenia.
Networks of Visual Attention
Much work on brain processes of attention has centered on visual attention. Researchers have identified more than 32 areas of
the brain that become active during visual processing of an attended stimulus (LaBerge, 1995).
Posner and Raichle (1994) focused on three networks or systems of visual attention. In a series of studies, Posner and his
colleagues used the following task. A participant is seated in front of a visual display, fixating on a central point. On either side
of the point are two boxes. On each trial, one box brightens or an arrow appears, indicating on which side of the screen the
participant should expect to see the next stimulus. The purpose of this cue is to encourage the participant to focus his or her
attention at a particular location. The participant’s task is to respond as fast as possible when he detects the stimulus. Sometimes
no cue is given, and at other times an incorrect cue is given, to assess the benefit of having attention focused in either the correct
or an incorrect location. Posner and Raichle (1994) argued that to perform this task, a person needs to execute three distinct
mental operations. He first must disengage his attention from wherever it was previously directed. Brain activity in the
posterior parietal lobe is heightened during this process. Once disengaged, attention must be refocused on the spatial location
of the new to-be attended stimulus. Posner and Raichle called this the move operation. They reported that patients with brain
damage in the superior colliculus, a major structure of the midbrain, have difficulty moving their attention from one location
to another. Finally, according to Posner and Raichle, when attention is redirected, neural processing of the new location is
enhanced; stimulus information presented at the to-be-attended location is emphasized, and the brain circuitry underlying this
operation becomes more active. Patients with damage to the pulvinar do not show the enhanced processing of which other
people are capable when attending to a stimulus in a particular location.
Posner and Raichle’s (1994) description of attentional networks postulated that distinct areas of the brain underlie distinct
cognitive processes. Posner more recently has described three different attentional networks that recruit individual cognitive
processes (such as moving or disengaging). These are the alerting network, responsible for achieving and maintaining an alert
state; the orienting network, which selects information from sensory input; and the executive control network, which resolves
conflicts among different responses. Posner believes that the alerting network is associated with the frontal and parietal regions
of the right hemisphere; the orienting network with areas of both the parietal and frontal areas; and the executive control
network with the frontal lobes, especially the prefrontal cortex.
III. PERCEPTION
SENSATION VERSUS PERCEPTION
Basic differences
A sensation is the first awareness of some outside stimulus. An outside stimulus activates sensory receptors, which in turn
produce electrical signals that are transformed by the brain into meaningless bits of information.
A perception is the experience people have after the brain assembles and combines thousands of individual and meaningless
sensations into a meaningful pattern or image. However, perceptions are rarely exact replicas of the original stimuli. Rather, they are usually changed, biased, coloured, or distorted by each person's unique set of experiences. Thus, perceptions are personal interpretations of the real world.
One important feature of perceptions is that they are rarely exact copies of the real world. For example, people who listen to
the same song or music can react very differently (happy, relaxed, agitated, bored). To study how personal preferences for music
can bias the perceptions, researchers assigned students who preferred listening to classical music over other types of music to
groups that were instructed to sit and relax while listening to either 20 minutes of classical music or 20 minutes of rock music.
Researchers used physiological measures to record anxiety levels both before and after subjects listened to music. Findings
showed that only those subjects who listened to their favorite kind of music (classical music) had a decrease in anxiety levels
(Salamon et al., 2003).
Changing sensations into perceptions
It is most unlikely that people have ever experienced a "pure" sensation because the brain automatically and instantaneously
changes sensations into perceptions. Perceptions do not exactly mirror events, people, situations, and objects in the
environment. Rather, perceptions are interpretations, which means that the perceptions are changed or biased by the personal
experiences, memories, emotions, and motivations. To understand how sensations become perceptions, the perceptual process
has been divided into a series of discrete steps that, in real life, are much more complex and interactive. There are five steps in
forming perceptions.
(1) Stimulus, since normally people experience only perceptions, they’re not aware of many preceding steps. The first step
begins with some stimulus, which is any change of energy in the environment, such as light waves, sound waves,
mechanical pressure, or chemicals. The stimulus activates sense receptors in the eyes, ears, skin, nose, or mouth.
(2) Transduction, after entering the eye, light waves are focused on the retina, which contains
photoreceptors that are sensitive to light. The light waves are absorbed by photoreceptors, which change physical energy
into electrical signals, called transduction. The electrical signals are changed into impulses that travel to the brain. Sense
organs do not produce sensations but simply transform energy into electrical signals.
(3) Brain: primary areas, the impulses from sense organs first go to different primary areas of the brain. For example,
impulses from the ear go to the temporal lobe, from touch to the parietal lobe, and from the eye to areas in the occipital
lobe. When impulses reach primary areas in the occipital lobe, they are first changed into sensations.
(4) Brain: association areas, each sense sends its particular impulses to a different primary area of the brain where
impulses are changed into sensations, which are meaningless bits of information, such as shapes, colors, and textures.
The “sensation” impulses are then sent to the appropriate association areas in the brain. The association areas change
meaningless bits into meaningful images, called perceptions, such as a dog. The impulses from the eyes would be
changed into visual sensations by the primary visual area and into perceptions by the visual association areas.
(5) Personalized perceptions, each individual has a unique set of personal experiences, emotions, and memories
that are automatically added to the perceptions by other areas of the brain. As a result, the perceptions are not a mirror
but a changed, biased, or even distorted copy of the real world (Goldstein, 2010). For example, the visual areas of a
child’s brain automatically assemble many thousands of sensations into a meaningful pattern, which in this case is a
dog. Now, however, the child doesn’t see just an ordinary white and brown dog because other brain areas add her
emotional experience of being bitten. Thus, the child perceives this white and brown, four-legged creature to be a “bad
dog.” For this same reason, two people can look at the same dog and have very different perceptions, such as cute dog,
great dog, bad dog, smelly dog, or friendly dog. Thus, the perceptions are personalized interpretations rather than true
copies of objects, animals, people, and situations in the real world.
PSYCHOPHYSICS
The first experimental psychologists were interested mainly in sensation and perception. They called their area of interest psychophysics, the study of how physical stimuli are translated into psychological experience. A particularly important contributor to
psychophysics was Gustav Fechner, who published pioneering work on the subject in 1860. Fechner was a German scientist
working at the University of Leipzig, where Wilhelm Wundt later founded the first formal laboratory and journal devoted to
psychological research. Unlike Wundt, Fechner was not a “campaigner” interested in establishing psychology as an independent
discipline. However, his groundbreaking research laid the foundation that Wundt built upon.
The term psychophysics owes its origin and name to GT Fechner (1801-1887) who defined it as "an exact science of the
functional relations of dependency between body and mind." For the first time he set out to explore the quantitative relationship
between the magnitude of sensation occurring in the mind and the magnitude of the physical stimulus that produces the
sensation. For investigating the quantitative relationship between the magnitude of sensation and the magnitude of physical
stimulus, he developed some experimental methods, which are still in use today. Before examining these methods, it is essential
to explain the meaning of some common terms used in psychophysical measurements such as threshold, point of subjective
equality, point of objective equality, etc.
Gustav Fechner, a physiologist, published the Elements of Psychophysics (1860), in which he described a number of methods for precisely measuring the relationships between the stimulus (physics) and perception (psycho). These methods are still used today to measure quantitative relationships between stimuli and perception. Because there are also a number of other, non-quantitative methods used to measure the relationship between the stimulus and perception, the term psychophysics is used more broadly to refer to any measurement of the relationship between the stimulus and perception. An example of research at the psychophysical level would be an experiment in which an experimenter asks an observer to decide whether two very similar patches of color are the same or different. Three of the quantitative methods Fechner described, the method of limits, the method of adjustment, and the method of constant stimuli, are called the classical psychophysical methods because they were the original methods used to measure the stimulus-perception relationship.
PERCEPTUAL THRESHOLDS
Sensation begins with a stimulus, any detectable input from the environment. What counts as detectable, though, depends on
who or what is doing the detecting. Implicit in Fechner’s question is a concept central to psychophysics: the threshold. A
threshold is a dividing point between energy levels that do and do not have a detectable effect.

The concept of threshold was first introduced by Johann Herbart in 1824 when he defined the threshold of consciousness. The
Latin equivalent of the term threshold is limen. Threshold refers to that boundary value on a stimulus dimension, which separates
the stimulus that produces a response from the stimulus that produces no response or a different response. The threshold in
psychophysical measurement is ordinarily divided into absolute threshold and difference threshold.
Absolute threshold
Absolute threshold, or stimulus threshold (abbreviated to RL from its German equivalent Reiz Limen), refers to that minimal
stimulus value which produces a response 50% of the time. A physical stimulus value which is below that minimal value fails to
elicit a response. The stimulus threshold, i.e., an absolute threshold for a specific type of sensory input is the minimum stimulus
intensity that an organism can detect. Absolute thresholds define the boundaries of an organism’s sensory capabilities. Andreas
has defined absolute threshold as "a boundary point in sensation, separating sensory experience from no such experience when
physical stimulus values reach a particular point." Thus, absolute threshold defines the minimum limit for responding to
stimulation. RL for a single physical stimulus is not the same for different individuals. It varies from individual to individual and
sometimes from one situation to another for the same individual. Hence, some people may perceive a stimulus at a low value
while others may perceive the same stimulus at a higher level. That is why RL is statistically defined as the mean of the RL values obtained over several trials from the same subject for the same stimulus.
Fechner and his contemporaries used a variety of methods to determine humans’ absolute threshold for detecting light. They
discovered that absolute thresholds are anything but absolute. When lights of varying intensity are flashed at a subject, there is
no single stimulus intensity at which the subject jumps from no detection to completely accurate detection. Instead, as stimulus
intensity increases, subjects’ probability of responding to stimuli gradually increases. Thus, researchers had to arbitrarily define
the absolute threshold as the stimulus intensity detected 50% of the time. Using this definition, investigators found that under ideal conditions, human abilities to detect weak stimuli are greater than is commonly appreciated. For example, on a clear, dark night, in the absence of other distracting lights, people could see the light of a candle burning 30 miles in the distance.
Difference threshold
When Fechner published the Elements of Psychophysics, he not only described his methods for measuring the absolute threshold
but also described the work of Ernst Weber (1795-1878), a physiologist, who, a few years before the publication of Fechner’s
book, measured another type of threshold, the difference threshold. The difference threshold, or differential threshold
(abbreviated to DL from its German equivalent Differenz Limen), is the difference between the two stimuli, which can be
perceived 50% of the time, or is the smallest difference between two stimuli that a person can detect. Thus, DL defines the
individual's capacity to respond to difference in sensitivity. To measure the difference threshold, one of Fechner's methods,
described earlier, can be used, except instead of asking participants to indicate whether or not they detect a stimulus, they are
asked to indicate whether they detect a difference between two stimuli. For example, to measure the difference threshold for
sensing weight, Weber had observers lift a small "standard" weight and then lift a slightly heavier "comparison" weight and
judge which was heavier. Weber found that when the difference between the standard and comparison weights was small, his
observers found it difficult to detect the difference in the weights, but they easily detected larger differences. That much is not
surprising, but Weber went further: he found that the size of the DL depended on the size of the standard weight. Like Weber, Fechner was also interested in people's sensitivity to differences between stimuli.
The difference threshold is also sometimes referred to as a just noticeable difference (abbreviated to JND) which is the smallest
difference between the two stimuli that can be detected by the subject. A stimulus must be increased or decreased by one JND
in order that the change be perceived. Usually, for calculating DL, two stimuli are presented to the subject. One of them has a
constant value throughout its presentation and is known as the standard stimulus (S) and the second stimulus is varied
throughout its presentation and is known as the comparable stimulus (C).
Suppose the experimenter takes an S of 100-g weight and starts presenting C at a very low value (say 30 g). He may then go on
increasing the value of C by a very small value (say at the rate of 5 g in each presentation) until it becomes indistinguishable
from the S. This is called the lower difference threshold. Likewise, the experimenter may start presenting the C at a much higher value
than S (say, 140 g) and then go on decreasing the C by a very small value (say by 5 g) until it can no longer be distinguished from
S. This is known as the upper difference threshold. The upper and lower thresholds represent the upper and lower limits of the
interval of uncertainty or IU respectively. The IU indicates the span or range where the responses of the subjects are uncertain.
The DL or JND is half the difference between the upper and lower threshold, i.e., DL = (Upper threshold − Lower threshold)/2 = IU/2. The DL, like the RL, taken over several trials by the same subject for the same physical stimulus, is not identical in different situations.
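The weight example above can be checked with a short sketch (the function name and threshold values are illustrative, not from the text):

```python
def difference_threshold(upper, lower):
    """Compute the interval of uncertainty (IU = upper - lower
    difference threshold) and the DL, defined above as IU / 2."""
    iu = upper - lower
    return iu, iu / 2

# Hypothetical result for a 100-g standard: ascending series puts the
# lower threshold at 95 g, descending series puts the upper at 105 g.
iu, dl = difference_threshold(upper=105, lower=95)
print(iu, dl)  # 10 5.0
```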
METHODS
The scale used for weighing the objects is known as the physical scale and the order in which the objects are arranged in terms
of the measured weights is known as the physical continuum. The order in which the objects are arranged on the basis of
judgements made by the subjects regarding the weight is known as the psychological continuum. Several attempts have been
made to investigate the relationship between the ordering of objects on a known physical continuum and the ordering of the
same objects on a psychological continuum formed by judgements of individuals. The methods used in studying such a
relationship are known as psychophysical scaling methods.
Method of limits/Method of just noticeable difference/Method of minimal changes/Method of serial exploration
The method of limits is a popular method of determining the threshold. The method was so named by Kraepelin in 1891 because a series of comparable stimuli (C) ends when the subject has reached the limit where he changes his judgement. For
computing threshold by this method, two modes of presenting stimulus are usually adopted-the increasing mode and the
decreasing mode. The increasing mode is called the ascending series and the decreasing mode is called the descending series.
For computing DL, the stimulus is varied in very small steps in the ascending and descending series, and the subject is required to say at each step whether the stimulus is smaller (-), equal to (=), or larger (+) than the S. For computing RL, no standard stimulus (S) is needed, and the subject simply reports whether or not he has detected a change in the stimulus presented in the ascending and descending series. In computing both the DL and the RL, the stimulus sequence is varied with a minimum
change in its magnitude in each presentation. Hence, Guilford (1954) prefers to call this method the method of minimal changes.
Besides these variable errors, the RL may also be affected by two constant errors; the error of habituation and the error of
anticipation. The error of habituation (sometimes called the error of perseverance) may be defined as a tendency of the subject
to go on saying "yes" in a descending series or "no" in an ascending series. In other words, when the error of habituation is in
operation, the subject falls into a habit of giving certain responses even after a clear change in the stimulus has occurred. One
natural consequence of this error is to inflate the mean of the ascending series over the mean of the descending series. The error
of anticipation (sometimes also called the error of expectation) is the opposite of the error of habituation and accordingly, may
be defined as the tendency to expect a change from "yes" to "no" in the descending series and "no" to "yes" in the ascending
series before the change in stimulus is apparent. The conscious tendency that works in the mind of the subject when such an
error is in operation, is that he has said "yes" many times and, therefore, should now say "no" (in the descending series); likewise,
he thinks that he has said "no" many times and, therefore, he should now say "yes" (in the ascending series). The natural
consequence of such an error is to inflate the mean of the descending series over the mean of the ascending series. The primary
purpose of giving alternate ascending and descending series is to cancel out these two types of constant errors.
Since these two types of constant errors work in opposite directions, both cannot exist within the same subject. Practice and
fatigue may affect the data obtained by the method of limits. These two effects may be shown by comparing the mean of the
first half with the mean of the second half of the total series taken. Guilford (1954) has recommended that these effects can be
analyzed in an even better way by the ANOVA (Analysis of variance).
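As a hedged sketch (function names and data are illustrative, not from the text), the RL under the method of limits can be estimated by averaging the transition points of alternating ascending and descending series, and comparing the two series' means gives a rough check for the constant errors described above:

```python
def estimate_rl(ascending_points, descending_points):
    """Average the transition points from alternating ascending and
    descending series; alternation helps cancel the constant errors
    of habituation and anticipation."""
    points = ascending_points + descending_points
    return sum(points) / len(points)

def constant_error_gap(ascending_points, descending_points):
    """Difference between the series means: an inflated ascending
    mean suggests habituation, an inflated descending mean suggests
    anticipation."""
    asc = sum(ascending_points) / len(ascending_points)
    desc = sum(descending_points) / len(descending_points)
    return asc - desc

# Hypothetical transition points (stimulus values where the judgement changed)
print(estimate_rl([12, 11, 13], [10, 9, 11]))  # 11.0
```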
Method of constant stimuli/Method of right and wrong case/Method of frequency
In this method a number of fixed or constant stimuli are presented to the subject several times in a random order. The method
of constant stimuli can also be employed for determining the RL or DL. For determining RL, the different values of the stimulus
are presented to the subject in a random order (and not in an order of regular increase or decrease as is done in the method of
limits) and he has to report each time whether he perceives or does not perceive the stimulus. Though the different values of
stimulus are presented irregularly, the same values are presented throughout the experiment a large number of times, usually
from 50 to 200 times each, in a predetermined order unknown to the subjects. The mean of the reported values of the stimulus
becomes the index of RL. The procedure involved is known as the method of constant stimuli.
For calculating DL, in each presentation the two stimuli (one standard and one variable) are presented to the subject
simultaneously or in succession. On each trial the subject is required to say whether one stimulus is "greater" or "less" than the
other. In case of uncertainty, he reports, "doubtful" or "equal" and in such a situation sometimes the experimenter forces him
to guess in order to avoid doubtful judgements. The procedure involved is known as the method of constant stimulus
differences and not the method of constant stimuli. If the S and the C are to be presented in succession, the S is presented first for half of the trials, and the order is reversed for the remaining half. This is done to control a constant error (the time error), which may occur if the S is presented either before or after the C throughout the trials. In this sense, the method of constant stimulus differences differs from the method of limits, where the C are presented by regularly increasing (ascending series) or decreasing (descending series) their value. In the method of constant stimuli, a smaller value of the C may be abruptly followed by a larger one, or vice versa, so the subject cannot estimate the likely C to be given for making a judgment. Though the different C are presented in an irregular order, they remain constant throughout the presentations in the experiment,
i.e., the same value is presented throughout all presentations in the experiment in an irregular order. This method is also known
as the method of right or wrong cases because in each case the subject has to report whether he perceives the stimulus (right)
or he does not perceive the stimulus (wrong).
This method has one distinct advantage over the method of limits. Since in the method of limits the stimuli are presented in a
regular increasing or decreasing manner, the two constant errors, i.e., the error of habituation and the error of expectation, are
inevitable. But these two errors are safely avoided in the constant methods because the presentations of the stimuli are random
or irregular.
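The proportions of "perceived" responses collected under the method of constant stimuli trace out a psychometric function. One common way to extract the RL from such data, sketched here under the assumption of simple linear interpolation (the text does not prescribe a particular computation), is to find the intensity detected 50% of the time:

```python
def rl_from_proportions(intensities, proportions, criterion=0.5):
    """Linearly interpolate the stimulus intensity detected on 50% of
    trials from (intensity, proportion-detected) pairs, given in order
    of increasing intensity."""
    for i in range(len(intensities) - 1):
        x0, x1 = intensities[i], intensities[i + 1]
        p0, p1 = proportions[i], proportions[i + 1]
        if p0 <= criterion <= p1:
            # straight-line interpolation between the bracketing points
            return x0 + (criterion - p0) * (x1 - x0) / (p1 - p0)
    raise ValueError("criterion not bracketed by the data")

# Hypothetical detection proportions over four intensity levels
print(rl_from_proportions([1, 2, 3, 4], [0.1, 0.4, 0.8, 1.0]))  # 2.25
```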
Method of average error /Method of adjustment /Method of reproduction/Method of equivalent stimuli
The method of average error is the oldest method of psychophysics and is a sort of gift from astronomy to psychophysics. In
this method the subject is provided with an S and a C. The C is either greater or lesser in intensity than the S. He is required to
adjust the C, until it appears to him to be equivalent to the S.
The difference between S and C defines the error in each judgement. A large number of such judgements are obtained and the
arithmetic mean (or average) of those judgements is calculated. Hence, the name, method of average error or mean error is
given. The obtained mean is the value for Point of Subjective Equality (PSE). The difference between the S, and PSE indicates
the presence of the Constant Error (CE). If the PSE (or average adjustment) is larger than the S, then the CE is positive and
indicates overestimation of the standard stimulus. On the other hand, if the PSE is smaller than the S, then the CE is negative
and indicates underestimation of the standard stimulus.
The method of average error is distinguished from the two preceding methods in one important way. In the preceding two
methods the control over changes in the stimulus was entirely in the hands of the investigator or the experimenter. But in the
method of average error the subjects themselves are permitted to control the variations or changes in the stimulus. In using the
method of average error, care must be taken to see that the probability of systematic or constant error as well as variable error
is minimised. This can be ensured by the following means:
(1) In half of the total trials, the C should be set at a value larger than the S, and in the remaining half, the C, should be set
at a value smaller than the S. In this way, the direction of adjustment should be counterbalanced so that movement
error may be minimized or cancelled. The Movement Error (ME) is the error produced by the subject's bias for moving
the comparable stimulus inward or outward.
(2) The spatial presentation of the C and the S may also result in a systematic error called Space Error (SE). The space
error is defined as the error which is produced by the subject's bias in adjusting the C with S when the former is placed
either to the left or right of the latter in all the trials. Hence, for controlling SE, the C should be presented by placing it
to the right of the S in half of the total trials and reversing its position in the remaining half.
(3) The initial value of the C should be randomly changed from trial to trial so that the subject may not get any unnecessary
cues known as 'extraneous cues' in adjusting or equating the C to the S.
The method of average error is also known by several other names. In this method the subjects adjust the C to the S by making
an active manipulation of the C. Hence, the method is also known as the method of adjustment. The purpose of the method
is to determine equivalent stimuli by active adjustment of the C by the subject in each trial and hence, the method also goes by
the name of method of equivalent stimuli. In this method the subject tries to reproduce a given C, in a way which may seem
equivalent to the S. Hence, it is also known as the method of reproduction.
The main purpose of this method is to calculate PSE, although DL can also be calculated. The method will be illustrated with
data obtained from the experiment on the Muller-Lyer illusion.
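Since the illustration data are not reproduced in this excerpt, a minimal sketch with hypothetical adjustment settings shows the arithmetic for PSE and CE described above:

```python
def pse_and_ce(adjustments, standard):
    """PSE is the mean of the subject's adjusted comparison settings;
    CE = PSE - S (positive CE indicates overestimation of the
    standard, negative CE underestimation)."""
    pse = sum(adjustments) / len(adjustments)
    return pse, pse - standard

# Hypothetical adjustment settings against a 100-unit standard
pse, ce = pse_and_ce([104, 98, 102, 100], standard=100)
print(pse, ce)  # 101.0 1.0 (slight overestimation)
```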
WEBER’S LAW
Weber's law, named after its discoverer EH Weber, was the first systematic attempt to formulate a principle which governed
the relationship between psychological experience and physical stimulus. For a long time, this law has been the focus of
psychophysical experimentation.
When the magnitude of the standard stimulus is increased, the size of change needed for discrimination between the standard
and the comparable stimulus (that is, JND) is also increased. Thus, the greater the magnitude of the standard stimulus, the
greater the size of the JND or DL. Weber's law is a common mathematical statement of this fact. This relationship between the
size of the standard stimulus and the size of JND is technically known as Weber's Law. For example, if the addition of one candle makes a just noticeable difference in an already lighted room having ten candles, it would take 10 candles to make the same difference in a room lighted with 100 candles, and 100 candles to produce the same noticeable difference in a room lighted with 1000 candles. Thus, it is obvious that JND bears a constant ratio to the standard stimulus. In the words of Underwood,
"The law states that for a given stimulus dimension, the DL bears a constant ratio to the point on the dimension (standard
stimulus) at which the DL was measured." The law may be stated in terms of the following equation:
∆R / R = k
Where ∆R = DL, R is the standard stimulus, and k is a constant. Equivalently,
DL / standard stimulus = constant
The constant in Weber's law is always a fraction and is known as the Weber fraction or proportion. It indicates the proportion
by which the standard stimulus must be increased in order to produce the just noticeable difference or to detect a change. If the
addition of 8 g to a standard weight of 10 g makes a just noticeable difference, the Weber fraction will be 8/10 = 0.8, which indicates that the intensity of the standard stimulus must be increased by 0.8 of its magnitude in order to perceive a difference between that stimulus and the other stimulus. If the Weber fraction is 0.8, a stimulus value of, say, 100 must be increased by 0.8 of its magnitude, that is by 80 (0.8 × 100), to be just noticeably different from 100. In other words, it should be 180. Likewise, if the stimulus value is 1000,
its value should be increased by 800 in order to produce the just noticeable difference. Thus, the ratio remains constant
irrespective of the strength of the standard stimulus.
Weber's law, in general, has been regarded as a good measure of the overall sensitivity in different sense modalities. Generally,
if the Weber fraction is larger, the DLs for the given stimulus dimensions will also be larger. One advantage of the Weber
fraction is its direct comparability. Since the fraction is not dependent upon a physical unit in terms of which the standard
stimulus and the DL are measured, the fractions can be compared across the different stimulus dimensions. Weber's fraction
has been commonly reported to be 0.020 for heaviness, 0.030 for line length, 0.079 for brightness, 0.023 for finger span, 0.014
for electric shock, and 0.084 for a salty taste.
One general difficulty with Weber's law is that its precision is lost where the standard stimulus reaches the extremes, i.e., when
the standard stimulus becomes either very weak or very strong, the precision is lost to a great extent. Not only that, the Weber
fraction is also influenced by the way the stimuli are presented. When the presentation is such that the standard stimulus is
followed by the comparable stimulus, the fraction reaches its maximum precision. When the order is reversed or modified, its
precision is adversely affected. For these reasons, several alternative laws have been derived for fitting the function of DL and
the standard stimulus.
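The Weber-fraction arithmetic above reduces to a single multiplication; a minimal sketch (the function name is illustrative):

```python
def jnd(standard, weber_fraction):
    """Just noticeable difference predicted by Weber's law:
    Delta R = k * R."""
    return weber_fraction * standard

# Text's hypothetical fraction k = 0.8: a 100-unit standard needs +80
# (i.e., 180 in total) to be just noticeably different; 1000 needs +800.
print(jnd(100, 0.8), jnd(1000, 0.8))  # 80.0 800.0
```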
FECHNER’S LAW
Fechner derived his law (called the Fechner Law) from Weber's Law. Fechner's law is an indirect method of scaling judgement where DL is used as the unit of an equal-interval scale. Fechner was of the view that the DL for each successive unit or psychological step can be determined by using a constant multiple. For example, suppose the Weber fraction is 0.25 (or 1/4) for a particular sensation. If one stimulus value is 20 units, the other stimulus should be 1/4 of 20 (0.25 × 20 = 5) units more than 20, i.e., it should be 25, to produce a just noticeable difference. Again, the stimulus value at the next psychological step, to be just noticeably different from 25 units, should be 0.25 × 25 = 6.25 more than 25 units, that is, it should be 31.25. Likewise, at the next psychological step the stimulus value should be 31.25 + 7.81 = 39.06 in order to produce a just noticeable difference from the stimulus value of 31.25. Thus, the stimulus value required at each successive psychological step to produce a just noticeable difference is 5/4 times the preceding one. In this way, for each successive unit, ever larger increments in stimulus value are needed to produce equal increments in psychological sensation. This increase in psychological sensation as a function of increment in stimulus value can be described by a logarithmic relationship because it entails multiplication by a constant. In the above example, the stimulus value at each successive step can be found by using the constant multiple 5/4: for the first step, starting with the stimulus value of 20, it would be 5/4 × 20 = 25; for the second step it would be 5/4 × 25 = 31.25; and for the third step, 5/4 × 31.25 = 39.06, and so on.
It is obvious that increments in the stimulus value occur by a process of multiplication but increments in the resulting
psychological sensation at each successive step occur by the process of addition. The former type of increment is known as
geometrical progression and the latter type of increment is known as arithmetical progression. When one of the two variables increases
in geometrical progression and the other in arithmetical progression, the relationship is termed as a logarithmic relationship.
Fechner's law states that the stimulus values and the resulting psychological sensation have a logarithmic relationship so that
when the former increases in geometrical progression, the latter increases in arithmetical progression. In other words, the law
states that the magnitude of sensation (or response) varies directly with the logarithm of the stimulus value. This law of Fechner
has been paraphrased by one author, “the sensation plods along step by step while the stimulus leaps ahead by ratios”. Fechner
has given several formulas for showing this relationship but the most common is:

R = k × log S
Where R is the magnitude of sensation (or response), k is Weber’s constant, and S is the magnitude of the stimulus value.
In formulating the above law, Fechner made two important assumptions:
(1) The DL or JND indicates equal increments in psychological sensation irrespective of the absolute level at which it is
produced.
(2) Psychological sensation is the sum of all those JND steps, which come before its origin.
Fechner's law was very influential, particularly in psychology's early days, because the law showed that it was possible to
relate the things of mind to those of body in precise, quantitative terms.
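The arithmetic of these JND steps can be sketched in a few lines of Python. This is a minimal illustration, not from the source; the Weber fraction 0.25 and the starting stimulus value 20 are taken from the example above.

```python
import math

k = 0.25    # Weber fraction from the example (DL = 0.25 x stimulus value)
S0 = 20.0   # starting stimulus value from the example

# Each JND step multiplies the stimulus by (1 + k) = 5/4: geometric progression.
steps = [S0]
for _ in range(3):
    steps.append(steps[-1] * (1 + k))
print(steps)  # [20.0, 25.0, 31.25, 39.0625]

# Fechner's law: sensation grows with the logarithm of the stimulus, so the
# geometric stimulus series maps onto equal (arithmetic) sensation increments.
def sensation(S):
    return math.log(S / S0) / math.log(1 + k)

print([round(sensation(S), 2) for S in steps])  # [0.0, 1.0, 2.0, 3.0]
```

The geometric series in `steps` paired with the arithmetic series of sensation values (0, 1, 2, 3) is exactly the logarithmic relationship the law describes.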
SIGNAL DETECTION THEORY
Modern psychophysics has a more complicated view of how stimuli are detected. Signal-detection theory proposes that the
detection of stimuli involves decision processes as well as sensory processes, both of which are influenced by a variety of factors
besides stimulus intensity. In comparison to classical models of psychophysics, signal-detection theory is better equipped to
explain some of the complexities of perceived experience in the real world.
According to signal detection theory, complex decision mechanisms are involved whenever people try to determine if they have
or have not detected a specific stimulus. For instance, imagine a radiologist who, while scanning a patient's X-ray, thinks he
detects a faint spot on the film but is not quite sure. If he concludes that the spot is an abnormality, he must order more scans
or tests, an expensive and time-consuming alternative. If further testing reveals an abnormality, such as cancer, he may have
saved the patient's life. If no abnormality is detected, though, he’ll be blamed for wasting resources and unnecessarily upsetting
the patient. Alternatively, if he decides the spot is not an abnormality, then there's no reason to order more tests. If the patient
remains healthy, then he has done the right thing. However, if the spot is really cancerous tissue, the results could be fatal.
Clearly, his decision is likely to be influenced by the rewards and costs associated with each choice alternative.
In situations like these, there are four possible outcomes: hits (detecting signals when they are present), misses (failing to detect signals
when they are present), false alarms (detecting signals when they are not present), and correct rejections (not detecting signals
when they are absent). Given these possibilities, signal-detection theory attempts to account for the influence of decision-making
processes on stimulus detection.
                      Response: Absent       Response: Present
Stimulus: Present     Miss                   Hit
Stimulus: Absent      Correct Rejection      False Alarm
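The four outcomes above are commonly summarized by the sensitivity index d′ (the z-transformed hit rate minus the z-transformed false-alarm rate). This index is standard in signal-detection work, though it is not derived in the text, and the trial counts below are made up purely for illustration.

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # inverse of the standard normal CDF

# Hypothetical counts for the four outcomes in the table above.
hits, misses = 80, 20                       # stimulus-present trials
false_alarms, correct_rejections = 10, 90   # stimulus-absent trials

hit_rate = hits / (hits + misses)                             # 0.8
fa_rate = false_alarms / (false_alarms + correct_rejections)  # 0.1
d_prime = z(hit_rate) - z(fa_rate)
print(f"d' = {d_prime:.2f}")  # d' = 2.12
```

A larger d′ means the observer separates signal from noise more reliably; shifts in rewards and costs (the radiologist's dilemma above) move the hit and false-alarm rates together without necessarily changing d′.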

Interestingly, an incident very similar to this scenario was recently reported in the media. Two pathologists from the Boston area in
the US misread the tissue biopsies of twenty patients; the doctors informed the men they did not have cancer, when in fact they
had the disease. The mistakes were revealed during a review of 279 tests performed between 1995 and 1997. Please note that it
is not clear whether motivational factors (e.g., cost considerations) contributed to the apparent misdiagnosis. The doctors' error
does illustrate, however, that deciding whether they have detected a given stimulus is not always easy. Indeed, these decisions
involve much more than a simple determination of the relationship between the amount of physical energy present in a stimulus
and the resulting psychological sensations.
SUBLIMINAL PERCEPTION
This issue centers on the concept of subliminal perception, i.e., the registration of sensory input without conscious awareness
(limen is another term for threshold, so subliminal means below threshold). Subliminal perception has become tied up in highly
charged controversies relating to money, sex, religion, and rock music. The controversy began in 1957 when an executive named
James Vicary placed hidden messages such as “Eat popcorn” in a film showing at a theatre in New Jersey. The messages were
superimposed on only a few frames of the film, so they flashed by quickly and imperceptibly. Nonetheless, Vicary claimed in
the press that popcorn sales increased by 58%, creating great public controversy. Since then, Wilson Brian Key, a former
advertising executive, has written several books claiming that sexual words and drawings are embedded subliminally in magazine
advertisements to elicit favorable unconscious reactions from consumers. Taking the sexual manipulation themes a step further,
entrepreneurs are now marketing music audiotapes containing subliminal messages that are supposed to help seduce
unsuspecting listeners. Furthermore, subliminal self-help tapes intended to facilitate weight loss, sleep, memory, self-esteem,
and the like have become a $50 million industry.
Research on subliminal perception was sporadic in the 1960s and 1970s because scientists initially dismissed the entire idea as
preposterous. However, empirical studies have begun to accumulate since the 1980s. Quite a number of these studies have
found support for the existence of subliminal perception. For example, in one recent study, Karremans, Stroebe, and Claus
(2006) set out to determine whether participants’ inclination to consume a particular drink (Lipton iced tea) could be influenced
without their awareness. Subjects were asked to work on a visual detection task that was supposedly designed to determine
whether people could spot small changes in visual stimuli. For half of the participants, subliminal presentations (23/1000 of a
second) of the words LIPTON ICE were interspersed among these visual stimuli. Control subjects were given subliminal
presentations of neutral words. After the visual detection task, all participants took part in a study of “consumer behavior” and
their inclination to drink Lipton iced tea was assessed. As predicted, participants exposed subliminally to LIPTON ICE were
significantly more interested in consuming Lipton iced tea, an effect that was especially pronounced among those who indicated that they were thirsty.
Sensory Adaptation
Sensory adaptation is the reduced sensitivity to unchanging stimuli over time.
Sensory adaptation has several advantages. If it did not occur, people would constantly be distracted by the stream of sensations
they experience each day. They would not adapt to clothing rubbing against the skin, to the feel of the tongue in the mouth,
or to bodily processes such as eye blinks and swallowing. However, sensory adaptation is not always beneficial and can even be
dangerous. After about a minute, for example, the sensitivity to most odors drops by nearly 70 percent. Thus, in situations where
smoke or harmful chemicals are present, sensory adaptation may actually reduce the sensitivity to existing dangers. In general,
though, the process of sensory adaptation allows people to focus on important changes in the world around them, and that ability to
focus on and respond to stimulus change is usually what is most important for survival.
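As a rough model, adaptation is often treated as an exponential decay of sensitivity. The only number taken from the text is the roughly 70 percent drop in odor sensitivity after about a minute; the exponential form itself is an assumption made for illustration.

```python
import math

drop_after_60s = 0.70  # from the text: odor sensitivity falls ~70% in a minute
# Fit a decay constant tau so that sensitivity(60) = 1 - 0.70 = 0.30.
tau = 60.0 / math.log(1.0 / (1.0 - drop_after_60s))

def sensitivity(t):
    """Assumed exponential model: fraction of initial sensitivity after t seconds."""
    return math.exp(-t / tau)

for t in (0, 30, 60, 120):
    print(f"t = {t:3d} s -> sensitivity = {sensitivity(t):.2f}")
```

Under this model, sensitivity is down to 30% after one minute and to about 9% after two, which is why a persistent odor (including a dangerous one) can fade from awareness so quickly.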
RULES OF ORGANIZATION
Gestalt approach to figure-ground segregation
In the early 1900s, two groups of psychologists engaged in a heated debate over how perceptions are formed. One group, called
the structuralists, strongly believed that people added together thousands of sensations to form a perception. Another group,
called the Gestalt psychologists, just as strongly believed that sensations were not added but rather combined according to a set
of innate rules to form a perception. The Gestalt psychologists won the debate.
The structuralists believed that people add together hundreds of basic elements to form complex perceptions. They also believed
that you can work backward to break down perceptions into smaller and smaller units, or elements. Structuralists spent hundreds
of hours analyzing how perceptions, such as a falling ball, might be broken down into basic units or elements. They believed
that once they understood the process of breaking down perceptions, they would know how basic units are recombined to form
perceptions. Thus, structuralists believed that you add together basic units to form perceptions, much as people would add a
column of numbers to get a total. For example, structuralists would say that you add together hundreds of basic units, such as
colors, bricks, leaves, branches, tiles, pieces of glass, and bits of steel, to form the perception of a scene. However, the
structuralists’ explanation of adding bits to form a perception was hotly denied by Gestalt psychologists. The Gestalt
psychologists said that perceptions were much too complex to be formed by simply adding sensations together; instead, they
believed that perceptions were formed according to a set of rules.
Gestalt psychologists believed that the brain follows a set of rules that specify how individual elements are to be organized into
a meaningful pattern, or perception. Unlike structuralists, Gestalt psychologists said that perceptions do not result from adding
sensations. Rather, perceptions result from our brain’s ability to organize sensations according to a set of rules, much as the
brain follows a set of rules for organizing words into meaningful sentences. To emphasize their point, Gestalt psychologists
came up with a catchy phrase, “The whole is more than the sum of its parts,” to mean that perceptions are not merely combined
sensations. The Gestalt psychologists went one step further; they came up with a list of organizational rules.
Organizational rules
Rules of organization, which were identified by Gestalt psychologists, specify how the brain combines and organizes individual
pieces or elements into a meaningful perception.
(1) One of the most basic rules in organizing perceptions is picking out the object from its background. The figure-ground
rule states that, in organizing stimuli, people tend to automatically distinguish between a figure and a ground: The figure,
with more detail, stands out against the background, which has less detail. There is some evidence that our ability to
separate figure from ground is an innate response. For example, individuals who were blind from an early age and had
their sight restored as adults were able to distinguish between figure and ground with little or no training. The figure-
ground rule is one of the first rules that the brain uses to organize stimuli into a perception. However, in the real world,
the images and objects we usually perceive are not reversible because they have more distinct shapes.
(2) The similarity rule states that, in organizing stimuli, people group together elements that appear similar. The similarity
rule causes people to group the dark blue dots together and prevents them from seeing the figure as a random arrangement of
light and dark dots.
(3) The closure rule states that, in organizing stimuli, people tend to fill in any missing parts of a figure and see the figure
as complete. For example, the closure rule explains why a person can fill in letters missing on a sign or pieces missing
in a jigsaw puzzle.
(4) The proximity rule states that, in organizing stimuli, people group together objects that are physically close to one
another. For example, people automatically group circles that are close together.
(5) The simplicity rule (law of Prägnanz) states that stimuli are organized in the simplest way possible. This rule
says that people tend to perceive complex figures as divided into several simpler figures.
(6) The continuity rule states that, in organizing stimuli, people tend to favor smooth or continuous paths when
interpreting a series of points or lines. For example, the rule of continuity predicts that people do not see a line that
begins at A and then turns abruptly to C or to D.
(7) The common region rule says that items within a boundary are perceived as a group and assumed to share some
common characteristic or functionality.

Perceptual constancy
Perceptual constancy refers to the tendency to perceive sizes, shapes, brightness, and colors as remaining the same even though
their physical characteristics are constantly changing. There are four kinds of perceptual constancy; size, shape, brightness, and
color.
(1) Size constancy refers to the tendency to perceive objects as remaining the same size even when their images on the
retina are continually growing or shrinking. As a car drives away, it projects a smaller and smaller image on the retina.
Although the retinal image grows smaller, people do not perceive the car as shrinking because of size constancy. A
similar process happens as a car drives toward a person. As the same car drives closer, it projects a larger image on the
retina. However, because of size constancy, people do not perceive the car as becoming larger.
Size constancy is something people have learned from experience with moving objects. People have learned that objects
do not increase or decrease in size as they move about.
(2) Shape constancy refers to the tendency to perceive an object as retaining its same shape even though, when it is viewed
from different angles, its image on the retina is continually changing. When a person looks down at a
rectangular book, it projects a rectangular shape on the retina. However, if the book is tilted away from the viewer, it projects
trapezoidal shapes on the retina, but people still perceive the book as rectangular because of shape constancy.
(3) Brightness constancy refers to the tendency to perceive brightness as remaining the same in changing illumination.
Color constancy refers to the tendency to perceive colors as remaining stable despite differences in lighting.
For example, if a person looks at a young girl’s sweater in bright sunlight, it appears a bright yellow. If he looks at the
same yellow sweater in dim light, he still perceives the color as a shade of yellow, although it is duller. Because
of color constancy, colors seem about the same even when lighting conditions change. However, if the light is very
dim, objects will appear mostly gray, because people lose color vision in very dim light.
DEPTH PERCEPTION
Depth perception refers to the ability of the eye and brain to add a third dimension, depth, to all visual perceptions, even though
images projected on the retina are in only two dimensions, height and width. It is impossible for most sighted people to imagine
a world without depth, since they rely on depth perception to move and locate objects in space. The cues for depth perception
are divided into two major classes: binocular and monocular. Depth perception, like the constancies, seems to develop very early in infancy, if it is not actually
present at birth; people who were blind from birth and later had their sight restored have almost no ability to perceive
depth. Various cues exist for perceiving depth in
the world. Some require the use of only one eye (monocular cues) and some are a result of the slightly different visual patterns
that exist when the visual fields of both eyes are used (binocular cues).
Monocular cues
Monocular depth cues are produced by signals from a single eye. Monocular cues most commonly arise from the way objects
are arranged in the environment.
(1) Linear perspective is a monocular depth cue that results as parallel lines come together, or converge, in the distance.
As someone looks down a long stretch of road, the parallel lines formed by the sides of the road appear to come together,
or converge, at a distant point. This convergence is a monocular cue for distance and is called linear perspective.
(2) Relative size is a monocular cue for depth that results when people expect two objects to be the same size and they are
not. In that case, the larger of the two objects will appear closer and the smaller will appear farther away. A person
expects the electric towers in the photo above to be the same size. However, since the electric towers in the front appear
larger, he perceives them as closer, while the electric towers in the back appear smaller and thus farther away. The
relative size of objects is a monocular cue for distance.
(3) Interposition is a monocular cue for depth perception that comes into play when objects overlap. The overlapping
object appears closer, and the object that is overlapped appears farther away. For example, one can easily perceive which fish in
a picture are in front and which are in back, even though all the fish are about the same size; identifying which
fish are closest and which are farthest relies on the monocular depth cue of overlap, which is called
interposition.
(4) Light and shadow make up monocular cues for depth perception: Brightly lit objects appear closer, while objects in
shadows appear farther away. People can notice how the brightly lit edges of the footprints appear closer, while the
shadowy imprint in the sand appears to recede. Also, the sunny side of the sand dune seems closer, while the back side
in shadows appears farther away. The monocular depth cues shown here involve the interplay of light and shadows.
(5) Texture gradient is a monocular depth cue in which areas with sharp, detailed texture are interpreted as being closer
and those with less sharpness and poorer detail are perceived as more distant. People notice how the wide, detailed
surface cracks in the mud seem closer, while the less detailed and narrower cracks appear farther away. These sharp
changes in surface details are monocular depth cues created by texture gradients.
(6) Atmospheric perspective is a monocular depth cue that is created by the presence of dust, smog, clouds, or water
vapor. People perceive clearer objects as being nearer, and perceive hazy or cloudy objects as being farther away. One
of the depth cues that people may have overlooked is created by changes in the atmosphere.
(7) Motion parallax is a monocular depth cue based on the speed of moving objects. People perceive objects that appear
to be moving at high speed as closer to us than those moving more slowly or appearing stationary.
(8) Accommodation occurs with both eyes, but it is still a monocular cue, because one eye alone would give the same
information as would both. Accommodation refers to the feedback people receive from the muscles in the eye that
cause the lens to bulge or get thinner.
Binocular cues
Binocular depth cues depend on the use of both eyes (bi means “two”; ocular means “eye”).
(1) Convergence refers to a binocular cue for depth perception based on signals sent from muscles that turn the eyes. To
focus on near or approaching objects, these muscles turn the eyes inward, toward the nose. The brain uses the signals
sent by these muscles to determine the distance of the object. People can experience convergence by holding a finger
in front of the nose and slowly bringing it closer to the nose. The finger appears to move closer to the nose because
the muscles that are turning the eyes inward produce signals corresponding to convergence. The more the eyes turn
inward or converge, the nearer the object appears in space.
(2) The second binocular cue comes from having an eye on each side of the face. One reason it’s an advantage to have an
eye on each side of the face is that each eye has a slightly different view of the world, which provides another binocular
cue for depth perception called retinal disparity.
Retinal disparity refers to a binocular depth cue that depends on the distance between the eyes. Because of their
different positions, each eye receives a slightly different image. The difference between the right and left eyes’ images
is the retinal disparity. The brain interprets a large retinal disparity to mean a close object and a small retinal disparity
to mean a distant object.
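The geometry behind this cue can be sketched with the standard stereo triangulation relation, depth = (baseline × focal length) / disparity. The relation is standard in stereo vision, but it is not given in the text, and the numeric values below are rough assumptions chosen only to illustrate the inverse relationship between disparity and distance.

```python
baseline = 0.065      # assumed interocular distance, metres (~6.5 cm)
focal_length = 0.017  # assumed effective focal length of the eye, metres

def depth_from_disparity(disparity):
    """Standard stereo relation: large disparity -> near object, small -> far."""
    return baseline * focal_length / disparity

for disparity in (1e-3, 1e-4, 1e-5):  # retinal disparity in metres
    print(f"disparity {disparity:.0e} m -> depth "
          f"{depth_from_disparity(disparity):.1f} m")
```

Halving the disparity doubles the computed distance, which matches the rule in the text: the brain interprets a large retinal disparity as a close object and a small one as a distant object.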
Illusions
An illusion is a perceptual experience in which a person perceives an image as being so strangely distorted that, in reality, it cannot and
does not exist. An illusion is created by manipulating perceptual cues so that the brain can no longer correctly interpret
space, size, and depth cues. Certain illustrations, for example, activate motion-detecting neurons in the visual pathway; patterns in
illustrations like these fool the visual system into seeing motion when it doesn’t really exist. The motion perception area of the
brain actually shows heightened activity as people move their eyes while looking at these types of illusions.
The moon illusion has intrigued people for centuries because it is so impressive. When a full moon is near the horizon, it
appears (or gives the illusion of being) as much as 50% larger than when it is high in the sky. People perceive this 50% increase
in size even though the size of both moons on the retinas is exactly the same. For over 50 years, researchers have proposed
different theories for the moon illusion. Currently, no single theory can explain the moon illusion completely and it is believed
that several factors contribute to it. The most important factor has to do with how the view of the landscape surrounding the
moon influences our depth perception. When people view the moon on the horizon, they see it in relation to the landscape
(trees, mountains, buildings), which consists of depth information. In contrast, because they view the elevated moon through
empty space, there are no cues to indicate distance. Thus, the brain perceives the moon on the horizon to be farther away than
the elevated moon. Consequently, since the size of both moons on the retinas is exactly the same and the moon on the horizon
is perceived as being farther away, the brain compensates to correct this inconsistency by inflating the perception of the size of
the moon on the horizon. Consistent with this theory, researchers found that subjects estimated the horizon moon to be much
farther away and interpreted its size as being larger. Likewise, subjects estimated the elevated moon to be closer and perceived
it as being smaller.
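This explanation is an instance of the size-distance invariance hypothesis: for a fixed retinal (angular) size, perceived linear size grows in proportion to perceived distance. A minimal sketch, with wholly hypothetical distance judgments chosen to reproduce the 50% figure from the text:

```python
angular_size = 0.52  # angular size of the moon in degrees (same for both moons)

def perceived_size(angular_size_deg, perceived_distance):
    # Size-distance invariance: perceived size is proportional to the
    # angular size times the judged distance.
    return angular_size_deg * perceived_distance

horizon = perceived_size(angular_size, perceived_distance=1.5)   # judged farther
elevated = perceived_size(angular_size, perceived_distance=1.0)  # judged nearer
print(f"horizon moon looks {horizon / elevated:.0%} the size of the elevated moon")
```

Because the retinal images are identical, the entire 50% difference in perceived size comes from the difference in judged distance, which is the compensation process described above.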
An Ames room is a distorted room that creates an optical illusion. Likely influenced by the writings of Hermann Helmholtz, it
was invented by American scientist Adelbert Ames Jr. in 1946, and constructed in the following year. An Ames room is viewed
with one eye through a peephole. Through the peephole, the room appears to be an ordinary rectangular cuboid, with a back
wall that is vertical and at right angles to the observer's line of sight, two vertical side walls parallel to each other, and a horizontal
floor and ceiling. The observer will see that an adult standing in one corner of the room along the back wall appears to be a
giant, while another adult standing in the other corner along the back wall appears to be a dwarf. And if an adult moves from
one corner of the room to the other, they will appear to dramatically change in size. The true shape of the room is that of an
irregular hexahedron: depending on the design of the room, all surfaces can be regular or irregular quadrilaterals, so that one
corner of the room is farther from an observer than the other. The illusion of an ordinary room is because most information
about the true shape of the room does not reach the observer's eye. The geometry of the room is carefully designed, using
perspective, so that, from the peephole, the image projected onto the retina of the observer's eye is the same as that of an
ordinary room. Once the observer is prevented from perceiving the real locations of the parts of the room, the illusion that it is
an ordinary room occurs.
The Poggendorff illusion is a geometrical-optical illusion that involves the misperception of the position of one segment of a
transverse line that has been interrupted by the contour of an intervening structure. It is named after Johann Christian
Poggendorff, the editor of the journal, who discovered it in the figures Johann Karl Friedrich Zöllner submitted when first
reporting on what is now known as the Zöllner illusion, in 1860. The magnitude of the illusion depends on the properties of
the obscuring pattern and the nature of its borders. The Poggendorff illusion is an optical illusion that involves the brain's
perception of the interaction between diagonal lines and horizontal and vertical edges, i.e., Poggendorff illusion is an image
where thin diagonal lines are positioned at an angle behind wider stripes. When these thin lines are observed, they appear to be
misaligned; in the example above, the blue line on the right appears to line up with the black line on the left, when in actuality the
black and red lines match up.
The Ponzo illusion is an optical illusion that was first demonstrated by the Italian psychologist Mario Ponzo (1882-1960) in
1913. He suggested that the human mind judges an object's size based on its background. He showed this by drawing two
identical lines across a pair of converging lines, similar to railway tracks. The upper line looks longer because people interpret
the converging sides according to linear perspective as parallel lines receding into the distance. In this context, people interpret
the upper line as though it were farther away, so people see it as longer: a farther object would have to be longer than a nearer
one for both to produce retinal images of the same size.

The Müller-Lyer illusion is an optical illusion consisting of three stylized arrows. When viewers are asked to place a mark on
the figure at the midpoint, they tend to place it more towards the "tail" end. The illusion was devised by Franz Carl Müller-Lyer
(1857–1916), a German sociologist, in 1889. A variation of the same effect (and the most common form in which it is seen
today) consists of a set of arrow-like figures. Straight line segments of equal length comprise the "shafts" of the arrows, while
shorter line segments (called the fins) protrude from the ends of the shaft. The fins can point inwards to form an arrow "head"
or outwards to form an arrow "tail". The line segment forming the shaft of the arrow with two tails is perceived to be longer
than that forming the shaft of the arrow with two heads. People have learned that if a corner of a room extends outward, it is
closer; this experience distorts the perception so that the left arrow appears to be shorter. In contrast, they have learned that if
a corner of a room recedes inward, it is farther away, and this experience makes them perceive the right arrow as longer.

The Hermann grid illusion is an optical illusion reported by Ludimar Hermann in 1870. The illusion is characterized by
"ghostlike" grey blobs perceived at the intersections of a white (or light-colored) grid on a black background. The grey blobs
disappear when looking directly at an intersection.
IV. MEMORY
Memory is an active system that receives information from the senses, puts that information into a usable form, organizes it as
it stores it away, and then retrieves the information from storage. Or, it is the ability to retain information over time through
three processes: encoding (forming), storing, and retrieving. Memories are not copies but representations of the world that vary
in accuracy and are subject to error and bias.
PROCESS OF MEMORY
Although there are several different models of how memory works, all of them involve the same three processes: getting the
information into the memory system, storing it there, and getting it back out.
Putting it in: encoding; the first process in the memory system is to get sensory information (sight, sound, etc.) into a form
that the brain can use. This is called encoding. Encoding is the set of mental operations that people perform on sensory
information to convert that information into a form that is usable in the brain’s storage systems. For example, when people hear
a sound, their ears turn the vibrations in the air into neural messages from the auditory nerve (transduction), which make it
possible for the brain to interpret that sound. Encoding is not limited to turning sensory information into signals for the brain.
Encoding is accomplished differently in each of the three different storage systems of memory. In one system, encoding may
involve rehearsing information over and over to keep it in memory, whereas in another system, encoding involves elaborating
on the meaning of the information.
In simple words, encoding refers to making mental representations of information so that it can be placed into memory. For
example, people may encode numbers by visualizing each number as having a different shape, color, and texture. Such vivid mental
representations help to store the numbers in memory.
Keeping it in: storage; the next step in memory is to hold on to the information for some period of time in a process called
storage. The period of time will actually be of different lengths, depending on the system of memory being used. For example,
in one system of memory, people hold on to information just long enough to work with it, about 20 seconds or so. In another
system of memory, people hold on to information more or less permanently.
Getting it out: retrieval; the biggest problem many people have is retrieval, that is, getting the information they know they
have out of storage. It is actually the process of getting or recalling information that has been placed into short-term or long-
term storage.
TYPES OF MEMORY
In fact, a popular model of memory divides it into three different processes: sensory, short-term, and long-term memory.
Sensory memory
It is the first system in the process of memory, the point at which information enters the nervous system through the sensory
systems, such as eyes, ears, and so on. Sensory memory refers to an initial process that receives and holds environmental
information in its raw form for a brief period of time, from an instant to several seconds. Information is encoded into sensory
memory as neural messages in the nervous system. As long as those neural messages are traveling through the system, it can be
said that people have a “memory” for that information that can be accessed if needed. The brief preservation of sensations in
sensory memory is adaptive in that it gives people additional time to try to recognize stimuli. However, they had better take advantage
of this stimulus persistence immediately, because it does not last long.
There are two kinds of sensory memory that have been studied extensively. They are the iconic (visual) and echoic (auditory)
sensory systems.
Iconic memory is a form of sensory memory that automatically holds visual information for about a quarter of a second or
more; as soon as a person shifts attention, the information disappears. (The word icon means “image”). In real life, information
that has just entered iconic memory will be pushed out very quickly by new information, a process called masking. Research
suggests that after only a quarter of a second, old information is replaced by new information. Although it is rare, some people
do have what is properly called eidetic imagery, or the ability to access a visual sensory memory over a long period of time. Although
the popular term photographic memory is often used to mean this rare ability, some people claiming to have photographic memory
actually mean that they have an extremely good memory. Having a very good memory and having eidetic imagery ability are two
very different things. Echoic memory, the auditory counterpart of iconic memory, similarly allows people to hold on to incoming auditory information long enough
for the lower brain centres to determine whether processing by higher brain centres is needed.
George Sperling conducted an experiment in which individual subjects sat in front of a screen upon which 12 letters (three rows
of four letters) appeared for a very brief period of time (50 milliseconds, or 50/1,000 of a second). After each presentation,
subjects were asked to recall a particular row of letters. Results and conclusion showed that if subjects responded immediately
(0.0-second delay) after seeing the letters, they remembered an average of nine letters. However, a delay of merely 0.5 seconds
reduced memory to an average of six letters, and a delay of 1.0 seconds reduced memory to an average of only four letters.
Notice that an increased delay in responding resulted in subjects’ remembering fewer letters, which indicated the brief duration
of iconic memory, seconds or less. This study demonstrated a sensory memory for visual information, which was called iconic
memory.
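The decline Sperling observed can be expressed as the fraction of the 12-letter display still reportable at each delay. A minimal sketch in Python, using only the average values quoted above (the function name and percentages are illustrative, not Sperling’s own analysis):

```python
# Average letters recalled at each response delay, as quoted above.
SPERLING_RESULTS = {0.0: 9, 0.5: 6, 1.0: 4}  # delay (seconds) -> letters

def percent_retained(delay: float, total_letters: int = 12) -> float:
    """Share of the 12-letter display still reportable at a given delay."""
    return 100 * SPERLING_RESULTS[delay] / total_letters

for delay in sorted(SPERLING_RESULTS):
    print(f"{delay:.1f} s delay: {percent_retained(delay):.0f}% retained")
```

Within one second, the reportable portion falls from 75% to about a third of the display, which is the brief iconic duration the study demonstrates.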
Echoic memory is another type of sensory memory: the brief memory of something a person has heard. Echoic memory’s capacity is limited to what can be heard at any one moment and is smaller than the capacity of iconic memory, although it lasts longer, about 2–4 seconds. It is very useful when a person wants to have meaningful conversations with others. It allows the
person to remember what someone said just long enough to recognize the meaning of a phrase. Researchers discovered that
the length of echoic memory increases as children grow into adults.
Short-term memory
Short-term memory, also called working memory, refers to another process that can hold only a limited amount of information,
an average of seven items, for only a short period of time, 2 to 30 seconds. That is, once a limited amount of information is
transferred into short-term, or working, memory, it will remain there for up to 30 seconds, and possibly longer through
maintenance rehearsal. However, the relatively short duration can be lengthened by repeating or rehearsing the information.
Maintenance rehearsal refers to the practice of intentionally repeating or rehearsing information so that it remains longer in
short-term memory.
It has long been known that people can increase the capacity of their short-term memory by combining stimuli into larger,
possibly higher-order units, called chunks. A chunk is a group of familiar stimuli stored as a single unit.
George Miller (1956) was the first to show that short-term memory can hold only about seven items or bits, plus or minus two. Although this may seem like a small number, researchers have repeatedly confirmed Miller’s original finding. Thus, one reason telephone numbers worldwide are generally limited to seven digits is that seven matches the capacity of short-term memory.
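Chunking can be illustrated with a short sketch: the same ten digits exceed the 7 ± 2 limit as single items but fit comfortably once regrouped. (The helper below is hypothetical, not from the text.)

```python
def chunk(seq: str, size: int) -> list[str]:
    """Regroup a flat sequence into larger familiar units ('chunks'),
    so that fewer items compete for short-term memory's ~7 slots."""
    return [seq[i:i + size] for i in range(0, len(seq), size)]

digits = "4915123456"
print(chunk(digits, 1))  # 10 single digits: beyond the 7 +/- 2 limit
print(chunk(digits, 3))  # 4 chunks ('491', '512', '345', '6'): well within it
```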
Research eventually uncovered a number of problems with the original model of short-term memory. Among other things,
studies showed that short term memory is not limited to phonemic encoding and that decay is not the only process responsible
for the loss of information from STM. These and other findings suggested that short-term memory involves more than a simple
rehearsal buffer, as originally proposed. Alan Baddeley has developed a more complex, modular model of short-term memory
that characterizes it as “working memory.” Baddeley’s model of working memory consists of four components.
The primacy effect refers to better recall, or improvement in retention, of information presented at the beginning of a task. The
recency effect refers to better recall, or improvement in retention, of information presented at the end of a task. Together, these
two effects are called the primacy-recency effect. The primacy-recency effect refers to better recall of information presented at
the beginning and end of a task.
Phonological loop, the first component in Baddeley's model of working memory is the phonological loop. Phonology means
the study of how words and sounds are organized in a language. This part of the system deals with information obtained from
written and spoken language. This component handles things such as information told to a person, mathematical problems, new
vocabulary words, or addresses written down.
The phonological loop has two distinct parts: the phonological store and the articulatory control process. These parts work independently
from each other but share information. The phonological store is the inner ear; it stores information that is heard for a few
seconds. This is where the short-term memories are generated during a conversation. The second part of the phonological loop
is the articulatory control process. This is the inner voice that interprets and rehearses the information from the phonological
store. During a conversation, the articulatory control process is constructing and producing the words.
Visuospatial sketchpad, while the phonological loop deals with spoken and written information, the visuospatial sketchpad
handles visual and spatial information. This means it stores objects and images into short-term memory. Spatial information
refers to the way people know their location in relation to other objects. The visuospatial sketchpad allows people to recall
layouts of a room or the way a painting looks.
Central executive, the most important part of Baddeley's model of working memory is the central executive. This
element of the model coordinates all the other parts of the system. The central executive deals with switching between memories
and tasks; this is important with working memory as many different memories may be needed to complete actions. The central
executive is also responsible for such things as daydreaming or stopping certain memories.
Episodic buffer, Baddeley's concept of working memory originally only contained three parts. The link between long-term and
short-term memory was not included in Baddeley's short-term memory study. The episodic buffer was added to the model as
the fourth component as it plays an important role in short-term memory. There is communication between both the long-term
and the short-term memory, and this is provided by the episodic memory buffer. It deals with how these memories work
together and how a short-term memory can become a long-term memory. The episodic buffer acts in much the same way as
backup storage on a computer.
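As a rough analogy only (Baddeley’s model is a cognitive theory, not an algorithm), the four components can be sketched as a router in which the central executive directs input by modality; all the names and routing rules below are illustrative:

```python
class WorkingMemory:
    """Toy sketch of Baddeley's four-component working memory."""

    def __init__(self):
        self.phonological_loop = []        # spoken/written material
        self.visuospatial_sketchpad = []   # visual/spatial material
        self.episodic_buffer = []          # link to long-term memory

    def central_executive(self, item, modality):
        """Coordinate the subsystems by routing input by modality."""
        if modality in ("spoken", "written"):
            self.phonological_loop.append(item)
        elif modality in ("visual", "spatial"):
            self.visuospatial_sketchpad.append(item)
        # the episodic buffer binds everything into one episode
        self.episodic_buffer.append(item)

wm = WorkingMemory()
wm.central_executive("a phone number read aloud", "spoken")
wm.central_executive("the layout of the room", "spatial")
print(wm.phonological_loop, wm.visuospatial_sketchpad, wm.episodic_buffer)
```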

Long-term memory
Long-term memory refers to the process of storing almost unlimited amounts of information over long periods of time with
the potential of retrieving, or remembering, such information in the future. Unlike sensory memory and short-term memory,
which have very brief storage durations, LTM can store information indefinitely. In fact, one point of view is that all information
stored in long-term memory is kept there permanently. According to this view, forgetting occurs only because people sometimes
cannot retrieve needed information from LTM. Elaborative rehearsal is a way of increasing the number of retrieval cues (stimuli
that aid in remembering) for information by connecting new information with something that is already well known.
Declarative memory vs procedural memory (non-declarative memory), non-declarative memory is a type of long-term memory that includes memory for skills, procedures, habits, and conditioned responses. These memories are not conscious but are inferred to exist because they affect conscious behavior; they cannot be deliberately retrieved and recalled. Evidence that separate areas of
the brain control nondeclarative memory comes from studies of people with damage to the hippocampal area of the brain. This
damage causes them to have anterograde amnesia, in which new long-term declarative memories cannot be formed.
Autobiographical memory refers to the memory of an individual’s history, i.e., the personal knowledge that each person has of his or her daily life and personal history. Examples might include memories of
experiences that occurred in childhood, the first time learning to drive a car, and even such memories as where a person was
born.
Non-declarative memory is about the things that people can do, but declarative (explicit) memory is about all the things that
people can know, the facts and information that make up knowledge, i.e., type of long-term memory containing information
that is conscious and known. Declarative memory involves memories of facts or events, such as scenes, stories, words,
conversations, faces, or daily events. People are aware of and can recall, or retrieve, these kinds of memories. For example, an
important life event, who came to dinner last night, or the date of the mother's birthday, as well as information about the world.
There are two kinds of declarative memory, semantic and episodic memory. Episodic and semantic memories are explicit
memories because they are easily made conscious and brought from long-term storage into short-term memory.
(1) Semantic memory is a type of declarative memory and involves knowledge of facts, concepts, words, definitions, and
language rules. The word semantic refers to meaning, so this kind of knowledge is the awareness of the meanings of
words, concepts, and terms as well as names of objects, math skills, and so on. This is also the type of knowledge that
is used on game shows such as Jeopardy. Semantic memories, like nondeclarative memories, are relatively permanent.
(2) Episodic memory is a type of declarative memory and involves knowledge of specific events, personal experiences
(episodes), or activities, such as naming or describing favorite restaurants, movies, songs, habits, or hobbies. Memories
of what has happened to people each day, certain birthdays, anniversaries that were particularly special, childhood
events, and so on are called episodic memory, because they represent episodes from their lives. Unlike non-declarative
and semantic long-term memories, episodic memories tend to be updated and revised more or less constantly.
Narrative memory is a subset of episodic and semantic memory that stores information with narrative features, i.e., memory organized as the narration of an event, incident, or situation.

ATKINSON AND SHIFFRIN MODEL


The proliferation of the “boxes in the head” explanation for human memory was well underway when Atkinson and Shiffrin
(1968) reported their model, the framework of which was based on the notion that memory structures are fixed and control
processes are variables. Atkinson and Shiffrin share the dualist concept of memory described by Waugh and Norman but
postulate far more subsystems within STM and LTM. The early model of memory, according to Atkinson and Shiffrin, was too
simplistic and not powerful enough to handle the complexities of attention, comparison, retrieval control, transfer from STM
to LTM, imagery, coding sensory memory, and so on.

In their model, memory has three stores; the sensory register, the short-term store, and the long-term store. A stimulus is
immediately registered within the appropriate sensory dimension and it is either lost or passed on for further processing.
Atkinson and Shiffrin draw an important distinction between the concepts of memory and memory stores; they use the term memory to refer to the data being retained, while store refers to the structural component that contains the information. Simply indicating how long an item has been retained does not necessarily reveal where it is located in the structure of memory. In their model, information in the short-term store can be transferred to the long-term store, while other information can be held for several minutes in the short-term store and never enter the long-term store. The short-term store was regarded as the working system, in which entering information decays and disappears rapidly. Information in the short-term store may be in a different form than it was originally (e.g., a word originally read by the visual system can be converted and represented auditorily). Information contained in the long-term store was envisioned as relatively permanent, even though it might be inaccessible because of interference from incoming information. The function of the long-term store was to monitor stimuli in the sensory register (and thus control the information entering the short-term store) and to provide storage space for information in the short-term store.
NEURAL NETWORK MODEL
Actually, however, the picture emerging from neuropsychological studies is quite different and much more complicated.
Memories don’t all seem to be “stored” in one place. Desimone (1992) noted that in humans and animals, lesions of the
cerebellum, a motor control structure, impair the acquisition of classically conditioned motor responses; lesions or disease of
portions of the striatum, which normally functions in sensorimotor integration, impair stimulus-response learning of habits;
lesions of the inferior temporal cortex, an area important for visual discrimination, impair visual recognition and associative
memory; and lesions of the superior temporal cortex, an area important for auditory discrimination, impair auditory recognition
memory.
Much of the interest in “localizing” memory in the brain dates back to a famous case study. In 1953, William Beecher Scoville, a neurosurgeon, performed surgery on H.M., a 27-year-old epileptic patient. Before the operation, H.M. was of normal intelligence. Scoville removed many structures on the inner sector of the temporal lobes of both sides of H.M.’s brain, including
most of the hippocampus, the amygdala, and some adjacent areas. This noticeably reduced H.M.’s seizures, and H.M.’s post-
operative IQ actually rose about 10 points. Unfortunately, however, H.M. suffered another decrement: he lost his ability to
transfer new episodic memories into long-term memory, and thus became one of the most famous neuropsychological case
studies in the literature. H.M. could remember semantic information, and events that he had experienced several years before
the operation. However, H.M. could no longer form new memories of new events. He could remember a series of seven or so
digits, as long as he was not distracted, but if he turned his attention to a new task, he could not seem to store that (or much
other) information. In addition to this anterograde amnesia (amnesia for new events), H.M. had retrograde amnesia (amnesia
for old events) for the period of several years just before his operation. H.M.’s case, widely publicized by psychologist Brenda Milner in the hope of preventing similarly extensive surgeries, suggested strongly that the structures removed from his brain,
especially the rhinal cortex and underlying structures, played a major role in forming new memories. Other researchers reported
other case studies and other animal studies that seemed to provide corroborating evidence. H.M.’s case was also taken as
evidence to support the distinction between long-term (perhaps very long-term) memories, which seemed accessible, at least for
events several years before the operation, and short-term memories, which seemed unstorable.
Findings from other brain-damaged people have implicated areas in the frontal lobe as having much to do with WM, perhaps
because frontal-lobe damage is often reported to disrupt attention, planning, and problem solving. Shimamura (1995) suggested
that these problems may arise not because attention and planning are located in the frontal lobe but rather because areas of the
frontal lobe inhibit activity in the posterior part of the brain. People with frontal-lobe damage seem more distractible and less
able to ignore irrelevant stimuli. PET scan studies also give us more information about the neural underpinnings of memory.
Smith and Jonides (1997) reported that PET study results confirm many aspects of Baddeley’s model of working memory, in
particular, different patterns of activation for verbal WM (localized primarily in the left frontal and left parietal lobes) versus
spatial WM (localized primarily in the right parietal, temporal, and frontal lobes). Nyberg and Cabeza (2000) reviewed brain-
imaging studies of memory conducted in many different laboratories and reported similar findings.
Other researchers have conducted fMRI studies on what brain regions are activated when information is remembered. Wagner
et al. (1998), for example, reported that verbal material that was encoded and remembered produced more activation in certain
regions of the frontal and temporal lobes. A similar fMRI study of people learning to remember photographs also indicated
greater activity in parts of the left prefrontal and medial temporal lobes.
Neil Carlson (1994) described some basic physiological mechanisms for learning new information. One basic mechanism is the
Hebb rule, named after the man who posited it, Canadian psychologist Donald Hebb. The Hebb rule states that if a synapse
between two neurons is repeatedly activated at about the same time the postsynaptic neuron fires, the structure or chemistry of
the synapse changes. A more general, and more complex, mechanism is called long-term potentiation. In this process, hippocampal cells in neural circuits subjected to repeated and intense electrical stimulation become more sensitive to stimuli. This effect of enhanced response can last for weeks or even longer, suggesting to many that this could
be a mechanism for long-term learning and retention. Disrupting the process of long-term potentiation (say, through different
drugs) also disrupts learning and remembering. Despite the intriguing results from neuropsychological studies, humans are far
from having a complete picture of how the brain instantiates all, or even many, memory phenomena. It is not clear which aspects
of memory are localized in one place in the brain and which are distributed across different cortical regions. It is not clear what
kinds of basic neural processes are involved in any one particular complex cognitive activity.
FORGETTING
Forgetting refers to the inability to retrieve, recall, or recognize information that was stored or is still stored in long-term memory,
i.e., it refers to failure to either recall or retain information into present consciousness. Like memory or remembering, forgetting
is also an important cognitive function of the mind that influences our behavior in a significant way. Forgetting refers to failure,
either to recall or to retain, i.e., it is a failure to revive the learned or acquired information into present consciousness. Past
experiences do not always remain fresh. Therefore, forgetting is a common event in life. There are several ways in which the
phenomenon of forgetting has been explained. These are called theories of forgetting.
Curve of Forgetting
The first person to conduct scientific studies of forgetting was Hermann Ebbinghaus. He published a series of insightful memory
studies way back in 1885. Ebbinghaus studied only one subject, himself. To give himself lots of new material to memorize, he
invented nonsense syllables consonant-vowel-consonant arrangements that do not correspond to words (such as BAF, XOF,
VIR, and MEQ). He wanted to work with meaningless materials that would be uncontaminated by his previous learning.
Ebbinghaus was a remarkably dedicated researcher. In one study he went through over 14,000 practice repetitions, as he tirelessly
memorized 420 lists of nonsense syllables. He tested his memory of these lists after various time intervals.
Graphing retention and forgetting over time produces what is called a forgetting curve. Ebbinghaus’s forgetting curve shows a
precipitous drop in retention during the first few hours after the nonsense syllables were memorized. Thus, he concluded that
most forgetting occurs very rapidly after learning something. Fortunately, subsequent research showed that Ebbinghaus’s
forgetting curve was unusually steep. Forgetting isn’t usually quite as swift or as extensive as Ebbinghaus thought. One problem
was that he was working with such meaningless material. When subjects memorize more meaningful material, such as prose or
poetry, forgetting curves aren’t nearly as steep. Studies of how well people recall their high school classmates suggest that
forgetting curves for autobiographical information are even shallower. Also, different methods of measuring forgetting yield
varied estimates of how quickly people forget. This variation underscores the importance of the methods used to measure
forgetting.
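The shape of such curves is often summarized with a simple exponential, R = e^(-t/S), where S reflects the stability of the memory. The sketch below uses this common approximation; it is not Ebbinghaus’s own equation, and the S values are made up purely for illustration:

```python
import math

def retention(t_hours: float, strength: float) -> float:
    """Fraction retained after t hours under the exponential
    approximation R = exp(-t/S); larger S gives a shallower curve."""
    return math.exp(-t_hours / strength)

# Hypothetical S values: meaningful prose is forgotten far more slowly
# than nonsense syllables, so its curve is much shallower.
for label, s in [("nonsense syllables", 1.5), ("meaningful prose", 20.0)]:
    row = ", ".join(f"{t} h: {retention(t, s):.0%}" for t in (1, 9, 24))
    print(f"{label}: {row}")
```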
Measures of Forgetting
To study forgetting empirically, psychologists need to be able to measure it precisely. Measures of forgetting inevitably measure
retention as well. Retention refers to the proportion of material retained (remembered). In studies of forgetting, the results may
be reported in terms of the amount forgotten or the amount retained. In these studies, the retention interval is the length of
time between the presentation of materials to be remembered and the measurement of forgetting. The three principal methods
used to measure forgetting are recall, recognition, and re-learning.
A recall measure of retention requires subjects to reproduce information on their own without any cues. A recognition measure
of retention requires subjects to select previously learned information from an array of options. Subjects not only have cues to
work with, but they also have the answers right in front of them. In educational testing, essay questions and fill-in-the-blanks
questions are recall measures of retention. Multiple-choice, true-false, and matching questions are recognition measures. The
third method of measuring forgetting is re-learning. A relearning measure of retention requires a subject to memorize
information a second time to determine how much time or how many practice trials are saved by having learned it before.
Subjects’ savings scores provide an estimate of their retention. Relearning measures can detect retention that is overlooked by
recognition tests.
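A savings score is straightforward arithmetic: the percentage of the original learning effort (trials or time) saved when relearning. A minimal sketch with hypothetical trial counts:

```python
def savings_score(original_trials: int, relearning_trials: int) -> float:
    """Percentage of the original learning effort saved on relearning:
    100 * (original - relearning) / original."""
    return 100 * (original_trials - relearning_trials) / original_trials

# A list first mastered in 20 trials but relearned in only 5 trials
# yields 75% savings, i.e., substantial retention survived the interval.
print(savings_score(20, 5))
```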
CAUSES OF FORGETTING
Ineffective encoding, a great deal of forgetting may only appear to be forgetting. That’s because the information in question
may never have been inserted into memory in the first place. Since a person can’t really forget something he never learned, this
phenomenon is sometimes called pseudo-forgetting. People usually assume that they know what a penny looks like. Most people,
however, have actually failed to encode this information. Pseudo-forgetting is usually attributable to a lack of attention. Even
when memory codes are formed for new information, subsequent forgetting may be the result of ineffective or inappropriate
encoding. The research on levels of processing shows that some approaches to encoding lead to more forgetting than others.
Phonemic encoding is inferior to semantic encoding for the retention of verbal material. When a person can’t remember the
information that he’s read, his forgetting may be due to ineffective encoding.
Decay, instead of focusing on encoding, decay theory attributes forgetting to the impermanence of memory storage. Decay
theory proposes that forgetting occurs because memory traces fade with time. The implicit assumption is that decay occurs in
the physiological mechanisms responsible for memories. According to decay theory, the mere passage of time produces
forgetting. This notion meshes nicely with common sense views of forgetting. Evidence suggests that decay does contribute to
the loss of information from the sensory and short-term memory stores. However, the critical task for theories of forgetting is
to explain the loss of information from long-term memory. Researchers have not been able to reliably demonstrate that decay
causes LTM forgetting. If decay theory is correct, the principal cause of forgetting should be the passage of time. In studies of
long-term memory, however, researchers have repeatedly found that time passage is not as influential as what happens during
the time interval. Research has shown that forgetting depends not on the amount of time that has passed since learning but on
the amount, complexity, and type of information that subjects have had to assimilate during the retention interval. The negative
impact of competing information on retention is called interference.
Repression, Freud asserted that people often keep embarrassing, unpleasant, or painful memories buried in their unconscious.
For example, a person who was deeply wounded by perceived slights at a childhood birthday party might suppress all recollection
of that party. In his therapeutic work with patients, Freud recovered many such buried memories. He theorized that the
memories were there all along but that their retrieval was blocked by unconscious avoidance tendencies. The tendency to forget
things one doesn’t want to think about is called motivated forgetting, or to use Freud’s terminology, repression. According to Freud, repression is a mental process that automatically hides emotionally threatening or anxiety-producing information in the unconscious,
from which repressed memories cannot be recalled voluntarily, but something may cause them to enter consciousness at a later
time. In Freudian theory, repression refers to keeping distressing thoughts and feelings buried in the unconscious.
Studying for exams by cramming or using rote memory may lead to forgetting because these techniques result in poor retrieval
cues and thus poor encoding or storing. Retrieval cues are mental reminders that we create by forming vivid mental images or
creating associations between new information and information a person already knows. Many students don’t realize that it’s
not how long but how well they study that matters. Effective studying is not only memorizing but also creating good retrieval
cues. The best retrieval cues, which ensure the best encoding, are created by associating new information with information
already learned. For example, instead of just trying to remember that the hippocampus is involved in memory, try to make a
new association, such as a hippo remembering its way around campus.
If a person has to study for several exams and take them on the same day, there is a good chance that he may mix up and forget
some of the material because of interference. Interference, one of the common reasons for forgetting, means that the recall of
some particular memory is blocked or prevented by other related memories. For example, if he is studying for and taking
psychology and sociology tests on the same day, he may find that some of the material on social behavior in psychology is similar
to but different from the material in sociology, and this mix-up will cause interference and forgetting. Psychologists believe that interference between similar material is a common cause of forgetting.
Amnesia, which may be temporary or permanent, is a loss of memory that may occur after a blow or damage to the brain or
after disease, general anesthesia, certain drugs, or severe psychological trauma. Depending on its severity, a blow to the head
causes the soft jellylike brain to crash into the hard skull, and this may result in temporary or permanent damage to thousands
of neurons, which form the communication network of the brain. The reason people who strike their heads during car accidents
usually have no memories of the events immediately before and during the accident is that the brain crashed into the skull,
which interferes with the neurons’ communication network, disrupts memory, and results in varying degrees of amnesia.
Distortion, people may not be aware of the times they misremember something due to memory distortions caused by bias or suggestibility.
Retrieval failure, people often remember things that they were unable to recall at an earlier time. This phenomenon may be
obvious only during struggles with the tip-of-the-tongue phenomenon, but it happens frequently. In fact, a great deal of
forgetting may be due to breakdowns in the process of retrieval. Morris, Bransford, and Franks (1977) gave subjects a list of
words and a task that required either semantic or phonemic processing. Retention was measured with recognition tests that
emphasized either the meaning or the sound of the words. Semantic processing yielded higher retention when the testing stressed
semantic factors, while phonemic processing yielded higher retention when the testing stressed phonemic factors. Thus, retrieval
failures are more likely when a poor fit occurs between the processing done during encoding and the processing invoked by the
measure of retention.
THEORIES OF FORGETTING
Trace decay theory of forgetting
In addition to the interference theory, there is another theory for explaining how people forget information, the decay theory.
Decay theory asserts that information is forgotten because of the gradual disappearance, rather than displacement, of the
memory trace. Thus, decay theory views the original piece of information as gradually disappearing unless something is done to
keep it intact. This view contrasts with interference theory, in which one or more pieces of information block the recall of
another. Decay theory turns out to be exceedingly difficult to test because under normal circumstances, preventing participants
from rehearsing is difficult. Through rehearsal, participants maintain the to-be-remembered information in memory. Usually, participants know that their memory is being tested. They may try to rehearse the information, or they may even inadvertently rehearse it to perform well during testing. However, if the researcher does prevent them from rehearsing, the possibility of interference arises. The task used to prevent rehearsal may interfere retroactively with the original memory. For example, try not to think of white elephants as you read the next two pages. When instructed not to think about them, people actually find it quite difficult not to, and the difficulty persists even when they try to follow the instructions. Unfortunately, as a test of decay theory, this experiment is itself a white elephant because preventing people from rehearsing is so difficult. Despite these difficulties, it is possible to test
decay theory. A research paradigm called the “recent-probes task” has been developed that does not encourage participants to
rehearse the items presented. It is based on the item-recognition task of Sternberg (1966). The recent-probes task proceeds as follows:
(1) Participants are shown four target words.
(2) Next, participants are presented with a probe word.
(3) Participants decide whether or not the probe word is identical to one of the four target words.
If the probe word is not the same as the target words but is identical to a target word from a recent prior set of target words
(“recent negative”), then it will take participants longer to decide that the probe word and target words do not match than if the
probe word is completely new. The response delay, which is usually between 50–100 milliseconds, is a result of the high
familiarity of the probe word. i.e., the recent-probes task elicits clear interference effects. Of interest to researchers is the intertrial
interval (the time between the presentation of one set of target words and subsequent probe), which can easily be varied. After
each set of stimuli, participants have no incentive to rehearse the target words, so the longer the intertrial interval, the more time passes and the more the target words are subject to decay in memory. Thus, if there is memory decay merely as a result of time passing, then recent negative probes in trials with a longer intertrial interval should interfere less with memory performance than recent negative probes in trials with a shorter intertrial interval. This is exactly what researchers have found:
(1) Decay had only a relatively small effect on forgetting in short-term memory.
(2) Interference accounted for most of the forgetting.
So even if both decay and interference contribute to forgetting, it can be argued that interference has the strongest effect.

To conclude, evidence exists for both interference and decay, at least in short-term memory. There is some evidence for decay,
but the evidence for interference is much stronger. For now, people can assume that interference accounts for most of the
forgetting in short-term memory. However, the extent to which the interference is retroactive, proactive, or both is unclear. In
addition, interference also affects material in long-term memory, leading to memory distortion.
Interference theory of forgetting
Interference theory refers to the view that forgetting occurs because the recall of certain words interferes with the recall of other
words. Evidence for interference goes back many years. In one study, participants were asked to recall trigrams (strings of three
letters) at intervals of 3, 6, 9, 12, 15, or 18 seconds after the presentation of the last letter. The investigators used only consonants
so that the trigrams would not be easily pronounceable, for example, “K B F.” After the oral presentation of each trigram, participants counted backward by threes from a three-digit number spoken immediately after the trigram. The purpose of having the participants count backward was to prevent them from rehearsing during the retention interval, that is, the time between the presentation of the last letter and the start of the recall phase of the experimental trial. Under these conditions, recall declined rapidly.
Clearly, the trigram is almost completely forgotten after just 18 seconds if participants are not allowed to rehearse it. Moreover,
such forgetting also occurs when words rather than letters are used as stimuli to be recalled. So, counting backward interfered
with recall from short-term memory, supporting the interference account of forgetting in short-term memory. At that time, it
seemed surprising that counting backward with numbers would interfere with the recall of letters. The previous view had been
that verbal information would interfere only with verbal (words) memory. Similarly, it was thought that quantitative (numerical)
information would interfere only with quantitative memory. At least two kinds of interference figure prominently in
psychological theory and research: retroactive interference and proactive interference.
Retroactive interference (or retroactive inhibition) occurs when newly acquired knowledge impedes the recall of older material.
This kind of interference is caused by activity occurring after people learn something but before people are asked to recall that
thing. The interference in the Brown-Peterson task appears to be retroactive because counting backward by threes occurs after
learning the trigram. It interferes with the ability to remember information that was learned previously.
Proactive interference (or proactive inhibition) occurs when material that was learned in the past impedes the learning of new
material. In this case, the interfering material occurs before, rather than after, learning of the to-be-remembered material. If a
person has studied more than one foreign language, he may have experienced this effect quite intensely. The author studied
French at school and then started learning Spanish when she entered college. Unfortunately, French words found their way into
her Spanish essays unnoticed, and it took her a while to eliminate those French words from her writing in Spanish (proactive
interference). Later, she studied Italian, and because she had not practiced Spanish in a few years, when she formulated Spanish
sentences in a conversation without much time to think, there was a good chance a mixture of Italian and Spanish would emerge
(retroactive interference).
Proactive as well as retroactive interference may play a role in short-term memory. Thus, retroactive interference appears to be
important, but not the only factor impeding memory performance. The amount of proactive interference generally climbs with
increases in the length of time between when the information is presented (and encoded) and when the information is retrieved.
Also, as a person might expect, proactive interference increases as the amount of prior, and potentially interfering, learning
increases. Proactive interference generally has stronger effects in older adults than in younger people.
Proactive interference seems to be associated with activation in the frontal cortex. In particular, it activates Brodmann area 45
in the left hemisphere. In alcoholic patients, proactive interference is seen to a lesser degree than in non-alcoholic patients. This
finding suggests that alcoholic patients have difficulty integrating past information with new information. Thus, alcoholic
patients may have difficulty binding together unrelated items in a list. Taken together, these findings suggest that Brodmann
area 45 is likely involved in the binding of items into meaningful groups. When more information is gathered, an attempt to
relate them to one another can occupy much of the available resources, leaving limited processing ability for new items. All
information does not equally contribute to proactive interference. For instance, if a person is learning a list of numbers, his
performance in learning the list will gradually decline as the list continues. If, however, the list switches to words, his performance
will rebound. This enhancement in performance is known as a release from proactive interference. The effects of proactive
interference appear to dominate under conditions in which recall is delayed. However, proactive and retroactive interference
now are viewed as complementary phenomena.

Some early psychologists recognized the need to study memory retrieval for connected texts and not just for unconnected strings
of digits, words, or nonsense syllables. In one study, participants learned a text and then recalled it. British participants learned
a North American Indian legend called “The War of the Ghosts,” which to them was a strange and difficult-to-understand text.
Participants distorted their recall to render the story more comprehensible to themselves. In other words, their prior knowledge
and expectations had a substantial effect on their recall. Apparently, people bring into a memory task their already existing
schemas, which affect the way in which they recall what they learn. Schemas are mental frameworks that represent knowledge
in a meaningful way. The later work using the Brown-Peterson paradigm confirms the notion that prior knowledge has an
enormous effect on memory, sometimes leading to interference or distortion.
Displacement theory of forgetting
This theory fits into the multi-store model of memory and is an explanation of why forgetting occurs in STM, and why information in STM does not always transfer into LTM. It says that information in STM is displaced because of the store's limited capacity (about nine items or fewer): incoming information replaces the information already being held in STM. The primacy and recency effects, which form part of the multi-store model, also apply to forgetting in STM.
Another method often used for determining the causes of forgetting involves the serial-position curve. The serial-position curve
represents the probability of recall of a given word, given its serial position (order of presentation) in a list. Suppose that a person
is presented with a list of words and is asked to recall them.
The recency effect refers to the superior recall of words at and near the end of a list. The primacy effect refers to the superior
recall of words at and near the beginning of a list. Both the recency effect and the primacy effect seem to influence recall. The
serial-position curve makes sense in terms of interference theory. Words at the end of the list are subject to proactive but not
to retroactive interference. Words at the beginning of the list are subject to retroactive but not to proactive interference. And
words in the middle of the list are subject to both types of interference. Therefore, the recall would be expected to be the poorest
in the middle of the list. Indeed, it is the poorest. Primacy and recency effects can also be encountered in everyday life.
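The interference account of the serial-position curve can be expressed as a small rule. The sketch below paraphrases the text using 0-based positions; it is not a fitted model of recall probability:

```python
# Which interference types act on a word, given its position in the list?
# Earlier words create proactive interference; later words create retroactive.

def interference_types(position, list_length):
    types = []
    if position > 0:
        types.append("proactive")     # words presented before this one
    if position < list_length - 1:
        types.append("retroactive")   # words presented after this one
    return types

n = 10
print(interference_types(0, n))       # ['retroactive'] -> primacy effect
print(interference_types(n - 1, n))   # ['proactive']   -> recency effect
print(interference_types(n // 2, n))  # both types -> poorest recall in middle
```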
An example of the displacement theory of forgetting involves grocery lists. As a person walks out the door to the grocery store,
his partner tells him that he needs to buy “milk, eggs, cheese, flour, and sugar.” He tries to memorize the whole list, but a lot of
it goes away while he’s driving to the store. When he arrives, all he can remember is “milk” and “sugar”.
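A pure displacement account behaves like a fixed-capacity, first-in-first-out buffer, as this sketch shows (the capacity of 4 is an arbitrary illustrative choice, not a claim about actual STM span). Note that under pure displacement the earliest item, "milk", would be lost first; its survival in the anecdote reflects the primacy effect rather than displacement alone.

```python
from collections import deque

# Model STM as a buffer holding at most 4 items; appending a fifth item
# displaces the oldest one.
stm = deque(maxlen=4)
for item in ["milk", "eggs", "cheese", "flour", "sugar"]:
    stm.append(item)

print(list(stm))  # ['eggs', 'cheese', 'flour', 'sugar'] -- 'milk' displaced
```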
Retrieval failure theory of forgetting
Endel Tulving, a cognitive psychologist, developed the retrieval failure theory of forgetting in 1974. He believed that forgetting occurs when an individual fails to retrieve information from memory: the information stored in long-term memory is not lost, but it cannot be recalled at the given moment. The best example of this theory is the tip-of-the-tongue phenomenon, in which a word is known but cannot be remembered, and it feels as if the word is stuck at the tip of the tongue. There are two main reasons for failure in memory retrieval. The first is an encoding failure, in which the information never made it into long-term memory in the first place. The second is a retrieval failure, in which people cannot access the information because of a lack of retrieval cues.
A retrieval cue is a trigger that helps to remember something. When people create a new memory, they also retain elements of
the situation in which the event occurred. These elements will later serve as retrieval cues. Information is more likely to be
retrieved from long-term memory with the help of relevant retrieval cues. Conversely, retrieval failure or cue-dependent
forgetting may occur when people can’t access memory cues. Semantic cues are associations with other memories. For example, people might have forgotten everything about a trip taken years ago until they remember visiting a friend in that place. This cue then allows them to recollect further details about the trip.
State-dependent cues are related to the psychological state at the time of the experience, such as being very anxious or extremely happy. Finding ourselves in a similar state of mind may help us retrieve some old memories. Context-dependent cues, in contrast, are environmental factors such as sounds, sights, and smells. For instance, witnesses are often taken back to the
crime scene that contains environmental cues from when the memory was formed. These cues can help recollect the details of
the crime.
Consolidation theory of forgetting
Theorised by Georg Müller and Alfons Pilzecker in 1900, this theory is based on physiological evidence and focuses on the physiological aspects of forgetting. Memory consolidation is the critical process of stabilizing a memory and making it less susceptible to disruption. Once it is consolidated, a memory is moved from short-term to more permanent long-term storage and becomes much more resistant to forgetting. Consolidation theory dismisses retrieval theory as the major factor causing forgetting; according to the former, memory is not lost through competition between relevant information during retrieval. Neural processes following the initial recording of information contribute to a longer-lasting record of this information, strengthening memory traces. Consolidation thus refers to the gradual stabilization of a piece of information after its acquisition; a new memory therefore needs time to stabilize.
MNEMONICS AND MEMORY CODES
Mnemonic methods are ways to improve encoding and create better retrieval cues by forming vivid associations or images,
which improve recall. Many mnemonic devices, such as acrostics and acronyms, are designed to make abstract material more
meaningful. Other mnemonic devices depend on visual imagery. Allan Paivio (1986, 2007) believes that visual images create a
second memory code and that two codes are better than one.
Acrostics and acronyms. Acrostics are phrases (or poems) in which the first letter of each word (or line) functions as a cue to
help a person recall information to be remembered. For instance, he may remember the order of musical notes with the saying
“every good boy does fine.” A slight variation on acrostics is the acronym, a word formed out of the first letters of a series of
words. Students memorizing the order of colors in the light spectrum often store the name “Roy G. Biv” to remember red,
orange, yellow, green, blue, indigo, and violet. Notice that this acronym also takes advantage of the principle of chunking.
Acrostics and acronyms that individuals create for themselves can be effective memory tools.
Rhymes. Another verbal mnemonic that people often rely on is rhyming. A person has probably repeated “I before E except after C…” many times. Perhaps he also remembers the number of days in each month with the old standby “thirty days has September…”.
Rhyming something to remember it is an old and useful trick.
Link method. The link method is a mnemonic that relies on the power of imagery. It involves forming a mental image of items
to be remembered in a way that links them together. For instance, suppose that he needs to remember some items to pick up
at the drugstore: a news magazine, shaving cream, film, and pens. To remember these items, he might visualize a public figure
on the magazine cover shaving with a pen while being photographed. The more bizarre he makes his image, the more helpful
it’s likely to be.
The method of loci. This mnemonic involves taking an imaginary walk along a familiar path where images of items to be remembered are
associated with certain locations. The first step is to commit to memory a series of loci, or places along a path. Usually, these
loci are specific locations in one's home or neighborhood. Then the person envisions each item he wants to remember in one of these locations, forming distinctive, vivid images. When he needs to remember the items, he imagines himself walking along the path.
The various loci on his path should serve as cues for the retrieval of the images that he formed. Evidence suggests that the
method of loci can be effective in increasing retention. Moreover, this method ensures that items are remembered in their
correct order because the order is determined by the sequence of locations along the pathway. A recent study found that using
loci along a pathway from home to work was more effective than a pathway through one’s home.
Another useful mnemonic device for memorizing a long list, especially in the exact order, is the peg method. The peg method
is an encoding technique that creates associations between number-word rhymes and items to be memorized.
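A minimal sketch of the peg method, assuming the common "one is a bun, two is a shoe…" rhyme scheme; the function name and shopping items are illustrative, not part of any standard materials:

```python
# Pair each list item with its rhyming peg word to build an image cue.
PEGS = {1: "bun", 2: "shoe", 3: "tree", 4: "door", 5: "hive"}

def peg_images(items):
    """Return one vivid-image cue per item, keyed to its numbered peg."""
    return [f"{n}: imagine the {item} interacting with a {PEGS[n]}"
            for n, item in enumerate(items, start=1)]

for cue in peg_images(["magazine", "shaving cream", "film"]):
    print(cue)
```

Because each item is tied to a numbered peg, the list can be recalled in exact order, or even accessed directly by position.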
DISTORTION OF MEMORY
People have tendencies to distort their memories. For example, just saying something has happened to a person makes him
more likely to think it really happened. This is true whether the event happened or not. These distortions tend to occur in seven
specific ways, which Schacter (2001) refers to as the “seven sins of memory.”
(1) Transience, memory fades quickly. For example, although most people know that O. J. Simpson was acquitted of
criminal charges in the murder of his wife, they do not remember how they found out about his acquittal. At one time
they could have said, but they no longer can.
(2) Absent-mindedness, people sometimes brush their teeth after already having brushed them or enter a room looking for
something only to discover that they have forgotten what they were seeking.
(3) Blocking, people sometimes have something that they know they should remember, but they can’t. It’s as though the
information is on the tip of their tongue, but they cannot retrieve it. For example, people may see someone they know,
but the person’s name escapes them; or they may try to think of a synonym for a word, knowing that there is an obvious
synonym, but are unable to recall it.
(4) Misattribution, people often cannot remember where they heard what they heard or read what they read. Sometimes
people think they saw things they did not see or heard things they did not hear. For example, eyewitness testimony is
sometimes clouded by what we think we should have seen, rather than what we actually saw.
(5) Suggestibility, people are susceptible to suggestion, so if it is suggested to them that they saw something, they may think
they remember seeing it. For example, in one study, when asked whether they had seen a television film of a plane
crashing into an apartment building, many people said they had seen it. There was no such film.
(6) Bias, people often are biased in their recall. For example, people who currently are experiencing chronic pain in their
lives are more likely to remember the pain in the past, whether or not they actually experienced it. People who are not
experiencing such pain are less likely to recall pain in the past, again with little regard for their actual past experience.
(7) Persistence, people sometimes remember things as consequential that, in a broad context, are inconsequential. For
example, someone with many successes but one notable failure may remember the single failure better than the many
successes.
Memory distortions are studied in several specific ways; two research areas that investigate memory distortion are eyewitness testimony and repressed memories.
The eyewitness testimony paradigm. A survey of U.S. prosecutors estimated that about 77,000 suspects are arrested each
year after being identified by eyewitnesses. Eyewitness testimony may be the most common source of wrongful convictions in
the United States.
Consider the story of a man named Timothy. In 1986, Timothy was convicted of brutally murdering a mother and her two
young daughters. He was then sentenced to die, and for 2 years and 4 months, Timothy lived on death row. Although the
physical evidence did not point to Timothy, eyewitness testimony placed him near the scene of the crime at the time of the
murder. Subsequently, it was discovered that a man who looked like Timothy was a frequent visitor to the neighborhood of the
murder victims. Timothy received a second trial and was acquitted.
There are serious potential problems of wrongful conviction when using eyewitness testimony as the sole, or even the primary,
basis for convicting accused people of crimes. Moreover, eyewitness testimony is often a powerful determinant of whether a
jury will convict an accused person. The effect is particularly pronounced if eyewitnesses appear highly confident in their
testimony. This is true even if the eyewitnesses can provide few perceptual details or offer apparently conflicting responses.
People sometimes even think they remember things simply because they have imagined or thought about them. It has been
estimated that as many as 10,000 people per year may be convicted wrongfully on the basis of mistaken eyewitness testimony.
In general, people are remarkably susceptible to mistakes in eyewitness testimony. They are generally prone to imagine that they
have seen things they have not seen.
Some of the strongest evidence for the constructive nature of memory has been obtained by those who have studied the validity
of eyewitness testimony. In a now-classic study, participants saw a series of 30 slides in which a red Datsun drove down a street,
stopped at a stop sign, turned right, and then appeared to knock down a pedestrian crossing at a crosswalk. Afterward,
participants were asked a series of 20 questions, one of which referred either to correct information (the stop sign) or incorrect
information (a yield sign instead of the stop sign). In other words, the information in the question given to this second group
was inconsistent with what the participants had seen. Later, after engaging in an unrelated activity, all participants were shown
two slides and asked which they had seen. One had a stop sign, the other had a yield sign. Accuracy on this task was 34% better
for participants who had received the consistent question (stop sign question) than for participants who had received the
inconsistent question (yield sign question).
Loftus’ eyewitness testimony experiment and other experiments have shown people’s great susceptibility to distortion in
eyewitness accounts. This distortion may be due, in part, to phenomena other than just constructive memory. But it does show
that people easily can be led to construct a memory that is different from what really happened. As an example, a person might
have had a disagreement with a roommate or a friend regarding an experience in which both of them were in the same place at
the same time. But what each of them remembers about the experience may differ sharply. And both of them may feel that they are truthfully and accurately recalling what happened.
(1) Questions do not have to be suggestive to influence the accuracy of eyewitness testimony. Line-ups also can lead to
faulty conclusions. Eyewitnesses assume that the perpetrator is in the line-up. This is not always the case, however.
When the perpetrator of a staged crime was not in a line-up, participants were susceptible to naming someone other
than the true perpetrator as the perpetrator. In this way, they believed they were able to recognize someone in the line-
up as having committed the crime. The identities of the non-perpetrators in the line-up also can affect judgments. In
other words, whether a given person is identified as a perpetrator can be influenced simply by who the others are in the
line-up. So, the choice of the “distracter” individuals is important. Police may inadvertently affect the likelihood of
whether or not an identification occurs and also whether a false identification is likely to occur.
(2) Confessions also influence the testimony of eyewitnesses. A study by Hasel and Kassin (2009) had participants view a
staged robbery. Afterward, the participants were presented with a line-up of suspects and were given the opportunity
to identify the robber (although the actual perpetrator was not among them). Sometime later, the participants were
informed that one of the suspects in the line-up had made a confession. In all, 61% of those who had made a selection
previously changed their identifications, and 50% of those who had not made an identification went on to positively
identify the confessor. This finding shows what a grave impact a confession has on the identification of a perpetrator.
(3) Likewise, feedback to eyewitnesses affected participants’ testimony. Telling them that they had identified the perpetrator
made them feel more secure in their choice, whereas the feedback that they had identified a filler person made them
back away from their judgment immediately. This phenomenon is called the post-identification feedback effect.
(4) Eyewitness identification is particularly weak when identifying people of a racial or ethnic group other than that of the
witness. Evidence suggests that this weakness is not a problem remembering stored faces of people from other racial
or ethnic groups, but rather, a problem of accurately encoding their faces.
(5) Eyewitness identification and recall are also affected by the witness’s level of stress. As stress increases, the accuracy of
both recall and identification declines. These findings further call into question the accuracy of eyewitness testimony
because most crimes occur in highly stressful situations.
Whatever may be the validity of eyewitness testimony for adults, it clearly is suspect for children. Children’s recollections are
particularly susceptible to distortion. Such distortion is especially likely when the children are asked leading questions, as in a
courtroom setting. Consider some relevant facts.
(1) The younger the child is, the less reliable the testimony of that child can be expected to be. In particular, children of
preschool age are much more susceptible to suggestive questioning that tries to steer them to a certain response than
are school-age children or adults.
(2) When a questioner is coercive or even just seems to want a particular answer, children can be quite susceptible to
providing the adult with what he or she wants to hear. Given the pressures involved in court cases, such forms of
questioning may be unfortunately prevalent. For instance, when asked a yes-or-no question, even if they don’t know
the answer, most children will give an answer. If the question has an explicit “I don’t know” option, most children,
when they do not know an answer, will admit they do not know, rather than speculate.
(3) Children may believe that they recall observing things that others have said they observed. In other words, they hear a
story about something that took place and then believe that they have observed what allegedly took place. If the child
has some intellectual disability, memory of the event is even more likely to be distorted, at least when a significant delay
has occurred between the time of the event and the time of recall.
A study in the United Kingdom has found that when giving eyewitness testimony, children are also easily impressed by the
presence of uniformed officers. When having to identify an individual in a line-up after having witnessed a staged incident,
children made significantly more mistakes when a uniformed officer was present. Therefore, perhaps even more so than the
eyewitness testimony of adults, the testimony of children must be interpreted with great caution.
Steps can be taken to enhance eyewitness identification (e.g., using methods to reduce potential biases, reduce the pressure to
choose a suspect from a limited set of options, and ensure that each member of an array of suspects fits the description given
by the eyewitness, yet offers diversity in other ways). Moreover, suggestive interviews can cause biases in memory. This problem
is especially likely to occur when these interviews take place close in time to the actual event. After a crime, witnesses are
generally interviewed as soon as possible. Therefore, steps must be taken to ensure that the questions asked of witnesses are not
leading questions, especially when the witness is a child. This caution can decrease the likelihood of distortion of memory.
Gary Wells (2006) made several suggestions to improve identification accuracy in line-ups. These suggestions include presenting
only one suspect per line-up so that witnesses do not feel like they have to decide between several people they saw; making sure
that all people in the line-up are reasonably similar to each other to decrease the chance that somebody is identified mistakenly,
just because he or she happens to share one characteristic with the suspected perpetrator that no one else in the line-up shares;
and cautioning witnesses that the suspect may not be in the line-up at all.
In addition, some psychologists and many defense attorneys believe that jurors should be advised that the degree to which the
eyewitness feels confident of her or his identification does not necessarily correspond to the degree to which the eyewitness is
actually accurate in her or his identification of the defendant as being the culprit. At the same time, some psychologists and
many prosecutors believe that the existing evidence, based largely on simulated eyewitness studies rather than on actual
eyewitness accounts, is not strong enough to risk attacking the credibility of eyewitness testimony when such testimony might
send a true criminal to prison, preventing the person from committing further crimes.
Repressed memories, some psychotherapists have begun using hypnosis and related techniques to elicit from people what are
alleged to be repressed memories. Repressed memories are memories that are alleged to have been pushed down into
unconsciousness because of the distress they cause. Such memories, according to the view of psychologists who believe in their
existence, are very inaccessible, but they can be dredged out. However, although people may be able to forget terrible events
that happened to them, there is only dubious support for the notion that clients in psychotherapy often are unaware of having been abused as children. Many psychologists strongly doubt the existence of repressed memories. Others are at least highly skeptical. There are
many reasons for this skepticism.
(1) Some therapists may inadvertently plant ideas in their clients’ heads. In this way, they may inadvertently create false
memories of events that never took place. Indeed, creating false memories is relatively easy, even in people with no
particular psychological problems. Such memories can be implanted by using ordinary, non-emotional stimuli.
(2) Showing that implanted memories are false is often extremely hard to do. Reported incidents often end up, as in the
case of childhood sexual abuse, merely pitting one person’s word against another. At the present time, no compelling
evidence points to the existence of such memories. But psychologists also have not reached the point where their
existence can be ruled out definitively. Therefore, no clear conclusion can be reached at this time.
The Roediger-McDermott (1995) paradigm, which is adapted from the work of Deese (1959), is able to show the effects of
memory distortion in the laboratory. Participants receive a list of 15 words strongly associated with a critical but non-presented
word. For example, the participants might receive 15 words strongly related to the word sleep but never receive the word sleep.
The recognition rate for the non-presented word was comparable to that for presented words. This result has been replicated
multiple times. Even when shorter lists were used, there was an increased level of false recognition for non-presented items. In
one experiment, lists as short as three items revealed this effect, although to a lesser degree. Embedding the list in a story can
increase this effect in young children. This strategy strengthens the shared context and increases the probability of a participant
falsely recognizing the non-presented word.
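The scoring logic of a recognition test in this paradigm can be sketched as follows. The studied words and the function are illustrative stand-ins, not Roediger and McDermott's actual materials:

```python
# Score a recognition response; a confident "old" response to the critical
# lure is the false-memory effect of interest.
studied = {"bed", "rest", "awake", "tired", "dream", "snooze", "pillow"}
critical_lure = "sleep"   # strongly associated with every studied word

def score_response(word, said_old):
    """Classify one recognition response against the studied list."""
    if said_old:
        if word in studied:
            return "hit"
        if word == critical_lure:
            return "false alarm (lure)"
        return "false alarm (new)"
    return "rejection"

print(score_response("pillow", said_old=True))  # hit
print(score_response("sleep", said_old=True))   # false alarm (lure)
```

The key empirical finding is that the rate of "false alarm (lure)" responses approaches the hit rate for words that actually were presented.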
AMNESIA
Many people are familiar with the concept of repression, a type of psychologically motivated forgetting in which a person
supposedly cannot remember a traumatic event. Other people suffer from memory disorders collectively known as amnesia: the temporary or permanent, partial or complete loss of the ability to memorize or recall information stored in memory. Amnesia
can result from damage either to the hippocampal system (which includes the hippocampus and amygdala) or to the closely
related midline diencephalic region.
There are two forms of severe loss of memory disorders caused by problems in the functioning of the memory areas of the
brain. These problems can result from concussions, brain injuries brought about by trauma, alcoholism (Korsakoff’s syndrome),
or disorders of the aging brain, oxygen deprivation, blockage of arteries through a stroke, the herpes simplex encephalitis virus,
a closed head injury, Alzheimer’s disease, etc.

Retrograde amnesia
People who are in accidents in which they received a head injury often are unable to recall the accident itself. Sometimes they
cannot remember the last several hours or even days before the accident. This type of amnesia (literally, “without memory”) is
called retrograde amnesia, which is the loss of memory from the point of injury backward or loss of memory for the past, which
means that the person can't recall memories that were formed before the event that caused the amnesia. It usually affects recently
stored past memories, not memories from years ago. What apparently happens in this kind of memory loss is that the
consolidation process, which was busy making the physical changes to allow new memories to be stored, gets disrupted and
loses everything that was not already nearly “finished”.
All memories that were in the process of being stored, but were not yet permanent, are lost. One therapy for severe depression, electroconvulsive therapy (ECT), has been used for many decades. One of its common side effects is loss of memory, specifically retrograde amnesia. While the effects of the induced seizure seem to significantly ease the depression, the shock also seems to disrupt the memory consolidation process for memories formed prior to the treatment. While some researchers in the past found that memory loss can go back as far as three years for certain kinds of information, later research suggests that the loss may not be permanent.
Anterograde amnesia
Concussions can also cause a second, often more temporary, kind of amnesia. This kind of amnesia is called
anterograde amnesia, or the loss of memories from the point of injury or illness forward. People with this kind of amnesia have
difficulty remembering anything new. It is also the kind of amnesia most often seen in people with senile dementia, a mental disorder
in which severe forgetfulness, mental confusion, and mood swings are the primary symptoms. Dementia patients may also suffer
from retrograde amnesia in addition to anterograde amnesia. If retrograde amnesia is like losing a document in the computer
because of a power loss, anterograde amnesia is like discovering that the hard drive has become defective. The features are:
(1) Anterograde amnesia affects LTM but not working memory
(2) Anterograde amnesia affects memory regardless of the modality; that is, regardless of whether the information is visual,
auditory, kinesthetic, olfactory, gustatory, or tactile
(3) Anterograde amnesia spares memory for general knowledge (acquired well before the onset of amnesia) but grossly
impairs recall for new facts and events
(4) Anterograde amnesia spares skilled performance
V. LEARNING
DEFINITION AND CONCEPT OF LEARNING
Psychologists define learning as any relatively permanent change in behavior, or behavior potential, produced by experience.
Several aspects of this definition are noteworthy. First, the term learning does not apply to temporary changes in behavior such
as those stemming from fatigue, drugs, or illness. Second, it does not refer to changes resulting from maturation, the fact that
people change in many ways as they grow and develop. Third, learning can result from vicarious as well as from direct
experiences; in other words, people can be affected by observing events and behavior in the environment as well as by participating
in them. Finally, the changes produced by learning are not always positive in nature. What has been learned can be unlearned or
relearned.
People are as likely to acquire bad habits as good ones. There can be no doubt that learning is a key process in human behavior.
Indeed, learning appears to play an important role in virtually every activity people perform, from mastering complex skills to
falling in love. Although the effects of learning are diverse, many psychologists believe that learning occurs in several basic
forms: classical conditioning, operant conditioning, and observational learning.
Criteria of Learning
One criterion is that learning involves change, in behavior or in the capacity for behavior. People learn when they become
capable of doing something different. At the same time, people must remember that learning is inferential. They do not observe
learning directly but rather its products or outcomes. Learning is assessed based on what people say, write, and do. But people
also add that learning involves a changed capacity to behave in a given fashion because it is not uncommon for people to learn
skills, knowledge, beliefs, or behaviors without demonstrating them at the time learning occurs.
A second criterion is that learning endures over time. This excludes temporary behavioral changes (e.g., slurred speech) brought
about by such factors as drugs, alcohol, and fatigue. Such changes are temporary because when the cause is removed, the
behavior returns to its original state. But learning may not last forever because forgetting occurs. It is debatable how long
changes must last to be classified as learned, but most people agree that changes of brief duration (e.g., a few seconds) do not
qualify as learning.
A third criterion is that learning occurs through experience (e.g., practice, observation of others). This criterion excludes
behavioral changes that are primarily determined by heredity, such as maturational changes in children (e.g., crawling, standing).
Nonetheless, the distinction between maturation and learning often is not clear-cut. People may be genetically predisposed to
act in given ways, but the actual development of particular behaviors depends on the environment.
The Occurrence of Learning
Behavioral and cognitive theories agree that differences among learners and in the environment can affect learning, but they
diverge in the relative emphasis they give to these two factors. Behavioral theories stress the role of the environment, specifically,
how stimuli are arranged and presented and how responses are reinforced. Behavioral theories assign less importance to learner
differences than cognitive theories. Two learner variables that behavioral theories consider are reinforcement history (the extent
to which the individual was reinforced in the past for performing the same or similar behavior) and developmental status (what
the individual is capable of doing given his or her present level of development). Thus, cognitive handicaps will hinder the
learning of complex skills, and physical disabilities may preclude the acquisition of motor behaviors. Cognitive theories
acknowledge the role of environmental conditions as influences on learning.
Types of Learning
(1) Habituation, in which people learn to stop paying attention to a continuous environmental stimulus that never changes. The
reaction to that particular stimulus changes, i.e., the person stops reacting or attending to it. For example, the tick-tick sound
of a clock. When the stimulus is weak, of little biological significance, and repeated, habituation is rapid.
(2) Sensitization, in which people become more responsive to a stimulus they encounter repeatedly, i.e., respond to it with
higher intensity. For example, the bell ringing at the end of school hours. When the stimulus is strong and of
substantial significance (food, an electric shock), sensitization increases.
(3) Conditioning, a pairing of stimuli in which one stimulus acquires the capacity of the other, or the acquisition of a
particular behavior in the presence of a particular environmental stimulus. The types of conditioning are classical
conditioning and operant conditioning.
Types of Behavioral Learning
(1) Classical conditioning; a neutral stimulus is associated with a natural response, i.e., is a kind of learning in which
a neutral stimulus acquires the ability to produce a response that was originally produced by a different stimulus.
(2) Operant conditioning; a response is increased or decreased due to reinforcement and punishment. i.e., refers
to a kind of learning in which the consequences that follow some behavior increase or decrease the likelihood
of that behavior’s occurrence in the future.
(3) Observational learning; learning occurs through observation and imitation of others. i.e., is a kind of learning
that involves mental processes, such as attention and memory; may be learned through observation or
imitation; and may not involve any external rewards or require the person to perform any observable behaviors.
CLASSICAL CONDITIONING
One of the most significant was the work of Ivan Petrovich Pavlov (1849–1936), a Russian physiologist who won the Nobel
Prize in 1904 for his work on digestion. Pavlov’s legacy to learning theory was his work on classical conditioning. Pavlov noticed
that dogs often would salivate at the sight of the attendant bringing them food or even at the sound of the attendant’s footsteps.
Pavlov realized that the attendant was not a natural stimulus for the reflex of salivating; rather, the attendant acquired this power
by being associated with food.
Classical conditioning is a multistep procedure that initially involves presenting an unconditioned stimulus, which elicits an
unconditioned response. Pavlov presented a hungry dog with meat powder (UCS), which would cause the dog to salivate (UCR).
To condition the animal requires repeatedly presenting an initially neutral stimulus for a brief period before presenting the UCS.
Pavlov often used a ticking metronome as the neutral stimulus. In the early trials, the ticking of the metronome produced no
salivation. Eventually, the dog salivated in response to the ticking metronome prior to the presentation of the meat powder.
The metronome had become a conditioned stimulus that elicited a conditioned response similar to the original UCR.
Basic process in Pavlovian Conditioning
Before conditioning: Food (UCS) → Salivation (UCR); Metronome tone (NS) → No salivation
During conditioning: Metronome tone (NS) + Food (UCS) → Salivation (UCR)
After conditioning: Metronome tone (CS) → Salivation (CR)
Studying the digestive system in his dogs, Pavlov had built a device that would accurately measure the amount of saliva produced
by the dogs when they were fed a measured amount of food. Normally, when food is placed in the mouth of any animal, the
salivary glands automatically start releasing saliva to help with chewing and digestion. This is a normal reflex, an unlearned,
involuntary response that is not under personal control or choice, one of many that occur in both animals and humans. The
food causes a particular reaction, the salivation. After a number of trials of hearing a tone paired with food, the dog salivated at
the sound of the tone alone, a phenomenon that Pavlov called a conditioned reflex and today is called classical conditioning, which
is a kind of learning in which a neutral stimulus acquires the ability to produce a response that was originally produced by a
different stimulus. Classical conditioning was an important discovery because it allowed researchers to study learning in an
observable, or objective, way. A stimulus can be defined as any object, event, or experience that causes a response, the reaction
of an organism. In the case of Pavlov’s dogs, the food is the stimulus while salivation is the response. Pavlov eventually identified
several basic steps that must be present and experienced in a particular way for conditioning to take place.
(1) An unconditioned stimulus (UCS), a naturally occurring stimulus that leads to an involuntary and unlearned
response. i.e., it is a stimulus that evokes an unconditioned response without previous conditioning. In the case of
Pavlov’s dogs, the food is the unconditioned stimulus.
(2) An unconditioned response (UCR), an involuntary and unlearned response to a naturally occurring or unconditioned
stimulus, i.e., is an unlearned reaction to an unconditioned stimulus that occurs without previous conditioning. In
Pavlov’s experiment, the salivation to the food is the unconditioned response.
(3) A neutral stimulus (NS), a stimulus that has no effect on the desired response prior to conditioning. In the experiment,
the metronome tone was the neutral stimulus.
(4) A conditioned stimulus (CS), a previously neutral stimulus that becomes able to produce a conditioned response,
after pairing with an unconditioned stimulus, i.e., is a previously neutral stimulus that has, through conditioning,
acquired the capacity to evoke a conditioned response. When a previously neutral stimulus, through repeated pairing
with the unconditioned stimulus, begins to cause the same kind of involuntary response, learning has occurred, which
happened when the metronome tone was paired with the food to produce the salivation.
(5) A conditioned response (CR), a learned response to a conditioned stimulus, i.e., is a learned reaction to a conditioned
stimulus that occurs because of previous conditioning. Here, the presence of tone alone produced salivation.
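The five-step pairing process above can be sketched as a toy simulation. This is only an illustrative associative-strength model (the linear update rule, the 0.3 learning rate, and the 0.5 response threshold are assumptions for the example, not Pavlov's own formalism):

```python
# Toy simulation of Pavlovian acquisition: repeated NS + UCS pairings
# gradually give the neutral stimulus the power to elicit the response.
# The update rule and all numeric parameters are illustrative assumptions.

def run_pairings(trials, rate=0.3):
    """Return the tone's associative strength after each NS + UCS pairing."""
    strength = 0.0          # the tone starts neutral: no salivation
    history = []
    for _ in range(trials):
        # each pairing moves strength a fraction of the way toward 1.0
        strength += rate * (1.0 - strength)
        history.append(round(strength, 3))
    return history

def elicits_salivation(strength, threshold=0.5):
    """Once strength passes a threshold, the tone alone evokes the CR."""
    return strength >= threshold

curve = run_pairings(10)
print("before conditioning:", elicits_salivation(0.0))      # False: NS -> no salivation
print("after 10 pairings:", elicits_salivation(curve[-1]))  # True: CS -> CR
```

The learning curve this produces (rapid early gains that level off) mirrors the negatively accelerated acquisition curves typically reported for conditioning.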
Basic terms in Pavlovian Conditioning
Acquisition (forming new responses), the initial stage of learning something. Pavlov theorized that the acquisition of a conditioned
response depends on stimulus contiguity, or the occurrence of stimuli together in time and space. Stimulus contiguity is
important, but learning theorists now realize that contiguity alone doesn’t automatically produce conditioning. In Pavlov’s dogs,
acquisition occurred when the dog learned a new response: salivating to the sound of the metronome tone.
Extinction (weakening conditioned responses), the gradual weakening and disappearance of a conditioned response tendency,
i.e., refers to a procedure in which a conditioned stimulus is repeatedly presented without the unconditioned stimulus and, as a
result, the conditioned stimulus tends to no longer elicit the conditioned response. It occurs in classical conditioning when the
conditioned stimulus is consistently presented alone, without the unconditioned stimulus. For example, when Pavlov consistently
presented only the tone to a previously conditioned dog, the tone gradually lost its capacity to elicit the response of salivation.
Spontaneous recovery (resurrecting responses), the reappearance of an extinguished response after a period of non-exposure
to the conditioned stimulus. Pavlov (1927) observed this phenomenon in some of his early studies. He fully extinguished a dog’s
CR of salivation to a tone and then returned the dog to its home cage for a “rest interval” (a period of non-exposure to the CS).
At a later date, when the dog was brought back to the chamber for retesting, the tone was sounded and the salivation response
reappeared. Although the response had returned, the amount of salivation was less than it had been at its peak strength. If
Pavlov consistently presented the CS by itself again, the response re-extinguished quickly. More recent studies have uncovered
a related phenomenon called the renewal effect: if a response is extinguished in a different environment than the one where it was
acquired, the extinguished response will reappear if the animal is returned to the original environment where the acquisition
took place. This phenomenon, along with the evidence on spontaneous recovery, suggests that extinction somehow suppresses
a conditioned response rather than erasing a learned association. In other words, extinction does not appear to lead to unlearning.
Stimulus generalization occurs when an organism that has learned a response to a specific stimulus responds in the same
way to new stimuli that are similar to the original stimulus. i.e., after conditioning has occurred, organisms often show a tendency
to respond not only to the exact CS used but also to other, similar stimuli. For example, Pavlov’s dogs might have salivated in
response to a different-sounding tone. Generalization is adaptive given that organisms rarely encounter the exact same stimulus
more than once. Stimulus generalization is also commonplace. The likelihood and amount of generalization to a new stimulus
depend on the similarity between the new stimulus and the original CS. The basic law governing generalization is this: The more
similar new stimuli are to the original CS, the greater the generalization. Once a dog is conditioned to salivate in response to a
metronome ticking, it also may salivate in response to a metronome ticking faster or slower, as well as to ticking clocks or timers.
John B. Watson, the founder of behaviorism, conducted an influential early study of generalization. Watson and a colleague,
Rosalie Rayner, examined the generalization of conditioned fear in an 11-month-old boy, known in the annals of psychology as
“Little Albert.” Like many babies, Albert was initially unafraid of a live white rat. Then Watson and Rayner (1920) paired the
presentation of the rat with a loud, startling sound (made by striking a steel gong with a hammer). Albert did show fear in
response to the loud noise. After seven pairings of the rat and the gong, the rat was established as a CS eliciting a fear response.
Five days later, Watson and Rayner exposed the youngster to other stimuli that resembled the rat in being white and furry. They
found that Albert’s fear response generalized to a variety of stimuli, including a rabbit, a dog, a fur coat, and Watson’s hair.
Stimulus discrimination occurs when an organism that has learned a response to a specific stimulus does not respond in the
same way to new stimuli that are similar to the original stimulus. It is just the opposite of stimulus generalization. Once a dog is
conditioned to salivate in response to a metronome ticking, it will not salivate in response to ticking clocks or timers, as it
discriminates the sound of the metronome from that of the clock. Like generalization, discrimination is
adaptive in that an animal’s survival may hinge on its being able to distinguish friend from foe, or edible from poisonous food.
Organisms can gradually learn to discriminate between an original CS and similar stimuli if they have adequate experience with
both. As with generalization, a basic law governs discrimination: The less similar new stimuli are to the original CS, the greater
the likelihood (and ease) of discrimination. Conversely, if a new stimulus is quite similar to the original CS, discrimination will
be relatively hard to learn.
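The two laws above (more similar stimuli produce greater generalization; less similar stimuli are easier to discriminate) can be illustrated with a toy generalization gradient. The similarity scale and the squared falloff are arbitrary illustrative choices, not empirical constants:

```python
# Illustrative generalization gradient: the conditioned response evoked by
# a new stimulus grows with its similarity to the original CS.
# Similarity is a made-up number in [0, 1]; the exponent is arbitrary.

def generalized_response(cr_strength, similarity):
    """CR strength evoked by a new stimulus with the given similarity to the CS."""
    return cr_strength * similarity ** 2

cr = 1.0  # full salivation to the original metronome tone
print(generalized_response(cr, 0.9))  # a slightly faster metronome: strong CR
print(generalized_response(cr, 0.3))  # a ticking clock: weak CR, easy to discriminate
```

A highly similar stimulus evokes nearly the full response (generalization), while a dissimilar one evokes little response, which is what makes discrimination between them easy to learn.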
Pavlovian Conditioning Procedures

(1) Delayed conditioning, the procedure used with Pavlov's dogs: the tone comes on and stays on until the food is
brought to the dogs.
(2) Trace conditioning, in which the tone comes on and then goes off for a fixed amount
of time before the meat is delivered.
(3) Simultaneous conditioning, in which the tone and the food are presented at the same time.
(4) Backward conditioning, in which the food is given first, and then the tone is given.

In the first three techniques (delayed, trace, and simultaneous), the conditioned stimulus is presented before (or together with) the
unconditioned stimulus; this is Forward Classical Conditioning. In Backward Classical Conditioning, by contrast, the
unconditioned stimulus comes before the conditioned stimulus. Forward delayed conditioning is the best method for
learning because the UCS is presented while the CS is still present.
Higher-order conditioning, another concept in classical conditioning is higher-order conditioning. This occurs when a strong
conditioned stimulus is paired with a neutral stimulus. The strong CS can actually play the part of a UCS, and the previously
neutral stimulus becomes a second conditioned stimulus. Higher-order conditioning shows that classical conditioning does not
depend on the presence of a genuine, natural UCS. An already-established CS can do just fine. In higher-order conditioning,
new conditioned responses are built on the foundation of already-established conditioned responses. Many human-conditioned
responses are the product of higher-order conditioning.
First-order conditioning: Metronome tone (NS) + Food (UCS) → Salivation; then Metronome tone (CS) → Salivation (CR, high intensity)
Second-order conditioning: Buzzer (NS) + Metronome tone (CS) → Salivation; then Buzzer (CS) → Salivation (CR, low intensity)
Once a stimulus becomes conditioned, it can function as a UCS and higher-order conditioning can occur. If a dog has been
conditioned to salivate at the sound of a metronome ticking, the ticking metronome can function as a UCS for higher-order
conditioning. A new neutral stimulus (such as a buzzer) can be sounded for a few seconds, followed by the ticking metronome.
If, after a few trials, the dog begins to salivate at the sound of the buzzer, the buzzer has become a second-order CS. Conditioning
of the third order involves the second-order CS serving as the UCS and a new neutral stimulus being paired with it. Pavlov
(1927) reported that conditioning beyond the third order is difficult.
Applications of Classical Conditioning
Aversion therapy, psychotherapy designed to cause a patient to reduce or avoid an undesirable behaviour pattern by
conditioning the person to associate the behaviour with an undesirable stimulus. The chief stimuli used in the therapy are
electrical, chemical, or imagined aversive situations.
Exposure therapy, is a kind of behavioral therapy that is typically used to help people living with phobias and anxiety disorders.
It involves a person facing what they fear, either imagined or in real life, but under the guidance of a trained therapist in a safe
environment. In this form of therapy, psychologists create a safe environment in which to “expose” individuals to the things
they fear and avoid. The exposure to the feared objects, activities or situations in a safe environment helps reduce fear and
decrease avoidance.
Counter-conditioning, refers both to the technique and putative process by which behavior is modified through a new
association with a stimulus of an opposite valence.
Flooding therapy, a behavioral therapy technique in which the patient is exposed directly and rather abruptly to the fear-
inducing stimulus while at the same time employing relaxation techniques designed to lower levels of anxiety, so that
feelings of relaxation come to be associated with the fear-inducing stimulus.
Systematic desensitization, is a procedure based on classical conditioning, in which a person imagines or visualizes fearful or
anxiety-evoking stimuli and then immediately uses deep relaxation to overcome the anxiety. Systematic desensitization is a form
of counterconditioning because it replaces, or counters, fear and anxiety with relaxation.
OPERANT CONDITIONING
Thorndike (1874–1949) was one of the first researchers to explore and attempt to outline the laws of learning voluntary
responses, although the field was not yet called operant conditioning. Thorndike placed a hungry cat inside a “puzzle box” from
which the only escape was to press a lever located on the floor of the box. Thorndike placed a dish of food outside the box, so
the hungry cat was highly motivated to get out. Thorndike observed that the cat would move around the box, pushing and rubbing
up against the walls in an effort to escape. Eventually, the cat would accidentally push the lever, opening the door. Upon
escaping, the cat was fed from a dish placed just outside the box. The lever is the stimulus, the pushing of the lever is the
response, and the consequence is both escape (good) and food (even better). The cat did not learn to push the lever and escape
right away. After a number of trials (and many errors) in a box like this one, the cat took less and less time to push the lever that
would open the door. It’s important not to assume that the cat had “figured out” the connection between the lever and freedom,
Thorndike kept moving the lever to a different position, and the cat had to learn the whole process over again. The cat would
simply continue to rub and push in the same general area that led to food and freedom the last time, each time getting out and
fed a little more quickly. Based on this research, Thorndike developed the law of effect: if an action is followed by a pleasurable
consequence, it will tend to be repeated, and if it is followed by an unpleasant consequence, it will tend not to be
repeated. This is the basic principle behind learning voluntary behavior. In the
case of the cat in the box, pushing the lever was followed by a pleasurable consequence (getting out and getting fed), so pushing
the lever became a repeated response.
B. F. Skinner found in the work of Thorndike a way to explain all behavior as the product of learning. According to him, an
operant response is a response that can be modified by its consequences and is a meaningful unit of ongoing behavior that can
be easily measured. By measuring or recording operant responses, Skinner could analyze animals’ ongoing behaviors during
learning. He called this kind of learning operant conditioning, which focuses on how consequences (rewards or punishments)
affect behaviors, i.e., the learning of voluntary behavior through the effects of pleasant and unpleasant consequences to
responses. Operant learning is also referred to as instrumental conditioning or response-stimulus conditioning.
Skinner box experiment
Like Pavlov, Skinner created a prototype experimental procedure that has been repeated (with variations) thousands of times.
In this procedure, a rat is placed in an operant chamber that has come to be better known as a “Skinner box”. An operant
chamber, or Skinner box, is a small enclosure in which an animal can make a specific response that is recorded while the
consequences of the response are systematically controlled. In the boxes designed for rats, the main response made available,
and the response under study, is pressing a small lever mounted on one side wall.
Food pellets, which may serve as reinforcers, are delivered into the food cup on the right. The speaker and light permit
manipulations of auditory and visual stimuli, and the electric grid gives the experimenter control over aversive consequences
(shock) in the box. A cumulative recorder connected to the box keeps a continuous record of responses and reinforcements.
Basic process in Skinner’s conditioning
Skinner’s major contribution is the concept of reinforcement and punishment.
Reinforcement is responsible for response strengthening and increasing the rate of responding or making responses more
likely to occur. A reinforcer (or reinforcing stimulus) is any stimulus or event following a response that leads to response
strengthening. Reinforcers (rewards) are defined based on their effects, which do not depend upon mental processes such as
consciousness, intentions, or goals. Because reinforcers are defined by their effects, they cannot be determined in advance.
Reinforcement occurs when an event following a response increases an organism’s tendency to make that response. Typically,
this means that reinforcement is a consequence that is in some way pleasurable to the organism, which relates back to
Thorndike’s law of effect.
Operant theorists make a distinction between unlearned, or primary, reinforcers as opposed to conditioned, or secondary,
reinforcers. Primary reinforcers are events that are inherently reinforcing because they satisfy biological needs. A given species
has a limited number of primary reinforcers because they are closely tied to physiological needs, which are necessary for
survival. In humans, primary reinforcers include food, water, warmth, sex, and perhaps affection expressed through hugging
and close bodily contact. Secondary, or conditioned, reinforcers are events that acquire reinforcing qualities by being associated
with primary reinforcers. The events that function as secondary reinforcers vary among members of a species because they
depend on learning. Examples of common secondary reinforcers in humans include money, good grades, attention, flattery,
praise, and applause. Similarly, people learn to find stylish clothes, sports cars, fine jewellery, and exotic vacations reinforcing.
The positive reinforcement involves presenting a stimulus, or adding something to a situation, following a response, which
increases the future likelihood of that response occurring in that situation. A positive reinforcer is a stimulus that, when
presented following a response, increases the future likelihood of the response occurring in that situation. Negative
reinforcement involves removing a stimulus, or taking something away from a situation following a response, which increases
the future likelihood that the response will occur in that situation. A negative reinforcer is a stimulus that, when removed by a
response, increases the future likelihood of the response occurring in that situation. Some stimuli that often function as negative
reinforcers are bright lights, loud noises, criticism, annoying people, and low grades, because behaviors that remove them tend
to be reinforcing. Positive and negative reinforcement have the same effect: They increase the likelihood that the response will
be made in the future in the presence of the stimulus.
Schedules of reinforcement, the schedules refer to when reinforcement is applied. A continuous schedule reinforces the
desired response each time it occurs. It is useful in establishing or strengthening new behavior. This may be desirable while skills
are being acquired, i.e., the students receive feedback after each response concerning the accuracy of their work. Continuous
reinforcement helps to ensure that incorrect responses are not learned. A partial schedule involves reinforcing some but not
all correct responses, i.e., a response is reinforced only part of the time or depending on the number of desired responses. The
two types are interval and ratio schedules. The interval schedule depends on the time elapsed between reinforcements, whereas
the ratio schedule depends on the number of desired responses. The interval schedule is divided into two: a fixed interval
schedule, in which reinforcement depends on a fixed passage of time, and a variable interval schedule, in which
the passage of time is not fixed. The ratio schedule is divided into two: a fixed ratio schedule, in which reinforcement occurs
only after a fixed number of desired responses, and a variable ratio schedule, in which the number of responses is not fixed.
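The four partial schedules can be sketched as simple decision rules. The parameter values (every 5th response, 60-second intervals) are illustrative assumptions chosen only for the example:

```python
# Sketch of the four partial schedules of reinforcement. Each function
# decides whether a given response earns reinforcement. All parameter
# values here are illustrative, not standard.
import random

def fixed_ratio(response_count, n=5):
    """Reinforce only after every n-th desired response."""
    return response_count % n == 0

def variable_ratio(n_mean=5):
    """Reinforce after a varying number of responses (n_mean on average)."""
    return random.random() < 1.0 / n_mean

def fixed_interval(seconds_since_last, interval=60):
    """Reinforce the first response after a fixed passage of time."""
    return seconds_since_last >= interval

def variable_interval(seconds_since_last, mean_interval=60):
    """Reinforce the first response after a varying passage of time."""
    return seconds_since_last >= random.uniform(0, 2 * mean_interval)

# A continuous schedule would simply return True for every response.
print(fixed_ratio(5))      # True: the fifth response is reinforced
print(fixed_ratio(3))      # False: not yet at the ratio requirement
print(fixed_interval(75))  # True: more than 60 seconds have elapsed
```

The ratio rules depend only on counting responses, while the interval rules depend only on elapsed time, which is the defining contrast between the two schedule families.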

Punishment occurs when an event following a response weakens the tendency to make that response. It suppresses a response
but does not eliminate it; when the threat of punishment is removed, the punished response may return. The effects of
punishment are complex. Punishment often brings about responses that are incompatible with the punished behavior and that
are strong enough to suppress it. Spanking a child for misbehaving may produce guilt and fear, which can suppress misbehavior.
If the child misbehaves in the future, the conditioned guilt and fear may reappear and lead the child quickly to stop misbehaving.
Punishment also conditions responses that lead one to escape or avoid punishment. In positive punishment, an
aversive stimulus is added following a behavior (beating a child for not studying), whereas in negative punishment a
stimulus is removed following a behavior, i.e., a pleasurable stimulus is taken away (grounding a child for not studying).
Premack’s principle
Premack (1962, 1971) described a means of ordering activities that allows one to predict which will function as reinforcers. The Premack Principle
says that the opportunity to engage in a more valued activity reinforces engaging in a less valued activity, where “value” is defined
in terms of the amount of responding or time spent on the activity in the absence of reinforcement. If a contingency is arranged
such that the value of the second (contingent) event is higher than the value of the first (instrumental) event, an increase will be
expected in the probability of occurrence of the first event (the reward assumption). If the value of the second event is lower
than that of the first event, the likelihood of occurrence of the first event ought to decrease (the punishment assumption).
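The reward and punishment assumptions above can be sketched as a small prediction rule. The activities and their baseline free-time minutes are hypothetical numbers invented for this example:

```python
# Sketch of the Premack Principle: an activity's "value" is its baseline
# rate of occurrence in the absence of reinforcement, and access to a
# higher-valued contingent activity reinforces a lower-valued
# instrumental one. The activities and minutes below are hypothetical.

baseline_minutes = {        # observed free-time spent on each activity
    "playing games": 120,   # high-value activity
    "doing homework": 20,   # low-value activity
}

def predicted_effect(instrumental, contingent):
    """Predict the effect of 'do instrumental, then get contingent'."""
    if baseline_minutes[contingent] > baseline_minutes[instrumental]:
        return "reward"      # reward assumption: instrumental behavior increases
    return "punishment"      # punishment assumption: instrumental behavior decreases

print(predicted_effect("doing homework", "playing games"))  # reward
print(predicted_effect("playing games", "doing homework"))  # punishment
```

Making game time contingent on homework is predicted to increase homework, while making homework contingent on game time is predicted to decrease playing, matching the reward and punishment assumptions described above.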
Applications of Operant Conditioning
Shaping, the reinforcement of simple steps in behavior through successive approximations that lead to a desired, more complex
behavior. It modifies behavior by reinforcing behaviors that progress toward and approximate the target behavior (operant
response); that is, it systematically reinforces successive approximations toward a target behavior. It can be used to train
organisms to perform behaviors that would rarely if ever occur otherwise.
Applied Behavior Analysis (ABA), is the modern term for a form of functional analysis and behavior modification that uses
both analysis of current behavior and behavioral techniques to address a socially relevant issue.
Chaining, is a type of intervention that aims to create associations between behaviors in a behavior chain. A behavior chain is
a sequence of behaviors that happen in a particular order where the outcome of the previous step in the chain serves as a signal
to begin the next step in the chain.
Token economy, the use of objects called tokens to reinforce behavior in which the tokens can be accumulated and exchanged
for desired items or privileges.
Premack's principle, refers to reinforcing a target behavior by awarding some privilege to engage in a more desired behavior
afterward.
Time-out, another tool that behaviorists can use to modify behavior is the process of time-out. It is a technique, originating
from behavior therapy, in which undesirable behavior is weakened and its occurrence decreased, typically by moving the
individual away from the area that is reinforcing the behavior.
Biofeedback, using feedback about biological conditions to bring involuntary responses, such as blood pressure and relaxation,
under voluntary control; and neurofeedback is a form of biofeedback using brain-scanning devices to provide feedback about
brain activity in an effort to modify behavior.
Contingency management, refers to a type of behavioural therapy in which individuals are ‘reinforced’, or rewarded, for
evidence of positive behavioural change.
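The token-economy mechanism described above can be sketched as a small ledger: tokens are delivered contingent on the target behavior, accumulate, and are later exchanged for backup reinforcers. The class, item names, and token prices below are hypothetical illustrations, not drawn from any specific program.

```python
# Minimal sketch of a token-economy ledger (hypothetical names and prices).

class TokenEconomy:
    def __init__(self, prices):
        self.prices = prices   # maps each backup reinforcer to its token cost
        self.tokens = 0

    def reinforce(self, n=1):
        """Deliver n tokens contingent on the target behavior occurring."""
        self.tokens += n

    def exchange(self, item):
        """Trade accumulated tokens for a desired item or privilege."""
        cost = self.prices[item]
        if self.tokens >= cost:
            self.tokens -= cost
            return True        # exchange succeeds
        return False           # not enough tokens accumulated yet


economy = TokenEconomy({"extra recess": 5, "sticker": 2})
for _ in range(4):
    economy.reinforce()                      # four target behaviors earn four tokens
print(economy.exchange("extra recess"))      # False: 4 tokens < cost of 5
print(economy.exchange("sticker"))           # True: costs 2, leaving 2 tokens
print(economy.tokens)                        # 2
```

Because tokens bridge the delay between the behavior and the backup reinforcer, reinforcement can be delivered immediately even when the desired item is only available later.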
OBSERVATIONAL LEARNING
Observational learning occurs when an organism’s responding is influenced by the observation of others, who are called models.
This process has been investigated extensively by Albert Bandura (1977, 1986). Bandura does not see observational learning as
entirely separate from classical and operant conditioning. Instead, he asserts that it greatly extends the reach of these conditioning
processes. Whereas previous conditioning theorists emphasized the organism’s direct experience, Bandura has demonstrated
that both classical and operant conditioning can take place “vicariously” through observational learning. Essentially,
observational learning involves being conditioned indirectly by virtue of observing another’s conditioning. Bandura’s theory of
observational learning can help explain why physical punishment tends to increase aggressive behavior in children, even when
it’s intended to do just the opposite. Parents who depend on physical punishment unwittingly serve as models for aggressive
behavior. In this situation, actions speak louder than words because of observational learning.
Bobo doll experiment
Bandura and his colleagues conducted a series of experiments from 1961 to 1963, collectively called the Bobo doll experiments.
The experiments were conducted with preschool children. The first group of children watched a model behaving aggressively toward
the Bobo doll. Another group of children was exposed to a model who behaved in a non-aggressive manner. When each child
was later left alone in the room with the doll, the children who saw the model ignore the doll did not act aggressively, whereas
the children who watched the aggressive model behaved aggressively. The aggressive children had learned their aggressive actions from merely watching the
model, with no reinforcement necessary. The fact that learning can take place without actual performance (a kind of latent
learning) is called the learning/performance distinction.
In later studies, Bandura showed a film of a model beating up the Bobo doll. In one condition, the children saw the model
rewarded afterward. In another, the model was punished. When placed in the room with toys, the children in the first group
beat up the doll, but the children in the second group did not. But when Bandura told the children in the second group that he
would give them a reward if they could show him what the model in the film did, each child duplicated the model’s actions.
Both groups had learned from watching the model, but only the children watching the successful (rewarded) model imitated the
aggression with no prompting. Apparently, consequences do matter in motivating a child (or an adult) to imitate a particular
model. Bandura began this research to investigate possible links between children’s exposure to violence on television and
aggressive behaviour towards others.
Process in Observational Learning
Observational learning comprises four processes: attention, retention, production, and motivation.

 The first process is observer attention to relevant events so that they are meaningfully perceived. At any given moment
one can attend to many activities. Characteristics of the model and the observer influence one’s attention to models.
Task features also command attention, especially unusual size, shape, color, or sound. Attention also is influenced by
perceived functional value of modeled activities. Modeled activities that observers believe are important and likely to
lead to rewarding outcomes command greater attention.
 Rehearsal, or the mental review of information, serves a key role in the retention of knowledge. Bandura and Jeffery
(1973) found benefits of coding and rehearsal. Adults were presented with complex-modeled movement configurations.
Some participants coded these movements at the time of presentation by assigning to them numerical or verbal
designators. Other participants were not given coding instructions but were told to subdivide the movements to
remember them. In addition, participants either were or were not allowed to rehearse the codes or movements following
presentation. Both coding and rehearsal enhanced retention of modeled events; individuals who coded and rehearsed
showed the best recall. Rehearsal without coding and coding without rehearsal were less effective.
 The third observational learning process is production, which involves translating visual and symbolic conceptions of
modeled events into overt behaviors. Many simple actions may be learned by simply observing them; subsequent
production by observers indicates learning. Rarely, however, are complex behaviors learned solely through observation.
Learners often will acquire a rough approximation of a complex skill by observing modeled demonstrations. They then
refine their skills with practice, corrective feedback, and reteaching.
 Motivation, the fourth process, influences observational learning because people are more likely to engage in the
preceding three processes (attention, retention, production) for modeled actions that they feel are important. Individuals
form expectations about anticipated outcomes of actions based on consequences experienced by them and models.
They perform those actions they believe will result in rewarding outcomes and avoid acting in ways they believe will be
responded to negatively. Persons also act based on their values, performing activities they value and avoiding those they
find unsatisfying, regardless of the consequences to themselves or others. People forgo money, prestige, and power
when they believe the activities they must engage in to receive these rewards are unethical (e.g., questionable business
practices).
SOCIAL LEARNING THEORY
Social cognitive theory stresses the idea that much human learning occurs in a social environment. By observing others,
people acquire knowledge, rules, skills, strategies, beliefs, and attitudes. Individuals also learn from models the usefulness and
appropriateness of behaviors and the consequences of modeled behaviors, and they act in accordance with beliefs about their
capabilities and the expected outcomes of their actions.
The social cognitive theory distinguishes between new learning and performance of previously learned behaviors. Unlike
conditioning theories, which contend that learning involves connecting responses to stimuli or following responses with
consequences, social cognitive theory asserts that learning and performance are distinct processes. Although much learning
occurs by doing, we learn a great deal by observing. Reinforcement, or the belief that it will be forthcoming, affects performance
rather than learning.
Enactive and vicarious learning
Learning occurs either enactively through actual doing or vicariously by observing models perform (e.g., live, symbolic, portrayed
electronically). Enactive learning involves learning from the consequences of one’s actions. Behaviors that result in successful
consequences are retained; those that lead to failures are refined or discarded. Conditioning theories also say that people learn
by doing, but social cognitive theory provides a different explanation. Social cognitive theory contends that behavioral
consequences, rather than strengthening behaviors as postulated by conditioning theories, serve as sources of information and
motivation. Consequences inform people of the accuracy or appropriateness of behavior. People who succeed at a task or are
rewarded understand that they are performing well. When people fail or are punished, they know that they are doing something
wrong and may try to correct the problem. Consequences also motivate people. People strive to learn behaviors they value and
believe will have desirable consequences, whereas they avoid learning behaviors that are punished or otherwise not satisfying.
People’s cognitions, rather than consequences, affect learning.
Much human learning occurs vicariously, or without overt performance by the learner, at the time of learning. Common sources
of vicarious learning are observing or listening to models who are live (appear in person), symbolic or non-human (e.g., televised
talking animals, cartoon characters), electronic (e.g., television, computer, videotape, DVD), or in print (e.g., books, magazines).
Vicarious sources accelerate learning over what would be possible if people had to perform every behavior for learning to occur.
It also saves people from personally experiencing negative consequences.
Functions of modelling
Observational learning expands the range and rate of learning over what could occur through shaping, where each response
must be performed and reinforced. Bandura (1986) distinguished three key functions of modeling: response facilitation,
inhibition/disinhibition, and observational learning.
Response facilitation, people learn many skills and behaviors that they do not perform because they lack motivation to do so.
Response facilitation refers to modeled actions that serve as social prompts for observers to behave accordingly. Response
facilitation effects are common. Response facilitation does not reflect true learning because people already know how to
perform the behaviors. Rather, the models serve as cues for observers’ actions. Observers gain information about the
appropriateness of behavior and may be motivated to perform the actions if models receive positive consequences. Response
facilitation modeling may occur without conscious awareness.
Inhibition/disinhibition, observing a model can strengthen or weaken inhibitions to perform behaviors previously learned.
Inhibition occurs when models are punished for performing certain actions, which in turn stops or prevents observers from
acting accordingly. Disinhibition occurs when models perform threatening or prohibited activities without experiencing negative
consequences, which may lead observers to perform the same behaviors. Inhibitory and disinhibitory effects on behavior occur
because the modeled displays convey to observers that similar consequences are probable if they perform the modeled behaviors.
Such information also may affect emotions (e.g., increase or decrease anxiety) and motivation. Inhibition and disinhibition are
similar to response facilitation in that behaviors reflect actions people already have learned. One difference is that response
facilitation generally involves behaviors that are socially acceptable, whereas inhibited and disinhibited actions often have moral
or legal overtones (i.e., involve breaking rules or laws) and have accompanying emotions (e.g., fears).
Observational learning, the observational learning through modeling occurs when observers display new patterns of behavior
that, prior to exposure to the modeled behaviors, have a zero probability of occurrence even when motivation is high. A key
mechanism is the information conveyed by models to observers of ways to produce new behaviors. Observational learning
comprises four processes: attention, retention, production, and motivation.
COGNITIVE LEARNING PROCESSES
Meta-cognition and conditional knowledge
An issue with information processing theories is that they primarily describe learning rather than explain it. Thus, we know that
inputs are received into working memory (WM), rehearsed, coded, linked with relevant information, and stored in long-term
memory (LTM). Meta-cognition refers to higher-order cognition: cognition about one’s own cognition.
Declarative and procedural knowledge refer to knowledge of facts and procedures, respectively. Conditional knowledge is
understanding when and why to employ forms of declarative and procedural knowledge. Conditional knowledge likely is
represented in LTM as propositions in networks and linked with the declarative and procedural knowledge to which it applies.
It actually is a form of declarative knowledge because it is “knowledge that”, for example, knowledge that skimming is valuable
to get the gist of a passage and knowledge that summarizing text is valuable to derive greater understanding. It is an integral part
of self-regulated learning. Self-regulated learning requires that students decide which learning strategy to use prior to engaging
in a task.
Meta-cognition and learning
Metacognition refers to the deliberate conscious control of cognitive activity. Metacognition comprises two related sets of skills.
First, one must understand what skills, strategies, and resources a task requires. Included in this cluster are finding main ideas,
rehearsing information, forming associations or images, using memory techniques, organizing material, taking notes or
underlining, and using test-taking techniques. Second, one must know how and when to use these skills and strategies to ensure
the task is completed successfully. These monitoring activities include checking level of understanding, predicting outcomes,
evaluating the effectiveness of efforts, planning activities, deciding how to budget time, and revising or switching to other
activities to overcome difficulties. Collectively, metacognitive activities reflect the strategic application of declarative, procedural,
and conditional knowledge to tasks. Kuhn (1999) argued that metacognitive skills were the key to the development of critical
thinking.
Meta-cognition and behavior
Understanding which skills and strategies help people learn and remember information is necessary but not sufficient to enhance
our achievement. Even students who are aware of what helps them learn do not consistently engage in metacognitive activities
for various reasons. In some cases, metacognition may be unnecessary because the material is easily learned. Learners also might
be unwilling to invest the effort to employ metacognitive activities. The latter are tasks in their own right; they take time and
effort. Learners may not understand fully that metacognitive strategies improve their performances, or they may believe they do
but that other factors, such as time spent or effort expended, are more important for learning. Metacognitive activities improve
achievement, but the fact that students often do not use them presents a quandary for educators. Students need to be taught a
menu of activities ranging from those applying to learning in general (e.g., determining the purpose in learning) to those applying
to specific situations (e.g., underlining important points in text), and they need to be encouraged to use them in various contexts.
Although the what component of learning is important, so are the when, where, and why of strategy use. Teaching the what
without the latter will only confuse students and could prove demoralizing; students who know what to do but not when, where,
or why to do it might hold low self-efficacy for performing well.
Meta-cognition and reading
Metacognition is relevant to reading because it is involved in understanding and monitoring reading purposes and strategies.
Beginning readers often do not understand the conventions of printed material: in the English language, one reads words from
left to right and top to bottom. Beginning and poorer readers typically do not monitor their comprehension or adjust their
strategies accordingly. Older and skilled readers are better at comprehension monitoring than are younger and less-skilled
readers, respectively. Metacognition is involved when learners set goals, evaluate goal progress, and make necessary corrections.
Skilled readers do not approach all reading tasks identically. They determine their goal: find main ideas, read for details, skim,
get the gist, and so on. They then use a strategy they believe will accomplish the goal. When reading skills are highly developed,
these processes may occur automatically. While reading, skilled readers check their progress. If their goal is to locate important
ideas, and if after reading a few pages they have not located any important ideas, they are apt to reread those pages. If they
encounter a word they do not understand, they try to determine its meaning from context or consult a dictionary rather than
continue reading. Developmental evidence indicates a trend toward greater recognition and correction of comprehension
deficiencies. Younger children recognize comprehension failures less often than do older children. Younger children who are
good comprehenders may recognize a problem but may not employ a strategy to solve it (e.g., rereading). Older children who
are good comprehenders recognize problems and employ correction strategies. Children develop metacognitive abilities through
interactions with parents and teachers. Adults help children solve problems by guiding them through solution steps, reminding
them of their goal, and helping them plan how to reach their goal. An effective teaching procedure includes informing children
of the goal, making them aware of information relevant to the task, arranging a situation conducive to problem solving, and
reminding them of their goal progress. Strategy instruction programs generally have been successful in helping students learn
strategies and maintain their use over time.
COGNITIVE THEORY
In the early days of behaviorism, the focus of Watson, Skinner, and many of their followers was on observable, measurable
behavior. Anything that might be occurring inside a person’s or animal’s head during learning was considered to be of no interest
to the behaviorist because it could not be seen or directly measured. Other psychologists, however, were still interested in the
mind’s influence over behavior. Gestalt psychologists, for instance, were studying the way that the human mind tried to force a
pattern onto stimuli in the world around the person. This continued interest in the mind was followed, in the 1950s and 1960s,
by the comparison of the human mind to the workings of those fascinating “thinking machines,” computers. Soon after, interest
in cognition, the mental events that take place inside a person’s mind while behaving, began to dominate experimental
psychology. Many behavioral psychologists could no longer ignore the thoughts, feelings, and expectations that clearly existed
in the mind and that seemed to influence observable behavior, and eventually began to develop a cognitive learning theory to
supplement the more traditional theories of learning. Three important figures often cited as key theorists in the early days of the
development of cognitive learning theory were Edward Tolman, Gestalt psychologist Wolfgang Köhler, and modern
psychologist Martin Seligman.
Latent Learning
One of learning theorist Edward Tolman’s best-known experiments involved teaching three groups of rats the
same maze, one at a time. In the first group, each rat was placed in the maze and reinforced with food for making its way out
the other side. The rat was then placed back in the maze, reinforced upon completing the maze again, and so on until the rat
could successfully solve the maze with no errors. The second group of rats was treated exactly like the first, except that they
never received any reinforcement upon exiting the maze. They were simply put back in again and again, until the 10th day of the
experiment. On that day, the rats in the second group began to receive reinforcement for getting out of the maze. The third
group of rats, serving as a control group, was never given reinforcement for the entire duration of the
experiment. A strict Skinnerian behaviorist would predict that only the first group of rats would learn the maze successfully
because learning depends on reinforcing consequences. At first, this seemed to be the case. The first group of rats did indeed
solve the maze after a certain number of trials, whereas the second and third groups seemed to wander aimlessly around the
maze until accidentally finding their way out. On the 10th day, however, something happened that would be difficult to explain
using only Skinner’s basic principles. The second group of rats, upon receiving the reinforcement for the first time, should have
then taken as long as the first group to solve the maze. Instead, they began to solve the maze almost immediately.
Tolman concluded that the rats in the second group, while wandering around in the first 9 days of the experiment, had indeed
learned where all the blind alleys, wrong turns, and correct paths were and stored this knowledge away as a kind of “mental
map,” or cognitive map of the physical layout of the maze. The rats in the second group had learned and stored that learning
away mentally but had not demonstrated this learning because there was no reason to do so. The cognitive map had remained
hidden, or latent, until the rats had a reason to demonstrate their knowledge by getting to the food. Tolman called this latent
learning. The idea that learning could happen without reinforcement and then later affect behavior was not something traditional
operant conditioning could explain.
Insight Learning
Another exploration of the cognitive elements of learning came about almost by accident. Wolfgang Köhler (1887–1967) was a
Gestalt psychologist. In one of his more famous studies, he set up a problem for one of the chimpanzees. Sultan the chimp was
faced with the problem of how to get to a banana that was placed just out of his reach outside his cage. Sultan solved this
problem relatively easily, first trying to reach through the bars with his arm, then using a stick that was lying in the cage to rake
the banana into the cage. As chimpanzees are natural tool users, this behavior is not surprising and is still nothing more than
simple trial-and-error learning. But then the problem was made more difficult. The banana was placed just out of reach of
Sultan’s extended arm with the stick in his hand. At this point there were two sticks lying around in the cage, which could be
fitted together to make a single pole that would be long enough to reach the banana. Sultan tried first one stick, then the other
(simple trial and error). After about an hour of trying, Sultan seemed to have a sudden flash of inspiration. He pushed one stick
out of the cage as far as it would go toward the banana and then pushed the other stick behind the first one. Of course, when
he tried to draw the sticks back, only the one in his hand came. He jumped up and down and was very excited, and when Köhler
gave him the second stick, he sat on the floor of the cage and looked at them carefully. He then fitted one stick into the other
and retrieved his banana. Köhler called Sultan’s rapid “perception of relationships” insight and determined that insight could
not be gained through trial-and-error learning alone. Although Thorndike and other early learning theorists believed that animals
could not demonstrate insight, Köhler’s work seems to demonstrate that insight requires a sudden “coming together” of all the
elements of a problem in a kind of “aha” moment that is not predicted by traditional animal learning studies.
Learned Helplessness
In the mid- to late 1960s, learning theorist Seligman (1975) and his colleagues were doing classical conditioning experiments on
dogs. They accidentally discovered an unexpected phenomenon, which Seligman called learned helplessness, the tendency to
fail to act to escape from a situation because of a history of repeated failures. Their original intention was to study escape and
avoidance learning. Seligman and colleagues presented a tone followed by a harmless but painful electric shock to one group of
dogs. The dogs in this group were harnessed so that they could not escape the shock. The researchers assumed that the dogs
would learn to fear the sound of the tone and later try to escape from the tone before being shocked. These dogs, along with
another group of dogs that had not been conditioned to fear the tone, were placed in a special box containing a low fence that
divided the box into two compartments. The dogs, which were now unharnessed, could easily see over the fence and jump over
if they wished, which is precisely what the dogs that had not been conditioned did as soon as the shock occurred. The previously
harnessed dogs, in contrast, showed distress but did not try to jump over the fence even when the shock itself began. These
dogs had apparently learned in the original tone/shock situation that there was nothing they could do to escape the shock. So,
when placed in a situation in which escape was possible, the dogs still did nothing because they had learned
to be “helpless.” They believed they could not escape, so they did not try.
COGNITIVE INFORMATION PROCESSING THEORY
Information processing theories focus on how people attend to environmental events, encode information to be learned and
relate it to knowledge in memory, store new knowledge in memory, and retrieve it as needed. Information processing theorists
challenged the idea inherent in behaviorism that learning involves forming associations between stimuli and responses.
Information processing theorists do not reject associations, because they postulate that forming associations between bits of
knowledge helps to facilitate their acquisition and storage in memory. Rather, these theorists are less concerned with external
conditions and focus more on internal (mental) processes that intervene between stimuli and responses. Learners are active
seekers and processors of information. Unlike behaviorists who said that people respond when stimuli impinge on them,
information processing theorists contend that people select and attend to features of the environment, transform and rehearse
information, relate new information to previously acquired knowledge, and organize knowledge to make it meaningful.
Two-store (dual) memory model
Although this model is generic, it closely corresponds to the classic model proposed by Atkinson and Shiffrin (1968, 1971).

Information processing begins when a stimulus input (e.g., visual, auditory) impinges on one or more senses (e.g., hearing, sight,
touch). The appropriate sensory register receives the input and holds it briefly in sensory form. It is here that perception (pattern
recognition) occurs, which is the process of assigning meaning to a stimulus input. This typically does not involve naming
because naming takes time and information stays in the sensory register for only a fraction of a second. Rather, perception
involves matching an input to known information. The sensory register transfers information to short-term memory (STM).
STM is a working memory (WM) and corresponds roughly to awareness, or what one is conscious of at a given moment. WM
is limited in capacity. Miller (1956) proposed that it holds seven plus or minus two units of information. A unit is a meaningful
item: a letter, word, number, or common expression (e.g., “bread and butter”). WM also is limited in duration; for units to be
retained in WM they must be rehearsed (repeated). Without rehearsal, information is lost after a few seconds. While information
is in WM, related knowledge in long-term memory (LTM), or permanent memory, is activated and placed in WM to be integrated
with the new information.
Control (executive) processes regulate the flow of information throughout the information processing system. Rehearsal is an
important control process that occurs in WM. For verbal material, rehearsal takes the form of repeating information aloud or
subvocally. Other control processes include imaging (visually representing information), implementing decision rules,
organizing information, monitoring level of understanding, and using retrieval, self-regulation, and motivational strategies.
The two-store model can account for many research results. One of the most consistent research findings is that when people
have a list of items to learn, they tend to recall best the initial items (primacy effect) and the last items (recency effect). According
to the two-store model, initial items receive the most rehearsal and are transferred to LTM, whereas the last items are still in
WM at the time of recall. Middle items are recalled the poorest because they are no longer in WM at the time of recall (having
been pushed out by subsequent items), they receive fewer rehearsals than initial items, and they are not properly stored in LTM.
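The two-store account of the serial-position curve can be sketched as a toy simulation. All parameters here are hypothetical assumptions: WM holds the last four items, a fixed rehearsal effort per study step is shared among the items currently in WM (so early items, studied while WM is underfilled, accumulate more rehearsal), and an item transfers to LTM once its rehearsal total crosses a threshold.

```python
# Toy simulation of the serial-position effect under the two-store model.
# Parameters (wm_capacity, ltm_threshold, effort) are illustrative, not
# empirical estimates.

def simulate_recall(items, wm_capacity=4, ltm_threshold=4.5, effort=4.0):
    """Return the items recalled: those transferred to LTM plus those
    still sitting in WM at the moment of recall."""
    rehearsals = {item: 0.0 for item in items}
    wm = []
    for item in items:
        wm.append(item)
        if len(wm) > wm_capacity:
            wm.pop(0)                              # oldest item displaced from WM
        for held in wm:
            rehearsals[held] += effort / len(wm)   # effort shared across WM contents
    ltm = [i for i in items if rehearsals[i] >= ltm_threshold]
    recalled = set(ltm) | set(wm)
    return [i for i in items if i in recalled]

# First items reach LTM (primacy), last items are still in WM (recency),
# and the middle of the list is lost.
print(simulate_recall(list("ABCDEFGHIJ")))   # ['A', 'B', 'G', 'H', 'I', 'J']
```

Even this crude sketch reproduces the U-shaped recall curve: initial items get the extra rehearsal needed to reach LTM, final items are read out of WM, and middle items fall between the two stores.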
Research suggests, however, that learning may be more complex than the basic two-store model stipulates. One problem is that
this model does not fully specify how information moves from one store to the other. The control processes notion is plausible
but vague. Another concern is that this model seems best suited to handle verbal material. How the non-verbal representation
occurs with material that may not be readily verbalized, such as modern art and well-established skills, is not clear. The model
also is vague about what really is learned.
Levels (depth) of processing theory
Levels (depth) of processing theory conceptualizes memory according to the type of processing that information receives rather
than its location. This view does not incorporate stages or structural components such as WM or LTM. Rather, different ways
to process information (such as levels or depth at which it is processed) exist: physical (surface), acoustic (phonological, sound),
semantic (meaning). These three levels are dimensional, with physical processing being the most superficial and semantic
processing being the deepest.
These three levels seem conceptually similar to the sensory register, WM, and LTM of the two-store model. Both views contend
that processing becomes more elaborate with succeeding stages or levels. The levels of processing model, however, does not
assume that the three types of processing constitute stages. In levels of processing, one does not have to move to the next
process to engage in more elaborate processing; depth of processing can vary within a level.
Another difference between the two information processing models concerns the order of processing. The two-store model
assumes information is processed first by the sensory register, then by WM, and finally by LTM. The levels of processing model
does not make a sequential assumption. To be processed at the meaning level, information does not have to be first processed
at the surface and sound levels (beyond what processing is required for information to be received). The two models also have
different views of how type of processing affects memory. In levels of processing, the deeper the level at which an item is
processed, the better the memory because the memory trace is more ingrained. Once an item is processed at a particular point
within a level, additional processing at that point should not improve memory. In contrast, the two-store model contends that
memory can be improved with additional processing of the same type. This model predicts that the more a list of items is
rehearsed, the better it will be recalled.
Moscovitch and Craik (1976) proposed that deeper processing during learning results in a higher potential memory performance,
but that potential will be realized only when conditions at retrieval match those during learning. Another concern with levels of
processing theory is whether additional processing at the same level produces better recall. Nelson (1977) gave participants one
or two repetitions of each stimulus (word) processed at the same level. Two repetitions produced better recall, contrary to the
levels of processing hypothesis. Other research shows that additional rehearsal of material facilitates retention and recall as well
as automaticity of processing. A final issue concerns the nature of a level. Investigators have argued that the notion of depth is
fuzzy, both in its definition and measurement.
Resolving these issues may require combining levels of processing with the two-store idea to produce a refined memory model.
For example, information in WM might be related to knowledge in LTM superficially or more elaborately. Also, the two memory
stores might include levels of processing within each store. Semantic coding in LTM may lead to a more extensive network of
information and a more meaningful way to remember information than surface or phonological coding.
CONSTRUCTIVIST THEORY
Constructivism is a psychological and philosophical perspective contending that individuals form or construct much of what
they learn and understand. A major influence on the rise of constructivism has been theory and research in human development,
especially the theories of Piaget and Vygotsky. Piaget’s and Vygotsky’s theories form a cornerstone of the constructivist
movement. The emphasis that these theories place on the role of knowledge construction is central to constructivism.
Strictly speaking, constructivism is not a theory but rather an epistemology, or philosophical explanation about the nature of
learning. Constructivism does not propound that learning principles exist and are to be discovered and tested, but rather that
learners create their own learning. Readers who are interested in exploring the historical and philosophical roots of
constructivism are referred to Bredo (1997) and Packer and Goicoechea (2000). Nonetheless, constructivism makes general
predictions that can be tested. Although these predictions are general and thus open to different interpretations (i.e., what does
it mean that learners construct their own learning?), they could be the focus of research. Constructivist theorists reject the notion
that scientific truths exist and await discovery and verification. They argue that no statement can be assumed as true but rather
should be viewed with reasonable doubt. The world can be mentally constructed in many different ways, so no theory has a lock
on the truth. This is true even for constructivism: there are many varieties and no one version should be assumed to be more
correct than any other. Rather than viewing knowledge as truth, constructivists construe it as a working hypothesis. Knowledge
is not imposed from outside people but rather formed inside them. A person’s constructions are true to that person but not
necessarily to anyone else. This is because people produce knowledge based on their beliefs and experiences in situations, which
differ from person to person. All knowledge, then, is subjective and personal, a product of the knower's cognitions. Learning is
situated in contexts.
Constructivism highlights the interaction of persons and situations in the acquisition and refinement of skills and knowledge.
Constructivism contrasts with conditioning theories that stress the influence of the environment on the person as well as with
information processing theories that place the locus of learning within the mind with little attention to the context in which it
occurs. It shares with social cognitive theory the assumption that persons, behaviors, and environments interact in reciprocal
fashion. A key assumption of constructivism is that people are active learners and develop knowledge for themselves.
Constructivists differ in the extent to which they ascribe this function entirely to learners. Some believe that mental structures
come to reflect reality, whereas others (radical constructivists) believe that the individual’s mental world is the only reality.
Constructivists also differ in how much they ascribe the construction of knowledge to social interactions with teachers, peers,
parents, and others. Although constructivism seems to be a recent arrival on the learning scene, its basic premise that learners
construct understandings underlies many learning principles. This is the epistemological aspect of constructivism. Some
constructivist ideas are not as well developed as those of other theories, but constructivism has affected theory and research in
learning and development. Constructivism also has influenced educational thinking about curriculum and instruction. It
underlies the emphasis on the integrated curriculum in which students study a topic from multiple perspectives. Another
constructivist assumption is that teachers should not teach in the traditional sense of delivering instruction to a group of students.
Rather, they should structure situations such that learners become actively involved with content through manipulation of
materials and social interaction.
Constructivism is not a single viewpoint but rather has different perspectives. Exogenous constructivism refers to the idea
that the acquisition of knowledge represents a reconstruction of structures that exist in the external world. This view posits a
strong influence of the external world on knowledge construction, such as by experiences, teaching, and exposure to models.
Knowledge is accurate to the extent it reflects that reality. Contemporary information processing theories reflect this notion
(e.g., schemas, productions, memory networks). In contrast, endogenous constructivism emphasizes the coordination of
cognitive actions. Mental structures are created from earlier structures, not directly from environmental information; therefore,
knowledge is not a mirror of the external world acquired through experiences, teaching, or social interactions. Knowledge
develops through the cognitive activity of abstraction and follows a generally predictable sequence. Piaget’s (1970) theory of
cognitive development fits this framework. Between these extremes lies dialectical constructivism, which holds that
knowledge derives from interactions between persons and their environments. Constructions are not invariably bound to the
external world nor are they wholly the result of the workings of the mind; rather, they reflect the outcomes of mental
contradictions that result from interactions with the environment. This perspective has become closely aligned with many
contemporary theories. For example, it is compatible with Bandura’s (1986) social cognitive theory and with many motivation
theories. It also is referred to as cognitive constructivism. The developmental theories of Bruner and Vygotsky also emphasize
the influence of the social environment. Each of these perspectives has merit and is potentially useful for research and teaching.
Exogenous views are appropriate when people are interested in determining how accurately learners perceive the structure of
knowledge within a domain. The endogenous perspective is relevant to explore how learners develop from novices through
greater levels of competence. The dialectical view is useful for designing interventions to challenge children’s thinking and for
research aimed at exploring the effectiveness of social influences such as exposure to models and peer collaboration.
Situated cognition
A core premise of constructivism is that cognitive processes (including thinking and learning) are situated (located) in physical
and social contexts. Situated cognition (or situated learning) involves relations between a person and a situation; cognitive
processes do not reside solely in one’s mind. The idea of person–situation interaction is not new. Most contemporary theories
of learning and development assume that beliefs and knowledge are formed as people interact in situations. This emphasis
contrasts with the classical information processing model that highlights the processing and movement of information through
mental structures (e.g., sensory registers). Information processing downplays the importance of situations once environmental
inputs are received. Research in a variety of disciplines, including cognitive psychology, social cognitive learning, and content
domains (e.g., reading, mathematics), shows this to be a limited view and that thinking involves an extended reciprocal relation
with the context. Research highlights the importance of exploring situated cognition as a means of understanding the
development of competence in domains such as literacy, mathematics, and science. Situated cognition also is relevant to
motivation. As with learning, motivation is not an entirely internal state as posited by classical views or wholly dependent on
the environment as predicted by reinforcement theories. Rather, motivation depends on cognitive activity in interaction with
sociocultural and instructional factors, which include language and forms of assistance such as scaffolding. Situated cognition
addresses the intuitive notion that many processes interact to produce learning.
Piaget’s theory of cognitive development
Piaget’s theory was little noticed when it first appeared, but gradually it ascended to a major position in the field of human
development. According to Piaget, cognitive development depends on four factors: biological maturation, experience with the
physical environment, experience with the social environment, and equilibration. Equilibration refers to a biological drive to
produce an optimal state of equilibrium (or adaptation) between cognitive structures and the environment. Equilibration is the
central factor and the motivating force behind cognitive development. It coordinates the actions of the other three factors and
makes internal mental structures and external environmental reality consistent with each other.
Equilibration comprises two component processes: assimilation and accommodation. Assimilation refers to fitting external reality
to the existing cognitive structure. When people interpret, construe, and frame, they alter the nature of reality to make it fit their
cognitive structure. Accommodation refers to changing internal structures to provide consistency with external reality. People
accommodate when they adjust their ideas to make sense of reality. Assimilation and accommodation are complementary
processes. As reality is assimilated, structures are accommodated.
Piaget concluded from his research that children’s cognitive development passed through a fixed sequence. The pattern of
operations that children can perform may be thought of as a level or stage. Each level or stage is defined by how children view
the world. In the sensorimotor stage (0-2 years), children’s actions are spontaneous and represent an attempt to understand the
world. Understanding is rooted in present action; for example, a ball is for throwing and a bottle for sucking. The period is
characterized by rapid change; a two-year-old is cognitively far different from an infant. Children actively equilibrate, albeit at a
primitive level. Cognitive structures are constructed and altered, and the motivation to do this is internal. The notion of
effectance motivation is relevant to sensorimotor children. By the end of the sensorimotor period, children have attained
sufficient cognitive development to progress to new conceptual symbolic thinking characteristic of the preoperational stage.
Preoperational (2-7 years) children are able to imagine the future and reflect on the past, although they remain heavily perceptually
oriented in the present. Preoperational children demonstrate irreversibility; that is, once things are done, they cannot be changed
(e.g., the box flattened cannot be remade into a box). They have difficulty distinguishing fantasy from reality. Cartoon characters
appear as real as people. The period is one of rapid language development. Another characteristic is egocentrism: children have
difficulty realizing that others may think and feel differently than they do. The concrete operational (7-11 years) stage is
characterized by remarkable cognitive growth and is a formative one in schooling, because it is when children’s language and
basic skills acquisition accelerate dramatically. Children begin to show some abstract thinking, although it typically is defined by
properties or actions (e.g., honesty is returning money to the person who lost it). Concrete operational children display less
egocentric thought, and language increasingly becomes social. Reversibility in thinking is acquired along with classification and
seriation concepts essential for the acquisition of mathematical skills. Concrete operational thinking no longer is dominated by
perception; children draw on their experiences and are not always swayed by what they perceive. The formal operational (11-
adulthood) stage extends concrete operational thought. No longer is thought focused exclusively on tangibles; children are able
to think about hypothetical situations. Reasoning capabilities improve, and children can think about multiple dimensions and
abstract properties. Egocentrism emerges as adolescents compare reality to the ideal; thus, they often show idealistic thinking.
Piaget’s stages have been criticized on many grounds. One problem is that children often grasp ideas and are able to perform
operations earlier than Piaget found. Another problem is that cognitive development across domains typically is uneven; rarely
does a child think in stage-typical ways across all topics (e.g., mathematics, science, history).
Vygotsky’s socio-cultural theory
Like Piaget’s theory, Vygotsky’s also is a constructivist theory; however, Vygotsky’s places more emphasis on the social
environment as a facilitator of development and learning. One of Vygotsky’s central contributions to psychological thought was
his emphasis on socially meaningful activity as an important influence on human consciousness.
Vygotsky’s theory stresses the interaction of interpersonal (social), cultural-historical, and individual factors as the key to human
development. Interactions with persons in the environment (e.g., apprenticeships, collaborations) stimulate developmental
processes and foster cognitive growth. But interactions are not useful in a traditional sense of providing children with
information. Rather, children transform their experiences based on their knowledge and characteristics and reorganize their
mental structures. The cultural-historical aspects of Vygotsky’s theory illuminate the point that learning and development cannot
be dissociated from their context. The way that learners interact with their worlds, with the persons, objects, and institutions in
it, transforms their thinking. The meanings of concepts change as they are linked with the world.
There also are individual, or inherited, factors that affect development. Vygotsky was interested in children with mental and
physical disabilities. He believed that their inherited characteristics produced learning trajectories different from those of
children without such challenges. Vygotsky considered the social environment critical for learning and thought that social
interactions transformed learning experiences. Social activity is a phenomenon that helps explain changes in consciousness and
establishes a psychological theory that unifies behavior and mind. The social environment influences cognition through its
“tools”, that is, its cultural objects (e.g., cars, machines) and its language and social institutions (e.g., schools, churches). Social
interactions help to coordinate the three influences on development. Cognitive change results from using cultural tools in social
interactions and from internalizing and mentally transforming these interactions. Vygotsky’s position is a form of dialectical
(cognitive) constructivism because it emphasizes the interaction between persons and their environments.
Vygotsky’s most controversial contention was that all higher mental functions originated in the social environment. This is a
powerful claim, but it has a good degree of truth to it. The most influential process involved is language. Vygotsky thought that
a critical component of psychological development was mastering the external process of transmitting cultural development and
thinking through symbols such as language, counting, and writing. Once this process was mastered, the next step involved using
these symbols to influence and self-regulate thoughts and actions. Self-regulation draws on the important function of private speech.
In spite of this impressive theorizing, Vygotsky’s claim appears to be too strong. Research evidence shows that young children
mentally figure out much knowledge about the way the world operates long before they have an opportunity to learn from the
culture in which they live. Children also seem biologically predisposed to acquire certain concepts (e.g., understanding that
adding increases quantity), which does not depend on the environment. Although social learning affects knowledge construction,
the claim that all learning derives from the social environment seems overstated.
Zone of proximal development
A key concept is the zone of proximal development (ZPD), defined as “the distance between the actual developmental level
as determined by independent problem solving and the level of potential development as determined through problem solving
under adult guidance or in collaboration with more capable peers”. The ZPD represents the amount of learning possible by a
student given the proper instructional conditions. It is largely a test of a student’s developmental readiness or intellectual level
in a specific domain, and it shows how learning and development are related and can be viewed as an alternative to the
conception of intelligence. In the ZPD, a teacher and learner work together on a task that the learner could not perform
independently because of the difficulty level. Working in the ZPD requires a good deal of guided participation; however, children
do not acquire cultural knowledge passively from these interactions, nor is what they learn necessarily an automatic or accurate
reflection of events. Rather, learners bring their own understandings to social interactions and construct meanings by integrating
those understandings with their experiences in the context.
The influence of the cultural-historical setting is seen clearly in Vygotsky’s belief that schooling was important not because it
was where children were scaffolded but, rather, because it allowed them to develop greater awareness of themselves, their
language, and their role in the world order. Participating in the cultural world transforms mental functioning rather than simply
accelerating processes that would have developed anyway. Broadly speaking, therefore, the ZPD refers to new forms of awareness
that occur as people interact with their societies’ social institutions. The culture affects the course of one’s mental development.