
Notes PSYCH 207



Text 4 Lecture + Textbook Notes


Quizlet Decks
Thinking, Problem Solving & Reasoning (Chapter 10)
What is Thinking?
• Most definitions of thinking are quite vague because it’s tough to define.
• Thinking is “going beyond the information given” (Bruner, 1957).
• Thinking is a “complex and high-level skill that fills up gaps in the evidence” (Bartlett,
1958).
• Thinking is the “process of searching through a problem space” (Newell & Simon, 1972).
• Thinking is “what we do when we are in doubt about how to act, what to believe, or what
to desire” (Baron, 1994).
• Thinking can be either focused or unfocused. Focused thinking is goal-based problem
solving. Unfocused thinking is daydreaming and unintentional.
• People tend to assume creative thinking falls under unfocused thinking, but it actually
requires goals and problem-solving techniques, making it a focused thinking process.
• Introspection is the detailed, concurrent, and nonjudgmental observation of the contents
of your consciousness as you work on a problem
• Problems can be either:
o well-defined: they have a clear beginning and end, and rules or guidelines, or
o ill-defined: their goals, starting information, or steps are not clearly spelled
out. These problems may have no clear solution, and it’s hard to find a clear
solution path, largely because it’s hard to account for all potential
variables.
• The vast majority of psychologists tend to research focused problem solving in well-
defined problems, since it’s more easily tracked.
• Much of problem solving happens inside a “black box”: the process is largely
unconstrained and hard to observe directly.
Problem Solving Techniques
• Generate and Test: generate a number of candidate solutions, then test them.
o This is a useful technique if there are a very limited number of possibilities.
o It’s a problematic approach if there are too many possibilities, if there is no
guidance over generation, or if you can’t keep track of the possibilities that have
already been tested. Essentially, it’s limited by our working memory capacity.
o This is essentially the brute force or exhaustive search approach.
o Generation is not entirely random. It’s often guided by frequency, recency,
availability, familiarity, etc.
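The generate-and-test loop above can be sketched in code. This is a minimal illustration: the candidate set and the test condition are made up, and real human generation is guided by frequency, recency, and familiarity rather than exhaustive enumeration.

```python
from itertools import product

def generate_and_test(candidates, test):
    """Brute-force search: propose each candidate in turn, keep the first that passes."""
    tried = set()  # possibilities already tested -- in humans, limited by working memory
    for c in candidates:
        if c in tried:
            continue  # don't re-test a possibility already ruled out
        tried.add(c)
        if test(c):
            return c
    return None  # exhausted the search space without a solution

# Illustrative use: search all 3-letter strings over {a, b, c} for a target "word".
candidates = ("".join(p) for p in product("abc", repeat=3))
solution = generate_and_test(candidates, lambda w: w == "cab")
```

With only 27 candidates the approach works; the notes’ point is that it collapses when the space is large or untracked.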
• Means-Ends Analysis
o Every problem is a problem space.
o A problem space contains:
§ Initial state: conditions at the beginning of the problem.
§ Goal state: conditions at the end of the problem.
§ Intermediate states: the various conditions that exist along the path(s)
between the initial and goal states.
§ Operators: permissible moves that can be made towards the problem’s
solution (transitions, essentially).
• We aim to reduce the difference between the initial state and the goal state.
• Sometimes you have to move further away (back) from the goal state in order to make
progress, like when solving a Rubik’s Cube. Means-End analysis breaks down a bit in
these cases.
• Involves generating a goal and several sub-goals along the way.
• Any sequence of moves beginning at the initial state and ending at the final state is
considered a solution path. In many cases there are multiple solution paths.
• The Tower of Hanoi (moving the tower of discs from one peg to another) works well with
means-end analysis.
• CS students will appreciate that this is similar to a nondeterministic finite
automaton (NFA).
• Working Backwards
o Start at the goal state and create sub-goals that work towards the initial state.
o Steps will be the same, but the reasoning for taking those steps would be different
than when working forwards.
o Working backwards simply traces another possible solution path.
o Similar to means-end analysis in that we create sub-goals to reduce differences
between the current state and goal state.
• Backtracking
o Problem solving often involves making working assumptions.
o In order to correct mistakes in problem solving, we need to remember our
assumptions, assess which assumptions failed, and correct our assumptions
appropriately.
o Essentially, when we make a mistake, we need to move back to the state where
we were most right, then change the next steps to stay on track. For example,
when you get lost you need to go back to the last place where you were in the
right place/towards the right direction.
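Backtracking as described above is essentially depth-first search: commit to a working assumption (a move), and when it dead-ends, return to the last viable state and try a different branch. A minimal sketch, with a toy “maze” made up for illustration:

```python
def solve(state, is_goal, moves, apply_move):
    """Depth-first search with backtracking."""
    if is_goal(state):
        return [state]
    for m in moves(state):
        path = solve(apply_move(state, m), is_goal, moves, apply_move)
        if path is not None:       # the working assumption paid off
            return [state] + path
    return None                    # dead end: backtrack to the previous state

# Toy problem: walk from 0 to 4 taking steps of +1 or +2.
path = solve(0,
             lambda s: s == 4,
             lambda s: [d for d in (1, 2) if s + d <= 4],  # permissible operators
             lambda s, d: s + d)
```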
• Analogy
o Analogies work by making comparisons between two situations and applying the
solution from one of the situations to the other.
o We often find an existing domain to help explain something new.
o Analogies are especially useful in explaining unobservable phenomena.
o They can also be problematic because we can make unwarranted, faulty
assumptions.
o Major scientific discoveries must build upon knowledge that’s already known.
That’s what analogies achieve.
§ For example: The structure of atoms is analogous to the structure of the
solar system.
• Note: Analogies can be over-mapped. If an atom is like a solar
system, you might assume the nucleus is as big as the sun (which it
is not); mapping analogies too literally can create errors in
assumptions.
§ Darwin discussed how the use of analogies helped him develop his theory
on evolution.
§ Recall: the computer metaphor of mind (where our short-term memory is
analogous to RAM and long-term memory is analogous to a hard drive).
o Analogies involve similar structures at a deeper abstract level, but the two
situations can differ superficially.
• Reasoning by Analogy
o The tumor problem: given a human being with a tumor, and rays that destroy
organic tissue at sufficient intensity, by what procedure can one free him of the
tumor by these rays and at the same time avoid destroying the healthy tissue that
surrounds it?
o The general story: In a small country, a rebel general aimed to seize a fortress
ruled by a king. The fortress stood surrounded by roads, each rigged with mines
that would detonate under the weight of a large force, destroying the roads and
nearby villages. Realizing a direct assault was impossible, the general ingeniously
divided his army into smaller groups, sending each down a different road. By
avoiding triggering the mines individually, the groups converged simultaneously
at the fortress, leading to its capture and the king's exile.
o Gick & Holyoak (1980) presented participants with the tumour problem, but
first each person read the story of the general. Some were told the story of the
general contained a hint relevant to the tumour problem; others were not.
§ Results:
§ 75% of individuals told the story contained a hint solved the
problem correctly
§ Only 30% of individuals not told noticed the analogy
§ Only 10% solved the problem without the story
o The tumor/fortress analogy is structurally very similar, but superficially very
different.
o You can improve your personal performance with generating analogies through
practice.
o The tumor/fortress analogy is an example of a cross-domain analogy. Cross-
domain analogies are harder to handle due to their superficial dissimilarity.
o Fugelsang and colleagues presented individuals with sets of analogies and asked
them to judge how logical/true each was while their brains were being scanned.
They presented two types of analogies:
§ Cross-domain analogies: compare things that are superficially very
different and abstract (for example, a flock is to a goose what a
constellation is to a star)
§ Within-domain analogies: compare things that are more similar (for
example, a bracelet is to a wrist what a ring is to a finger)
o Results showed that cross-domain, more abstract and semantically distant
analogies activated the frontopolar cortex, suggesting that this region is sensitive
to abstract analogical reasoning
o Humans are probably the only species capable of analogical reasoning.
Blocks in Problem Solving
• Mental Set
o Some solutions are obtained by first perceiving the object and then representing it
in a different way, which involves insight; this takes practice.
o Mental set (or perceptual set) affects this, as it’s the tendency to perceive an object
or pattern in a certain way on the basis of your immediate perceptual experience,
preventing you from attempting other (possibly easier) solution paths.
o If given many algebra problems before a more general problem, the probability
that you’ll solve the problem in an algebraic way increases dramatically.
o Your mind makes unwarranted, faulty assumptions about the problem space.
o You can break the set if you step away (incubate) and then come back later.
§ Incubation is argued to keep unconscious processing active, so
that when you come back to the problem, you benefit from
unconscious knowledge gained during the incubation period
o An instance of mental set is functional fixedness: our tendency to see objects
purely in terms of their intended purpose
o An example of functional fixedness is not realizing you can use a screwdriver as a
pendulum to enable you to grab two ropes that are just a little too far away from
each other (so you can then tie them together).
• Lack of Problem-Specific Knowledge or Expertise
o Chess masters are able to choose the best move more easily than novices, despite
the fact that they must consider the same number of moves (all possible moves).
o Experts are able to extract more information from brief exposure.
o A chess master can recall the positions of more chess pieces on a chessboard
(after a brief exposure) compared to novices, but only when the pieces depict a
possible chess game.
o Experts have more domain-specific categorization skills, since they can draw
from more exemplars than novices can.
o Experts represent information at a deeper, more conceptual level.
o Novices will remember the superficial locations of the pieces, but chess masters
will remember the meaning of the state of the game being depicted.
o Having knowledge is equivalent to having the ability to move beyond superficial
details of the problem. Instead, the structural details are retained.
o You can’t really get around this without practice and training, unlike the other
blocks (which you can solve by just taking a break from the problem).
Decision Making (Chapter 11)
What is Decision Making?
• Decision making is a choice between competing options. We have to weigh the pros and
cons of each option.
• Most difficult decisions are made under conditions of uncertainty
• The study of decision making concerns how we make rational, optimal decisions.
o Rationality here refers to considering all your relevant goals and principles, not
just the first ones that come to mind.
• Decision making primarily occurs in working memory, so its capacity limits our abilities
to make rational decisions.
• There’s a lot of information out there that we could consider to make a particular
decision, but we can’t possibly consider everything.
• People don’t make decisions purely rationally – it’s too cognitively taxing. Instead, we
apply heuristics, which introduce biases.
• Lack of rational, optimal decision-making is argued to be a result of cognitive overload
which happens when the information available overwhelms the cognitive processing
available.
• Phases of decision making:
o Setting goals
o Gathering information
o Decision structuring (for complex decisions)
o Making a final choice
o Evaluating
• Probability can be generally thought of as a measurement of a degree of uncertainty
• Decision making is often based on probabilities. Bayes Theorem could be used to find
the optimal solution, however we’re interested in how we actually make decisions (not
necessarily the most optimal one).
• Decision making involves conflict processing between intuition and the ideal solution.
• Researchers look at how and why people deviate from optimal, rational choices.
• Every outcome is probabilistic in some way or another.
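As a concrete illustration of Bayes’ Theorem giving the normatively optimal answer the text contrasts with intuition (the disease and test numbers below are made up for illustration):

```python
# Bayes' Theorem: P(H|E) = P(E|H) * P(H) / P(E)
p_h = 0.01              # prior: 1% of people have the disease
p_e_given_h = 0.90      # test sensitivity: P(positive | disease)
p_e_given_not_h = 0.05  # false-positive rate: P(positive | no disease)

# Total probability of a positive test, summed over both hypotheses.
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

posterior = p_e_given_h * p_h / p_e  # P(disease | positive)
# Intuition typically says "about 90%"; the low base rate makes the
# optimal answer only about 15% -- the gap between intuition and the
# ideal solution that decision researchers study.
```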
Sources of Decision Difficulty
• There are two main sources of decision difficulty:
o Conflict: trade-offs must be made across different dimensions, by the decision
maker. For example: a car’s power vs. gas mileage, TV’s price vs. quality,
spending time at work vs. with family.
o Uncertainty: the outcome of a decision often depends on unknown/uncertain
variables or events. For example: the future demand of a product, or completion
time of a project.
Heuristics in Decision Making
• Making purely rational decisions is very difficult, so instead we use mental shortcuts. It’s
important to note that heuristics and biases are not all good or all bad.
• Availability Heuristic
o The ease with which things come to mind inflates our estimate of their prevalence.
o In a study where people were asked how many words in a book have the form
-----ing versus the form -----n-, most predicted more “ing” words than “n” words,
even though every “-ing” word also fits “-----n-”; it’s simply easier to recall
words that end in “ing”.
o Another example: asthma deaths. Many deaths are caused by asthma, and not
many are caused by things like botulism or tornadoes (relatively). People seem to
think the latter are more common causes of death, though, because they’re more
prevalent in the news media.
o People overestimate the frequency of things in real life, often influenced by media
• Representativeness Heuristic
o This has to do with the question “how representative is something?”
o “Of all families with six children, in what percentage do you think the exact birth
order was (a) BBBGGG and (b) GBBGBG?” These probabilities are equivalent,
but many people said (b) had a higher probability because it was more random.
o In this experiment, the vast majority of possible birth orders appear random, so
GBBGBG is representative of that set. Individually, however, the two orders have
the same probability.
o The more random-looking a sequence is, the more representative it is of the large
set of random-looking possibilities.
o Law of Small Numbers: the mistaken belief that a small sample should be
representative of the group as a whole; in reality, it need not be.
o Gambler’s Fallacy: gamblers don’t understand the independence of events.
Winning/losing streaks do not affect upcoming outcomes. Also, a slot machine is
programmed to pay out between 85% and 98% of the money wagered.
o "Man who" arguments: a misuse of the representativeness heuristic
§ The argument is usually advanced by someone who has just been
confronted with (for example) a statistic about smokers and lung cancer
§ "I know a man who smoked three packs a day and lived to be 110"
§ Such arguments ignore base-rate information
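The birth-order arithmetic above is easy to verify: each birth is an independent 50/50 event, so every specific six-child sequence is equally likely; what differs is the size of the category a sequence represents.

```python
from math import comb

# Any *specific* order of six births has probability (1/2)**6 = 1/64,
# whether it looks patterned (BBBGGG) or random (GBBGBG).
p_specific = (1 / 2) ** 6

# GBBGBG feels likelier because it represents the large category of
# mixed-looking sequences: 20 of the 64 possible orders have exactly
# 3 boys and 3 girls.
n_mixed = comb(6, 3)
p_category = n_mixed / 2 ** 6  # 0.3125
```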
• Framing Effects:
o People evaluate outcomes as changes from a reference point, their current state
§ Depending on how their current state is described, they perceive certain
outcomes as gains or losses
§ Can be thought of as context effects in decision making
o Kahneman and Tversky argued that we treat losses more seriously than we treat
gains of an equivalent amount
o Simply changing the description of a situation can lead us to adopt different
reference points and therefore to see the same outcome as a gain in one situation
and a loss in the other
• Anchoring Heuristic
o The initial starting point sets the anchor, and everything that follows will not
deviate (much) from that anchor.
o If you’re asked to estimate 8 × 7 × 6 × 5 × 4 × 3 × 2 × 1, you’ll get a different
result than when asked to estimate 1 × 2 × 3 × 4 × 5 × 6 × 7 × 8. The first number
sets the anchor for the estimation.
o The larger the starting number, the larger the final answer will be
o You will make adjustments from the anchor occasionally, but they will not be
significant enough to fix the anchoring bias.
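The two product estimates above are of course the same number; only the anchor differs. (Tversky and Kahneman reported median estimates of roughly 512 for the ascending order versus roughly 2,250 for the descending order, both far below the true value.)

```python
from math import prod

ascending = prod(range(1, 9))       # 1 × 2 × 3 × 4 × 5 × 6 × 7 × 8
descending = prod(range(8, 0, -1))  # 8 × 7 × 6 × 5 × 4 × 3 × 2 × 1
# Both equal 8! = 40320, yet estimates anchored on the small first
# number come out far smaller than estimates anchored on the large one.
```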
• Illusory Correlation
o The general tendency to see patterns where we believe a pattern exists, even
when no such pattern is present.
o For example, it’s common to think that more wild things and crimes occur under a
full moon. However, this relationship is often overestimated.
• Hindsight Bias: Tendency to consistently exaggerate what could have been anticipated in
foresight when looking back on an event
• Confirmation Bias
o We have an initial intuition and seek out additional information that supports that
intuition.
o Wason’s number study showed participants the numbers 2, 4, and 6, and asked
them to determine the rule that’s in play. They could ask if a certain group of
numbers is or is not following the rule correctly.
o Most people guess the rule is “even numbers” or “numbers increasing by two,”
but in this case it’s actually just three numbers in ascending order.
o People tend to test numbers that confirm their initial rule, which could take a long
time. You should instead make guesses that you think may discredit your theory
about what the rule is.
o In science, no theory can be absolutely proven true (confirming examples cannot
prove it), but it can be proven false.
o Wason’s Selection Task: if a card has an A on one side, then it must have
a 4 on the other. Some cards may violate the rule, but each card does have
one numerical side and one alphabetic side. Which of the following cards do you
have to check for validity? A D 4 7.
o Answer: A and 7. The A needs to be checked to ensure there’s a 4. The 7 needs to
be checked to ensure there’s not an A.
o A logically equivalent example could occur if you’re asked to determine legal
drinking ages. If you have someone drinking a beer, someone drinking a coke,
someone who is 22, and someone who is 16, who do you need to check?
o Answer: the beer drinker and the 16-year-old. We don’t care who drinks coke and
we don’t care what a 22-year-old is drinking, since they’re above the legal
drinking age.
o Why is the drinking age problem easier than the more abstract problem involving
letters and numbered cards, despite them being logically equivalent? Our
intuitions guide us based on our experiences. Also, cheater detection is innate.
o We’ve evolved to have cheater detection – that is, we’re really good at
determining if anyone is violating a social contract. We define cheating as
someone who deliberately takes benefits without paying costs or meeting certain
requirements.
o Cheater detection depends on the context to define cheating.
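The logic of the selection task can be sketched directly: a card needs checking only if its hidden side could falsify the rule “if A on one side, then 4 on the other.”

```python
def must_check(visible):
    """A card can falsify "A implies 4" only if it might hide the
    combination (letter A, number other than 4)."""
    if visible.isalpha():
        return visible == "A"   # an A could hide a non-4; a D can't falsify anything
    return visible != "4"       # a non-4 number could hide an A; a 4 can't falsify

to_check = [card for card in ["A", "D", "4", "7"] if must_check(card)]
# The same logic picks out the beer drinker (might be underage) and the
# 16-year-old (might be drinking beer) in the drinking-age version.
```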
Utility Models of Decision Making
• Normative models – define ideal performance under ideal circumstances
• Prescriptive models – tell us how we "ought" to make decisions
o Take into account the fact that the circumstances in which decisions are made
are rarely ideal, and provide guidance about how to do the best we can
• Descriptive models – simply detail what people actually do when they make decisions
o Not endorsements of good ways of thinking; rather describe actual performance
• Expected Utility Theory: a formal framework for decision-making under uncertainty,
emphasizing the calculation of expected value or utility by integrating subjective
preferences and probabilities.
o Utility – capture ideas of happiness, pleasure, and the satisfaction that comes
from achieving one or more personal goals
o A choice that fulfills one goal has less utility than a choice that fulfills the same
goal plus another
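The expected-utility calculation itself is just a probability-weighted sum. The gamble below, and the use of dollar amounts as utilities, are illustrative assumptions, not part of the theory.

```python
def expected_utility(outcomes):
    """outcomes: iterable of (probability, utility) pairs whose probabilities sum to 1."""
    return sum(p * u for p, u in outcomes)

# Illustrative gamble: a 25% chance of $100 versus a sure $20.
gamble = expected_utility([(0.25, 100), (0.75, 0)])  # 25.0
sure_thing = expected_utility([(1.0, 20)])           # 20.0
# The gamble has the higher expected utility, though decision makers
# who weigh losses more heavily than gains often take the sure thing.
```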
• Image theory: proposes that individuals don’t employ a formal, systematic process as
expected by Expected Utility (EU) models. Instead, decision-making predominantly
involves a phase termed "pre-choice screening of options," where individuals narrow
down their choices based on three key images:
o the value image (reflecting personal values),
o the trajectory image (pertaining to future goals),
o the strategic image (related to achieving those goals).
o Options incompatible with any of these images are discarded, resulting in a
noncompensatory screening process. If only one option remains, the decision is
whether to accept it. Otherwise, further decision strategies may be employed, such
as making trade-offs or seeking new options, challenging the traditional view of
decision-making.
• Recognition-primed decision making: experts are most likely to rely on intuition, mental
simulation, making metaphors or analogies and recalling or creating stories
• Much of the work in decision making is done as the experts "size up" a situation
o Compare the new situation to situations they’ve previously encountered, calling
to mind narrative stories about what happened in those situations
o Experts consider one option at a time, mentally simulating the likely effect of a
particular decision
Decision Making in the Split Brain
• Dual process theories distinguish between:
o Intuitive Type 1 processing, which drives heuristics
o Analytic Type 2 processing – requires working memory and also depends on
thinking styles (someone’s willingness to put effort into a specific type of
decision)
• The severing of the corpus callosum used to be done to treat people with severe epilepsy.
This, in turn, causes people to have split brains in which each brain hemisphere
essentially works independently.
• The left hemisphere is most often the primary source of language processing; thus, when
people with split brains are presented with visual information, it takes much longer for
them to consciously verbally process certain aspects of what they are seeing.
• For example, when people with split brains are shown pictures of faces made out of fruits,
the left hemisphere will recognize the fruits while the right hemisphere will look at the
entire face.
• One real-world example we saw of this was with split brain patient Vicki. When she was
presented a picture of a woman on the phone, she could only verbally identify the
woman. When asked to then close her eyes and write what she saw, she wrote telephone,
signalling that her left hemisphere did process the whole picture, but verbalizing that was
difficult until she was made aware of it.
• Another example of this independent function between hemispheres was Joe, who
showed that people with split brains can draw two distinct shapes at the same time (one
with each hand), whereas normal people cannot.
• A probability-guessing study on split-brain patients was done to examine how
individuals try to understand and seek patterns, and what role each hemisphere of the
brain might play in decision-making processes.
• JW and VP were two split brain subjects asked to guess where stimuli would appear,
either top or bottom of the screen.
o The probability of the positions was manipulated so that 80% of the time it was at
the top and 20% of the time at the bottom. Even though the rational thing to do is
to just press the top button, we tend to probability match rather than maximize
our chances, decreasing performance from 80% to 60-70%.
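The 80% versus 60-70% figures above follow directly from the two strategies: maximizing guesses “top” on every trial, while probability matching guesses “top” on 80% of trials independently of where the stimulus actually appears.

```python
p_top = 0.8  # the stimulus appears at the top of the screen 80% of the time

# Maximizing: always guess the more frequent location.
accuracy_maximize = p_top  # correct on exactly the 80% of top trials

# Probability matching: guess "top" with probability 0.8, "bottom" with 0.2.
# A guess is correct when it happens to agree with the stimulus location.
accuracy_match = p_top * p_top + (1 - p_top) * (1 - p_top)  # 0.68
```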
• In split-brain subjects, stimuli presented on the left side of the screen were responded to
with the left hand, and stimuli on the right side of the screen with the right hand (each
visual field projects to the contralateral hemisphere, which controls that hand).
• What the results showed was that:
o Right hemisphere maximized chances, always chose the option that occurred the
most frequently in the past
o Left hemisphere probability matched, they matched the probability of the
incoming stimuli
§ Neural processes responsible for searching for patterns in events are
housed in the left hemisphere
• Other studies on split-brain patients examined causal thinking, further asserting that:
o Causal perception: simple, perceptual events that require no real thinking or
inference (like predicting the collision between two balls) will often be a right-
hemisphere-dependent task
o Causal inferences: inference tasks that require more high-level thought (such as
blicket detector tasks) will be supported by the left hemisphere
§ The left hemisphere is trying to find causal patterns. It tries to out-guess
randomness that is not actually there. This is called the left hemisphere
interpreter.
§ The ability to form inferences about two events is part of the left
hemisphere.
o Results of this study showed that people with intact brains transfer information
almost instantaneously from one hemisphere to the other, so either type of task is
easy because it is bilaterally supported
o Split brain participants performed worse overall due to independent functioning of
each hemisphere.
o The right hemisphere responded in causal inference tasks but could not
determine the pattern.
o Perceptual inference tasks were solved by the right hemisphere; the left
hemisphere could not assess the causal connection.
• This tells us about the different functions of the brain, but also about how causality
likely involves multiple types of representations depending on the type of stimuli
we are presented with.
Decision Making, Emotion, and the Brain
• Cognition is part of a highly interactive system involving the interplay between attention,
perception, emotion, and social interactions.
• Decisions don’t happen in cognitive vacuums; they are also shaped by emotions,
which drive heuristic, gut-based choices. Take emotional reactivity into account
when making decisions, especially interpersonal decisions that involve other
people.
• Neuroeconomics: This is a new field that examines how the brain interacts with the
environment to enable us to make complex decisions.
• Ultimatum game: you have the opportunity to split $10 with someone. You’ll receive a
one-time offer from your partner, then you must decide to accept or reject the offer. If you
reject the offer, you both go home with nothing.
• If you’re offered $1, you’ll probably reject it to punish the other person, so they won’t be
able to keep $9. This punishment is because of their greed.
• The rational thing to do in this situation is to take any offer of money, because any
amount is better than no amount.
• Remember: this is a one-time offer. There are no future offers, so the punishment is not
meant to change their behavior to benefit you in the future.
• Unfair offers activate three regions of the brain:
o Right Dorsolateral prefrontal cortex – any difficult task, working memory
region
o Anterior cingulate cortex – activated during conflict processing
o Bilateral insula – disgust response, emotionally aversive
• Why do we want to prevent their greed? We get an initial disgust response
• This experiment is an example of a highly cognitive act that is affected by emotions.
• Oftentimes, emotional processing happens more quickly than cognitive processing.
Individual Differences (Chapter 12)
• There is less variability in low-level processes.
• The more high-level or complex a task is, the more it will likely differ based on
individual differences.
• Recall that chess novices have a superficial representation of the board while experts
have a deeper understanding. They define importance differently, too.
• Individual differences and expertise will change cognition.
Aging
• Aging also changes cognition as part of individual differences.
• Processing speed slows down.
• Memory also declines with age; “senior moments” occur. Episodic and working
memory decline steadily with age, due to degeneration of the frontal lobes.
• Semantic memory remains intact and continues to increase over lifespan
• Indirect, unconscious memory is relatively preserved over time.
• As you age, three brain changes occur:
1. Reductions in brain volume stemming from gray and white matter atrophy
(shrinkage).
2. Synaptic degeneration which impairs communication between neurons.
3. Reductions in regional cerebral blood flow (rCBF) to the brain.
• Changes (1) and (3) can be slowed down, by doing things like aerobic exercises. It’s hard
to say the same about (2) because it’s harder to measure.
• There are reductions in bottom-up perceptual processing abilities, and an increase in top-
down processes (involving the frontal lobes), as people age.
• Compensatory strategies are used to alleviate these ability reductions.
• Arthur Rubinstein, a pianist, used several compensatory strategies as he got older:
o Selection. He played fewer pieces.
o Optimization. He practiced these pieces more often.
o Compensation. He would play slower before fast segments to make the fast
segments seem faster. He did this to counteract his loss in mechanical speed.
• People often use distributed cognition (off-loading) more as they get older. They use
Google more, or rely on partners/family members to fill in gaps.
• A cue experiment was conducted where youth and adults had to remember pairs of
words. The adults were separated into two groups: low performance, and high
performance (at this specific task). The right hemisphere was active for this task in young
adults. The older, low-performance adults showed even more intense right frontal lobe
activation. The older, high-performance adults recruited bilateral frontal lobe
activation, seemingly adapting to the decline in neural structures this way.
• The more active the mind is, the less of a memory deficit that forms over time.
Surprisingly, physical exercise also helps maintain cognitive abilities. One study showed
that regular exercise reduced the likelihood of developing Alzheimer’s disease, even in
people who were predisposed to the disease.
Sex Differences
• People place a lot of emphasis on finding differences between the sexes, and not on other
arbitrary differentiators like eye color.
• Differences might be smaller than researchers think (or smaller than they’d like to admit).
• In general, males outperform females in spatial tasks, and females outperform males in
verbal tasks. This outperforming is within one standard deviation, so it isn’t that big of a
difference. This is a relatively stable finding.
• Mean differences but large overlapping distribution. Some females outperform males on
spatial tasks, for instance, and some males outperform females on verbal tasks.
• File drawer problem: it’s hard for researchers to publish null findings. This is a problem
for research in all fields. They may exaggerate their results a bit as a consequence of this,
in order to publish something.
• Experimenter expectancy effects: confirmation bias.
• Why are there some slight sex differences? There are two theories:
o Socialization: reading materials, communication styles, access to puzzles & video
games all differ between the sexes.
o Lateralization: it might be genetics. Women have more brain resources that are
used for verbal processing. They have bilateral resources (both hemispheres),
which makes them more likely to retain their verbal skills after brain damage than
men. Males have more lateralized processing, showing greater asymmetries in the
functioning of their two cerebral hemispheres.
The Big Picture, and Future Directions
• Conscious experience is largely reconstructive.
• Cognition = Attention + Perception + Emotion + Social Interactions.
• Cognitive science is used in industry for informing law, and for interacting with others.
• Cognitive science can inform developmental work.
o Emotional processes cause dumb things to happen.
o An experiment was conducted that asked teens and adults if something was a
good idea or not, while they were in an MRI machine.
o Good ideas were judged quickly by both teens and adults, though generally
more quickly by adults.
o Bad ideas took slightly longer for both adults and teens, but teens took
significantly longer than adults to think about bad ideas.
o Why is this? Adults show bilateral insula activation (an initial disgust response).
It’s an emotionally negative response. Meanwhile, teens activate the analytical
regions of their brain (Dorsolateral prefrontal cortex)
• An fMRI could be used as a lie detector.
o Traditional polygraph detectors measure bodily response (sweating, heart rate),
but they aren’t helpful if the individual doesn’t feel guilty.
o Using an fMRI as a lie detector works but has problems. It’s still better than
polygraph lie detectors though.
o An experiment gave participants a group of cards. They were asked to tell the
truth about certain cards and to lie about others.
o Lies are indicated by heightened activity in the dorsomedial prefrontal cortex
(DMPFC), which supports self-referential processing and internal monitoring of
cognitive states
Text 3 Lecture + Textbook Notes


Quizlet Decks
Concepts and Categorization (Chapter 7)
• A concept is defined as a mental representation that is used for a variety of cognitive
functions (including memory, reasoning, and language) and that stores as much knowledge
as is typically relevant for that object, event, or pattern.
• Categorization is the process by which things are placed into groups (which we call
categories).
o A category is nothing more than a group of similar objects or entities.
o We categorize things into sections, but many things belong in multiple sections.
o The categorization of concepts is the grouping of complex objects.
o Fits in the mental model as part of long-term memory.
o Categorization is how our semantic memory system is organized. It’s web-like.
o We study it because much of human cognition involves interpreting a large
variety of sensory inputs in terms of a finite number of meaningful concepts.
o Doctors use categorization to find brain tumors and make diagnoses.
o “Experts” in many industries are really just people who are great at
categorization.
• People used to have a richer understanding of the world around them because the
technology was simpler. Today, if you ask someone how a helicopter works (for
instance), you’ll get a very high-level overview.
• Today, people rely on other sources to augment their less-rich understanding. The Internet
is a large source for this off-loading.
• Off-loading increases with age. Memory processes get worse as you age, but you get
more efficient at off-loading.
Functions of Categorization
• Categorization allows you to understand individual cases that you haven’t seen before.
You can make inferences about these cases.
• It reduces the complexity of the environment by organizing everything in a logical way.
• It requires less learning and memorization. It reduces redundancy.
• It provides a guide to appropriate action. For example: you may want to be aware of the
difference in appearance between a dolphin and a shark before going for a swim.
• Category based induction – you can say things about something by knowing what the
category is (for example: if someone tells you an ostrich is a bird, you can assume it has
wings)
Classical View
• Category membership is determined by a set of defining (necessary and sufficient)
properties. That is, all properties must be present.
o Example: the “bachelor” category may have the defining properties “unmarried”,
“human”, “adult”, and “male.”
o Example: the “triangle” category may have the defining properties “three-sided”,
“planar”, and “geometric figure.”
• Does a preteen boy have the properties of a bachelor? Yes, however it’s not really
appropriate or sensical to categorize him as a bachelor. This is a flaw in the classical
view.
• The classical view works quite well for simple objects like the triangle. There is no
triangle that is more (or less) typical than any other triangle.
• Let’s say we have a “dog” category, with one of the defining properties being “four legs.”
What happens if the dog somehow loses a leg? Does it lose its dog-ness? It shouldn’t.
• The classical view makes two assumptions: concepts are not representations of specific
examples, they are a list of characteristics, and membership in a category is all or none.
• Problems with the Classical View:
o There are no defining features for many natural-kind categories, such as games.
Nothing is common among all games, but there are similarities between certain
games.
o Eleanor Rosch and colleagues argued that this view has a typicality problem.
People judge members of a category as having different “grades” of
membership/different “goodness” levels – i.e., different degrees of family
resemblance.
§ For example: a tomato will have a lower grade in fruit membership than
an apple, because an apple is more typically thought of as a fruit. The
classical view assumes there is no graded membership – all members are
created equal.
Prototype View
• A prototype is an idealized representation of a class of objects.
• It includes features that are typical rather than necessary or sufficient. We no longer need
all characteristics to be present.
• A prototype is formed by averaging out the characteristics of category members we have
seen in the past.
• Members within a category differ in terms of prototypicality – some members can be
more typical than others (i.e., high-prototypicality vs low-prototypicality)
• Chairs all look different (have no single defining qualities), but they generally have a set
of characteristics. There are some exceptions, however, which is why the prototype view
beats the classical view in this sense.
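The prototype view can be sketched computationally: treat each category member as a numeric feature vector, average the members into a prototype, and classify new items by their nearest prototype. This is an illustrative sketch only; the feature dimensions and values are invented, not taken from the notes.

```python
# Sketch of the prototype view (illustrative assumptions throughout):
# the prototype is the average of previously seen members' features,
# and a new item is assigned to the category whose prototype is closest.

def prototype(members):
    """Average the feature vectors of known category members."""
    n = len(members)
    return [sum(vals) / n for vals in zip(*members)]

def classify(item, prototypes):
    """Pick the category whose prototype is nearest (Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(prototypes, key=lambda cat: dist(item, prototypes[cat]))

# Hypothetical feature dimensions: [roundness, sweetness, grows-on-a-plant-above-ground]
prototypes = {
    "fruit":     prototype([[0.9, 0.8, 1.0], [0.8, 0.9, 1.0]]),
    "vegetable": prototype([[0.3, 0.2, 0.0], [0.5, 0.1, 0.0]]),
}

print(classify([0.85, 0.7, 1.0], prototypes))  # an item resembling the fruit average
```

Note that no individual instance is stored; only the fuzzy average survives, which is exactly what the exemplar view later objects to.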
Determinants of Typicality
• Family resemblance is another area where the prototype view shines. Family members
look similar but they don’t all have one characteristic that is exactly the same. Some
family members share some characteristics of some other family members, but those
characteristics aren’t shared among all family members.
o Category membership is graded, category boundaries are often fuzzy, and
prototypes serve as reference points.
o Typical examples are classified faster than non-typical ones, as they share many
features with other category members and few features with other categories.
• Overlapping features between members of a category predict typicality.
• Example: fruits.
o Apple: red, sweet, crunchy, round.
o Orange: juicy, sweet, round, soft.
o Coconut: hard, brown, white inside, tropical.
• Example: furniture.
o Chair: sit on, legs, armrests, upholstery.
o Sofa: cushions, sit on, upholstery, armrests.
o Rug: stand on, woven, soft, flat.
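The feature lists above can be turned into a minimal family-resemblance score: count how many features each member shares with the other members of its category. The feature sets come from the fruit example in the notes; the scoring rule itself is an illustrative simplification.

```python
# Sketch of family-resemblance scoring: an item's typicality is the number
# of features it shares with the other members of its category.
# Feature sets are taken from the fruit example in the notes.

fruits = {
    "apple":   {"red", "sweet", "crunchy", "round"},
    "orange":  {"juicy", "sweet", "round", "soft"},
    "coconut": {"hard", "brown", "white inside", "tropical"},
}

def typicality(item, category):
    """Sum of feature overlaps with every other member of the category."""
    return sum(len(category[item] & category[other])
               for other in category if other != item)

scores = {name: typicality(name, fruits) for name in fruits}
print(scores)  # apple and orange overlap ("sweet", "round"); coconut shares nothing
```

On this toy measure, apple and orange come out as typical fruits while coconut scores zero, matching the intuition that high-overlap members are judged more prototypical.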
• The prototypicality of a category member predicts the performance on a number of
tasks.
o For instance, if you were asked to identify if a given concept is included in a
category (“is X a fruit?”), you would perform worse on less typical members.
You’d respond slower to “is tomato a fruit?” than “is apple a fruit?”.
o Concepts will still have features in common with other categories as well, just less
features in common than the main category it’s considered to be part of.
o Categorization is nuanced; grouping has some form of graded or hierarchical
nature when you consider that any two random objects can be defended as similar
in infinite ways (you can move both of them, etc.)
§ Basic level categories - include members that are maximally similar to
one another (such as piano and guitar)
§ Superordinate levels of categories – contain members (such as musical
instruments) that are dissimilar in several respects
§ Subordinate levels of categories – contain less distinct members than
basic level categories (such as grand piano and upright piano)
• Problems with the Prototype View:
o Categories are variable. The concept of a “game” may differ depending on you –
it may mean paintball, online poker, or basketball, for instance.
o Typicality is not fixed. Typicality varies as a function of the context it’s in. A
robin may seem typical in the context of a park and less typical in the context of a
farm or backyard.
o Our mind can flexibly change how categories are organized, posing issues for
these simplistic models
Exemplar View
• Opposes the two previous views, which argue that concepts are some sort of mental
abstraction or summary in which individual instances are not specifically stored/mentally
represented but are instead averaged into some sort of composite representation.
• The Exemplar View argues concepts are composed of previously stored instances
(exemplars).
• Categorization occurs by comparing the current instance with previous instances that are
stored in memory.
• Think of semantic network – the closeness to which something is connected to a category
depends on experienced associations; sort of in a webbed network.
• In Allen and Brooks’ 1991 experiment, they investigated how prior exemplars stored in
memory affect categorization beyond simple defining features. Participants were trained
on pictures of fictional creatures: “builders” (which had to have at least two of three
features: long legs, angular body, and spots) and “diggers,” which lived in holes they dug.
o In the test phase, the researchers introduced new stimuli that were slightly
different from the training examples. These new stimuli fell into two categories:
§ Positive Match: These were creatures that were both categorically builders
(followed the rule) and visually similar to the training examples of builders.
For example, a creature with long legs and an angular body, similar to the known builders.
§ Negative Match: These were creatures that were categorically a builder
but were visually more similar to the training examples of diggers. For
instance, a creature with long legs and spots, similar to the known builders
but embedded in the environment of a digger.
o Results showed that participants were slower and made more errors in
categorizing negative matches. This suggests that physical similarity to stored
exemplars influenced categorization, even when a clear rule was available.
• The exemplar view discards less information than the prototype view.
• It also explains why definitions don’t work (there are only instances, no definitions), and
why typicality occurs (objects more like exemplars stored in memory are classified faster).
• There are no fuzzy prototypes being stored in memory under this view.
• This view shows that in many cases we’re interested in superficial similarity, and not
actually digging for similar meaning.
• The best way to train doctors to make medical diagnoses is to show them actual instances
that represent all the different features present in specific illnesses rather than teach them
rules to follow to make a diagnosis.
• Problems with the Exemplar View:
o It does not specify which exemplars will be used for categorization.
o It requires us to store a lot of exemplars. We do still only have a cranium of finite
size, however!
• A stimulus-driven theory with more emphasis on bottom-up processes: it does not rely on
applying predefined rules or abstract prototypes, but is instead data-driven.
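A minimal sketch of exemplar-based categorization, loosely echoing Allen and Brooks’ builders/diggers stimuli (the specific feature sets below are invented): every instance is kept in memory, and a new item receives the category of its most similar stored exemplar.

```python
# Sketch of the exemplar view: no summary representation is stored; a new
# item is categorized by comparing it to individual remembered instances.
# Similarity here is simply the number of shared features (an assumption).

stored = [
    ({"long legs", "angular body", "spots"}, "builder"),
    ({"long legs", "angular body"},          "builder"),
    ({"short legs", "round body"},           "digger"),
    ({"short legs", "round body", "spots"},  "digger"),
]

def classify(item):
    """Return the category of the most similar stored exemplar."""
    best_exemplar, best_category = max(stored, key=lambda ex: len(item & ex[0]))
    return best_category

print(classify({"long legs", "spots"}))
```

Because classification depends on superficial overlap with remembered instances rather than on a rule, a sketch like this reproduces the Allen and Brooks pattern: an item can satisfy a category rule yet still be pulled toward whichever stored exemplars it happens to resemble.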
The Schemata View
• Concepts are schemas (organized frameworks for representing knowledge).
• Schemata involve both abstractions across instances (prototype view) and information
about actual instances (exemplar view).
• Problems with the Schemata View:
o It does not specify boundaries among individual schemata.
o This view is difficult to test empirically for validity.
• Mainly interacts with top-down processes.
The Knowledge-Based View
• Argues that the relationship between a concept and examples of that concept are
analogous to the relationship between a theory and data supporting that theory.
• A category becomes coherent only when you know the purpose of the category. This
provides a theory for the purpose of objects.
o For example, one category could include children, pets, photo albums, family
heirlooms and cash. On a surface level, these things don’t seem to be too related,
but in the context of scenario where a fire is about to engulf a house, these things
fall neatly into the category of “things to save”.
• Categories interact more with top-down processes – until we know what the purpose of a
category is, our personal experiences/expectations colour where we categorize things.
• This view captures the highly flexible nature of categorization.
o Categories can change as our goals/tasks change.
• Concepts are theories, and instances are data.
• Categorization involves:
o Knowledge of how concepts are organized.
o The purpose of the category.
o People’s theories about the world.
o Expectations.
• It is difficult to test this view as well.
Learning Strategies
• Bruner et al.’s (1956) experiment used cards as stimuli, each of which contained one of
three shapes (circle, square, cross), one of three colors (black, white, striped), and one of
three borders (one, two, three).
o Participants were given an instance (such as “black circles”), without the concept
itself.
o They then had to pick additional cards which they thought might be instances of
the same concept. They received feedback on their guesses.
o This experiment is an analytical/conscious/explicit strategy for learning new
concepts.
• Learning strategies might use a conscious, strategic, or unconscious strategy.
o When the task is simple (few objects and categories), conscious learning is the
way to go. Learning a language is more complex, so an unconscious approach
would be better in that situation.
o Simultaneous scanning: testing multiple hypotheses at the same time (such as
“white circles”). This has heavy demands on working memory.
o Successive scanning: testing one hypothesis at a time (such as “black figures”).
This is inefficient but has low demands on working memory.
o Conservative focusing: choosing cards that vary in only one respect from a
positive instance (known as the “focus card”). This is both efficient and easy.
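The conservative focusing strategy can be sketched as a small program: starting from a known positive instance, change one attribute at a time; if the changed card is still a positive instance, that attribute is irrelevant to the concept. The hidden concept below ("black circles") is the example from the notes; everything else is an illustrative assumption.

```python
# Sketch of conservative focusing from Bruner et al.'s card task.
# Cards have three attributes; the hidden concept is "black circles".

VALUES = {"shape": ["circle", "square", "cross"],
          "color": ["black", "white", "striped"],
          "borders": [1, 2, 3]}

concept = {"shape": "circle", "color": "black"}   # hidden rule: black circles

def is_instance(card):
    """Feedback oracle: does this card satisfy the hidden concept?"""
    return all(card[attr] == val for attr, val in concept.items())

def conservative_focusing(focus):
    """Vary one attribute of the focus card at a time to find the concept."""
    relevant = {}
    for attr, options in VALUES.items():
        changed = dict(focus)
        changed[attr] = next(v for v in options if v != focus[attr])
        if not is_instance(changed):          # changing it broke membership,
            relevant[attr] = focus[attr]      # so the attribute is relevant
    return relevant

focus_card = {"shape": "circle", "color": "black", "borders": 2}
print(conservative_focusing(focus_card))  # recovers the hidden concept
```

Each attribute is tested exactly once against the feedback oracle, which is why the strategy is efficient and places little load on working memory compared with scanning over whole hypotheses.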
Implicit Concept Learning:
• Reber (1967, 1976) conducted a series of studies using artificial grammars to see whether
people can retain and make use of information about specific exemplars.
• In the initial study, one group was given a sequence of letters generated by an artificial
language, the other was given a random sequence of letters to memorize. Neither group
was told that the strings of letters followed any underlying grammar rules.
o Results showed that the random group performed poorer than the grammar group.
o This is an example of how language is learned implicitly.
o Participants were not consciously aware there was a grammar rule in play at all.
o If participants were told there were grammatical rules the strings of letters
followed, they performed worse. Using a conscious strategy in complex situations
will hurt you.
o Those who did well implicitly used the grammatical rules to improve their
performance, but were not consciously aware of said rules.
• The similarity aspect is a bottom-up process, and the explanation is top-down.
• This is a non-analytical/unconscious/implicit learning strategy.
• Many of the things we’ve actually acquired and learned, we can’t articulate it in a
conscious fashion.
Neuroanatomical Correlates of Category Representation
• Seger et al. did a study to assess how the brain changes when acquiring prototypes.
• Participants sorted abstract drawings into two groups, based on two related
prototypes (“Smith” and “Jones”). They were not shown the actual prototypes; they
simply tried to learn to distinguish between the paintings of the two artists.
o Each exemplar was very similar to the unseen prototype of one painter, and
dissimilar to the other
• In initial naive learning, stimuli could only be processed as specific visual patterns,
activating the right hemisphere (Right Prefrontal, Right Parietal).
• As learning progressed, the left hemisphere became more active as participants gained
the ability to abstract general properties (Left Parietal) and generate and reason with a
verbal rule (Left Prefrontal).
• The basal ganglia, a major brain relay center, plays a role in processing visual stimuli,
selecting actions, and learning from feedback during categorization tasks.
• Categorization tasks involve a complex interplay between cortical and subcortical brain
regions.
Scripts
• Scripts are schemata for routine events, such as the sequence of events people
follow when ordering a meal.
• Bower, Black, and Turner (1979) conducted studies on script usage. Participants were
asked to write scripts for various events, revealing a significant overlap in the characters,
props, actions, and order mentioned. Notably, participants tended to mention general
actions, reflecting a broader description level.
• Medin (1989) proposed the psychological essentialism framework, which assumes:
o People act as if objects, people or events have specific essences
o Essences constrain variations within a category
§ Ex. people can vary in height, weight, hair colour, eye colour, bone
structure, and the like, but they must have certain other properties in
common by virtue of the underlying essence they share.
o Theories about essences connect deeper properties (e.g., DNA) to superficial ones
(e.g., eye colour)
• Knowledge of category essence varies with expertise
o Ex. biologists possess more knowledge about genetic structure, influencing their
classifications.
o One effective classification strategy is perceptual similarity, though experts may
classify things based on deeper principles when expertise allows.
o Classification of instances can also change as people become more experienced or
knowledgeable about certain things.
• The framework assumes there are three types of concepts:
o Nominal-kinds concepts – clear definitions, necessary and sufficient features;
emphasizes concept definition.
o Natural-kinds concepts – relates to naturally occurring things (e.g., gold or
tiger). Emphasizes definitional/essential features (especially about molecular or
chromosomal structure)
o Artifact concepts – constructed for a purpose or task. Emphasizes purpose or
function, explained within a knowledge-based approach.
Summary:
• Categories are classes of similar objects, events, patterns, things etc.
• There are five distinct approaches to the study of categorization and concepts:
o Similarity based approaches include: Classical, Prototype and Exemplar
o Explanation based approaches include: Schemata and Knowledge-Based
• Categorical structure can be learned both explicitly and implicitly depending on the task.
o Simpler tasks (with few objects and categories) can be learned consciously; this
requires working memory. More complex structures are learned implicitly.
• Categorical learning is associated with a shift from right to left hemispheric processing in
the brain.
Visual Imagery and Memory (Chapter 8)
• Imagery is reprocessing information using the same components that were used when it
was encoded for the first time.
• Pictures evoke emotions, especially those that are particularly hard to express in words.
• Pictures can express ideas much more efficiently than words.
• Images are much more memorable than other representations. They’re encoded in a richer
way, which makes them easily retrievable.
• Various techniques are used to increase chances of remembering, such as mnemonics.
• Mnemonics serve as an effective retrieval cue, many (though not all) use imagery
o Reductionist mnemonics – (doesn’t use imagery) reducing content to a concise
sentence (for example: “HOMES” to remember the names of the Great Lakes).
o The method of Loci requires the learner to imagine a series of places (locations)
that have some sort of order to them.
§ This is where you imagine a series of places that have a specific order to
them. You imagine the items that you have to remember at those locations.
You can perform a mental walkthrough to recreate the images of the items
along the path. The geographical locations are the cues.
§ There is an issue, however: proactive interference.
Techniques for interacting with images
• Bower (1970): paired-associate learning. He argued that interacting images are better
than non-interacting images. Remembering the association would act as a retrieval cue.
• Wollen et al. (1972) argued about interactive and non-interactive images, and bizarre vs.
non-bizarre images. They discovered that interactive images are recalled better, and
the bizarreness of an image has no effect.
• The Pegword Method: create an image of memory items with another set of ordered
cues. Create an easily recalled list of nouns (which is the ordered cues), then picture each
memory item interacting with one of the nouns. There’s more to remember, but it
provides more retrieval cues. It works well, but it’s fairly limited in terms of how long the
list can be. There is also proactive interference with associating new items over and over.
• Dual code hypothesis (Paivio, 1969): memory contains two distinct coding systems:
verbal and imagery. This improves memory over having only a single code. We have
multiple representations of the same information, which means we have two sources to
recall the target information from.
o He did a study with four lists of noun pairs: CC, CA, AC, and AA (concrete /
abstract). The best results were for both concrete objects, followed by CA, then
AC, then AA. CA was second because the first concrete object was used as a peg
to attach the abstract noun to.
§ Analysis: people spontaneously make images for concrete nouns, and
imagery varies with concreteness. Concrete nouns are dual-coded, but
abstract nouns are only coded verbally. The first noun in the experiment
acted as a peg for the second noun, so the imageability of the first noun is
crucial.
• Relational-organizational hypothesis (Bower 1970b): imagery improves because it
produces more associations between the items to be recalled. Essentially, imagery works
by facilitating the creation of a greater number of “hooks” that link the to-be-remembered
pieces of information.
Mental Rotation
• When asked to perform a letter rotation (determining if a letter is forwards or
backwards), it’s quicker to determine letters that require less of a rotation.
• Research shows that you actually mentally rotate in your mind (no shortcuts).
• There are individual differences in how good people are at generating images.
• Spatial and visual processing are quite similar. Mathematical ability is highly associated
with mental rotation ability (since spatial processing is really just comparing
magnitudes).
• Symbolic distance effect: if you create an image of two things, the difference between
the two determines the performance of the comparison. For example, comparing an
elephant and a cat is quicker than comparing a beaver to a cat or a mouse to a rat.
Comparing 7 to 2 is faster than comparing 3 to 2.
• Shepard and Metzler (1971) conducted an experiment where they asked participants to
compare two objects and decide if they are the same. The objects were presented at
different orientations/rotations. They discovered the time to compare is directly related to
the degree of rotation required. In their experiment, there was a rotation rate of 60
degrees per second.
o Men perform better than women (by one standard deviation) for mental rotations
on average, and the converse is true for information processing. Females typically
have more math anxiety on average, so this is fairly logical.
o Performance is likely better for rotating letters exactly 180 degrees (upside down).
o The mind takes a linear increase in amount of time depending on the degree of
rotation
o The more it’s rotated the longer it takes to identify the object
o Issues with this study are possibility of interference from demand characteristics –
that is, the stimuli looked at are cognitively penetrable (you know what the
researcher expects to see)
§ Still, it’s hard to imagine how such a linear pattern could be solely a result of
demand characteristics.
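The linear rotation result can be expressed as a simple model: response time grows linearly with angular disparity, at roughly the 60-degrees-per-second rate the notes mention. The 1-second base time below is an assumed illustrative constant for encoding and responding, not a figure from the study.

```python
# Sketch of the linear RT pattern from Shepard and Metzler's task:
# response time grows linearly with the angular disparity between objects.

ROTATION_RATE = 60.0   # degrees mentally rotated per second (from the notes)
BASE_TIME = 1.0        # assumed fixed cost (encoding + responding), in seconds

def predicted_rt(angle_deg):
    """Predicted response time (s) for a given angular disparity."""
    return BASE_TIME + angle_deg / ROTATION_RATE

for angle in (0, 60, 120, 180):
    print(angle, predicted_rt(angle))
```

The key empirical signature is the constant slope: each additional degree of disparity adds the same increment of time, which is what suggests a continuous, analog rotation process rather than a shortcut.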
Image Scanning:
• Kosslyn in 1973 said if imagery is spatial (like perception), then it should take longer for
participants to find parts that are located further away from the initial point of focus.
• In his first study, Kosslyn had participants look at images, memorize them and then close
their eyes and picture the objects and identify specific points of that object
o The farther the target point was from the initial point of focus, the longer they
took to locate it in the mental image.
o Reaction time was measured by hitting a button when they had “found” the spot
o Confounds included demand effects and also intervening items (as they found the
focal point of the object, they passed other visual parts of that object).
• He then performed a follow up study with a map with focal points distributed further
apart, and discovered that people would take longer to scan larger distances in their
mental imagery, which suggests that it is spatial.
o Like his original study, results could have been affected by demand
characteristics.
Properties of Mental Images
• Implicit encoding: images give access to information that was not explicitly encoded.
This information can be accessed explicitly
• Brooks (1968) - An example of implicit encoding is in determining the number of
cupboards in your kitchen. You probably haven’t counted, but you could use a mental
image of your kitchen to determine the proper answer.
• Perceptual equivalence: imagery activates similar systems as perception does (although
not as strongly).
• Perky (1910) Study: Lab setting - participants were instructed to stare at a screen and
form strong mental images of some stimuli (ex. banana)
o Participants were then distracted, and when told to redirect their focus onto the
screen, a faint image of whatever they were imagining was presented on the
screen.
o Participants had a hard time distinguishing what was shown to them from what
was in their mind.
o This further shows that mental imagery, like memory, is a reconstructive process.
• Spatial equivalence: spatial relationships in images correspond to spatial relationships in
actual physical space. Recall Kosslyn’s scanning studies (involving the map) from earlier.
Blind people had the same results as sighted people after learning the map by touch.
• Transformational equivalence: image transformations and physical transformations are
governed by the same laws of motion. Recall the mental rotation studies from earlier – it
takes time to rotate (depending on the angle of rotation), it seems continuous (there are
intermediate stages), and the entire object is rotated, not just its parts.
• Structural equivalence: images have a coherent structure, are well organized, and can
be reorganized and reinterpreted. Generating images depends on the complexity of the
image (as picture inspection does as well). It takes longer to construct a detailed image
than one with less detail.
• We can sometimes think of an image in a simpler way to make it seem less complex. We
could think of two overlapping rectangles (in the shape of a plus), or we could think of it
as five squares (which is more complex).
o Top-down expectations are going to affect the speed at which you generate those
images.
The Imagery Debate
• Stephen Kosslyn and Zenon Pylyshyn debated between analog (fluid) and propositional
imagery.
• Pylyshyn’s view was that imagery studies were susceptible to demand characteristics, that
the picture metaphor was questionable (can you imagine something without knowing what
it is?), and that we don’t need two distinct codes when propositions alone would suffice.
• Images are based on what you know. Some information is not retained, however. For
example: do you know which way the polar bear is facing on a toonie?
• The analog position (picture metaphor theory): a visual code is used that closely
resembles the original state, and visual images are “pictures in the head.”
o Ex. instead of written directions, you see the map in your head
• The propositional position: visual imagery uses a symbol code, and visual images are
simply descriptions that are constructed of abstract propositions.
• The propositional representation of a cat under a table would be “UNDER(CAT,
TABLE)”, whereas the analog view would be a literal (spatial) depiction of a cat under
some table.
• Critiques:
o Many imagery tasks are cognitively penetrable.
o People may be “mentally pausing” in image scanning experiments because of
demand characteristics (in other words, the task somehow “demands” that
participants take longer for greater distances).
o Experimenters may unconsciously give subtle cues to participants (experimenter
expectancy effects). A study on this bias told researchers that the results were
believed to turn out one way or another; in all cases, participants performed as the
experimenter expected them to.
§ While this doesn’t assert that results from all visual imagery experiments
are result of experimenter effects or demand characteristics, it’s important
for visual imagery researchers to ensure these effects are minimized.
The Imaging Brain
• Brain areas involved in imagery are the same as the brain areas that are involved with
perception.
• Patient “MGS” had her right occipital lobe removed as treatment for epilepsy. The
occipital lobe is primarily responsible for vision. Removing part of the visual cortex has
the consequence of decreasing the field of view.
• Before and after her surgery, MGS performed a mental-walk task. That is, she imagined
walking toward an animal and estimated how close she was when the image began to
overflow her (decreased) visual field. The data showed that removing part of the visual
cortex decreased her field of view and the size of her images (the field of view in
imagery).
• People who have lost the ability to see color are also unable to see colors through
imagery.
• Unilateral neglect patients ignore the left-side of their mental images.
• The same neurons respond to perception and images. Neurons are actually category-
specific, too.
• Kreiman et al. (2000) used electrodes in the brains of people with epilepsy to study how
individual brain cells in the medial temporal lobe respond to different things. They found
neurons that specifically react to certain objects but not others.
o Essentially further supports the idea that seeing something with your eyes
(perception) is activating the same neurons as if you closed your eyes and
imagined that same thing (imagery)
• Kosslyn, Pascual-Leone et al., (1999) study: Subjects memorized a display that contained
four quadrants.
o The subjects were scanned as they closed their eyes and visualized the display,
and then heard two numbers, which they had previously learned were labels for
specific quadrants, followed by the name of a dimension (such as "length").
o They were to decide whether the set of stripes in the first-named quadrant was
greater along that dimension than the set of stripes in the second-named quadrant.
o This is difficult, as it requires people to engage in imagery in great depth and
compared pictured stimuli on specific attributes.
o In this study, a “sham” magnetic pulse is applied to an unrelated brain location as
a control, while real transcranial magnetic stimulation (TMS) temporarily disrupts
the targeted brain region. Inhibiting the primary visual cortex with TMS slows
both perception and imagery. This supports the assertion that imagery and
perception activate/use the same brain regions.
Language (Chapter 9)
• If you’re given some distorted audio to listen to, you may not be able to understand it at
all. However, if you’re presented the text visually, you’ll be able to hear the words in the
distorted audio.
• Expectation, knowledge, etc. fills in gaps in auditory stimuli. Like other cognitive
processes, this is an active reconstruction.
What is Language?
• Structure: language has structural principles including a system of rules (grammar) and
principles that specify the properties of expressions.
• Localization: various physical mechanisms in the brain are specific language centers.
• Use: language is used to express thoughts, establish social relationships, and to
communicate & clarify ideas.
• Although animals have highly sophisticated communication systems, these systems are
not structured enough to count as language under these rules. Much of their
communication is mimicry and reinforcement; they can’t form infinitely many unique
expressions.
• Language is a communication system, but not all communication systems are languages.
Characteristics of Language
• The four main characteristics of language are:
o Regular: governed by a system of rules.
o Productive: infinite combinations of things can be expressed.
o Arbitrariness: lack of necessary resemblance between a word or sentence and
what it refers to.
o Discreteness: system can be subdivided into recognizable parts.
• Out of the four characteristics, most other animals fail at meeting the productivity and
arbitrary requirements of languages. Bee dances are not productive, and meerkats are not
arbitrary in their communications.
• Phoneme: the smallest unit of sound that distinguishes one word from another. For
example: mat vs. cat.
• Morpheme: the smallest unit of language that carries meaning. For example: take
vs. taking.
• Syntax: rules for how to put together sentences and phrases. For example: English is
subject-verb-object, “The girl will hit the boy.” As humans, we are pretty good at syntax;
however, we aren’t actively conscious of our use of it.
• Semantics: meaning. This explains anomalies, contradictions, ambiguities, synonyms,
and entailment.
• Pragmatics: social rules of language. For example, it’s a social rule to not talk until the
person you’re talking to is done speaking. Also, Paris Hilton sitting at a table with a
friend, both of them texting on their cell phones, is violating a social rule.
Speech Perception
• There are two fundamental problems with speech perception:
o Speech is continuous.
o A single phoneme sounds different depending on the context.
• The auditory input we receive is dirty or messy. We may hear mumbles, or accents that
are harder for us to understand.
• We perceive speech by breaking its continuity up into chunks (words).
• Like visual perception, context aids our interpretation of sounds and words, and can
create illusions in some cases.
• Acoustic context effect: perception of speech is context-sensitive. Neighboring stimuli
can change how sound is perceived. These effects occur at a number of levels including
phonetic, lexical and semantic.
• Visual context effect: McGurk Effect: watching a guy’s lips say “ga” when his voice is
actually saying “ba” will make you hear any number of things, even things like “da.”
Language & Cognition
• The Whorfian Hypothesis:
o Language both directs and constrains thought and perception.
o Matching retrieval and encoding becomes harder when they occur in different
languages.
o Your inner monologue may shift from one language to another as your
proficiency (and use) increases.
o To what degree can you think without words to go along with the thought?
o The truth is probably somewhere in the middle.
o Foreign language effect – people often respond more logically in
logic/reasoning tasks conducted in their second language. Your first language
carries more beliefs, expectations, and biases; a second language carries less
experience and can therefore remove some of these biases.
• The Modularity Hypothesis
o Jerry Fodor in 1983 argued that perception and language are modular cognitive
processes.
o Modularity here refers to how certain parts of language function are completely
separate from the rest of cognition, some so far separated they are protected from
expectations and biases initially.
o They’re domain specific – each module operates on certain kinds of input but not
others.
o They’re informationally encapsulated – each module operates independently of the
beliefs and other information available to the processor.
Neuropsychological Views
• The motor cortex, Broca’s area, and Wernicke’s area are some of the underlying brain
structures involved with language.
• An aphasia is an acquired deficit in language comprehension and/or production that
results from brain damage.
• Broca’s aphasia: expressive aphasia, involving halting, agrammatic speech, impaired
function words (nouns and verbs are okay), resulting from damage to the frontal areas of
the brain. This is a motor deficit – the mind is okay.
o Writing ability is impacted as they have difficulty with expressive writing but can
still read better than those with Wernicke’s.
• Wernicke’s aphasia: receptive aphasia involving fluent speech without content. Patients
cannot comprehend simple commands. This results from damage to the temporal lobe of
the left hemisphere. Words are jumbled, but they come easily (it’s not a motor
problem).
• The left hemisphere is much more dominant for language processes for most people.
• Neither of these aphasias is a pure deficit – someone with Wernicke’s aphasia may still
be able to understand some things (such as “I don’t understand”).
• Other aphasias include Anomia (naming deficit), Alexia (visual language impairment),
Agraphia (inability to write), and Alexia without agraphia (can write, but cannot read
what they have written).

Test 2 Lec + Textbook Notes


Attention (Chapter 4)
• There is a ton of sensory information available to you, but you only think about a very
small portion of it.
• The gorilla video: you’re asked to count the number of passes between the players
wearing white while a gorilla passes through the scene. 42% of people did not see the
gorilla in the original study. The updated video adds two further changes: a player
leaves the game and the curtain behind the players changes from red to gold. Note that
white-shirted players were used to avoid confusion between black shirts and the gorilla.
o This example demonstrates inattentional blindness – the phenomenon of not
perceiving a stimulus that might be literally right in front of you, unless you are
paying attention to it
• If the task is harder, attentional focus is stronger, which means there is a higher chance of
missing the gorilla.
• We only perceive that which we attend to.
• How much of this information is actually processed? Attentional research became
important during the cognitive revolution because humans have limited mental capacity.
• How can we filter some information out in order to focus on other things? How much
attention do we spend on out-of-focus things? How do we decide what to pay attention
to?
Selective Attention
• Refers to the fact that we usually focus our attention on one or a few tasks/events at any
given time
• How do we select info to attend to? What happens to all the info that we don’t attend to?
• Attention comes before perception (between retinal processing and actual percept
processing).
• Attention determines which distal stimuli get turned into a percept.
• Attention can move around space without moving your eyes; attention is not the same as
perception
• The main theories of attention differ in relation to how deep information is being
processed.
• Sensation is not the same as perception.
• We’re going to look at early selection and late selection, however we aren’t making any
claims about which is right and which is wrong.
• The key difference between early and late selection is where in the system the attention
filter occurs. These are very debatable models.
• Ironic processes of cognition: the thing you try not to do is the thing that is hardest
not to do (try not to think about blinking your eyes, for instance). The very instruction
not to do something is itself processed through your mind.
Early Selection
• The filter occurs after physical characteristics, but before meaning (based on early
studies)
• Broadbent’s Filter Model: selection is early because it is done on the basis of basic
auditory features. The filter must happen very early on.
o Tested with the dichotic listening task: a person listens to an audiotape over a
set of headphones. Two different audio streams are played simultaneously, one
to each ear. The subject is asked to repeat (shadow) the audio sent to one ear
only. Information is presented quickly (150 wpm), making the task attentionally
demanding.
§ This allows researchers to see how much processing is occurring for the
contents of the other ear.
§ Due to its demanding nature, fewer resources are available to process
information from the non-shadowed, unattended message.
§ Cherry (1953) found that people can accurately shadow a spoken message,
even when spoken rapidly.
• Participants could distinguish speech from noise and the gender of
the speaker in the unattended message.
• They noticed something odd about backward speech but couldn't
recall that it was backward.
§ Moray (1959) showed that participants often failed to recognize simple
words in the unattended message, even after 35 repetitions.
• Language changes (English to German) in the unattended message
went unnoticed.
§ According to Broadbent’s filter theory, it should not be possible to recall
any of the meaning in the unattended message.
• Problems with Filter Theory:
o Cocktail party effect: if one’s own name appears in either attended or unattended
content, attention will be diverted toward it.
§ Certain things (such as our own names) have lower thresholds than other
content and so can divert our attention. The filter is therefore not
operating purely at the physical level.
§ Perhaps there are lapses of attention explaining why sometimes you can
pick up your name
o Semantic leakage: when a story being shadowed switched from one ear to the
other, subjects followed the meaning into the unattended ear. You’re picking up
meaning, to some degree, in the unattended ear.
o Treisman’s experimental paradigm: information is being processed at deep
levels even if it is not at the forefront of attention
o Associations learned unconsciously: shock was paired with city names.
Unattended city names in the unshadowed ear triggered a GSR response
(a measure of heightened arousal) to ALL city names (Corteen and Wood
study).
§ This shows that people process ignored information deeply and
unconsciously, and can retain it.
• Treisman’s Attenuation Model: meaningful information in unattended messages might
still be available, even if hard to recover.
o Subject to three kinds of analysis
§ Physical properties (pitch/loudness)
§ Linguistic (breaking down the message into syllables and words)
§ Semantic (meaning)
• 2 critical stages:
o 1st stage: “attenuator” instead of a “filter”:
§ unattended messages are tuned down with attenuator, instead of by a filter.
§ Analyzes for physical characteristics, language and meaning.
§ Analysis is only done to the necessary level to identify which message
should be attended.
§ Unattended messages are attenuated.
o 2nd stage: dictionary unit: contains stored words that have thresholds.
§ Important items, such as your own name, have lower thresholds and thus
even a weak signal can cause an activation.
§ We all have different dictionary units and thresholds based on personal
experiences and what matters to us. This explains things like the cocktail
party effect, since our own name has a lower threshold
• This differs from filter theory, which argues that unattended messages, once processed
for physical characteristics, are discarded and fully blocked. Attenuation theory argues
that unattended messages are weakened but the information they contain is still available.
Late Selection
• All information (both attended and unattended) is processed for meaning, and activates
the corresponding representation in long-term memory (LTM).
• Selection of what to pay attention to happens during the response output stage.
• The human limitation in processing two streams of information lies in making a
conscious response to each stimulus.
• The filter occurs very late. Meaning is determined for all information before the filtering
process occurs.
Automaticity
• Automatic processing must occur without intention and without conscious awareness,
and must not interfere with other mental activity.
o Ex. walking, learning how to ride a bike; automatic unconscious level
o Word reading is thought by many to be automatic.
• Automated processing does not require attention.
• Things that are controlled require attentional resources
• Over time, the attentional capacity required for a given task decreases. At first, you think
about the mechanics of playing a guitar, for instance, but you stop thinking about the
mechanics as time progresses.
• Down side: automaticity could interfere with other tasks. If a word is presented, you
cannot prevent yourself from reading/processing the word for meaning. It’s very difficult
to stop an automatic process from happening.
• Schneider and Shiffrin (1977) investigated automatic processing in controlled lab
settings.
o Participants were tasked with finding specific targets (letters or numbers) in arrays
of letters or numbers (frames).
o Different target-distractor relationships were explored: numbers among letters,
letters among letters, and numbers among numbers.
o In "consistent-mapping" conditions (where targets and distractors were of
different types), performance depended on frame display time, not on the number
of targets or distractors.
o In "varied-mapping" conditions (where targets and distractors could be of the
same type and alternate), performance was affected by memory set size, frame
size, and frame display time.
o Schneider and Shiffrin distinguished between two processing types: automatic
and controlled.
o Automatic processing was suited for easy and familiar tasks, occurring in
consistent-mapping conditions.
o Controlled processing was used for difficult and unfamiliar tasks, operating
serially, requiring attention, being capacity-limited, and under conscious control.
Controlled processing was observed in varied-mapping conditions.
• The STROOP task: a series of color words (or color bars, as a control) is presented in
conflicting ink colors. The task is to name the ink color of each item as quickly as
possible. The size of the STROOP effect increases as word reading becomes more automatic.
• Word reading is one of the most automatic processes
o It is difficult for literate individuals not to read words, which leads to
interference when the task is to name ink colours rather than read the words.
o STROOP interference increases as we gain more automatic abilities.
§ E.g., preschool children have lower interference since they have lower
automatic reading abilities in comparison to older children
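The structure of a Stroop trial can be sketched in a few lines. This is an illustrative sketch, not the actual experimental code; the colour set and the trial fields are assumptions:

```python
import random

COLOURS = ["red", "green", "blue", "yellow"]

def make_trial(congruent: bool, rng: random.Random) -> dict:
    """Build one Stroop trial: a colour word shown in some ink colour."""
    word = rng.choice(COLOURS)
    # Congruent trials match word and ink; incongruent trials force a conflict,
    # which is what produces interference for fluent readers.
    ink = word if congruent else rng.choice([c for c in COLOURS if c != word])
    return {"word": word, "ink": ink, "correct_response": ink}

rng = random.Random(0)
trials = [make_trial(congruent=(i % 2 == 0), rng=rng) for i in range(10)]
# The task is to name trial["ink"] while ignoring trial["word"].
```

Reaction times on incongruent trials minus congruent trials would give the size of the interference effect.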
• Capacity limitations are only applicable to tasks that require conscious attention, and do
not apply to automatic processes.
• Practicing a task leads it into being an automatic process over time.
• Controlled processing is serial, requires attention, has a limited capacity, is under
conscious control and is deliberate (intentional)
• Automatic processing occurs without intention and without conscious awareness, does
not interfere with other mental activities, runs in parallel (multiple things at the same
time), and is not constrained by capacity limitations.
Disorders of Attention
• Visual neglect is a disorder of attention. It’s also known as hemineglect, hemi-spatial
neglect, or unilateral neglect.
• An attentional deficit rather than a sensory deficit; the ability to attend stimuli and divert
attention is affected, but visual components are intact
• Occurs when the right parietal lobe is damaged, which affects your perception of your
left vision.
• Conscious experience is absent for the left (contralateral/opposite) side; that side of
the body and of space is neglected.
o I.e., the patient “neglects” contralateral (opposite) hemi-space
o In extreme cases, patients even deny that some of their own limbs belong to them
• The visual components are there, but they cannot internally divert attention to the object.
• They can still see everything if an external cue helps them along.
• The left parietal lobe is less specialized for spatial attention, so damage to it does not
produce neglect nearly as severely as right-hemisphere damage does.
• Common in people who have had strokes
o Often not a life sentence, people can recover from this deficit
• Line bisection: a patient is asked to bisect the center of a line. Normal people will hit the
middle (almost; there is a slight right bias). Affected people would be far to the right
because they see the line as being shorter. This is a test that’s often used in diagnosis of
visual neglect.
• If you had visual neglect, you wouldn’t be able to drive.
• No two patients will have the exact same behavior, because their lesions differ.
• Anton was a patient who started to attend to more details as time went on, as he was
recovering from his stroke. This illustrates that different patients can have varying
degrees of severity.
• We know that this is an attention disorder because you can point out visual inaccuracies
and they’ll notice them at that point. They just cannot notice these visual inaccuracies
without an external cue.
• Semantic priming: saying the word “doctor” and words like “nurse” will also be activated
to some degree. A word in a neglected field will not be noticed, however it can prime
responses to words in the attended field.
• Their visual perceptual system is not broken! This is purely an attention disorder.
• People with hemineglect need to be conscious of their disorder, but attention is generally
unconscious, so this is hard to do.
• They do not have conscious (explicit) knowledge of information in neglected fields but
they do show some unconscious (implicit) knowledge.
Attention in the Real World
• Strayer & Johnston study: participants performed a pursuit-tracking task in which they
used a joystick to keep a cursor positioned over a moving target on a computer screen. At
various intervals the target flashed either red or green, signaling the ‘driver’ to push a
‘brake’ button on the joystick (red) or ignore the flash (green). The task was performed by
itself, and also while listening to a radio broadcast or holding an engaged phone
conversation (the dual-task conditions of the study).
o Results showed that talking with someone on the phone caused participants to
miss red lights and react more slowly in comparison to the single-task and radio
conditions.
• Shows we have finite attentional abilities.
Memory Structures (Chapter 5)
• Encoding – acquiring information.
• Retrieval – the calling to the mind of previously stored information.
• Short-term memory holds about 20 seconds’ worth of information. If this information is
rehearsed sufficiently, it is moved to long-term memory.
• Episodic memories form in long-term memory starting at 4-5 years old. Semantic
meanings (like word meanings) are learned earlier. You need a developed sense of self for
episodic memories.
Sensory Memory
• The initial brief storage of sensory information, represents about one second of
information. It’s very brief, and only handles basic percepts.
• A sensory “after image” “hangs.”
o Ex. If you take light and spin it in a circle, it appears to be a circle, not a single
point, because of sensory memory.
• The Modal Model: assumes that information is received, processed, and stored differently
for each kind of memory.
o We have a sensory store for each modality (1 second for iconic/visual and 4-5
seconds for echoic/auditory).
o Iconic (visual) memory consists of < 1 second of information, containing the
visual field and the physical features within it.
§ Sperling (1960) wondered how long information is stored in sensory
memory, and how much we can store there. He displayed arrays of
letters briefly for only 50 milliseconds and found that, on average,
people could recall only 4 or 5 of the 12 letters, even when the display
time was extended to 500 milliseconds.
• This limitation wasn't due to perception but was because the
information faded quickly from this sensory memory system.
• Sperling developed the partial-report technique to more accurately
measure the content in sensory memory. He used auditory cues
(low, medium, high pitches) to instruct participants to report a
specific row of letters after seeing the display.
• Sperling discovered that participants could remember
approximately 9 of the 12 letters when cued immediately,
suggesting the visual store could hold about nine items briefly.
• However, if the cue was delayed by 1 second, recall dropped to 4
letters, similar to the whole-report method.
• This brief visual memory was termed the "icon" by Neisser (1967).
• Other researchers, like Averbach and Coriell (1961), demonstrated
the icon could be "erased" by subsequent stimuli in a phenomenon
known as masking, where new stimuli replaced the memory trace
of the original information.
• Multiple-choice exams are partial reports. They’re cueing 30
questions out of (say) 500 possible questions.
• Full reports fail because sensory memory fades. Partial reports
allow you to report before the information fades because there is
less information to report.
o Echoic (auditory) memory has a smaller capacity than iconic memory but lasts
longer, holding 4-5 seconds of information, and contains categorical contents.
§ Have you ever been in a situation where someone asked you a question,
and you say “what?” but then answer the question a second later? That is
an example of its duration.
• In real experiments, running more trials helps eliminate the bias of people guessing
rather than truly remembering.
• The 7 ± 2 pieces of information fact only applies to short-term memory, not sensory
memory.
• Sensory memory is bigger than short-term memory. It’s just shorter.
• Information will be wiped out if something takes their place. This is known as the
masking effect for iconic percepts, and the suffix effect for echoic percepts.
• Ecological purpose – ensures that the visual system has some minimum amount of time
to process information
o integration of information across time
o processing of entire visual field for directing attention
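Sperling's partial-report inference described above boils down to a proportion estimate. A minimal sketch, using the approximate values from the study (the function name is just for illustration):

```python
def estimated_items_available(reported_in_row: float, row_size: int,
                              total_items: int) -> float:
    """Scale recall from one cued row up to the whole display.

    If a participant can report most of ANY cued row, that proportion of
    the entire display must have been available in iconic memory.
    """
    return (reported_in_row / row_size) * total_items

# Cued immediately: ~3 of 4 letters per row -> ~9 of the 12 items available.
immediate = estimated_items_available(3, 4, 12)  # 9.0
# With the cue delayed ~1 second, recall falls back to whole-report
# levels (~4 of 12), because the icon has already faded.
```

This is why partial report reveals a much larger store than whole report: participants report before the icon fades, and the small sample is scaled up.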
Short-Term Memory (STM)
• STM is your active consciousness at a particular point in time.
• You can keep track of 7 ± 2 bits of information in STM, according to George Miller.
o You can increase capacity by chunking/reorganizing information into
meaningful units
§ Ex. memorizing N-F-L-C-B-S-F-B-I-M-T-V by chunking it into NFL –
CBS – FBI - MTV
• Short-term memory is where the real, conscious work happens, similar to RAM in your
computer (the number of active programs you can have is similar to your memory span).
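The chunking example above (NFL-CBS-FBI-MTV) can be sketched as a recoding step; the helper function and the acronym set below are illustrative assumptions, not from the notes:

```python
# Acronyms assumed to be familiar from long-term knowledge.
KNOWN_CHUNKS = {"NFL", "CBS", "FBI", "MTV"}

def chunk(letters: str, size: int = 3) -> list[str]:
    """Recode a letter string into fixed-size groups."""
    return [letters[i:i + size] for i in range(0, len(letters), size)]

items = chunk("NFLCBSFBIMTV")
# 12 individual letters exceed the 7 +/- 2 span, but the same material
# recoded as 4 familiar chunks fits comfortably within it.
print(items)  # ['NFL', 'CBS', 'FBI', 'MTV']
```

The key point is that the chunks only help because they already exist as meaningful units in long-term memory.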
Forgetting in Short-Term Memory
• If you don’t use information, it’ll fade away.
• There are two theories in terms of how forgetting works in STM. Both are likely
occurring in some way.
• Trace decay theory: This is the automatic fading of the memory trace as time goes on.
o Brown-Peterson Task: present some letters to a subject, then ask them to count
backwards by 3s from some number (for some period of time), then to recall the
letters.
§ After 20-30 seconds, the memory trace fades due to trace decay,
making it difficult for them to recall the letters.
• Interference theory: This is the disruption of the memory trace by other traces, where
the degree of interference that occurs depends on the similarity of the two memory traces
(how similar the old & new memories are). There are two kinds of interference:
o Proactive interference is where early information makes it hard to encode new
information. This is the type of interference that occurs in your mind from
focusing on several different classes at once, for instance.
§ Wickens, Born & Allen used Brown Peterson’s paradigm but switched
categories after a few trials.
• Subjects were asked to remember and recall letters, with intervals
between the various trials of the experiment. There is a control
group that is given three letters to remember every time. The
experimental group has to remember letters for all trials except the
last, at which point they are asked to remember a set of numbers
instead. Switching the last trial like this causes a release from
proactive interference! With the control group, their performance
suffered as the number of trials increased. However, the
experimental group jumped back up to maximum performance on
the final trial, since it was a different task that did not experience
proactive interference.

o Retroactive interference is a more powerful kind of interference where new
information makes it difficult to retrieve old information. This has a bigger impact
on long-term memory than proactive interference does.
Working Memory
• More complex version of short-term memory
• Some researchers (Baddeley and Hitch) questioned whether the notion of short-term
memory was adequate, or too simplistic.
o They claimed that rehearsing digits out loud interfered with reasoning and
comprehension tasks, but the degree of impairment was far from dramatic. This
task is known as the syntactic verification task. The percentage of errors did not
change, but the reasoning time took longer and longer.
o The digit task takes up only one subsystem and the reasoning & comprehension
tasks are free to use the other (unused) subsystems. This suggests that short-term
memory is not a unitary system.
• Working Memory Model: We have a central executive, a visuospatial sketch pad, and
a phonological loop.
o The central executive coordinates resources between the visuospatial subsystem
and the phonological subsystem. The visuospatial subsystem handles visual
information, and the phonological loop handles articulation and other verbal
information.
o The visuospatial sketchpad and the phonological loop are functionally
independent systems.
o You can do two different tasks at the same time if one is verbal and one is visual.
o It’s very hard to stop words from their obligatory access to the phonological loop.
For example: it’s hard to tune out music, TV, or a boring lecture, to study.
o A sentence is shown on the screen, and read aloud. It’s then hidden. You’re asked
how many words there were. Both of these tasks use the phonological loop, so
you offload one task (counting, in this case) to something visual, by counting on
your fingers. If you didn’t do this, this task would be very difficult (it’d have to be
run serially).
• People have different strategies for encoding information (visually, through repetition,
etc.).
• If these systems are independent (as they are), then articulatory suppression (repeating
nonsense like “the the the the”) should disturb memory for linguistic information but not
for visual information. Saying words out loud makes it harder for you to rehearse
information in your head.
• The act of saying something out loud makes it more distinctive, which means it’s
generally easier to retrieve that information from memory because there is a larger variety
of cues for that piece of information (your own voice saying it aloud acts as a cue).
• It’s generally easier to retrieve information that is more distinctive because cues are more
unique to that piece of information, and aren’t shared across multiple pieces of
information in memory.
Long-Term Memory
• Episodic memory – personal, autobiographical memory
• Semantic – fast (general knowledge)
• Procedural – automatized (riding a bike)

• There is much evidence to show that we do actually have separate systems for working
memory and long-term memory.
• Episodic and semantic memories (etc.) are stored in long-term memory.
• Long-term memory is akin to a hard drive in a computer. Both store information
indefinitely. Information in LTM is stored forever, but it becomes harder to retrieve the
information over time.
• Our conscious memories are also a reconstructive process.
• Serial Position Effect:
o You’re given 20 items to recall.
o You’ll remember the last couple of items because of the recency effect. These
items are stored in sensory or working memory, and will be wiped out over time
(i.e. with a delay between the items being presented and the time to recall them).
o You’ll also remember the first few items because of the primacy effect. These
items are stored in long-term memory and can be wiped out through subvocal
rehearsal.
o Recency effect can be reduced/eliminated by delaying the period of time between
recall and the last bits of information processed.
o Primacy effect can be reduced by dividing attention, using articulatory
suppression, or engaging in a dual task; any interference with the phonological
loop inhibits the rehearsal needed to encode items into LTM.
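The primacy and recency effects described above can be sketched as a toy curve with two components; the decay constants here are arbitrary illustrative values, not fitted to any data:

```python
import math

def recall_probability(position: int, list_length: int, base: float = 0.2) -> float:
    """Toy serial-position curve: baseline + primacy (LTM) + recency (STM)."""
    # Primacy component: extra LTM rehearsal, strongest for the first item.
    primacy = 0.5 * math.exp(-0.5 * (position - 1))
    # Recency component: items still in working memory, strongest for the last item.
    recency = 0.7 * math.exp(-0.7 * (list_length - position))
    return min(1.0, base + primacy + recency)

curve = [round(recall_probability(p, 20), 2) for p in range(1, 21)]
# First and last items are recalled best; the middle of the list dips.
```

Delaying recall would correspond to zeroing the recency term, and articulatory suppression to shrinking the primacy term, matching the manipulations described above.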
• You can have deficits in one memory system but not another.
• Clive Wearing:
o Has a severe case of amnesia because his temporal lobes are damaged (which
contains the hippocampus, etc.). These are the structures that are involved in
remembering and inserting new memories.
§ Temporal lobes and hippocampus, amygdala etc. do not contain all
memories, but rather work as an indexing system or a Google search
engine, allowing people to more easily and functionally retrieve and
pinpoint specific memories.
§ The hippocampus can be wiped out but memories are still stored. They
just can’t be retrieved.
o He has moment-to-moment consciousness.
o He always feels like he’s awaking afresh, all the time.
o He writes a diary / log of things that happen, to act as long-term memory.
However, he doesn’t believe the things he wrote earlier so he crossed them out.
He thinks he was unconscious when he wrote earlier log entries.
o His short-term / working memory is intact, but his episodic long-term memory is
not working. He can still play piano well, so his procedural memories are intact.
• Episodic memories are stored all around the cortex.
• People with amnesia do not show a primacy effect because they cannot transfer
information into long-term memory.
• The capacity of long-term memory is very large or possibly infinite. It goes beyond what
we can actually measure.
o We have billions of neurons and trillions of synapses, providing a vast capacity
to hold memories.
• Long-term memory is coded semantically (by meaning). The concept of an apple is
connected to seeds, fruit, pie, worm, food, pizza, and red. Some of those concepts are also
connected to each other, too.
• Long-term memory is a permastore, even without use. Retrieval cues just start failing
over time with no use.
• There are different types of retrieval. Recognition is when you’re asked “do you
recognize X?”, which provides a retrieval cue. Recall is when you’re asked “list all of the
words you saw.”, which clearly is a harder task and therefore has worse performance than
recognition.
o One study showed that recall declined over the first 3 to 6 years in participants
who had taken or were taking a high school/university Spanish class.
o There was not much forgetting over the next three decades; the final decline
occurred after 30 – 35 years.
o Participants were able to recognize words better than they were able to recall
them, showing that recognition is much less reliant on conscious recall.
• Forgetting memories typically is a very rapid dip, then it levels off.
• Interference is the main cause of forgetting in long-term memory. Concepts get difficult
to retrieve as competition builds for certain retrieval cues (having multiple usual parking
spots, for instance).
• The more cues you have for the same target, the easier it is to remember / retrieve it. The
more distinct these cues are (from cues for other concepts), the easier it becomes.
• The deeper you process something (processed for meaning), the more meaningful it’s
going to be, the easier it’ll be to retrieve later.
• Encoding can occur through two different types of rehearsal.
o Maintenance rehearsal: repetition. Repetition allows you to maintain or hold
information without transferring it into deeper code (deeper meaning). This is not
a very effective encoding method.
o Elaborative rehearsal: elaborate on meaning. This transfers the information to
deeper code, and provides richer multimodal codes as a result. It makes the
memory more unique and therefore easier to retrieve.
• Levels of Processing:
o “We soon forget what we have not deeply thought about.” Craik & Tulving
o People are more likely to remember words which can be semantically
(meaningfully) encoded.
§ Ex. in a study where people are asked to make judgements about presented
words (whether they are upper/lowercase, whether they rhyme, or whether
they fit in a sentence), people most often remember the sentence words,
followed by the rhyme words and then the case words.
• The Generation Effect: people are much better at remembering things that came from
within; you’re reliving the experience as part of the retrieval process.
o Words you generate are more memorable than words you merely read – you’re
basically practising the task you will be doing later on.
• Encoding Specificity Principle:
o “Recollection of an event, or a certain aspect, occurs if and only if properties of
the trace of the event are sufficiently similar to the retrieval information.” –Endel
Tulving.
§ Basically: retrieving info is essentially reliving and reconstructing
experience
o More cues at encoding time means you’ll store a more accurate representation.
o This principle is why witnesses to crimes are often taken to visit the scene of the
crime again.
o There is a slight benefit to writing a test in the same room you learned it in.
• Context Dependent Memory: information learned in a particular context is better
recalled if recall takes place in the same context. Memory is dependent on the context it
was encoded in.
o Location: a perfect dissociation was seen in the scuba diver recall experiment
(divers studied one set of words underwater and the other on land, then were
asked to recall on land and underwater – each set was better recalled in the
location where it was encoded).
o Alcohol: information that was learned while intoxicated was retrieved well when
intoxicated again. Information learned while intoxicated but retrieved while sober
was the worst case.
o Personality: an individual with dissociative identity disorder was asked to learn
and recall a list of words in each of four personalities. When the recall
personality matched the study personality, errors were at floor (few errors);
when a different personality attempted recall, errors were at ceiling (many
errors). Jonah, the dominant personality, did better than the others, with
“average” results across all personality types (not great at any, and not terrible
at any).
Memory Processes (Chapter 6)
Reconstructive Nature of Memory
• Memory is an active reconstructive process. As we recall memories, we relive those
experiences and fill in gaps (as before). Memories are not accurate replays.
• Bartlett created an experiment where a story was read to participants, then they were
asked to recall the story at a later point in time. As time increased, people reported
aspects of the story in a culturally consistent manner. That is, they inserted details into the
story without being aware that they were doing so.
• A schema (pluralized as schemata) is a framework for organizing memory. Schemata
are developed through years of experience and affect the way you reconstruct memories.
o Your mind will fill in gaps in order to make sense or to make it a better story.
• 80,000 court cases a year occur in the United States where eyewitness testimony is the
only evidence against the accused.
• Eyewitness testimony is very convincing (persuasive); however, the validity of those
memories is inconsistent, as memories are highly suggestible. Two ways the accuracy of
recalled memories can be impacted are:
o Leading questions (misleading questions) can affect recall of the event.
§ Study showed that when people are shown a video of a car crash and then
asked “How fast were the cars going when they __ each other?” their
recollection of the cars’ speed increased if the word smashed was used,
followed by collided, bumped, hit and contacted.
§ Another study showed how misleading information can alter people’s
memories. When participants were shown a photo with a car and a yield
sign to remember, many later misremembered seeing a stop sign after
being asked whether they saw one, despite having been shown a yield
sign originally.
• True memories can activate different areas of the brain than false/deceptive memories.
• The hippocampus cannot differentiate between true and false memories but the
parahippocampal gyrus can.
• As far as the person is aware, these are all real memories – they don’t consciously realize
that some memories are false, but the brain knows.
Amnesia
• Amnesia is caused by damage to the hippocampal system (which is composed of the
hippocampus and amygdala) and/or the midline diencephalic region. This damage could
be caused by a head injury, stroke, brain tumor, or a disease.
• There are two types of amnesia:
o Anterograde amnesia is the inability to form new memories. It affects long-term
memory but not working memory. Memory for general knowledge remains intact,
as does skilled performance. Anterograde amnesia covers a period of time after a
particular event. It is mainly associated with episodic memory, i.e., memory of
experiences, so every day is a “new day.”
o Retrograde amnesia is the loss of memory of past events. It is always present with
anterograde amnesia. It doesn’t affect overlearned skills (such as general social
skills and language skills), or skill learning (like a minor tracing task). The
retrograde period is a period before a particular event that you cannot remember.
In pure retrograde amnesia, old memories can come back with time.
• You can find people with damaged episodic memory, but intact semantic memory. The
reverse is also true, but it’s much rarer.
• Participants were given lists of words followed by four memory tasks:
o Free recall (explicit task).
o Recognition (explicit task).
o Word fragment identification (e.g. participants had to identify visually degraded
words) (implicit task).
o Word stem completion (e.g. complete the stem: bo ) (implicit task).
• The control group did better for the explicit tasks, but the results were fairly even for the
control group and the amnesia group for the implicit tasks. Why is this? With explicit
tasks, the participant would have to place themselves in a situation consciously, which is
harder for amnesics because they can’t remember those situations.
• Explicit tasks are tasks that involve directly querying memory, whereas implicit tasks
indirectly assess memory.
• Amnesia causes a deficit with explicit (conscious) memory but not implicit (unconscious)
memory.
• Tulving claimed that long-term memory consists of two distinct but interactive systems:
o Episodic memory is memory for information about one’s personal experiences.
These memories have a date and time. For example: remembering where you
were on March 11, 2020 (COVID-19 was declared a pandemic). Instead of
pinpointing the exact place and experience, you use different personal markers
(what season it was, whether you were on break or still in school, etc.) to create
a reconstruction of that experience/memory.
§ Episodic memory commonly occurs in the left temporal lobe
o Semantic memory is general knowledge of language and world knowledge. For
example: shoes go on your feet.
§ The left inferior prefrontal cortex (PFC) and the left posterior temporal
areas are other areas involved.
• Perception disorders like associative agnosia may overlap with attentional disorders;
the inability to remember the meaning of common words or to recall basic attributes of
objects results from damage to semantic memory.
• The hierarchical semantic network model is a model of semantic memory that argues our
knowledge of the world is stored in a hierarchical fashion to minimize redundancy.
Semantic memory is organized as a network of nodes that are connected by
pointers/links. This explains why some memories are easier and faster to recall than
others: those in the first “level” of the network are more accessible than those in a
second or third “level.”
o The principle of cognitive economy refers to how properties and facts are stored
at the highest level possible; to recover information, you follow the links between
nodes and infer inherited properties.
o The collection of nodes associated with all the words and concepts one knows
about is called a semantic network.
o The typicality effect poses a problem for this model, as some concepts are
recalled at different speeds despite being at the same “level” in the network.
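The hierarchical model and cognitive economy can be sketched as a toy network. The concepts and properties below are illustrative examples, not the model’s actual data; the number of upward traversals stands in for retrieval time:

```python
# Toy hierarchical semantic network; "isa" links point upward.
network = {
    "animal": {"isa": None,     "props": {"breathes", "eats"}},
    "bird":   {"isa": "animal", "props": {"has wings", "can fly"}},
    "canary": {"isa": "bird",   "props": {"is yellow", "can sing"}},
}

def levels_to_verify(concept, prop):
    """Walk up the 'isa' links until the property is found.
    Returns the number of levels traversed (a proxy for retrieval
    time), or None if the property is absent from the hierarchy."""
    levels = 0
    while concept is not None:
        node = network[concept]
        if prop in node["props"]:
            return levels
        concept = node["isa"]
        levels += 1
    return None
```

Verifying “a canary can sing” takes 0 traversals, while “a canary breathes” takes 2, because cognitive economy stores “breathes” only once, at the animal node.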
• The spreading activation theory disagrees with a hierarchical structure of semantic
memory; instead, concepts are represented in a web-like fashion, each identified by a
node and connected to various related concepts. It argues that our experiences
govern how closely certain concepts are related to one another. Evidence for this comes
from priming experiments, in which people are shown two items on a trial and asked to
decide whether the second item spells a word (known as a lexical decision task).
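Spreading activation can be sketched as a breadth-first spread over links, with activation weakening at each hop. The link structure and decay value below are illustrative assumptions, not parameters from the theory itself:

```python
def spread(links, source, decay=0.5, depth=2):
    """Spread activation outward from a source node; each hop
    multiplies activation by `decay`, so closely linked concepts
    end up more active (and thus primed for faster retrieval)."""
    activation = {source: 1.0}
    frontier = {source}
    for _ in range(depth):
        nxt = set()
        for node in frontier:
            for neighbour in links.get(node, ()):
                a = activation[node] * decay
                if a > activation.get(neighbour, 0.0):
                    activation[neighbour] = a
                    nxt.add(neighbour)
        frontier = nxt
    return activation

# Seeing "doctor" partially activates "nurse", which in a lexical
# decision task makes "nurse" faster to verify as a word (priming).
act = spread({"doctor": ["nurse", "health"], "nurse": ["hospital"]}, "doctor")
```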
• To learn and remember things most effectively, you should regenerate the information
learned (active recall) and practise this in a distributed manner. Distributed practice
likely makes you less susceptible to interference because you are creating more retrieval
cues. The more you distribute your learning (both in the amount of information and the
length of study sessions), the longer that information will remain easily accessible in
memory.
• The levels of processing theory of memory challenges the modal model of memory by
arguing that memory isn’t dependent on different memory stores (such as STM and LTM)
but rather on the initial encoding of information, which later affects the retrieval of that
information. Deeper (more meaningful/semantic) processing improves memory
retention more than rehearsal or repetition.