
Lesson 1: SELF FROM VARIOUS PERSPECTIVES

Philosophical Perspectives

Socrates

Socrates (469/470-399 BCE) was born in ancient Athens, Greece. His "Socratic method" laid the groundwork for Western
systems of logic and philosophy. When the political climate of Greece turned, Socrates was sentenced to death by
hemlock poisoning in 399 BCE. He accepted this judgment rather than fleeing into exile.

The phrase “Know thyself” was not invented by Socrates. It was a motto inscribed on the frontispiece of the
Temple of Apollo at Delphi.

This assertion, imperative in form, indicates that man must stand and live according to his nature. Man has to
look at himself. To find what? By what means?

Without this work on oneself, life is worthless, according to Socrates:


“An unexamined life is not worth living.”

The Oracle and Socrates

When he was middle-aged, Socrates' friend Chaerephon asked the famous Oracle at Delphi if there was anyone
wiser than Socrates, to which the Oracle answered, "None." Bewildered by this answer and hoping to prove the Oracle
wrong, Socrates went about questioning people who were held to be 'wise' in their own estimation and that of others. He
found, to his dismay, "that the men whose reputation for wisdom stood highest were nearly the most lacking in it, while
others who were looked down on as common people were much more intelligent" (Plato, Apology, 22).

The youth of Athens delighted in watching Socrates question their elders in the market and, soon, he had a
following of young men who, because of his example and his teachings, would go on to abandon their early aspirations
and devote themselves to philosophy (from the Greek 'Philo', love, and 'Sophia', wisdom - literally 'the love of wisdom').
Among these were Antisthenes (founder of the Cynic school), Aristippus (the Cyrenaic school), Xenophon (whose writings
would influence Zeno of Citium, founder of the Stoic school) and, most famously, Plato (the main source of our
information on Socrates in his Dialogues) among many others. Every major philosophical school mentioned by ancient
writers following Socrates' death was founded by one of his followers.

Socratic Schools

The diversity of these schools is testimony to Socrates' wide-ranging influence and, more importantly, the diversity
of interpretations of his teachings. The philosophical concepts taught by Antisthenes and Aristippus could not be more
different, in that the former taught that the good life was only realized by self-control and self-abnegation, while the latter
claimed a life of pleasure was the only path worth pursuing.

It has been said that Socrates' greatest contribution to philosophy was to move intellectual pursuits away from
the focus on 'physical science' (as pursued by the so-called Pre-Socratic Philosophers such as Thales, Anaximander,
Anaximenes, and others) and into the abstract realm of ethics and morality. No matter the diversity of the schools which
claimed to carry on his teachings, they all emphasized some form of morality as their foundational tenet. That the
'morality' espoused by one school was often condemned by another again bears witness to the very different
interpretations of Socrates' central message.

JRV 1
His Vision

However his teachings were interpreted, it seems clear that Socrates' main focus was on how to live a good and
virtuous life. The claim attributed to him by Plato that "an unexamined life is not worth living" (Apology, 38b) seems
historically accurate, in that it is clear he inspired his followers to think for themselves instead of following the dictates of
society and the accepted superstitions concerning the gods and how one should behave.

While there are differences between Plato's and Xenophon's depictions of Socrates, both present a man who
cared nothing for class distinctions or 'proper behavior' and who spoke as easily with women, servants, and slaves as with
those of the higher classes.

In ancient Athens, individual behavior was governed by a concept known as 'Eusebia', which is often translated
into English as 'piety' but more closely resembles 'duty' or 'loyalty to a course'. In refusing to conform to the social
proprieties prescribed by Eusebia, Socrates angered many of the more important men of the city, who could, rightly, accuse
him of breaking the law by violating these customs.

Socrates' Trial

In 399 BCE Socrates was charged with impiety by Meletus the poet, Anytus the tanner, and Lycon the orator who
sought the death penalty in the case. The accusation read: “Socrates is guilty, firstly, of denying the gods recognized by
the state and introducing new divinities, and, secondly, of corrupting the young.”

Ignoring the counsel of his friends and refusing the help of the gifted speechwriter Lysias, Socrates chose to defend
himself in court. There were no lawyers in ancient Athens and, instead of a solicitor, one would hire a speechwriter. Lysias
was among the most highly paid but, as he admired Socrates, he offered his services free of charge.

The speechwriter usually presented the defendant as a good man who had been wronged by a false accusation,
and this is the sort of defense the court would have expected from Socrates. Instead of the defense filled with self-
justification and pleas for his life, however, Socrates defied the Athenian court, proclaiming his innocence and casting
himself in the role of Athens' 'gadfly' - a benefactor to them all who, at his own expense, kept them awake and aware. In
his Apology, Plato has Socrates say:

“If you put me to death, you will not easily find another who, if I may use a ludicrous comparison, clings to the
state as a sort of gadfly to a horse that is large and well-bred but rather sluggish because of its size, so that it needs
to be aroused. It seems to me that the god has attached me like that to the state, for I am constantly alighting
upon you at every point to arouse, persuade, and reproach each of you all day long.” (Apology 30e)

Plato makes it clear in his work that the charges against Socrates hold little weight but also emphasizes Socrates'
disregard for the feelings of the jury and court protocol. Socrates is presented as refusing professional counsel in the form
of a speechwriter and, further, refusing to conform to the expected behavior of a defendant on trial for a capital crime.
Socrates, according to Plato, had no fear of death, proclaiming to the court:

“To fear death, my friends, is only to think ourselves wise without really being wise, for it is to think that we know
what we do not know. For no one knows whether death may not be the greatest good that can happen to man.
But men fear it as if they knew quite well that it was the greatest of evils.” (Apology 29a)

Following this passage, Plato gives Socrates' famous philosophical stand in which the old master defiantly states
that he must choose service to the divine over conformity to his society and its expectations. Socrates famously confronts
his fellow citizens with honesty, saying:

“Men of Athens, I honor and love you; but I shall obey God rather than you and, while I have life and strength, I
shall never cease from the practice and teaching of philosophy, exhorting anyone whom I meet after my manner,
and convincing him saying: O my friend, why do you who are a citizen of the great and mighty and wise city of
Athens care so much about laying up the greatest amount of money and honor and reputation and so little about
wisdom and truth and the greatest improvement of the soul, which you never regard or heed at all? Are you not
ashamed of this? And if the person with whom I am arguing says: Yes, but I do care; I do not depart or let him go
at once; I interrogate and examine and cross-examine him, and if I think that he has no virtue, but only says that
he has, I reproach him with undervaluing the greater, and overvaluing the less. And this I should say to everyone
whom I meet, young and old, citizen and alien, but especially to the citizens, inasmuch as they are my brethren.
For this is the command of God, as I would have you know: and I believe that to this day no greater good has ever
happened in the state than my service to the God. For I do nothing but go about persuading you all, old and young
alike, not to take thought for your persons and your properties, but first and chiefly to care about the greatest
improvement of the soul. I tell you that virtue is not given by money, but that from virtue come money and every
other good of man, public as well as private. This is my teaching, and if this is the doctrine which corrupts the
youth, my influence is ruinous indeed. But if anyone says that this is not my teaching, he is speaking an untruth.
Wherefore, O men of Athens, I say to you, do as Anytus bids or not as Anytus bids, and either acquit me or not; but
whatever you do, know that I shall never alter my ways, not even if I have to die many times (29d-30c).”

Socrates was convicted and sentenced to death (Xenophon tells us that he wished for such an outcome and Plato's
account of the trial in his Apology would seem to confirm this). The last days of Socrates are chronicled in Plato's
Euthyphro, Apology, Crito and Phaedo, the last dialogue depicting the day of his death (by drinking hemlock) surrounded
by his friends in his jail cell in Athens and, as Plato puts it, "Such was the end of our friend, a man, I think, who was the
wisest and justest, and the best man I have ever known" (Phaedo, 118).

https://www.ancient.eu/socrates/

Plato

Born circa 428 B.C.E., ancient Greek philosopher Plato was a student of Socrates and a teacher of Aristotle. His
writings explored justice, beauty and equality, and also contained discussions in aesthetics, political philosophy, theology,
cosmology, epistemology and the philosophy of language. Plato founded the Academy in Athens, one of the first
institutions of higher learning in the Western world. He died in Athens circa 348 B.C.E.

Plato was shocked by Socrates’ execution but maintained faith in rational inquiry. Plato wrote extensively, and in
a series of dialogues, expounded the first (relatively) systematic philosophy of the Western world. [The early dialogues
recount the trial and death of Socrates. Most of the rest of the Platonic dialogues portray Socrates questioning those
who think they know the meaning of justice (in the Republic), moderation (in the Charmides), courage (in the Laches),
knowledge (in the Theaetetus), virtue (in the Meno), piety (in the Euthyphro), or love (in the Symposium).] The Republic
is the most famous dialogue. It touches on many of the great philosophical issues including the best form of government,
the best life to live, the nature of knowledge, as well as family, education, psychology and more. It also expounds Plato’s
theory of human nature. [The philosopher Alfred North Whitehead famously said that all of philosophy is just footnotes
to Plato.]

Metaphysical Background: The Forms

Plato is not a theist or polytheist, and he is certainly not a biblical theist. When he talks about the divine he is
referring to reason (logos), a principle that organizes the world from preexisting matter. What is most distinctive about
Plato’s philosophy is his theory of forms, although his description of forms isn’t precise. But Plato thought that knowledge
is an active process through which we organize and classify our perceptions.

The parables of the sun and cave are primarily about understanding forms and the form of the good. [Plato
compares the sun’s illumination of the world with the form of the good’s illumination of reality.] Plato thought that by
using reason we could come to know the good, and then we would do the good. Thus knowledge of the good is sufficient
for virtue, doing the good. [This seems mistaken as Aristotle will point out because our will can be weak.] Thus Plato’s
philosophy responds to intellectual and moral relativism—there are objective truths about the nature of reality and about
human conduct. [The allegory of the cave, the myth of the sun, and the divided line are the devices Plato uses to explain
the forms.]

Theory of Human Nature – The Tripartite Structure of the Soul

Plato is a dualist; there is both immaterial mind (soul) and material body, and it is the soul that knows the forms.
Plato believed the soul exists before birth and after death. [We don’t see perfect circles or perfect justice in this world,
but we remember seeing them in Platonic heaven before we were born.] Thus he believed that the soul or mind attains
knowledge of the forms, as opposed to the senses. Needless to say, we should care about our soul rather than our body.

The soul (mind) itself is divided into three parts: reason; appetite (physical urges); and will (emotion, passion, spirit).
The will is the source of love, anger, indignation, ambition, aggression, etc. When these aspects are not in harmony, we
experience mental conflict. The will can be on the side of either reason or the appetites. We might be pulled by lustful
appetite, or the rational desire to find a good partner.

Plato also emphasized the social aspect of human nature. We are not self-sufficient, we need others, and we
benefit from our social interactions, from other persons’ talents, aptitudes, and friendship.

Diagnosis

Persons differ as to which part of their nature is predominant. Individuals dominated by reason are
philosophical and seek knowledge; individuals dominated by spirit/will/emotion are victory-loving and seek reputation;
individuals dominated by appetites are profit-loving and seek material gain. Although each has a role to play, reason ought
to rule the will and appetites. And in the same way, those with the most developed reason ought to rule the society. A
well-ordered, harmonious, or just society is one in which each kind of person plays their proper role. Thus there is a
parallel between proper functioning individuals and proper functioning societies. Good societies help produce good
people who in turn help produce good societies, while bad societies tend to produce bad individuals who in turn help
produce bad societies.

Plato differentiates between five classifications of societies. 1) The best is a meritocracy, where the talented rule.
This may degenerate into increasingly bad forms, each one worse than the last as we go down the list. 2) The timarchic
society, which values honor and fame while reason is neglected. In such a society spirit dominates the society and the
ruling class. 3) Oligarchy, where money-making is valued and political power lies with the wealthy. In such a society
appetites dominate the society and the ruling class. 4) Democracy, where the poor seize power. They are also dominated
by appetites. He describes the common people as “lacking in discipline [and] pursuing mere pleasure of the moment …”
5) Anarchy is the sequel to the permissiveness and self-indulgence of democracy. It is the total lack of government. Plato
thought this would usher in a tyrant to restore order.

Prescription

Justice is the same in both individuals and society—the harmonious workings of the parts to create a flourishing
whole. But how is this attained? Plato believes that education—academic, musical, and physical—is the key. Education
takes place in the context of a social and political system. Not surprisingly this includes kings (rulers) being philosophers,
those in whom reason dominates. If there really is a truth about how people should live, then only those with such
knowledge should rule.

To achieve this end, Plato says, the guardians or rulers must engage in a long educational process in which they learn
about the Forms. [After a nearly 50-year-long process, those of the highest moral and intellectual excellence will rule.] The
guardians cannot own personal property and cannot have families. [The idea is that only the desire to serve the common
good motivates them, rather than money or power.] He hopes that the guardians will so love wisdom that they will not
misuse their power. As for those dominated by will/emotion/spirit they are best suited to being auxiliaries—soldiers,
police, and civil servants. The final class is composed of the majority, those in whom the appetites dominate. They will be
farmers, craftsmen, traders, and other producers of the materials necessary for living.

Critics have called Plato’s republic authoritarian or totalitarian, and Plato advocated both censorship and
propaganda as means of maintaining social control. He certainly believed that the masses [whom he says like to “shop and
spend”] were unable to govern the society and that an elite composed of the morally and intellectually excellent should
make the important decisions about how best to govern a society.

https://reasonandmeaning.com/2014/10/11/theories-of-human-nature-chapter-7-plato-part-1/

Aristotle

Aristotle is one of the greatest thinkers in the history of western science and philosophy, making contributions to
logic, metaphysics, mathematics, physics, biology, botany, ethics, politics, agriculture, medicine, dance and theatre. He
was a student of Plato who in turn studied under Socrates. Although we do not actually possess any of Aristotle’s own
writings intended for publication, we have volumes of the lecture notes he delivered for his students; through these
Aristotle was to exercise his profound influence through the ages. Indeed, the medieval outlook is sometimes considered
to be the “Aristotelian worldview” and St. Thomas Aquinas simply refers to Aristotle as “The Philosopher” as though there
were no other.

Aristotle was the first to classify areas of human knowledge into distinct disciplines such as mathematics, biology,
and ethics. Some of these classifications are still used today, such as the species-genus system taught in biology classes.
He was the first to devise a formal system for reasoning, whereby the validity of an argument is determined by its structure
rather than its content. Consider the following syllogism: All men are mortal; Socrates is a man; therefore, Socrates is
mortal. Here we can see that as long as the premises are true, the conclusion must also be true, no matter what we
substitute for "men" or "is mortal." Aristotle's brand of logic dominated this area of thought until the rise of modern
symbolic logic in the late 19th Century.
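Aristotle's point, that validity depends on the argument's structure rather than its content, can be checked mechanically in a modern proof assistant. Below is a minimal sketch in Lean; the names `Person`, `Man`, `Mortal`, and `socrates` are placeholders of my own, and substituting any other predicates for them preserves the inference.

```lean
-- The "Barbara" syllogism: all M are P; S is M; therefore S is P.
-- Lean accepts the proof purely from the shape of the premises,
-- never asking what "Man" or "Mortal" actually mean.
variable {Person : Type} (Man Mortal : Person → Prop) (socrates : Person)

example
    (allMenMortal : ∀ x, Man x → Mortal x)  -- All men are mortal
    (socratesIsMan : Man socrates)          -- Socrates is a man
    : Mortal socrates :=                    -- Therefore, Socrates is mortal
  allMenMortal socrates socratesIsMan
```

Because the proof mentions only the premises' logical form, replacing "men" and "is mortal" with any other terms yields an equally valid argument, which is exactly Aristotle's insight.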

Aristotle was the founder of the Lyceum, the first scientific institute, based in Athens, Greece. Along with his
teacher Plato, he was one of the strongest advocates of a liberal arts education, which stresses the education of the whole
person, including one's moral character, rather than merely learning a set of skills. According to Aristotle, this view of
education is necessary if we are to produce a society of happy as well as productive individuals.

Happiness as the Ultimate Purpose of Human Existence

One of Aristotle's most influential works is the Nicomachean Ethics, where he presents a theory of happiness that
is still relevant today, over 2,300 years later. The key question Aristotle seeks to answer in these lectures is "What is the
ultimate purpose of human existence?" What is that end or goal for which we should direct all of our activities?
Everywhere we see people seeking pleasure, wealth, and a good reputation. But while each of these has some value, none
of them can occupy the place of the chief good for which humanity should aim. To be an ultimate end, an act must be
self-sufficient and final, "that which is always desirable in itself and never for the sake of something else" (Nicomachean Ethics,
1097a30-34), and it must be attainable by man. Aristotle claims that nearly everyone would agree that happiness is the
end which meets all these requirements. It is easy enough to see that we desire money, pleasure, and honor only because
we believe that these goods will make us happy. It seems that all other goods are a means towards obtaining happiness,
while happiness is always an end in itself.

The Greek word that usually gets translated as "happiness" is eudaimonia, and like most translations from ancient
languages, this can be misleading. The main trouble is that happiness (especially in modern America) is often conceived
of as a subjective state of mind, as when one says one is happy when one is enjoying a cool beer on a hot day, or is out
"having fun" with one's friends. For Aristotle, however, happiness is a final end or goal that encompasses the totality of
one's life. It is not something that can be gained or lost in a few hours, like pleasurable sensations. It is more like the
ultimate value of your life as lived up to this moment, measuring how well you have lived up to your full potential as a
human being. For this reason, one cannot really make any pronouncements about whether one has lived a happy life until
it is over, just as we would not say of a football game that it was a "great game" at halftime (indeed we know of many
such games that turn out to be blowouts or duds). For the same reason we cannot say that children are happy, any more
than we can say that an acorn is a tree, for the potential for a flourishing human life has not yet been realized. As Aristotle
says, "for as it is not one swallow or one fine day that makes a spring, so it is not one day or a short time that makes a man
blessed and happy." (Nicomachean Ethics, 1098a18)

The Pursuit of Happiness as the Exercise of Virtue

Another important feature of Aristotle's theory is the link between the concepts of happiness and virtue. Aristotle
tells us that the most important factor in the effort to achieve happiness is to have a good moral character — what he
calls "complete virtue." But being virtuous is not a passive state: one must act in accordance with virtue. Nor is it enough
to have a few virtues; rather one must strive to possess all of them.

According to Aristotle, happiness consists in achieving, through the course of a whole lifetime, all the goods —
health, wealth, knowledge, friends, etc. — that lead to the perfection of human nature and to the enrichment of human
life. This requires us to make choices, some of which may be very difficult. Often the lesser good promises immediate
pleasure and is more tempting, while the greater good is painful and requires some sort of sacrifice. For example, it may
be easier and more enjoyable to spend the night watching television, but you know that you will be better off if you spend
it researching for your term paper. Developing a good character requires a strong effort of will to do the right thing, even
in difficult situations.

Aristotle would be strongly critical of the culture of "instant gratification" which seems to predominate in our
society today. In order to achieve the life of complete virtue, we need to make the right choices, and this involves keeping
our eye on the future, on the ultimate result we want for our lives as a whole. We will not achieve happiness simply by
enjoying the pleasures of the moment. Unfortunately, this is something most people are not able to overcome in
themselves. As he laments, "the mass of mankind are evidently quite slavish in their tastes, preferring a life suitable to
beasts" (Nicomachean Ethics, 1095b 20).

There is yet another activity few people engage in which is required to live a truly happy life, according to Aristotle:
intellectual contemplation. Since our nature is to be rational, the ultimate perfection of our natures is rational reflection.
This means having an intellectual curiosity which perpetuates that natural wonder to know which begins in childhood but
seems to be stamped out soon thereafter. For Aristotle, education should be about the cultivation of character, and this
involves a practical and a theoretical component. The practical component is the acquisition of a moral character, as
discussed above. The theoretical component is the making of a philosopher. Here there is no tangible reward, but the
critical questioning of things raises our minds above the realm of nature and closer to the abode of the gods.

Friendship

For Aristotle, friendship is one of the most important virtues in achieving the goal of eudaimonia (happiness).
While there are different kinds of friendship, the highest is one that is based on virtue (arête). This type of friendship is
based on a person wishing the best for their friends regardless of utility or pleasure. Aristotle calls it a “... complete sort
of friendship between people who are good and alike in virtue ...” (Nicomachean Ethics, 1156b07-08). This type of
friendship is long lasting and tough to obtain because these types of people are hard to come by and it takes a lot of work
to have a complete, virtuous friendship. Aristotle notes that one cannot have a large number of friends because of the
amount of time and care that a virtuous friendship requires. Aristotle values friendship so highly that he argues friendship
supersedes justice and honor. First of all, friendship seems to be so valued by people that no one would choose to live
without friends. People who value honor will likely seek out either flattery or those who have more power than they do,
in order that they may obtain personal gain through these relationships. Aristotle believes that the love of friendship is
greater than this because it can be enjoyed as it is. “Being loved, however, people enjoy for its own sake, and for this
reason it would seem it is something better than being honoured and that friendship is chosen for its own sake”
(Nicomachean Ethics, 1159a25-28). The emphasis on enjoyment here is noteworthy: a virtuous friendship is one that is
most enjoyable since it combines pleasure and virtue together, thus fulfilling our emotional and intellectual natures.

The Golden Mean

Aristotle’s ethics is sometimes referred to as “virtue ethics” since its focus is not on the moral weight of duties or
obligations, but on the development of character and the acquiring of virtues such as courage, justice, temperance,
benevolence, and prudence. And anyone who knows anything about Aristotle has heard his doctrine of virtue as being a
“golden mean” between the extremes of excess and deficiency. Courage, for example, is a mean regarding the feeling of
fear, between the deficiency of rashness (too little fear) and the excess of cowardice (too much fear). Justice is a mean
between getting or giving too much and getting or giving too little. Benevolence is a mean between giving to people who
don’t deserve it and not giving to anyone at all. Aristotle is not recommending that one should be moderate in all things,
since one should at all times exercise the virtues. One can’t reason "I should be cruel to my neighbour now since I was too
nice to him before." The mean is a mean between two vices, and not simply a mean between too much and too little.

Aristotle’s doctrine of the mean is well in keeping with ancient ways of thinking which conceived of justice as a
state of equilibrium between opposing forces. In the early cosmologies, the Universe is stabilized as a result of the
reconciliation between the opposing forces of Chaos and Order. The Greek philosopher Heraclitus conceived of right living
as acting in accordance with the Logos, the principle of the harmony of opposites; and Plato defined justice in the soul as
the proper balance among its parts. Like Plato, Aristotle thought of the virtuous character along the lines of a healthy
body. According to the prevailing medical theory of his day, health in the body consists of an appropriate balance between
the opposing qualities of the hot, the cold, the dry, and the moist. The goal of the physician is to produce a proper balance among
these elements, by specifying the appropriate training and diet regimen, which will of course be different for every person.

Similarly with health in the soul: exhibiting too much passion may lead to reckless acts of anger or violence which
will be injurious to one’s mental well-being as well as to others; but not showing any passion is a denial of one’s human
nature and results in the sickly qualities of morbidity, dullness, and antisocial behavior. The healthy path is the “middle
path,” though remember it is not exactly the middle, given that people who are born with extremely passionate natures
will have a different mean than those with sullen, dispassionate natures. Aristotle concludes that goodness of character
is “a settled condition of the soul which wills or chooses the mean relatively to ourselves, this mean being determined by
a rule or whatever we like to call that by which the wise man determines it.” (Nicomachean Ethics, 1106b36).

https://www.pursuit-of-happiness.org/history-of-happiness/aristotle/

Thomas Aquinas

Thomas Aquinas – Toward a Deeper Sense of Self


Written by: Therese Scarpelli Cory

“Who am I?” If Google’s autocomplete is any indication, it’s not one of the questions we commonly ask online
(unlike other existential questions like “What is the meaning of life?” or “What is a human?”). But philosophers have long
held that “Who am I?” is in some way the central question of human life. “Know yourself” was the motto that the
ancient Greeks inscribed over the threshold to the Delphic temple of Apollo, the god of wisdom. In fact, self-knowledge is
the gateway to wisdom, as Socrates quipped: “The wise person is the one who knows what he doesn’t know.”

The reality is, we all lack self-knowledge to some degree, and the pursuit of self-knowledge is a lifelong quest—
often a painful one. For instance, a common phenomenon studied in psychology is the “loss of a sense of self” that occurs
when a familiar way of thinking about oneself (for example, as “a healthy person,” “someone who earns a good wage,” “a
parent”) is suddenly stripped away by a major life change or tragedy. Forced to face oneself for the first time without
these protective labels, one can feel as though the ground has been suddenly cut out from under one’s feet: Who am I,
really?

But the reality of self-ignorance is something of a philosophical puzzle. Why do we need to work at gaining
knowledge about ourselves? In other cases, ignorance results from a lack of experience. No surprise that I confuse
kangaroos with wallabies: I’ve never seen either in real life. Of course I don’t know what number you’re thinking about: I
can’t see inside your mind. But what excuse do I have for being ignorant of anything having to do with myself? I already
am myself! I, and I alone, can experience my own mind from the inside. This insider knowledge makes me—as
communications specialists are constantly reminding us—the unchallenged authority on “what I feel” or “what I think.” So
why is it a lifelong project for me to gain insight into my own thoughts, habits, impulses, reasons for acting, or the nature
of the mind itself?

This is called the “problem of self-opacity,” and we’re not the only ones to puzzle over it: It was also of great
interest to the medieval thinker Thomas Aquinas (1225-1274), whose theory of self-knowledge is documented in my new
book Aquinas on Human Self-Knowledge. It’s a common scholarly myth that early modern philosophers (starting with
Descartes) invented the idea of the human being as a “self” or “subject.” My book tries to dispel that myth, showing that
like philosophers and neuroscientists today, medieval thinkers were just as curious about why the mind is so intimately
familiar, and yet so inaccessible, to itself. (In fact, long before Freud, Medieval Latin and Islamic thinkers were speculating
about a subconscious, inaccessible realm in the mind.) The more we study the medieval period, the clearer it becomes
that inquiry into the self does not start with Descartes’ “I think, therefore I am.” Rather, Descartes was taking sides in a
debate about self-knowledge that had already begun in the thirteenth century and earlier.

Aquinas begins his theory of self-knowledge from the claim that all our self-knowledge is dependent on our
experience of the world around us. He rejects a view that was popular at the time, i.e., that the mind is “always on,” never
sleeping, subconsciously self-aware in the background. Instead, Aquinas argues, our awareness of ourselves is triggered
and shaped by our experiences of objects in our environment. He pictures the mind as a sort of undetermined mental
“putty” that takes shape when it is activated in knowing something. By itself, the mind is dark and formless; but in the
moment of acting, it is “lit up” to itself from the inside and sees itself engaged in that act. In other words, when I long for
a cup of mid-afternoon coffee, I’m not just aware of the coffee, but of myself as the one wanting it. So for Aquinas, we
don’t encounter ourselves as isolated minds or selves, but rather always as agents interacting with our
environment. That’s why the labels we apply to ourselves—“a gardener,” “a patient person,” or “a coffee-lover”—are
always taken from what we do or feel or think toward other things.

But if we “see” ourselves from the inside at the moment of acting, what about the “problem of self-opacity”
mentioned above? Instead of lacking self-knowledge, shouldn’t we be able to “see” everything about ourselves

JRV 8
clearly? Aquinas’s answer is that just because we experience something doesn’t mean we instantly understand everything
about it—or to use his terminology: experiencing that something exists doesn’t tell us what it is. (By comparison: If
someday I encounter a wallaby, that won’t make me an expert about wallabies.) Learning about a thing’s nature requires
a long process of gathering evidence and drawing conclusions, and even then we may never fully understand it. The same
applies to the mind. I am absolutely certain, with an insider’s perspective that no one else can have, of the reality of my
experience of wanting another cup of coffee. But the significance of those experiences—what they are, what they tell me
about myself and the nature of the mind—requires further experience and reasoning. Am I hooked on caffeine? What is
a “desire” and why do we have desires? These questions can only be answered by reasoning about the evidence taken
from many experiences.

Aquinas, then, would surely advise us not to search online for answers to the question, “Who am
I?” That question can only be answered “from the inside” by me, the one asking the question. At the same time,
answering this question isn’t a matter of withdrawing from the world and turning in on ourselves. It’s a matter of
becoming more aware of ourselves at the moment of engaging with reality, and drawing conclusions about what our
activities towards other things “say” about us. There’s Aquinas’s “prescription” for a deeper sense of self.

http://www.cambridgeblog.org/2014/01/thomas-aquinas-toward-a-deeper-sense-of-self/

René Descartes

René Descartes (1596-1650): French philosopher considered the founder of modern philosophy. A mathematician
and scientist as well, Descartes was a leader in the seventeenth-century scientific revolution. In his major work,
Meditations on First Philosophy (1641), he rigorously analyzed the established knowledge of the time.

Although Socrates is often described as the “father of Western philosophy,” the French philosopher René
Descartes is widely considered the “founder of modern philosophy.” As profoundly insightful as such thinkers as Socrates
and Plato were regarding the nature of the self, their understanding was also influenced and constrained by the
consciousness of their time periods. Descartes brought an entirely new—and thoroughly modern—perspective to
philosophy in general and the self in particular.

As an accomplished mathematician (he invented analytic geometry) and an aspiring scientist, Descartes was an
integral part of the scientific revolution that was just beginning. (His major philosophical work, Meditations on First
Philosophy, was published in 1641, the year before Galileo died and Isaac Newton was born.) The foundation of this
scientific revolution was the belief that genuine knowledge needed to be based on independent rational inquiry and real-
world experimentation. It was no longer appropriate to accept without question the “knowledge” handed down by
authorities—as was prevalent during the religion-dominated Middle Ages. Instead, Descartes and others were convinced
that we need to use our own thinking abilities to investigate, analyze, experiment, and develop our own well-reasoned
conclusions, supported with compelling proof. In a passage from his Discourse on Method, Descartes contrasts the process
of learning to construct knowledge by thinking independently with simply absorbing information from authorities:

“For we shall not, e.g., turn out to be mathematicians though we know by heart all the proofs others have
elaborated, unless we have an intellectual talent that fits us to resolve difficulties of any kind. Neither, though we
may have mastered all the arguments of Plato and Aristotle, if yet we have not the capacity for passing solid
judgment on these matters, shall we become Philosophers; we should have acquired the knowledge not of a
science, but of history.”

But reasoning effectively does not mean simply thinking in our own personal, idiosyncratic ways: That type of
common sense thinking is likely to be seriously flawed. Instead, effective use of “the natural light of reason” entails
applying scientific discipline and analytic rigor to our explorations to ensure that the conclusions that we reach have
genuine merit:

“So blind is the curiosity by which mortals are possessed, that they often conduct their minds along
unexplored routes, having no reason to hope for success. . . it were far better never to think of investigating truth
at all, than to do so without a method. For it is very certain that unregulated inquiries and confused reflections of
this kind only confound the natural light and blind our mental powers. . . . In (method) alone lies the sum of all
human endeavor, and he who would approach the investigation of truth must hold to this rule. For to be possessed
of good mental powers is not sufficient; the principal matter is to apply them well. The greatest minds are capable
of the greatest vices as well as of the greatest virtues, and those who proceed very slowly may, provided they
always follow the straight road, really advance much faster than those who, though they run, forsake it.”

Descartes is convinced that committing yourself to a wholesale and systematic doubting of all things you have
been taught to simply accept without question is the only way to achieve clear and well-reasoned conclusions. More
important, it is the only way for you to develop beliefs that are truly yours and not someone else’s. He explains, “If you
would be a real seeker after truth, it is necessary that at least once in your life you doubt, as far as possible, all things.”
This sort of thoroughgoing doubting of all that you have been taught requires great personal courage, for calling into
question things like your religious beliefs, cultural values, and even beliefs about your self can be, in the short term, a very
disruptive enterprise. It may mean shaking up your world, questioning the beliefs of important people in your life, perhaps
challenging your image of yourself. Yet there is a compelling logic to Descartes’s pronouncement: For, if you are not willing
to question all that you have been asked to accept “on faith,” then you will never have the opportunity to construct a
rock-solid foundation for your beliefs about the world and your personal philosophy of life. What’s more, you will never
have the experience to develop the intellectual abilities and personal courage required to achieve your full potential in
the future.

Cogito, ergo sum is the first principle of Descartes’s theory of knowledge because he is confident that no rational
person will doubt his or her own existence as a conscious, thinking entity—while we are aware of thinking about our self.
Even if we are dreaming or hallucinating, even if our consciousness is being manipulated by some external entity, it is still
my self-aware self that is dreaming, hallucinating, or being manipulated. Thus, in addition to being the first principle of his
epistemology, cogito, ergo sum is also the keystone of Descartes’s concept of self. The essence of existing as a human
identity is the possibility of being aware of our selves: Being self-conscious in this way is integral to having a personal
identity. Conversely, it would be impossible to be self-conscious if we didn’t have a personal identity of which to be
conscious. In other words, having a self-identity and being self-conscious are mutually dependent on one another.

For Descartes, then, this is the essence of your self—you are a “thinking thing,” a dynamic identity that engages
in all of those mental operations we associate with being a human self. For example:

 You understand situations in which you find yourself.
 You doubt the accuracy of ideas presented to you.
 You affirm the truth of a statement made about you.
 You deny an accusation that someone has made.
 You will yourself to complete a task you have begun.
 You refuse to follow a command that you consider to be unethical.
 You imagine a fulfilling career for yourself.
 You feel passionate emotions toward another person.

But in addition to engaging in all of these mental operations—and many others besides—your self-identity is
dependent on the fact that you are capable of being aware you are engaging in these mental operations while you are
engaged in them. If you were consistently not conscious of your mental operations, consistently unaware of your thinking,
reasoning, and perceiving processes, then it would not be possible for you to have a self-identity, a unique essence, a you.

But what about your body? After all, a great deal of our self-concept and self-identity is tied up with our physical
existence: our physical qualities, appearance, gender, race, age, height, weight, hair style, and so on. Despite this,
Descartes believes that your physical body is secondary to your personal identity. One reason for this is that he believes
you can conceive of yourself existing independently of your body.

Nevertheless, even though your body is not as central to your “self” as is your capacity to think and reflect, it
clearly plays a role in your self-identity. In fact, Descartes contends, if you reflect thoughtfully, you can see that you have
clear ideas of both your “self” as a thinking entity and your “self” as a physical body. And these two dimensions of your
self are quite distinct.

It is at this point that we can see the pervasive influence of the metaphysical framework created by Socrates and
Plato and perpetuated through the centuries by such thinkers as Plotinus and Saint Augustine. Following directly in their
footsteps, Descartes declares that the essential self—the self as thinking entity—is radically different than the self as
physical body. The thinking self—or soul—is a nonmaterial, immortal, conscious being, independent of the physical laws
of the universe. The physical body is a material, mortal, nonthinking entity, fully governed by the physical laws of nature.
What’s more, your soul and your body are independent of one another, and each can exist and function without the other.
How is that possible? For example, in the case of physical death, Descartes believes (as did Plato) that your soul continues
to exist, seeking union with the spiritual realm and God’s infinite and eternal mind. On the other hand, in cases in which
people are sleeping or comatose, their bodies continue to function even though their minds are not thinking, much like
the mechanisms of a clock.

Thus Descartes ends up with Plato’s metaphysic, a dualistic view of reality, bifurcated into

 a spiritual, nonmaterial, immortal realm that includes conscious, thinking beings, and
 a physical, material, finite realm that includes human bodies and the rest of the physical universe.

In the case of the human self, the soul (or mind) and the physical body could not be more different. For example,
you can easily imagine the body being divided into various parts, whereas it is impossible to imagine your soul as anything
other than an indivisible unity (precisely the point that Socrates makes when he’s arguing for the immortality of the soul).

Although a bifurcated view of the universe solves some immediate problems for Descartes, it creates other
philosophical difficulties, most notably the vexing question, “What is the relationship between the mind and the body?”
In our everyday experience, our minds and bodies appear to be very closely related to one another. Our thinking and
emotions have a profound effect on many aspects of our physical bodies, and physical events with our bodies have a
significant impact on our mental lives. For the most part, we experience our minds and bodies as a unified entity, very
different from the two different and completely independent substances that Descartes proposes. As the writer and
humorist Mark Twain noted, “How come the mind gets drunk when the body does the drinking?” Even Descartes
recognized the need to acknowledge the close, intimate relationship between mind and body.

https://revelpreview.pearson.com/epubs/pearson_chaffee/OPS/xhtml/ch03_sec_04.xhtml

David Hume (1711-1776)

Hume believed that the entire contents of the mind were drawn from experience alone. The stimulus could be
external or internal. In this nexus, Hume distinguishes what he calls impressions from ideas. Impressions are vivid
perceptions, strong and lively: "I comprehend all our sensations, passions, and emotions as they make their first
appearance in the soul" (Flew 1962, p. 176). Ideas, by contrast, are images in thinking and reason.

For Hume there is no mind or self. The perceptions that one has are only active when one is conscious. "When my
perceptions are removed for any time, as by sound sleep, so long am I insensible of myself, and may truly be said not to
exist." (Flew 1962, p. 259). Hume appears to be reducing personality and cognition to a machine that may be turned on
and off. Death brings with it the annihilation of one's perceptions. Hume treats the passions as the determinants of
behavior. He also appears to be a behaviorist, believing that humans learn in the same manner as lower animals, that is,
through reward and punishment (Hergenhahn 2005).

Skepticism is the guiding principle in Hume's refusal to admit any metaphysics on this subject. Hume addresses
his conclusions in the appendix to A Treatise of Human Nature (Hume 1789).

In short there are two principles, which I cannot render consistent; nor is it in my power to
renounce either of them, viz. that all our distinct perceptions are distinct existences, and that the mind
never perceives any real connexion among distinct existences.

Hume's method of inquiry begins with his assumption that experience, in the form of impressions, cannot give
rise to a constant self that could serve as a point of reference for all future experiences. The idea of the self is not derived
from any one impression; it is itself a bundle of several ideas and impressions. There is no constant impression that
endures for one's whole life. Different sensations, such as pleasure and pain or heat and cold, succeed one another in a
continuum that is variable, not constant. Hume states, "It cannot therefore be from any of these impressions, or from any
other, that the idea of self is derived; and consequently there is no such idea" (Hume 1789). The closest thing Hume could
offer as a self is something like watching a film or a play of one's life. These perceptions are separate from one another,
and there is no unifying component, no self, to organize them for long-term reference.

Hume further considers the notion of identity as an invariable and uninterrupted existence. He denies that there
is any primordial substance in which all secondary qualities of individual existence inhere. Everything in our conscious
state is derived from impressions. Objects in the outer world exist as distinct species that are separable from the
secondary qualities in conscious thought. To negate any demonstration of substance, Hume offers an analogy: if life were
reduced to something even lower than an oyster, possessing only a single perception such as thirst or hunger, the only
thing that would exist is that perception. Adding a higher complex of perceptions would not yield any notion of substance
that could ground an independent and constant self (Hume 1789). Hume's model of the mind simply records data when
it is manifestly conscious. The model abstracts and isolates objects and secondary qualities without any metaphysics.
Unity of experience is one area Hume found elusive in his model; accordingly, he denied any configuration of self-
reference, admitting only perceptions in consciousness (Hume 1789).

The Bundle Theory of the Self

 Theory in which an object consists only of a collection (bundle) of properties.
 According to bundle theory, an object consists of its properties and nothing more.
 Hence, there cannot be an object without properties, nor can one even conceive of such an object.
 For example, a ball is really a collection of the properties green (color), 50cm in diameter (size), 5kg (weight), etc.
 Beyond those properties, there is no "ball."
 In particular, there is no substance in which the properties inhere.
 According to David Hume, the idea of an enduring self is an illusion.
 A person is simply a collection of mental states at a particular time; there is no separate subject of these mental
states over and above the states themselves.
 When the states subside and are replaced with other states, this theory implies, the person too subsides and is
replaced with another.

http://www.powereality.net/hume-kant.htm

Immanuel Kant (1724-1804)

Kant was alarmed by David Hume’s notion that the mind is simply a container for fleeting sensations and
disconnected ideas, and our reasoning ability is merely “a slave to the passions.” If Hume’s views proved true, then humans
would never be able to achieve genuine knowledge in any area of experience: scientific, ethical, religious, or metaphysical,
including questions such as the nature of our selves. For Kant, Hume’s devastating conclusions served as a Socratic “gadfly”
to his spirit of inquiry, awakening him from his intellectual sleep and galvanizing him to action.

Kant was convinced that philosophers and scientists of the time did not fully appreciate the potential
destructiveness of Hume’s views, and that it was up to him (Kant) to meet and dismantle this threat to human knowledge.

How did Hume’s empirical investigations lead him to the unsatisfying conclusion that genuine knowledge—and
the self—do not exist? Kant begins his analysis at Hume’s starting point—examining immediate sense experience—and
he acknowledges Hume’s point that all knowledge of the world begins with sensations: sounds, shapes, colors, tastes,
feels, smells. For Hume, these sensations are the basic data of experience, and they flow through our consciousness in a
torrential rushing stream.

But in reflecting on his experience, Kant observes an obvious fact that Hume seems to have overlooked, namely,
that our primary experience of the world is not in terms of a disconnected stream of sensations. Instead, we perceive and
experience an organized world of objects, relationships, and ideas, all existing within a fairly stable framework of space
and time. True, at times discrete and randomly related sensations dominate our experience: for example, when we are
startled out of a deep sleep and “don’t know where we are,” or when a high fever creates bizarre hallucinations, or the
instant when an unexpected thunderous noise or blinding light suddenly dominates our awareness. But in general, we live
in a fairly stable and orderly world in which sensations are woven together into a fabric that is familiar to us. And integrated
throughout this fabric is our conscious self who is the knowing subject at the center of our universe. Hume’s problem
wasn’t his starting point—empirical experience—it was the fact that he remained fixated on the starting point, refusing
to move to the next, intelligible level of experience.

Where does the order and organization of our world come from? According to Kant, it comes in large measure
from us. Our minds actively sort, organize, relate, and synthesize the fragmented, fluctuating collection of sense data that
our sense organs take in. For example, imagine that someone dumped a pile of puzzle pieces on the table in front of you.
They would initially appear to be a random collection of items, unrelated to one another and containing no meaning for
you, much like the basic sensations of immediate unreflective experience. However, as you began to assemble the pieces,
these fragmentary items would gradually begin to form a coherent image that would have significance for you. According
to Kant, this meaning-constructing activity is precisely what our minds are doing all of the time: taking the raw data of
experience and actively synthesizing it into the familiar, orderly, meaningful world in which we live. As you might imagine,
this mental process is astonishing in its power and complexity, and it is going on all of the time.

How do our minds know the best way to construct an intelligible world out of a never-ending avalanche of
sensations? We each have fundamental organizing rules or principles built into the architecture of our minds. These
dynamic principles naturally order, categorize, organize, and synthesize sense data into the familiar fabric of our lives,
bounded by space and time. These organizing rules are a priori in the sense that they precede the sensations of experience
and they exist independently of these sensations. We didn’t have to “learn” these a priori ways of organizing and relating
the world—they came as software already installed in our intellectual operating systems.

Kant referred to his approach to perception and knowledge as representing a “Copernican Revolution” in
metaphysics and epistemology, derived from the breakthrough of the Polish astronomer Copernicus (1473–1543), who
was one of the first and most definitive voices asserting that instead of the Sun orbiting around Earth, it’s actually the
reverse—Earth orbits the Sun.

In a similar fashion, empiricists like Hume had assumed that the mind was a passive receptacle of sensations, a
“theatre” in which the raw data of experience moved across without our influence. According to Hume, our minds conform
to the world of which we are merely passive observers. Kant, playing the role of Copernicus, asserted that this is a
wrongheaded perspective. The sensations of experience are necessary for knowledge, but they are in reality the “grist”
for our mental “mills.” Our minds actively synthesize and relate these sensations in the process of creating an intelligible
world. As a result, the sensations of immediate experience conform to our minds, rather than the reverse. We construct
our world through these conceptual operations; and, as a result, this is a world of which we can gain insight and
knowledge.

From Kant’s standpoint, it’s our self that makes experiencing an intelligible world possible because it’s the self
that is responsible for synthesizing the discrete data of sense experience into a meaningful whole. Metaphorically, our self
is the weaver who, using the loom of the mind, weaves together the fabric of experience into a unified whole so that it
becomes my experience, my world, my universe. Without our self to perform this synthesizing function, our experience
would be unknowable, a chaotic collection of sensations without coherence or significance.

The unity of consciousness is a phrase invented by Kant to describe the fact that the thoughts and perceptions of
any given mind are bound together in a unity by being all contained in one consciousness—my consciousness. That’s
precisely what makes your world intelligible to you: It’s your self that is actively organizing all of your sensations and
thoughts into a picture that makes sense to you. This picture is uniquely your picture. You are at the center of your world,
and you view everything in the world from your perspective. For example, think about a time in which you shared an
experience with someone but each of you had radically different experiences: attending a party, watching a movie, having
a communication misunderstanding. Reflect on the way each person instinctively describes the entire situation from his
or her perspective. That’s the unity of consciousness that Kant is describing.

Your self is able to perform this synthesizing, unifying function because it transcends sense experience. Your self
isn’t an object located in your consciousness with other objects—your self is a subject, an organizing principle that makes
a unified and intelligible experience possible. It is, metaphorically, “above” or “behind” sense experience, and it uses the
categories of your mind to filter, order, relate, organize, and synthesize sensations into a unified whole. That’s why Kant
accords the self “transcendental” status: It exists independently of experience. The self is the product of reason, a
regulative principle because the self “regulates” experience by making unified experience possible.

So where did Hume go wrong, from Kant’s standpoint? How could Hume examine his mind’s contents and not
find his self, particularly because, in Kant’s view, the self is required to have intelligible experience? Hume’s problem
(according to Kant) was that he looked for his self in the wrong place! Contrary to what Hume assumed, the self is not an
object of consciousness, one of the contents of the mind. Instead, the self is the transcendental activity that synthesizes
the contents of consciousness into an intelligible whole. Because the self is not a “content” of consciousness but rather
the invisible “thread” that ties the contents of consciousness together, it’s no wonder that Hume couldn’t find it. It would
be analogous to going to a sporting event and looking in vain to see the “team,” when all you see is a collection of
players. The “team” is the network of relationships between the individuals that is not visible to simple perception. The
“team” is the synthesizing activity that creates a unity among the individuals, much like the self creates a unity in
experience by synthesizing its contents into an intelligible whole. And because experience is continually changing, this
intelligible picture of the world is being updated on an instantaneous basis.

We can also see Kant’s refinement of Descartes’s concept of the self, which he interprets as a simple, self-evident
fact: “I think, therefore I am.” Kant was interested in developing a more complex, analytical, and sophisticated
understanding of the self as a thinking identity. To begin with, Descartes was focusing on one dimension of the thinking
process: our ability to reflect, to become aware of our self, to be self-conscious. But from Kant’s standpoint, the thinking
self—consciousness—has a more complex structure than simple self-reflection. The self is a dynamic entity/activity,
continually synthesizing sensations and ideas into an integrated, meaningful whole. The self, in the form of consciousness,
utilizes conceptual categories (or “transcendental rules”) such as substance, cause and effect, unity, plurality, possibility,
necessity, and reality to construct an orderly and “objective” world that is stable and can be investigated scientifically. It
is in this sense that the self constructs its own reality, actively creating a world that is familiar, predictable, and, most
significantly, mine.

https://revelpreview.pearson.com/epubs/pearson_chaffee/OPS/xhtml/ch03_sec_07.xhtml

John Locke

John Locke (1632-1704): British philosopher and physician who laid the groundwork for an empiricist approach to
philosophical questions. Locke’s revolutionary theory that the mind is a tabula rasa, a blank slate on which experience
writes, is detailed in his Essay Concerning Human Understanding (1690).

The English philosopher—and physician—John Locke continued exploring the themes Descartes had initiated,
both in terms of the nature of knowledge (epistemology) and the nature of the self. He shared with Descartes a scientist’s
perspective, seeking to develop knowledge based on clear thinking, rigorous analysis, and real-world observation and
experimentation. However, Locke brought a very different approach to this epistemological enterprise. Descartes believed
that we could use the power of reason to achieve absolutely certain knowledge of the world and then use this rationally
based knowledge to understand our world of experience. His extensive work in mathematics served as a model, convincing
him that there were absolute truths and knowledge waiting to be discovered by reasoned, disciplined reflection.

Locke’s work as a physician, rather than a mathematician, provided him with a very different perspective. The
physician’s challenge is to gather information regarding the symptoms a patient is experiencing, and then relate these
symptoms to his (the physician’s) accumulated knowledge of disease. Although a successful doctor uses sophisticated
reasoning abilities in identifying patterns and making inferences, his conclusions are grounded in experience. Knowledge,
in other words, is based on the careful observation of sense experience and/or memories of previous experiences. Reason
plays a subsequent role in helping to figure out the significance of our sense experience and to reach intelligent
conclusions.

To sum up: For Descartes, our reasoning ability provides the origin of knowledge and final court of judgment in
evaluating the accuracy and value of the ideas produced. For Locke, all knowledge originates in our direct sense experience,
which acts as the final court of judgment in evaluating the accuracy and value of ideas. As a result, Descartes is considered
an archetypal proponent of the rationalist view of knowledge, whereas Locke is considered an archetypal advocate of the
empiricist view of knowledge.

True to his philosophical commitment to grounding his ideas in sense experience, Locke, in his essay entitled “On
Personal Identity” (from his most famous work, An Essay Concerning Human Understanding) engages in a reflective
analysis of how we experience our self in our everyday lives.

In this initial passage, Locke makes the following points, implicitly asking the question of his readers, “Aren’t these
conclusions confirmed by examining your own experiences?”

1. To discover the nature of personal identity, we’re going to have to find out what it means to be a person.
2. A person is a thinking, intelligent being who has the abilities to reason and to reflect.
3. A person is also someone who considers itself to be the same thing in different times and different places.
4. Consciousness—being aware that we are thinking—always accompanies thinking and is an essential part of the
thinking process.
5. Consciousness is what makes possible our belief that we are the same identity in different times and different
places.

Reflect carefully on Locke’s points—do you find that his conclusions match your own personal experience?
Certainly his first three points seem plausible. What about points 4 and 5? Does consciousness always accompany the
thinking process? Locke explains: “When we see, hear, smell, taste, feel, meditate, or will anything, we know that we do
so. Thus it is always as to our present sensations and perceptions: and by this every one is to himself that which he calls
self.” Consider what you are doing at this moment: You are thinking about the words on the page, the ideas that are being
expressed—are you also aware of yourself as you are reading and thinking? Certainly once the question is posed to you,
you’re aware of your self. Perhaps it’s more accurate to say that when you think, you are either conscious of your self—
or potentially conscious of your self. In other words, are there times in which you are fully immersed in an activity—such
as dancing, driving a car, or playing a sport—and not consciously aware that you are doing so? Analogously, are there
times in which you are fully engaged in deep thought—wrestling with a difficult idea, for example—and not aware that

JRV 15
you are doing so? But even if there are times in which you are unreflectively submerged in an activity or thought process,
you always have the potential to become aware of your self engaged in the activity or thought process.

What about Locke’s fifth point, that consciousness is necessary for us to have a unified self-identity in different
times and places? This seems like a point well taken. You consider your self to be the same self who was studying last
night, attending a party at a friend’s house two weeks ago, and taking a vacation last summer. How can you be sure it’s
the same self in all of these situations? Because of your consciousness of being the same self in all of these different
contexts.

These points become clearer when we contrast human thinking with animal thinking. It’s reasonable to believe
that mammals such as chipmunks, dogs, and dolphins are able to see, hear, smell, taste, and feel, just like humans. But
are they conscious of the fact that they are performing these activities as they are performing them? Most people would
say “no.” And because they are not conscious that they are performing these activities, it’s difficult to see how they would
have a concept of self-identity that remains the same over time and place. So consciousness—or more specifically, self-
consciousness—does seem to be a necessary part of having a coherent self-identity. (Some people believe that higher-
order mammals such as chimpanzees and gorillas present more complicated cases.)

In Locke’s mind, conscious awareness and memory of previous experiences are the keys to understanding the self.
In other words, you have a coherent concept of your self as a personal identity because you are aware of your self when
you are thinking, feeling, and willing. And you have memories of times when you were aware of your self in the past, in
other situations—for example, at the party two weeks ago, or your high school graduation several years ago. But, as we
noted earlier, there are many moments when we are not consciously aware of our self when we are thinking, feeling, and
willing—we are simply, unreflectively, existing. What’s more, there are many past experiences that we have forgotten or
have faulty recollections of. All of which means that during those lapses, when we were not aware of our self, or don’t
remember being aware of our self, we can’t be sure if we were the same person, the same substance, the same soul! Our
personal identity is not in doubt or jeopardy because we are aware of our self (or remember being aware of it). But we
have no way of knowing if our personal identity has been existing in one substance (soul) or a number of substances
(souls). For Locke, personal identity and the soul or substance in which the personal identity is situated are two very
different things. Although the idea seems rather strange at first glance, Locke provides a very concrete example to further
illustrate what he means.

“That this is so, we have some kind of evidence in our very bodies, all whose particles, whilst vitally united
to this same thinking conscious self, so that we feel when they are touched, and are affected by, and
conscious of good or harm that happens to them, are a part of ourselves; i.e., of our thinking conscious
self. Thus, the limbs of his body are to every one a part of himself; he sympathizes and is concerned for
them. Cut off a hand, and thereby separate it from that consciousness he had of its heat, cold, and other
affections, and it is then no longer a part of that which is himself, any more than the remotest part of
matter. Thus, we see the substance whereof personal self consisted at one time may be varied at another,
without the change of personal identity; there being no question about the same person, though the limbs
which but now were a part of it, be cut off.”

It’s a rather gruesome example Locke provides, but it makes his point. Every aspect of your physical body
(substance) is integrated with your personal identity—hit your finger with a hammer, and it’s you who is experiencing the
painful sensation. But if your hand is cut off in an industrial accident, your personal identity remains intact, although the
substance associated with it has changed (you now have only one hand). Or to take another example: The cells of our
body are continually being replaced, added to, subtracted from. So it’s accurate to say that in many ways you are not the
same physical person you were five years ago, ten years ago, fifteen years ago, and so on. Nevertheless, you are likely
convinced that your personal identity has remained the same despite these changes in physical substance to your body.
This leads Locke to conclude that our personal identity is distinct from whatever substance it finds itself associated with.

https://revelpreview.pearson.com/epubs/pearson_chaffee/OPS/xhtml/ch03_sec_05.xhtml

Anthropological Perspective

The word anthropology is derived from the Greek words anthropo, meaning “human beings” or “humankind,”
and logia, translated as “knowledge of” or “the study of.” Thus, we can define anthropology as the systematic study of
humankind. This definition in itself, however, does not distinguish anthropology from other disciplines. After all, historians,
psychologists, economists, sociologists, and scholars in many other fields systematically study humankind in one way or
another. Anthropology stands apart because it combines four subfields, or subdisciplines, that bridge the natural sciences,
the social sciences, and the humanities. These four subfields—physical anthropology, archaeology, linguistic anthropology,
and cultural anthropology or ethnology—constitute a broad approach to the study of humanity the world over, both past and
present.

Physical Anthropology

Physical anthropology is the branch of anthropology concerned with humans as a biological species. As such, it is
the subfield most closely related to the natural sciences. Physical anthropologists conduct research in two major areas:
human evolution and modern human variation. The investigation of human evolution presents one of the most tantalizing
areas of anthropological study. Research has now traced the African origins of humanity back over 6 million years.
Fieldwork in other world areas has traced the expansion of early human ancestors throughout the world. Much of the
evidence for human origins consists of fossils, the fragmentary remains of bones and living materials preserved from earlier
periods. The study of human evolution through analysis of fossils is called paleoanthropology (the prefix paleo means
“old” or “prehistoric”). Paleoanthropologists use a variety of scientific techniques to date, classify, and compare fossil bones
to determine the links between modern humans and their biological ancestors. These paleoanthropologists may work closely
with archaeologists when studying ancient tools and activity areas to learn about the behavior of early human ancestors.

Other physical anthropologists explore human evolution through primatology, the study of primates. Primates are
mammals that belong to the same overall biological classification as humans and, therefore, share similar physical
characteristics and a close evolutionary relationship with us. Many primatologists observe primates such as chimpanzees,
gorillas, gibbons, and orangutans in their natural habitats to ascertain the similarities and differences between these other
primates and humans. These observations of living primates may provide insight into the behaviors of early human
ancestors.

Another group of physical anthropologists focuses their research on the range of physical variation within and
among different modern human populations. These anthropologists study human variation by measuring physical
characteristics—such as body size, variation in blood types, or differences in skin color—or various genetic traits. Their
research aims at explaining why such variation occurs, as well as documenting the differences in human populations.
Skeletal structure is also the focus of anthropological research. Human osteology is the particular area of specialization
within physical anthropology dealing with the study of the human skeleton. Such studies have wide-ranging applications,
from the identification of murder victims from fragmentary skeletal remains to the design of ergonomic airplane cockpits.
Physical anthropologists are also interested in evaluating how disparate physical characteristics reflect evolutionary
adaptations to different environmental conditions, thus shedding light on why human populations vary.

Physical anthropologists have also shed light on general questions about humanity, such as the propensity for violence
in human societies. Physical anthropologist Philip Walker has conducted in-depth research on human skeletal materials
from various periods of prehistory in an attempt to answer general questions about the prevalence of violence in past
societies (2001). Walker finds that human skeletal remains with traumatic injuries, such as flint arrow points embedded in
the vertebrae or cut marks on crania, together with other archaeological materials from the past, suggest that both violence
and cannibalism have been pervasive since the beginning of human prehistory. Although the prehistoric record indicates that
there were periods of peace, Walker's enormous body of skeletal data indicates that warfare and violence were frequent.
The data also indicate that the frequency of prehistoric human violence is associated with climatic changes in
the past that resulted in crop failures or other scarcities. Thus, research in physical anthropology has provided deep
insights into the patterns of human violence that help us understand our condition in the contemporary era.

An increasingly important area of research for some physical anthropologists is genetics, the study of the biological
“blueprints” that dictate the inheritance of physical characteristics. An example of genetic research on a modern population
is that conducted by physical anthropologist Cynthia Beall in the Himalayan Mountains of Tibet. Beall and her team
conducted detailed genealogical and historical interviews with thousands of women between the ages of 20 and 60 who had
moved to, and were adapting to, new environmental conditions at an altitude of 4,000 meters, where oxygen levels are low.
Ruling out such factors as age, illness, and smoking, the team found that one group of these women had blood oxygen levels
10 percent higher than normal. The children of these women were much more likely to survive to the age of 15 or older:
this group averaged .04 childhood deaths, whereas women in the low-oxygen group averaged 2.5 children who died during
childhood. Beall and her team concluded that the gene or genes that determine a high-oxygen blood count gave these women
survival and adaptive advantages at this high mountain altitude (Beall, Song, Elston, and Goldstein 2004). This
anthropological research has demonstrated a case of natural selection and human evolution occurring presently within a
particular environment.

Genetics has also become an increasingly important complement to paleoanthropological research. Through the
study of the genetic makeup of modern humans, physical anthropologists have been working on calculating the genetic
distance among modern humans, thus providing a means of inferring rates of evolution and the evolutionary relationships
within the species. An important project run by genetic paleoanthropologist Spencer Wells is helping to illuminate the
migrations of humans throughout the world. Wells is the director of the Genographic Project, sponsored by the National
Geographic Society and IBM. The Genographic Project is gathering samples of DNA from populations throughout the
world to trace human evolution. Wells is a pioneer in this form of genetic paleoanthropology. He has developed an
international network of leading anthropologists in genetics, linguistics, archaeology, paleoanthropology, and cultural
anthropology to assist in this project. Labs analyzing DNA have been established in different regions of the world
by the Genographic Project. As DNA is transmitted from parents to offspring, most of the genetic material is recombined
and mutated. However, some mutated DNA remains fairly stable over the course of generations. This stable mutated DNA
can serve as “genetic markers” that are passed on to each generation and create populations with distinctive sets of DNA.
These genetic markers can serve to distinguish ancient lineages of DNA. By following the pathways of these
genetic markers, genetic paleoanthropologists such as Wells can blend archaeological, prehistoric, and linguistic data with
paleoanthropological data to trace human evolution.
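The logic of tracing lineages through stable markers can be illustrated with a toy sketch. The marker labels below (M168, M89, M9) are real Y-chromosome marker names used in lineage studies, but the populations and their marker assignments are invented purely for illustration, not Genographic Project data. The idea: mutations stable across generations accumulate along a lineage, so the markers two populations share point to mutations that arose in a common ancestor.

```python
# Toy sketch: stable "genetic markers" accumulate along a lineage and are
# inherited by all descendants, so shared markers reveal shared ancestry.
# Population assignments here are hypothetical, for illustration only.

population_markers = {
    "A": {"M168"},               # carries only the oldest mutation
    "B": {"M168", "M89"},        # inherits M168, adds a later mutation
    "C": {"M168", "M89", "M9"},  # inherits both, adds yet another
}

def shared_lineage(pop1, pop2, markers):
    """Markers common to both populations: mutations inherited
    from an ancestor on the lineage the two populations share."""
    return markers[pop1] & markers[pop2]

# B and C share a more recent common ancestor than either shares with A.
print(sorted(shared_lineage("B", "C", population_markers)))  # ['M168', 'M89']
print(sorted(shared_lineage("A", "C", population_markers)))  # ['M168']
```

The more markers two populations share, the more recently their lineages diverged; following such marker pathways across many sampled populations is, in simplified form, how migration routes are reconstructed.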

Archaeology

Archaeology, the branch of anthropology that examines the material traces of past societies, informs us about the
culture of those societies—the shared way of life of a group of people that includes their values, beliefs, and norms.
However, as we will see below, some archaeologists do research in contemporary societies. Artifacts, the material products
of former societies, provide clues to the past. Some archaeological sites reveal spectacular jewelry like that found by the
movie character Indiana Jones or the treasures of a pharaoh’s tomb. Most artifacts, however, are not so spectacular. Despite
the popular image of archaeology as an adventurous, even romantic pursuit, it usually consists of methodical,
time-consuming, and sometimes tedious research. Archaeologists often spend hours sorting through ancient trash
piles, or middens, to discover how members of past societies ate their meals, what tools they used in their households and
in their work, and what beliefs gave meaning to their lives. They collect and carefully analyze the broken fragments of
pottery, stone, glass, and other materials. It may take them months or even years to fully complete the study
of an excavation. Unlike fictional archaeologists, who experience glorified adventures, real-world archaeologists
thrive on the intellectually challenging adventure of systematic, scientific research that enlarges our understanding of the
past.

Archaeologists have examined sites the world over, from campsites of the earliest humans to modern landfills.
Some archaeologists investigate past societies whose history is primarily told by the archaeological record. Known as
prehistoric archaeologists, they study the artifacts of groups such as the ancient inhabitants of Europe and the first humans
to arrive in the Americas. Because these researchers have no written documents or oral traditions to help interpret the sites
they examine and the artifacts they recover, the archaeological record provides the primary source of information for their
interpretations of the past. Historical archaeologists, on the other hand, work with historians in investigating the artifacts
of societies of the more recent past. For example, some historical archaeologists have probed the remains
of plantations in the southern United States to gain an understanding of the lifestyles of enslaved Africans and slave owners

during the nineteenth century. Other archaeologists, called classical archaeologists, conduct research on ancient
civilizations such as in Egypt, Greece, and Rome.

There are many more areas of specialization within archaeology that reflect the geographic area, topic, or time
period on which the archaeologist works. One more contemporary development in the field of archaeology is called
ethnoarchaeology. Ethnoarchaeology is the study of the material artifacts of the past along with the observation of modern
peoples who have knowledge of the use and symbolic meaning of those artifacts. Frances Hayashida has been conducting
ethnoarchaeological research in the coastal areas of Peru on the production and consumption of an ancient maize beer
called chicha, a tradition carried on in modern breweries (2008). Her research combines the study of contemporary chicha
production with the investigation of how prehistoric indigenous peoples supplied the labor, raw materials, and different
technologies used to develop breweries in different areas of coastal Peru. It also involves in-depth observations and
interviews with modern peoples in order to understand what has been retained from the past regarding chicha production.

There are many other fields of archaeology. For example, some specializations in archaeology include industrial
archaeologists, biblical archaeologists, medieval and postmedieval archaeologists, and Islamic archaeologists. Underwater
archaeologists work on a variety of places and time periods throughout the world; they are distinguished from other
archaeologists by the distinctive equipment, methods, and procedures needed to excavate underwater. One interesting new
approach in archaeology employs GIS (Geographic Information Systems), a tool also adopted by geologists and
environmental scientists as well as physical anthropologists. Archaeologists can use GIS linked to satellites
to help locate specific transportation routes used by peoples and their animals in the past, as well as many other patterns
(Tripcevich 2010).

In another novel approach, still other archaeologists have turned their attention to the very recent past. For
example, in 1972, William L. Rathje began a study of modern garbage as an assignment for the students in his
introductory anthropology class. Even he was surprised at the number of people who took an interest in the findings. A
careful study of garbage provides insights about modern society that cannot be ferreted out in any other way. Whereas
questionnaires and interviews depend upon the cooperation and interpretation of respondents, garbage provides an unbiased
physical record of human activity. Rathje’s “garbology project” is still in progress and, combined with information from
respondents, offers a unique look at patterns of waste management, consumption, and alcohol use in contemporary U.S.
society (Rathje and Ritenbaugh 1984).

Linguistic Anthropology

Linguistics, the study of language, has a long history that dovetails with the discipline of philosophy, but is also
one of the integral subfields of anthropology. Linguistic anthropology focuses on the relationship between language and
culture, how language is used within society, and how the human brain acquires and uses language. Linguistic
anthropologists seek to discover the ways in which languages are different from one another, as well as how they are similar
to one another. Two wide-ranging areas of research in linguistic anthropology are structural linguistics and historical
linguistics.

Structural linguistics explores how language works. Structural linguists compare grammatical patterns or other
linguistic elements to learn how contemporary languages mirror and differ from one another. Structural linguistics has also
uncovered some intriguing relationships between language and thought patterns among different groups of people. Do
people who speak different languages with different grammatical structures think and perceive the world differently from
each other? Do native Chinese speakers think or view the world and life experiences differently from native English
speakers? Structural linguists are attempting to answer this type of question.

Linguistic anthropologists also examine the connections between language and social behavior in different cultures.
This specialty is called sociolinguistics. Sociolinguists are interested both in how language is used to define social groups
and in how belonging to particular groups leads to specialized kinds of language use. In Thailand, for example, there are
thirteen forms of the pronoun I. One form is used with equals, other forms come into play with people of higher status, and
some forms are used when males address females (Scupin 1988).

Another area of research that has interested linguistic anthropologists is historical linguistics. Historical linguistics
concentrates on the comparison and classification of different languages to discern the historical links among languages. By
examining and analyzing grammatical structures and sounds of languages, researchers are able to discover rules for how
languages change over time, as well as which languages are related to one another historically. This type of historical
linguistic research is particularly useful in tracing the migration routes of various societies through time, confirming
archaeological and paleoanthropological data gathered independently. For example, through historical linguistic research,
anthropologists have corroborated the Asian origins of many Native American populations.

Cultural Anthropology or Ethnology

Cultural anthropology or ethnology is the subfield of anthropology that examines various contemporary
societies and cultures throughout the world. Cultural anthropologists do research in all parts of the world, from the tropical
rainforests of the Democratic Republic of the Congo and Brazil to the Arctic regions of Canada, from the deserts of the
Middle East to the urban areas of China. Until recently, most cultural anthropologists conducted research on non-Western
or remote cultures in Africa, Asia, the Middle East, Latin America, and the Pacific Islands and on the Native American
populations in the United States. Today, however, many cultural anthropologists have turned to research on their own
cultures in order to gain a better understanding of their institutions and cultural values.

Cultural anthropologists (sometimes the terms sociocultural anthropologist and ethnographer are used
interchangeably with cultural anthropologist) use a unique research strategy in conducting their fieldwork in different
settings. This research strategy is referred to as participant observation because cultural anthropologists learn the language
and culture of the group being studied by participating in the group’s daily activities. Through this intensive participation,
they become deeply familiar with the group and can understand and explain the society and culture of the group as insiders.

The results of the fieldwork of the cultural anthropologist are written up as an ethnography, a description of a
culture within a society. A typical ethnography reports on the environmental setting, economic patterns, social organization,
political system, and religious rituals and beliefs of the society under study. This description of a society is based on what
anthropologists call ethnographic data. The gathering of ethnographic data in a systematic manner is the specific research
goal of the cultural anthropologist. Technically, ethnologist refers to anthropologists who focus on the cross-cultural aspects
of the various ethnographic studies done by the cultural anthropologists. Ethnologists analyze the data that are produced by
the individual ethnographic studies to produce cross-cultural generalizations about humanity and cultures.

Applied Anthropology

The four subfields of anthropology (physical anthropology, archaeology, linguistic anthropology, and cultural
anthropology) are well established. However, some scholars recognize a fifth subdiscipline. Applied anthropology is the
use of anthropological data from the other subfields to address modern problems and concerns. These problems may be
environmental, technological, economic, social, political, or cultural. Anthropologists have played an increasing role in the
development of government policies and legislation, the planning of development projects, and the implementation of
marketing strategies. Although anthropologists are typically trained in one of the major subfields, an increasing number are
finding employment outside of universities and museums. Although many anthropologists see at least some aspects of their
work as applied, it is the application of anthropological data that is the central part of some researchers' careers. Indeed,
approximately half of the people with doctorates in anthropology currently find careers outside of academic institutions.

Each of the four major subfields of anthropology has applied aspects. Physical anthropologists, for example,
sometimes play a crucial role in police investigations, using their knowledge of the human body to reconstruct the
appearance of murder victims on the basis of fragmentary skeletal remains or helping police determine the mechanisms of
death. Archaeologists deal with the impact of development on the archaeological record, working to document or preserve
archaeological sites threatened by the construction of housing, roads, and dam projects. Some linguistic anthropologists
work with government agencies and indigenous peoples to document disappearing languages or work in business to help
develop marketing strategies.

The Evolution of Life

Modern scientific findings indicate that the universe as we know it began to develop between 10 billion
and 20 billion years ago. Approximately 4.6 billion years ago, the Sun and the Earth developed, and about a
billion years later, the first forms of life appeared in the sea. Through the process of natural selection, living
forms that developed adaptive characteristics survived and reproduced. Geological forces and environmental
alterations brought about both gradual and rapid changes, leading to the evolution of new forms of life. Plants,
fish, amphibians, reptiles, and eventually mammals evolved over millions of years of environmental change.

About 67 million years ago, a family of mammals known as primates—a diverse group sharing
similarities such as increased brain size, stereoscopic vision, grasping hands and feet, longer periods of offspring
dependence on their mothers, a complex social life, and enhanced learning abilities—first appeared in the fossil
record. Early primates include ancestors of modern prosimians, such as lemurs, tarsiers, and lorises. Later
primates that appeared in the fossil record include anthropoids, such as monkeys, apes, and humans, which share
a common ancestor and some fundamental similarities with one another. We can trace the
striking similarities among primates to a series of shared evolutionary relationships. Many people hold a common
misconception about human evolution—the mistaken belief that humans descended from modern apes such as
the gorilla and chimpanzee. This is a highly inaccurate interpretation of both Charles Darwin’s thesis and
contemporary scientific theories of human evolution that suggest that millions of years ago some animals
developed certain characteristics through evolutionary processes that made them precursors of later primates,
including humans. Darwin posited that humans share a common ancestor (now extinct) with living apes, but
evolved along lines completely different from modern gorillas and chimpanzees.

Recently, paleontologists discovered significant fossils in Spain of a primate that has been classified as
the “missing link” or common ancestor between the various ape species of gorillas, chimpanzees, orangutans,
and humans. This creature, named Pierolapithecus catalaunicus, has physical traits that connect it with early
apes and early hominids or ancestors of the human lineage. Pierolapithecus catalaunicus had a very flat face
with nostrils that are in almost the same plane as its eye sockets. Its face would resemble that of a modern
gorilla today. Paleoanthropologists believe that this creature existed in Africa and Europe during the Miocene
epoch, about 13 million years ago (Moyà-Solà et al. 2004).

Human Evolution

It is the possession of culture that distinguishes humans from all other animal species. In all other animal species,
except for primates, social behavior and communication are determined primarily by instinct and are essentially uniform
throughout each species. Though it was originally thought that only humans possessed culture, recent research has revealed
that some primates exhibit behavior that seems to resemble culture. Gombe chimpanzees in Tanzania like to eat termites.
During the termite season, they spend a long time at termite mounds, and carry “termite-fishing wands,” grasses, vines, or
twigs, which are inserted into the termite mounds to extract the termites. Chimpanzees in the Tai area of West Africa use
stone and wood hammers and anvils to crack open nuts. McGrew described thirty-four different populations of chimpanzees
that had been observed making and using different tools. They use the same tool to solve different problems, and different
tools to solve the same problem; hence, they have what can be described as a tool kit (McGrew 1993). This behavior is
transmitted intergenerationally and would seem to be proto-cultural behavior.

The cultural behavior of humans, Homo sapiens, is not only learned and transmitted from one generation to the
next, but is also based on language and the capacity to create symbols, in contrast to what we have described above for other
primates. Human cultural behavior is not limited, as is chimpanzee learned behavior, but is infinitely expandable. Ape-
human comparisons, as Tattersall notes, only provide a background for understanding the way in which human mental
capacities for culture evolve (1998: 49).

The evolution of the human species from proto-human and early human forms involved a number of significant
physical changes, including the development of bipedal erect locomotion, increase in brain size, and especially neurological
reorganization. Fossil evidence, comparisons of molecular, genetic, and DNA evidence from contemporary forms, and
analyses of ancient DNA and archaeological remains all provide information about the nature of the evolutionary tree
leading up to modern humans. Earlier hominid forms, most of which belong to the genus Australopithecus, emerged about
4.2 million years ago; they were small, lightly built, and upright, but with small brains.
About 2.5 million years ago, the oldest recognized stone tools, Oldowan, were manufactured from pebbles, and seem to be
associated with Australopithecus. The first Homo species was Homo erectus, “which appeared around 1.9 MYA (million
years ago) in Africa, and exhibited a height and weight similar to modern humans, but with a smaller brain. H. erectus,
associated with Acheulean technology in some places, spread rapidly over much of the old world including Georgia,
Indonesia, and China” (Jobling et al. 2004). There are two theories about the emergence of Homo sapiens, anatomically
modern humans. One sees the development of this species in Africa probably from 130 to 180 thousand years ago, with
migration later to other parts of the world, 50,000 or 60,000 years ago. In contrast, the “multiregional evolutionary model”
sees the evolution of Homo sapiens from Homo erectus forms as occurring in different regions of the world with genetic
interchange between populations in continued contact and natural selection operating as factors in the transformation.
Neanderthal forms are usually seen as a distinct and separate species, Homo neanderthalensis, though they seem to have
continued to exist for a time after the development of Homo sapiens. The DNA that has been extracted from Neanderthal
bones is distinct from that of present-day humans. Their exact relationship to Homo sapiens has been the subject of much
debate.

Language is the vehicle for cultural expression, hence its origin has been the subject of much interest. Some see
bipedalism as setting the stage for the eventual development of vocal language. Recently a more detailed theory has been
propounded that views bipedalism as giving rise to body language and visual gesture, which are seen not only as the
dominant features of human interaction, but as the primary means of communication for our early hominid ancestors (Turner
2000). With the use of visually based language (gestures), the brain expanded, and this resulted in a pre-adaptation for verbal
language. Verbal language could only appear after the anatomical features necessary for its production were in place. The
vocal tract of Homo erectus, the hominid from which Homo sapiens is descended, was not yet organized in the form
necessary for vocal communication.

Although Neanderthals had larger brains and some of the same features that made human language possible, since
the pharynx was still not in the same place as in Homo sapiens and other parts of the vocal tract were different, they could
not produce vowels and were not considered capable of producing fully human language (Lieberman and McCarthy 1999).
A newer hypothesis has now emerged, which holds that changes in facial characteristics, vocal tract, and breathing
apparatus from Homo erectus to Neanderthal would have enabled the latter to speak (Buckley and Steele 2002). In terms of
the latter theory, the evolution of language must have occurred between the time of Homo erectus (1.9 million years) and
that of Neanderthal.

It is clear that language and the use, creation, and manipulation of symbols, which are central to culture, evolved as
did brain size and tool use. However, at this point there is no definitive information about the way in which language
evolved, since there are no “linguistic fossils” representing intermediate forms of language, which would be equivalent to
the tools from the Paleolithic period. Art and music, which employ symbols, make their first appearance in the Upper
Paleolithic with the people called Cro-Magnon, who were Homo sapiens like us.

The marked development of cerebral asymmetry noted above is connected to right- and left-handedness. The earliest
stone tools were made by right-handed individuals (Tattersall 1998: 76). The increase in sophistication and complexity of
the tools manufactured by early human beings occurred with expansion in brain size and intelligence. The early
archaeological record shows the widespread geographical distribution of the same pattern or style of tool type. This indicates
the presence of the features that characterize culture.

Recent research has pointed to another significant development in the evolution of culture—cooking. Many animal
species eat raw meat, but none cooks its food. In contrast, no human society relies on raw meat for a significant part of its
diet; all cook their food. Cooking transformed vegetation into “food,” making it much more digestible. In examining the
archeological traces of cooking, Wrangham et al. see vegetable food, particularly tubers, which they argue
were collected and brought back by females, as the basis for cooking. They assume that females also did the cooking (1999).

Once cooked, such food became a valuable resource, which had to be protected by males, since it was easily subject to
marauding and theft. These features, together with the formation of an extended period of female sexual receptivity,
probably led to strong male-female bonds, a pattern not found among nonhuman primates. These researchers argued that
the important transformations that resulted from cooking occurred when the first hominids or humans appeared some 1.9
million years ago, before the appearance of big-game hunters.

Later forms of hominids became efficient hunters of large game animals as well. Meat, too, is more digestible when
cooked. The cooked tuber hypothesis is downplayed by later researchers who claim that “increased meat-eating was
influential in the early Homo clade . . . [and there is] abundant documented evidence of carcass acquisition, transport,
butchery, and increased meat-eating by early Homo” (Bunn and Stanford 2001). Early humans like Homo erectus show the
important fossil changes that one would expect to be associated with eating cooked food. The size of their molars, used to
grind food, is much reduced. Cooking of food, of course, presupposes the control of fire. However, the search for definite
proof of controlled use of fire by humans presents many problems. It is difficult to distinguish between naturally occurring
fires and controlled use for cooking or warming. However, many of the sites where Homo erectus was found show evidence
of fire and are presumed to be occupation sites. According to Goren-Inbar, “fire making probably started more than 1 million
years ago among groups of Homo erectus in Africa and Asia” (2005). The Homo erectus site at Zhoukoudian shows definite
evidence of fire. The ability to use fire and cooking are universally found in all human cultures. Claude Lévi-Strauss (1990)
sees cooking as a defining feature of humanity. He has pointed out that the ideas about the discovery of cooking are present
in human myths throughout the world.

Still another feature distinguishes human cultural behavior from animal behavior. Human behavior is governed
primarily by cultural rules, not by the need for immediate gratification. The capacity to defer gratification was increasingly
built into human physiology as humans evolved. Lions and wolves eat immediately after a successful hunt, often gorging
on raw meat. Human beings do not eat the minute they become hungry. With the introduction of cooking, humans deferred
eating until long after the hunt, until cooking was completed. Everything about human eating is controlled by rules. Sex is
similarly subject to cultural rules. Unlike other animals, humans do not have a period of estrus that confines sexual
intercourse to one particular time. Instead, human beings usually follow their culture’s set of rules as to when and where to
have sex and the various positions to use.

CULTURAL UNIVERSALS

The biological nature of the human species requires that all cultures solve the basic problems of human existence such as
providing themselves with food and reproducing. As a consequence, though cultural differences do exist, all cultures share
certain fundamental similarities, which are referred to as cultural universals. Though languages differ, they are all
characterized by certain universal features, such as the presence of nouns, possessive forms, and verbs that distinguish
between the past, present, and future. Chomsky’s theory of universal grammar postulates that infants have an innate
cognitive structure that enables them to learn the grammatical complexities of any language. Though languages are different
from one another, they all have these universal features. Human consumption of food follows cultural rules regarding what
is eaten, when, with whom, and how—with which utensils to eat, and whether with the right hand or the left. All cultures have
some kind of incest taboo, though the relatives with whom they must not have sexual intercourse vary. Rites of passage,
such as birth, reaching adulthood, marriage, and death, are celebrated ceremonially by societies, though not all of them
celebrate each of these rites of passage. Some anthropologists have pointed out that all cultures have law, government,
religion, conceptions of self, marriage, family, and kinship (Brown 1991, Kluckhohn 1953). These universal cultural
categories are present in all human societies since each must deal with the problems and concerns that all humans face
(Goodenough 1970). Ultimately, it is the characteristics of the human species and the human mind that form the basis for
cultural universals. Languages and cultures are structured in a particular manner as a consequence of the fact that the mind
of Homo sapiens is organized in a certain way.

CULTURAL RULES

Cultural rules dictate the way in which basic biological drives are expressed. What is learned and internalized by
human infants during the process of enculturation in different cultures are cultural rules. The enormous variations between
cultures are due to differences in cultural rules. Defining these cultural rules is like trying to identify the rules that govern a
language. All languages operate according to sets of rules, and people follow these in their speech. It is the linguist’s job to
determine the rules of grammar that the speakers of languages use automatically and are usually not aware of. Frequently,
people can tell the anthropologist what the cultural rules are. At other times, they may behave according to rules that they
themselves cannot verbalize. The anthropologist’s job is to uncover those cultural rules of which people may be unaware.
The existence of rules does not imply that speakers of a language or members of a culture are robots who speak and act in
identical fashion. Each infant learns cultural rules in a distinctive manner, and every speaker of a language has his or her
distinctive pronunciation and linguistic mannerisms. Individual variation is considerable in spoken language, and it is
equally present in cultural practice. Rules are meant to be flouted, and often individuals respond to rules that way. Lastly,
individuals are not simply recipients of culture; they are active participants in reworking their cultures and their traditions.
As a consequence, there is variation in observing the rules.

Rules governing sexual behavior in terms of with whom it is allowed, as well as when, where, and how, are highly
variable. For example, when Powdermaker studied the village of Lesu, in Papua New Guinea, it was acceptable for sexual
intercourse to take place before marriage (1933). The marriage relationship was symbolized by eating together. When a
couple publicly shared a meal, this signified that they were married and could henceforth eat only with one another. Even
though husband and wife could have sexual relations with other individuals, they could not eat with them. In our society, in
contrast, until the beginning of the sexual revolution about 60 years ago, couples engaged to be married could eat together,
but sexual intercourse could not take place until after marriage. The act of sexual intercourse symbolized marriage. At that
time, if either spouse had intercourse with another individual after marriage, that constituted the criminal act of adultery.
However, either spouse could have dinner with someone of the opposite sex. From the perspective of someone in our society,
the rules governing marriage in Lesu appear to be like our rules from 60 years ago “stood on their heads.”

We noted earlier that workers and bosses have differing cultural perspectives. Their repertoire of cultural rules
likewise may vary. Similarly, subcultures also exhibit variability in their cultural rules. This is referred to as intracultural
variation.

On occasion, as we noted above, individuals may violate cultural rules. All cultures have some provision for
sanctioning the violation of cultural rules as well as rewards for obeying them. In the same way that the sets of cultural rules
differ, both rewards and punishments also differ from one culture to another. Cultural rules also change over time. When
many individuals consistently interpret a rule differently than it had been interpreted before, the result will be a change in
the rule itself. An example of this sort is the fact that sexual intercourse in our society is no longer solely a symbol of
marriage, as we have noted.

SOCIETY

Another concept paralleling culture is that of society. Culture deals with meanings and symbolic patterning, while
society has been used to deal with the organization of social relationships within groups. Culture is distinctive of humans
alone, although there are some primates that have what we have characterized as proto-culture. However, all animals that
live in groups, humans among them, can be said to have societies. Thus a beehive, a wolf pack, a deer herd, and a baboon
troop all constitute societies. As in a human society, the individual members of a wolf pack are differentiated as males and
females, as immature individuals and adults, and as mothers, fathers, and offspring. Individual wolves in each of these social
categories behave in particular ways. That there are resemblances between wolf and human societies should not be
surprising, since both wolves and humans are social animals. Today, there are no absolutely bounded social entities of the
type that were labelled societies in the past. Nation-states that are independent political entities are connected to other nation-
states. Many nation-states are multiethnic, containing groups with somewhat different cultural repertoires. Though
anthropologists might begin their research with such groups as if they were separate entities like societies, in the final
analysis, their social and cultural connection to other such groups and to the nation-state must be considered. These groups
share some cultural ideas in common as part of the nation-state, though still other ideas are contested.

SOCIAL STATUS and ROLE

In societies or social groups, individuals usually occupy more than one position or social status at the same time.
An individual may be a father and a chief at the same time. Societies, of course, vary in the number and kinds of social
statuses. The behavior associated with a particular social status in a society is known as a social role. Social roles involve
behavior toward other people. For example, in Papua New Guinea, a headman will lead his followers to attend a ceremony
sponsored by another headman and his followers. When the headman orates on such an occasion, he speaks for his group,
and he is carrying out the social role of headman. Interaction of people in their social roles and interaction between groups
define two kinds of social relationships. These social relationships can be analyzed in terms of differentials in power,
prestige, and access to resources. The headman has more power, prestige, and resources than his followers. Inequality
characterizes many social roles, so that a father has power over his children, a manager has power over workers, and a
sergeant has power over his squad. The social structure contains a network of social roles, that is, the behavior associated
with a particular position or status, and a distribution of power through that network.

Scupin, R., et al. (2012). Anthropology: A Global Perspective (7th ed.). Pearson.

Psychological Perspectives

Sigmund Freud’s Psychoanalysis

Psychoanalytic personality theory is based on the writings of the Austrian physician Sigmund
Freud. Developed in the late Victorian period, Freud’s ideas were quite radical in their time. Freud created two
basic models of the workings of the human mind. The first emphasized levels of consciousness and was known
as the topographic model. The second model approached human personality by exploring the interaction
between the three parts of the mind Freud identified (the well-known id, ego, and superego). This was known
as the structural model. Later these were combined so that personality was conceptualized as resulting from
the dynamic interplay between levels of consciousness and particular structures.

In Freud’s thinking, the id represents our basic primitive drives, principally sexual and aggressive in
nature. The ego is that aspect of personality which is capable of reason and self-control and helps the individual
to adapt to the demands of the external world. In order to do this, the ego must gain control of id desires and
channel them in socially acceptable ways. Left to its own devices, the id would be seeking immediate
gratification of the drives for pleasure and aggression that Freud believed were the basic motivations for human
beings on this level. So, the ego must step in and guide our behavior in a realistic manner in order to find ways
of satisfying the demands of the id without causing social difficulty for the person. The third structure of the
mind, the superego, develops out of this struggle and helps guide our behavior according to the norms of our
culture. The three mental structures must work in some degree of harmonious balance for a person to be
functioning in a healthy manner, i.e. satisfying their basic pleasure drive in accord with reality, and in a socially
acceptable manner. In terms of levels of consciousness, the ego lies in the domain of the conscious and
preconscious levels of awareness, the superego can be conscious, preconscious, or unconscious, and the id is
unconscious. Freud compared the levels of mental functioning to an iceberg with the smallest part (the
conscious mind) above the water line and the rest below it.

Psychosexual Stages of Development

Freud proposed that psychological development in childhood takes place during five psychosexual stages:
oral, anal, phallic, latency, and genital. These are called psychosexual stages because each stage represents the
fixation of libido (roughly translated as sexual drives or instincts) on a different area of the body. As a person
grows physically, certain areas of their body become important as potential sources of frustration, pleasure, or
both (erogenous zones).

Freud (1905) believed that life was built round tension and pleasure. Freud also believed that all tension
was due to the build-up of libido (sexual energy) and that all pleasure came from its discharge.

In describing human personality development as psychosexual, Freud meant to convey that what
develops is the way in which sexual energy of the id accumulates and is discharged as we mature biologically.
(NB Freud used the term 'sexual' in a very general way to mean all pleasurable actions and thoughts).

Freud stressed that the first five years of life are crucial to the formation of adult personality. The id must
be controlled in order to satisfy social demands; this sets up a conflict between frustrated wishes and social
norms.
The ego and superego develop in order to exercise this control and direct the need for gratification into
socially acceptable channels. Gratification centers in different areas of the body at different stages of growth,
making the conflict at each stage psychosexual.

Oral Stage (Birth to 1 year)
In the first stage of personality development, the libido is centered in a baby's mouth. It gets much
satisfaction from putting all sorts of things in its mouth to satisfy the libido, and thus its id demands, which at
this stage of life are oral, or mouth-oriented, such as sucking, biting, and breastfeeding.

Freud said oral stimulation could lead to an oral fixation in later life. We see oral personalities all around
us such as smokers, nail-biters, finger-chewers, and thumb suckers. Oral personalities engage in such oral
behaviors, particularly when under stress.

Anal Stage (1 to 3 years)


The libido now becomes focused on the anus, and the child derives great pleasure from defecating. The
child is now fully aware that they are a person in their own right and that their wishes can bring them into
conflict with the demands of the outside world (i.e., their ego has developed).

Freud believed that this type of conflict tends to come to a head in potty training, in which adults impose
restrictions on when and where the child can defecate. The nature of this first conflict with authority can
determine the child's future relationship with all forms of authority.

Early or harsh potty training can lead to the child becoming an anal-retentive personality who hates
mess, is obsessively tidy, punctual and respectful of authority. They can be stubborn and tight-fisted with their
cash and possessions.

This is all related to the pleasure got from holding on to their faeces as toddlers, and their mum then
insisting that they get rid of it by placing them on the potty until they perform! It is not as daft as it sounds.

The anal-expulsive personality, on the other hand, underwent a liberal toilet-training regime
during the anal stage.

In adulthood, the anal expulsive is the person who wants to share things with you. They like giving
things away. In essence, they are 'sharing their s**t'! An anal-expulsive personality is also messy, disorganized,
and rebellious.

Phallic Stage (3 to 6 years)


Sensitivity now becomes concentrated in the genitals and masturbation (in both sexes) becomes a new
source of pleasure.

The child becomes aware of anatomical sex differences, which sets in motion the conflict between erotic
attraction, resentment, rivalry, jealousy and fear which Freud called the Oedipus complex (in boys) and the
Electra complex (in girls).

This is resolved through the process of identification, which involves the child adopting the characteristics
of the same sex parent.

In the young boy, the Oedipus complex, or more correctly conflict, arises because the boy develops
sexual (pleasurable) desires for his mother. He wants to possess his mother exclusively and get rid of his father
to enable him to do so.

Irrationally, the boy thinks that if his father were to find out about all this, his father would take away
what he loves the most. During the phallic stage what the boy loves most is his penis. Hence the boy develops
castration anxiety.

The little boy then sets out to resolve this problem by imitating, copying and joining in masculine dad-
type behaviors. This is called identification, and is how the three-to-five year old boy resolves his Oedipus
complex.

Identification means internally adopting the values, attitudes, and behaviors of another person. The
consequence of this is that the boy takes on the male gender role, and adopts an ego ideal and values that
become the superego.

For girls, the Oedipus or Electra complex is less than satisfactory. Briefly, the girl desires the father, but
realizes that she does not have a penis. This leads to the development of penis envy and the wish to be a
boy.

The girl resolves this by repressing her desire for her father and substituting the wish for a penis with
the wish for a baby. The girl blames her mother for her 'castrated state,' and this creates great tension.

The girl then represses her feelings (to remove the tension) and identifies with the mother to take on
the female gender role.

Latency Stage (6 years to puberty)


No further psychosexual development takes place during this stage (latent means hidden). The libido is
dormant.

Freud thought that most sexual impulses are repressed during the latent stage, and sexual energy can
be sublimated towards school work, hobbies, and friendships.

Much of the child's energy is channeled into developing new skills and acquiring new knowledge, and
play becomes largely confined to other children of the same gender.

Genital Stage (puberty to adult)


This is the last stage of Freud's psychosexual theory of personality development and begins in puberty. It
is a time of adolescent sexual experimentation, the successful resolution of which is settling down in a loving
one-to-one relationship with another person in one's 20s.

Sexual instinct is directed to heterosexual pleasure, rather than self-pleasure as during the phallic stage.
For Freud, the proper outlet of the sexual instinct in adults was through heterosexual intercourse. Fixation and
conflict may prevent this, with the consequence that sexual perversions may develop.

For example, fixation at the oral stage may result in a person gaining sexual pleasure primarily from
kissing and oral sex, rather than sexual intercourse.

https://www.simplypsychology.org/psychosexual.html

Defense Mechanisms

Defense mechanisms are psychological strategies that are unconsciously used to protect a person from
anxiety arising from unacceptable thoughts or feelings.

We use defense mechanisms to protect ourselves from feelings of anxiety or guilt, which arise because
we feel threatened, or because our id or superego becomes too demanding.

Defense mechanisms operate at an unconscious level and help ward off unpleasant feelings (i.e., anxiety)
or make good things feel better for the individual.

Ego-defense mechanisms are natural and normal. When they get out of proportion (i.e., used too
frequently), neuroses develop, such as anxiety states, phobias, obsessions, or hysteria.

Some defense mechanisms:

1. COMPENSATION
Compensation is the process of masking perceived negative self-concepts by developing positive self-
concepts to make up for and cover them.

For example, if you think you are an idiot, then you may work at becoming physically more fit than
others, compensating for the perceived shortcoming in another area of human activity. The
reasoning is that by having a good self-concept about being physically fit, you can then ignore, cover, or even
negate your negative self-concept about your reasoning capability.

2. DENIAL
Denial is the subconscious or conscious process of blinding yourself to negative self-concepts that you
believe exist in you, but that you do not want to deal with or face. It is “closing your eyes” to your negative self-
concepts about people, places, or things that you find too severe to admit or deal with.

For example, a family may pretend and act as if their father is only sick or having a hard time when it is
evident to everyone that he is an abusive alcoholic. The negative self-concept for each family member comes
from identifying with the father because he is a part of the family; the father cannot be viewed as a negative
image, or everyone else in the family will be considered to be that negative image, too.

3. DISPLACEMENT
Displacement is when you express feelings to a substitute target because you are unwilling to express
them to the real target. The feelings expressed to the substitute target are based on your negative self-concepts
about the real target and yourself in relation to the real target. That is, you think poorly of someone and yourself
in relation to them.

“Crooked anger” or “dumping” on another are examples of displacement. In such examples, you let out
your anger and frustration about the negative self-concepts you are feeling about someone else and yourself in
relation to them onto a safer target. The safer target can be someone below you in rank or position, someone
dependent upon you for financial support, or someone under your power and control.

Generally, alternate targets are targets that cannot object or fight back as opposed to actual targets that
might object and fight back. For example, the father comes home from work angry at his boss, so he verbally
abuses his wife and children. This process is often seen in bureaucracies: abuse and blame are passed down
the ladder.

4. IDENTIFICATION
Identification as a defense mechanism is the identification of yourself with causes, groups, heroes,
leaders, movie stars, organizations, religions, sports stars, or whatever you perceive as being good self-concepts
or self-images. This identification is a way to think of yourself as good self-concepts or images.

For example, you may identify with a crusade to help hungry children so that you can incorporate into
your ego some of the good self-images associated with that crusade. Worldwide sports prey upon this defense
mechanism to make money. Countries also prey upon this defense mechanism to make war by using
identification with the government to enlist cannon fodder, a.k.a. soldiers.
5. INTROJECTION
Introjection is the acceptance of the standards of others to avoid being rated as negative self-concepts
by their standards. For example, you may uncritically accept the standards of your government or religion to be
accepted as good self-concepts by them.

Introjection can be considered as the extreme case of conformity because introjection involves
conforming your beliefs as well as your behaviors. So-called educational systems prey upon this defense
mechanism to produce parrots to spread their dogmas as if they were factual and superior.

6. PROJECTION
Projection is the attribution to others of your negative self-concepts. This projection occurs when you
want to avoid facing negative self-concepts about your behaviors or intentions, and you do so by seeing them,
in other people, instead.

For example, you are mad at your spouse and subconsciously damning them, but you instead think or
claim that they are mad at you and damning you in their mind. Alternatively, you may believe that you are
inferior and therefore attack another race, ethnicity, or belief system, claiming it is inferior.

7. RATIONALIZATION
Rationalization is the process of explaining why, this time, you do not have to be judged as negative self-
concepts because of your behaviors or intentions. That is, you justify and excuse your misdeeds or mistakes
with reasons that are circumstantial at best and unfounded at worst.

Rationalization is sometimes referred to as the “sour grapes” response when, for example, you rationalize
that you do not want something that you did not get because “It was lousy, anyway.” Rationalization can also
take the opposite tack or what is sometimes referred to as the “sweet lemon” response. In this case, you justify,
for example, an error in purchasing by extolling some of the insignificant good points of the product.
People commonly excuse their poor behavior as being due to poor circumstances but hold other people
accountable for their poor behavior as being due to their poor character.

8. REACTION FORMATION
Reaction formation is the process of developing conscious positive self-concepts to cover and hide
opposite, negative self-concepts. It is the making up for negative self-concepts by showing off their reverse.

For example, you may hate your parents; but, instead of showing that, you go out of your way to show
care and concern for them so that you can be judged to be a loving child. Another typical example is someone
with a speech impediment going to school to become a public announcer, to convince themselves through
others that they are a good speaker. Another example is the concrete thinker joining a group for abstract thinkers
(for example General Semantics) and pretending they understand the abstractions by memorizing and defending
definitions held by the group.

9. REGRESSION
Regression is the returning to an earlier time in your life when you were not so threatened with becoming
negative self-concepts. You return to thoughts, feelings, and behaviors of an earlier developmental stage to
identify yourself as you used to be back then.

For example, you may be being criticized as an adult and feeling horrible about it. To escape this, you
revert to acting like a little child, because back then you did not own criticism as defining you as negative
self-concepts; others mostly thought of you as good images.

10. REPRESSION
Repression is the unconscious and seemingly involuntary removal from awareness of the negative self-
concepts that your ego finds too painful to tolerate. For example, you may completely block out thoughts that
you have of wanting to kill one of your parents.

Repression is not the same as suppression, which is the conscious removal from consciousness of
intolerable negative self-concepts. Unconsciousness was Freud’s renaming of the spiritual concept of internal
darkness. Repression is a choice, but one that we choose to remain unaware of as part of the defense
itself.

11. RITUAL AND UNDOING
Ritual and undoing as a defense mechanism is the process of trying to undo negative self-concept ratings
of yourself by performing rituals or behaviors designed to offset the behaviors that the negative evaluations of
you were based on.

For example, a millionaire might give to charities for the poor to make up for profiting from the poor.
Alternatively, a parent might buy his or her children many gifts to make up for not spending time with them.
Classically, a person may wash his or her hands many times in order not to think of themselves as “dirty” like
their mother used to call them.

12. SUBLIMATION
Sublimation is the process of diverting your feelings about the negative self-concepts that you have of
yourself or others into more socially acceptable activities.

For example, if you generally hate people, then you might be an aggressive environmental activist, an
aggressive political activist, or join a fighting army. This way, you can get some approval for the feelings that
you disapprove of. As another example, the criminally minded often become police as a way to think well of
their meanness and attitudes of being entitled to take advantage of and abuse others.

https://kevinfitzmaurice.com/self-esteem/self-esteem-issues/sigmund-freud-the-12-defense-mechanisms/

ERIK ERIKSON’S PSYCHOSOCIAL THEORY

Erik Erikson's theory of psychosocial development is one of the best-known theories of personality in
psychology. Much like Sigmund Freud, Erikson believed that personality develops in a series of stages. Unlike
Freud's theory of psychosexual stages, Erikson's theory describes the impact of social experience across the
whole lifespan.

In each stage, Erikson believed people experience a conflict that serves as a turning point in
development. In Erikson's view, these conflicts are centered on either developing a psychological quality or
failing to develop that quality. During these times, the potential for personal growth is high, but so is the potential
for failure.

According to the theory, successful completion of each stage results in a healthy personality and the
acquisition of basic virtues. Basic virtues are characteristic strengths which the ego can use to resolve subsequent
crises.

Failure to successfully complete a stage can result in a reduced ability to complete further stages and
therefore a more unhealthy personality and sense of self. These stages, however, can be resolved successfully
at a later time.

Trust vs. Mistrust

Trust vs. mistrust is the first stage in Erik Erikson's theory of psychosocial development. This stage begins
at birth and continues to approximately 18 months of age. During this stage, the infant is uncertain about the
world in which they live, and looks towards their primary caregiver for stability and consistency of care.

If the care the infant receives is consistent, predictable and reliable, they will develop a sense of trust
which they will carry with them to other relationships, and they will be able to feel secure even when threatened.

If the care has been inconsistent, unpredictable and unreliable, then the infant may develop a sense of
mistrust, suspicion, and anxiety. In this situation the infant will not have confidence in the world around them
or in their abilities to influence events.

Success in this stage will lead to the virtue of hope. By developing a sense of trust, the infant can have
hope that as new crises arise, there is a real possibility that other people will be there as a source of support.
Failing to acquire the virtue of hope will lead to the development of fear.

This infant will carry the basic sense of mistrust with them to other relationships. It may result in anxiety,
heightened insecurities, and an overall feeling of mistrust in the world around them.

Autonomy vs. Shame and Doubt

Autonomy versus shame and doubt is the second stage of Erik Erikson's stages of psychosocial
development. This stage occurs between the ages of 18 months to approximately 3 years. According to Erikson,
children at this stage are focused on developing a sense of personal control over physical skills and a sense of
independence.

Success in this stage will lead to the virtue of will. If children in this stage are encouraged and supported
in their increased independence, they become more confident and secure in their own ability to survive in the
world.

If children are criticized, overly controlled, or not given the opportunity to assert themselves, they begin
to feel inadequate in their ability to survive, and may then become overly dependent upon others, lack self-
esteem, and feel a sense of shame or doubt in their abilities.

Erikson states it is critical that parents allow their children to explore the limits of their abilities within an
encouraging environment which is tolerant of failure. For example, rather than putting on a child's clothes, a
supportive parent should have the patience to allow the child to try until they succeed or ask for assistance. So,
the parents need to encourage the child to become more independent while at the same time protecting the
child so that constant failure is avoided.

A delicate balance is required from the parent. They must try not to do everything for the child, but if
the child fails at a particular task they must not criticize the child for failures and accidents (particularly when
toilet training). The aim has to be “self-control without a loss of self-esteem” (Gross, 1992).

Initiative vs. Guilt

Initiative versus guilt is the third stage of Erik Erikson's theory of psychosocial development. During the
initiative versus guilt stage, children assert themselves more frequently. These are particularly lively, rapid-
developing years in a child’s life. According to Bee (1992), it is a “time of vigor of action and of behaviors that
the parents may see as aggressive."

During this period the primary feature involves the child regularly interacting with other children at school.
Central to this stage is play, as it provides children with the opportunity to explore their interpersonal skills
through initiating activities. Children begin to plan activities, make up games, and initiate activities with others.
If given this opportunity, children develop a sense of initiative and feel secure in their ability to lead others and
make decisions.

Conversely, if this tendency is squelched, either through criticism or control, children develop a sense of
guilt. The child will often overstep the mark in his forcefulness, and the danger is that the parents will tend to
punish the child and restrict his initiatives too much.

It is at this stage that the child will begin to ask many questions as his thirst for knowledge grows. If the
parents treat the child’s questions as trivial, a nuisance or embarrassing or other aspects of their behavior as
threatening then the child may have feelings of guilt for “being a nuisance”.

Too much guilt can make the child slow to interact with others and may inhibit their creativity. Some
guilt is, of course, necessary; otherwise the child would not know how to exercise self-control or have a
conscience.

A healthy balance between initiative and guilt is important. Success in this stage will lead to the virtue of
purpose, while failure results in a sense of guilt.

Industry vs. Inferiority

Erikson's fourth psychosocial crisis, involving industry (competence) versus inferiority, occurs during
childhood between the ages of five and twelve. Children are at the stage where they will be learning to read
and write, to do sums, to do things on their own. Teachers begin to take an important role in the child’s life as
they teach the child specific skills.

It is at this stage that the child’s peer group will gain greater significance and will become a major source
of the child’s self-esteem. The child now feels the need to win approval by demonstrating specific competencies
that are valued by society, and begins to develop a sense of pride in their accomplishments.

If children are encouraged and reinforced for their initiative, they begin to feel industrious (competent)
and feel confident in their ability to achieve goals. If this initiative is not encouraged, if it is restricted by parents
or teacher, then the child begins to feel inferior, doubting his own abilities and therefore may not reach his or
her potential.

If the child cannot develop the specific skill they feel society is demanding (e.g., being athletic) then they
may develop a sense of inferiority.

Some failure may be necessary so that the child can develop some modesty. Again, a balance between
competence and modesty is necessary. Success in this stage will lead to the virtue of competence.

Identity vs. Role Confusion

The fifth stage of Erik Erikson's theory of psychosocial development is identity vs. role confusion, and it
occurs during adolescence, from about 12-18 years. During this stage, adolescents search for a sense of self
and personal identity, through an intense exploration of personal values, beliefs, and goals.

During adolescence, the transition from childhood to adulthood is most important. Children are becoming
more independent, and begin to look at the future in terms of career, relationships, families, housing, etc. The
individual wants to belong to a society and fit in.

The adolescent mind is essentially a mind of moratorium, a psychosocial stage between childhood and
adulthood, and between the morality learned by the child and the ethics to be developed by the adult (Erikson,
1963, p. 245).

This is a major stage of development where the child has to learn the roles he will occupy as an adult.
It is during this stage that the adolescent will re-examine his identity and try to find out exactly who he or she
is. Erikson suggests that two identities are involved: the sexual and the occupational.

According to Bee (1992), what should happen at the end of this stage is “a reintegrated sense of self, of
what one wants to do or be, and of one’s appropriate sex role”. During this stage the body image of the
adolescent changes.

Erikson claims that the adolescent may feel uncomfortable about their body for a while until they can
adapt and “grow into” the changes. Success in this stage will lead to the virtue of fidelity.

Fidelity involves being able to commit one's self to others on the basis of accepting others, even when
there may be ideological differences.

During this period, they explore possibilities and begin to form their own identity based upon the outcome
of their explorations. Failure to establish a sense of identity within society ("I don’t know what I want to be when
I grow up") can lead to role confusion. Role confusion involves the individual not being sure about themselves
or their place in society.

In response to role confusion or identity crisis, an adolescent may begin to experiment with different
lifestyles (e.g., work, education or political activities).

Also, pressuring someone into an identity can result in rebellion in the form of establishing a negative
identity and, in addition to this, a feeling of unhappiness.

Intimacy vs. Isolation

Intimacy versus isolation is the sixth stage of Erik Erikson's theory of psychosocial development. This
stage takes place during young adulthood, between approximately 18 and 40 years of age.

During this period, the major conflict centers on forming intimate, loving relationships with other people.
During this period, we begin to share ourselves more intimately with others. We explore relationships leading
toward longer-term commitments with someone other than a family member.

Successful completion of this stage can result in happy relationships and a sense of commitment, safety,
and care within a relationship.

Avoiding intimacy, fearing commitment and relationships can lead to isolation, loneliness, and sometimes
depression. Success in this stage will lead to the virtue of love.

Generativity vs. Stagnation

Generativity versus stagnation is the seventh of eight stages of Erik Erikson's theory of psychosocial
development. This stage takes place during middle adulthood (ages 40 to 65 yrs).

Generativity refers to "making your mark" on the world through creating or nurturing things that will
outlast an individual.

People experience a need to create or nurture things that will outlast them, often having mentees or
creating positive changes that will benefit other people.

We give back to society through raising our children, being productive at work, and becoming involved
in community activities and organizations. Through generativity we develop a sense of being a part of the bigger
picture.

Success leads to feelings of usefulness and accomplishment, while failure results in shallow involvement
in the world.

By failing to find a way to contribute, we become stagnant and feel unproductive. These individuals may
feel disconnected or uninvolved with their community and with society as a whole. Success in this stage will lead
to the virtue of care.

Ego Integrity vs. Despair

Ego integrity versus despair is the eighth and final stage of Erik Erikson’s stage theory of psychosocial
development. This stage begins at approximately age 65 and ends at death.

It is during this time that we contemplate our accomplishments and can develop integrity if we see
ourselves as leading a successful life.

Erikson described ego integrity as “the acceptance of one’s one and only life cycle as something that had
to be” (1950, p. 268) and later as “a sense of coherence and wholeness” (1982, p. 65).

As we grow older (65+ yrs) and become senior citizens, we tend to slow down our productivity and explore life
as a retired person.

Erik Erikson believed if we see our lives as unproductive, feel guilt about our past, or feel that we did not
accomplish our life goals, we become dissatisfied with life and develop despair, often leading to depression and
hopelessness.

Success in this stage will lead to the virtue of wisdom. Wisdom enables a person to look back on their
life with a sense of closure and completeness, and also accept death without fear.

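As a quick reference, the eight stages described above can be collected into a small lookup sketch. This is illustrative only, not part of Erikson's work; the ages are the approximate ranges given in the text, except the stage-3 range (3-5 yrs), which is the commonly cited one since the text above does not state it explicitly.

```python
# Erikson's eight psychosocial stages, as summarized in the text above.
# Each entry: (conflict, approximate ages, virtue gained on success).
ERIKSON_STAGES = [
    ("Trust vs. Mistrust",           "birth-18 months",  "hope"),
    ("Autonomy vs. Shame and Doubt", "18 months-3 yrs",  "will"),
    ("Initiative vs. Guilt",         "3-5 yrs",          "purpose"),
    ("Industry vs. Inferiority",     "5-12 yrs",         "competence"),
    ("Identity vs. Role Confusion",  "12-18 yrs",        "fidelity"),
    ("Intimacy vs. Isolation",       "18-40 yrs",        "love"),
    ("Generativity vs. Stagnation",  "40-65 yrs",        "care"),
    ("Ego Integrity vs. Despair",    "65 yrs-death",     "wisdom"),
]

def virtue_for(conflict):
    """Look up the basic virtue gained by resolving a given conflict."""
    for name, ages, virtue in ERIKSON_STAGES:
        if name == conflict:
            return virtue
    return None

print(virtue_for("Identity vs. Role Confusion"))  # fidelity
```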
https://www.simplypsychology.org/Erik-Erikson.html

ABRAHAM MASLOW

Maslow's hierarchy of needs is a motivational theory in psychology comprising a five-tier model of human
needs, often depicted as hierarchical levels within a pyramid.

Needs lower down in the hierarchy must be satisfied before individuals can attend to needs higher up.
From the bottom of the hierarchy upwards, the needs are: physiological, safety, love and belonging, esteem and
self-actualization.

Maslow advanced the following propositions about human behavior:


 Man is a wanting being.
 A satisfied need is not a motivator of behavior, only unsatisfied needs motivate.
 Man’s needs are arranged in a series of levels - a hierarchy of importance. As soon as needs on a lower
level are met, those on the next, higher level will demand satisfaction. Maslow believed the underlying
needs for all human motivation to be on five general levels from lowest to highest, shown below.
Within those levels, there could be many specific needs, from lowest to highest.
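Taken together, these propositions amount to a simple rule: the lowest unsatisfied level in the hierarchy is the one that motivates behavior, because a satisfied need no longer motivates. A minimal illustrative sketch of that rule (ours, not Maslow's; the function name is hypothetical):

```python
# Maslow's five levels, ordered from lowest (most prepotent) to highest.
HIERARCHY = [
    "physiological",
    "safety",
    "love and belonging",
    "esteem",
    "self-actualization",
]

def current_motivator(satisfied):
    """Return the lowest level not yet satisfied; per Maslow,
    a satisfied need is not a motivator of behavior."""
    for level in HIERARCHY:
        if level not in satisfied:
            return level
    # Growth needs continue to be felt even once engaged.
    return "self-actualization"

print(current_motivator({"physiological"}))  # safety
```

Note that, as Maslow later clarified, satisfaction is not all-or-none in practice; the boolean "satisfied" set here is a simplification.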

Deficiency needs vs. growth needs

This five-stage model can be divided into deficiency needs and growth needs. The first four levels are
often referred to as deficiency needs (D-needs), and the top level is known as growth or being needs (B-needs).
Deficiency needs arise due to deprivation and are said to motivate people when they are unmet. Also, the
motivation to fulfil such needs will become stronger the longer the duration they are denied. For example, the
longer a person goes without food, the hungrier they will become.

Maslow (1943) initially stated that individuals must satisfy lower level deficit needs before progressing
on to meet higher level growth needs. However, he later clarified that satisfaction of a need is not an “all-or-
none” phenomenon, admitting that his earlier statements may have given “the false impression that a need
must be satisfied 100 percent before the next need emerges” (1987, p. 69).

When a deficit need has been 'more or less' satisfied it will go away, and our activities become habitually
directed towards meeting the next set of needs that we have yet to satisfy. These then become our salient
needs. However, growth needs continue to be felt and may even become stronger once they have been engaged.

Growth needs do not stem from a lack of something, but rather from a desire to grow as a person. Once
these growth needs have been reasonably satisfied, one may be able to reach the highest level called self-
actualization.

Every person is capable and has the desire to move up the hierarchy toward a level of self-actualization.
Unfortunately, progress is often disrupted by a failure to meet lower level needs. Life experiences, including
divorce and loss of a job, may cause an individual to fluctuate between levels of the hierarchy.

Therefore, not everyone will move through the hierarchy in a uni-directional manner but may move back
and forth between the different types of needs.

1. Physiological needs - these are biological requirements for human survival, e.g. air, food, drink, shelter,
clothing, warmth, sex, sleep.

If these needs are not satisfied the human body cannot function optimally. Maslow considered
physiological needs the most important as all the other needs become secondary until these needs are met.
2. Safety needs - protection from elements, security, order, law, stability, freedom from fear.

3. Love and belongingness needs - after physiological and safety needs have been fulfilled, the third level
of human needs is social and involves feelings of belongingness. The need for interpersonal relationships
motivates behavior, examples include friendship, intimacy, trust, and acceptance, receiving and giving
affection and love. Affiliating, being part of a group (family, friends, work).

4. Esteem needs - which Maslow classified into two categories: (i) esteem for oneself (dignity, achievement,
mastery, independence) and (ii) the desire for reputation or respect from others (e.g., status, prestige).
Maslow indicated that the need for respect or reputation is most important for children and adolescents and
precedes real self-esteem or dignity.

5. Self-actualization needs - realizing personal potential, self-fulfilment, seeking personal growth and peak
experiences. A desire “to become everything one is capable of becoming” (Maslow, 1987, p. 64).

The expanded hierarchy of needs

1. Biological and physiological needs - air, food, drink, shelter, warmth, sex, sleep, etc.
2. Safety needs - protection from elements, security, order, law, stability, etc.
3. Love and belongingness needs - friendship, intimacy, trust, and acceptance, receiving and giving affection
and love. Affiliating, being part of a group (family, friends, work).
4. Esteem needs - which Maslow classified into two categories: (i) esteem for oneself (dignity, achievement,
mastery, independence) and (ii) the desire for reputation or respect from others (e.g., status, prestige).
5. Cognitive needs - knowledge and understanding, curiosity, exploration, need for meaning and predictability.
6. Aesthetic needs - appreciation and search for beauty, balance, form, etc.
7. Self-actualization needs - realizing personal potential, self-fulfillment, seeking personal growth and peak
experiences.
8. Transcendence needs - a person is motivated by values which transcend the personal self (e.g., mystical
experiences and certain experiences with nature, aesthetic experiences, sexual experiences, service to others,
the pursuit of science, religious faith, etc.).

Self-actualization

Instead of focusing on psychopathology and what goes wrong with people, Maslow (1943) formulated a
more positive account of human behavior which focused on what goes right. He was interested in human
potential, and how we fulfil that potential.

Psychologist Abraham Maslow (1943, 1954) stated that human motivation is based on people seeking
fulfillment and change through personal growth. Self-actualized people are those who were fulfilled and doing
all they were capable of.

The growth of self-actualization (Maslow, 1962) refers to the need for personal growth and discovery
that is present throughout a person’s life. For Maslow, a person is always 'becoming' and never remains static
in these terms. In self-actualization, a person comes to find a meaning to life that is important to them.

As each individual is unique, the motivation for self-actualization leads people in different directions
(Kenrick et al., 2010). For some people self-actualization can be achieved through creating works of art or
literature, for others through sport, in the classroom, or within a corporate setting.

Maslow (1962) believed self-actualization could be measured through the concept of peak experiences.
This occurs when a person experiences the world totally for what it is, and there are feelings of euphoria, joy,
and wonder.

It is important to note that self-actualization is a continual process of becoming rather than a perfect
state one reaches, a 'happy ever after' (Hoffman, 1988).

Maslow offers the following description of self-actualization:

'It refers to the person’s desire for self-fulfillment, namely, to the tendency for him to become actualized
in what he is potentially.

The specific form that these needs will take will of course vary greatly from person to person. In one
individual it may take the form of the desire to be an ideal mother, in another it may be expressed
athletically, and in still another it may be expressed in painting pictures or in inventions' (Maslow, 1943,
p. 382–383).

Characteristics of self-actualizers:

Although we are all, theoretically, capable of self-actualizing, most of us will not do so, or only to a limited
degree. Maslow (1970) estimated that only two percent of people would reach the state of self-actualization.
He was especially interested in the characteristics of people whom he considered to have achieved their potential
as individuals.

By studying 18 people he considered to be self-actualized (including Abraham Lincoln and Albert Einstein),
Maslow (1970) identified 15 characteristics of a self-actualized person.

1. They perceive reality efficiently and can tolerate uncertainty;
2. Accept themselves and others for what they are;
3. Spontaneous in thought and action;
4. Problem-centered (not self-centered);
5. Unusual sense of humor;
6. Able to look at life objectively;
7. Highly creative;
8. Resistant to enculturation, but not purposely unconventional;
9. Concerned for the welfare of humanity;
10. Capable of deep appreciation of basic life-experience;
11. Establish deep satisfying interpersonal relationships with a few people;
12. Peak experiences;
13. Need for privacy;
14. Democratic attitudes;
15. Strong moral/ethical standards.

Behavior leading to self-actualization:

(a) Experiencing life like a child, with full absorption and concentration;
(b) Trying new things instead of sticking to safe paths;
(c) Listening to your own feelings in evaluating experiences instead of the voice of tradition, authority or the
majority;
(d) Avoiding pretense ('game playing') and being honest;
(e) Being prepared to be unpopular if your views do not coincide with those of the majority;
(f) Taking responsibility and working hard;
(g) Trying to identify your defenses and having the courage to give them up.

The characteristics of self-actualizers and the behaviors leading to self-actualization are shown in the list
above. Although people achieve self-actualization in their own unique way, they tend to share certain
characteristics. However, self-actualization is a matter of degree: 'There are no perfect human beings'
(Maslow, 1970a, p. 176).

It is not necessary to display all 15 characteristics to become self-actualized, and not only self-actualized
people will display them.

Maslow did not equate self-actualization with perfection. Self-actualization merely involves achieving
one's potential. Thus, someone can be silly, wasteful, vain and impolite, and still self-actualize. Less than two
percent of the population achieve self-actualization.

https://www.simplypsychology.org/maslow.html#summary

CARL ROGERS

Carl Rogers (1902-1987) was a humanistic psychologist who agreed with the main assumptions of
Abraham Maslow, but added that for a person to "grow", they need an environment that provides them with
genuineness (openness and self-disclosure), acceptance (being seen with unconditional positive regard), and
empathy (being listened to and understood).

Without these, relationships and healthy personalities will not develop as they should, much like a tree
will not grow without sunlight and water.

Rogers believed that every person could achieve their goals, wishes, and desires in life. When, or rather
if, they did so, self-actualization took place.

This was one of Carl Rogers' most important contributions to psychology: for a person to reach their
potential, a number of factors must be satisfied.

Self-Actualization

"The organism has one basic tendency and striving - to actualize, maintain, and enhance the
experiencing organism” (Rogers, 1951, p. 487).

Rogers rejected the deterministic nature of both psychoanalysis and behaviorism and maintained that
we behave as we do because of the way we perceive our situation. "As no one else can know how we perceive,
we are the best experts on ourselves."

Carl Rogers (1959) believed that humans have one basic motive, that is the tendency to self-actualize -
i.e., to fulfill one's potential and achieve the highest level of 'human-beingness' we can. Like a flower that will
grow to its full potential if the conditions are right, but which is constrained by its environment, so people will
flourish and reach their potential if their environment is good enough.

However, unlike a flower, the potential of the individual human is unique, and we are meant to develop
in different ways according to our personality. Rogers believed that people are inherently good and creative.
They become destructive only when a poor self-concept or external constraints override the valuing process.

Carl Rogers believed that for a person to achieve self-actualization they must be in a state of congruence.
This means that self-actualization occurs when a person’s “ideal self” (i.e., who they would like to be) is
congruent with their actual behavior (self-image).

Rogers describes an individual who is actualizing as a fully functioning person. The main determinant of
whether we will become self-actualized is childhood experience.

The Fully Functioning Person

Rogers believed that every person could achieve their goals. Being fully functioning means that the person
is in touch with the here and now, his or her subjective experiences and feelings, continually growing and changing.

In many ways, Rogers regarded the fully functioning person as an ideal and one that people do not
ultimately achieve. It is wrong to think of this as an end or completion of life’s journey; rather it is a process of
always becoming and changing.

Rogers identified five characteristics of the fully functioning person:

1. Open to experience: both positive and negative emotions accepted. Negative feelings are not denied, but
worked through (rather than resorting to ego defense mechanisms).
2. Existential living: in touch with different experiences as they occur in life, avoiding prejudging and
preconceptions. Being able to live and fully appreciate the present, not always looking back to the past or forward
to the future (i.e., living for the moment).
3. Trust feelings: feelings, instincts, and gut-reactions are paid attention to and trusted. People’s own decisions
are the right ones, and we should trust ourselves to make the right choices.
4. Creativity: creative thinking and risk-taking are features of a person’s life. A person does not play safe all
the time. This involves the ability to adjust and change and seek new experiences.
5. Fulfilled life: a person is happy and satisfied with life, and always looking for new challenges and
experiences.

For Rogers, fully functioning people are well adjusted, well balanced and interesting to know. Often such
people are high achievers in society.

Critics claim that the fully functioning person is a product of Western culture. In other cultures, such as
Eastern cultures, the achievement of the group is valued more highly than the achievement of any one person.

Personality Development

Central to Rogers' personality theory is the notion of self or self-concept. This is defined as "the
organized, consistent set of perceptions and beliefs about oneself."

The self is the humanistic term for who we really are as a person. The self is our inner personality, and
can be likened to the soul, or Freud's psyche. The self is influenced by the experiences a person has in their
life, and our interpretations of those experiences. Two primary sources that influence our self-concept are
childhood experiences and evaluation by others.

According to Rogers (1959), we want to feel, experience and behave in ways which are consistent with
our self-image and which reflect what we would like to be like, our ideal-self. The closer our self-image and
ideal-self are to each other, the more consistent or congruent we are and the higher our sense of self-worth.

A person is said to be in a state of incongruence if some of the totality of their experience is unacceptable to
them and is denied or distorted in the self-image.

The humanistic approach states that the self is composed of concepts unique to ourselves. The self-
concept includes three components:

Self-worth
Self-worth (or self-esteem) comprises what we think about ourselves. Rogers believed feelings of self-
worth developed in early childhood and were formed from the interaction of the child with the mother and father.

Self-image
How we see ourselves, which is important to good psychological health. Self-image includes the influence
of our body image on inner personality.

At a simple level, we might perceive ourselves as a good or bad person, beautiful or ugly. Self-image
affects how a person thinks, feels and behaves in the world.

Ideal-self
This is the person who we would like to be. It consists of our goals and ambitions in life, and is dynamic
– i.e., forever changing.
The ideal self in childhood is not the ideal self in our teens or late twenties etc.

Positive Regard and Self Worth

Carl Rogers (1951) viewed the child as having two basic needs: positive regard from other people and
self-worth. How we think about ourselves, our feelings of self-worth are of fundamental importance both to
psychological health and to the likelihood that we can achieve goals and ambitions in life and achieve self-
actualization.

Self-worth may be seen as a continuum from very high to very low. For Carl Rogers (1959) a person
who has high self-worth, that is, has confidence and positive feelings about him or herself, faces challenges in
life, accepts failure and unhappiness at times, and is open with people.

A person with low self-worth may avoid challenges in life, not accept that life can be painful and unhappy
at times, and will be defensive and guarded with other people.

Rogers believed feelings of self-worth developed in early childhood and were formed from the interaction
of the child with the mother and father. As a child grows older, interactions with significant others will affect
feelings of self-worth.
Rogers believed that we need to be regarded positively by others; we need to feel valued, respected,
treated with affection and loved. Positive regard is to do with how other people evaluate and judge us in social
interaction. Rogers made a distinction between unconditional positive regard and conditional positive regard.

Unconditional Positive Regard


Unconditional positive regard is where parents, significant others (and the humanist therapist) accept
and love the person for who he or she is. Positive regard is not withdrawn if the person does something wrong
or makes a mistake.

The consequence of unconditional positive regard is that the person feels free to try things out and
make mistakes, even though this may lead to getting it wrong at times.
People who are able to self-actualize are more likely to have received unconditional positive regard from others,
especially their parents in childhood.

Conditional Positive Regard


Conditional positive regard is where positive regard, praise, and approval depend upon the child
behaving, for example, in ways that the parents think correct.
Hence the child is not loved for the person he or she is, but on condition that he or she behaves only in ways
approved by the parent(s).
At the extreme, a person who constantly seeks approval from other people is likely only to have
experienced conditional positive regard as a child.

Congruence
A person’s ideal self may not be consistent with what actually happens in life and experiences of the
person. Hence, a difference may exist between a person’s ideal self and actual experience. This is called
incongruence.
Where a person’s ideal self and actual experience are consistent or very similar, a state of congruence
exists. Rarely, if ever, does a total state of congruence exist; all people experience a certain amount of
incongruence.

The development of congruence is dependent on unconditional positive regard. Carl Rogers believed that
for a person to achieve self-actualization they must be in a state of congruence.
According to Rogers, we want to feel, experience and behave in ways which are consistent with our self-
image and which reflect what we would like to be like, our ideal-self.

The closer our self-image and ideal-self are to each other, the more consistent or congruent we are and
the higher our sense of self-worth. A person is said to be in a state of incongruence if some of the totality of
their experience is unacceptable to them and is denied or distorted in the self-image.
Incongruence is "a discrepancy between the actual experience of the organism and the self-picture of the
individual insofar as it represents that experience."

As we prefer to see ourselves in ways that are consistent with our self-image, we may use defense
mechanisms like denial or repression in order to feel less threatened by some of what we consider to be our
undesirable feelings. A person whose self-concept is incongruent with her or his real feelings and experiences
will defend because the truth hurts.

https://www.simplypsychology.org/carl-rogers.html

