Problem Centered Curriculum
ABSTRACT
This exploratory research describes and compares how doubt evolves while
learning about electricity in two different learning contexts: problem-centered
and teaching-centered. It provides descriptions of performance variations
when Secondary 2 to Secondary 4 students begin appropriating basic
electricity knowledge with attitudes of either certainty or uncertainty in the two
aforementioned contexts. A simple classification of every test answer under
five categories (“legitimate certainty”, “legitimate doubt”, “under-estimation”,
“over-estimation” and “do not know”) was used to track how students’ answers
migrated from one category to another, and allowed researchers to monitor how
students’ “certainty vs. doubt” profile evolved in conjunction with variations in
their performance. Results indicate, among other things, that problem-centered
learning approaches seem to be more profitable for students who express at
least a little certainty about their answers, and that teacher-centered approaches
seem to be more appropriate for students who express doubts. Results also suggest that problem-centered approaches better prevent the development of unexpected conceptions.
INTRODUCTION
This research explores how initial certainty and doubt influence knowledge acquisition. This exploration was conducted in two different contexts: a problem-centered learning context and a teaching-centered learning context, enabling descriptions and comparisons. The article first describes the general role of doubt in
the process of knowledge acquisition, and more specifically scientific knowledge acquisition, and then
concentrates on the differences between two fundamentally different ways to learn in school.
Although much enthusiasm about problem-centered approaches can be noted, scant research exists that
can be used to elucidate the implementation of this type of pedagogy in the context of K-12 science
classes (Hmelo-Silver, 2004). Two reasons stand out to explain this state of affairs. First, comparing the
effects of problem-centered learning to other pedagogical approaches is a very difficult task because it
often doesn’t aim to develop the same competencies in learners. Where more traditional approaches to
teaching (such as direct teaching) are not as suitable for the development of high level skills (such as
open problem solving), problem-centered approaches appear to provide a fertile terrain. If the targeted
performances are different in nature, comparison between the two treatments might appear akin to
comparing apples with oranges, or will often lead to a debate about the purpose of education rather
than its effectiveness. Moreover, and quite understandably, K-12 teachers—many of whom work every
day with overcrowded classes and are constantly managing demanding situations (Perrenoud, 1996)—
might be very reluctant to accept research results that cannot clearly demonstrate the superiority of methods that can be more time consuming over those they deem to have yielded “proven” results.
Secondly, research on problem-centered approaches to teaching has not focused primarily on
K-12 science education, but more on post-secondary education. Hmelo-Silver (2004) suggests that
constraints and difficulties typical of K-12 science education (and common across classrooms at this
level) such as classroom organization, subject-domain centered curricula, and instructional periods
of 50 to 75 minutes in duration, are likely to be factors explaining the lack of research in this area.
Therefore, more research is needed to illuminate the effects of pedagogy at these levels.
It is often argued that science is all about doubt and uncertainty (Barrette, 2005; Morin, 1977). Science is therefore often
depicted not as an activity that aims to discover truth, an endeavour which is often philosophically
presented as impossible, but rather as a process that essentially aims to discard inappropriate models
and hypotheses, leaving the most resistant ones standing (Popper, 1995). In this refutational perspective,
doubt and dissatisfaction (Strike, Posner, Hewson, & Gertzog, 1982) are perceived as active agents of
scientific progress, whereas certainty becomes an impediment to such progress.
Nevertheless, it is important to distinguish doubt about one’s abilities or competence from doubt as a
driving and supportive mechanism through which scientific understanding and scientific knowledge can
be developed and acquired. The former can be assessed through standardized testing of self-esteem or
self-worth in science. Such tests have shown, for example, that females usually get lower scores than
males (Acker & Oatley, 1993; CCA/CCL, 2007; Thompson & Dinnel, 2007). It can also be assessed by
allowing students the opportunity to freely express their inability to produce an answer to a question
without penalty of any type. The latter, in which this research is interested, can occur when students
have the opportunity to propose answers while also being given the possibility to express doubt about
these answers. This particular form of doubt was studied by Merenluoto and Lehtinen (2002) in the
field of mathematics education. These researchers concluded that performance and certainty usually go
together, except for the students who perform best. For the latter, they found that certainty was almost
as low as for the weakest performers. This led them to argue for the use of pedagogies that favour a
tolerance for uncertainty. Personal doubt has also been studied by Hasan, Bagayoko and Kelley
(1990), who used a 6-level scale (0 = totally guessed answer, 1 = almost a guess, 2 = not sure, 3 = sure,
4 = almost certain, 5 = certain) called the certainty of response index (CRI), and used it to
measure doubt about expressed answers in order to distinguish between “lack of knowledge” and
“misconceptions”; the former being given by wrong-and-doubted answers, and the latter by wrong-
and-certain answers. Other research efforts in neurology (Burton, 2008; Koriat, 1993; Reder &
Shunn, 1996), and psychology and evaluation (Gilles, 1997; Leclercq, Poumay, & Gagnayre, 2003),
have shown that certainty of knowledge is often very different from evidence of knowledge and that
certainty and doubt can have an important impact on further learning (e.g. René de Cotret and Larose
(2007), who argued for the development of systematic vigilance toward one’s own conceptions).
According to Lee, Kwon, Park, and Kim (2003) (who developed a degree-of-confidence scale), doubt is
also an important part of cognitive conflict, a step oftentimes presented as important in science learning
(Dreyfus, Jungwirth, & Eliovitch, 1990; Hewson & Hewson, 1984; Limon, 2001; Nussbaum & Novick,
1982).
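To make the distinction drawn by Hasan et al. (1990) more concrete, the sketch below (in Python; illustrative only and not taken from the cited studies) shows how a single answer could be diagnosed from its correctness and its CRI value. The cut-off of 2.5 between the lower ("guessed" to "not sure") and upper ("sure" to "certain") halves of the scale is an assumption made here for illustration.

```python
# Illustrative sketch (not the cited authors' procedure): applying the
# distinction between "lack of knowledge" and "misconception" to a single
# answer, using the 6-level certainty of response index (CRI).
# The 2.5 cut-off is an assumption made for this example.

CRI_LABELS = {
    0: "totally guessed answer",
    1: "almost a guess",
    2: "not sure",
    3: "sure",
    4: "almost certain",
    5: "certain",
}

def diagnose(is_correct: bool, cri: int, cutoff: float = 2.5) -> str:
    """Return a diagnostic label for one answer."""
    if cri not in CRI_LABELS:
        raise ValueError(f"CRI must be between 0 and 5, got {cri}")
    if is_correct:
        return "knowledge" if cri > cutoff else "lucky guess or partial knowledge"
    return "misconception" if cri > cutoff else "lack of knowledge"

# A wrong answer given with high certainty points to a misconception,
# while a wrong answer given with low certainty points to a lack of knowledge.
print(diagnose(is_correct=False, cri=5))  # misconception
print(diagnose(is_correct=False, cri=1))  # lack of knowledge
```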
Although these research efforts have mainly concentrated on the diagnostics of what psychologists and
neurologists refer to as the “feeling of knowing” (Burton, 2008; Koriat, 1993; Modirrousta & Fellows,
2008; Schnyder, Verfaellie, Alexander, & Lafleche, 2004) that comes with the production of answers,
none (to our knowledge) have experimentally explored the effect of initial doubt and initial certainty
(i.e. the state preceding a task) on science learning, even if this question has been presented many times
as philosophically very important.
The exploratory nature of the research has led us to consider two perspectives framing this question.
The first focuses on the different categories of initial given answers, while the second focuses on
students’ certainty/doubt profiles, understood here as their general attitude of certainty toward a test, as
indicated by the sum of certainty expressions (see methods section below).
We hope that answers to the aforementioned question will cast new light on what happens when problem-
centered and teaching-centered approaches are used in a classroom context. We hope further that it will help
increase understanding of which students might profit the most from the use of one pedagogical treatment
over the other, and of how these influence the development of certainty/doubt toward knowledge.
METHODS
Instrument
The pre-test (test) and post-test (retest) questionnaire were the same. They included eight multiple-
choice questions (three to eight response choices per question) about simple electricity concepts. We
chose to study learning about electricity because it is a typical topic addressed in Quebec’s provincial
curriculum, where the research was conducted. The questions evaluated the capacity of 13- to 15-year-
old, French-speaking students to solve simple electricity comprehension problems often involving
“unexpected conceptions” (Potvin, 2007). Unexpected conceptions are conceptions that are not expected
from a pedagogical standpoint. They contradict the scientific content found in science textbooks.
The questionnaire synthesized many classical multiple choice questions available in the conceptual
change literature regarding the learning of electricity (Chambers & Andre, 1997; Duit & von Rhoneck,
1998) and aimed to identify the following unexpected conceptions (among others):
• Q1) One wire is sufficient to light a bulb (sink theory) (Duit & von Rhoneck, 1998; Shipstone,
1984; Thouin, 2008).
• Q2) One of the poles (the positive one, most of the time) is a source of electrical current and the other “unloads” the “excess” and used current.
• Q2) Two different currents collide in the bulb to light it (clash theory (Shipstone, 1984)).
• Q2 and Q3) It is not necessary for the current to go back to the source to light a bulb
(Shipstone, 1984).
• Q4 and Q6) A bulb consumes current. A second one, wired in series with the first one, will
therefore light less (Shipstone, 1984; Thouin, 2008).
• Q5) A node or a long wire is a substantial obstacle to the circulation of current.
• Q6 and Q7) Current distributes equally in parallel-connected wires.
The pre-test was administered just before treatment and the post-test was given 25-30 days after
treatment. Figure 1 offers an example of an item from the questionnaire.1 In this question, students
had to choose between: a) Bulb X; b) Bulb Y; c) One of the two; and d) Both will light up equally. For
each question, students could also select a fifth option, namely e) I have no idea. If they chose one of the other answers, they could also indicate whether they were “certain of their answer” or “still had doubts”
about it. For the purpose of the research, and even though we were aware that expressing uncertainty
might be used by the students as a safety valve, we constructed doubt as being present when choosing
to express some uncertainty while being certain enough to propose an answer.
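The sketch below (ours; a minimal illustration rather than the instrument's actual coding scheme) shows one way a single item response could be recorded under this scheme, together with the construction of doubt used in this research: doubt is present when an answer is proposed but uncertainty is expressed about it.

```python
# A minimal sketch of one recorded item response: a chosen option ("a" to "d"),
# the "I have no idea" option ("e"), and, when an answer is proposed, a flag
# for "certain of their answer" versus "still had doubts".
from dataclasses import dataclass
from typing import Optional

@dataclass
class ItemResponse:
    question: int            # question number, 1 to 8
    choice: str              # "a" to "d", or "e" for "I have no idea"
    certain: Optional[bool]  # True = certain, False = still had doubts,
                             # None when "I have no idea" was selected

    def expresses_doubt(self) -> bool:
        """Doubt as constructed in this study: an answer is proposed
        (not "I have no idea") but uncertainty is expressed about it."""
        return self.choice != "e" and self.certain is False

# Example: a student answers option "b" on question 4 but still has doubts.
print(ItemResponse(question=4, choice="b", certain=False).expresses_doubt())  # True
```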
Participants
There were 251 secondary school students from eight classrooms who received treatment A (Group 1:
problem-centered treatment) and 265 students from another eight classrooms who received treatment B
(Group 2: teaching-centered treatment), for a total n=516 with an average of 27 students per classroom.
These students came from 18 different schools and a total of 12 teachers were involved. Schools freely
responded to an invitation to participate in the study made publicly on institutional websites. All those
manifesting interest in the project were accepted. Each participating classroom was randomly assigned
to one of the two treatments, forming groups 1 and 2. None of the students had prior formal training
in problem-centered learning, nor in electricity—this topic being absent from previous curriculum—
although we can assume that some of them had acquired some knowledge of the subject informally.
Treatment conditions
Treatment A was a problem-centered approach called “The Electrical Challenge”, during which students
had to solve, in lab conditions and with no help from their teacher, as many problems as they could in a
75-minute period. Twenty-four hands-on electrical problems or challenges classified from the simplest
to the most complex were presented to students. These problems were all expressed in qualitative terms
(e.g., “Challenge No. 6: Switch A must light up bulb No. 1 while switch B must light up bulb No. 2”)
and addressed electrical circulation, the effects of parallel and serial circuits, as well as the effects
of electrical resistance. Students worked in pairs and had access to all available materials (electrical
sources, wires, bulbs, resistors, etc.). Every time a team thought it had solved a problem, one of its
members would raise his/her hand and a project assistant would confirm the solution by signing a form
and allowing the team to move on to the next challenge/problem. This treatment can be considered as
conforming to micro-PBL models where “PBL happens at a small scale, in a single course (50-180
minutes), where research of information is made on site […] by experiments, teamwork, using prior
experience or logical reasoning” (free translation from Guilbert & Ouellet, 1997 p. 77).
After the end of every treatment, a delay of 25 to 30 days was respected before administering the retest. Teachers were asked not to discuss the topic during this period. We were aware that because of this delay the difference between test and retest would be considerably reduced. However, the durability of learning was considered important. We did not retest the students immediately after the
treatment to avoid contamination of our delayed post-test (retest).
For the purpose of the analysis, we used a categorisation of answers inspired by earlier research like Merenluoto & Lehtinen (2002) and Hasan et al. (1990), who used multiple-level certainty scales, but more closely by the work of Vachey et al. (2001), who used a simpler categorisation that we deemed sufficient for our needs. The categories were: legitimate certainty (LC), a correct answer proposed with certainty; under-estimation (UE), a correct answer proposed with doubt; over-estimation (OE), an incorrect answer proposed with certainty; legitimate doubt (LD), an incorrect answer proposed with doubt; and do not know (DNK), when no answer was proposed.
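As an illustration, the sketch below (ours, not part of the original analysis) expresses this categorisation as a simple mapping from the correctness of an answer and the certainty expressed about it to one of the five category codes used in the rest of the article.

```python
# A sketch of the five-category classification of answers, combining the
# correctness of an answer with the certainty expressed about it.
from typing import Optional

def categorize(correct: Optional[bool], certain: Optional[bool]) -> str:
    """Map one answer to a category code.
    correct is None when no answer was proposed ("I have no idea")."""
    if correct is None:
        return "DNK"   # do not know
    if correct and certain:
        return "LC"    # legitimate certainty (right and certain)
    if correct and not certain:
        return "UE"    # under-estimation (right but doubted)
    if not correct and certain:
        return "OE"    # over-estimation (wrong but certain)
    return "LD"        # legitimate doubt (wrong and doubted)

# Example: a wrong answer given with certainty is classified as over-estimation.
print(categorize(correct=False, certain=True))  # OE
```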
Two different analyses of the data were made. The first one focused on answers and their migration
from one category to another, from testing to retesting, while the second focused on the students,
their initial categorisation, and the migration of this category from test to retest. These two analyses
were carried out for each of the two groups of students, allowing a comparison of the two pedagogical
treatments. Calculations and commentaries about validity and reliability of the obtained data follow in
the next section.
RESULTS
Figures 3 and 4 give a synthesized view of the percentage variation for each category for each of the
two groups. Each grey area in the graph represents the evolution of a different category. Percentage
variation for each category is also given. Increases in performance (% of right answers, including
legitimate certainty (LC) and underestimation (UE)) and increases in certainty (% of certain answers,
including legitimate certainty (LC) and overestimation (OE)) are given at the right.
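These two indices can be illustrated with the short sketch below (hypothetical data; the category codes follow the definitions given in the methods section): performance is the share of answers falling in LC or UE, and certainty is the share falling in LC or OE.

```python
# A sketch of the two aggregate indices: performance = % of right answers
# (LC + UE) and certainty = % of certain answers (LC + OE). Data are hypothetical.
from collections import Counter

def performance_and_certainty(categories: list[str]) -> tuple[float, float]:
    counts = Counter(categories)
    total = len(categories)
    performance = 100 * (counts["LC"] + counts["UE"]) / total  # % right answers
    certainty = 100 * (counts["LC"] + counts["OE"]) / total    # % certain answers
    return performance, certainty

# Hypothetical test and retest classifications for a small set of answers:
test = ["LC", "LD", "UE", "OE", "DNK", "LC", "LD", "OE"]
retest = ["LC", "LC", "LC", "OE", "UE", "LC", "LD", "OE"]
p0, c0 = performance_and_certainty(test)
p1, c1 = performance_and_certainty(retest)
print(f"performance gain: {p1 - p0:+.1f} points; certainty gain: {c1 - c0:+.1f} points")
```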
The first relevant observation we can make about the two treatments pertains to the important
difference between the increase of performance (LC+UE) versus the increase of certainty (LC+OE).
While the two treatments produced comparable increases in performance (9.8% and 12.1%), treatment
B yielded a greater increase—almost double—in certainty than treatment A (22.4% vs. 12.2%).
These results are very important because 1) they allow us to extend our comparison between the two
treatments (increases in performance are comparable) and 2) they give an insight about the importance
of doubt evolution between the two treatments.
The increase in certainty of Group 2 is manifested in both correct and incorrect answers. Legitimate
certainty (LC) and overestimation (OE) are both about 5 points higher in Group 2 than in Group 1.
This increase is offset mostly by a decrease in legitimate doubt (LD) (-15.1% instead of -7.8% for Group 1) and to a smaller degree by a decrease in under-estimation (UE) (-4.7% instead of -1.6%).
Moreover, a greater number of students in Group 2 were confirmed in their belief that their answers
were correct (LC). However, a greater proportion of students from Group 2, relative to Group 1,
also believed that their answers were correct when in fact they were not (OE). These findings may
suggest that Group 2 students were less responsive to or influenced by the treatment to which they were
exposed.
Figures 5 and 6 give a more detailed view of the migration of answers from one category to another,
for both treatments A and B. In these figures, the reader’s attention should be drawn to the arrows. For
example, in figure 5, the arrow marked “3.6%” indicates that the balance of answers migrated from
UE to LC. This does not signify that there was no migration from LC to UE, but rather that the overall
migration was toward LC, in this particular proportion. The black arrows show the migrations from the
do not know category (DNK) to each one of the other categories. The two bigger arrows depict general performance and general certainty. The absence of an arrow indicates a null balance. Moreover, all the arrows in these figures are drawn so that their area is proportional to the balance of migrations.
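For readers wishing to reproduce this kind of representation, the sketch below (hypothetical data, not the study's dataset) computes the balance of migrations between every pair of categories from paired test and retest classifications; only the dominant direction of each flow is kept, as in figures 5 and 6.

```python
# A sketch of the "balance of migrations": for each pair of categories, the net
# flow is the number of answers moving one way minus the number moving back,
# expressed as a percentage of all answers. Data are hypothetical.
from collections import Counter
from itertools import combinations

def migration_balances(pairs: list[tuple[str, str]]) -> dict[tuple[str, str], float]:
    """pairs holds (test category, retest category) for each answer."""
    flows = Counter(pairs)
    total = len(pairs)
    categories = {"LC", "LD", "UE", "OE", "DNK"}
    balances = {}
    for a, b in combinations(sorted(categories), 2):
        net = flows[(a, b)] - flows[(b, a)]  # flow a -> b minus flow b -> a
        if net > 0:
            balances[(a, b)] = 100 * net / total
        elif net < 0:
            balances[(b, a)] = 100 * -net / total
        # a null balance (net == 0) produces no arrow
    return balances

# Example: three answers move UE -> LC and one moves LC -> UE out of 50 answers,
# giving a net arrow of 4% from UE to LC.
data = [("UE", "LC")] * 3 + [("LC", "UE")] + [("LD", "LD")] * 46
print(migration_balances(data))  # {('UE', 'LC'): 4.0}
```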
In addition to the observations already made about these figures, one can notice that the directions of
the arrows are nearly identical from group 1 to group 2. The difference lies in the values associated
with the migrations. For example, we notice that do not know (DNK) answers (black arrows) migrated mainly to legitimate doubt (LD) for group 1 and to legitimate certainty (LC) for group 2,
which is in line with earlier observations.
Moreover, the figures give an interesting overview of the general dynamics of doubt, which flow mainly from legitimate doubt (LD) to overestimation (OE) (more often) or underestimation (UE), and then to legitimate certainty (LC); or directly from legitimate doubt (LD) to legitimate certainty (LC). One can almost perceive a flux, from legitimate doubt to legitimate certainty, that seems, in both cases, to “prefer” passing through overestimation rather than under-estimation.
An interesting difference can be observed with regard to the total balance of answers labelled
overestimation (OE). These answers could, as Hasan et al. (1990) suggested for wrong-and-
sure answers, be a good indication of the presence of misconceptions (instead of simple lack of
knowledge), so it is important to consider them. In Group 1, the total balance is close to zero (3.4% - 2.6% + 0.1% = 0.9%), and every migration out of this category is made toward legitimate certainty (LC). In Group 2, the migration to overestimation is clearly positive (6.1% - 1.8% - 1.0% + 0.2% = 3.5%) and answers sometimes migrate out to under-estimation (UE). These results are important because they
suggest that treatment B might generate more misconceptions (positive migration balance) and that it
might sometimes lead students to undervalue their progress (some migrations toward UE). This result
is in line with Akinoglu and Tandogan’s (2007) positive results about the prevention of misconceptions
by problem-based active learning.
Another interesting observation we can make of this data is about the migration of answers out of
legitimate doubt (LD). While migrations from this category to certainty (LC or OE) increase as much in Group 1 as in Group 2, migrations to under-estimation (UE) decrease. This result is interesting but difficult to interpret; it appears that traditional pedagogical treatments do not really encourage students to persist in doubting. We can also observe this while analysing the migrations in and out of under-estimation. While in-migrations are comparable for both groups, “out-
migrations” mostly lead students to legitimate certainty (LC): this finding is also revealing because it
suggests that traditional pedagogy seems to be more capable of renewing students’ confidence in things
they already know.
Given the considerable difficulty of making sense of information as dense as that presented in this
figure, we further refined the representation of these migrations. Figure 8 gives such a representation
where vectors are regrouped in 16 areas given by initial test positions, and where resulting migrations
(X and Y) are represented by a single vector, centered in the square. The zones containing too few vectors (fewer than 5) were not compiled. The solid vectors detail migrations for Group 1 students, while the dotted ones represent migrations for Group 2.
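The sketch below (ours; the axis scaling, bin edges and data are assumptions made for illustration) shows how such resultants can be computed: migrations are binned by their initial position on a 4 x 4 grid, zones with fewer than 5 vectors are dropped, and the remaining vectors in each zone are averaged.

```python
# A sketch of the regrouping behind figure 8: each student's migration is a
# vector from an initial (doubt, performance) position to a final one; vectors
# are binned by initial position and averaged into one resultant per zone.
import numpy as np

def resultants(initial: np.ndarray, final: np.ndarray,
               bins: int = 4, min_count: int = 5) -> dict:
    """initial, final: arrays of shape (n_students, 2) holding
    (doubt, performance) profiles normalised to the range [0, 1]."""
    idx = np.clip((initial * bins).astype(int), 0, bins - 1)  # bin indices
    out = {}
    for i in range(bins):
        for j in range(bins):
            in_zone = (idx[:, 0] == i) & (idx[:, 1] == j)
            if in_zone.sum() >= min_count:
                # mean migration vector for this zone
                out[(i, j)] = (final[in_zone] - initial[in_zone]).mean(axis=0)
    return out

# Hypothetical usage with random profiles for 100 students:
rng = np.random.default_rng(0)
start = rng.random((100, 2))
end = np.clip(start + rng.normal(0, 0.1, size=(100, 2)), 0, 1)
print(len(resultants(start, end)), "zones with at least 5 students")
```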
Figure 8: Resultants of students’ migrations for both treatments A (Group 1) and B (Group 2)
Even with this refined representation, it is quite difficult to discern appreciable tendencies, especially
when one considers how normal “edge effects” (an effect created by the impossibility for a value to be
outside the matrix, thereby moving means away from the edges) seem to push the performance/doubt
profiles toward the center of the figure. It is however still possible to observe that, surprisingly, students
who show the best initial scores and more doubt (upper right) decrease considerably in performance
while others generally increase. It would seem that an initial under-estimating attitude is not the most conducive to effective learning in either treatment, and especially not in treatment A (problem-centered).
A final and more interesting representation (see figure 9) can be produced from the data depicted in figure 8. To obtain it, we carried out a subtraction between the vectors from group 1 and the vectors from group 2 (treatment B minus treatment A). Thus the arrows give an idea of the difference between the effects of the two treatments. The numbers in each box represent the total number of students considered in each subtraction. For example, the vector found in the lower right box was obtained by considering 115 students’ migrations (initial and final “performance/doubt” profiles). It points toward the right
indicating that doubt increased more (or was better preserved) in treatment A than in treatment B.
It also slightly points down indicating that performance gain was less in treatment A. This new
representation (figure 9) has the advantage of cancelling all edge effects (as these were present in both
subtracted treatments), thereby revealing more about interesting differences. Grey areas or “boxes”
give the standard error of measurement. The latter was obtained independently for both the X and Y
axis by considering only the points for which the number of students was superior to 5. Boxes are of
equal size because we have indicated only the most conservative one in every square. We can see that
all migrations are larger than the standard error boxes. The shortest arrow is about 1.5 standard error
and the longest is about 4 times the standard error. Assuming normal distributions, the probabilities that
these results could be randomly generated are respectively below 14% and 1%.
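These figures can be checked with a short calculation under the stated normality assumption; the sketch below uses the standard normal survival function to obtain the two-sided tail probability for deviations of 1.5 and 4 standard errors.

```python
# A worked check of the quoted probabilities, assuming normally distributed
# errors: a deviation of about 1.5 standard errors has a two-sided tail
# probability of roughly 13% (below 14%), and one of about 4 standard errors
# has a probability well below 1%.
from scipy.stats import norm

for k in (1.5, 4.0):
    p = 2 * norm.sf(k)  # two-sided probability of a deviation of k standard errors
    print(f"{k} standard errors -> p = {p:.5f}")
# 1.5 standard errors -> p = 0.13361
# 4.0 standard errors -> p = 0.00006
```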
A first observation can be made based on figure 9 about the general tendency for treatment A (a
problem-centered approach) to better preserve doubt than treatment B (a teaching-centered approach)
for most students, as revealed by most vectors heading toward the right in treatment A. We can see that
students who combine poor initial performance and high certainty (lower left square, representing some 15% of students) would best preserve doubt if they were taught in a traditional teaching-centered way (left-pointing arrows). Interestingly, this conforms to interpretations made from preceding figures.
A second interesting observation is about students who were initially most uncertain about their
answers (right column). It is quite striking to see that while almost every other group of students
clearly benefited from the problem-centered approach (treatment A), the group of students whose
initial uncertainty about their answers was highest did not seem to benefit at all, regardless of their
initial performance. The difference here appears to be a question of “certainty/doubt” attitude. Several explanations are possible. While it could be argued that the observations reflect students’ lack of familiarity with problem-centered approaches (treatment A), it could also be
argued that uncertain students might not get as involved as they should in the teamwork required for
problem solving, and that, therefore, they ultimately learned less. We could also argue that uncertain
students might not be able to learn as much because they do not possess clear hypotheses on which
they could build their comprehension and on which they could base their trial and error initiatives; or
that they may not be able to find sufficient solid (scientific or not) grounds upon which to formulate
or test hypotheses; etc. It would seem though that most students (a little less than two-thirds of them)
are sufficiently certain of their proposed answers to attempt to test them in such a way that they can
learn more from a “micro-problem-centered” approach (two-person teams). This is also an interesting
result because it requires consideration of the importance of “dissatisfaction” (Strike, Posner, Hewson,
& Gertzog, 1982) with one’s own unexpected conceptions in the process of conceptual change. Our
results (figure 9) seem to strengthen this claim because it is difficult to imagine how weakly endorsed
conceptions could be associated with strong and clear dissatisfaction; conversely, it is easy to imagine
higher levels of dissatisfaction when events confront conceptions deemed or felt to be certain.
An analogy involving a river and a waterfall may be useful in understanding the general dynamics of
doubt illustrated in figure 9. A generally “beneficial” river of “problem-centered learning” flows from
left to right but, at a certain point, the “waterfall of doubt” makes learning tumble down and leads to
wasted effort. This suggests that when students feel most uncertain while engaged in problem-centered
learning (or inquiry-based learning), a lecture at the right time may be beneficial, as Hmelo-Silver
(2004) suggested. Although it is clear that we need more research on this, it seems that the provision by
the teacher of at least a small number of knowledge elements that are presented as certain might from
time to time be appropriate for further learning in such exploratory contexts.
A third observation concerns students who initially have the lowest performance (lower row).
These students seem to be the ones for whom the difference in treatments has the smallest effect on
performance. We can observe a very slight and not statistically significant advantage for the most
confident of these students, but it would not be appropriate to draw further conclusions about this.
We can also observe that, for such students taking part in the problem-centered treatment, confident students see their confidence grow while uncertain ones tend to become even more uncertain.
DISCUSSION
This research cannot draw conclusions about the general performance increases produced by one or the other of our treatments because the two treatments were not strictly comparable (treatment B involved more teaching time with students). However, what this research does allow is an important
comparison of the dynamics of doubt with individual variations in performance, notwithstanding the
observation that general performance increases were comparable for both treatments. It helps provide
insight into what a teacher can expect to unfold following a decision to evolve from a “teacher-centered” practice toward a more “problem-centered” approach. The research thus suggests that such teachers should be alert to students who feel less certainty about their answers: they are the ones least likely to benefit from problem-centered teaching.
Our other main conclusions from this exploratory study are summarized next. It is very important
to keep in mind that these conclusions can only be applied to the context of teaching physics and
particularly electricity. Generalisation to other fields or other content would be unwarranted. Finally,
considering the delay (25-30 days, with no mention of the topic) after which retest was administered,
one should understand that our results are not meant to reflect immediate learning, but more
appropriately mid-term learning.
• In problem-centered approaches, students might develop more doubt, or better preserve doubt
(most of the time legitimate doubt) while learning, except for those students who combine very
poor initial knowledge and very good certainty (in this study, they represented about 15% of the
population studied). If we consider that doubt is an important part of science, this general result
could be interpreted as a better conformity of problem-centered approaches to the conditions of
real science. The observation that, in problem-centered approaches, doubt seems to accompany
learning has been reported elsewhere. Galand and Frenay (2005), for example, have already
recorded that students learning in these contexts feel they know less, while knowledge
tests show that their knowledge is equal, if not superior, and that they have developed more
competencies. We don’t know if this situation (feeling doubtful despite actually knowing)
is positively beneficial for learning, but our results suggest that while doubt seems to help
learning in problem-based approaches, an excess of it might be harmful. Our results suggest
that, when this happens, more traditional methods of teaching might be more appropriate, at
least for a time.
• Students might develop fewer unexpected conceptions in problem-centered approaches. Since
no content communication was made in treatment A, we could suppose that teachers or teacher-
centered methods or tools used in this context could be responsible for the development of
“misconceptions”. We could also suppose that more conformity of our problem-centered
treatment to real science could be an explanation. Being in direct contact with the reality of
phenomena while learning could also improve conceptual change.
• About a third of the students in this study did not feel sufficiently certain of their initial
conceptions to benefit from a problem-centered approach, while two-thirds did. Further
research is required to determine whether it is possible to “boost” learner certainty before entering a problem-centered learning context, for example through classroom explanation of concepts, or by giving students the opportunity to have their ideas about a topic confronted with those of their peers.
• Students who present the poorest initial performance are not the ones who will benefit the most
from a problem-centered approach. For them, the choice of treatment will make no measurable
difference.
This research provided new insight into the application of problem-centered learning by using a
perspective—the dynamics of doubt—that enabled further understanding of students’ reactions toward
different methods. We are, as a result, able to suggest that problem-centered approaches might be
considered more interesting in cases where it is not possible to give differentiated pedagogical treatments
in a class, because most of the students in this study seemed to benefit from such an approach.
The research also provided a good illustration that problem-centered approaches, given equal increases
in performance, can better avoid the development of wrong-and-sure answers that we believe can be
associated with unexpected conceptions.
CONCLUSION
This research allowed us to better understand the circumstances in which doubt plays a role in learning
about electricity. Our results suggest the importance of considering general “doubt/certainty” attitudes
in choosing the best pedagogical treatment for students. We believe our results reveal strengths, limits
and a paradox in the use of these treatments.
First, students expressing certainty are the ones that benefited the most from the problem-based treatment, even when they were wrong about their initial answers (i.e. if they overestimated them). This might be a
product of the use of “challenges” in this study, which we suggest may have given them the opportunity to
confirm or invalidate initial hypotheses directly; in other words, to confront them with reality rather than
simply with contradictory arguments. Two-thirds of these students seemed better prepared to benefit from the problem-based approach and developed fewer overestimated answers (fewer “misconceptions”).
Second, it is possible that students who expressed more doubt underperformed in the problem-based
treatment because they simply lacked solid hypotheses they could test themselves, or that they did
not hold them strongly enough, even if these were right. For these students, it appears that teaching-centered approaches might be more beneficial for conceptual change and that providing them with knowledge presented as validated by the scientific community might be a more suitable strategy. They
also might benefit more from laboratory manipulations if hypotheses are simply given to them, such as
in the teaching-centered approach.
As for doubt migration, it appears that, in problem-centered approaches, most students clearly moved toward doubt, but it seems that too strong a move toward a doubting disposition leads to a reduction in performance (cf. the “waterfall” metaphor discussed earlier, in relation to figure 9). The paradox is that, in our problem-centered approach, students who profited the most from the treatment were the ones who presented the highest level of certainty. Yet a problem-centered approach reduces certainty
and therefore does not favour further learning. Meanwhile, in the teaching-centered approach, students
seemed to develop certainty to the point that they appeared to have lost some of their capacity for
reflection (see the analyses associated with figures 5 and 6) about what they learned (wrong-and-
sure answers or unexpected conceptions), because they overestimated more often. One interesting
interpretation from this research is that the best pedagogical strategy may be to avoid using only one
or the other of the approaches described in this study, but rather to alternate them, even though such a
strategy was not explicitly the object of this study.
Acknowledgements
Our thanks to Amélie Perron-Singh, Maude-Bouchard Fortier, Éric Durocher, Guillaume Cyr, Jean-
Sébastien Renaud, Jean-Mathieu Lavoie-Lebeau and the Montreal Science Center (CSM) for their
participation in this study. This research was made possible through funding from the FQRSC.
REFERENCES
Akerson, V. L., & Hanuscin, D. L. (2007). Teaching nature of science through inquiry: Results of a 3-year professional development program. Journal of research in science teaching, 44(5), 653-680.
Akinoglu, O., & Tandogan, R. O. (2007). The effects of problem-based active learning in science education on student’s
academic achievement, attitude and concept learning. Eurasia journal of mathematics, science and technology
education, 3(1), 71-81.
Astolfi, J.-P., Darot, É., Ginsburger-Vogel, Y., & Toussaint, J. (1997). Mots-clés de la didactique des sciences. Bruxelles:
DeBoeck-Université.
Barrette, C. (2005). Mystère sans magie, science, doute et vérité: notre seul espoir pour l’avenir. Montréal: Multimondes.
Bissonnette, S., Richard, M., & Gauthier, C. (2005). Échec scolaire et réforme éducative. St-Nicolas: Presses de l’Université Laval.
114 pages.
Brown, A. L., Bransford, J. D., Ferrara, R. A., & Campione, J. C. (1983). Handbook of child psychology : Cognitive development.
Vol. 3. In J. H. Flavel & E. L. Markman (Eds.), Learning, remembering, and understanding. New York: John Wiley & son.
Burton, R. A. (2008). On being certain: believing you are right even when you’re not. New York: St. Martin’s Press.
CCA/CCL. (2007). Écart entre les sexes sur le plan du choix de carrière : Pourquoi les filles n’aiment pas les sciences. Rapport du
Conseil canadien de l’apprentissage
Chambers, S. K., & Andre, T. (1997). Gender, Prior Knowledge, Interest, and Experience in Electricity and Conceptual Change
text manipulations in learning about direct current. Journal of research in science teaching (34), 107-123.
Dreyfus, A., Jungwirth, E., & Eliovitch, R. (1990). Applying the “Cognitive Conflict” Strategy for Conceptual Change - Some
Applications, Difficulties and Problems. Science Education, 74(5), 555-569.
Duit, R., & von Rhoneck, C. (1998). Learning and understanding key concepts of electricity. In A. Thibergien, L. Jossem & B.
Jorge (Eds.), Connecting Research in Physics Education with Teacher Education: The International Commission on
Physics Education.
Dunlosky, J., & Nelson, T. O. (1992). Importance of the kind of cue for judgments of learning (JOL) and the delayed-JOL effect.
Memory and cognition, 20, 374-380.
Flavel, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry. American
Psychologist, 34, 906-911.
Galand, B., & Frenay, M. (2005). Impact sur les connaissances des étudiants. In L’approche par problèmes et par projets dans
l’enseignement supérieur (pp. 138-161). Louvain: Presses Universitaires de Louvain.
Gigerenzer, G., & Todd, P. M. (1999). Simple heuristics that make us smart. New York: Oxford university press.
Gilles, G. L. (1997). Impact de deux entraînements à l’utilisation des degrés de certitude chez les étudiants de 1ere candidature
de la Faculté de Psychologie et des Sciences de l’Education de l’ULg, Liege.
Guilbert, L., & Ouellet, L. (1997). Étude de cas. Apprentissage par problèmes. Sainte-Foy: Presses de l’Université du Québec.
Hasan, S., Bagayoko, D., & Kelley, E. L. (1990). Misconceptions and the certainty of response index (CRI). Physics education,
34(5), 294-299.
Hewson, P. W., & Hewson, M. G. (1984). The Role of Conceptual Conflict in Conceptual Change and the Design of Science
Instruction. Instructional Science, 13(1), 1-13.
Hmelo-Silver, C. E. (2004). Problem-based learning: what and how do students learn? Educational psychology review, 16(3),
236-262.
Koriat, A. (1993). How do we know that we know? The accessibility model of the feeling of knowing. Psychological review, 100,
609-639.
Leclercq, D., Poumay, M., & Gagnayre, E. A. (2003). La connaissance partielle chez l’apprenant : pourquoi et comment la
mesurer. In E.A. Gagnayre (Ed.) L’évaluation de l’Éducation Thérapeutique du Patient. Paris: IPCEM.
Lee, G., Kwon, J., Park, S.-S., & Kim, J. W. (2003). Development of an instrument for measuring cognitive conflict in secondary-
level science classes. Journal of research in science teaching, 40(6), 585-603.
Levy, A. J., Minner, D. D., & Jablonski, E. (2007). Inquiry-based science instruction and students’ science content knowledge: A
research synthesis. Proceedings of the National Association for Research in Science Teaching (NARST) Conference.
Limon, M. (2001). On the cognitive conflict as an instructional strategy for conceptual change : a critical appraisal. Learning and
instruction, 11, 357-380.
Merenluoto, K., & Lehtinen, E. (2002). Certainty bias as an indicator of problems in conceptual change: the case of the number
line. Paper presented at the Proceedings of the 26th annual conference of the international group for the psychology of
mathematics education, University of East Anglia, Norwich, UK.
Metcalfe, J. S. A. (1994). Metacognition: Knowing about knowing. Cambridge, Mass.: Bradford books.
Modirrousta, M., & Fellows, L. K. (2008). Medial prefrontal cortex plays a critical and selective role in “feeling of knowing” meta-
memory judgements. Neuropsychologia, 46(12), 2958-2965.
Nelson, T. O. (1992). Metacognition: Core readings. Boston, MA: Allyn and Bacon.
Nelson, T. O., & Dunlosky, J. (1991). The delayed-JOL effect: When delaying your judgments of learning can improve the
accuracy of your metacognitive monitoring. Psychological science, 2, 267-270.
Nussbaum, J., & Novick, S. (1982). Alternative Frameworks, Conceptual Conflict and Accommodation : Toward a Principled
Teaching Strategy. Instructional science, 11, 183-200.
Pajares, F. (1997). Current directions in self-efficacy research. In M. Maehr & P. R. Pintrich (Eds.), Advances in motivation and
achievement, Volume 10 (pp. 1-49). Greenwich, CT: JAI press.
Perrenoud, P. (1996). Enseigner : agir dans l’urgence, décider dans l’incertitude. Paris: ESF Éditeur.
Potvin, P. (2007). Enseigner les sciences en considérant le rôle de l’intuition dans l’apprentissage des sciences. In P. Potvin, M.
Riopel & S. Masson (Eds.), Regards multiples sur l’enseignement des sciences. Québec: Multimondes.
Reder, L. M., & Shunn, C. D. (1996). Implicit Memory and Metacognition. In L. M. Reder (Ed.), Metacognition does not imply
awareness: Strategy choice is governed by implicit learning and memory. Florence, Kentucky: Erlbaum.
René de Cotret, S., & Larose. (2007). Les choses que l’on sait et les choses dont on se sert, Paper presented at the Journées
Internationales sur la communication, l’éducation et la culture scientifiques et industrielles, Chamonix.
Schnyder, D. M., Verfaellie, M., Alexander, M. P., & Lafleche, G. (2004). A role for right medial prefrontal cortex in accurate
feeling-of-knowing judgements: evidence from patients with lesions to frontal cortex. Neuropsychologia, 42(7), 957-966.
Schunk, D. H. (1989). Self efficacy and achievement behaviours. Educational psychology review, 1(3), 173-208.
Shipstone, D. M. (1984). A study of children’s understanding of electricity in simple DC circuits. European journal of science
education, 6, 185-198.
Smith, J. D., Shield, W. E., & Washburn, D. A. (2003). The comparative psychology of uncertainty monitoring and metacognition.
Behavioral and brain sciences, 26, 317-373.
Strike, K., Posner, G. J., Hewson, P. W., & Gertzog, W. A. (1982). Accomodation of a Scientific Conception : Toward a Theory of
Conceptual Change. Science Education, 66(2), 211-227.
Thompson, T., & Dinnel, D. L. (2007). Poor performance in mathematics: is there a basis for a self-worth explanation for women?
Educational psychology, 27(3), 377-399.
Vachey, E., Miquel, J. L., & Quinton, A. (2001). Quel intérêt avons-nous à intégrer la notion de certitude en contrôle continu?
Odonto-stomatologie tropicale (95), 1-8.
Vernon, D. T. A., & Blake, R. L. (1993). Does problem-based learning work? A meta-analysis of evaluative research. Academic
medicine, 69(7), 550-563.
Wu, H.-K., & Hsieh, C.-E. (2006). Developing sixth graders’ inquiry skills to construct explanations in inquiry-based learning
environments. International Journal of Science Education, 28(11), 1289-1313.