Article
Educational Design Principles of Using AI Chatbot That Supports
Self-Regulated Learning in Education: Goal Setting, Feedback,
and Personalization
Daniel H. Chang 1, * , Michael Pin-Chuan Lin 2 , Shiva Hajian 3 and Quincy Q. Wang 1
1 Faculty of Education, Simon Fraser University, Burnaby, BC V5A 1S6, Canada; quincy_wang@sfu.ca
2 Faculty of Education, Mount Saint Vincent University, Halifax, NS B3M 2J6, Canada; michael.lin@msvu.ca
3 Faculty of Psychology, Kwantlen Polytechnic University, Surrey, BC V3W 2M8, Canada; shiva.hajian@kpu.ca
* Correspondence: dth7@sfu.ca
Abstract: The invention of ChatGPT and generative AI technologies presents educators with significant challenges, as concerns arise regarding students potentially exploiting these tools unethically, misrepresenting their work, or gaining academic merits without active participation in the learning process. To effectively navigate this shift, it is crucial to embrace AI as a contemporary educational trend and establish pedagogical principles for properly utilizing emerging technologies like ChatGPT to promote self-regulation. Rather than suppressing AI-driven tools, educators should foster collaborations among stakeholders, including educators, instructional designers, AI researchers, and developers. This paper proposes three key pedagogical principles for integrating AI chatbots in classrooms, informed by Zimmerman's Self-Regulated Learning (SRL) framework and Judgment of Learning (JOL). We argue that the current conceptualization of AI chatbots in education is inadequate, so we advocate for the incorporation of goal setting (prompting), self-assessment and feedback, and personalization as three essential educational principles. First, we propose that teaching prompting is important for developing students' SRL. Second, configuring reverse prompting in the AI chatbot's capability will help to guide students' SRL and monitoring for understanding. Third, developing a data-driven mechanism that enables an AI chatbot to provide learning analytics helps learners to reflect on learning and develop SRL strategies. By bringing in Zimmerman's SRL framework with JOL, we aim to provide educators with guidelines for implementing AI in teaching and learning contexts, with a focus on promoting students' self-regulation in higher education through AI-assisted pedagogy and instructional design.

Keywords: chatbot; self-regulated learning; AI pedagogy; judgement of learning

Citation: Chang, D.H.; Lin, M.P.-C.; Hajian, S.; Wang, Q.Q. Educational Design Principles of Using AI Chatbot That Supports Self-Regulated Learning in Education: Goal Setting, Feedback, and Personalization. Sustainability 2023, 15, 12921. https://doi.org/10.3390/su151712921
2. Theoretical Framework
Let us conceptualize a learning scenario on writing and learning. A student accesses
their institution’s learning management system (LMS) and selects the course titled “ENGL
100—Introduction to Literature”, a foundational writing course under the Department of
English. Upon navigating to an assignment, the student delves into its details and reads
the assignment instructions. After a brief review, the student copies the assignment’s
instructions. In a separate browser tab, the student opens up ChatGPT and decides to
engage with it. The student then pastes the assignment instructions, prompting ChatGPT
with, “Plan my essay based on the provided assignment instructions, [copied assignment
instructions]”.
In response, ChatGPT outlines a structured plan, beginning with the crafting of
an introduction. However, the student is puzzled about the nature and structure of an introduction, so the student re-prompts, "Could you provide an example introduction for the assignment?" ChatGPT then offers a sample. After studying the example, the student opens word processing software on their computer and commences the writing
process. Upon completing the introduction, the student seeks feedback from ChatGPT,
asking, “Could you assess and evaluate the quality of my introduction?” ChatGPT provides
its evaluation. Throughout the writing process, the student frequently consults ChatGPT
for assistance on various elements, such as topic sentences, examples, and argumentation,
refining their work until the student is satisfied with the work they produce for the ENGL
100 assignment.
This scenario depicts an idealized SRL cycle executed by the student, encompassing goal-setting, standard reproduction, iterative engagement with ChatGPT, and solicitation of evaluative feedback. However, in real-world educational contexts, students might not recognize this cycle. They might perceive ChatGPT merely as a problem-solving AI chatbot that can help them with the assignment. Instructors, for their part, are often not fully aware of how AI tools can be integrated as part of their pedagogy, yet they fear that students will use such AI chatbots unethically in their learning.
We take the position that generative AI tools, like ChatGPT, have the potential
to facilitate SRL when instructors are aware of the SRL process from which students can
benefit. To harness the potential of generative AI tools, educators must be cognizant of
their capabilities, functions, and pedagogical values. To this end, we employ Zimmerman’s
multi-level SRL framework, which will be elaborated upon in the subsequent section.
To the best of our knowledge, work on the design of chatbots has focused largely on backend design [43], the user interface [44], and improving learning [36,45,46]. For example, Winkler
and Söllner [46] reviewed the application of chatbots in improving student learning out-
comes and suggested that chatbots could support individuals’ development of procedural
knowledge and competency skills such as information searching, data collection, decision
making, and analytical thinking.
Specifically for learning improvement, since the rise of OpenAI's ChatGPT, there have been several emerging calls for examining how ChatGPT can be integrated pedagogically to support the SRL process. As Dwivedi et al. [47] write, "Applications like ChatGPT can be used either as a companion or tutor, [or] to support … self-regulated learning" [47] (p. 9). A recent case study also found that the feedback ChatGPT gave on student assignments is comparable to that of a human instructor [48]. Lin and Chang's study [49] and Lin's doctoral dissertation have also provided a clear blueprint for designing and implementing chatbots for educational purposes and documented several interaction pathways leading to effective peer reviewing activities and writing achievement [49]. Similarly, Zhu et al. [50] argued that although "self-regulated learning has been widely promoted in educational settings, the provision of personalized support to sustain self-regulated learning is crucial but inadequately accomplished" (p. 146). Therefore, we address the emerging need to integrate chatbots in education and to understand how chatbots can be developed or used to support learners' SRL activities. This is why fundamental educational principles for pedagogical AI chatbots need to be established. To do so, we have identified several
instructional dimensions that we argue should be featured in the design of educational
chatbots to facilitate effective learning for students or at least to supplement classroom
instructions. These instructional dimensions include (1) goal setting, (2) feedback and
self-assessment, and (3) personalization and adaptation.
by reasoning about the relative merits of alternative learning strategies in the current
circumstances” (p. 389).
SRL also consists of learners exercising their metacognitive control and metacog-
nitive monitoring. These two processes are guided by pre-determined result-oriented
outcomes: objectives or goals [8,42,55–57]. SRL researchers generally agree that goals can
trigger several SRL events and metacognitive activities that should be investigated as they
occur during learning and problem-solving activities [55,58,59]. Moreover, Paans et al. [60] argue that learner-initiated SRL activities at the micro-level and macro-level, such as goal setting or knowledge acquisition, can develop and occur simultaneously. This implies that, in certain pedagogical tasks or problem-solving environments, such as working with chatbots, students need to identify goals by prompting the AI chatbot in a learning session in a way that corresponds to the tasks.
Additionally, goals can function as benchmarks by which learners assess the efficacy
of their learning endeavors. When students possess the capacity to monitor their progress
toward these goals, they are more likely to sustain their motivation and active involvement
in the learning process [61]. Within the context of AI chatbot interaction, consider a scenario
where a student instructs a chatbot to execute certain actions, such as synthesizing a given
set of information. Subsequently, the chatbot provides the requested synthesis, allowing
students to evaluate its conformity with their expectations and the learning context. Within
Zimmerman’s framework of Self-Regulated Learning, this process aligns with the stages
of emulation and self-control. Once a student prompts the chatbot for a response, they
continuously monitor and self-assess its quality, subsequently re-prompting the chatbot for
further actions. This iterative interaction transpires within the stages of emulation and self-control, as students actively participate in a cycle of prompts, monitoring and adjustments, and subsequent re-prompts, persisting until they attain a satisfactory outcome. Yet we have to acknowledge that this interaction assumes student autonomy, in which students keep prompting the chatbot and relying on the chatbot's output. A more sophisticated form of student-chatbot interaction is bidirectional, where the chatbot is capable of reverse prompting, a concept we explore in greater depth in the next section.
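To make this cycle concrete, the following minimal sketch outlines the prompt, monitor, and re-prompt loop described above. The ask_chatbot and meets_goal helpers are hypothetical stand-ins for a real chatbot call and for the learner's own judgment of learning; they are not part of any system discussed in this paper.

```python
# A minimal sketch of the prompt -> monitor -> re-prompt cycle described above.
# ask_chatbot() and meets_goal() are hypothetical placeholders: the first stands in
# for any generative AI chatbot call, the second for the learner's own judgment
# of whether the output satisfies the goal they set.

def ask_chatbot(prompt: str) -> str:
    """Placeholder for a call to a generative AI chatbot."""
    return f"[chatbot response to: {prompt}]"


def meets_goal(response: str, goal: str) -> bool:
    """Placeholder for the learner's self-assessment of the response against the goal."""
    return goal.lower() in response.lower()  # illustrative criterion only


def srl_prompt_cycle(goal: str, max_rounds: int = 3) -> str:
    """Prompt, monitor the output against the goal, and re-prompt until satisfied."""
    prompt = goal
    response = ""
    for attempt in range(1, max_rounds + 1):
        response = ask_chatbot(prompt)          # forethought: goal expressed as a prompt
        if meets_goal(response, goal):          # monitoring and self-assessment
            break
        # Adjustment: the learner refines the prompt based on the gap they noticed.
        prompt = f"{goal} (revision request {attempt}: please address the missing parts)"
    return response


if __name__ == "__main__":
    print(srl_prompt_cycle("Outline a five-paragraph essay on remote learning"))
```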
We believe it is crucial to teach students how to effectively prompt a generative AI
chatbot. As we mentioned earlier, prompts are the goals that students set for the AI
chatbot, but often students just want the tool’s output without engaging in the actual
process. To better understand this, we can break prompts down into two types: cognitive
prompts and metacognitive prompts, by drawing on Bloom’s Taxonomy [62]. Cognitive
prompts are goal-oriented, strategic inquiries that learners feed into a generative AI chatbot.
Metacognitive prompts, on the other hand, are intended to foster learners' learning judgement and metacognitive growth. For example, in the case of a writing class, a cognitive prompt could be, "Help me grasp the concept of a thesis statement", while a more outcome-oriented cognitive prompt might be, "Revise the following sentence for clarity". In the case of metacognitive prompts, a
teacher could encourage the students to reflect on their essays by asking the AI chatbot,
“Evaluate my essay and suggest improvements”. The AI chatbot may function as a writing
consultant that provides feedback. Undeniably, students might take a quicker route by
framing the process more “outcome-oriented”, such as asking the AI, “Refine and improve
this essay”. This is where the educator’s role comes in to explain the ethics of conduct
and its associated consequences. Self-regulated learners stand as ethical AI users who
care about the learning journey, valuing more than just the end product. In summary,
goals, outcomes, or objectives can be utilized as defined learning pathways (also known
as prompts) when students interact with chatbots. Students defining goals while working
with a chatbot can be seen as setting a parameter for their learning. This goal defining (or prompting) helps students clearly understand what they are expected to achieve during a learning session and facilitates self-assessment of their work while working with the chatbot.
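To illustrate how the prompt types above could be operationalized when teaching students to prompt, the sketch below stores cognitive, outcome-oriented, and metacognitive prompt templates and fills them with a student's goal. The template wording and category names are our own illustrative assumptions, not templates taken from an existing chatbot.

```python
# Illustrative prompt templates organized by the prompt types discussed above.
# The wording and slot names are assumptions made for this sketch only.

PROMPT_TEMPLATES = {
    "cognitive": "Help me grasp the concept of {topic}.",
    "outcome_oriented": "Revise the following sentence for clarity: {text}",
    "metacognitive": "Evaluate my {artifact} and suggest improvements I should work on myself.",
}


def build_prompt(prompt_type: str, **slots: str) -> str:
    """Fill the chosen template with the student's goal-related details."""
    return PROMPT_TEMPLATES[prompt_type].format(**slots)


if __name__ == "__main__":
    # A goal-setting (cognitive) prompt and a reflective (metacognitive) prompt.
    print(build_prompt("cognitive", topic="a thesis statement"))
    print(build_prompt("metacognitive", artifact="essay introduction"))
```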
3.3. Feedback and Self-Assessment Mechanism

Self-assessment is a process in which individuals evaluate their learning, performance, and understanding of a particular subject or skill. Research has shown that self-assessment can positively impact learning outcomes, motivation, and metacognitive skills [63–65]. Specifically, self-assessment can help learners identify their strengths and weaknesses, re-set goals, and monitor their progress toward achieving those goals. Self-assessment, grounded in JOL, involves learners reflecting on their learning and making judgements about their level of understanding and progress [66]. Self-assessment is also a component of SRL, as it allows learners to monitor their progress and adjust their learning strategies or learning goals as needed [67]. Self-assessment can therefore be a feature of a chatbot regardless of whether learners employ it to self-assess their learning, or it can be automatically promoted by the chatbot system to guide students to self-assess.

However, so far, we have found that the current AI-powered chatbots, like ChatGPT, have limited capabilities in reverse prompting when used for educational purposes. Reverse prompting functions as guiding questions after students prompt the chatbot. As suggested in the last section, after learners identify their prompts and goals, chatbots can ask learners to reflect on their learning and provide "reverse prompts" for self-assessment. The concept of reverse prompts is similar to reciprocal questioning. Reciprocal questioning is a group-based process in which two students pose their own questions for each other to answer [68]. This method has been used mainly to facilitate the reading process for emergent readers [69–71]. For instance, a chatbot could ask a learner an explanatory question like "Now, I give you two thesis statements you requested. Can you provide more examples of the relationship between the two statements of X and Y?" or "Can you provide more details on the requested speech or action?", as well as reflective questions like "How do you generalize this principle to similar cases?", or ask learners to rate their understanding of a particular concept on a scale from 1 to 5 or to identify areas where they need more practice. We mock an example of such a conversation below in Figure 2.

Figure 2. A mocked example of reverse prompting from a chatbot.
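As a rough sketch of the kind of mocked exchange Figure 2 depicts, the following code shows how a chatbot wrapper might append a reverse prompt to each answer. The generate_answer function and the reverse-prompt wording are hypothetical placeholders, not the implementation behind the figure.

```python
# A mocked sketch of reverse prompting: after answering the student's prompt, the
# chatbot appends a guiding question that invites reflection or self-assessment.
# generate_answer() is a hypothetical stand-in for an actual language-model call.

import random

REVERSE_PROMPTS = [
    "Can you give one more example of the relationship between these two statements?",
    "How would you generalize this idea to a similar case?",
    "On a scale from 1 to 5, how well do you understand this concept, and why?",
]


def generate_answer(student_prompt: str) -> str:
    """Placeholder for the chatbot's answer to the student's prompt."""
    return f"[answer to: {student_prompt}]"


def respond_with_reverse_prompt(student_prompt: str) -> str:
    """Return the answer followed by a reverse prompt that guides self-assessment."""
    answer = generate_answer(student_prompt)
    reverse_prompt = random.choice(REVERSE_PROMPTS)
    return f"{answer}\n\nBefore we continue: {reverse_prompt}"


if __name__ == "__main__":
    print(respond_with_reverse_prompt("Give me two example thesis statements."))
```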
The chatbot could then provide feedback and resources to help the learner improve in areas with potential knowledge gaps and low confidence levels. In this way, chatbots can be an effective tool for encouraging student self-assessment and SRL. A great body of evidence shows that the integrative effect of self-assessment and just-in-time feedback goes beyond understanding and learning new concepts and skills [72]. Goal-oriented and criteria-based self-assessment (e.g., self-explanation and reflection prompts) allows the learner to identify the knowledge gaps and misconceptions that often lead to incorrect conceptions or cognitive conflicts. Just-in-time feedback (i.e., the information provided by an agent/tutor in response to the diagnosed gap) can then act as a knowledge repair mechanism if the provided information is perceived as clear, logical, coherent, and applicable by the learner [73].
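To make the pairing of self-assessment and just-in-time feedback concrete, here is a minimal sketch in which a diagnosed gap and a low confidence rating trigger targeted repair feedback. The gap labels, the rating threshold, and the feedback text are illustrative assumptions, not material drawn from the studies cited above.

```python
# A minimal sketch of just-in-time feedback as a knowledge-repair step: feedback is
# delivered in response to a diagnosed gap and a low self-assessment rating.
# The gap labels, the 1-5 threshold, and the feedback wording are assumptions.

FEEDBACK_BY_GAP = {
    "thesis_statement": "A thesis statement should make one arguable claim; "
                        "compare yours against the two examples we discussed.",
    "topic_sentence": "Each topic sentence should preview the paragraph's single main idea.",
}


def just_in_time_feedback(diagnosed_gap: str, confidence: int) -> str:
    """Return repair feedback when confidence is low; otherwise reinforce monitoring."""
    if confidence <= 3 and diagnosed_gap in FEEDBACK_BY_GAP:
        return FEEDBACK_BY_GAP[diagnosed_gap]
    return "Your self-assessment suggests you are on track; keep checking your work against your goal."


if __name__ == "__main__":
    print(just_in_time_feedback("thesis_statement", confidence=2))
```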
Based on Table 1 and the previous section on prompting and reverse prompting,
teachers can also focus on facilitating learning judgement while teaching students to work
with an AI chatbot. However, we propose that reverse prompting from an AI chatbot is
also important so that educational values and SRL can be achieved.
Within Zimmerman's framework [8], a chatbot can be seen as a form of the social assistance that students can obtain. If the chatbot can provide reverse prompts that guide thinking, reflection, and
self-assessment, students can then execute strategies that fit their goals and knowledge
level. When learners engage in self-assessment activities, they are engaging in the process
of making judgments about their learning. Throughout self-assessment, learners develop
an awareness of their strengths and weaknesses, which can help them modify or set new
goals. If they are satisfied with their goals, they can use their goals to monitor their progress
and adjust their strategies as needed. This process also aligns with Zimmerman’s SRL
model of self-control. At this phase, students can decide whether to go with what the
chatbot suggests or if they need to take what they have and implement the suggestions
that the chatbot provides. For example, a chatbot could use reverse prompts to ask learners to describe their strategies for solving a particular problem or to reflect on what they have learned from a
particular activity. This type of reflection can help learners become more aware of their
learning processes and develop more effective strategies for learning [74,75]. Thus, the
reverse interaction from chatbot to students provides an opportunity for developing self-
awareness because learners become more self-directed or self-regulated and independent
in their learning while working with the chatbot, which can lead to improved academic
performance and overall success. Furthermore, by incorporating self-assessment prompts
into educational chatbots, learners can receive immediate feedback and support as they
engage in the self-assessment process, which can help to develop their metacognitive skills
further and promote deeper learning.
Figure 3. An example of a chatbot in a learning management system that supports SRL by delivering personalized feedback.
Figure 4. An example of a chatbot that supports SRL by delivering learning analytics.
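As a rough sketch of the kind of data-driven mechanism a display like Figure 4 implies, a chatbot could aggregate a learner's logged interactions into simple learning-analytics summaries. The log fields, metrics, and message wording here are illustrative assumptions, not the system shown in the figure.

```python
# A sketch of aggregating logged student-chatbot interactions into simple learning
# analytics the chatbot can report back to the learner. The log structure and the
# chosen metrics are illustrative assumptions for this sketch only.

from collections import Counter
from statistics import mean

interaction_log = [
    {"goal": "thesis statement", "prompt_type": "cognitive", "self_rating": 2},
    {"goal": "thesis statement", "prompt_type": "metacognitive", "self_rating": 4},
    {"goal": "argumentation", "prompt_type": "cognitive", "self_rating": 3},
]


def summarize(log):
    """Summarize how prompts were distributed across goals and the average self-rating."""
    prompts_per_goal = Counter(entry["goal"] for entry in log)
    average_rating = mean(entry["self_rating"] for entry in log)
    goal_summary = ", ".join(f"{goal} ({count} prompts)" for goal, count in prompts_per_goal.items())
    return (f"This week you worked on: {goal_summary}. "
            f"Your average self-rated understanding was {average_rating:.1f}/5.")


if __name__ == "__main__":
    print(summarize(interaction_log))
```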
4. Limitations

Lo's [78] comprehensive rapid review indicates three primary limitations inherent in generative AI tools: 1. biased information, 2. constrained access to current knowledge, and 3. propensity for disseminating false information [78]. Baidoo-Anu and Ansah [79] underscore that the efficacy of generative AI tools is intricately linked to the training data that were fed into the tool, wherein the composition of training data can inadvertently contain biases that subsequently manifest in the AI-generated content, potentially compromising the neutrality, objectivity, and reliability of information imparted to student users. Moreover, the precision and accuracy of the information generated by generative AI tools further emerge as a key concern. Scholarly investigations have discovered several instances where content produced by ChatGPT has demonstrated inaccuracy and spuriousness, particularly when tasked with generating citations for academic papers [79,80].

Amidst these acknowledged limitations, our position leans toward an emphasis on students' educational use of these tools, transcending the preoccupation with the tools' inherent characteristics of bias, inaccuracy, or falsity. Based on our proposal, we want to develop students' capacity for self-regulation and discernment when evaluating received information. Furthermore, educators bear an important role in guiding students on harnessing the potential of generative AI tools to enhance the learning process, instead of treating generative AI tools as sources that simply provide information akin to a textbook. This is why we integrate Zimmerman's SRL model, illustrating how the judicious incorporation of generative AI tools can foster students' self-regulation, synergizing with the guidance of educators and the efficacy of instructional technology design.

5. Concluding Remarks
This paper explores how educational chatbots, or so-called conversational agents, can support student self-regulatory processes and self-evaluation in the learning process. As shown in Figure 5 below, drawing on Zimmerman's SRL framework, we postulate that chatbot designers should consider pedagogical principles, such as goal setting and planning, self-assessment, and personalization, to ensure that the chatbot effectively supports student learning and improves academic performance. We suggest that such a chatbot could provide personalized feedback to students on their understanding of course material and promote self-assessment by prompting them to reflect on their learning process. We also emphasize the importance of establishing the pedagogical functions of chatbots to fit the actual purposes of education and supplement teacher instruction. The paper provides examples of successful implementations of educational chatbots that can inform the SRL process as well as self-assessment and reflection based on JOL principles. Overall, this paper highlights the potential benefits of educational chatbots for personalized and interactive learning experiences while emphasizing the importance of considering pedagogical principles in their design. Educational chatbots may supplement classroom instruction by providing personalized feedback and prompting reflection on student learning progress. However, chatbot designers must carefully consider how these tools fit into existing pedagogical practices to ensure their effectiveness in supporting student learning.
Figure 5. Putting it all together: educational principles, SRL, and directionality.
Through the application of our framework, future researchers are encouraged to delve into three important topics of inquiry that can empirically validate our conceptual model. The first dimension entails scrutiny of educational principles. For instance, how can AI chatbots be designed to support learners in setting and pursuing personalized learning goals, fostering a sense of ownership over the learning process? Addressing this question involves exploring how learners can form a sense of ownership over their interactions with the AI chatbots while working towards the learning objectives.

The second dimension involves a closer examination of the actual Self-Regulated Learning (SRL) process. This necessitates an empirical exploration of the ways AI chatbots can effectively facilitate learners' self-regulated reflections and the honing of self-regulation skills. For example, how effective is AI's feedback on a student's essay, and how do students develop subsequent SRL strategies to address the AI's feedback and evaluation? Additionally, inquiries might also revolve around educators' instructional methods in leveraging AI chatbots to not only nurture learners' skills in interacting with the technology but also foster their self-regulatory processes. Investigating the extent to which AI chatbots can provide learning analytics as feedback that harmonizes with individual learners' self-regulation strategies is also of significance. Moreover, ethical considerations must be taken into account when integrating AI chatbots into educational settings, ensuring the preservation of learners' autonomy and self-regulation.

The third dimension is related to user interface research. A research endeavor could revolve around identifying which conversational interface proves the most intuitive for learners as they engage with an AI chatbot. Additionally, an inquiry might probe the extent to which the AI chatbot should engage in dialogue within educational contexts. Furthermore, delineating the circumstances under which AI chatbots should abstain from delivering outcome-based outputs to learners constitutes a worthwhile avenue of investigation. Numerous additional inquiries can be derived from our conceptual model, yet the central message that we want to deliver remains clear: Our objective is to engage educators, instructional designers, and students in the learning process while navigating this AI world. It is important to educate students on the potential of AI chatbots to enhance their self-regulation skills while also emphasizing the importance of avoiding actions that contravene the principles of academic integrity.
References
1. Pérez, J.Q.; Daradoumis, T.; Puig, J.M.M. Rediscovering the use of chatbots in education: A systematic literature review. Comput.
Appl. Eng. Educ. 2020, 28, 1549–1565. [CrossRef]
2. Smutny, P.; Schreiberova, P. Chatbots for learning: A review of educational chatbots for the Facebook Messenger. Comput. Educ.
2020, 151, 103862. [CrossRef]
3. Kuhail, M.A.; Alturki, N.; Alramlawi, S.; Alhejori, K. Interacting with educational chatbots: A systematic review. Educ. Inf.
Technol. 2023, 28, 973–1018. [CrossRef]
4. Okonkwo, C.W.; Ade-Ibijola, A. Chatbots applications in education: A systematic review. Comput. Educ. Artif. Intell. 2021,
2, 100033. [CrossRef]
5. Koriat, A. Monitoring one’s own knowledge during study: A cue-utilization approach to judgments of learning. J. Exp. Psychol.
Gen. 1997, 126, 349–370. [CrossRef]
6. Son, L.K.; Metcalfe, J. Judgments of learning: Evidence for a two-stage process. Mem. Cogn. 2005, 33, 1116–1129. [CrossRef]
[PubMed]
7. Panadero, E. A Review of Self-regulated Learning: Six Models and Four Directions for Research. Front. Psychol. 2017, 8, 422.
[CrossRef] [PubMed]
8. Zimmerman, B.J. Attaining Self-Regulation. In Handbook of Self-Regulation; Elsevier: Amsterdam, The Netherlands, 2000; pp. 13–39.
[CrossRef]
9. Baars, M.; Wijnia, L.; de Bruin, A.; Paas, F. The Relation Between Students’ Effort and Monitoring Judgments During Learning:
A Meta-analysis. Educ. Psychol. Rev. 2020, 32, 979–1002. [CrossRef]
10. Leonesio, R.J.; Nelson, T.O. Do different metamemory judgments tap the same underlying aspects of memory? J. Exp. Psychol.
Learn. Mem. Cogn. 1990, 16, 464–470. [CrossRef]
11. Double, K.S.; Birney, D.P.; Walker, S.A. A meta-analysis and systematic review of reactivity to judgements of learning. Memory
2018, 26, 741–750. [CrossRef]
12. Janes, J.L.; Rivers, M.L.; Dunlosky, J. The influence of making judgments of learning on memory performance: Positive, negative,
or both? Psychon. Bull. Rev. 2018, 25, 2356–2364. [CrossRef]
13. Hamzah, M.I.; Hamzah, H.; Zulkifli, H. Systematic Literature Review on the Elements of Metacognition-Based Higher Order
Thinking Skills (HOTS) Teaching and Learning Modules. Sustainability 2022, 14, 813. [CrossRef]
14. Veenman, M.V.J.; Van Hout-Wolters, B.H.A.M.; Afflerbach, P. Metacognition and learning: Conceptual and methodological
considerations. Metacognition Learn. 2006, 1, 3–14. [CrossRef]
15. Nelson, T.; Narens, L. Why investigate metacognition. In Metacognition: Knowing about Knowing; MIT Press: Cambridge, MA,
USA, 1994. [CrossRef]
16. Tuysuzoglu, B.B.; Greene, J.A. An investigation of the role of contingent metacognitive behavior in self-regulated learning.
Metacognition Learn. 2015, 10, 77–98. [CrossRef]
17. Bandura, A. Social Cognitive Theory: An Agentic Perspective. Asian J. Soc. Psychol. 1999, 2, 21–41. [CrossRef]
18. Bem, D.J. Self-Perception Theory. In Advances in Experimental Social Psychology; Berkowitz, L., Ed.; Academic Press: Cambridge,
MA, USA, 1972; Volume 6, pp. 1–62. [CrossRef]
19. Abu Shawar, B.; Atwell, E. Different measurements metrics to evaluate a chatbot system. In Proceedings of the Workshop on
Bridging the Gap: Academic and Industrial Research in Dialog Technologies, Rochester, NY, USA, 26 April 2007; pp. 89–96.
[CrossRef]
20. Turing, A.M. Computing machinery and intelligence. Mind 1950, 59, 433–460. [CrossRef]
21. Weizenbaum, J. ELIZA—A computer program for the study of natural language communication between man and machine.
Commun. ACM 1966, 9, 36–45. [CrossRef]
22. Wallace, R.S. The anatomy of A.L.I.C.E. In Parsing the Turing Test: Philosophical and Methodological Issues in the Quest for the Thinking
Computer; Epstein, R., Roberts, G., Beber, G., Eds.; Springer: Dordrecht, The Netherlands, 2009; pp. 181–210. [CrossRef]
23. Hwang, G.-J.; Chang, C.-Y. A review of opportunities and challenges of chatbots in education. Interact. Learn. Environ. 2021, 1–14.
[CrossRef]
24. Yamada, M.; Goda, Y.; Matsukawa, H.; Hata, K.; Yasunami, S. A Computer-Supported Collaborative Learning Design for Quality
Interaction. IEEE MultiMedia 2016, 23, 48–59. [CrossRef]
25. Muniasamy, A.; Alasiry, A. Deep Learning: The Impact on Future eLearning. Int. J. Emerg. Technol. Learn. (iJET) 2020, 15, 188–199.
[CrossRef]
26. Bendig, E.; Erb, B.; Schulze-Thuesing, L.; Baumeister, H. The Next Generation: Chatbots in Clinical Psychology and Psychotherapy
to Foster Mental Health—A Scoping Review. Verhaltenstherapie 2022, 32, 64–76. [CrossRef]
27. Kennedy, C.M.; Powell, J.; Payne, T.H.; Ainsworth, J.; Boyd, A.; Buchan, I. Active Assistance Technology for Health-Related
Behavior Change: An Interdisciplinary Review. J. Med. Internet Res. 2012, 14, e80. [CrossRef]
28. Poncette, A.-S.; Rojas, P.-D.; Hofferbert, J.; Sosa, A.V.; Balzer, F.; Braune, K. Hackathons as Stepping Stones in Health Care
Innovation: Case Study with Systematic Recommendations. J. Med. Internet Res. 2020, 22, e17004. [CrossRef] [PubMed]
29. Ferrell, O.C.; Ferrell, L. Technology Challenges and Opportunities Facing Marketing Education. Mark. Educ. Rev. 2020, 30, 3–14.
[CrossRef]
30. Behera, R.K.; Bala, P.K.; Ray, A. Cognitive Chatbot for Personalised Contextual Customer Service: Behind the Scene and beyond
the Hype. Inf. Syst. Front. 2021, 1–21. [CrossRef]
31. Crolic, C.; Thomaz, F.; Hadi, R.; Stephen, A.T. Blame the Bot: Anthropomorphism and Anger in Customer–Chatbot Interactions.
J. Mark. 2022, 86, 132–148. [CrossRef]
32. Clarizia, F.; Colace, F.; Lombardi, M.; Pascale, F.; Santaniello, D. Chatbot: An education support system for student. In CSS 2018:
Cyberspace Safety and Security; Castiglione, A., Pop, F., Ficco, M., Palmieri, F., Eds.; Lecture Notes in Computer Science Book Series;
Springer International Publishing: Cham, Switzerland, 2018; Volume 11161, pp. 291–302. [CrossRef]
33. Firat, M. What ChatGPT means for universities: Perceptions of scholars and students. J. Appl. Learn. Teach. 2023, 6, 57–63.
[CrossRef]
34. Kim, H.-S.; Kim, N.Y. Effects of AI chatbots on EFL students’ communication skills. Commun. Ski. 2021, 21, 712–734.
35. Hill, J.; Ford, W.R.; Farreras, I.G. Real conversations with artificial intelligence: A comparison between human–human online
conversations and human–chatbot conversations. Comput. Hum. Behav. 2015, 49, 245–250. [CrossRef]
36. Wu, E.H.-K.; Lin, C.-H.; Ou, Y.-Y.; Liu, C.-Z.; Wang, W.-K.; Chao, C.-Y. Advantages and Constraints of a Hybrid Model K-12
E-Learning Assistant Chatbot. IEEE Access 2020, 8, 77788–77801. [CrossRef]
37. Brandtzaeg, P.B.; Følstad, A. Why people use chatbots. In INSCI 2017: Internet Science; Kompatsiaris, I., Cave, J., Satsiou, A.,
Carle, G., Passani, A., Kontopoulos, E., Diplaris, S., McMillan, D., Eds.; Lecture Notes in Computer Science Book Series; Springer
International Publishing: Cham, Switzerland, 2017; Volume 10673, pp. 377–392. [CrossRef]
38. Deng, X.; Yu, Z. A Meta-Analysis and Systematic Review of the Effect of Chatbot Technology Use in Sustainable Education.
Sustainability 2023, 15, 2940. [CrossRef]
39. de Quincey, E.; Briggs, C.; Kyriacou, T.; Waller, R. Student Centred Design of a Learning Analytics System. In Proceedings of the
9th International Conference on Learning Analytics & Knowledge, Tempe, AZ, USA, 4 March 2019; pp. 353–362. [CrossRef]
40. Hattie, J. The black box of tertiary assessment: An impending revolution. In Tertiary Assessment & Higher Education Student
Outcomes: Policy, Practice & Research; Ako Aotearoa: Wellington, New Zealand, 2009; pp. 259–275.
41. Wisniewski, B.; Zierer, K.; Hattie, J. The Power of Feedback Revisited: A Meta-Analysis of Educational Feedback Research. Front.
Psychol. 2020, 10, 3087. [CrossRef]
42. Winne, P.H. Cognition and metacognition within self-regulated learning. In Handbook of Self-Regulation of Learning and Performance,
2nd ed.; Routledge: London, UK, 2017.
43. Serban, I.V.; Sankar, C.; Germain, M.; Zhang, S.; Lin, Z.; Subramanian, S.; Kim, T.; Pieper, M.; Chandar, S.; Ke, N.R.; et al. A deep
reinforcement learning chatbot. arXiv 2017, arXiv:1709.02349.
44. Shneiderman, B.; Plaisant, C. Designing the User Interface: Strategies for Effective Human-Computer Interaction, 4th ed.; Pearson:
Boston, MA, USA; Addison Wesley: Hoboken, NJ, USA, 2004.
45. Abbasi, S.; Kazi, H. Measuring effectiveness of learning chatbot systems on student’s learning outcome and memory retention.
Asian J. Appl. Sci. Eng. 2014, 3, 57–66. [CrossRef]
46. Winkler, R.; Soellner, M. Unleashing the Potential of Chatbots in Education: A State-Of-The-Art Analysis. Acad. Manag. Proc.
2018, 2018, 15903. [CrossRef]
47. Dwivedi, Y.K.; Kshetri, N.; Hughes, L.; Slade, E.L.; Jeyaraj, A.; Kar, A.K.; Baabdullah, A.M.; Koohang, A.; Raghavan, V.; Ahuja,
M.; et al. Opinion Paper: “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and
implications of generative conversational AI for research, practice and policy. Int. J. Inf. Manag. 2023, 71, 102642. [CrossRef]
48. Dai, W.; Lin, J.; Jin, F.; Li, T.; Tsai, Y.S.; Gasevic, D.; Chen, G. Can large language models provide feedback to students? A case
study on ChatGPT. 2023, preprint. [CrossRef]
49. Lin, M.P.-C.; Chang, D. Enhancing post-secondary writers’ writing skills with a Chatbot: A mixed-method classroom study.
J. Educ. Technol. Soc. 2020, 23, 78–92.
50. Zhu, C.; Sun, M.; Luo, J.; Li, T.; Wang, M. How to harness the potential of ChatGPT in education? Knowl. Manag. E-Learn. 2023,
15, 133–152. [CrossRef]
51. Prøitz, T.S. Learning outcomes: What are they? Who defines them? When and where are they defined? Educ. Assess. Eval.
Account. 2010, 22, 119–137. [CrossRef]
52. Burke, J. (Ed.) Outcomes, Learning and the Curriculum: Implications for Nvqs, Gnvqs and Other Qualifications; Routledge: London, UK,
1995. [CrossRef]
53. Locke, E.A. New Developments in Goal Setting and Task Performance, 1st ed.; Routledge: London, UK, 2013. [CrossRef]
54. Leake, D.B.; Ram, A. Learning, goals, and learning goals: A perspective on goal-driven learning. Artif. Intell. Rev. 1995, 9, 387–422.
[CrossRef]
55. Greene, J.A.; Azevedo, R. A Theoretical Review of Winne and Hadwin’s Model of Self-Regulated Learning: New Perspectives
and Directions. Rev. Educ. Res. 2007, 77, 334–372. [CrossRef]
56. Pintrich, P.R. A Conceptual Framework for Assessing Motivation and Self-Regulated Learning in College Students. Educ. Psychol.
Rev. 2004, 16, 385–407. [CrossRef]
57. Schunk, D.H.; Greene, J.A. (Eds.) Handbook of Self-Regulation of Learning and Performance, 2nd ed.; In Educational Psychology
Handbook Series; Routledge: New York, NY, USA; Taylor & Francis Group: Milton Park, UK, 2018.
58. Chen, C.-H.; Su, C.-Y. Using the BookRoll e-book system to promote self-regulated learning, self-efficacy and academic achieve-
ment for university students. J. Educ. Technol. Soc. 2019, 22, 33–46.
59. Michailidis, N.; Kapravelos, E.; Tsiatsos, T. Interaction analysis for supporting students’ self-regulation during blog-based CSCL
activities. J. Educ. Technol. Soc. 2018, 21, 37–47.
60. Paans, C.; Molenaar, I.; Segers, E.; Verhoeven, L. Temporal variation in children’s self-regulated hypermedia learning. Comput.
Hum. Behav. 2019, 96, 246–258. [CrossRef]
61. Morisano, D.; Hirsh, J.B.; Peterson, J.B.; Pihl, R.O.; Shore, B.M. Setting, elaborating, and reflecting on personal goals improves
academic performance. J. Appl. Psychol. 2010, 95, 255–264. [CrossRef]
62. Krathwohl, D.R. A Revision of Bloom’s Taxonomy: An Overview. Theory Pract. 2002, 41, 212–218. [CrossRef]
63. Bouffard, T.; Boisvert, J.; Vezeau, C.; Larouche, C. The impact of goal orientation on self-regulation and performance among
college students. Br. J. Educ. Psychol. 1995, 65, 317–329. [CrossRef]
64. Javaherbakhsh, M.R. The Impact of Self-Assessment on Iranian EFL Learners’ Writing Skill. Engl. Lang. Teach. 2010, 3, 213–218.
[CrossRef]
65. Zepeda, C.D.; Richey, J.E.; Ronevich, P.; Nokes-Malach, T.J. Direct instruction of metacognition benefits adolescent science
learning, transfer, and motivation: An in vivo study. J. Educ. Psychol. 2015, 107, 954–970. [CrossRef]
66. Ndoye, A. Peer/self assessment and student learning. Int. J. Teach. Learn. High. Educ. 2017, 29, 255–269.
67. Schunk, D.H. Goal and Self-Evaluative Influences During Children’s Cognitive Skill Learning. Am. Educ. Res. J. 1996, 33, 359–382.
[CrossRef]
68. King, A. Enhancing Peer Interaction and Learning in the Classroom Through Reciprocal Questioning. Am. Educ. Res. J. 1990, 27,
664–687. [CrossRef]
69. Mason, L.H. Explicit Self-Regulated Strategy Development Versus Reciprocal Questioning: Effects on Expository Reading
Comprehension Among Struggling Readers. J. Educ. Psychol. 2004, 96, 283–296. [CrossRef]
70. Newman, R.S. Adaptive help seeking: A strategy of self-regulated learning. In Self-Regulation of Learning and Performance: Issues
and Educational Applications; Lawrence Erlbaum Associates, Inc.: Hillsdale, NJ, USA, 1994; pp. 283–301.
71. Rosenshine, B.; Meister, C. Reciprocal Teaching: A Review of the Research. Rev. Educ. Res. 1994, 64, 479–530. [CrossRef]
72. Baleghizadeh, S.; Masoun, A. The Effect of Self-Assessment on EFL Learners’ Self-Efficacy. TESL Can. J. 2014, 31, 42. [CrossRef]
73. Moghadam, S.H. What Types of Feedback Enhance the Effectiveness of Self-Explanation in a Simulation-Based Learning
Environment? Available online: https://summit.sfu.ca/item/34750 (accessed on 14 July 2023).
74. Vanichvasin, P. Effects of Visual Communication on Memory Enhancement of Thai Undergraduate Students, Kasetsart University.
High. Educ. Stud. 2020, 11, 34–41. [CrossRef]
75. Schumacher, C.; Ifenthaler, D. Features students really expect from learning analytics. Comput. Hum. Behav. 2018, 78, 397–407.
[CrossRef]
76. Marzouk, Z.; Rakovic, M.; Liaqat, A.; Vytasek, J.; Samadi, D.; Stewart-Alonso, J.; Ram, I.; Woloshen, S.; Winne, P.H.; Nesbit, J.C.
What if learning analytics were based on learning science? Australas. J. Educ. Technol. 2016, 32, 1–18. [CrossRef]
77. Akhtar, S.; Warburton, S.; Xu, W. The use of an online learning and teaching system for monitoring computer aided design
student participation and predicting student success. Int. J. Technol. Des. Educ. 2015, 27, 251–270. [CrossRef]
78. Lo, C.K. What Is the Impact of ChatGPT on Education? A Rapid Review of the Literature. Educ. Sci. 2023, 13, 410. [CrossRef]
79. Baidoo-Anu, D.; Ansah, L.O. Education in the era of generative Artificial Intelligence (AI): Understanding the potential benefits
of ChatGPT in promoting teaching and learning. SSRN Electron. J. 2023, 1–22. [CrossRef]
80. Mogali, S.R. Initial impressions of ChatGPT for anatomy education. Anat. Sci. Educ. 2023, 1–4. [CrossRef] [PubMed]