  • My research designs adaptive systems for online content, like intelligent lessons that emulate your favorite teacher.
  • Tania Lombrozo, Tom Griffiths
Due to the scale of online environments, large numbers of learners interact with the exact same resources, such as online math homework problems and videos. It is therefore essential that these resources are of the highest quality. Ideally, online educational resources would constantly improve based on data and input from each learner, producing a better outcome for the next. This symposium explores issues around the use of crowdsourcing to harness learners’ interactions with resources like online problems and videos in order to improve those resources for the next learner. We hope to explore the benefits and limitations of thinking about learners through the lens of crowdsourcing: to imagine learnersourcing. We will discuss four ways in which researchers have leveraged crowdsourcing to help students learn in a variety of educational contexts, and in doing so we will also discuss ways in which educational theory can guide the future of learnersourcing.
Text components of digital lessons and problems are often static: they are written once and too often never improved over time. This is true both for large text components like webpages and documents and for the small components that form the building blocks of courses: explanations, hints, examples, discussion questions and answers, emails, study tips, and motivational messages. This represents a missed opportunity, since it should be technologically straightforward to enhance learning by improving text as instructors get new ideas and as data is collected about what helps learning. We describe how instructors can use recent work (Williams, Kim, Rafferty, Maldonado, Gajos, Lasecki, & Heffernan, 2016a) to turn text components into adaptive resources that semi-automatically improve over time, by combining crowdsourcing methods from human-computer interaction (HCI) with algorithms from statistical machine learning that use data for optimization.
Given the prevalence of mental illness, it may be impossible for professionals to provide treatment to all affected individuals, let alone those at risk. This gap could be addressed through the use of laypeople to provide peer counseling interventions; however, training laypeople on a large scale poses its own challenges. We propose that non-professionals may be able to learn counseling skills through a scalable massive open online course (MOOC), and conducted a pilot trial of the MOOC content with 60 participants. The mutual peer counseling intervention showed preliminary efficacy in teaching non-professionals some, though not all, of the intended peer counseling skills, increasing the use of active listening behaviors.
Seeking explanations is central to science, education, and everyday thinking, and prompting learners to explain is often beneficial. Nonetheless, in 2 category learning experiments across artifact and social domains, we demonstrate that the very properties of explanation that support learning can impair learning by fostering overgeneralizations. We find that explaining encourages learners to seek broad patterns, hindering learning when patterns involve exceptions. By revealing how effects of explanation depend on the structure of what is being learned, these experiments simultaneously demonstrate the hazards of explaining and provide evidence for why explaining is so often beneficial. For better or for worse, explaining recruits the remarkable human capacity to seek underlying patterns that go beyond individual observations.
Due to substantial scientific and practical progress, learning technologies can effectively adapt to the characteristics and needs of students. This article considers how learning technologies can adapt over time by crowdsourcing contributions from teachers and students – explanations, feedback, and other pedagogical interactions. Considering the context of ASSISTments, an online learning platform, we explain how interactive mathematics exercises can provide the workflow necessary for eliciting feedback contributions and evaluating those contributions, by simply tapping into the everyday system usage of teachers and students. We discuss a series of randomized controlled experiments that are currently running within ASSISTments, with the goal of establishing proof of concept that students and teachers can serve as valuable resources for the perpetual improvement of adaptive learning technologies. We also consider how teachers and students can be motivated to provide such contributions, and discuss the plans surrounding PeerASSIST, an infrastructure that will help ASSISTments to harness the power of the crowd. Algorithms from machine learning (i.e., multi-armed bandits) will ideally provide a mechanism for managerial control, allowing for the automatic evaluation of contributions and the personalized provision of the highest quality content. In many ways, the next 25 years of adaptive learning technologies will be driven by the crowd, and this article serves as the road map that ASSISTments has chosen to follow.
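The multi-armed bandit mechanism mentioned above can be sketched with Thompson sampling, one standard Bayesian bandit algorithm. This is a minimal illustration under assumptions of my own, not the actual PeerASSIST implementation: the "arms" are hypothetical crowdsourced feedback variants, and the binary reward (whether a student succeeded after seeing the variant) is simulated rather than measured.

```python
import random

def thompson_choose(successes, failures):
    """Pick the arm (content variant) with the highest Beta posterior draw."""
    samples = [random.betavariate(s + 1, f + 1)
               for s, f in zip(successes, failures)]
    return max(range(len(samples)), key=lambda i: samples[i])

def run(true_rates, n_students, seed=0):
    """Simulate n_students arriving one at a time; rewards are Bernoulli."""
    random.seed(seed)
    k = len(true_rates)
    succ, fail = [0] * k, [0] * k
    for _ in range(n_students):
        arm = thompson_choose(succ, fail)
        if random.random() < true_rates[arm]:
            succ[arm] += 1
        else:
            fail[arm] += 1
    return succ, fail

# Three hypothetical feedback variants with made-up helpfulness rates.
succ, fail = run([0.3, 0.5, 0.7], 2000)
```

Over time the algorithm routes most students to the variant with the highest observed success rate while still occasionally exploring the others, which is the "automatic evaluation of contributions" role the abstract assigns to bandits.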
How do explaining and prior knowledge contribute to learning? Four experiments explored the relationship between explanation and prior knowledge in category learning. The experiments independently manipulated whether participants were prompted to explain the category membership of study observations and whether category labels were informative in allowing participants to relate prior knowledge to patterns underlying category membership. The experiments revealed a superadditive interaction between explanation and informative labels, with explainers who received informative labels most likely to discover (Experiments 1 and 2) and generalize (Experiments 3 and 4) a pattern consistent with prior knowledge. However, explainers were no more likely than controls to discover multiple patterns (Experiments 1 and 2), indicating that effects of explanation are relatively targeted. We suggest that explanation recruits prior knowledge to assess whether candidate patterns are likely to have broad scope (i.e., to generalize within and beyond study observations). This interpretation is supported by the finding that effects of explanation on prior knowledge were attenuated when learners believed prior knowledge was irrelevant to generalizing category membership (Experiment 4). This research provides evidence that explanation can serve as a mechanism for deploying prior knowledge to assess the scope of observed patterns.
Students' performance in Massive Open Online Courses (MOOCs) is enhanced by high-quality discussion forums or recently emerging educational Community Question Answering (CQA) systems. Nevertheless, only a small number of students answer questions asked by their peers. This results in instructor overload and many unanswered questions. To increase students' participation, we present an approach for recommending new questions to students who are likely to provide answers. Existing approaches to such question routing, proposed for non-educational CQA systems, tend to rely on a few experts, which is not applicable in the educational domain, where it is important to involve all kinds of students. In tackling this novel educational question routing problem, our method (1) goes beyond previous question-answering data, as it incorporates additional non-QA data from the course (to improve prediction accuracy and to involve more of the student community), and (2) applies constraints on users' workload (to prevent user overloading). We use an ensemble classifier for predicting students' willingness to answer a question, as well as students' expertise for answering it. We conducted an online evaluation of the proposed method using an A/B experiment in our CQA system deployed in an edX MOOC. The proposed method outperformed a baseline method (non-educational question routing enhanced with workload restriction) by improving recommendation accuracy, keeping more community members active, and increasing the average number of their contributions.
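The routing-with-workload-constraints idea can be sketched as a greedy assignment: each question goes to the highest-scoring student who still has capacity. The student names and scores below are hypothetical; in the actual system such scores would come from the ensemble classifier described in the abstract.

```python
from collections import Counter

def route_questions(questions, scores, max_load):
    """Greedy question routing with a per-student workload cap.
    scores[q][s] = predicted willingness-and-expertise score (hypothetical)."""
    load = Counter()
    assignment = {}
    for q in questions:
        ranked = sorted(scores[q], key=scores[q].get, reverse=True)
        for s in ranked:
            if load[s] < max_load:   # skip students already at capacity
                assignment[q] = s
                load[s] += 1
                break
    return assignment

scores = {
    "q1": {"ann": 0.9, "bob": 0.4},
    "q2": {"ann": 0.8, "bob": 0.7},
    "q3": {"ann": 0.6, "bob": 0.5},
}
assignment = route_questions(["q1", "q2", "q3"], scores, max_load=2)
```

With the cap in place, "q3" is routed to the lower-scoring student because the top scorer is already at capacity, which is exactly the overload-prevention behavior the method's constraint is meant to produce.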
Researchers invested in K-12 education struggle not just to enhance pedagogy, curriculum, and student engagement, but also to harness the power of technology in ways that will optimize learning. Online learning platforms offer a powerful environment for educational research at scale. The present work details the creation of an automated system designed to provide researchers with insights regarding data logged from randomized controlled experiments conducted within the ASSISTments TestBed. The Assessment of Learning Infrastructure (ALI) builds upon existing technologies to foster a symbiotic relationship beneficial to students, researchers, the platform and its content, and the learning analytics community. ALI is a sophisticated automated reporting system that provides an overview of sample distributions and basic analyses for researchers to consider when assessing their data. ALI's benefits can also be felt at scale through analyses that crosscut multiple studies to drive iterative platform improvements while promoting personalized learning.
Improving volunteer performance leads to better caregiving in dementia care settings. However, caregiving knowledge systems have been focused on eliciting and sharing expert, primary caregiver knowledge, rather than volunteer-provided knowledge. Through the use of an experience prototype, we explored the content of volunteer caregiver knowledge and identified ways in which such non-expert knowledge can be useful to dementia care. By using lay language, sharing information specific to the client and collaboratively finding strategies for interaction, volunteers were able to boost the effectiveness of future volunteers. Therapists who reviewed the content affirmed the reliability of volunteer caregiver knowledge and placed value on its recency, variety and its ability to help bridge language and professional barriers. We discuss how future systems designed for eliciting and sharing volunteer caregiver knowledge can be used to promote better dementia care.
Online instructional videos are ubiquitous, but it is difficult for instructors to gauge learners' experience and their level of comprehension or confusion regarding the lecture video. Moreover, learners watching the videos may become disengaged or fail to reflect and construct their own understanding. This paper explores instructor and learner perceptions of in-video prompting where learners answer reflective questions while watching videos. We conducted two studies with crowd workers to understand the effect of prompting in general, and the effect of different prompting strategies on both learners and instructors. Results show that some learners found prompts to be useful checkpoints for reflection, while others found them distracting. Instructors reported the collected responses to be generally more specific than what they have usually collected. Also, different prompting strategies had different effects on the learning experience and the usefulness of responses as feedback.
We address the problem of how to personalize educational content to students in order to maximize their learning gains over time. We present a new computational approach to this problem called MAPLE (Multi-Armed Bandits based Personalization for Learning Environments) that combines difficulty ranking with multi-armed bandits. Given a set of target questions, MAPLE estimates the expected learning gains for each question and uses an exploration-exploitation strategy to choose the next question to pose to the student. It maintains a personalized ranking over the difficulties of the questions in the target set and updates it in real time according to students' progress. We show in simulations that MAPLE was able to improve students' learning gains compared to approaches that sequence questions in increasing level of difficulty or rely on content experts. When implemented in a live e-learning system in the wild, MAPLE showed promising initial results.
Randomized experiments can lead to improvements in educational technologies, but often require many students to experience conditions associated with inferior learning outcomes. Multi-armed bandit (MAB) algorithms can address this issue by modifying experiment designs to direct more students to more helpful conditions. Using simulations as well as modeling data from previous educational experiments, we explore the statistical impact of using MAB for experiment design, focusing on the tradeoff between acquiring statistically reliable information and the benefits to students. Results suggest that MAB experiments can lead to much higher average benefits to students than traditional experimental designs, but at least twice as many participants are needed to attain power of 0.8 and the false positive rate is doubled. Optimistic prior distributions in the MAB algorithm can mitigate the loss in power to some extent, without meaningful reductions in benefits or further increasing false positive rates.
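The benefit side of the tradeoff described above can be seen in a toy simulation comparing balanced assignment with a Thompson-sampling bandit. This is only an illustrative sketch with made-up condition rates; it shows the higher average student outcome under adaptive assignment but does not reproduce the paper's power or false-positive analyses.

```python
import random

def pull(rate):
    """One simulated student outcome (1 = learned, 0 = did not)."""
    return 1 if random.random() < rate else 0

def uniform_mean(rates, n):
    """Traditional design: alternate students evenly across conditions."""
    return sum(pull(rates[i % len(rates)]) for i in range(n)) / n

def bandit_mean(rates, n):
    """Adaptive design: Thompson sampling shifts students toward the
    condition that appears more helpful as evidence accumulates."""
    k = len(rates)
    s, f = [0] * k, [0] * k
    total = 0
    for _ in range(n):
        arm = max(range(k),
                  key=lambda i: random.betavariate(s[i] + 1, f[i] + 1))
        r = pull(rates[arm])
        s[arm] += r
        f[arm] += 1 - r
        total += r
    return total / n

random.seed(42)
rates = [0.3, 0.7]            # hypothetical success rates of two conditions
u = uniform_mean(rates, 4000)
b = bandit_mean(rates, 4000)
```

The bandit's average outcome approaches the better condition's rate, while the balanced design stays near the midpoint of the two; the cost, as the abstract notes, is reduced statistical power for detecting the difference.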
The Internet has enabled learning at scale, from Massive Open Online Courses (MOOCs) to Wikipedia. But online learners may become passive, instead of actively constructing knowledge and revising their beliefs in light of new facts. Instructors cannot directly diagnose thousands of learners' misconceptions and provide remedial tutoring. This paper investigates how instructors can prompt learners to reflect on facts that are anomalies with respect to their existing misconceptions, and how to choose these anomalies and prompts to guide learners to revise incorrect beliefs without any feedback. We conducted two randomized experiments with online crowd workers learning statistics. Results show that prompts to explain why these anomalies are true drive revision towards correct beliefs. But prompts to simply articulate thoughts about anomalies have no effect on learning. Furthermore, we find that explaining multiple anomalies is more effective than explaining only one, but the anomalies should rule out multiple misconceptions simultaneously.
Randomized experiments in online educational environments are ubiquitous as a scientific method for investigating learning and motivation, but they rarely improve educational resources and produce practical benefits for learners. We suggest that tools for experimentally comparing resources are designed primarily through the lens of experiments as a scientific methodology, and therefore miss a tremendous opportunity for online experiments to serve as engines for dynamic improvement and personalization. We present the MOOClet requirements specification to guide the implementation of software tools for experiments, to ensure that whenever alternative versions of a resource can be experimentally compared (by randomly assigning versions), the resource can also be dynamically improved (by changing which versions are presented) and personalized (by presenting different versions to different people). The MOOClet specification was used to implement DEXPER, a proof-of-concept web service backend that enables dynamic experimentation and personalization of resources embedded in frontend educational platforms. We describe three use cases of MOOClets for dynamic experimentation and personalization of motivational emails, explanations, and problems.
Background and objectives: Progress toward establishing treatments for mental disorders has been good, particularly for cognitive behavior therapy (CBT). However, there is considerable room for improvement. The goal of this study was to begin the process of investigating the potential for improving treatment outcome via improving our understanding of learning processes. Methods: Individuals diagnosed with major depressive disorder (N = 20) participated in three computer-delivered CBT lessons for depression. Indices of learning were taken after each lesson, during three phone calls over the week following the lesson, and one week later. These were: (a) whether the participant thought about the lesson, (b) whether the participant applied the lesson, and (c) whether the participant generalized the lesson. Based on a predetermined list of therapy points (i.e., distinct ideas and principles), all participant responses were coded for the number of therapy points they thought about, applied, or generalized following each lesson. Results: Less than half of the thoughts and applications were accurate. Generalization, but neither thoughts nor application, was associated with improved depression scores one week later. Limitations: The follow-up period was only one week and there was no comparison group, so we cannot speak to the long-term outcome of these measures or generalize to other mental disorders. Conclusions: These results point to the importance of improving transfer of learning in CBT and represent a promising first step toward the development of methods to study and optimize learning of CBT so as to improve patient outcomes.
While explanations may help people learn by providing information about why an answer is correct, many problems on online platforms lack high-quality explanations. This paper presents AXIS (Adaptive eXplanation Improvement System), a system for obtaining explanations. AXIS asks learners to generate, revise, and evaluate explanations as they solve a problem, and then uses machine learning to dynamically determine which explanation to present to a future learner, based on previous learners' collective input. Results from a case study deployment and a randomized experiment demonstrate that AXIS elicits and identifies explanations that learners find helpful. Providing explanations from AXIS also objectively enhanced learning, when compared to the default practice where learners solved problems and received answers without explanations. The rated quality and learning benefit of AXIS explanations did not differ from explanations generated by an experienced instructor.
Education is one of the eight Millennium Development Goals (MDG) of the United Nations. Considerable interest has been displayed in online education at scale, a new arising concept to realize this goal. Yet connecting online education to... more
Education is one of the eight Millennium Development Goals (MDGs) of the United Nations. Considerable interest has been shown in online education at scale, an emerging approach to realizing this goal. Yet connecting online education to real jobs remains a challenge. This CHI workshop bridges this gap by bringing together groups and insights from related work at HCOMP, CSCW, and Learning at Scale. The workshop aims to provide opportunities for groups not yet in the focus of online education, exemplified by students who have not had equal access to higher education compared to typical students in MOOCs.
Theories of how people learn relationships between continuous variables have tended to focus on two possibilities: one, that people are estimating explicit functions; or two, that they are performing associative learning supported by similarity. We provide a rational analysis of function learning, drawing on work on regression in machine learning and statistics. Using the equivalence of Bayesian linear regression and Gaussian processes, which provide a probabilistic basis for similarity-based function learning, we show that learning explicit rules and using similarity can be seen as two views of one solution to this problem. We use this insight to define a rational model of human function learning that combines the strengths of both approaches and accounts for a wide variety of experimental results.
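The equivalence the abstract relies on can be checked numerically in a minimal one-weight case: Bayesian linear regression (the "explicit rule" view) and a Gaussian process with a linear kernel (the "similarity" view) give the same predictive mean. The data points and variances below are made up purely for illustration.

```python
tau2, sigma2 = 1.0, 0.1          # prior variance on the weight, noise variance
xs, ys = [1.0, 2.0], [1.1, 1.9]  # two hypothetical training points
xstar = 3.0                      # query point

# View 1: Bayesian linear regression for y = w * x (no intercept).
# Posterior mean of w: (sum x_i y_i) / (sum x_i^2 + sigma2 / tau2)
w_post = (sum(x * y for x, y in zip(xs, ys))
          / (sum(x * x for x in xs) + sigma2 / tau2))
blr_pred = w_post * xstar

# View 2: Gaussian process with the linear kernel k(x, x') = tau2 * x * x'.
k = lambda a, b: tau2 * a * b
K = [[k(xs[0], xs[0]) + sigma2, k(xs[0], xs[1])],
     [k(xs[1], xs[0]), k(xs[1], xs[1]) + sigma2]]
det = K[0][0] * K[1][1] - K[0][1] * K[1][0]
Kinv = [[K[1][1] / det, -K[0][1] / det],   # explicit 2x2 inverse
        [-K[1][0] / det, K[0][0] / det]]
alpha = [sum(Kinv[i][j] * ys[j] for j in range(2)) for i in range(2)]
kstar = [k(xstar, xs[0]), k(xstar, xs[1])]
gp_pred = sum(kstar[i] * alpha[i] for i in range(2))
```

The two predictions agree to floating-point precision, which is the algebraic fact behind treating rules and similarity as two views of one Bayesian solution.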
A long history of laboratory and field experiments has demonstrated that dividing study time into many sessions is often superior to massing study time into few sessions, a phenomenon widely known as the “spacing effect.” Massive open online courses (MOOCs) collect abundant data about student activity over time, but little early MOOC research has used learning theory to interrogate these data. Taking inspiration from this psychology literature, here we use data collected from MOOCs to identify observational evidence for the benefits of spaced practice in educational settings. We investigated tracking logs from 20 HarvardX courses to examine whether there was any relationship between how students allocated their participation and what performance they achieved. While controlling for the effect of total time on-site, we show that the number of sessions students initiate is an important predictor of certification rate, across students in all courses. Furthermore, we demonstrate that when students spend similar amounts of time in multiple courses, they perform better in courses where that time is distributed among more sessions, suggesting a benefit of spaced practice independent of student characteristics. We conclude by proposing interventions to guide students’ study schedules and leverage this effect.
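The core analysis, predicting certification from session count while controlling for total time on-site, has the shape of a logistic regression. The sketch below fits such a model by plain SGD on synthetic data whose generating coefficients are invented for illustration; it is not the paper's dataset or exact model.

```python
import math
import random

random.seed(0)
# Hypothetical student records in which certification odds rise with
# session count even at a fixed total time on-site.
data = []
for _ in range(500):
    total_time = random.uniform(5.0, 15.0)    # hours on-site
    sessions = random.randint(1, 20)
    logit = -3.0 + 0.05 * total_time + 0.25 * sessions  # made-up truth
    p_cert = 1.0 / (1.0 + math.exp(-logit))
    data.append((total_time, sessions, 1 if random.random() < p_cert else 0))

def fit_logistic(data, lr=0.001, epochs=200):
    """Plain SGD for logistic regression: certified ~ time + sessions."""
    b0 = b_time = b_sess = 0.0
    for _ in range(epochs):
        for t, s, y in data:
            z = max(-30.0, min(30.0, b0 + b_time * t + b_sess * s))
            g = y - 1.0 / (1.0 + math.exp(-z))   # gradient of log-likelihood
            b0 += lr * g
            b_time += lr * g * t
            b_sess += lr * g * s
    return b0, b_time, b_sess

b0, b_time, b_sess = fit_logistic(data)
```

A positive fitted coefficient on sessions, with time on-site held in the model, mirrors the paper's claim that session count predicts certification beyond total study time.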
What happens when well-known universities offer online courses, assessments, and certificates of completion for free? Early descriptions of Massive Open Online Courses (MOOCs) have emphasized large enrollments, low certification rates, and highly educated registrants. We use data from two years and 68 open online courses offered by Harvard University (via HarvardX) and MIT (via MITx) to broaden the scope of answers to this question. We describe trends over this two-year span, depict participant intent using comprehensive survey instruments, and chart course participation pathways using network analysis. We find that overall participation in our MOOCs remains substantial and that the average growth has been steady. We explore how diverse audiences — including explorers, teachers-as-learners, and residential students — provide opportunities to advance the principles on which HarvardX and MITx were founded: access, research, and residential education.
High attrition rates in massive open online courses (MOOCs) have motivated growing interest in the automatic detection of student “stopout”. Stopout classifiers can be used to orchestrate an intervention before students quit, and to survey students dynamically about why they ceased participation.
In this paper we expand on existing stopout detection research by (1) exploring important elements of classifier design, such as generalizability to new courses; (2) developing a novel framework inspired by control theory for how to use a classifier’s outputs to make intelligent decisions; and (3) presenting results from a “dynamic survey intervention” conducted on two HarvardX MOOCs, containing over 40,000 students, in early 2015. Our results suggest that surveying students based on an automatic stopout classifier achieves higher response rates compared to traditional post-course surveys, and may boost students’ propensity to “come back” into the course.
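The control-theory-flavored decision layer, turning a classifier's stopout probabilities into a survey trigger, can be sketched as a debounced threshold rule. The threshold and patience values here are hypothetical; the paper's actual decision framework is not specified in this abstract.

```python
def should_survey(prob_history, threshold=0.7, patience=2):
    """Trigger the intervention only after `patience` consecutive
    observations at or above `threshold`, so a single noisy spike in
    predicted stopout probability does not fire the survey."""
    run = 0
    for p in prob_history:
        run = run + 1 if p >= threshold else 0
        if run >= patience:
            return True
    return False
```

Debouncing like this trades a little responsiveness for far fewer spurious interventions, which matters when each false alarm costs student goodwill.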
We explain and provide examples of a formalism that supports the methodology of discovering how to adapt and personalize technology by combining randomized experiments with variables associated with user models. We characterize a formal relationship between the use of technology to conduct A/B experiments and use of technology for adaptive personalization. The MOOClet Formalism [11] captures the equivalence between experimentation and personalization in its conceptualization of modular components of a technology. This motivates a unified software design pattern that enables technology components that can be compared in an experiment to also be adapted based on contextual data, or personalized based on user characteristics.
With the aid of a concrete use case, we illustrate the potential of the MOOClet formalism for a methodology that uses randomized experiments of alternative micro-designs to discover how to adapt technology based on user characteristics, and then dynamically implements these personalized improvements in real time.
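The unified design pattern described above can be sketched as a component with interchangeable policies: uniform random assignment is an experiment, choosing the best-rated version is adaptation, and a rule over learner traits is personalization. This is a minimal sketch of the pattern, not the MOOClet or DEXPER implementation; the version texts and the `novice` trait are hypothetical.

```python
import random

class MOOClet:
    """One modular component, many versions, one swappable policy
    mapping (learner, collected data) -> version."""

    def __init__(self, versions):
        self.versions = versions
        self.rewards = {v: [] for v in versions}
        self.policy = self.experiment          # start in A/B-experiment mode

    def experiment(self, learner):             # uniform random assignment
        return random.choice(self.versions)

    def adapt(self, learner):                  # best average reward so far
        return max(self.versions,
                   key=lambda v: (sum(self.rewards[v]) / len(self.rewards[v]))
                   if self.rewards[v] else 0.0)

    def personalize(self, learner):            # hypothetical trait-based rule
        return self.versions[0] if learner.get("novice") else self.versions[-1]

    def show(self, learner):
        return self.policy(learner)

    def record(self, version, reward):
        self.rewards[version].append(reward)

email = MOOClet(["short reminder", "detailed study plan"])
email.record("detailed study plan", 1.0)   # one positive observation
email.policy = email.adapt                 # switch from experiment to adaptation
chosen = email.show({"novice": True})
```

Because all three behaviors share one interface, a component that can be compared in an experiment can be switched to improvement or personalization without restructuring the platform, which is the equivalence the formalism captures.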
Mental disorders are prevalent and lead to significant impairment. Progress toward establishing treatments has been good. However, effect sizes are small to moderate, gains may not persist, and many patients derive no benefit. Our goal is to highlight the potential for empirically-supported psychosocial treatments to be improved by incorporating insights from cognitive psychology and educational research. Our central question is: If it were possible to improve memory for content of sessions of psychosocial treatments, would outcome substantially improve? This question arises from five lines of evidence: (a) mental illness is often characterized by memory impairment, (b) memory impairment is modifiable, (c) psychosocial treatments often involve the activation of emotion, (d) emotion can bias memory and (e) memory for psychosocial treatment sessions is poor. Insights from scientific knowledge on learning and memory are leveraged to derive strategies for a transdiagnostic and transtreatment cognitive support intervention. Applications within and between sessions and to interventions delivered via the internet are considered. We discuss additional novel pathways to improving memory, such as improving sleep and the differential provision of services to children and older adults (memory and learning processes change across the lifespan). Finally, we highlight the relevance to doctor-patient relationships.
Errors in detecting randomness are often explained in terms of biases and misconceptions. We propose and provide evidence for an account that characterizes the contribution of the inherent statistical difficulty of the task. Our account is based on a Bayesian statistical analysis, focusing on the fact that a random process is a special case of systematic processes, meaning that the hypothesis of randomness is nested within the hypothesis of systematicity. This analysis shows that randomly generated outcomes are still reasonably likely to have come from a systematic process, and are thus only weakly diagnostic of a random process. We tested this account through three experiments. Experiments 1 and 2 showed that the low accuracy in judging whether a sequence of coin flips is random (or biased towards heads or tails) is due to the weak evidence provided by random sequences. While randomness judgments were less accurate than judgments involving non-nested hypotheses in the same task domain, this difference disappeared once the strength of the available evidence was equated. Experiment 3 extended this finding to assessing whether a sequence was random or exhibited sequential dependence, showing that the distribution of statistical evidence has an effect that complements known misconceptions.
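The nested-hypothesis argument can be made concrete with a small calculation. The sketch below is illustrative, not the paper's actual model: it compares the likelihood of a perfectly balanced coin-flip sequence under a fair-coin hypothesis against its marginal likelihood under a biased coin with a uniform prior on the bias.

```python
from math import comb

def likelihood_random(heads, tails):
    """P(sequence | fair coin): every flip has probability 1/2."""
    return 0.5 ** (heads + tails)

def likelihood_systematic(heads, tails):
    """Marginal P(sequence | biased coin) with a uniform prior on the bias p:
    integral of p^h * (1-p)^t dp = Beta(h+1, t+1) = 1 / ((n+1) * C(n, h))."""
    n = heads + tails
    return 1.0 / ((n + 1) * comb(n, heads))

# A perfectly balanced 8-flip sequence, e.g. HTHHTTHT
bf = likelihood_random(4, 4) / likelihood_systematic(4, 4)
print(f"Bayes factor, random over systematic: {bf:.2f}")  # about 2.46
```

A Bayes factor near 2.5 is conventionally regarded as weak evidence, which illustrates the paper's point: even ideally random-looking data only weakly favor the nested randomness hypothesis.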
Seeking explanations is central to science, education, and everyday thinking, and prompting learners to explain is often beneficial. Nonetheless, in two category learning experiments across artifact and social domains, we demonstrate that the very properties of explanation that support learning can impair learning by fostering overgeneralizations. We find that explaining encourages learners to seek broad patterns, hindering learning when patterns involve exceptions.
Children’s and adults’ attempts to explain the world around them play a key role in promoting learning and understanding, but little is known about how and why explaining has this effect. An experiment investigated explaining in the social context of learning to predict and explain individuals’ behavior, examining whether explaining observations exerts a selective constraint to seek patterns or regularities underlying the observations, regardless of whether such patterns are harmful or helpful for learning. When there were reliable patterns, such as personality types that predict charitable behavior, explaining promoted learning. But when these patterns were misleading, explaining produced an impairment whereby participants exhibited less accurate learning and prediction of individuals’ behavior. This novel approach of contrasting explanation’s positive and negative effects suggests that explanation’s benefits are not merely due to increased motivation, attention, or time, and that explaining may undermine learning in domains where regularities are absent, spurious, or unreliable.
How does explaining novel observations influence the extent to which learners revise beliefs in the face of anomalies – observations inconsistent with their beliefs? On one hand, explaining could recruit prior beliefs and reduce belief revision if learners “explain away” or discount anomalies. On the other hand, explaining could promote belief revision by encouraging learners to modify beliefs to better accommodate anomalies. We explore these possibilities in a statistical judgment task in which participants learned to rank students’ performance across courses by observing sample rankings. We manipulated whether participants were prompted to explain the rankings or to share their thoughts about them during study, and also the proportion of observations that were anomalous with respect to intuitive statistical misconceptions. Explaining promoted greater belief revision when anomalies were common, but had no effect when they were rare. In contrast, increasing the number of anomalies had no effect on belief revision without prompts to explain.
A great deal of research has demonstrated that learning is influenced by the learner’s prior background knowledge (e.g. Murphy, 2002; Keil, 1990), but little is known about the processes by which prior knowledge is deployed. We explore the role of explanation in deploying prior knowledge by examining the joint effects of eliciting explanations and providing prior knowledge in a task where each should aid learning. Three hypotheses are considered: that explanation and prior knowledge have independent and additive effects on learning, that their joint effects on learning are subadditive, and that their effects are superadditive. A category learning experiment finds evidence for a superadditive effect: explaining drives the discovery of regularities, while prior knowledge constrains which regularities learners discover. This is consistent with an account of explanation’s effects on learning proposed in Williams & Lombrozo (in press).
In evaluation frames, both focal and alternative hypotheses are explicit in queries about an event’s probability. We investigated whether evaluation frames improved the accuracy and coherence of conditional probability judgments when compared to economy frames in which only the focal hypothesis was explicit. Participants were presented with contingency information regarding the relation between viruses and an illness with an unknown etiology, and they judged the conditional probability that the illness would occur or not occur given that a virus was either present or absent. Compared to economy frames, evaluation frames improved the accuracy and coherence of probability judgments.
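As an illustration of the normative standard against which accuracy and coherence can be scored, the sketch below derives conditional probabilities from a hypothetical 2x2 contingency table. The counts are invented for illustration and are not taken from the experiment.

```python
# Hypothetical contingency counts (illustrative only):
#                  illness   no illness
# virus present       30         10
# virus absent        20         40
table = {
    ("present", "ill"): 30, ("present", "well"): 10,
    ("absent",  "ill"): 20, ("absent",  "well"): 40,
}

def p_cond(outcome, virus):
    """Normative conditional probability P(outcome | virus status)."""
    row_total = table[(virus, "ill")] + table[(virus, "well")]
    return table[(virus, outcome)] / row_total

p_ill  = p_cond("ill",  "present")   # 30/40 = 0.75
p_well = p_cond("well", "present")   # 10/40 = 0.25

# Coherence requires complementary conditional judgments to sum to 1.
assert p_ill + p_well == 1.0
```

Accuracy is then the match between a participant's judgments and these normative values, while coherence is the internal consistency of complementary judgments, which can fail even when each judgment is individually close to the normative value.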
Digital educational resources could enable the use of randomized experiments to answer pedagogical questions that instructors care about, taking academic research out of the laboratory and into the classroom. We take an instructor-centered approach to designing tools for experimentation that lower the barriers for instructors to conduct experiments. We explore this approach through DynamicProblem, a proof-of-concept system for experimentation on components of digital problems, which provides interfaces for authoring experiments on explanations, hints, feedback messages, and learning tips. To rapidly turn data from experiments into practical improvements, the system uses an interpretable machine learning algorithm to analyze students' ratings of which conditions are helpful, and presents conditions to future students in proportion to the evidence that they are rated higher. We evaluated the system by collaboratively deploying experiments in the courses of three mathematics instructors. They reported benefits in reflecting on their pedagogy, and in having a new method for improving online problems for future students.
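The abstract does not name the algorithm, but presenting conditions "in proportion to the evidence" that they are rated higher can be sketched as Thompson sampling over Beta posteriors on binary helpfulness ratings. The condition names and rating counts below are hypothetical, and this is one plausible reading rather than the system's actual implementation.

```python
import random

random.seed(0)  # reproducible simulation

# Hypothetical per-condition rating counts: (rated helpful, rated unhelpful)
ratings = {"explanation_A": (12, 3), "explanation_B": (5, 6)}

def choose_condition():
    """Thompson sampling: draw once from each condition's Beta posterior
    and present the condition with the highest draw.  Across many students,
    each condition is shown in proportion to the probability that it is
    the better-rated one."""
    draws = {name: random.betavariate(helpful + 1, unhelpful + 1)
             for name, (helpful, unhelpful) in ratings.items()}
    return max(draws, key=draws.get)

# Simulate allocation across 1000 future students.
counts = {name: 0 for name in ratings}
for _ in range(1000):
    counts[choose_condition()] += 1
print(counts)  # the better-rated variant is shown far more often
```

A scheme like this balances improvement for future students (the better-rated condition dominates) against continued data collection (the weaker condition is still shown occasionally, so its estimate keeps updating).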
Digital educational resources could enable the use of ran-domized experiments to answer pedagogical questions that instructors care about, taking academic research out of the laboratory and into the classroom. We take an instructor-centered approach to designing tools for experimentation that lower the barriers for instructors to conduct experiments. We explore this approach through DynamicProblem, a proof-of-concept system for experimentation on components of digital problems, which provides interfaces for authoring of experiments on explanations, hints, feedback messages, and learning tips. To rapidly turn data from experiments into practical improvements , the system uses an interpretable machine learning algorithm to analyze students' ratings of which conditions are helpful, and present conditions to future students in proportion to the evidence they are higher rated. We evaluated the system by collaboratively deploying experiments in the courses of three mathematics instructors. They reported benefits in reflecting on their pedagogy, and having a new method for improving online problems for future students.