Learning and Individual Differences 22 (2012) 806–813

Rubrics and self-assessment scripts effects on self-regulation, learning and self-efficacy in secondary education

Ernesto Panadero a,⁎, Jesús Alonso Tapia a, Juan Antonio Huertas b

a Departamento de Psicología Clínica y de la Salud, Universidad Autónoma de Madrid, Spain
b Departamento de Psicología Básica, Universidad Autónoma de Madrid, Spain

⁎ Corresponding author at: Departamento de Psicología Biológica y de la Salud, Módulo 1, Despacho 24, Facultad de Psicología, Universidad Autónoma de Madrid, 28049 Madrid, Spain. Tel.: +34 91 497 45 98. E-mail address: ernesto.panadero@gmail.com (E. Panadero).

Article history: Received 8 September 2011; received in revised form 16 April 2012; accepted 28 April 2012.

Keywords: self-regulation; self-assessment; rubric; self-assessment script; self-efficacy; formative assessment.

Abstract

This study compares the effects of two different self-assessment tools – rubrics and scripts – on self-regulation, learning and self-efficacy, in interaction with two other independent variables (type of instructions and feedback). A total of 120 secondary school students analyzed landscapes – a usual task when studying Geography – in one of twelve experimental conditions (process/performance instructions × control/rubric/script self-assessment tool × mastery/performance feedback) through three trials. Self-regulation was measured through a questionnaire and thinking-aloud protocols. The results of repeated-measures ANOVAs showed that scripts enhanced self-regulation more than rubrics and the control condition, and that the use of either self-assessment tool increased learning over the control group. However, most interactions were not significant. Theoretical and practical implications for using rubrics and scripts in self-regulation training are discussed.

© 2012 Elsevier Inc. All rights reserved. doi:10.1016/j.lindif.2012.04.007

1. Problem and theoretical framework

The main objective of this study is to compare the effects of two different self-assessment tools – rubrics and scripts – on self-regulation, learning and self-efficacy. The rationale for this goal rests on the importance of self-regulation for learning, and on the role of self-assessment in improving self-regulation.

1.1. Self-regulation

It is frequently said that students do not learn because they lack adequate motivation. Often, however, they lack adequate motivation because, when trying to learn, they do not experience progress, since they are not able to "self-regulate" their learning process (Boekaerts, 2011; Zimmerman, 2011). As described by Efklides (2011), self-regulation (SR) is a self-initiated and cyclic process through which students represent a task to themselves, plan how to carry it out, monitor and assess whether its execution is adequate, cope with the difficulties and emotions that usually arise, assess their performance, and make attributions concerning the causes of the outcomes. Self-regulation is, then, a crucial competence for being a successful learner.

Given the importance of self-regulation, researchers have tried to facilitate its acquisition through interventions focused on the sources of individual differences.
For example, instructions have been used to arouse interest and the perception of self-efficacy, and to focus students' attention on different motivational goals; scripts and rubrics have been used to help students self-assess their learning processes and performance; finally, the frequency, opportunity and content of feedback have been used to shape students' self-regulation processes (Alonso-Tapia & Panadero, 2010; Dignath & Büttner, 2008; Dignath, Büttner, & Langfeldt, 2008; Zimmerman & Schunk, 2011).

1.2. Self-assessment

Of all the processes implied in self-regulation, a crucial one is self-assessment (Puustinen & Pulkkinen, 2001). Self-assessment involves comparing one's own execution process and performance with some criteria in order to become aware of what has been done, to change it if necessary, and to learn from it so as to perform the task better in the future (Lan, 1998). Moreover, according to Efklides (2011), the kind and degree of self-assessment may depend, first, on the goals the student is pursuing, which in turn can be affected by the teacher's instructions, and second, on its perceived effectiveness, a perception that can be improved by the kind and frequency of the teacher's feedback. Therefore, it is important to know whether interventions aimed at promoting self-assessment can help to improve self-regulation, and how and under what conditions – for example, instructions and feedback – this can be done with the best results. So, what kind of evidence do we have on the effect of educational interventions on self-assessment?

There is indirect evidence from two meta-analyses about the effectiveness of interventions to promote self-regulation. Dignath and colleagues (Dignath & Büttner, 2008; Dignath et al., 2008) have shown the importance of intervening in the early academic years to help students develop self-regulation, a key ability for being successful at later levels of education. They have also shown that it is important to intervene before students develop the performance and avoidance goals that have a negative effect on their learning (Hattie, Biggs, & Purdie, 1996). Dignath et al. (2008) also found that interventions based on monitoring and evaluation, and thus on self-assessment, had the lowest effects on self-regulation, whereas interventions that combined planning and monitoring, or planning and evaluation, were the ones with the greatest effects.

How can this difference be explained? Self-assessment implies judging one's own performance against criteria previously established in a more or less conscious way. These assessment criteria must be clear to the student from the beginning of the learning process so that students can have clear expectations about what their goals are and plan accordingly. The group of studies based only on monitoring or only on evaluation stresses self-evaluation, a procedure that is not an effective method for promoting self-assessment because it does not include the assessment criteria. On the contrary, in studies based on planning-and-monitoring and planning-and-evaluation interventions, the assessment criteria are clear, a fact that can explain the differences found between the two types of studies. In sum, an adequate self-assessment intervention should start when planning begins and should continue throughout the task.

There are two types of self-assessment tools that include the assessment criteria and are therefore adequate for self-assessment: rubrics and scripts.
Rubrics are self-assessment tools with three characteristics: a list of criteria for assessing the important goals of the task, a scale for grading the different levels of achievement, and a description of each qualitative level. Students can compare their work against the criteria or "standards" in the rubric, and then self-grade their work accordingly. Although rubrics are designed to analyze the final product of an activity, it is recommended that they be given to students before they start a task in order to help them establish appropriate goals (Alonso-Tapia & Panadero, 2010; Andrade & Valtcheva, 2009).

The most important question is whether rubrics facilitate students' self-regulation and learning, and how their effectiveness can be enhanced. Studies on the effects of rubrics on learning, performance and self-efficacy have obtained mixed results (Andrade, Wang, Du, & Akawi, 2009; Jonsson & Svingby, 2007; Schafer, Swanson, Bené, & Newberry, 2001). In their review of 75 studies about rubrics, Jonsson and Svingby (2007) found it difficult to draw any conclusions about improvement in students' learning because the results pointed in different directions. In conclusion, rubrics have proved to have some positive effects on self-assessment and learning when supported by structured interventions, but just handing them out is no guarantee of success (Jonsson & Svingby, 2007). Further investigation is therefore required on how to structure interventions with rubrics to assure their effectiveness.

Scripts, the second type of self-assessment tool, are sets of specific questions structured in steps that follow the expert model of approaching a task from beginning to end. They are designed to analyze the process being followed throughout a task, although they can also be used to analyze the final outcome. In the latter case, however, students focus on performance, and scripts can therefore lose part of their pedagogical utility (Thillmann, Künsting, Wirth, & Leutner, 2009).

The question is: are scripts effective in promoting self-regulation and learning? Research has found that, depending on the characteristics and conditions of their application, scripts have plenty of positive features. Their use enhances self-regulation by activating adequate learning strategies, promoting more accurate self-assessment and a deeper understanding of the content, and thus a higher level of learning (Alonso-Tapia & Panadero, 2010; Bannert, 2009; Kostons, van Gog, & Paas, 2009; Kramarski & Michalsky, 2009, 2010; Montague, 2007). However, these effects have not always been found, a fact that seems to depend on the quality of the script structure and the length of the intervention (Berthold, Nückles, & Renkl, 2007; Kitsantas, Reiser, & Doster, 2004; Kollar, Fischer, & Slotta, 2007). Thus, as in the case of rubrics, it is important to study the conditions for script effectiveness.

In sum, rubrics and, especially, scripts seem to have positive effects. The evidence about their effectiveness for improving self-regulation, learning and self-efficacy is quite solid for scripts but not for rubrics. Nevertheless, no study has compared the relative effect of these two tools taking into account the contextual conditions that can moderate such an effect.
Moreover, the use of self-assessment tools in a real classroom situation is embedded in a context of situational variables – for example, instructions and feedback – that can affect personal factors influencing self-regulation, such as motivation and self-efficacy (Alonso-Tapia & Fernandez, 2008; Black & Wiliam, 1998; Efklides, 2011; Pardo & Alonso-Tapia, 1992; Urdan & Turner, 2005; Zimmerman & Kitsantas, 2005). Since no study has compared the relative effect of rubrics and scripts – under the contextual conditions just mentioned – on self-regulation, achievement and self-efficacy, it was decided to study this effect with some hypotheses derived from the available evidence.

Considering the three independent variables of our study – type of self-assessment help (script/rubric/no tool), type of instruction (process/performance oriented), and type of feedback (process/performance centered) – our main hypotheses are that students' self-regulation, learning and perceived self-efficacy after the intervention would be greater if students (a) used a script or a rubric, (b) received process-oriented instructions, and (c) received process-oriented feedback. Moreover, it is also expected that the convergence of these three conditions, as well as practice (the three trials), will improve such outcomes.

However, several additional considerations suggest that the expected results could be moderated by different variables. First, the activation and depth of self-regulation is related to the student's goal orientation. It has been found that students with learning goals activate more learning strategies and are more proactive in their learning than students pursuing performance or avoidance goals (Alonso-Tapia, Huertas, & Ruiz, 2010; Zimmerman, 2011). Therefore, it may be that motivational orientations will moderate our results. However, we cannot anticipate the direction of this effect. Students high in learning orientation could take more advantage of the learning help insofar as this help is congruent with their orientation, as Alonso-Tapia and Fernandez (2008) have found. However, it could also happen that such an orientation is enough on its own to activate positive self-regulation strategies, and hence that self-assessment tools are of more benefit to students low in learning orientation.

Second, self-efficacy has been found to have a direct effect on self-regulation and to be influenced by learning outcomes (Schunk & Usher, 2011). Thus, if promoting self-assessment affects self-regulation and learning in a positive way, it may produce an improvement in self-efficacy, as some studies suggest (Alonso-Tapia & Panadero, 2010; Andrade et al., 2009). If this is the case, our results may be moderated by perceived self-efficacy prior to training.

Finally, the study was conducted in the context of social science instruction, evaluating a required competence. According to the Spanish curriculum, Geography learners need to learn how to analyze landscapes in order to identify the natural and human factors affecting the territory that a landscape represents. The outcome of landscape analysis depends on the degree to which expert criteria are applied while following a more-or-less fixed sequence of steps. Therefore, landscape analysis can be a difficult competence to acquire, and so the teacher's support is crucial. In this study we explore how different instructions, self-assessment tools and feedback influence the acquisition of this competence.
2. Method

2.1. Participants

One hundred and twenty third- and fourth-year secondary school students, 63 females and 57 males, from two public high schools in Madrid (Spain) participated in the study. The mean age was 15.9 years (SD = 11 months). They did not receive any compensation for their participation, and the schools were chosen based on convenience. Participants were assigned randomly to the twelve experimental conditions.

2.2. Materials

2.2.1. Instruments for assessing dependent and moderating variables

2.2.1.1. Questionnaire of Motives, Expectancies and Values, part A: goals and goal orientations (MEVA) (Alonso-Tapia, 2005). This questionnaire was used for assessing goal orientations as moderating variables. It includes 76 items answered on a five-point Likert scale, and allows the assessment of nine specific motives (mean α = .77) and three general goal orientations: learning (α = .92), performance (α = .81), and avoidance (α = .83).

2.2.1.2. Self-regulation measures. In order to reach a good estimation of self-regulation, following the advice of Boekaerts and Corno (2005), two different measures were used to assess this process.

2.2.1.2.1. Emotion and Motivation Self-regulation Questionnaire (EMSR-Q) (Alonso-Tapia, Panadero, & Ruiz, submitted for publication). This questionnaire includes 36 items answered on a five-point Likert scale. They are grouped in two scales: Learning self-regulation, with 19 items (α = .90), and Performance/avoidance self-regulation, with 17 items (α = .88) (Cronbach alphas computed using data gathered in this study). The first scale includes self-messages or actions oriented to learning goals, for example: "I will plan the activity before starting to execute it". The higher the value on this scale, the more positive the self-regulation for learning. The second scale includes self-messages or actions showing lack of self-regulation or an orientation to performance, for example: "I am getting nervous. I don't know how to do it". The higher the values on this scale, the more negative the learning self-regulation.

2.2.1.2.2. On-line self-regulation index. To calculate this measure, students were asked to express their thoughts and feelings aloud while analyzing the landscape. Thinking-aloud protocols are considered a good representation of students' self-regulatory actions and metacognitive processes during an activity (Ericsson & Simon, 1993; Greene, Robertson, & Croker Costa, 2011). They were recorded and later analyzed using the content of each complete proposition (i.e., stand-alone idea) as the unit of analysis. Proposition content was classified into one of three categories:

– Descriptive propositions: those whose content refers to what the participant was observing while analyzing the landscape;
– Self-regulatory propositions: those whose content referred to questions asked while receiving instructions, or included messages for controlling disturbing emotions, planning, help-seeking, or revision, and questions of clarification during feedback;
– Negative emotional self-regulation propositions: those expressing negative emotion (e.g., "I am so nervous I cannot perform this task"). However, this kind of self-regulation proposition represented only 1% of the total.

Two researchers classified all the propositions independently according to these categories. Inter-rater agreement was 94%.
Finally, to normalize scores, the number of self-regulatory propositions of each student was divided by the sum of self-regulatory plus descriptive propositions. The on-line SRI was calculated for each of the three landscapes in order to evaluate the occasion/practice effect.

2.2.1.2.3. On-line self-regulation index plus. This measure is similar to the previous one with the exception of a new type of proposition: the checked proposition. A checked proposition is similar to a descriptive proposition, but before expressing the idea the participant looked at the rubric or the script for information, a behavior that implies self-regulation. This measure is only applicable to the participants using the rubric or the script.

2.2.1.3. Learning index. Participants wrote their conclusions once they had finished the oral analysis of each of the three landscapes. The written texts were divided into propositions, which were then evaluated as correct or incorrect using a specific analysis model for each landscape provided by two expert Social Science teachers. From this model, a code of categories under which students' propositions could be classified had been developed in a previous study (Alonso-Tapia & Panadero, 2010). An example is included in Table 1. The percentage of agreement between coders for the three different landscapes was 85%, 87% and 81%.

Table 1. Coding examples of the quality of landscape analysis (Alonso-Tapia & Panadero, 2010).

Description categories, with examples of answers:
– Mountainous area: "This area is really uneven as it has mountains."
– Lake or reservoirs: "There is a lake… ummm… wait, it seems to be manmade so it is probably a reservoir."
– Dense vegetation: "It is a really dense forest. There are a lot of trees and it is really green."
– Two types of vegetation, evergreen or deciduous trees: "I think those trees are evergreen ones because it seems to be autumn but they are still green."
– Evergreen trees are pines: "I would say the trees are pines."
– Autumn season: "By the colours I think it is autumn."
– River valley: "Ummm, this valley was created by the river."
– Settlement: "I can see houses, so there are people living here."
– It is a rural landscape with dispersed houses: "This is a rural area and the houses are really far apart. There is also no downtown."
– Communications (roads, electricity…): "There are some signs of communication, they have a small road, and you can see the telephone poles."
– Economic activity (agriculture for self-consumption and cattle farming): "Generally, they will work in agriculture and cattle farming here."

Factors that cause the landscape to be the way it is:
– Fertile soil: "The soil is probably good for farming and cattle grazing."
– River erosion and sediment: "This valley was created in the past through river erosion."
– Rainy weather: "If this landscape is so green it is because of the weather. It rains a lot."
– Civilization (farming, roads, reservoir): "Here, people are not as present as they are in the city but you can still see the farms, roads… and even a reservoir."

Classification:
– Rural landscape: "This is a rural environment."

2.2.1.4. Self-efficacy questionnaire. The self-efficacy questionnaire designed for this study includes eight items specific to landscape analysis, for example: "Do you feel able to understand and interpret a landscape?" It is scored on a seven-point scale, and has a reliability index of α = .87, computed using data gathered in this study.
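To make the scoring of these measures concrete, the following minimal Python sketch shows how the on-line self-regulation index, its "plus" variant, and the coders' percentage of agreement could be computed from coded protocols. The category labels and function names are illustrative, not taken from the authors' materials, and treating checked propositions as self-regulatory in the ratio is an assumption suggested, but not explicitly stated, by the description above.

```python
from collections import Counter

def online_sri(props):
    """On-line self-regulation index (Section 2.2.1.2.2): self-regulatory
    propositions over self-regulatory plus descriptive propositions."""
    c = Counter(props)
    sr, desc = c["self_regulatory"], c["descriptive"]
    return sr / (sr + desc) if (sr + desc) else 0.0

def online_sri_plus(props):
    """'Plus' variant (Section 2.2.1.2.3): also counts 'checked' propositions,
    in which the participant consulted the rubric or script before speaking.
    Counting them as self-regulatory is an assumption; the paper says only
    that the behavior 'implies self-regulation'."""
    c = Counter(props)
    sr = c["self_regulatory"] + c["checked"]
    desc = c["descriptive"]
    return sr / (sr + desc) if (sr + desc) else 0.0

def percent_agreement(coder_a, coder_b):
    """Simple inter-rater percentage agreement over parallel classifications."""
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return 100 * matches / len(coder_a)

# One protocol (one landscape), coded as a list of proposition categories;
# the index is computed once per landscape to track the occasion effect.
protocol = ["descriptive"] * 16 + ["self_regulatory"] * 4
print(online_sri(protocol))  # 4 / (4 + 16) = 0.2
```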
2.2.2. Instruments used for the intervention

2.2.2.1. Instruction sheet. A sheet with the main instructions was handed out in case the participants wanted to review the instructions during the activity.

2.2.2.2. Landscapes. Three PowerPoint presentations were created (Fig. 1), each containing four pictures of the same landscape taken from different perspectives and providing complementary information. Each presentation showed a different type of landscape: (a) a rural area with an Oceanic climate, (b) a mining area with a Mediterranean climate, and (c) an urban area with a Continental climate. The difficulty increased throughout the task, the third landscape being the most difficult. Participants could navigate through the presentation in whatever way they preferred.

Fig. 1. Example of a set of landscapes used in the study.

2.2.2.3. Self-assessment tools: rubric and script. For the design of the self-assessment tools, two Social Science experts with vast experience in analyzing landscapes established the assessment criteria. With these criteria, the questions for the script were formulated, as well as the scoring categories for the rubric. A scholar not related to this study analyzed the rubric and the script to confirm that both tools contained the same criteria. The script and the rubric are shown in Appendices A and B.

2.2.2.4. Instructions: performance vs. process. The interviewer had a set of different instructions depending on the experimental condition. The sentences for creating the performance condition were: "I will show you a series of landscapes for you to observe, describe and, most importantly, to give an explanation of the factors that determine the current configuration of the landscape. You will receive feedback after each landscape about your performance". For creating the process condition, the last sentence was shortened to "You will receive feedback after each landscape", and the following sentences were added: "As you are going to do the task several times, you will have room for improvement. If you find difficulties, don't worry; relax, because you will have more opportunities to learn. The most important thing is that you don't focus exclusively on the results, but on learning how to do the analysis".

2.2.2.5. Feedback: performance vs. process. The interviewer had a set of two different feedback protocols to be given to the participants. This set included an expert analysis of the landscape the participant had just analyzed, in two versions: performance and process. For example, if participants in the performance-feedback condition did not mention the relief, they were told: "You did not mention relief"; if they were in the process-feedback condition, they were told: "One important feature is relief. In this landscape, it is abrupt. Considering the effect of the relief is important because it is a main factor of the landscape."

2.3. Design

An experimental design with a 2 × 3 × 2 structure was used. There were three between-group independent variables: (1) type of instructions, oriented to process or to performance; (2) presence or absence of a self-assessment tool: control vs. rubric vs. script; and (3) feedback, oriented to process or to performance. Ten students were assigned to each of the 12 conditions. There was also one within-group variable: the number of landscape tasks completed (three trials).
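The factorial structure just described is easy to make explicit. The sketch below, using illustrative names only, enumerates the twelve between-group cells and randomly assigns ten participants to each, mirroring the random assignment described in Section 2.1.

```python
import itertools
import random

# The three between-group factors of the 2 x 3 x 2 design
# (labels are illustrative, not taken from the authors' materials).
instructions = ["process", "performance"]
tools = ["control", "rubric", "script"]
feedback = ["process", "performance"]

conditions = list(itertools.product(instructions, tools, feedback))
assert len(conditions) == 12  # 2 x 3 x 2 cells

# Random assignment of the 120 participants, ten per cell.
participants = list(range(1, 121))
random.shuffle(participants)
groups = [participants[i:i + 10] for i in range(0, 120, 10)]
assignment = {pid: cond for cond, grp in zip(conditions, groups) for pid in grp}
```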
2.4. Procedure

Participants completed the goal orientation questionnaire (MEVA) in their normal classroom settings. Afterwards, the participants were taken individually to the experimental setting, a room where they sat in front of a computer, equipped with a web camera, on which the landscapes were presented. Before starting the task, each participant received the instructions, which were the same for all of the groups except for the sentences that created the "process oriented" or "performance oriented" conditions. Each participant was shown an example landscape, different from those to be analyzed, so that they could visualize what the tasks were about, ask questions, and estimate their level of competence. Then they completed the self-efficacy scale.

Participants in the rubric condition were given the rubric with information regarding its meaning: "Here you have a rubric that can be of help if you want to self-assess your work. When a teacher evaluates a landscape analysis, he/she examines which category your analysis fits into. In that way, he/she can score your work according to the examples found in each category, by comparing your analysis against them". Participants in the script conditions were given the script and the following information: "Here you have a script that can be of help if you want to self-assess your work. When a teacher evaluates a landscape analysis, he/she examines whether you have followed the steps outlined in this script. If you take these steps into account, you can evaluate the quality of your work."

The participant would then start the first analysis, saying aloud what he/she was thinking. The verbalized thoughts were recorded by the web camera and later coded to obtain the on-line self-regulation index. Once the participants reached their conclusions, they entered them as text into the computer, and then received feedback based on their assigned condition of process feedback or performance feedback. The participants who had the rubric or script were given feedback using those tools. For example: "As can be seen in the category of Natural Elements, you have not reported on the relief and vegetation". After the feedback, the participants moved on to the second landscape and the procedure was repeated, and then again for the third landscape. When the participants had finished the analyses, they completed the self-regulation questionnaire and, again, the self-efficacy scale. When given the self-regulation questionnaire, they were told to reflect on their actions during the task so that their answers reflected the self-regulatory self-messages and actions that took place while carrying it out. The experiment had an average length of 2 h and 45 min per participant.

2.5. Data analyses

First, one-way ANOVAs were computed to test whether or not students differed in goal orientations, the moderating variables. As no significant differences were found in these variables, the data on each dependent variable – the self-regulation questionnaire scores, the on-line self-regulation index and the learning index – were analyzed using repeated-measures ANOVAs instead of ANCOVAs. Between-subject factors corresponded to each of the twelve conditions of the study, and the within-subject factor to the scores for the three landscape analyses each student completed. Regarding self-efficacy, a repeated-measures ANOVA was performed using the pre- and post-intervention measures as the within-subject factor.
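A minimal sketch of this analysis strategy follows, using the pingouin library's mixed_anova. Note that mixed_anova takes a single between-subject factor, so the twelve cells are collapsed into one condition factor here; the full 2 × 3 × 2 factorial decomposition reported below would require a different routine. The data file and column names are assumptions for illustration, not the authors' data.

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format data: one row per participant per landscape,
# with 'condition' encoding the twelve between-group cells and 'sri'
# the on-line self-regulation index (all names are assumptions).
df = pd.read_csv("landscape_scores.csv")

# Mixed (split-plot) repeated-measures ANOVA: 'landscape' (occasion) is
# the within-subject factor, 'condition' the between-subject factor.
aov = pg.mixed_anova(data=df, dv="sri", within="landscape",
                     subject="participant", between="condition",
                     effsize="np2")  # partial eta-squared, as reported below
print(aov.round(3))
```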
3. Results

3.1. Intervention effects on self-regulation

3.1.1. Emotion and Motivation Self-regulation Questionnaire (EMSR-Q)

Contrary to our expectations, no significant effects were found on the Learning self-regulation scale, either for the type of instructions (p = .705), the self-assessment tools (p = .199), the kind of feedback (p = .578), or the interactions. On the Performance/avoidance self-regulation scale two marginal effects were found. First, the type of instructions, F(1, 118) = 3.288, p = .073, performance M = 21.18, process M = 18.83, η² = .030: as expected, the participants who received instructions oriented to performance experienced more problems in controlling negative thoughts and emotions and in focusing on learning. Second, the type of feedback, F(1, 118) = 3.56, p = .062, performance M = 21.23, process M = 18.78, η² = .032: also as expected, the participants who received performance feedback reported more performance/avoidance self-regulated actions. The effect of the use of the self-assessment tool was not significant (p = .140), and neither were the interactions (p = .11).

3.1.2. On-line self-regulation index

As Fig. 2 shows, the occasion effect was significant, F(1, 118) = 3.45, p < .05, first landscape M = .195, second landscape M = .160, third landscape M = .140, η² = .031: taking the results of the three groups together, the more landscapes the participants analyzed, the fewer self-regulating statements were verbalized to complete the task. The effect of the self-assessment tool was also significant, F(1, 118) = 5.99, p < .001, control M = .106, rubric M = .157, script M = .231, η² = .100, with the script group showing a higher level of on-line self-regulation than the control group (p < .001) and the rubric group (p < .05); the rubric group, in turn, showed a higher level of on-line self-regulation than the control group, but not significantly so (p = .160). Therefore, the use of scripts had the strongest effect on on-line self-regulation.

Fig. 2. Effect of the interaction between type of self-assessment tool and occasion on the on-line self-regulation index.

3.1.3. On-line self-regulation index plus

The interaction between self-assessment tool and occasion was significant, F(1, 78) = 4.52, p < .001, rubric M = .278, script M = .433. Participants using the script performed more self-regulated actions involving their instrument than participants using the rubric did.

3.2. Intervention effects on learning

The only significant effect on learning was that of the interaction between self-assessment tool and occasion, F(2, 108) = 7.85, p < .001, η² = .127. As can be seen in Fig. 3, the script and rubric groups outperformed the control group from the first landscape.

Fig. 3. Effect of the interaction between type of self-assessment tool and occasion on learning.

3.3. Intervention effects on self-efficacy

Of the intervention effects on self-efficacy, only two interactions were significant. First, the occasion–feedback interaction, F(1, 106) = 7.12, p < .01, η² = .063, performance feedback M = 40.09, process feedback M = 41.42. As can be seen in Fig. 4, feedback increases self-efficacy more if centered on process than on performance. Second, the triple interaction of self-assessment tool, feedback and occasion was also significant, F(2, 106) = 3.527, p < .05, η² = .062. As shown in Fig. 5, this means that the already-observed effect of the occasion–feedback interaction is stronger when using rubrics than in the other cases.

Fig. 4. Effect of the interaction between type of feedback and occasion on self-efficacy.

Fig. 5. Effect of the interaction among type of self-assessment tool, feedback and occasion on self-efficacy. P: performance feedback; M: mastery feedback.

4. Discussion

The main objective of this study was to test the effects of different self-assessment tools – rubrics and scripts – in the context of different types of instructions and feedback, on self-regulation, learning and self-efficacy. What has been the contribution of our study in relation to this objective?

4.1. Effects of assessment tools

Considering first the effects of self-assessment tools on self-regulation, our study supports our two hypotheses: that the use of self-assessment tools would promote a higher level of self-regulation than if no self-assessment tools were provided, and that scripts would enhance self-regulation more than rubrics. However, some clarifications need to be made. In the case of self-regulation, the evidence comes only from the on-line self-regulation results based on thinking-aloud protocols, not from the self-regulation questionnaire, where no significant effects were found. This unexpected finding may be due to the fact that each measure assesses different aspects of self-regulation (Winne, 2010). On-line measures like thinking-aloud protocols assess cognitive learning self-regulation directly, while the questionnaire assesses "self-regulation awareness" once the task is finished. It is also important to point out that increasing practice seems to diminish on-line learning self-regulation scores, an effect probably due to the automation of learning self-regulation processes. However, the fact that there were significant differences between the rubric and script groups in the on-line self-regulation index plus – an index sensitive to a greater range of self-regulation actions – showed that scripts increased self-regulation positively, and that they do so more than rubrics. In sum, the results regarding self-regulation support our main hypotheses, also lending support to the recommendation of Boekaerts and Corno (2005) to use situational measures along with questionnaires. Regarding prior research, our results are in line with those that explored the effect of scripts (and also prompts and cues) on self-regulation (e.g., Bannert, 2009; Berthold et al., 2007; Kramarski & Dudai, 2009; Kramarski & Michalsky, 2010).

As for the effects of self-assessment tools on learning, according to our results the hypothesis that the use of self-assessment tools would increase learning over the control group can be maintained. The use of rubrics and scripts has a positive effect on enhancing students' mastery of the task because they include the key aspects relevant to the task.
Similar results have been found in previous research on script effects (Alonso-Tapia & Panadero, 2010; Bannert, 2009; Kostons et al., 2009; Kramarski & Michalsky, 2009, 2010; Montague, 2007), while the results on rubrics have been mixed (Andrade et al., 2009; Jonsson & Svingby, 2007; Schafer et al., 2001). Although the script had a stronger effect on self-regulation, both tools – script and rubric – had the same positive effect on learning, with both groups performing above the control group. This might be explained by the fact that participants using the rubric had a clearer understanding of what the final product should look like, based on the rubric's specific performance examples, and therefore needed fewer self-regulatory actions to reach a similar level of learning than the participants using scripts.

Finally, the results did not support our hypothesis on the effect of the self-assessment tool, considered alone, on self-efficacy. This result is in line with previous research (Alonso-Tapia & Panadero, 2010). It seems that providing students with scripts or rubrics is not enough to create the mastery experiences necessary for increasing the sense of efficacy, and other factors should be contemplated, e.g., the length of the intervention (van Dinther, Dochy, & Segers, 2010).

4.2. Effects of task instructions

The second research question in this study concerned the role of task instructions in self-regulation, learning and self-efficacy. When teachers introduce learning tasks, their instructions can underline learning or performance goals that can influence the classroom learning climate, the students' own goals and the way they approach learning (Alonso-Tapia & Fernandez, 2008; Alonso-Tapia & Pardo, 2006). However, no significant effect was found in this study. There is no basis to explain this finding other than the short intervention length.

4.3. Effects of type of feedback

The third research question had to do with the effect of type of feedback on self-regulation, learning and self-efficacy. There are many studies demonstrating the importance of feedback for improving learning (Black & Wiliam, 1998; Crooks, 1988). What evidence do our results provide on such effects? Considered alone, feedback increases self-efficacy more if it centers on process than on performance. This is an expected effect, as process feedback, by its very nature, helps students understand the reasons for their successes and failures. Probably, such feedback contributes to creating the mastery experiences already mentioned in the review of van Dinther et al. (2010). No other effect of feedback, considered alone, was significant.

4.4. Moderating effects of instructions and feedback on the effects of self-assessment tools

Instructions and feedback were introduced in the study because they could moderate the effect of rubrics and scripts. In the context of Efklides' (2011) self-regulation review and model, instructions and feedback can affect motivation and self-efficacy which, in turn, can affect the kind and degree of self-regulation. However, no interactions affecting self-regulation were found, except the one already described between type of self-assessment tool and practice. This lack of interaction effects among the three independent variables may be due to the fact that self-regulation is a process that depends more on present contextual variables.

Regarding self-efficacy, when the use of rubrics was followed by feedback centered on process, self-efficacy increased significantly more than in any other condition. This unexpected result, in line with results found by van Dinther et al. (2010), may have been due to the combination of the clarity of the performance criteria provided by rubrics and the information provided by the process feedback, which suggests that the combination of these kinds of information helps students to cope efficiently with this type of learning task.

4.5. Limitations and educational implications

Our results have several theoretical and educational implications. However, before describing them it is necessary to consider several limitations. First, although a considerable number of students participated for such a complex and long experiment, the sample was of medium size and quite homogeneous. This is especially relevant for the analysis that involved the twelve conditions, as each group contained only ten participants, which might limit the confidence in these specific statistical results. Second, and most importantly, the study was not carried out in real classrooms, where different personal and social factors can mediate effort and self-regulation. Third, only one specific kind of task was used: landscape analysis. Different tasks can demand procedural knowledge of greater complexity, a fact that can moderate the effect of using self-assessment tools. These limitations imply that future studies are needed to establish whether our results can be generalized to natural classroom settings, as well as to other subjects, students and learning tasks.

In spite of the limitations just described, our results have important theoretical implications. They underscore the importance of promoting self-assessment to enhance self-regulation and learning, as well as the need to take into account the importance of precise, process-oriented feedback in order to favor the increase in self-efficacy that, in turn, can affect self-regulation positively. These factors can influence initial interest and motivation (Efklides, 2011) and, through them, the effect of scripts and rubrics on self-regulation and learning. Such a potential moderating role is a question for future studies to address.

Our study also has several educational implications. First, as the regular use of scripts and rubrics seems to favor self-regulation and learning, secondary teachers could help their students by providing them with these tools. Second, the effect on self-regulation is smaller in the case of rubrics than in the case of scripts, which suggests that, in the long run, it is better to focus students' attention on process – as scripts do – than on performance. Third, when students have information on both performance and process criteria – as happened in the rubrics × feedback-on-process condition – it is more likely that they experience being able to cope efficiently with learning tasks. In conclusion, in light of our results, the implementation of scripts and rubrics is recommended for creating the positive conditions needed to promote self-assessment (Goodrich, 1997).

Supplementary data to this article can be found online at http://dx.doi.org/10.1016/j.lindif.2012.04.007.

Acknowledgments

Support for this research was provided by grants from the Spanish Education Ministry to Ernesto Panadero (ref. SEJ2005-00994) and to Jesús Alonso-Tapia (EDU2009-11765). Thanks to Heidi Andrade, Inmaculada López Fernández, Fermín Asensio, IES Joaquín Araujo (Fuenlabrada, Madrid) and IES María Zambrano (Leganés, Madrid).

References
Alonso-Tapia, J. (2005). Motives, expectancies and value-interests related to learning: The MEVA questionnaire. Psicothema, 17(3), 404–411.
Alonso-Tapia, J., & Fernandez, B. (2008). Development and initial validation of the Classroom Motivational Climate Questionnaire (CMCQ). Psicothema, 20(4), 883–889.
Alonso-Tapia, J., Huertas, J. A., & Ruiz, M. A. (2010). On the nature of motivational orientations: Implications of assessed goals and gender differences for motivational goal theory. The Spanish Journal of Psychology, 13(1), 232–243.
Alonso-Tapia, J., & Panadero, E. (2010). Effect of self-assessment scripts on self-regulation and learning. Infancia y Aprendizaje, 33(3), 385–397.
Alonso-Tapia, J., Panadero, E., & Ruiz, M. (submitted for publication). Development and validity of the Emotion and Motivation Self-regulation Questionnaire (EMSR-Q). Madrid: Universidad Autónoma.
Alonso-Tapia, J., & Pardo, A. (2006). Assessment of learning environment motivational quality from the point of view of secondary and high school learners. Learning and Instruction, 16(4), 295–309. http://dx.doi.org/10.1016/j.learninstruc.2006.07.002
Andrade, H., & Valtcheva, A. (2009). Promoting learning and achievement through self-assessment. Theory Into Practice, 48(1), 12–19.
Andrade, H., Wang, X. L., Du, Y., & Akawi, R. L. (2009). Rubric-referenced self-assessment and self-efficacy for writing. The Journal of Educational Research, 102(4), 287–301.
Bannert, M. (2009). Promoting self-regulated learning through prompts. Zeitschrift für Pädagogische Psychologie, 23(2), 139–145. http://dx.doi.org/10.1024/1010-0652.23.2.139
Berthold, K., Nückles, M., & Renkl, A. (2007). Do learning protocols support learning strategies and outcomes? The role of cognitive and metacognitive prompts. Learning and Instruction, 17(5), 564–577. http://dx.doi.org/10.1016/j.learninstruc.2007.09.007
Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education: Principles, Policy and Practice, 5(1), 7–73.
Boekaerts, M. (2011). Emotions, emotion regulation, and self-regulation of learning. In B. J. Zimmerman, & D. H. Schunk (Eds.), Handbook of self-regulation of learning and performance (pp. 408–425). New York: Routledge.
Boekaerts, M., & Corno, L. (2005). Self-regulation in the classroom: A perspective on assessment and intervention. Applied Psychology: An International Review, 54(2), 199–231.
Crooks, T. J. (1988). The impact of classroom evaluation practice on students. Review of Educational Research, 58(4), 438–481.
Dignath, C., & Büttner, G. (2008). Components of fostering self-regulated learning among students. A meta-analysis on intervention studies at primary and secondary school level. Metacognition and Learning, 3, 231–264. http://dx.doi.org/10.1007/s11409-008-9029-x
Dignath, C., Büttner, G., & Langfeldt, H. (2008). How can primary school students learn self-regulated learning strategies most effectively? A meta-analysis on self-regulation training programs. Educational Research Review, 3(2), 101–129. http://dx.doi.org/10.1016/j.edurev.2008.02.003
Efklides, A. (2011). Interactions of metacognition with motivation and affect in self-regulated learning: The MASRL model. Educational Psychologist, 46(1), 6–25.
Ericsson, K. A., & Simon, H. A. (1993). Protocol analysis: Verbal reports as data. Cambridge, MA: MIT Press.
Goodrich, H. W. (1997). Student self-assessment: At the intersection of metacognition and authentic assessment, 57. US: ProQuest Information & Learning.
Greene, J. A., Robertson, J., & Croker Costa, L. J. (2011). Assessing self-regulated learning using thinking-aloud methods. In B. J. Zimmerman, & D. H. Schunk (Eds.), Handbook of self-regulation of learning and performance (pp. 313–328). New York: Routledge.
Hattie, J., Biggs, J., & Purdie, N. (1996). Effects of learning skills interventions on student learning: A meta-analysis. Review of Educational Research, 66(2), 99–136. http://dx.doi.org/10.3102/00346543066002099
Jonsson, A., & Svingby, G. (2007). The use of scoring rubrics: Reliability, validity and educational consequences. Educational Research Review, 2, 130–144.
Kitsantas, A., Reiser, R. A., & Doster, J. (2004). Developing self-regulated learners: Goal setting, self-evaluation, and organizational signals during acquisition of procedural skills. The Journal of Experimental Education, 72(4), 269–287.
Kollar, I., Fischer, F., & Slotta, J. D. (2007). Internal and external scripts in computer-supported collaborative inquiry learning. Learning and Instruction, 17(6), 708–721. http://dx.doi.org/10.1016/j.learninstruc.2007.09.021
Kostons, D., van Gog, T., & Paas, F. (2009). How do I do? Investigating effects of expertise and performance-process records on self-assessment. Applied Cognitive Psychology, 23(9), 1256–1265. http://dx.doi.org/10.1002/acp.1528
Kramarski, B., & Dudai, V. (2009). Group-metacognitive support for online inquiry in mathematics with differential self-questioning. Journal of Educational Computing Research, 40(4), 377–404.
Kramarski, B., & Michalsky, T. (2009). Three metacognitive approaches to training pre-service teachers in different learning phases of technological pedagogical content knowledge. Educational Research and Evaluation: An International Journal on Theory and Practice, 15(5), 465–485.
Kramarski, B., & Michalsky, T. (2010). Preparing preservice teachers for self-regulated learning in the context of technological pedagogical content knowledge. Learning and Instruction, 20(5), 434–447. http://dx.doi.org/10.1016/j.learninstruc.2009.05.003
Lan, W. Y. (1998). Teaching self-monitoring skills in statistics. In D. H. Schunk, & B. J. Zimmerman (Eds.), Self-regulated learning: From teaching to self-reflective practice. New York: Guilford Press.
Montague, M. (2007). Self-regulation and mathematics instruction. Learning Disabilities Research and Practice, 22(1), 75–83.
Pardo, A., & Alonso-Tapia, J. (1992). Estrategias para el cambio motivacional [Strategies for motivational change]. In J. Alonso-Tapia (Ed.), Motivar en la adolescencia [Motivating in adolescence] (pp. 331–377). Madrid, Spain: Universidad Autónoma.
Puustinen, M., & Pulkkinen, L. (2001). Models of self-regulated learning: A review. Scandinavian Journal of Educational Research, 45(3), 269–286. http://dx.doi.org/10.1080/00313830120074206
Schafer, W. D., Swanson, G., Bené, N., & Newberry, G. (2001). Effects of teacher knowledge of rubrics on student achievement in four content areas. Applied Measurement in Education, 14(2), 151–170.
Schunk, D. H., & Usher, E. L. (2011). Assessing self-efficacy for self-regulated learning. In B. J. Zimmerman, & D. H. Schunk (Eds.), Handbook of self-regulation of learning and performance (pp. 282–297). New York: Routledge.
Thillmann, H., Künsting, J., Wirth, J., & Leutner, D. (2009). Is it merely a question of "what" to prompt or also "when" to prompt? The role of point of presentation time of prompts in self-regulated learning. Zeitschrift für Pädagogische Psychologie, 23(2), 105–115. http://dx.doi.org/10.1024/1010-0652.23.2.105
Urdan, T., & Turner, J. C. (2005). Competence motivation in the classroom. In A. J. Elliot, & C. S. Dweck (Eds.), Handbook of competence and motivation (pp. 297–317). New York: Guilford Press.
van Dinther, M., Dochy, F., & Segers, M. (2010). Factors affecting students' self-efficacy in higher education. Educational Research Review, 6(2), 95–108. http://dx.doi.org/10.1016/j.edurev.2010.10.003
Winne, P. H. (2010). Improving measurements of self-regulated learning. Educational Psychologist, 45(4), 267–276. http://dx.doi.org/10.1080/00461520.2010.517150
Zimmerman, B. J. (2011). Motivational sources and outcomes of self-regulated learning and performance. In B. J. Zimmerman, & D. H. Schunk (Eds.), Handbook of self-regulation of learning and performance (pp. 49–64). New York: Routledge.
Zimmerman, B. J., & Kitsantas, A. (2005). The hidden dimension of personal competence: Self-regulated learning and practice. In A. J. Elliot, & C. S. Dweck (Eds.), Handbook of competence and motivation (pp. 509–526). New York: Guilford Press.
Zimmerman, B. J., & Schunk, D. H. (Eds.). (2011). Handbook of self-regulation of learning and performance. New York: Routledge.