Interdisciplinary Journal of E-Learning and Learning Objects, Volume 4, 2008
Formerly the Interdisciplinary Journal of Knowledge and Learning Objects

Validation of a Learning Object Review Instrument

Yavuz Akpinar
Bogazici University, Istanbul, Turkey
Abstract
Growing interest in learning objects (LOs) as a means of developing learning materials has led to mainstream LO evaluation methodologies that use review instruments, such as evaluation rubrics, to suit various practical purposes. Such evaluation tools give evidence about the design and the value of LOs, and studies performed with actual users can provide data against which expectations of the effects of LOs on student achievement can be tested in practice. This study presents a validation of a learning object review instrument (LORI) with student users (n = 507) of twenty-four LOs used in K-12 environments. The data collected through pre- and post-tests, teachers' and students' usability questionnaires, and the LORI revealed some interactions between those variables. However, neither the LORI ratings nor the usability assessments correlated with the learning gains of students. Some implications of these findings are discussed.
Keywords: Learning objects, learning outcome, LORI, validation.
Introduction
To meet diverse learning needs and to improve student learning, a variety of resources, often including digital media, are developed in which the combination of media and methods of use changes with context and tries to take account of student differences. New technologies have emerged to assist these objectives, and one method of designing and presenting computer-based educational materials is that of learning objects (LOs), usually defined as any digital resource that can be reused to support learning (Wiley, 2000). Examples of such digital resources that can be employed within instructional materials include images or photos, live data feeds, live or pre-recorded video or audio snippets, text, animations, and web-delivered applications such as a Java applet, a blog, or a web page combining text, images, and other media. Thus LO approaches can be wide-ranging and offer new possibilities to access and reuse online materials (Wiley, 2005).
Online repositories storing large numbers of LOs, which different user groups (e.g. teachers, instructional designers, material producers, and learners) can access and employ in various contexts according to their needs, can, in principle, bring economy and variety into the educational process (Nurmi & Jaakkola, 2006a). However, although LOs can provide stimulating opportunities to improve educational practices, to extend the use of digital technologies in schools, and to reduce the time required to prepare technology-enhanced teaching, many associated problems and practical shortcomings can arise (Akpinar & Simsek, 2007; Jonassen & Churchill, 2004; Kay & Knaack, 2007; Li, Nesbit & Richards, 2006; Nurmi & Jaakkola, 2005; Parrish, 2004; Strijker, 2004; Vuorikari, Manouselis, & Duval, 2006). There is a lack of empirical evidence on the effectiveness of LOs, though this has not reduced interest in the technique, and indeed it provides an incentive for further research.
Whilst the LO debate continues (Churchill, 2007; Cochrane, 2005; Friesen, 2005; Krauss & Ally,
2005; Maceviciute & Wilson, 2008; Merrill, 2001; Parrish, 2004; Polsani, 2003; Salas & Ellis,
2006; Varlamis & Apostolakis, 2006; Wiley, 2000), the effectiveness of LOs is likely to be limited if they do not conform to established design principles and have not been subjected to formative user testing (Li et al., 2006). A range of different evaluation approaches for such learning
resources exists, and Vuorikari et al. (2006) studied and analyzed a sample of thirteen evaluation
approaches either currently applied to learning object repositories (LORs) or used as general
quality guidelines for digital learning resources. These approaches were distinguished in terms of:
(1) methodological characteristics focusing on the process or the product;
(2) the stage of the learning resource lifecycle focusing on developmental guidelines or end-user evaluation ratings;
(3) the educational processes or optimization parts of the development lifecycle;
(4) the form of evaluation instruments used, e.g. questionnaires, a list of criteria or certification instruments;
(5) the audience as developers, evaluators, subject experts, teachers, or end users;
(6) the criteria or metrics engaged by the tools; and
(7) the characteristics of the environment in which the evaluation approach is expected to be applied.
Also, a recent survey (Tzikopoulos, Manouselis, & Vuorikari, 2007) reported on 23 highlighted evaluation and rating approaches. Because there is such diversity in the goals and forms of LO evaluations, Dron, Boyle, and Mitchell (2002) and Vuorikari et al. (2006) suggest the use of tagged metadata for storing the results of such evaluations, not only noting data on sharing and reusability, but also summarizing the experience and achievements of LO resources in use.
interact with the LO. Navigation through the LO is simple. The behavior of the user interface is consistent and predictable.
(7) Accessibility: The design of controls and presentation formats in the LO may accommodate learners with sensory and motor disabilities. The LO can be accessed through different electronic means, including assistive and highly portable devices.
(8) Reusability: The LO is a stand-alone resource that can be readily transferred to different courses, learning designs, and contexts.
(9) Standards Compliance: The LO conforms to relevant international standards and specifications. Sufficient metadata is provided in tagged codes and made available to users.
LORI 1.5 uses a Likert-style five-point response scale, with items ranging from low (1) to high (5). If an item is judged not relevant to the LO, or if the reviewer does not feel qualified to judge that criterion, then the reviewer may opt out of the item by selecting "not applicable." However, the convergent evaluation model used in the LORI is criticized by Kay and Knaack (2007), who note that it is usually limited by the small number of participants giving feedback, so its final evaluation may not be representative of what a larger population might observe or experience.
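To make the scoring scheme concrete, the following short Python sketch shows one way a LORI 1.5 review might be aggregated, with items opted out as "not applicable" excluded from the total. The item names follow LORI 1.5; the lori_total function and the dictionary layout are illustrative assumptions rather than any published implementation.

    # Illustrative sketch only: aggregating a LORI 1.5 review in which
    # "not applicable" items (None) are excluded from the total.
    LORI_ITEMS = [
        "Content Quality", "Learning Goal Alignment", "Feedback and Adaptation",
        "Motivation", "Presentation Design", "Interaction Usability",
        "Accessibility", "Reusability", "Standards Compliance",
    ]

    def lori_total(ratings):
        # Sum the applicable 1-5 ratings; None marks an item the reviewer skipped.
        applicable = [r for r in ratings.values() if r is not None]
        return sum(applicable), len(applicable)

    review = dict.fromkeys(LORI_ITEMS, 4)
    review["Accessibility"] = None  # reviewer opted out of this criterion
    total, n = lori_total(review)
    print(f"LORI total: {total} out of a maximum of {5 * n}")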
MERLOT (www.merlot.org), one of the largest LO repositories, adopted many of the same criteria used by the LORI to review LOs for acceptance. The MERLOT process employs both individual evaluation (peer review) and referral to standards for learning objects. The standards or guidelines are an attempt to help reviewers assess materials submitted by developers, and the criteria used by MERLOT reviewers fall into three broad areas: the quality of content, the potential effectiveness as a teaching-learning tool, and the ease of use. The MERLOT scale uses a continuum from one star, denoting material not worthy of use, to a five-star rating representing all-around excellence.
As in the LORI process, reviewers are drawn from the subject discipline of the LO content. The Collaborative Learning Object Exchange (CLOE) of Canada has also built up a peer review process for LOs to be included in a provincial LO repository. This process closely follows the MERLOT criteria but differs in the range and number of questions used. The MERLOT criteria employ a set of more than 30 individual questions requiring detailed answers, while the CLOE criteria use 14 items. In brief, reviewers (i.e. instructional designers and subject matter experts) are asked to evaluate the LOs on the quality of their content, their effectiveness as a teaching tool, and their ease of use. Finally, Haughey and Muirhead (2005) proposed a further evaluation instrument, the Learning Object Evaluation Instrument (LOEI), which was developed from the previous three instruments. The scales used in evaluating each LO component are not meant to provide comparisons, but to allow reviewers to assess the integrity, usability, learning, design, and value focus of each learning object.
Although the LO repositories commonly use these review instruments, only a limited number of empirical studies have examined the learning outcomes and the instructional effectiveness of LOs. Using LO survey tools, Kay and Knaack (2005, 2007) examined the quality of LOs through content analysis of open-ended response questions, based on principles of instructional design and perceived benefit, under post-hoc structured categories. They evaluated 5 learning objects with 220 secondary school students in grades 9-12 and 30 teachers. The evaluation data were collected after the sample used the LOs either as an introduction to the learning of concepts in the subject matter area or as support material for the teachers' activities. The data collection tools included interviews with teachers and two surveys collecting comments from students and teachers. The results showed that two-thirds of all students felt that LOs were beneficial, particularly when they had a motivating theme, visual supports, and interactivity. Both experienced and pre-service teachers confirmed the student reports. However, this study focused on perceived benefits of LOs rather than on the actual learning outcomes resulting from the LO activities.
McCormick and Li (2006) also studied 770 teachers' views and experiences of using LOs in CELEBRATE (a research and development project dealing with LO development and implementation, funded by the European Union) through online surveys, routine data collected from the CELEBRATE portal, and semi-structured interviews in 40 schools in 6 different countries. The study showed a generally positive reaction by the teachers to the use of LOs as a support for teaching and learning. However, it was noted that teachers used the LOs in a variety of contrasting ways and tended to superimpose their own pedagogy, whatever the designed pedagogy of the LO. Hence it appeared that granularity and interoperability characteristics were significant in rating LOs as useful, particularly where they supported resource-based learning. However, both of the studies reviewed suggested the need for further evaluation tools, methods, and research to identify the actual effects of LOs on learning processes and outcomes.
In this respect, Nurmi and Jaakkola (2006b) conducted an experimental study using a pre-test/post-test design to evaluate the effectiveness of three LOs from three different subject areas, i.e. Mathematics, Finnish Language, and Science. The LOs, tested with school children, were used in different instructional settings. The results revealed that in Mathematics and in Finnish, students following traditional teaching conditions achieved slightly better results on subject matter post-tests and developed greater learning gains than students following the LO conditions, though these differences were not statistically significant. Also, no significant differences were observed between the LO and the traditional teaching conditions for low and high prior knowledge students. In the Science LO study, students in the mixed condition, where both LO and laboratory activities were used together, significantly outperformed those in the traditional teaching condition. The study concluded that, to be successful, LOs require carefully designed learning environments and instructional arrangements around them.
In a design similar to the Nurmi and Jaakkola (2006b) study, Akpinar and Simsek (2007) tested 8
LOs, whose overall LORI scores varied from 30 to 36 out of a maximum of 45, with 180 school
children in a pre-post test research design. The data analysis revealed that 7 of the LOs helped the
sample students improve their pretest scores, but in one, the Horizontal Projectile Motion (HRM)
LO for ninth grade students, the scores did not improve. It seemed that this detrimental effect may
have stemmed from the fact that HRM is a topic in which many students have misconceptions
(Tao & Gunstone, 1999) that the LO was unable to correct.
Sample
The sample for the LO evaluation studies consisted of a total of 507 elementary and secondary school students and their 24 teachers. The students' grades varied from 4 (age 11) to 10 (age 17). The twenty-four LOs (one in Biology, three in Chemistry, three in Mathematics, three in Physics, and fourteen in General Science), whose development procedure is given below, were studied by 24 groups. The group size varied between 15 and 54 students.
The student usability questionnaires asked whether the students approved the screen design, the text and picture orientation, the interactive mechanisms, and the tools used in the LO. These questionnaires all used Likert-type scales. Those for elementary school students had three choices: disagree, neutral, and agree; the other student questionnaires had five choices: strongly disagree, disagree, neutral, agree, and strongly agree. Likewise, the usability questionnaires for teachers contained five-point Likert-type items seeking teachers' opinions on the screen components and instructional facilities of a particular LO.
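The usability ratings reported in Table 1 below lie between 0 and 1, whereas the questionnaires used 3-point and 5-point Likert items; the rescaling actually applied is not described here. The following sketch shows one plausible, assumed mapping of mean Likert responses onto a 0-1 scale.

    # Hypothetical rescaling of mean Likert responses onto a 0-1 scale,
    # reconciling the 3-point (elementary) and 5-point questionnaires.
    def normalize_likert(responses, scale_max):
        # Map a mean response in [1, scale_max] onto [0, 1].
        mean = sum(responses) / len(responses)
        return (mean - 1) / (scale_max - 1)

    print(normalize_likert([4, 5, 4, 4, 3], scale_max=5))  # -> 0.75
    print(normalize_likert([3, 2, 3, 3], scale_max=3))     # -> 0.875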
Both the pre/post tests and the usability questionnaires were reviewed, corrected, and verified by
the researcher, by an expert instructional designer, and by one of the appropriate peers. For the
classroom applications of the LOs, local schools were contacted for their agreement to participate, and with this permission the LO authors administered the pretests to the designated students.
[Two of the twenty-six LOs were not used because their content had already been used in trialing
studies.]
The authors of the LOs demonstrated the LOs and explained to the class teachers how the LOs worked, and the teachers were told there should be minimal teacher intervention while the students worked with the LOs. The authors then installed their LOs in the computer labs of the schools, where they explained to the students and their class teacher how the LOs were to be used in the study sessions. These introductions of the LOs took 10-20 minutes, and the students' and the teachers' questions were also answered. The students then started to work with the LOs.
Whilst the students studied an LO, the LO author and the class teacher monitored the students' work. All students studied the LOs individually. To enable interaction and active participation, each LO presented task activities for the students to accomplish using the tools available in the LO. For example, the LO about the separation of mixtures for seventh graders asked students to play the role of a chemist and to separate mixed substances by selecting and using the given tools, such as a strainer, ventilator, boiler, distiller, or burner. The students' progress, moving from one task to the next, depended on their performance on the tasks. If mistakes were made, then system feedback enabled students to correct these errors before moving on to the next task. The LOs also contained help features related to the activities.
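As an illustration of this gated progression, the following sketch (hypothetical, not drawn from the LOs' actual implementation) models a task sequence in which a student advances only after correcting mistakes flagged by system feedback.

    # Hypothetical model of an LO's task loop: advance on success,
    # give corrective feedback and retry on mistakes.
    def run_tasks(tasks, answer_fn):
        for task in tasks:
            while True:
                answer = answer_fn(task)
                if task["check"](answer):
                    break                           # correct: move to the next task
                print(f"Feedback: {task['hint']}")  # corrective feedback, then retry

    tasks = [
        {"check": lambda a: a == "strainer",
         "hint": "Pick a tool that separates solids from liquids."},
        {"check": lambda a: a == "distiller",
         "hint": "Think about separating dissolved liquids."},
    ]
    answers = iter(["boiler", "strainer", "distiller"])  # simulated student input
    run_tasks(tasks, lambda task: next(answers))
    print("All tasks completed.")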
The students' study sessions for an LO took between 20 and 65 minutes. Only two studies were carried out in multiple sessions, due to the large number of participants in the classes: one had 33 students organized in two sessions, and another had 54 students organized in three sessions. Following the study with the LOs, the post-tests and the usability questionnaires were administered to the students: in ten of the studies they were administered just after the LO study; in all others they were administered the next day. All the studies were completed within a two-week period. Table 1 presents a summary of the data.
Table 1: Summary statistics of the studies

LO Content                        Grade/Age  No. of      LORI   Teacher    Student    Post-pre  N    Statistical  p       Effect size
                                             activities  total  usability  usability  gain           test                 Cohen's d
Science (Nutrition & Its Agents)  5/12                   39.16  0.80       0.80       22.40     21                0.00**  1.45
Mathematics (Functions)           9/16       26          34.44  0.81       0.80       21.93     22                0.00**  1.31
Science (Separation of Mixtures)  7/14       12          39.92  0.89       0.88       18.00     20                0.00**  1.09
Science (H. Motion)               7/14       18          36.10  0.86       0.90       18.50     20                0.00**  1.08
Biology (Animal Cell)             9/16       13          32.83  0.74       0.80       30.39     19                0.00**  1.05
Science (Change in Matter)        7/14                   29.86  0.60       0.55       14.21     19                0.00**  1.02
Mathematics (Ratio)               7/14       12          34.38  0.65       0.65       11.25     24   M.W.-U       0.00**  0.88
Science (Atoms)                   7/14       10          37.58  0.80       0.89       18.89     18                0.03*   0.84
Chemistry (Reactions)             9/16       11          36.58  0.85       0.87       16.00     18                0.03*   0.76
Science (Fluid Pressure)          7/14       12          32.50  0.73       0.80       11.43     21                0.02*   0.74
Physics (Projectile Motion)       10/17      8           36.05  0.75       0.73       23.40     15                0.03*   0.70
Science (Animate and Life)        6/13       8           31.48  0.80       0.72       4.77      21                0.14    0.33
Science (Class. of Matters)       4/11       15          36.33  0.80       0.80       5.46      22                0.21    0.34
Science (Properties of Lights)    8/15       10          37.42  0.80       0.67       5.50      16                0.22    0.35
Science (Force)                   6/13       8           34.78  0.84       0.83       6.36      22                0.22    0.34
Science (Electricity)             6/13       9           37.00  0.87       0.67       -3.58     54                0.23    -0.20
Chemistry (Gases)                 10/17      10          31.74  0.80       0.73       11.60     18                0.28    0.36
Science (Pressure)                7/14       13          36.64  0.80       0.70       5.00      20                0.28    0.30
Physics (Expansion)               9/16       6           28.41  0.60       0.67       3.14      17                0.38    0.13
Mathematics (Circles)             7/14       8           34.57  0.80       0.80       2.43      33                0.63    0.08
Chemistry (Solutions)             9/16       7           31.05  0.93       0.85       5.50      15                0.68    0.16
Science (Torque & Balance)        10/17      8           34.00  0.69       0.65       -1.33     15                0.80    -0.10
Physics (Heat & Temperature)      9/16       11          29.79  0.60       0.83       -1.20     17                0.85    -0.05
Science (Circuits & Conductors)   6/13       10          37.80  0.80       0.80       1.00      20                0.80    0.04

** p < 0.01 (2-tailed); * p < 0.05
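The per-LO figures in Table 1 (post-pre gain, significance, and Cohen's d) can in principle be reproduced from paired pre- and post-test scores. The sketch below is illustrative rather than the study's analysis code: the scores are invented, scipy's paired tests stand in for the tests actually run (the table names a Mann-Whitney U test for at least one LO), and Cohen's d is computed as the mean difference over the standard deviation of the differences, one common convention for paired designs.

    # Illustrative per-LO analysis: gain, significance, effect size.
    import numpy as np
    from scipy import stats

    def summarize_lo(pre, post, parametric=True):
        pre, post = np.asarray(pre, float), np.asarray(post, float)
        diff = post - pre
        if parametric:
            _, p = stats.ttest_rel(post, pre)   # paired t-test
        else:
            _, p = stats.wilcoxon(post, pre)    # non-parametric alternative
        d = diff.mean() / diff.std(ddof=1)      # Cohen's d (paired convention)
        return diff.mean(), p, d

    pre  = [40, 55, 35, 60, 45, 50, 38, 42]    # invented scores
    post = [65, 70, 55, 80, 60, 72, 66, 58]
    gain, p, d = summarize_lo(pre, post)
    print(f"gain={gain:.2f}, p={p:.3f}, d={d:.2f}")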
However, although in general the teachers' and students' LO usability ratings achieved significant correlations (see (i)), the students' LO usability ratings did not achieve statistical significance with the LORI items, as shown in (iii).
v. The student post-pre test differences did not significantly correlate with any items of the LORI or with the two types of usability tests.
Table 2: The correlation matrix

                                 2      3      4      5      6      7      8      9      10     11     12
1.  Teachers' usability         .56**  .12    .56**  .51*   .41*   .36    .37    .54**  .49*   .49*   .52**
2.  Students' usability                .32    .27    .20    .11    .10    .27    .30    .34    .25    .29
3.  Post-pre test difference                  .21    .04    -.03   .01    .00    -.02   .01    .15    .07
4.  LORI_Item1                                       .86**  .69**  .77**  .76**  .86**  .76**  .85**  .87**
5.  LORI_Item2                                              .78**  .79**  .82**  .85**  .85**  .91**  .92**
6.  LORI_Item3                                                     .76**  .72**  .60**  .61**  .62**  .74**
7.  LORI_Item4                                                            .81**  .77**  .71**  .73**  .85**
8.  LORI_Item5                                                                   .89**  .85**  .80**  .94**
9.  LORI_Item6                                                                          .91**  .87**  .94**
10. LORI_Item7                                                                                 .91**  .94**
11. LORI_Item8                                                                                        .92**
12. LORI_Total
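A matrix such as Table 2 can be assembled from one row of aggregates per LO (usability means, mean gain, and LORI ratings). The snippet below is a sketch only: the coefficient used in this study is not stated, so a Spearman rank correlation is shown, and the four example rows reuse values from Table 1.

    # Illustrative construction of a Table 2-style correlation matrix
    # from per-LO aggregates (one row per learning object).
    import pandas as pd

    df = pd.DataFrame({
        "teacher_usability": [0.80, 0.81, 0.89, 0.86],
        "student_usability": [0.80, 0.80, 0.88, 0.90],
        "gain":              [22.40, 21.93, 18.00, 18.50],
        "lori_total":        [39.16, 34.44, 39.92, 36.10],
    })
    print(df.corr(method="spearman").round(2))  # rank correlations across LOs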
The two groups' ratings differed significantly on LORI Item 3, with the LO group gaining significant post-pre test differences having higher mean LORI scores (see Table 3). There was a similar marked difference (not statistically significant) on the Presentation Design item. However, the LO groups did not show marked differences on the properties measured by the other items of the LORI, nor in the overall LORI scores.
Table 3: LORI item scores for the two groups of LOs

                                   LOs without significant gains   LOs with significant gains
LORI Items                         Mean      Std. Dev.             Mean      Std. Dev.
1. Content Quality                 3.70      0.44                  3.94      0.47
2. Learning Goal Alignment         3.68      0.45                  3.84      0.42
3. Feedback and Adaptation         3.39      0.46                  3.75      0.32
4. Motivation                      3.62      0.51                  3.77      0.35
5. Presentation Design             3.62      0.49                  3.89      0.57
6. Interaction Usability           3.68      0.36                  3.72      0.53
7. Accessibility                   3.57      0.29                  3.69      0.38
8. Reusability                     3.66      0.37                  3.79      0.31
9. Standards Compliance            5.00      0.00                  5.00      0.00
LORI total                         33.92     3.13                  35.40     2.99
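Comparing the two groups' item ratings, as in Table 3, amounts to a two-sample test over small groups of LOs. The test behind the Item 3 comparison is not reported here; a Mann-Whitney U test over invented ratings is shown below as one plausible choice for samples of this size.

    # Illustrative two-group comparison of a single LORI item's ratings.
    from scipy.stats import mannwhitneyu

    item3_with_gains    = [4.0, 3.5, 4.0, 3.8, 3.6, 3.9]  # invented ratings
    item3_without_gains = [3.2, 3.5, 3.4, 3.3, 3.5, 3.1]
    u, p = mannwhitneyu(item3_with_gains, item3_without_gains,
                        alternative="two-sided")
    print(f"U={u}, p={p:.3f}")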
These findings support the suggestion of Vuorikari et al. (2006) that the controversial issue of tagging LOs should be re-considered; it is worth noting from this study that, although the teachers' usability ratings correlated in general with the LORI ratings, the usability measures themselves also did not correlate significantly with the learning improvement scores. It seems, then, that tags should include the results of, or commentary on, LO applications linked to students' performance improvement; otherwise LO repositories are likely to end up offering unreliable guidance for students and may create uncertainties and problems for classroom teachers making selections for their students.
However, this study used LOs which were judged to be well designed under the LORI criteria and which also achieved high and consistent ratings on usability measures from teachers and students. Yet, although the LOs generally showed positive benefits, with significant learning benefits in half of them, the students' achievements were uneven. This directs attention to the modes of use of the LOs and to the learning process itself. This study used the LOs in the fashion of self-directed exploratory study, with little input or interaction from the supervising teachers. Although providing useful information for the study, this decision may have been unwise, particularly in the light of the science results of the Nurmi and Jaakkola (2006b) research, in which using LOs and laboratory studies together allowed those groups to significantly outperform other groups. Further work should give more attention to design features (in which rubrics could be useful) in relation to the pedagogies to be followed and to different research designs, including control group designs. Research looking more closely at student actions and the learning process itself could throw some light on the relations of performance to the rubric criteria.
The two groups of LOs were different in some ways. First, the group consisting of LOs that created significant learning gains has a greater number of activities on average (12.50 versus 9.50) than the other group of LOs. Second, as the analysis demonstrated, they have better feedback and adaptation facilities; the type of feedback given by those LOs took differential learner inputs more into account. Third, they seem to have better-quality design of visual and auditory information for enhanced learning and efficient mental processing than the other group of LOs. Further work may increase the number of activities in those LOs and may investigate the effects of activities with adaptive feedback. In addition, those LOs which do not seem to promote learning effectively may be revised in a guided manner and/or tried under a different study scheme. In these ways LORI-type design criteria can be adapted and related to support pedagogies and learning contexts. Including such summary data within tagging schemes should allow the flexibility and reusability features of LOs to be more clearly demonstrated.
Acknowledgements
The author thanks the anonymous reviewers, the LO authors, and the teachers and students who participated in this research.
References
Akpinar, Y., & Hartley, J. R. (1996). Designing interactive learning environments. Journal of Computer
Assisted Learning, 12(1), 33-46.
Akpinar, Y., & Simsek, H. (2007). Should K-12 teachers develop learning objects? Evidence from the field
with K-12 students. International Journal of Instructional Technology and Distance Learning, 4(3),
31-44. Retrieved June 2, 2008, from www.itdl.org/Journal/Mar_07/Mar_07.pdf
Churchill, D. (2007). Towards a useful classification of learning objects. Educational Technology Research and Development, 55(5), 479-497.
Cochrane, T. (2005). Interactive QuickTime: Developing and evaluating multimedia learning objects to enhance both face-to-face and distance e-learning environments. Interdisciplinary Journal of Knowledge and Learning Objects, 1, 33-54.
Salas, K., & Ellis, L. (2006). The development and implementation of learning objects in a higher education setting. Interdisciplinary Journal of Knowledge and Learning Objects, 2, 1-22. Retrieved June 2, 2008, from http://ijklo.org/Volume2/v2p001-022deSalas.pdf
Squires, D., & Preece, J. (1999). Predicting quality in educational software: Evaluating for learning, usability and the synergy between them. Interacting with Computers, 11(5), 467-483.
Strijker, A. (2004). Reuse of learning objects in context: Human and technical issues. Enschede: PrintPartners Ipskamp.
Tao, P. K., & Gunstone, R. F. (1999). The process of conceptual change in force and motion during computer-supported physics instruction. Journal of Research in Science Teaching, 36(7), 859-882.
Tzikopoulos, A., Manouselis, N., & Vuorikari, R. (2007). An overview of learning object repositories. In P.
Northrup (Ed.), Learning objects for instruction: Design and evaluation (pp. 29-55). Hershey, PA:
Idea Group Publishing.
Vargo, J., Nesbit, J. C., Belfer, K., & Archambault, A. (2003). Learning object evaluation: Computer-mediated collaboration and inter-rater reliability. International Journal of Computers and Applications, 25(3), 1-8.
Varlamis, I., & Apostolakis, I. (2006). The present and future of standards for e-learning technologies. Interdisciplinary Journal of Knowledge and Learning Objects, 2, 59-76. Retrieved June 2, 2008, from http://ijklo.org/Volume2/v2p059-076Varlamis.pdf
Vuorikari, R., Manouselis, N., & Duval, E. (2006). Using metadata for storing, sharing and reusing evaluations for social recommendations. In D. H. Goh & S. Foo (Eds.), Social information retrieval systems: Emerging technologies and applications for searching the web effectively (pp. 165-178). Hershey, PA: Idea Group Publishing.
Wiley, D. A. (2000). Connecting learning objects to instructional design theory: A definition, a metaphor, and a taxonomy. In D. A. Wiley (Ed.), The instructional use of learning objects. Retrieved June 2, 2008, from http://reusability.org/read/chapters/wiley.doc
Wiley, D. A. (2005). Learning objects in public and higher education. In J. M. Spector, C. Ohrazda, A. van Schaack, & D. A. Wiley (Eds.), Innovations in instructional technology (pp. 110). Mahwah, NJ: Lawrence Erlbaum.
Biography
Yavuz Akpinar is an associate professor at Bogazici University, Department of Computer Education and Educational Technology. His
research interests are in interactive learning environments design, human computer interaction, simulations in learning, authoring systems
for software design, educational testing, designing and evaluating multimedia and hypermedia in education and training, distance education,
learning object and e-learning design, and learning management systems.