
Rating Learning Object Quality with Distributed Bayesian Belief Networks: The Why and The How

Vive Kumar, John Nesbit & Kate Han
Simon Fraser University, Surrey Campus, Canada
vive@sfu.ca

Abstract
As differing evaluation instruments are adopted in learning object repositories serving specialized communities of users, what methods can be adopted for translating evaluative data across instruments so that these data can be shared among repositories? How can evaluations from different reviewers be properly integrated? How can explicit and implicit measures of preference and quality be combined to recommend objects to users? In this research we studied the application of Bayesian Belief Networks (BBNs) to the problems of insufficient and incomplete reviews in learning object evaluation, and of translating and integrating data among different quality evaluation instruments and measures. Two BBNs were constructed to probabilistically model relationships among different roles of reviewers as well as among items of different evaluation measures. Initial testing using hypothetical data showed that the model was able to make potentially useful inferences about different dimensions of learning object quality. We further extend our model over geographic distances, assuming that the reviewers are distributed and that each reviewer may change the underlying BBN (to a certain extent) to suit his or her expertise. We highlight issues that arise when a highly distributed and personalized BBN is used to make valid inferences about learning object quality.

1. Introduction

The growth of the Internet has led to new modes of learning in which learners routinely interact online with instructors, other learners and digital resources. Much recent research has focused on building infrastructure for these activities, especially to facilitate searching, filtering and recommending online resources known as learning objects [1, 2]. Although newly defined standards for learning object metadata (http://ltsc.ieee.org/wg12/) are expected to greatly improve searching and filtering capabilities, learners, teachers, and instructional developers may still be faced with choosing from many pages of object listings returned by a single learning object query. The listed objects tend to vary widely in quality. Without proper recommendation, learning object enquirers not only grope in the dark amid overwhelming information, but can also easily fall for poorly designed and developed instructional materials, wasting time and effort. Hence, there is a clear need for quality evaluations that can be communicated in a coherent, standardized format to inform users about selections.

The MERLOT repository (www.merlot.org) provides two quality rating systems using five-point scales. The Learning Object Review Instrument (LORI) includes nine separate dimensions [3, 4], each rated on a five-point scale. We believe that demand exists for instruments with different levels of detail and areas of emphasis. While casual reviewers likely prefer the convenience of providing a single rating, professional learning object design teams benefit from the more elaborate set of items offered by LORI. Different people use different rating standards, such as MERLOT and LORI, to rate learning objects, and these standards need to co-exist, serving different disciplines and different user communities. An ideal rating system should be able to convert these different ratings into a unified value so that recommendation of learning objects can be carried out in a unified fashion. This ability not only facilitates sorting by quality when learning objects are rated with different standards, but also allows a learning object rated with one standard to obtain a quality rating under another. Thus, an online learning system with a learning object repository using one rating standard is able to recommend learning objects even if they are returned from an external repository where a different rating system might be applied.

Our rating system also weighs quality reviews differently when reviews are submitted by different types of reviewers. For example, a quality review from a group of experts should weigh more than that of a single expert, and an expert's opinion should weigh more than that of an anonymous user. The majority of learning objects in learning object repositories have no quality reviews at all. Moreover, most learning objects are rated from the perspective of the reviewer's expertise, whether instructor, learner, or another role; as a result they are only partially rated and lack a full view of their quality. Further, the rating system should consider data from implicit measures, such as frequency of use and interpersonal recommendation, in addition to explicit measures obtained from users.

After studying existing applications of Bayesian Belief Networks (BBNs) and their performance in evaluation systems, we find BBNs a viable means of equipping a learning object rating system with the features discussed here. A BBN is a powerful probabilistic knowledge representation and reasoning tool for partial beliefs under uncertainty. Uncertainty may stem from insufficient knowledge, for example a quality review that omits certain quality aspects. A BBN combines graph theory and probability theory to provide a practical means for representing and updating probabilities (beliefs) about events of interest, such as the quality rating of a learning object. In addition to offering probabilities of events, the most common task using a BBN is probabilistic inference, for instance, inferring the quality rating under one standard from that under another.
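To make this concrete, here is a minimal sketch of such an inference, assuming a hypothetical two-node fragment in which an overall rating under one instrument conditions a single item under another; the states and probabilities are invented for illustration and are not the values used in our networks.

# Minimal sketch of cross-instrument inference with a two-node belief network.
# Node A: overall rating under instrument 1 (e.g., a MERLOT-style single score).
# Node B: one dimension under instrument 2 (e.g., a LORI-style item).
# All probabilities below are hypothetical placeholders.

p_a = {"low": 0.5, "high": 0.5}                    # prior over the overall rating
p_b_given_a = {                                     # NPT for the LORI-style item
    "low":  {"poor": 0.7, "good": 0.3},
    "high": {"poor": 0.2, "good": 0.8},
}

def infer_a_given_b(observed_b):
    """Bayes' rule: belief about the overall rating after observing one item."""
    joint = {a: p_a[a] * p_b_given_a[a][observed_b] for a in p_a}
    z = sum(joint.values())
    return {a: joint[a] / z for a in joint}

# Observing a "good" score on the LORI-style item raises belief in a high overall rating.
print(infer_a_given_b("good"))   # approximately {'low': 0.27, 'high': 0.73}

Section 2 describes the full networks that apply this mechanism across the complete mapping between the MERLOT and LORI items.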

2. BBNs for Learning Object Quality Rating

Two a priori elements are required to construct a BBN: the structure describing how the nodes are related (the graph topology), and the conditional probability distribution for each node (its node probability table, or NPT). We present two distinct uses of BBNs in our learning object quality rating system. First, a BBN is used within the construction of a single quality review, to tackle the incompleteness of current quality reviews and to unite reviews produced with different rating standards; the result is called the Unit Quality Rating. Second, a BBN is used to obtain an aggregated rating by integrating reviews from different sources, called the Integrated Quality Rating. This accounts for the different ways the evaluation system collects data, i.e., explicitly or implicitly, as well as for the different roles of the reviewers who submit the data, e.g., recognised experts or anonymous users. We claim that, through these two uses of BBNs, the availability and accuracy of quality ratings in a learning object repository can be largely improved, leading to better learning object recommendations.

For the first BBN, we used a subjective mapping between the items of MERLOT and LORI to establish the topology of the network. This BBN [5, 6] relates the corresponding nodes with evenly distributed weights among the related nodes, since no empirical knowledge is available to do otherwise. The second BBN [5, 6] describes explicit and implicit types of rating. According to the roles of reviewers, we further break down the explicit rating into Registered Expert Rating, Registered User Rating and Anonymous Rating, and under each category we list child nodes based on our current understanding. For the NPTs in this BBN, the topological structure makes it inappropriate to distribute probability evenly among parent nodes such as ImplicitRating and RegisteredExpertRating. Hence we decide, to the best of our knowledge, that a particular parent node contributes more than another, and by how much. For nodes that have no parents, a normal distribution is still applied. Thus, by controlling each parent node's contribution towards a child node, we ensure the integrity of the data provided to the rating system. The values in the NPTs can be adjusted over time as more data is collected and better knowledge is gained about the relationships among the nodes.

Using JavaBayes (www-2.cs.cmu.edu/~javabayes/Home/), a shareware tool that implements BBN probability calculation and propagation, we conducted a set of exploratory tests [6] to assess the potential of the BBN model for making inferences across dimensions of learning object quality. The tests used hypothetical data. They were designed to determine whether, given realistically incomplete evaluation data, the model could make qualitatively plausible translations between instruments, whether it could treat reviews appropriately to their sources, and whether these estimates increased in certainty as more data was acquired. We were able to confirm that the BBN upgrades or downgrades the quality rating in a plausible and meaningful fashion consistent with the graph topology and the NPTs, and that the certainty represented in the model increases as evaluation data accumulates.
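To illustrate the last of these findings, the following sketch assumes a two-state overall quality node with a uniform prior and conditionally independent item ratings collapsed to good/poor; all probabilities are invented, but it shows the general behaviour the exploratory tests confirmed: the posterior sharpens as more evaluation data arrives.

# Illustration of certainty increasing as evaluation data accumulates.
# One hidden node Q ("overall quality": low/high) with a uniform prior, and
# conditionally independent item ratings observed as "good" or "poor".
# All probabilities are hypothetical placeholders.

P_GOOD = {"high": 0.8, "low": 0.3}     # P(item = good | Q)

def posterior_high(observations):
    """P(Q = high) after observing a list of 'good'/'poor' item ratings."""
    like = {"high": 0.5, "low": 0.5}   # start from the uniform prior
    for obs in observations:
        for q in like:
            like[q] *= P_GOOD[q] if obs == "good" else 1 - P_GOOD[q]
    return like["high"] / (like["high"] + like["low"])

# Each additional consistent observation pushes the belief further from 0.5,
# i.e. the model becomes more certain about the object's quality.
for n in range(1, 4):
    print(n, round(posterior_high(["good"] * n), 3))
# 1 0.727
# 2 0.877
# 3 0.95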

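The unequal parent contributions described for the second BBN can be sketched as follows. The node names (ImplicitRating, RegisteredExpertRating, IntegratedQualityRating) follow the paper, but the table entries are invented placeholders rather than the NPT values actually used.

# Sketch of an NPT in which one parent (RegisteredExpertRating) is deliberately
# given more influence on the child (IntegratedQualityRating) than the other
# (ImplicitRating). All numbers are hypothetical placeholders.

states = ("low", "high")

# P(IntegratedQualityRating = "high" | ImplicitRating, RegisteredExpertRating).
# The expert rating shifts the child's distribution more strongly than the
# implicit rating does, reflecting the unequal parent contributions.
npt_high = {
    ("low",  "low"):  0.10,
    ("low",  "high"): 0.70,   # expert says high, implicit says low -> mostly high
    ("high", "low"):  0.30,   # implicit says high, expert says low -> mostly low
    ("high", "high"): 0.90,
}

def integrated_rating(p_implicit_high, p_expert_high):
    """Marginal P(IntegratedQualityRating = 'high') given beliefs about the parents."""
    total = 0.0
    for implicit in states:
        for expert in states:
            p_parents = ((p_implicit_high if implicit == "high" else 1 - p_implicit_high) *
                         (p_expert_high if expert == "high" else 1 - p_expert_high))
            total += p_parents * npt_high[(implicit, expert)]
    return total

# A confident expert review moves the integrated rating much more than
# equally confident implicit evidence, as intended by the asymmetric NPT.
print(integrated_rating(p_implicit_high=0.5, p_expert_high=0.9))  # ~0.74
print(integrated_rating(p_implicit_high=0.9, p_expert_high=0.5))  # ~0.58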
3. The need for Distributed BBN in Learning Object Quality Rating


In our current rating system, the system does not differentiate to whom a learning object is recommended: all users evaluate all learning objects using a single BBN, and all users get learning object recommendations based on that same BBN. However, role information is often quite important in educational practice. For example, for a learner with a disability, accessibility matters more than it does for other learners; and for a language learning object, interaction usability matters more than it does for a mathematics learning object. Therefore, different quality rating BBNs need to be constructed to serve different purposes. In other words, we expect a system to automatically form personalised recommendations that account for the demonstrated preferences of a user and the requested types of learning object.

On this premise, our focus shifted towards distributing the proposed BBN models [6] across client computers. We briefly evaluated a distributed belief network application, RISO [7], to estimate the practicality of a distributed BBN. Based on this evaluation, we conclude that, with advancing technology in distributed computing, letting the client computer build and operate a customized BBN on the fly is not unreasonable. It is also reasonable to combine different BBNs distributed across geographical distances to present a coherent, unified view of the quality rating framework. A distributed BBN is unavoidable in an intercontinental quality rating of learning objects when individual reviewers are free to change the underlying BBN to suit their expertise. In one such scenario, the same node can exist across multiple constituent BBNs with different meanings. In another scenario, different reviewers may prefer different probability distributions for the same BBN subnetwork. Yet another scenario exists where the constituent BBNs differ with respect to the subnetwork corresponding to a single node. It is quite conceivable that updates of nodal values across distributed BBNs pose a considerable operational problem that requires time-stamping. We propose to reduce the constituent BBNs to a single, probabilistically equivalent, polytree network by merging nodes using clustering algorithms, followed by the application of a polytree inferencing algorithm [9].

While distributing BBNs and client-side BBN customization are technically possible, the challenge is how often we allow clients to change the topology of the distributed BBN. How does an online learning system use a customized BBN to make personalized recommendations? Should we integrate the customized BBN into the master BBN that standard recommendation uses? If yes, we face the mathematically unsolved problem of integrating multiple BBNs, besides other issues; if not, we still face the computational cost of ensuring the topological validity of a customized BBN. A BBN is an acyclic directed graph, so to compute with a customized BBN, besides checking that the node states and NPTs are mutually exclusive and exhaustive, we have to re-check after every change that no cycle has been introduced. Additionally, without the intervention of an educational expert, it is difficult to ensure the content validity of a customized BBN, i.e., the soundness of the relationships between nodes.
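As one possible treatment of the time-stamping problem noted above, the following sketch assumes a simple last-writer-wins policy for reconciling updates to a shared node arriving from different constituent BBNs; both the policy and the data layout are illustrative assumptions, not the mechanism the system prescribes.

# Sketch of reconciling updates to a shared node's probability distribution
# arriving from different constituent (client-side) BBNs. A last-writer-wins
# policy keyed on timestamps is assumed purely for illustration.

from dataclasses import dataclass

@dataclass
class NodeUpdate:
    node: str                 # e.g. "ContentQuality"
    distribution: dict        # e.g. {"low": 0.2, "high": 0.8}
    timestamp: float          # seconds since the epoch at the sending client
    source: str               # identifier of the constituent BBN

def merge_updates(current, updates):
    """Keep, for each node, the most recently stamped distribution."""
    for u in updates:
        kept = current.get(u.node)
        if kept is None or u.timestamp > kept.timestamp:
            current[u.node] = u
    return current

master = {}
merge_updates(master, [
    NodeUpdate("ContentQuality", {"low": 0.4, "high": 0.6}, 1000.0, "client-A"),
    NodeUpdate("ContentQuality", {"low": 0.1, "high": 0.9}, 1005.0, "client-B"),
])
print(master["ContentQuality"].distribution)   # client-B's later update wins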

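For the topology check just mentioned, a minimal sketch follows. It assumes the customized network is available as a parent-to-children adjacency mapping (a hypothetical representation) and uses a depth-first search to confirm that a client's edits have not introduced a cycle.

# Sketch of validating that a client-customized network remains a directed
# acyclic graph after editing. The adjacency format (node -> list of child
# nodes) is assumed for illustration only.

def is_acyclic(children):
    WHITE, GREY, BLACK = 0, 1, 2              # unvisited / on current path / done
    colour = {node: WHITE for node in children}

    def visit(node):
        colour[node] = GREY
        for child in children.get(node, ()):
            state = colour.get(child, WHITE)
            if state == GREY:                  # back edge: the edit created a cycle
                return False
            if state == WHITE and not visit(child):
                return False
        colour[node] = BLACK
        return True

    return all(visit(node) for node in list(colour) if colour[node] == WHITE)

# A customized fragment whose edit added the edge LORI_Rating -> MERLOT_Rating
# (creating a cycle with the existing MERLOT_Rating -> LORI_Rating edge) is rejected.
print(is_acyclic({"MERLOT_Rating": ["LORI_Rating"], "LORI_Rating": []}))                  # True
print(is_acyclic({"MERLOT_Rating": ["LORI_Rating"], "LORI_Rating": ["MERLOT_Rating"]}))   # False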
4. Conclusions
This research proposes a new way of obtaining quality ratings for learning objects. When rating reviews are incomplete or insufficient, or when reviews use different rating measurements, BBNs show strong capabilities for providing an appropriate rating. Distributed BBNs pave the way for a system that presents a unified quality rating of a learning object despite relying on highly personalized and geographically distributed constituent BBNs.

5. References
[1] D. Wiley (Ed.), The instructional use of learning objects (Association for Educational Communications and Technology, 2001).
[2] M. Recker, A. Walker, & K. Lawless, What do you recommend? Implementation and analyses of collaborative information filtering of web resources for education, Instructional Science, 31(4/5), 2003, 229-316.
[3] J. Nesbit, K. Belfer, & J. Vargo, A convergent participation model for evaluation of learning objects, Canadian Journal of Learning and Technology, 28(3), 2002, 105-120.
[4] J. Vargo, J. Nesbit, K. Belfer, & A. Archambault, Learning object evaluation: Computer mediated collaboration and inter-rater reliability, International Journal of Computers and Applications, 25(3), 2003, 198-205.
[5] K. Han, V. Kumar, & J. Nesbit, Rating learning object quality using Bayesian Belief Networks, ED-MEDIA, Lugano, Switzerland, 2004.
[6] K. Han, Learning Object Quality Rating using Bayesian Belief Networks, Master's dissertation, Simon Fraser University, Surrey Campus, Canada.
[7] R. H. Dodier, Unified prediction and diagnosis in engineering systems by means of distributed belief networks, PhD dissertation, University of Colorado, Boulder, 1999.
