
Towards Social Learning Environments

2009, IEEE Transactions on Learning Technologies

Julita Vassileva
Computer Science Department, University of Saskatchewan, Saskatoon, SK S7N 5C9. E-mail: jiv@cs.usask.ca. Manuscript received on 22 December 2008.

Abstract—We are teaching a new generation of students who have been cradled in technologies, communication and an abundance of information. As a result, the design of learning technologies needs to focus on supporting social learning in context. Instead of designing technologies that "teach" the learner, the new social learning technologies will perform three main roles: 1) support the learner in finding the right content (right for the context, for the particular learner, for the specific purpose of the learner, and right pedagogically); 2) support learners in connecting with the right people (right for the context, learner, purpose, educational goal, etc.); and 3) motivate / incentivize people to learn. In the pursuit of such environments, new areas of science become relevant as a source of methods and techniques: social psychology, economic / game theory, and multi-agent systems. This paper illustrates how social learning technologies can be designed using some existing and emerging technologies: ontologies vs. social tagging, exploratory search, collaborative vs. self-managed social recommendations, trust and reputation mechanisms, mechanism design, and social visualization.

Index Terms—e-Learning, Education, User Profiles and Alert Services, Life-long Learning, Personalization, Information Filtering, Knowledge Management, Social Computing, Online Communities, Mechanism Design, Game Design.

1 INTRODUCTION
This century brought, like every previous one, new technologies that influence not only the way we do things but also who we are. Some authors claim that the Internet has actually changed the way the human brain is wired [35]. The new generation of learners has different patterns of work, attention and learning preferences. Due to the development of communication technologies, we have witnessed an explosion in all areas of human knowledge and a rapid proliferation of interdisciplinary areas. In the past, a fresh university graduate was "set for life" with the knowledge necessary to practice a given profession. Now students know that they are engaged in a lifelong learning process. How will the students of the "Digital Natives" [35] generation learn the knowledge necessary for their work and life in these new conditions? How can new technologies be used to help them learn better? How does the role of educational institutions, especially universities, evolve? These are important questions. This "vision" paper presents some of my views and projections, drawing mostly upon my own work and work by my students and colleagues.

1.1 New Technologies
Recently, Web 2.0 has become the new platform in the development of Internet applications. According to Tim O'Reilly [32], the term "Web 2.0" means putting the user in the center – designing software that critically depends on its users, since the content, as in Flickr, Wikipedia, Del.icio.us or YouTube, is contributed by thousands or millions of users. That is why "Web 2.0" is also called the "participative web". The user is no longer a viewer, a recipient or a consumer; the user is an actor, self-centered and rational (in the economic sense), but surprisingly often a collaborative and altruistic contributor. More recently, with the proliferation of social networking sites, the focus of Web 2.0 has shifted to emphasize supporting users in connecting, communicating and collaborating with each other and deriving value from this, as in Facebook, MySpace or Twitter. At the end of 2008, Tim O'Reilly [33] defined Web 2.0 as "the design of systems that get better the more people use them". Therefore, the term "social software" is often used interchangeably with "participative web" or "Web 2.0". In Web 2.0 the software recedes to the background; it provides the framework, the infrastructure, like electricity or plumbing.
The user does not (and should not) think about the software – it has to be very easy to use, because the slightest hurdle may cause the user to abandon it for something else. The user is said to spend about 8 seconds deciding whether she will create an account and try a site before moving on. One of the important guidelines for the design of social software is therefore to keep the design and functionality extremely simple.

Web 2.0 empowers not only the users, but also software developers. A democratization of software production is happening, with major players (Google, Apple iPhone, and Facebook) opening their APIs for anyone to contribute software. Powerful tools allow self-taught programmers, without formal computer science or engineering degrees, to write sophisticated applications with attractive user interfaces. We see a seamless integration of services and mashups. With this proliferation of interactions, the need for standards should be growing. But in reality, standards have to be very simple to be viable. If there is at all a rule to be followed, it is: "Rule: Hard or Impossible to Impose Hard Rules".

1.2 New Learners
A new generation of learners is coming. Today's teens and people in their twenties are dubbed the "Digital Natives" generation [35]: children who do not know the world before the Internet. Everyone is online these days: even unborn babies have a social networking site, Foops! (www.foops.be). "Online" is no longer limited to the computer screen. With game consoles for real-world games, like Nintendo's Wii or Swinx (www.swinx.com), "online" becomes a pervasive, ubiquitous reality. And we will be seeing more of it: screens everywhere, on textiles (foldable, rollable), input and output devices in everyday objects (walls, toilets, fridges, clothes, packaging), tiny sensors, powerful processors and storage scattered everywhere. According to Gary Small, the Director of the Memory & Aging Research Center at the Semel Institute for Neuroscience & Human Behavior at UCLA [35], the Digital Natives are cradled in technology; they are intuitively tech-competent, and they explore and try things out. Multi-tasking allows them to gratify themselves instantly and put off long-term goals. Competing simultaneous tasks often provide a superficial view, rather than an in-depth understanding, of information. Educators complain that young people are less efficient in their school work. Chronic and intense multitasking may also delay adequate development of the frontal cortex, the area of the brain that helps us see the big picture, delay gratification, reason abstractly and plan ahead. Multitasking leads to a short attention span and to errors in decisions and judgment. The Digital Natives seek instant gratification, praise and recognition.
They have been getting attention and encouragement throughout their formative years and have a strong feeling of entitlement; they challenge authority. Many teens feel that they are invincible. Since the regions of the brain responsible for empathy develop in the later stages of adolescence, too much exposure to the Internet and computer games, rather than real face-to-face interaction, may inhibit this development, and the brain's neural pathways may never catch up. Today's teens "may remain locked into a neural circuitry that stays at an immature and self-absorbed level, right through adulthood" [35]. Yet, on the other hand, they are very social and constantly communicate. They are used to texting each other any minute, they keep up with hundreds of friends on Twitter or Facebook, and they switch seamlessly from phone to text, to chat, to email, to social networks, to reality. They easily create new relationships (mostly online) and maintain many relationships (mostly weak and shallow, just keeping in touch and exchanging information, rather than deep empathy and support). They are strongly peer-oriented within their own age group. They are smart, competent and very efficient in achieving their goals when motivated, able to locate and mobilize a lot of resources and people for the purpose at hand.

1.3 Implications for Learning
The need for quick gratification is one of the features of the Digital Natives that has important implications for their learning. It becomes hard to teach subjects involving the development of complex knowledge structures and demanding a lot of exercise, such as math [27]. To deal with this problem, new learning environments have to make the learning of complex skills more gratifying for the Digital Natives. There have been successful experiences since the 1980s with game-like environments to train math skills. This direction is very promising, but the design of games that are both engaging and effective for learning is rather complex, and the task of covering the entire math curriculum with games is daunting. Yet, Web 2.0 allows for massive participation. Now it is possible for teachers, parents, volunteers and even children to create and contribute their own educational games. With a wide collective effort, it may become easier to create a wide range of gratifying games that allow learners to practice their math skills. Specially designed games by neuroscientists that put emphasis on complex goals and strategies and have a significant impact on a young person's frontal brain lobes [35, p. 37] will be part of the increasing pool of educational games. In Education there has been long-standing research in the area of Problem-Based Learning, inspired by the wish to make learning more relevant and contextualized. This research is becoming increasingly important now, with the radical increase in the availability of user-contributed content. Digital Natives learn mostly in context, in response to a (perceived) demand, or to solve a particular problem. They learn "on the go", in multi-tasking mode. They search the Internet or YouTube to find information, videos, games, or any related materials about whatever they are curious to learn at the moment. Alternatively, they scout through their social networks to find a person who may be able to help. Therefore, multiple, fragmented learning experiences happen in parallel, in no particular order, always in the context of some application or problem. These learning experiences are relatively short (due to the short attention span of the learner). The retention of whatever is learned is unclear [35, p. 34].
There is no perceived need to learn or memorize information for later use, since they can always repeat the search experience when needed. Generally, the motivation for learning is to satisfy a short-term goal; it is "solution-driven", rather than "learning in principle". Often the motivation for learning is social, e.g. finding a curious fact to impress peers, or helping with a task that a group to which the learner currently belongs has undertaken together (e.g. finding a strategy for a particular type of attack in an online multiplayer game, or a group project in class). While the new mode of learning of the Internet generation may be considered a sign of decline into superficiality by some, others [21] see it as a natural evolution in our collective knowledge development. The next section discusses this evolution in order to persuade the reader that this mode of learning is not a "problem to be fixed", but a trend to be aware of, to accept and to adapt to.

1.4 The Evolution of Collective Knowledge
In the distant past, collective human knowledge was smaller in volume and it was possible to have universal scholars. With the gradual growth of collective human knowledge in the 18th and 19th centuries, it became impossible for an individual to be knowledgeable in many areas at once, and the specialization of knowledge into disciplines started. This is when the classical sciences emerged. The division into disciplines was captured by Dewey in his library cataloguing system [47]. The 20th century brought about an explosive development of scientific knowledge and further divisions between sub-disciplines, with new areas of specialization emerging (e.g. cell biology, nuclear physics), and divisions between theoretical and applied sciences and engineering. Towards the end of the 20th century, in tandem with the development of the Internet and the possibility to share research results faster, the speed of research development increased. Finding information across disciplines became much easier, and this opened possibilities for people to make links that would have been more difficult to make earlier. The most interesting and productive areas of research are increasingly on the boundaries of different areas and disciplines; new frontiers and crossovers among areas emerge constantly, and some form new disciplines and areas. Currently, we are witnessing an explosion of interdisciplinary knowledge, or a rapid growth in the "long tail of knowledge" (see Fig. 1).

Fig. 1. The Long Tail of Knowledge.

The long tail consists of new interdisciplinary areas in which only a few people are competent. As some of these areas become important and focus some collective attention, the number of people working in them increases and these areas move slowly towards the beginning of the curve (to the left in Fig. 1). One can argue that this process has always been in place. Yet, the growth of the tail has never been so fast before. In Canada, for example, this process has necessitated restructuring of the way the Natural Sciences and Engineering Research Council (NSERC) grant-selection committees review proposals, using a conference model, with members regrouping into subcommittees to discuss individual proposals. There is an increasing need to teach knowledge from a wide range of interdisciplinary areas not covered by the existing undergraduate courses.
Educational institutions struggle with the problem of fitting into their programs both "classical" discipline knowledge and knowledge in the emerging interdisciplinary areas that students need to find jobs. Producing learning materials in such a wide variety of areas is costly, and delivering education for the "long tail" is only possible if the production costs are close to zero [23]. Web 2.0 (the Participative Web) offers a solution by mobilizing free contributions from users. Many laymen are self-taught experts in some particular narrow area of their keen interest, without any formal education in the subject under which the area would traditionally be classified. For example, on Wikipedia one can find an expert on a particular type of battleship from the time of WWI who is neither an engineer nor a historian by education. Many such self-taught people are avid writers on Wikipedia, engaging in discussions with professionals, or providing "expert" answers on Google Answers. Formal accreditations are not required to become an author. The question "Why do people contribute to systems like Wikipedia?" arises. From an economic perspective, some incentives may be necessary to mobilize contributions. Wikipedia works based on a reputation economy [12] similar to that of the scientific research community or the open source community. But this is not the only possible model. The Internet also provides incentive structures that allow experts to earn money by contributing their knowledge (e.g. Google Answers). It is not clear in general which incentives are suitable for which community. Wikipedia provides a possible optimistic model of how the process of long-tail knowledge creation and learning may happen in the future – in a collaborative, democratic, no-credentials-necessary, "wisdom of crowds" style, which is self-correcting, using discussion, social negotiation of meaning, and selection. The critics of this model point out that the quality of Wikipedia articles can be low, especially in areas where there is no agreed-upon knowledge or facts. Unfortunately, relying on volunteer contributors without credentials may lead to articles that are contentious or simply false. If these articles happen to be in the "long tail", and therefore not subjected to thorough scrutiny and discussion, their falsehood persists. This makes participatory media, like Wikipedia, somewhat problematic as an educational resource, because studies have found that young people have difficulty assessing the quality of information sources they find on the Web [22], [13]. However, as Forte and Bruckman state [13], "The decision of some educators to outlaw resources like Wikipedia in school does not prevent students from using it and, in fact, fails to recognize a critical educational need." Their results suggest that there is potential for students to learn how to evaluate information sources like Wikipedia by participating in their production or in similar publishing activities. Wikipedia is one of the Web 2.0 applications that provide a communication platform for knowledge creation, negotiation and learning through the massive participation of learners, teachers, parents, educators and experts. There are others, like YouTube, Facebook, and immersive environments like SecondLife, which provide close-to-virtual-reality experiences, very suitable for training manual skills.
The openness of these environments to user-generated content is crucial to enable the diversity of both content and participants, and thus to enable myriads of possible connections and recombinations – both between different pieces of content and between humans: between learners, between learners and teachers, learners and experts, learners and parents. The Participative Web allows a mode of learning that is well suited to the learning needs of the Digital Natives. It allows learning as a result of opportunistic search on demand (similar to Googling), for a purpose – to fill a gap or to accomplish a task that arises at a given moment, in a multi-tasking mode. Search results involving rich media are instantly gratifying, in addition to the gratification of quickly obtaining content that satisfies a learning need or purpose. The Social Web facilitates knowledge exploration through browsing from one content item to another using tags and links, and from person to person using social networks. In these new conditions in society and technology, the role of educational institutions changes:
1) From teaching deep domain knowledge in an array of disciplines towards teaching more general or meta-knowledge – principles that apply across areas, methodologies for answering research questions, and strategies for searching effectively and evaluating the credibility of information found.
2) Providing a physical social environment of peers. The university creates a notion of a "cohort", of others just like the learner, at the same stage of learning, so that they can communicate, collaborate, share and compare among themselves.
3) Providing motivation or incentives for learning, and certification / credentials to students, becomes an increasingly important part of the role of educational institutions, especially universities [20]. Interestingly, while credentials don't matter on the Internet, they are very important in the real world to get a job. In fact, credentials will continue to matter in the real world, and will only increase in importance. The reason is that, since, as mentioned before, the quality of learning that one can receive on the Participative Web cannot be guaranteed, credentials are required to ensure that job applicants have the needed skills and knowledge. This leads to an increasing competition between universities for high ranking and reputation.

1.5 Implications for the Design of Social Learning Environments
Based on the discussion in the previous sections, we can draw two main implications regarding the design of Social Learning Environments.
1) Learner-Centered, in Context: The Digital Natives initiate their learning experiences. They are purpose-driven, self-centered, and should always feel in control. Thus Social Learning Environments are no longer stand-alone, isolated systems that "teach" the user. Instead, like epiphytes, they harvest and connect existing resources – content and people – from the Participative Social Web. Like search engines or "knowledge navigators", they respond to a learner's query to provide the best learning resources available. The results are ranked in an order that depends on the learner, her context, purpose, and pedagogy. The results can also be sequenced, or the accompanying links or tags re-adjusted, so that the learner can maximize her browsing exploration around the results. Decisions or adaptations made for the benefit of the learner should be invisible; the user has to be in control and steer these decisions / adaptations. Therefore a social learning environment needs to:
• Help the learner find the "right puzzle piece" of knowledge.
• Help the learner find the "right" people – to collaborate or play with, to teach the learner, or to help find the answer, the missing "right puzzle piece".
2) Make Learning More Gratifying: Digital Natives cannot be easily coerced into learning, unlike previous generations. They need to be convinced, motivated to explore, and rewarded for achievement. Their need for instant gratification can be exploited, and they can be "seduced" into learning by providing the right amounts of challenge, achievement and reward, similar to how players of online games are seduced into striving to achieve higher levels of skill. Learning happens both by consuming and by producing knowledge. For example, contributing to the collaborative writing of Wiki articles can be an effective way of learning [13], and contributing to an online discussion forum is widely acknowledged as a valuable learning experience. However, engaging learners in the collaborative production of knowledge, in discussion or writing, is not easy. Therefore a social learning environment needs to:
• Create a feeling of achievement / self-actualization.
• Tie learning more explicitly to social achievement related to status / reputation in the peer group.
• Tie learning more explicitly to social rewards in terms of marks and credentials.
To fit within the page constraints of this paper, the next few sections discuss only work that addresses the user-centered challenge in tandem with the support-social-learning-in-context challenge or the gratification challenge, rather than providing a full overview of related work.

2 SUPPORT SOCIAL LEARNING IN CONTEXT
To support social learning in context, systems need to support learners in finding the knowledge that is right for them and the people who can help them learn – collaborators, teachers, and helpers. This section provides an overview of approaches and techniques used for this purpose, emphasizing those that are user- or learner-centered, supporting a self-centered user with his or her own purpose.

2.1 Support in Finding the "Right Stuff"
Many questions arise even when one reads the title of this section. What is "right"? "Right" with respect to the particular learner (personalization); "right" with respect to the context (related to the particular purpose, task, or query); "right" with respect to the content (what kind of content, what media, what timing / scope); "right" with respect to pedagogy (what sequence of content, what sequence of activities, what style of presentation, what difficulty level, etc.). These questions define open dimensions in the design of adaptive learning environments and have been explored extensively in the last 20 years. What is the "stuff"? Generally, it means content, but there can be different kinds of content. "Passive" content includes materials, pages, videos, discussion forum articles, and blogs. "Active" content includes systems, like games, Learning Management Systems (LMS), YouTube, or SecondLife, and functionalities within a system, e.g. communication or collaboration spaces (chat, wiki, forum or blog).
How do we distinguish between different kinds or areas of knowledge represented in the content? What is "context"? What characteristics of context should be represented? What are the characteristics of a purpose or of a learner? What pedagogies, what didactics, what learning styles?

2.1.1 What is <whatever>? Representing Semantics
To be able to find the right stuff for the context, it is necessary to be able to distinguish between different aspects and characteristics of content, context, learner and pedagogy. For this it is necessary to agree about the semantics. A lot of research has addressed semantic interoperability on the web. Some form of annotation, or meta-data, is necessary to distinguish among the content and to be able to search. However, who defines the standard "dictionary" to be used in the meta-data? Many meta-data standard proposals have been developed specifically for learning objects, e.g. MERLOT, LOM, etc. However, they allow capturing mostly simple semantics. To allow for richness and consistency in the annotations, ontologies have been proposed as the basis of standards for content, learner characteristics, pedagogies and learning context [30]. At the heart of the semantic web, ontologies allow complex objects and the relationships among them to be represented. Therefore, ontologies allow for very powerful representations of meaning in any domain and enable sophisticated reasoning, recommendation and adaptation mechanisms. The problem is that ontologies are very hard to engineer, despite the availability of editing tools like Protégé. Just as MS Word does not help much in writing a meaningful essay, creating a consistent map of meaning in a given domain depends on the skill and art of the knowledge engineer and always reflects his or her individual viewpoint and understanding of the domain. As soon as the domain gets more realistic, complex and interesting, people's viewpoints no longer agree and they start interpreting things in different ways. Different communities use different naming conventions. It is hard or impossible to agree on the semantics of relationships – people interpret them in different ways, even when they agree that there is a relationship between two entities. In reality, most of the systems that claim to use ontologies are based on taxonomies or topic maps (categories connected with certain types of relationships). These simpler semantic representations allow straightforward mapping of terms using applications like WordNet. Just like translating a text in a foreign language word by word using a dictionary, this kind of mapping excludes the semantics of the relationships among objects (the relations can be mapped in name only). Research is currently going on to allow for more advanced structural ontology mapping, but the practical application of such mappings is still limited. Even if there is agreement among designers about using a particular simple ontology, from a user's point of view it imposes a cumbersome and inconvenient way of organizing or finding content. I will illustrate this with an experience with an early version of our Comtella system.

2.1.2 Finding Stuff with an Ontology-based Interface
Fig. 2. The Comtella search interface forcing the user to select a search category based on an ontology.

The first version of Comtella [5] was developed in the MADMUC lab at the University of Saskatchewan in 2002/2003.
It was a peer-to-peer system, similar to the music-sharing systems Kazaa and LimeWire, that allowed graduate students and faculty in the Department of Computer Science to share research articles of interest. In order to share an article found on the web, the user had to enter the URL and select the content / semantic category of the paper. Categories were used to allow searching, since Comtella did not support full-text search of the papers. We used an "ontological" approach to content organization by adopting the ACM category index, which we considered a standard content indexing scheme for computer science. Yet, finding the required category in a subject category index is challenging, as any author who has ever published a paper with ACM or IEEE knows. So we simplified the ontology by selecting only the top categories corresponding to areas in which our department has active research projects and by limiting the depth of the topic hierarchy to 3 (instead of 6 – the depth that the ACM category index reached at that time). We then organized the ontology as a set of three hierarchically nested menus (see Fig. 2), which were used to annotate a new contribution to the system and to search for articles that had been shared. The system had very little use – the users indicated that it took too much effort both to categorize a contribution and to find a contribution using the menus. The biggest difficulty for users was not knowing the "map" of categories: to access the right level-3 menu, one had to make the right choices in the level-1 and level-2 menus. Some of the 6 top categories were vague and the users had no clue where the particular level-3 category they were looking for was hidden. This confusion resulted from the absence of a common agreement about how categories in Computer Science should be structured. The ACM category index, like any other category list, even though likely resulting from the dedicated work of a committee of experts, reflects the particular viewpoint of the committee that designed it. Luckily, in the design of the top-level menu, we had included a category "All" (Other) in case the user was not able to find the category they were looking for. Interestingly, most of the user contributions were labeled under this category. The users explained later that this was the easiest way to share an article. This also emerged as the easiest way to search for articles, since all the articles were in this category. Yet, it worked well only because there weren't that many articles shared and the list of results was tractable. However, categorizing everything under the category "All" is the same as not having categories at all. This is an anecdotal confirmation of the "everything is miscellaneous" postulate by David Weinberger in his recently published book [47], which discusses whether it is possible to impose one ontology or unified classification schema on diverse and autonomous users. Weinberger answers this question negatively, with many convincing examples, and argues for collaborative tagging, a simpler, non-AI-based approach supporting "findability" rather than "interoperability".

2.1.3 Annotating and Finding Content through User Tagging and Folksonomies
In contrast to the pre-defined "ontologies", which users / developers have to adopt, the main idea of collaborative tagging is to let users tag content with whatever words they find personally useful.
In this way, for example, the tags "X1" and "My Project" may be perfectly meaningful tags for a given individual at a given time, even though they have no meaning for anyone else and probably won't have any meaning for the same user a couple of months later. However, users who have a lot of content will have to use more informative tags to be able to find their own stuff at a later time. Also, users who share content will use tags that they expect to be meaningful to other users; otherwise it does not make sense to share the content at all. In this way, with many users tagging the same documents, the pool of tags added will capture some essential characteristics of the document, possibly its meaning. Thus "folksonomies" emerge as an alternative to ontologies, developed by collaborative communities of users tagging a pool of documents. Folksonomies provide a user-centered approach to semantic annotation because selfish users tag for themselves. Tags are very easy to add; there is no need to agree on the semantics, taxonomy, relationships or metadata standard in advance. The tags can express different semantic dimensions: content, context, pedagogical characteristics, learner type, and media type. The tag clouds that are found in many Web 2.0 systems provide a summary of the documents in the repository. They are useful to guide users in their browsing, and they provide a very intuitive and easy interface for search without the need for typing, by just clicking on a tag. Different font sizes indicate the popularity of each tag, which gives an idea of the semantics of the entire content collection at a glance. For a person searching for something, it is acceptable if a document of potential interest is not found because it was tagged using a different language or different terminology (or ontology). The abundance of content guarantees that something suitable will be found. Similarly to the collaborative knowledge negotiation process in Wikipedia, the abundance of actively tagging users guarantees that the quality of tags will improve under public scrutiny, especially for documents that are not too far to the right on the long tail (where too few users are interested). However, the tags in a folksonomy are not machine-understandable. From the machine's point of view, the tags are discrete labels with no relationships among them. The tags can be used for retrieval but not for machine reasoning and decision-making. A machine cannot say how two documents tagged with the same tag(s) are semantically related to each other or why they are similar. So it would be, for example, very difficult to create a sequence of content from tagged learning objects. Tags are good for a "one-shot" retrieval by a user but are insufficient for inference or reasoning.
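To make this contrast concrete, here is a minimal sketch of what a machine can actually do with a folksonomy: build an inverted index from tags to documents for one-shot retrieval, and weight tags by popularity, as a tag cloud does. The document ids and tags are invented for illustration, and the sketch is not the implementation of any particular system; note that nothing in it tells the machine how two tags or two documents are related.

```python
from collections import defaultdict

# Hypothetical tagged documents: each entry maps a document id to the free-form
# tags its users have added (no agreed-upon vocabulary or relationships).
tagged_docs = {
    "doc1": {"python", "tutorial", "my-project"},
    "doc2": {"python", "recursion"},
    "doc3": {"knitting", "tutorial"},
}

# Inverted index: tag -> set of documents. This is all a folksonomy gives the
# machine; the tags are opaque labels with no semantic relations between them.
index = defaultdict(set)
for doc_id, tags in tagged_docs.items():
    for tag in tags:
        index[tag].add(doc_id)

def retrieve(tag):
    """One-shot retrieval: return the documents carrying this exact tag."""
    return index.get(tag, set())

def tag_cloud_weights(max_font=40, min_font=10):
    """Scale each tag's font size by its popularity, as in a tag cloud."""
    counts = {tag: len(docs) for tag, docs in index.items()}
    top = max(counts.values())
    return {tag: min_font + (max_font - min_font) * n / top
            for tag, n in counts.items()}

print(retrieve("tutorial"))      # {'doc1', 'doc3'} (set order may vary)
print(tag_cloud_weights())
```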
There has been some interesting work comparing whether folksonomies capture the semantics of a document as well as automatic term extraction does. Brooks and Montanez [3] did an experiment with the 250 most popular tags in Technorati. They grouped documents that share tags into clusters and then compared the similarity of all documents within a cluster. Their hypothesis was that documents in a cluster that share a tag should be more similar to each other than a randomly constructed set of documents. As a benchmark, they also compared clusters of documents known to be similar (based on Google search results for the tag). Finally, they constructed tags automatically by extracting relevant keywords from the documents, and used these tags to construct clusters of documents. This was intended to determine whether humans were better at categorizing articles by tagging than automated techniques (semantic lexical analysis). The results showed that articles sharing a tag had a pair-wise similarity of 0.3, and the articles considered similar by Google, 0.4. Automated tagging performed best: the articles that shared 3 words with the highest TF-IDF yielded a result between 0.5 and 0.7 (mean 0.6). The authors then applied agglomerative clustering over the tags and obtained a hierarchy of tags very similar to a hand-made taxonomy of tags, which can be browsed. This is an interesting and potentially useful result. Automatic ontology generation based on tagged documents may be a promising direction of research, avoiding some of the pitfalls in ontology research so far (the need for agreement among experts, and the development of standards that are then imposed on others to follow). Yet, there is no guarantee that the automatically generated ontology will be understandable to humans and that humans will agree with it. On the other hand, agreement may not be necessary since, for practical reasons, the ontology is better used by the machine and hidden away from human eyes. The humans can deal with tags, which are user-friendly; the ontology should stay in the background to support machine reasoning and more complex inference and adaptation.

2.1.4 Combining the Strengths of Ontologies and Folksonomies
The same authors [4] also proposed an interesting approach that combines the powers of tags and ontologies and puts the human in the center. Machine learning / data mining is used to extract meaningful tags from text. The machine-generated folksonomy is augmented using existing ontologies, a process referred to as "snap to grid" by Gruber [15]. The resulting tags are provided as suggestions to the user, who decides whether to add them or not. Ultimately, the ease of use provided by the tags is preserved; the user is in control, empowered by an invisible intelligent mechanism and ontologies in the background [4]. These are exactly the main features that we want in social learning environments: user-centered (supporting a selfish user), easy to use, with intelligence, adaptation and recommendation provided invisibly in the background.

2.1.5 User Interfaces Supporting Exploratory Search
An important problem remains: how to develop user interfaces that allow convenient search and at the same time convey an intuitive idea about the structure of the information, so that the user can navigate by browsing? As shown in Section 2.1.2, interfaces based on ontologies are not a good solution. It is better to develop interfaces that reveal a structure focused on the perceived purpose of use and make explicit only those dimensions of knowledge or information that are relevant for that purpose. There may be a need to create such interfaces and structures for a variety of purposes; in this case, the environment should be able to decide automatically which interface should be activated depending on the anticipated or explicitly declared purpose of the user. Next, I will briefly present an illustration – an interactive visualization of social history that allows a user to perform exploratory search in a blog archive. The visualization is shown in Figure 3.
Fig. 3. iBlogVis – Interactive blog visualization supporting exploratory search. It uses three dimensions (semantic categories): time (horizontal axis), content (posts, vertical axis, upwards), and people (comments of readers, vertical axis, downwards).

In the space defined by these three dimensions, each post is represented as a dot on the horizontal time axis. A line stretching up from each post indicates the length of the post. A line stretching down from each post indicates the average length of the comments made on the article. The downward line ends with a bubble whose size represents the number of comments received. In the upper (content) part of the space, tags indicate the semantics of posts written in the corresponding period of time, while in the lower (people) part of the space, user names indicate the commentators of the posts during that period. In a case-study evaluation of the interactive visualization, users found it easy, intuitive and efficient to find blog posts of interest in a large archive [19]. What makes this approach interesting is that it allows the combination of several principally different ways of searching for information (corresponding to different purposes) that are not possible with current state-of-the-art tag-based systems: by time, by content and by social interaction history. Providing a map-like overview of a blog archive allows the user to filter a large amount of information and discover interesting posts. It adds the power of query-based search to the freedom of browsing and the usability of a tag-oriented user interface.

2.2 Finding the "Right" Content for the Learner
Social recommender systems have evolved as a practical approach for providing recommendations without the need to understand the semantics of the domain of the content or the needs of the user. Based on Collaborative Filtering algorithms [1], they do not require an explicit model of the features of the content, or of the knowledge / preferences of the individual user. The recommendations are based on finding users with a record of choices in the past (e.g. purchases, ratings, views) that is similar to the past record of choices of the user for whom the recommendation is generated. All that is needed is to keep a history of each user's choices. Then a matrix of the choices of all users is compiled, and a correlation is computed between every two users and groups of users. Then, on the basis of the ratings made by users that are in the same correlated group, the system recommends items that people in the group have liked to users who haven't evaluated these items yet. This approach relies on the fact that, as the number of people and ratings grows, it becomes easier to find users with similar tastes (highly correlated), and the quality of predictions for these users will be high. Of course, the tastes of people in different areas differ (e.g. A and B may have similar tastes in music, but very different views on climate change), so a general correlation (over many areas) between two people is unlikely to be high. However, in a limited domain, or on a global scale with an extremely large number of people and ratings, it is possible to find fairly good matches and generate good recommendations.
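As a rough illustration of the collaborative filtering idea just described, the sketch below computes Pearson correlations between users over the items they have co-rated and predicts a rating for an unseen item from the ratings of positively correlated "neighbors". The users, items and ratings are invented; real recommenders add normalization, significance weighting, and hybrid content-based fallbacks.

```python
from math import sqrt

# Hypothetical rating history: user -> {item: rating}
ratings = {
    "ann": {"i1": 5, "i2": 3, "i3": 4},
    "bob": {"i1": 4, "i2": 2, "i3": 5, "i4": 4},
    "cat": {"i1": 1, "i2": 5, "i4": 2},
}

def pearson(u, v):
    """Correlation of two users over the items they have both rated."""
    common = set(ratings[u]) & set(ratings[v])
    if len(common) < 2:
        return 0.0
    ru = [ratings[u][i] for i in common]
    rv = [ratings[v][i] for i in common]
    mu, mv = sum(ru) / len(ru), sum(rv) / len(rv)
    num = sum((a - mu) * (b - mv) for a, b in zip(ru, rv))
    den = sqrt(sum((a - mu) ** 2 for a in ru)) * sqrt(sum((b - mv) ** 2 for b in rv))
    return num / den if den else 0.0

def predict(user, item):
    """Weight the neighbors' ratings of the item by their correlation with the user."""
    sims = [(pearson(user, v), v) for v in ratings
            if v != user and item in ratings[v]]
    sims = [(s, v) for s, v in sims if s > 0]
    if not sims:
        return None  # cold start: no positively correlated neighbor rated the item
    return sum(s * ratings[v][item] for s, v in sims) / sum(s for s, _ in sims)

print(predict("ann", "i4"))  # prediction from positively correlated neighbors who rated i4
```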
The problem is that if there are not enough users and ratings in the system (a sparse rating matrix), the quality of predictions goes down dramatically. To deal with this "cold start" problem, social recommender systems usually deploy some form of content-based recommendation, based on a preference profile of the user, either acquired interactively from the user or based on demographic data. For this they require some semantic representation of the domain of the content and of the preferences or interests of the user, based either on tags, keywords, or a simple ontology. Most currently existing recommenders are therefore hybrid. Using collaborative filtering to recommend learning content raises some problems. In a sense, the main idea of this approach is to "follow the people similar to you". However, in education it is often desirable not to follow people similar to oneself, but to follow people one strives to become similar to, e.g. role models, teachers, advisors, experts. Content may often be recommended for pedagogical reasons, due to prerequisites that need to be learned. Existing approaches for content sequencing [6] can be applied, but these are entirely content-based. Approaches that combine collaborative filtering with pedagogical or domain requirements (e.g. prerequisite relationships) still need to be developed. It is also important to make the learner aware of how the recommendations are generated, and to allow for some control by the learner. Brusilovsky and Weber's ELM-ART [42] system allows for a simple visualization and explanation of the recommended content. From a user's point of view, a content-based recommendation is more intuitively understandable than a collaborative filtering-based one. Users who generally know how to search with Google can understand that, by entering their interests or preferences, the system will constantly filter information that is relevant to these interests, like a continuous Google search in the background. However, using demographic data or collaborative filtering to produce recommendations is non-transparent. It is hard to accept a recommendation based on the statistical average of an unknown group, or resulting from the ratings of an unknown user correlated with the current user. Moreover, in the absence of enough ratings, the recommendation can be volatile, with too much depending on random choices. This volatility can be annoying for users, who have no control to correct the error. One can imagine that it would be even more harmful for learners to be recommended irrelevant stuff, since they may not even be aware that it is irrelevant. In order to trust the recommendation, the learner should be aware of how it was generated. There has been some recent work on making recommender systems more transparent to users and on allowing users to have a say about who should influence their recommendations [16], [45], [46]. KeepUp is a hybrid recommender system for feeds based on a modified collaborative filtering algorithm [44] that works particularly well for news items, which do not have enough ratings yet. The KeepUp system provides the user with an interactive visualization (see Fig. 4) that shows which other users are correlated with him or her. It shows how much each of these users influences the recommendations that the user receives, and it allows him or her to change the degree of influence, i.e. to override the correlation that was automatically computed by the system. This capability would be particularly important in an educational application, since it allows the user to decide to "follow" particular other users in the choices they make, even if their past history of choices has not been similar.
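In the spirit of KeepUp's adjustable influences (this is not its actual algorithm, only a sketch under assumed names and values), the learner could be allowed to override the automatically computed neighbor weights before they are used to aggregate the neighbors' ratings:

```python
# System-computed correlations, then a learner override: "I want to follow cat."
influence = {"bob": 0.9, "cat": 0.1}   # values as computed automatically (illustrative)
influence["cat"] = 0.8                 # learner pulls "cat" closer to the center

def predict_weighted(item, neighbor_ratings, weights):
    """Weighted average of neighbors' ratings, using the learner-edited weights."""
    pairs = [(weights[v], r[item]) for v, r in neighbor_ratings.items()
             if v in weights and item in r]
    total = sum(w for w, _ in pairs)
    return sum(w * x for w, x in pairs) / total if total else None

neighbor_ratings = {"bob": {"i4": 4}, "cat": {"i4": 2}}
print(predict_weighted("i4", neighbor_ratings, influence))  # now leans towards cat's opinion
```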
Fig. 4. KeepUp: Interactive visualization of a social recommender system's "neighbors" and their influence on the recommendations of the user (the black dot in the center). The user can pull the red dot ending each beam towards the center or away from it, thus changing the strength of influence the corresponding person on the periphery has on his or her recommendations. The blue beams from the center towards the periphery show the strength of influence the user has on other users on the periphery.

2.3 Finding the "Right" People
Supporting social learning means helping the learner find the right people for her to learn from or to collaborate with at the current time, in the current context and for the current purpose. Students find helpers and collaborators in face-to-face learning environments (e.g. finding someone to help on an assignment or project, or a senior student to chat with or exchange information about the class). With the emergence and increasing penetration of social network applications, data about relationships among users and their profiles is becoming readily available for users to browse and find suitable people for particular purposes in their own social networks and those of their friends. Yet, it is still not easy to find the right person for a given purpose in a given context. Finding helpers online was suggested by Hoppe [17] as a matchmaking process conducted by a centralized system that compared the student models and found good matches using certain criteria. This idea was implemented in a real system and elaborated further by Collins et al. [9] in the PHelpS system and later in iHelp [14], where learners could be matched using various criteria and their combinations: knowledge, availability, language, state of completion (e.g. of an assignment), cognitive styles, and even horoscope. The general approach was that an "intelligent" system, guided by a teacher who sets the matching user characteristics and rules, decides who the best match for a learner is. In essence, this centralized approach is used by thousands of existing online dating services, which differ only in the details of their matching algorithms, the user features they collect, and the population of users they attract. Recently, in the spirit of the Participative Web, some dating services, e.g. OkCupid (www.okcupid.com), are pioneering a trend that places the user in the center, letting him or her create the matching algorithm and select the personal features he or she cares about. Many people would appreciate the opportunity to create the matching algorithm. But the ability to incorporate a particular physical trait or other criterion in a matching mechanism for dating depends on the particular user and the importance that he or she puts on it in relation to more generic criteria such as education, looks, financial state, sense of humor, interests, activities, etc. The factors participating in the calculation and their weights depend on the individual, the context and the purpose.
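A minimal sketch of such learner-controlled matchmaking is shown below: the learner chooses which criteria matter and how much, in the spirit of the iHelp-style criteria listed above. The candidate profiles, criteria and weights are invented for illustration; a real system would draw them from student models.

```python
# Hypothetical candidate helper profiles, with per-criterion scores in [0, 1].
candidates = [
    {"name": "peer_a", "knowledge": 0.9, "available": 1.0, "language": "en"},
    {"name": "peer_b", "knowledge": 0.6, "available": 0.3, "language": "en"},
    {"name": "peer_c", "knowledge": 0.8, "available": 0.9, "language": "fr"},
]

# Weights chosen by the learner for the current purpose (e.g. urgent assignment help).
my_weights = {"knowledge": 0.5, "available": 0.4, "language": 0.1}

def match_score(candidate, weights, language="en"):
    """Combine per-criterion scores using the learner's own weights."""
    score = weights["knowledge"] * candidate["knowledge"]
    score += weights["available"] * candidate["available"]
    score += weights["language"] * (1.0 if candidate["language"] == language else 0.0)
    return score

ranked = sorted(candidates, key=lambda c: match_score(c, my_weights), reverse=True)
print([c["name"] for c in ranked])   # best match first, for this learner and purpose
```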
Purpose-Based or Decentralized User Modeling has been suggested as an approach to deal with this problem [28], [31], [38]. Trust and reputation mechanisms, which have been proposed in the area of Multi-Agent Systems, provide an example of simple decentralized user modeling in action. Trust is a simple user model that one person / agent holds about another, and it is constantly updated with experience. It is subjective, reflecting the individual experience of the first person / agent with the second. It is not symmetrical, e.g. A may trust B, but B does not have to trust A. It is also context-dependent; for example, A may trust B as a competent colleague, but not as a driver. Multiple networks of trust exist naturally among people, overlaying their social networks. In contrast, reputation is a shared, aggregated representation of the trust that a group holds about a person or an agent. It can be centralized – each person / agent reports his or her experience to a central authority called a "reputation manager" that compiles the reputation of every person or agent, similar to a centralized user modeling server. Reputation can also be decentralized, evolving from people / agents sharing information about a particular person or agent by "gossiping". If they gossip sufficiently frequently and "honestly", the decentralized reputation developed by each person or agent will converge to the value aggregated by a centralized reputation manager. One very simple trust update formula, among many possible ones, is based on reinforcement learning:

T_new = a * T_old + (1 – a) * e,

where T_new and T_old are, respectively, the new and old values of the trust that the modeling agent has in the modeled agent; e represents the new evidence (the result of an interaction with the modeled agent); and a is the modeling agent's conservatism (0 <= a < 1). Trust, computed in this way, is just a single value between 0 and 1. This is, indeed, a very simple model that can have different meanings depending on the context. For example, a user can develop her trust in an auto-mechanic using this update formula and her experiences with car repair. Another trust value can be computed for a different sub-context – e.g. between the same two agents with respect to the accuracy of billing – or for an entirely different context – e.g. with respect to cooking Thai dishes, if the two users share this context. So trust computed in this way provides an example of fragmented user models – many different trust values computed for different contexts and different purposes. These models could be combined to yield new trust values for broader contexts or purposes; for example, we can define a formula that computes trust in an auto-mechanic from the two values of trust – in the person as a repairman and in the person as an accurate and honest accountant. A trust value can easily be shared between two agents (we say that two agents who share their trust values about a third agent are gossiping). Sharing fragmented user models is essential in a social environment and is an integral part of decentralized user modeling. A new dimension of user modeling emerges in this interaction / sharing: how reliable are agents in sharing their trust values honestly, without strategic modification?
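The update formula above takes only a few lines of code. The sketch below applies it to two context-specific (fragmented) trust values and shows one plausible way to blend in a gossiped value, discounted by how much the receiving agent trusts the sender as a referee (a notion discussed next). The contexts, numbers and the gossip rule are illustrative assumptions, not the mechanism of [39], [41].

```python
def update_trust(t_old, evidence, a=0.7):
    """Reinforcement-style update T_new = a * T_old + (1 - a) * e.
    a is the modeling agent's conservatism (0 <= a < 1); evidence e is the
    outcome of the latest interaction, scaled to [0, 1]."""
    return a * t_old + (1 - a) * evidence

# Fragmented, context-specific trust values held by one agent about another.
trust = {("garage_joe", "car_repair"): 0.5, ("garage_joe", "billing"): 0.5}

trust[("garage_joe", "car_repair")] = update_trust(
    trust[("garage_joe", "car_repair")], evidence=0.9)   # good repair experience
trust[("garage_joe", "billing")] = update_trust(
    trust[("garage_joe", "billing")], evidence=0.2)      # overcharged

def gossip(my_value, reported_value, referee_trust):
    """Blend a peer's reported trust value into my own, discounted by how much
    I trust that peer as a referee (honesty / similarity of judgment)."""
    return (1 - referee_trust) * my_value + referee_trust * reported_value

print(trust)
print(gossip(trust[("garage_joe", "car_repair")], reported_value=0.3, referee_trust=0.4))
```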
We talk about two kinds of trust that an agent can hold in another agent: basic trust (in the capability of the agent, which is context- or purpose-relevant) and trust as a referee. The latter can be further split into trust in the honesty and benevolence of the agent and trust in the similarity between the agents with respect to tastes and interests. For example, if A and B are gossiping about their trust in C (C being a person or a thing, e.g. a movie), A may hold a certain level of trust in B's honesty (i.e. that B is sharing the true value of the trust it holds in C) and a certain level of trust in B's judgment (i.e. that B's criteria for evaluating its experience with C are similar to those of A; for example, that they both have a similar taste in movies). Google's PageRank mechanism and collaborative filtering recommendation can be expressed as trust and reputation mechanisms [40]. In addition, a trust and reputation mechanism can be used to dynamically form communities of like-minded users without any centralized authority [39], [41]. By gossiping and maintaining individual reputations, a community of completely autonomous and self-interested agents can collectively self-organize to create a semi-centralized reputation management system that helps to quickly find good resources and people in the community. The mechanism has been shown to perform well and to reach an equilibrium state in simulations. Currently, we are working on applying a similar trust mechanism in a real social network application, like Facebook. Yet, for the mechanism to work, it is necessary that some agents / users take on certain community tasks – providing computational resources and memory to store a reputation list for the community. Also, each agent / user has to dedicate some resources towards computing trust and to share the trust benevolently with others, even though there may be no immediate payoff for this. To ensure such cooperative behavior, our community formation mechanism involves incentives / rewards that cooperative agents would enjoy. This leads us to the second big challenge faced by social learning environments: making learning more gratifying and motivating participation.

3 MAKING LEARNING MORE GRATIFYING AND MOTIVATING PARTICIPATION
There are different ways to make learning or work more gratifying, which vary from intrinsic to extrinsic [21]:
• Make it game-like, a combination of challenge and fun
• Boost the feeling of achievement by providing constant feedback on performance
• Relate performance to status in the peer group (social reward)
• Relate performance to marks or credentials
Creating learning experiences that are motivating requires careful design of the challenges and rewards provided. This is called "mechanism design" – an area addressed by two disciplines: game theory (a branch of mathematical economics) and computer game design.

3.1 Mechanism Design
In game theory, mechanism design is the art and science of designing the rules of a game to achieve a specific outcome, even though each participant may be self-interested. This is done by setting up a structure of rules in which each player has an incentive to behave as the designer intends. The game is then said to implement the desired outcome. For example, A and B should divide a cake. Of course, each of them wants to have the larger piece. What rules for cutting and choosing should they follow to avoid conflict and ensure fairness? The solution: one cuts the cake, but the other one picks the first piece. Thus, the one who cuts the cake has an incentive to make two equally sized pieces.
There are many applications of mechanism design: the design of markets, auctions and combinatorial auctions; the design of matching algorithms, such as the one used to pair medical school graduates with internships; the provision of public goods; and the optimal design of taxation schemes by governments. The influence of mechanism design can be seen in the structure of auctions, which squeeze potential buyers into making bids that reflect their view of the true value of the goods and prevent them from colluding to pay lower prices. Mechanism design is also applied in creating the rules of encounter in computer games and in the design of "Games with a Purpose". The task of designing a mechanism in a learning / educational setting needs to consider the utility or the personal goals of learners. According to Houle [18], apart from the intrinsic goal of learning new knowledge, or achieving a certain goal, learners may be motivated by social reasons – to seek social contact. The wish for peer- or teacher-recognition, or for achieving a high reputation in the group, falls under "social" motivation. Learners may also be driven by a goal to help others, e.g. to learn knowledge so that they can explain it to their friends, to reciprocate, or to build new relationships through collaboration with others. Similarly, participants in learning communities may be motivated to contribute for a variety of reasons: they may identify with the purpose of the community and want to support it; they may want to earn reputation or social status in the community, or to reciprocate the contributions made by their peers. Learners and participants in online communities may also be motivated by extrinsic factors – getting a credit, a certification, or just a higher grade in the class. Incentive mechanisms can capture social and extrinsic motivations. In designing such a mechanism, one has to create a payoff matrix that defines the rewards for particular actions, aligned with the learner's goal, but also with the teaching goal and certain social / community goals. For example, rewards in terms of points for contributing posts to a discussion forum align with the learner's goal to earn recognition among his or her peers, but also with a teaching goal to stimulate discussion on a topic and a social goal – to ensure a certain level of participation in the forum so that it is attractive to the learners and they come back to check it regularly. It is important that the learner receives timely feedback about his or her performance in terms relevant to his or her goal (e.g. her status, grade in the class, or reputation level), so that he or she can derive satisfaction and self-actualization from the achievement and eventually correct his or her behavior. Next, I present some work on using visualization to provide feedback for users / learners who are socially motivated to earn reputation or reciprocation within a community of learners.

3.2 Social Visualization as a Feedback Mechanism
A social visualization shows certain features of the performance of users in a group or community. The role of a social visualization is to provide feedback on a user's performance and a comparison with the performance of others. It can stimulate social comparison and competition. Social comparison [11] is a very effective motivator for performance. Upward comparison with peers who are better off leads to growth through competition, since the peers who are in a better position serve as role models.
Downwards comparison boosts a user's ego, feelings of achievement and self-confidence. Social visualization allows peer-recognition and provides learners with an opportunity to build trust in others and in the group, thus serving as a visual reputation repository. Further actions of users can be interpreted or rewarded depending on their reputation. Depending on what kind of feedback for the user is required by the incentive mechanism, a social visualization can be designed to show interpersonal relationships among users. For example, if the incentive mechanism aims to reward users for building reciprocal relationships, a visualization that emphasizes the presence or lack of symmetry in the relationships between users can be effective feedback that encourages users to take action and engage in reciprocating behavior. We will see how visualizations providing the kinds of feedback described above were used in the different incentive mechanisms designed for Comtella, the system for sharing articles that was introduced in section 2.1.1.

3.3 Incentive Mechanism Design in Comtella

Comtella went through an evolution between 2002 and 2006, from a community for graduate students and faculty to share academic papers to a community for students enrolled in a class to share course-related web resources. The incentive mechanism also evolved. The first version was based on the assumption that students are motivated by the wish to build reputation (status) among their peers, and it relied on social comparison as a motivational strategy. The second version took into account the dynamics of community needs and the individual differences among students in computing the rewards. The last, third version focused on motivation through self-actualization [26] and common identity [34], highlighting the impact of the student's contributions on the community with esthetically pleasing visual effects. This last version of the mechanism was also based on the assumption that students wanted to build relationships with each other, and it relied on reciprocation and common bond [34] as a motivational strategy. In the next two sections I will give a brief overview of the three mechanism designs and the evaluation results of their effectiveness.

Fig. 5. The first version of Comtella's social visualization. Students were shown as circles, sorted in 4 levels depending on their contributions along 4 different criteria. The viewer could select the criterion by choosing the appropriate radio-button.

3.3.1 Incentive mechanism rewarding student participation with status

The mechanism uses a utility matrix that assigns a certain number of reward points to desirable actions. In the context of Comtella, these are the following participation actions: logging in, downloading / reading an article, sharing a new article, rating an article, and commenting on an article. The reward for each type of action depends on how desirable the action is, which in turn depends on the goals of the designer and moderator of the community. For example, sharing new articles is very beneficial since it provides the community with materials to read and helps to overcome the "cold start" on a given topic. On the other hand, downloading / viewing / reading an article is useful for an individual student's learning of the topic.
Logging in may not directly contribute to the community if the student remains a lurker, but it shows that the individual is keeping track of what is going on in the community, which should be encouraged. The goals of the community designer or moderator may be dynamic, and the rewards given for the corresponding student actions may also be dynamic. The points accumulated by each student through her participative actions allow students to be classified into different status levels, for example, gold, silver, and bronze. Explicit status categorization is used in many customer loyalty programs, such as the Star Alliance group of airlines, which award certain status and related privileges to frequent flyers. The success of this marketing approach (part of a whole range of approaches in Customer Relationship Management) can be explained by social psychology, with the social comparison theory [11] and the theory of discrete emotions (fear). According to the social comparison theory, people strive to achieve higher status not just to gain the privileges (utility) associated with it. Belonging to an exclusive elite club increases the individual's self-esteem because of downwards comparison with people who have not achieved the same status. It is important that status can be lost as a result of inactivity. Airlines usually award a certain level of status for one year, based on the miles accumulated during the previous year. According to the theory of discrete emotions (fear), people are generally more motivated by the fear of losing something they have than by acquiring the same thing if they don't already have it. Therefore, people who have achieved a higher status, even once, will try to avoid losing it.

In the first version of Comtella, which was used to support a class of students taking an Ethics and IT course, we defined three status levels: Gold (the top 10% of the students based on their participation points), Silver (the next 60% of the students), and Bronze (the remaining 30% of the students). The status was valid for one week and was based on the points collected from the participation actions of the student during the previous week. The status was displayed in the interface of the application as a shiny metallic card in the top left corner. By clicking on it, the student saw their level of participation in each of the rewarded activities compared to the top student for that activity. A social visualization accessible from the main interface of the application showed all of the students as stars of different sizes in a night sky, to encourage students to engage in social comparison. The students could choose to view the stars sorted by status, by number of newly contributed papers, by number of downloaded papers, and also by login frequency (see Fig. 5).

The incentive mechanism was introduced in Comtella in the middle of the term. The students used Comtella for 6 weeks without the incentive mechanism and for 4 weeks with the incentive mechanism. We saw a dramatic increase in the overall number of contributions during the first two weeks following the introduction of the mechanism and a decline in the next two weeks, but the contributions nevertheless remained at a level higher than in most of the weeks before the mechanism was introduced.
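For concreteness, the weekly status assignment described above can be sketched roughly as follows. Only the Gold / Silver / Bronze split (top 10% / next 60% / remaining 30%) and the one-week window come from the description above; the point values per action and all names are illustrative assumptions of mine, not the actual Comtella payoff matrix.

# Sketch of a weekly points-to-status assignment (illustrative assumptions).

POINTS = {"login": 1, "download": 2, "rate": 3, "share": 10}   # hypothetical payoff matrix

def weekly_points(actions: dict) -> int:
    """Sum the participation points a student earned from last week's actions."""
    return sum(POINTS[a] * count for a, count in actions.items())

def assign_status(points_per_student: dict) -> dict:
    """Rank students by points and split them 10% Gold / 60% Silver / 30% Bronze."""
    ranked = sorted(points_per_student, key=points_per_student.get, reverse=True)
    n = len(ranked)
    gold_cut = max(1, round(0.10 * n))
    silver_cut = gold_cut + round(0.60 * n)
    return {s: ("Gold" if i < gold_cut else "Silver" if i < silver_cut else "Bronze")
            for i, s in enumerate(ranked)}

# Example: three students' actions during the previous week.
week = {"ann": {"share": 5, "rate": 4, "login": 7},
        "bob": {"share": 1, "rate": 10, "download": 6},
        "cem": {"login": 3, "download": 2}}
print(assign_status({s: weekly_points(a) for s, a in week.items()}))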
We also observed a correlation (0.66) between the number of new contributions shared by individual students and their accesses to the social visualization, which displayed the students compared by number of new contributions as a default view [37]. This shows that the students engaged in social comparison. In fact, the mechanism was too successful in encouraging participation; it encouraged "gaming", i.e. students submitting low-quality papers to achieve a higher status. This resulted in an excessive number of contributions during the second week after the introduction of the mechanism, which led to cognitive overload and the withdrawal of some students.

We learned several lessons from this experience. First, the next version of the mechanism needed to encourage students to submit papers of high quality. Second, to enable a metric for paper quality, we had to encourage students to rate papers more often. Third, we had to stimulate contributions early in the week; we found that most of the contributions came late in the week, when there was little time left for the students to read and rate them [8].

Gaming the system is a phenomenon that appears almost always when there is an incentive mechanism in place. According to Levitt [25], people are economic creatures who always try to minimize their efforts and maximize their rewards. In education, gaming the system is frequently found, for example, in coursework as plagiarism or in exams as cheating. There are more sophisticated forms of gaming the system that aren't easily caught or punished, e.g. finding a critical path towards receiving a degree by selecting the easiest classes, persuading instructors to waive prerequisites, and so on. Generally, gaming the system means finding and exploiting loopholes in the rules of the incentive mechanism to gain advantage. This happens when students are under high pressure, or when there aren't strong deterrents in place, such as strong penalties and successful policing. In the online world, gaming the system has been found in all online communities that make use of incentive mechanisms, e.g. Slashdot [24], in multi-player online games, and even among students interacting with intelligent tutoring systems [2].

In game theory, a good mechanism is one that can be proven not to be game-able. Yet in practical mechanism design it is very hard to find such mechanisms, apart from very constrained domains, like markets and auctions, since the rules and their possible interactions are very complex. In practice, designers often try to obscure the rules so that it is more expensive for students to find ways to game the system than to put in the required effort. Slashdot, for example, does not publish the algorithms for awarding "karma". Similarly, we did not tell the students how many points were awarded for the different participation actions; they had to explore this themselves. The exploration was made more challenging by displaying students' participation statistics only for the previous week. The limited data meant that the students needed to keep track of all their actions for a full week in order to discover how many points each action earned.

As a result of the lessons learned from the first version, we developed in 2004/2005 a new version of the Comtella incentive mechanism that differed from the first in the following aspects:
1) Dynamic, adaptive rewards were used for desirable actions instead of predefined rewards.
Rewards differed over time and across students, depending on the current community needs and the contribution history of the student.
2) The visualization displayed a new dimension along which students could compare with each other – the reputation of a student for bringing high-quality contributions, based on the ratings earned from other students.
3) Another incentive mechanism was used in conjunction with the status-based one. The purpose of this mechanism was to encourage students to rate the contributions of their colleagues by rewarding them with a virtual currency (Cpoints) for each act of rating.
These aspects are explained below in more detail.

The needs of the community change and evolve in time. It was important that students submit new contributions early in the week, so that their colleagues had time to read and rate them before the topic changed the following week. Therefore, more points were awarded for new papers shared early in the week than for papers shared late. As the number of shared papers increased over the week, it became increasingly important to rate the papers, so that they could be sorted by quality and thus help students find better papers to read. Therefore, the points awarded for rating papers increased with time. We defined a time-dependent rewards function, which reflected the community needs over time and served as a Community Model.

Fig. 6. The social visualization in the second version. Students were shown as stars with different color (status: gold, silver, bronze and plastic), size (depending on the number of original contributions), and brightness (depending on the average quality of contributions based on the ratings received from others).

Students are different. Some students tend to contribute a lot of lower-quality articles, while others are more selective about their submissions. Therefore, an Individual User Model was created to compute the average quality of contributions submitted by each student. The average quality of each paper contributed by the student was computed based on the ratings it received from all students. The average quality of the ratings the student had given was computed by comparing his or her ratings to the average rating that the papers had received from other students. These two values comprised the Individual Model. Based on the individual and community models, individual weights were computed for the actions of each student (the payoff matrix). The total number of points determined the student's status for the new week, which brought different feedback and privileges: a different color scheme in the interface, a higher number of ratings the student could give out, and personal complimentary messages. The status was reflected in the community visualization (Fig. 6). The visualization differed from the earlier one by showing only one view, the same for all students [36]. The stars representing the individual students looked more like real stars and differed by size (representing the number of papers contributed), color (representing the status of the student), brightness (showing the reputation of the student, based on the ratings of his or her contributions) and by whether the student was online at the moment.

Fig. 7. The search results for a given week were sorted by default first by Cpoints, then by time of contribution, earlier first. The interface allowed the student to sort by each of the columns of the table, apart from "My Rating" and "Fake?" ("Fake" allows reporting broken links).
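The following minimal sketch illustrates how such an adaptive reward might combine the two models just described. Only the qualitative behavior comes from the description above (sharing earns more early in the week, rating earns more later, and rewards are weighted by the student's individual model); the linear time profiles, base points and all names are illustrative assumptions, not the actual Comtella formulas.

# Sketch of an adaptive reward combining a community model (time profile over
# the week) with an individual model (quality of past contributions/ratings).
# All constants and names are hypothetical.

BASE_POINTS = {"share": 10, "rate": 3}   # hypothetical base rewards

def community_weight(action: str, day: int) -> float:
    """Time profile over the week (day 0..6): sharing decays, rating grows."""
    if action == "share":
        return 1.0 - 0.1 * day           # 1.0 at the start of the week, 0.4 at the end
    if action == "rate":
        return 0.4 + 0.1 * day           # 0.4 at the start of the week, 1.0 at the end
    return 1.0

def individual_weight(model: dict, action: str) -> float:
    """Scale the reward by the quality (0..1) the student has shown for this action."""
    key = "contribution_quality" if action == "share" else "rating_quality"
    return 0.5 + model.get(key, 0.5)     # range 0.5 .. 1.5

def reward(action: str, day: int, model: dict) -> float:
    return BASE_POINTS[action] * community_weight(action, day) * individual_weight(model, action)

# A student with a good contribution record, sharing early vs. late in the week:
student = {"contribution_quality": 0.8, "rating_quality": 0.4}
print(round(reward("share", 0, student), 1), round(reward("share", 5, student), 1))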
Students were encouraged to rate the papers submitted by their colleagues through the introduction of a new extrinsic reward (Cpoints) that the students could use like currency. The students were able to use their Cpoints to make their own submissions appear at the top of the search results for each week (similar to Google's sponsored links, see Fig. 7). In a controlled experiment, this incentive mechanism motivated students to submit twice as many ratings as students who didn't have access to the Cpoints mechanism [8].

3.3.2 Incentive mechanism rewarding the student through self-actualization and reciprocation

According to the Social Identity theory [34], many users contribute not to seek higher status but to help a shared cause, because they identify with the community and its goals. Users gain a feeling of self-actualization [26] by seeing how their contributions support the cause or help the community achieve its goals. In the third version of the class-support Comtella (2005/06) we designed an incentive mechanism that explored this type of motivation and rewarded users for desirable actions by visualizing the impact of these actions on the community. There were two desirable actions: reading postings shared in the system and rating them. Of course, to maintain the existence of the community we needed to ensure that a sufficient number of postings were submitted to the system. Instead of rewarding postings through the system, we decided to reward students for postings through the larger incentive system that exists in every University class: coursework and marks. We incorporated the use of Comtella in the required coursework for the Ethics and IT class in 2005/2006. Students were required to contribute one new post/article each week, comment on the posts of two of their colleagues, and respond to a question asked by the instructor. The use of the coursework incentive mechanism ensured a sufficient level of activity in the Comtella community. Now we could apply our novel mechanism, targeted at yielding more reads and more ratings.

Fig. 8. Search results reflecting students' rating activity. The posts that had received more positive ratings appeared brighter and with larger font than the rest. Those that had received negative ratings appeared shrunken and darker.

More reads were needed because the original educational purpose of the Comtella system was to make students read additional, more up-to-date material related to the class. In order to ensure quality control and to enable students to find the good posts more easily, we needed students to rate a large number of postings. We encouraged students to rate postings by designing the interface so that an esthetically pleasing animation appeared after each act of rating: the post that was rated would change colors through a scale from violet to bright orange and finish with a color either brighter or darker than before (depending on whether the rating was positive or negative). Thus the student immediately saw the effect of his or her rating on the list of search results that everyone in the community would see as well – the posts with the highest ratings appeared brighter and with larger font. This emphasized the contribution made to the community and created a feeling of self-actualization. We turned to the Reciprocation and Fairness theory [10] and the Common Bond theory [34] to encourage students to read other students' posts.
According to many experiments in behavioral economics [10], people tend to reciprocate and strive for fairness in their interactions with others. According to the Common Bond theory [34], people may contribute to a community because they want to engage in relationships with members of the community. To engage the students in reciprocal relationships, we hypothesized that providing the students with visual feedback about the type of relationships they develop by reading each other's posts would stimulate them to balance their relationships, making them more symmetrical and "fair". We designed a new social visualization, which represented the symmetry of the relationships between the viewer and all other students (see Fig. 9).

Fig. 9. Social visualization of the symmetry of relationships between the other students and the viewer (the viewer positioned invisibly at the bottom-left corner of the square). From the viewpoint of the viewer, the student "Gid" was a "pop-star" – the viewer often read Gid's posts, while Gid did not read the viewer's. The student "jcr948" was more of a secret admirer, but the viewer had started to pay attention to his posts and the relationship had started to become more reciprocal. The rest of the students, in the "Unknown" quadrant, were scattered along the diagonal; each of them and the viewer had read each other's posts approximately equally, but not often.

The visualization divided the 2-dimensional space according to dimensions corresponding to how often the viewer-student read the postings of another student (Y-axis) and how often other students read the postings of the viewer-student (X-axis). Each student in the community was represented as a point in the space. The viewer was always at the (0,0) position in the lower-left corner. The distance between the viewer and a given student along each axis depends on how "close" the relationship between the viewer and that student is. In the beginning, when none of the students had read any posts yet, the distance between the viewer and each of the other students was at its maximum, so both coordinates had the value 1, i.e. (1,1), or "double invisible" (the viewer hadn't read anything by the student and vice versa). Therefore, at the beginning all of the other students were clustered at the upper-right corner of the square. Later, students moved down and to the left, and asymmetries arose as the students read each other's posts. Some students emerged as "pop-stars" and moved towards the top-left corner, since their posts were frequently read by the viewer, but they were not aware of the viewer's posts. Other students became "secret admirers" of the viewer's posts and moved towards the bottom-right corner. As seen in Fig. 9, most students evolved a symmetrical relationship with the viewer by slowly moving towards the lower-left corner along the diagonal.

Our hypothesis was that the visualization would stimulate students who were "pop-stars" to look at other students in their "secret admirer" corner, read their postings and respond or comment on them, thus balancing their relationships. We confirmed the hypothesis in a one-term-long classroom experiment with a control and a test group. The test-group students engaged in more symmetrical relationships with their colleagues and read more articles. More details about the approach and its evaluation can be found in [31].
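The placement of each peer in this visualization can be sketched as follows. The exact normalization used in Comtella is not given here; this sketch assumes a simple saturating mapping in which a coordinate of 1 means "never read" and shrinks toward 0 as the read counts grow, in line with the description above. All function and parameter names are mine, and the asymmetry labels simply follow the definitions of "pop-star" and "secret admirer" given in the text.

# Sketch of placing a peer in the relationship-symmetry visualization.
# Coordinates start at (1, 1) ("double invisible") and shrink toward the
# viewer at (0, 0) as the two students read each other's posts; points near
# the diagonal indicate reciprocal relationships. The saturating
# normalization and all names are illustrative assumptions.

def closeness(read_count: int, saturation: int = 10) -> float:
    """Map a read count to [0, 1): 0 reads -> 0, many reads -> close to 1."""
    return read_count / (read_count + saturation)

def position(viewer_reads_peer: int, peer_reads_viewer: int) -> tuple:
    """Coordinates of a peer relative to the viewer at (0, 0); (1, 1) = no contact."""
    x = 1.0 - closeness(peer_reads_viewer)   # how much the peer has read the viewer
    y = 1.0 - closeness(viewer_reads_peer)   # how much the viewer has read the peer
    return round(x, 2), round(y, 2)

def relationship(viewer_reads_peer: int, peer_reads_viewer: int, tolerance: int = 5) -> str:
    """Label the asymmetry of the reading relationship."""
    diff = viewer_reads_peer - peer_reads_viewer
    if abs(diff) <= tolerance:
        return "reciprocal"
    return "pop-star (viewer reads them more)" if diff > 0 else "secret admirer (they read the viewer more)"

# Example: the viewer has read 20 of a peer's posts, the peer has read 1 of the viewer's.
print(position(20, 1), relationship(20, 1))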
Of course, we had to make the common assumption made in many adaptive and recommender systems: that viewing a post (the student clicks on the post) is the same as reading it. The system can only track the number of "views"; it cannot know whether the viewed posts were actually read.

In general, the evaluation of all three incentive mechanisms in classroom experiments, using the different versions of Comtella with control and test groups of students, showed that each of the mechanisms was very effective in stimulating the intended student behavior (contributing more papers, submitting more ratings, or reading more papers). Both the Cpoints and the immediate-gratification approaches stimulated twice as many ratings in the test group as in the control group. We showed that an adaptive rewards mechanism could orchestrate the desired pattern of collective behavior: the time-adaptation of the rewards stimulated students to make contributions earlier. We learned that it is important to make the student aware of the rewards for different actions at any given time. More details about the evaluation of these systems are available in [8], [36], [37], and [43].

4. FUTURE DIRECTIONS

There are several promising areas along the directions outlined in this paper that are currently underexplored:

1) The design of pedagogically-grounded, learner-centered social learning environments is a long-term direction where a lot of work is needed. I illustrated some aspects of the problem of finding appropriate content, e.g. annotation and recommendation, emphasizing approaches that are user-friendly and user-centered, like tags, folksonomies and interfaces allowing users to understand and manipulate collaborative recommendations by adjusting the influence of their friends or other users. However, I didn't even scratch the surface of the problem of how to make recommendations pedagogically sound. Currently, most of the content on the Participative Web is not annotated with respect to pedagogy. It is unrealistic to expect that expert or even user annotations will be contributed in sufficient volume to keep up with the amount of newly added content. Data mining may provide a solution to this problem. Similar to the approach suggested by Brooks and Montanez [4], it may be possible to generate annotations automatically. Data mining based on usage analysis, as suggested by McCalla [29], may help identify successful patterns of learning and, in combination with collaborative filtering, provide pedagogically sound, even if unexplained, suggestions. Techniques for this will probably appear in the new area of Educational Data Mining. Another interesting direction is content sequencing. While the learner-centered postulate dictates that the learner is always in control, the user's behavior can be subtly influenced by the environment: by reordering search results, changing the available links or tags, and providing an appropriate visual interface that allows the learner to browse in a way that makes pedagogical sense. The interface of iBlogVis presented in [19] did not have any underlying pedagogical principles or goals, but many adaptive hypermedia systems [6], [42] have manipulated the links and their appearance according to pedagogical goals. How to do this with content that is not designed "in house" but provided by users, however, is an open question. Finding collaborators is a very important direction that also has not been explored much.
Trusting people is more important than trusting content. Integrating trust and reputation mechanisms with expertise matching and pedagogical matching needs a lot more research.

2) The design of incentive mechanisms to encourage learning, exploration, participation and contributions in social learning environments is still underexplored. Most experiments have been done in open systems in the presence of other incentives, such as course grades (as in Comtella), or in large but closed systems where users participate for fun (e.g. MovieLens). It is not clear what incentives would be effective in encouraging a self-centered learner to explore and learn more complex knowledge during her fragmentary learning experiences, when searching for information for a given narrow purpose.

3) The design and grading of coursework can be regarded as a mechanism design problem. The importance of coursework design and the design of grading schemes increases with the trend of educational institutions becoming accreditation / certification authorities that attest to the learning achievements and knowledge of students obtained both in formal learning environments and in self-directed learning on the web. There is very little work on how such accreditations can be put in place and how grading will work, considering the freedom and lack of structure of such learning. In any case, however, grades have a very strong motivational effect on students, and an appropriate grading scheme or coursework weighting scheme can be used as an incentive mechanism to focus the attention and efforts of learners in a desired way. Considering the design of grading schemes for coursework as a mechanism design problem seems to be an interesting, unexplored area.

5. SUMMARY

In this paper I presented my views of the current trends in the development of systems to support learning for the new generation of learners – the Digital Natives. I believe that our efforts need to focus on designing environments that support social learning in context, on demand, with various purposes. I defined two main challenges: (1) supporting the learner to find the right stuff and people, and (2) motivating the learner. The paper presented some work that addresses these challenges and outlined some future directions for research.

ACKNOWLEDGEMENT

The research by me and my students described in this paper has been supported by NSERC under the Discovery Grant Program and by the LORNET Research Network. Thanks to Wendy Sharpe for proofreading the last draft of the paper.

REFERENCES

[1] G. Adomavicius, A. Tuzhilin, "Toward the next generation of recommender systems: a survey of the state-of-the-art and possible extensions", IEEE Transactions on Knowledge and Data Engineering, vol. 17, no. 6, 2005, pp. 734-749.
[2] R. Baker, A.T. Corbett and K.R. Koedinger, "Detecting student misuse of intelligent tutoring systems", Proceedings of the 7th International Conference on Intelligent Tutoring Systems, Berlin: Springer Verlag, 2004, pp. 531-540.
[3] C.H. Brooks and N. Montanez, "An Analysis of the Effectiveness of Tagging in Blogs", Proc. 2006 AAAI Spring Symposium on Computational Approaches to Analyzing Weblogs, Stanford, CA, March 2006.
[4] C.H. Brooks and N. Montanez, "Improved Annotation of the Blogosphere via Autotagging and Hierarchical Clustering", Proc. of the 15th World Wide Web Conference (WWW06), Edinburgh, May 2006.
[5] H. Bretzke, J. Vassileva, "Motivating Cooperation in Peer to Peer Networks", Proceedings User Modeling UM03, Johnstown, PA, June 22-26, Springer Verlag LNCS 2702, 2003, pp. 218-227.
[6] P. Brusilovsky, J. Vassileva, "Course Sequencing Techniques for Large Scale Web-based Education", International Journal of Continuing Engineering Education and Life-long Learning, vol. 13, no. 1/2, 2003, pp. 75-94.
[7] S. Bryant, A. Forte and A. Bruckman, "Becoming Wikipedian: transformation of participation in a collaborative online encyclopedia", Proceedings of GROUP International Conference on Supporting Group Work, Sanibel Island, FL, 2005, pp. 1-10.
[8] R. Cheng, J. Vassileva, "Design and Evaluation of an Adaptive Incentive Mechanism for Sustained Educational Online Communities", User Modelling and User-Adapted Interaction, vol. 16, no. 2/3, 2006, pp. 321-348.
[9] J. Collins, J. Greer, V. Kumar, G. McCalla, P. Meagher, R. Tkach, "Inspectable User Models for Just in Time Workplace Training", in A. Jameson, C. Paris, C. Tasso, eds., User Modelling, Proceedings of the UM97 Conference, Wien - New York: Springer, pp. 327-337.
[10] E. Fehr and K. Schmidt, "Theory of Fairness, Competition and Cooperation", Quarterly Journal of Economics, 114, 1999, pp. 817-868.
[11] L. Festinger, "A theory of social comparison processes", Human Relations, 7(2), 1954, pp. 117-140.
[12] A. Forte and A. Bruckman, "Why do people write for Wikipedia? Incentives to contribute to open-content publishing", GROUP 05 workshop: Sustaining community: The role and design of incentive mechanisms in online systems, Sanibel Island, FL, 2005.
[13] A. Forte and A. Bruckman, "Information Literacy in the Age of Wikipedia", in Symposium on Learning and Research in the Web 2 Era: Opportunities for Research (organized by James Slotta), to appear in the Proceedings of the 2008 International Conference on the Learning Sciences. Available online (accessed Jan 4, 2008): http://www.cc.gatech.edu/~aforte/ForteInformationLiteracyWikipedia.pdf
[14] J. Greer, G. McCalla, J. Collins, V. Kumar, P. Meagher, J. Vassileva, "Supporting Peer Help and Collaboration in Distributed Workplace Environments", International Journal of AI and Education, 9, 1998, pp. 159-177.
[15] T. Gruber, "Where the Social Web Meets the Semantic Web", keynote presentation at the 5th International Semantic Web Conference, November 7, 2006. Available online at: http://tomgruber.org/writing/index.htm
[16] J.L. Herlocker, J. Konstan, J. Riedl, "Explaining Collaborative Filtering Recommendations", in Proceedings of the 2000 ACM Conference on Computer Supported Cooperative Work, ACM Press, New York, pp. 241-250.
[17] H.-U. Hoppe, "The Use of Multiple Student Modelling to Parameterise Group Learning", in J. Greer (ed.), Artificial Intelligence and Education, Proceedings of AIED'95, Charlottesville, VA: AACE, 1995, pp. 234-241.
[18] C.O. Houle, "The inquiring mind", Madison, WI: University of Wisconsin Press, 1961; republished 1988.
[19] Indratmo, J. Vassileva, "Exploring Blog Archives with Interactive Visualization", Proceedings Advanced Visual Interfaces AVI'2008, Napoli, 28-30 May 2008.
[20] J. Jacobs, "Dark Age Ahead", Random House: New York, 2004.
[21] S. Johnson, "Everything Bad is Good for You", Penguin: New York, 2005.
[22] T. Karrer, "Corporate Learning Long Tail and Attention Crisis: ELearning", 2008 (accessed on Jan 3, 2008). http://elearningtech.blogspot.com/2008/02/corporate-learning-long-tail-and.html
[23] E. Kuiper, M. Volman and J. Terwel, "The web as an information resource in K-12 education: Strategies for supporting students searching and processing information", Review of Educational Research, 75(3), 2005, pp. 285-328.
[24] C. Lampe, "Ratings Use in an Online Discussion System: The Slashdot Case", Ph.D. Thesis, School of Information, University of Michigan, Ann Arbor, MI, 2006.
[25] S.D. Levitt, S. Dubner, "Freakonomics: A Rogue Economist's Guide to the Hidden Side of Everything", Harper Collins: New York, 2005.
[26] A. Maslow, "Motivation and Personality", 1954; ed. Cynthia McReynolds, 3rd ed., New York: Harper and Row, 1987.
[27] S. Mantyka, "The Math Plague: How to Survive School Mathematics", MayT Consulting Corporation, St. John's, Nfl, 2007.
[28] G. McCalla, J. Vassileva, J. Greer, S. Bull, "Active Learner Modelling", in G. Gautier, C. Frasson and K. VanLehn, eds., Proceedings of ITS'2000, Springer LNCS 1839, pp. 53-62.
[29] G. McCalla, "The Ecological Approach to the Design of E-Learning Environments: Purpose-based Capture and Use of Information About Learners", Journal of Interactive Media in Education, 2004 (7).
[30] R. Mizoguchi, J. Bourdeau, "Using Ontological Engineering to Overcome Common AI-Ed Problems", International Journal of Artificial Intelligence in Education, vol. 11, no. 2, 2000, pp. 107-121.
[31] X. Niu, G.I. McCalla and J. Vassileva, "Purpose-based Expert Finding in a Portfolio Management System", Computational Intelligence Journal, 20(4), 2004, pp. 548-561.
[32] Tim O'Reilly, "Web 2.0 Compact Definition: Trying Again", 2006-12-10. http://radar.oreilly.com/archives/2006/12/web-20-compact.html (accessed 2007-1-20).
[33] Tim O'Reilly on the Future of Social Media, Talk of the Nation: Science Friday, 19 Dec 2008. Available online (accessed on Jan 4, 2008): http://www.npr.org/templates/story/story.php?storyId=98499899
[34] Y. Ren, R.E. Kraut, S. Kiesler (in press), "Applying common identity and bond theory to design of online communities", Organization Studies.
[35] G. Small and G. Vorgan, "iBrain: Surviving the Technological Alteration of the Modern Mind", HarperCollins: New York, 2008.
[36] J. Vassileva, L. Sun, "Evolving Social Visualization Design Aimed at Increasing Participation in an Online Community", International Journal of Cooperative Information Systems (IJCIS), vol. 17, no. 4, 2008, pp. 443-466.
[37] J. Vassileva, L. Sun, "Using Community Visualization to Stimulate Participation in Online Communities", e-Service Journal, 6(1), 2007, pp. 3-40.
[38] J. Vassileva, G. McCalla, J. Greer, "Multi-Agent Multi-User Modeling", User Modeling and User-Adapted Interaction, vol. 13, no. 1, 2003, pp. 179-210.
[39] Y. Wang, J. Vassileva, "Trust-Based Community Formation in Peer-to-Peer File Sharing Networks", Proc. of IEEE/WIC/ACM International Conference on Web Intelligence (WI 2004), Beijing, China, 2004.
[40] Y. Wang, J. Vassileva, "Toward Trust and Reputation Based Web Service Selection: A Survey", International Transactions on Systems Science and Applications, vol. 3, no. 2, 2007, pp. 118-132.
[41] Y. Wang, J. Vassileva, "Super-Consumer Based Reputation Management for Web Service Systems", Proceedings of the 2nd IEEE International Conference on Digital Ecosystems and Technologies (IEEE-DEST 2008), Phitsanulok, Thailand, February 2008.
[42] G. Weber and P. Brusilovsky, "ELM-ART: An adaptive versatile system for Web-based instruction", International Journal of Artificial Intelligence in Education, 12(4), Special Issue on Adaptive and Intelligent Web-based Educational Systems, 2001, pp. 351-384.
[43] A.S. Webster, J. Vassileva, "Visualizing Personal Relations in Online Communities", Proceedings AH'2006: Adaptive Hypermedia and Adaptive Web-Based Systems, Dublin, Ireland, June 21-23, 2006, Springer LNCS 4018, pp. 223-233.
[44] A.S. Webster, J. Vassileva, "Push-Poll Recommender System: Supporting Word of Mouth", Springer LNCS Proceedings User Modelling, UM2007, Corfu, Greece, June 25-29, 2007.
[45] A.S. Webster, J. Vassileva, "The KeepUp Recommender System", Proceedings ACM RecSys 2007, pp. 173-177.
[46] A.S. Webster, "Visible Relations in Online Communities: Modeling and Using Social Networks", M.Sc. Thesis, University of Saskatchewan. http://library2.usask.ca/theses/available/etd-09192007-204935/
[47] D. Weinberger, "Everything is Miscellaneous: The Power of the New Digital Disorder", Holt Paperbacks: New York, 2007.

Julita Vassileva is a professor in the Computer Science Department at the University of Saskatchewan, Canada. She is co-editor of the International Journal of Continuing Engineering Education and Life-Long Learning. She serves on the editorial board of the User Modeling and User-Adapted Interaction journal and as the Vice-President of User Modeling Inc. Dr. Vassileva holds the NSERC/Cameco Prairie Chair for Women in Science and Engineering, one of the five such regional chairs in Canada sponsored by NSERC.