ARTICLE IN PRESS
Int. J. Human-Computer Studies 61 (2004) 84–103
On-line question-posing and peer-assessment
as means for web-based knowledge
sharing in learning
Miri Barak (a, *), Sheizaf Rafaeli (b)
a: The Center for Educational Computing Initiatives, Massachusetts Institute of Technology, Building 9-315, 77 Massachusetts Avenue, Cambridge, MA, USA
b: The Center for the Study of the Information Society, Graduate School of Business, University of Haifa, Mt. Carmel, Haifa, Israel
Received 29 June 2003; received in revised form 1 December 2003; accepted 2 December 2003
Abstract
This study is an examination of a novel way for merging assessment and knowledge sharing
in the context of a hybrid on-line learning system used in a postgraduate MBA course. MBA
students carried out an on-line Question-Posing Assignment (QPA) that consisted of two
components: Knowledge Development and Knowledge Contribution. The students also
performed self- and peer-assessment and took an on-line examination, all administered by
QSIA—an on-line system for assessment and knowledge sharing. Our objective was to explore
students’ learning and knowledge sharing while engaged in the above. Findings indicated that
even controlling for the students’ prior knowledge or abilities, those who were highly engaged
in on-line question-posing and peer-assessment activity received higher scores on their final
examination compared to their less engaged peers. The results provide evidence that web-based
activities can serve as both learning and assessment enhancers in higher education by
promoting active learning, constructive criticism and knowledge sharing. We propose the on-line QPA as a methodology, and the QSIA system as a technology, for merging assessment and
knowledge sharing in higher education.
© 2003 Elsevier Ltd. All rights reserved.
*Corresponding author. Tel.: 617-253-6749; 972-4-8249581.
E-mail addresses: bmiriam@mit.edu (M. Barak), sheizaf@rafaeli.net (S. Rafaeli).
1071-5819/$ - see front matter © 2003 Elsevier Ltd. All rights reserved.
doi:10.1016/j.ijhcs.2003.12.005
1. Introduction
Networked computers can be used for sharing information and knowledge.
They can also be used in on-line evaluation of learning outcomes. Can the process
of on-line knowledge sharing be made relevant to learning and assessment? This
study is an examination of a novel way of merging assessment and knowledge
sharing in the context of a hybrid on-line learning system, used in a postgraduate
MBA course.
The internet has always been a prominent space for learning (Dori et al., 2003;
Potelle and Rouet, 2003) and testing (Rafaeli and Tractinsky, 1989, 1991). This
paper describes a unique way of implementing a web-based testing mechanism that
goes beyond just the administration of on-line tests. The system, named QSIA, was
designed to enhance knowledge sharing among both instructors and students. QSIA,
employed in a Graduate School of Business, was used as a platform for carrying
out an on-line Question-Posing Assignment (QPA). In this assignment, students
were required to contribute questions (knowledge items) for public use. The
students were also asked to rank their peers’ contributions. The on-line QPA was
graded for quality and persistence. This sort of knowledge sharing assignment is
only feasible in a web-based on-line supported environment, if only for the
sheer volume of data flows. These new procedures, grafted onto traditional
classroom practice, raise several intriguing research questions: How do students
respond to and perform with this novel on-line, collaborative QPA? How
does the on-line question-posing and peer-assessment activity relate to students’
traditionally conceptualized learning outcomes? And what are the students’ attitudes
towards the use of systems such as QSIA and the on-line QPA? In the following we
report on a field test that investigates the implementation of such a set of tools and
practices.
1.1. Web-based testing and on-line assessment
The evaluation of learning outcomes is evolving in both methodology and
technology. The methodology of evaluation is shifting from a ‘‘culture of testing’’ to
a ‘‘culture of assessment’’ (Birenbaum and Dochy, 1996; Sluijsmans et al., 2001).
Emphasis is placed on integrating assessment and instruction. Assessment addresses
the process of learning rather than just the evaluation of products and individual
progress. The role of students has also been changing from passive subjects to active
participants who share responsibility in the process, practice self-assessment, and
collaborate.
Technologically, the environment is shifting from paper and pen to computerized
adaptive testing. Computerized administration of tests is attractive for a variety of
reasons. Computerized tests offer convenience, efficiency, aesthetic and pedagogic
improvements (Rafaeli and Tractinsky, 1989, 1991). Computerized testing has
traditionally been a very centralized, closely guarded and tightly controlled
enterprise. An artefactual expression of this centralization is evident in the reliance
of most computerized adaptive testing systems on closed, mainly multiple-choice
format questions. More recent developments in interface design allow a relaxation of
some of this rigidity and an enrichment in test types.
The role of information technology in educational assessment has been growing
rapidly (Beichner et al., 2000; Hamilton et al., 2000; Barak, 2003). Several well-known computer-based tests are now administered on the web, including the
Graduate Record Exam (GRE), the Graduate Management Admissions Test
(GMAT), and the Medical Licensing Examination (MLE). The high speed and large
storage capacities of today’s computers, coupled with their rapidly shrinking costs,
makes computerized testing a promising alternative to traditional paper-and-pencil
measures. Web-based testing systems offer the advantages of computer-based testing
delivered over the Internet. One of the major advantages of web-based testing systems is the possibility of conducting an examination unconstrained by time and place, while time and pace can still be controlled and measured (Rafaeli and
Tractinsky, 1989, 1991; Rafaeli et al., 2003). Other advantages include: the easy
accessibility of on-line knowledge databases and the inclusion of rich multimedia,
and interactive features such as color, sound, video, and simulations.
Modern on-line assessment systems offer considerable scope for innovations in
testing and assessment as well as a significant improvement of the process for all its
stakeholders, including teachers, students and administrators (McDonald, 2002).
This paper presents a new approach for web-based testing. We use the term ‘on-line
assessment’ to emphasize the shift from a culture of testing to a culture of assessment
and from paper and pen to web-based administered examinations. This study
describes a new on-line mode of learning and evaluation that includes question-posing integrated with multiple modes of assessment:
1. Self-assessment: Students conduct self-assessment by completing an independently run test followed by immediate feedback.
2. Peer assessment: Students are required to contribute items to a joint pool and are
encouraged to read and review questions developed and contributed by others—
their classmates.
3. Achievement assessment: Knowledge acquisition is assessed via an on-line final
examination.
All modes of assessment were administered by QSIA—an on-line system for
assessment and knowledge sharing (Rafaeli et al., 2003).
1.2. Questions posed by students
Recognition of the importance of students’ questions in the teaching and learning process has recently been increasing (Dori and Herscovitz, 1999; Marbach-Ad and Sokolove, 2000). The realization that questions and information seeking are
central to meaningful learning dates back to Socratic thought (Bohlin, 2000).
Challenging students to assume an active role in posing questions can promote
independence in learning (Bruner, 1990; Marbach-Ad and Sokolove, 2000).
Although the essence of thinking is asking questions, most students perceive
learning as the study of facts (Shodell, 1995). This may relate to acquired experience
with questions as something teachers impose on students, using fact-demanding
questions rather than thought provoking queries. During their years of education,
students are schooled at answering questions but remain novices at asking them
(Dillon, 1990). In traditional teaching, questions are privately owned and
displayed by teachers. Dillon (1990) suggests that questions should come from
both teachers and students. Similarly, studies of novel teaching approaches stress the
importance of the student’s questions. These studies suggest that the central role of
education should be to develop in students an appreciation of posing questions
(Shodell, 1995; Dori and Herscovitz, 1999). In the study reported here, students were
asked to develop questions relating to the course learning contents. Using QSIA as
the web-based technology they were asked to share these questions with their peers,
use the questions as a form of preparing for the final test, and evaluate questions
posed by their classmates.
1.3. Peer-assessment
Innovation in assessment practices has accelerated in recent years as well.
Assessment systems that require students to use high-order thinking skills such as
developing, analysing and solving problems instead of memorizing facts are
important for the learning outcomes (Zohar and Dori, 2002). Two of these higher-order skills are reflection on one’s own performance (self-assessment) and
consideration of peers’ accomplishments (peer assessment) (Birenbaum and Dochy,
1996; Sluijsmans et al., 2001). Both self- and peer-assessment seem to be
underrepresented in contemporary higher education, despite their rapid implementation at all other levels of education (Williams, 1992). Larisey suggested that the adult
student should be given opportunities for self-directed learning and critical reflection
in order to mirror the world of learning beyond formal education (Larisey, 1994).
Experiencing peer assessment seems to motivate deeper learning and produces better
learning outcomes (Williams, 1992).
Peer assessment tasks include rating of individual and group presentations,
artwork, or posters (Zevenbergen, 2001); marking classmates’ problem solving
performances; and rating classmates’ contributions while carrying out
different group assignments (Conway et al., 1993; Sluijsmans et al., 2001). Our
study describes a new mode of peer assessment task. In this study, students were
asked to review instructional questions developed by their classmates and conduct
peer assessment by rating the questions. This peer assessment task was available
throughout the learning period as it was conducted via a web-based on-line
assessment system.
2. Research settings
Our study explored a novel educational methodology and technology implemented in a postgraduate E-business course. The students participating in the
course carried out an on-line QPA (Question-Posing Assignment) administered by
QSIA—an on-line system for assessment and knowledge sharing. This section
describes the E-business course, the QSIA system, the on-line QPA, and the way
students were graded on their assignment.
2.1. E-business course
E-Business is a required, advanced course given in the latter half of an MBA
(Masters in Business Administration) curriculum, at a major Graduate School of
Business Administration, in an Israeli university. The E-Business course introduces
applications of IT (information technology) in markets, organizations and individual
use. The course includes both technical and conceptual topics such as: HTML,
XML, ASP, business frameworks and models, packet switching, cryptography,
security, payment mechanisms and digital money, auctions, Java, Javascript,
cookies, knowledge management, and the like. This is a three credit, 9 week
(‘‘mini-semester’’) course.
As part of the course’s tasks, students were asked to pose questions and perform
peer assessment via QSIA—a web-based on-line assessment and knowledge sharing
system, described in the following.
2.2. The QSIA system
Knowledge sharing and community building are at the core of the design of many
on-line systems (Carroll et al., 2003; Robertson and Reese, 1999; Rafaeli and Ravid,
2003). This research presents an on-line system that was designed to share the
authoring of knowledge items and the process of constructing assignments and
tests—QSIA.
QSIA is an acronym for Questions Sharing and Interactive Assignments, and also the
Hebrew word for ‘‘question’’. QSIA was designed to serve instructors by providing
a web-based platform to share the authoring of knowledge items, the management of
collections of such items and the accumulated history of the psychometric
performance. The system was designed to harness the power of groups and
communities to improve the process of constructing assignments and tests (Rafaeli
et al., 2003). From a student and classroom perspective, QSIA enables the
administration of assignments and tests under a variety of contexts. Tests and
assignments can be completed on-line or off-line, in proctored or individual settings,
with or without time limits, allowing open or closed book or internet connections,
etc. Creation of the database of items and assignment templates is, however, only the
first tier of the system usage. A second tier allows the collection of knowledge items
ratings and the provision of recommendations. Participants in the system are given
tools that allow them to respond and rank the items. QSIA provides aggregation of
such ranking for future sifting and selection. Actual use of the system in a learning
capacity enriches the collected history and available logs. Thus, this system is
designed to learn, not just teach. QSIA can be viewed at: http://qsia.info. Fig. 1
presents a screen shot of the interface to QSIA’s knowledge items database, folders,
search and recommendations.
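The rating-aggregation tier can be sketched as follows. The data model and function names are illustrative assumptions, not QSIA’s actual (Java-based) API:

```python
from collections import defaultdict

def aggregate_ratings(ratings):
    """Average the 1-5 rankings collected for each knowledge item."""
    totals = defaultdict(list)
    for item_id, score in ratings:
        totals[item_id].append(score)
    return {item_id: sum(s) / len(s) for item_id, s in totals.items()}

def recommend(ratings, top_n=3):
    """Return the top-n item ids, best average ranking first."""
    averages = aggregate_ratings(ratings)
    return sorted(averages, key=averages.get, reverse=True)[:top_n]

# Hypothetical (item_id, peer ranking on the 1-5 scale) pairs
ratings = [("q1", 5), ("q1", 4), ("q2", 2), ("q2", 3), ("q3", 4)]
print(recommend(ratings, top_n=2))  # best-ranked items first
```

Aggregating all participants’ rankings this way is what allows the system to sift items for future reuse, as described above.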
2.2.1. QSIA’s technology architecture
The basic idea of sharing which initiated QSIA, also stands behind its technology.
The system is based on open source technology and contributes back to the open
source community. The system uses an open source application server (Apache
Tomcat), and an open source database (MySQL), based on an open source operating
system (Linux). Some of the software infrastructure built for QSIA is shared with the
open source community under the GNU public license.
The system is built around open and accepted standards, both in the software
engineering aspect and in the functionality aspect. The system is based on Java
technology, using JSP as the presentation layer and object oriented Java Beans
technology as the business logic layer. These foundations enable the system to
operate in any standard operating system and application server environment. The
relational MySQL database serves as a data repository for the system. Because of the
seamless SQL support, the database could be switched easily to any SQL database,
and the content can be easily imported or exported.
In order to communicate with an external learning system, and in order to
assimilate the system in the world of educational systems, QSIA supports the
Question and Test Interoperability (QTI) specification developed by IMS Global
Learning Consortium, Inc. (2003). The IMS Question and Test Interoperability
Specification provides proposed standard XML language for describing questions
and tests. The specification has been produced to allow the interoperability of
content within assessment systems.
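As a rough illustration of what a QTI-flavoured item looks like, the sketch below builds a simplified XML structure in Python. The element names only loosely follow the IMS QTI 1.x vocabulary, and the result is not schema-valid QTI:

```python
import xml.etree.ElementTree as ET

# Schematic QTI-style item: element names follow the flavour of the
# IMS QTI specification but are simplified, not schema-valid.
root = ET.Element("questestinterop")
item = ET.SubElement(root, "item", ident="qpa-001", title="Packet switching")
presentation = ET.SubElement(item, "presentation")
ET.SubElement(presentation, "mattext").text = (
    "Which property distinguishes packet switching from circuit switching?"
)
for ident, choice in [("A", "A dedicated end-to-end path"),
                      ("B", "Data split into independently routed packets")]:
    ET.SubElement(presentation, "response_label", ident=ident).text = choice

print(ET.tostring(root, encoding="unicode"))
```

Serializing items to a common XML vocabulary of this kind is what lets question pools move between assessment systems.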
Fig. 1. QSIA’s knowledge items database, folders, search and recommendations.
2.2.2. QSIA’s conceptual architecture
The design of QSIA was based on four conceptual pillars: Knowledge Generation,
Knowledge Sharing, Knowledge Assessment, and Knowledge Management.
* Knowledge Generation—QSIA enables users to create and edit different knowledge items such as questions or learning tasks. The system includes a variety of question types such as open-ended, numerical-response, multiple-choice, matching, true/false questions and the like.
* Knowledge Sharing—QSIA focuses on knowledge sharing among participants, while maintaining a secure and private working environment for subgroups and individuals. One of QSIA’s sub-tasks is ‘matching mates’: the capability to make matches among recommenders and those seeking recommendations.
* Knowledge Assessment—QSIA can be used for both formal and informal evaluation of the students’ learning outcomes. Formal evaluation is carried out by quizzes or examinations given to the students simultaneously but not necessarily in the same place. Informal evaluation is represented by self-tests. Students may use subsets of the item collections designated by instructors to perform self-tests on-line, any time and any place. Such self-directed use can aid diagnosis and detection of specific difficulties or misunderstandings.
* Knowledge Management—QSIA allows individual and shared management of educational content using hierarchical information storage and classification policies and tools, search facilities and editing tools. It enables users to create and manage a set of folders that includes all of the content owned by them.
QSIA enables web-based support for many of the traditional modes of testing and
assignments. However, in this paper we attempt to document a novel mode of
students’ ‘knowledge assessment’ that includes self-, peer- and achievement-assessments. We report here on an investigation of the relations between ‘knowledge
assessing’ and ‘knowledge sharing’ made feasible via QSIA. Development,
contribution and assessment of knowledge were merged in a novel assignment
described in the following.
2.3. The on-line question-posing assignment (QPA)
Graduate students in the E-Business course were required to author questions and
present possible answers relating to topics taught in class. The students were required
to share these questions on-line with their classmates. They were also asked to read and
review their classmates’ contributions and perform peer-assessment by ranking items
authored by others on a 1 (poor)–5 (excellent) scale. Some motivation for participation
and attention to this process was based on the indication that the final examination
may include items generated in this process. Students also received a differential bonus
grade, up to 10 additional percentage points on the course final grade, depending on
their performance in this assignment. The on-line QPA required students to be actively
engaged in constructing instructional questions, testing themselves with their
classmates’ questions, and assessing questions contributed by their peers.
The purpose of the on-line QPA was three-fold. The first goal was to enhance
meaningful learning by implementing construction of knowledge through construction of questions and answers. Secondly, we intended to encourage knowledge
sharing by distributing the students’ questions and answers on the web. The third
goal was to enhance peer-to-peer assessment, an initial step for generating a
community of learners.
Students were advised of seven components that determined their grade on the on-line QPA. The grading components included the cognitive level of the questions; the
cognitive level of the distracters; the type of questions; the type of multimedia
features employed with the questions; the number of questions contributed; the
persistence of uploading questions throughout the study period (measured as
frequency); and the number of questions authored by others, that were ranked by the
student. Students were encouraged to author ‘‘objective’’, closed questions such as
multiple-choice and true/false, for two reasons. First, in closed questions
the students need to compose not only the question but also several distracters that
reflect the possible correct answer. The cognitive investment in authoring such items
is higher compared to open ended questions (Tamir, 1996). Second, closed questions
allow immediate feedback by the computer. Students were required to contribute at
least five questions to qualify for a minimum grade on the on-line QPA. Students did
not receive instructions on how to evaluate their peers’ questions. Each student had
to generate his or her own rules and criteria. Fig. 2 presents an example question that was
developed and contributed by one of the students.
2.4. Grading the on-line QPA
Determining the score earned on the on-line QPA involved both content analysis
and the usage of access logs. Qualitative components such as the ‘cognitive level of
the questions’ and the ‘cognitive level of the distracters’ required content analysis of
each question and distracter, whereas quantitative data such as the number of
questions contributed, the persistence (frequency) of uploading questions and the
number of questions ranked were extracted from the servers’ logs.
2.4.1. Grading the ‘Knowledge Development’ components of the on-line QPA
Questions may be rank-ordered by the level of thought they require. The most
common hierarchy for ranking knowledge items is the Bloom taxonomy (Bloom
et al., 1956) that consists of six levels: knowledge, comprehension, application,
analysis, synthesis, and evaluation. In this study we analysed the content of 597
questions developed by 71 students. Each question was rated on a 0–3 scale. To
validate the content analysis and rating of the questions, five experienced instructors
of E-Business were asked to rank a random sample of 30 questions, about 5% of the
questions developed by the students. The instructors were asked to indicate the
cognitive level of the questions using the following three categories:
1. Knowledge (low-order thinking skill)—A question that requires recalling details
from memory, from a textbook, or from any other source of information.
Fig. 2. Question on packet switching that was developed by one of the students.
2. Understanding (intermediate order thinking skill)—A question that requires
implementation of a new concept and connecting it to other concepts.
3. Evaluation (high-order thinking skill)—A question that requires implementation
of critical thinking and evaluation of phenomena.
These three categories are a modification of Bloom’s taxonomy (Bloom et al.,
1956). Eighty percent agreement among the experts on the category of each question was required. The
experts added remarks and comments such as: ‘‘overlapping alternatives’’; ‘‘OK, too
few options’’; ‘‘bad spelling’’; and more. The experts’ remarks and comments
provided the researchers benchmarks for analysing and categorizing the rest of the
questions.
Each question was analysed using four qualitative components and graded on a 0–3 scale, as presented in Table 1. These four components determined the students’
Knowledge Development performance.
The scores for the four components of Knowledge Development presented in
Table 1, were produced for each question separately. An average of these
components was calculated to determine the students’ grades.
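The averaging described above can be expressed as a short sketch; the field names are illustrative, and the four components correspond to those in Table 1:

```python
def knowledge_development_grade(questions):
    """Average the four 0-3 component scores within each question,
    then average across all of a student's questions."""
    per_question = [
        (q["cognitive_level_question"] + q["cognitive_level_distracter"]
         + q["question_type"] + q["multimedia"]) / 4
        for q in questions
    ]
    return sum(per_question) / len(per_question)

# Hypothetical student with two contributed questions
questions = [
    {"cognitive_level_question": 3, "cognitive_level_distracter": 2,
     "question_type": 2, "multimedia": 1},
    {"cognitive_level_question": 2, "cognitive_level_distracter": 2,
     "question_type": 3, "multimedia": 1},
]
print(knowledge_development_grade(questions))  # 2.0
```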
2.4.2. Grading the ‘Knowledge Contribution’ components of the on-line QPA
Quantitative components for evaluating the students’ Knowledge Contribution
were collected through the servers’ access logs. The number of questions contributed,
Table 1
Grade components for evaluating students’ Knowledge Development

1. Cognitive level of question
   No performance (0 points): Wrong or irrelevant question
   Low performance (1 point): Requires recall of details from memory
   Intermediate performance (2 points): Implementing learned concepts and interconnection
   High performance (3 points): Implementing critical thinking and evaluation of phenomena

2. Cognitive level of distracter
   No performance (0 points): Wrong or irrelevant answers
   Low performance (1 point): Simplistic—the correct answer is obvious
   Intermediate performance (2 points): All distracters are feasible, the best answer has to be chosen
   High performance (3 points): Critical thinking and addition of explanations

3. Type of question
   No performance (0 points): No questions
   Low performance (1 point): True/false
   Intermediate performance (2 points): Multiple-choice
   High performance (3 points): Matching

4. Multimedia features
   No performance (0 points): Text only
   Low performance (1 point): Colored fonts
   Intermediate performance (2 points): Added hypertext links
   High performance (3 points): Added pictures or animation
Table 2
Grade components for evaluating students’ Knowledge Contribution

5. Number of questions contributed
   No performance (0 points): No contribution
   Low performance (1 point): 5–7 questions
   Intermediate performance (2 points): 8–10 questions
   High performance (3 points): More than 10 questions

6. Persistence of uploading questions
   No performance (0 points): All questions uploaded on the same day
   Low performance (1 point): More than 9 days gap
   Intermediate performance (2 points): 6–8 days gap
   High performance (3 points): 1–5 days gap

7. Number of questions ranked
   No performance (0 points): No questions were ranked
   Low performance (1 point): 1–5 questions ranked
   Intermediate performance (2 points): 6–20 questions ranked
   High performance (3 points): More than 20 questions ranked
the persistence in uploading questions throughout the study period, and the number of questions ranked determined the students’ Knowledge Contribution performance, as presented in Table 2.
The students’ final grade on the on-line QPA was, then, a linear function of the
seven components. Each student was accorded a differential grade of up to 10 points.
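The paper states that the final QPA grade was a linear function of the seven component scores, mapped to a bonus of up to 10 percentage points. The exact weights are not reported, so the equal-weight mapping below is an assumption:

```python
def qpa_bonus(component_scores, max_bonus=10.0):
    """Map seven 0-3 component scores (Tables 1 and 2) to a bonus of up
    to `max_bonus` percentage points. Equal weights are assumed; the
    paper only states the grade was linear in the seven components."""
    assert len(component_scores) == 7
    return max_bonus * sum(component_scores) / (3 * len(component_scores))

# Hypothetical student scoring 12 of the 21 possible points
print(qpa_bonus([2, 2, 2, 1, 2, 2, 1]))
```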
3. Research objective and methodology
This study is an investigation of a novel mode for on-line assessment and
knowledge sharing. Our objective was to explore students’ learning and knowledge
sharing while engaged in an on-line question-posing and peer-assessment activity.
The QSIA system was used as a platform for this study.
When we harness the capabilities of web-based testing mechanisms to go beyond
just the administration of on-line tests and include knowledge sharing by on-line
QPA, we encounter three interesting research questions:
1. How do students perform in the on-line QPA?
2. How does the on-line QPA relate to students’ traditionally conceptualized
learning outcomes?
3. What are the students’ attitudes towards the use of systems such as QSIA and the
on-line QPA?
3.1. Research population
The subjects in this experiment were 71 students who participated in an E-Business
course, offered as part of an MBA (Masters in Business Administration) program at
a Graduate School of Business Administration, in a major university in Israel. Most
of the subjects were males (69%), and their average age was 33.
As the student body was heterogeneous with students rooted in different cultures,
academic backgrounds, and levels of work experience, their GMAT scores were used
to determine their prior educational background and capabilities. The GMAT is an
internationally known required test of general skills and abilities. Students are
required to take the GMAT before enrolling in the MBA program. The GMAT measures
basic verbal, mathematical, and analytical writing skills (Johnson, 2000). The
GMAT scores were used as the control variable in the statistical analysis procedures
(Mean=655.92, s.d.=51.09). No significant difference was found between GMAT
scores for male and female students in this group.
3.2. Research instruments
The research instruments were selected in order to best measure students’ learning
and sharing outcomes while engaged in an on-line question-posing and peer-assessment activity. The research tools assessed a variety of variables for indicating
students’ performances in the social, cognitive and affective domains as presented in
Table 3.
In the social domain, content analysis of the students’ contributed questions and
data from QSIA’s access logs were used for determining the score of each student on
his question-posing and peer-assessment activity. The content analysis of the
contributed questions determined not only social but also cognitive abilities. It was
defined under the social domain since the students shared their questions by
contributing them to the benefit of the learning community. The exact procedure of
analysing the contributed questions and calculating the students’ grades is detailed in
the research settings section. The content analysis investigated the students’
Knowledge Development.
Table 3
Research instruments, assessed variables and explored domains

Instrument: Content analysis of contributed questions and QSIA access logs
  Assessed variable: Question-posing and peer-assessment activity
  Domain: Social

Instrument: Final examination
  Assessed variable: Conceptual and phenomenon understanding
  Domain: Cognitive

Instrument: Feedback questionnaire
  Assessed variable: Attitudes towards the on-line QPA and QSIA
  Domain: Affective
Web access logs provided information on the students’ behavior while
contributing knowledge. Accesses to web servers are recorded meticulously. Every
request by a ‘‘client’’ results in a record of the date and time of the request, the
transmission protocol, the amount of information sent, and the address of the client
(Rafaeli and Ravid, 1997). In this study, the number of questions uploaded by each
student, the frequency of using the QSIA platform, and the number of questions
ranked were recorded on the web servers’ logs, and were provided on QSIA for
monitoring purposes. The access logs were tools for investigating the students’
Knowledge Contribution.
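Extracting such counts from access logs can be sketched as follows. The log lines use the standard Common Log Format, and the URL paths are invented for illustration; the paper does not specify QSIA’s actual log schema:

```python
import re
from collections import Counter

# Hypothetical access-log lines in Common Log Format.
LOG = """\
10.0.0.7 - student42 [12/May/2002:10:01:33 +0200] "POST /qsia/upload HTTP/1.1" 200 512
10.0.0.7 - student42 [19/May/2002:09:15:02 +0200] "POST /qsia/upload HTTP/1.1" 200 498
10.0.0.9 - student17 [19/May/2002:11:40:11 +0200] "POST /qsia/rank HTTP/1.1" 200 64
"""

# Capture the authenticated user and the requested path
pattern = re.compile(r'\S+ - (\S+) \[[^\]]+\] "POST (/qsia/\w+) HTTP')

uploads = Counter()
for line in LOG.splitlines():
    m = pattern.search(line)
    if m and m.group(2) == "/qsia/upload":
        uploads[m.group(1)] += 1

print(uploads)  # questions uploaded per student
```

The same pass over the log, keyed on other paths and on the timestamp field, would yield the ranking counts and upload-frequency measures used in Table 2.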
In the cognitive domain, the students’ grades on their final examination were
investigated. The final examination was composed by the instructor. The
examination was 90 min long and included 80 questions: 72% were multiple-choice
questions, the remainder in true/false format. About 15% of the questions were
interdisciplinary and integrated numerous topics taught in class.
In the affective domain, a feedback questionnaire for evaluating attitudes towards
the on-line QPA and the use of QSIA was administered at the end of the course, after
the students experienced developing questions and contributing them to QSIA,
performing peer-assessment and responding to an on-line self-test and an on-line
examination. The feedback questionnaire contained 12 statements concerning the
usage of QSIA during the semester as a learning environment and 4 statements
concerning the usage of QSIA as an on-line assessment tool. The statements were
presented on a 5-point Likert-type scale (5 = strongly agree to 1 = strongly
disagree). Five experts in science and computer education validated the
questionnaire. The questionnaire’s internal reliability, Cronbach’s Coefficient Alpha,
was 0.85.
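Cronbach’s coefficient alpha, used here to establish the questionnaire’s internal reliability, can be computed directly from an item-by-respondent score matrix. The sketch below uses hypothetical data, not the study’s responses:

```python
def cronbach_alpha(responses):
    """Cronbach's coefficient alpha for a list of respondents' answers
    (one inner list of item scores per respondent)."""
    k = len(responses[0])  # number of items
    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [variance([r[i] for r in responses]) for i in range(k)]
    total_var = variance([sum(r) for r in responses])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Hypothetical Likert-scale (1-5) responses: four students, three items
data = [[5, 4, 5], [4, 4, 4], [2, 3, 2], [3, 3, 3]]
print(round(cronbach_alpha(data), 2))  # 0.93
```

Values above roughly 0.7 are conventionally taken to indicate acceptable internal consistency, so the reported 0.85 is comfortably within range.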
4. Results
The results section consists of three parts. Each part relates to a certain research
question and presents data for its answer. Each part is also associated with one or
two explored domains—social, cognitive and affective. The first part reports the
results of the students’ performance on the on-line QPA. The second part reports the
relationships between the students’ on-line QPA grades and their final examination
grades. Both parts explore the relationships between the social and cognitive
domains of on-line knowledge sharing. The third part reports the students’ attitudes
towards the use of on-line systems such as QSIA and explores the affective domain.
4.1. Students’ performance on the on-line QPA
During the mini-semester, the students developed and contributed 597
questions to the QSIA platform as part of the on-line QPA. The students' grades
on each component of the on-line QPA were standardized on a 1–3 point scale in order
to reduce the differences within each component and allow statistical analysis. The results
show that the average grade students received on the on-line QPA was around 5
points (5.44 ± 0.76) and the maximum grade was 6.67 points (out of 10). The average
grades of the seven components composing the on-line QPA are presented in
Table 4.
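The paper does not specify the standardization procedure; one common choice is a min-max rescaling of each raw component score onto the 1–3 range, as sketched below (the function and its name are illustrative assumptions):

```python
import numpy as np

def rescale_1_to_3(scores):
    """Min-max rescale raw component scores onto a 1-3 scale."""
    s = np.asarray(scores, dtype=float)
    return 1 + 2 * (s - s.min()) / (s.max() - s.min())
```

The lowest observed raw score maps to 1 and the highest to 3, so every component shares the same range before averaging.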
A Spearman r correlation test among the seven components of the on-line QPA
indicated a statistically significant correlation between three pairs, as
presented in Table 5.
Table 5 presents a statistically significant correlation between the ‘cognitive level
of the questions’ and the ‘cognitive level of the distracters’ components. In other
words, students who developed high cognitive level questions also developed high
cognitive level distracters.
The ‘cognitive level of the distracters’ and the ‘type of question’ components were
also significantly correlated. This means that students who developed high
cognitive level distracters also chose to write complex item types such as matching or
multiple-choice questions.
A third significant correlation was found between the ‘number of questions
contributed’ and the ‘persistence in uploading questions’ components. This means
that students who developed many questions tended to upload them more
frequently—about once a week.
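Spearman's r used above is simply the Pearson correlation of rank vectors, with tied scores receiving average ranks, which suits ordinal 1–3 component grades. A self-contained sketch on made-up data (not the study's):

```python
import numpy as np

def average_ranks(x):
    """1-based ranks; tied values share the mean of their ranks."""
    x = np.asarray(x, dtype=float)
    order = np.argsort(x, kind="stable")
    ranks = np.empty(len(x))
    sx = x[order]
    i = 0
    while i < len(x):
        j = i
        while j + 1 < len(x) and sx[j + 1] == sx[i]:
            j += 1                               # extend the tie group
        ranks[order[i:j + 1]] = (i + j) / 2 + 1  # mean of 1-based positions i..j
        i = j + 1
    return ranks

def spearman_r(x, y):
    """Spearman's r: Pearson correlation of the two rank vectors."""
    rx, ry = average_ranks(x), average_ranks(y)
    rx -= rx.mean()
    ry -= ry.mean()
    return float(rx @ ry / np.sqrt((rx @ rx) * (ry @ ry)))
```

Monotonically increasing pairs give r = 1, reversed pairs give r = −1, and ties are handled by the average-rank convention.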
Table 4
Average grades of the seven components of the on-line QPA

On-line question-posing assignment components    Mean    s.d.
Knowledge Development
  1. Cognitive level of question                 2.03    0.61
  2. Cognitive level of distracter               1.89    0.43
  3. Type of question                            1.95    0.23
  4. Multimedia features enclosed                1.02    0.12
Knowledge Contribution
  5. Number of questions contributed             1.75    0.63
  6. Persistence of uploading questions          1.65    0.61
  7. Number of questions ranked                  1.15    0.47

N = 71; Minimum = 1, Maximum = 3.
Table 5
Spearman r correlations between the on-line QPA components (N = 71)

Significant correlations (p < 0.01):
  Cognitive level of question × cognitive level of distracter    0.334
  Cognitive level of distracter × type of question
  Number of questions contributed × persistence of uploading questions

All other pairwise correlations among the seven components were non-significant.
Table 6
Students' performance on the on-line QPA and the final examination

Research instrument    N     Minimum    Maximum    Mean     s.d.
On-line QPA            71    3.33       6.67       5.44     0.76
Final examination      71    40.00      90.00      65.10    7.98
4.2. The relationships between the students' on-line QPA grades and their final examination grades
The second research question was: How does the on-line QPA relate to students'
traditionally conceptualized learning outcomes? This question aimed at investigating
the relationships and mutual influence of the social and cognitive domains of the
on-line question-posing and peer-assessment activity. Table 6 presents a descriptive
analysis of the students' performance on the on-line QPA and the final
examination. The on-line QPA grades are on a 1–10 scale and the final
examination grades are on a 0–100 scale.
The relationship between the students' grades on their on-line QPA and their final
examination grades is of central interest. A linear regression analysis was performed
to investigate whether the on-line QPA grades predict the final examination grades.
The on-line QPA grades were defined as the independent variable and the final
examination grades as the dependent variable. The analysis showed a significant
relationship (β = 0.38, p < 0.01). This result indicates that
students who performed well on their on-line QPA, developing and contributing
high-quality questions in high quantity, had greater success on their final
examination than their peers.
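As a point of method, with a single predictor the standardized coefficient β is the slope of the regression after z-scoring both variables, and it coincides with Pearson's r. A minimal illustration (not the authors' analysis code):

```python
import numpy as np

def standardized_beta(x, y):
    """Standardized coefficient of a simple regression of y on x;
    with one predictor this equals Pearson's r."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    zx = (x - x.mean()) / x.std(ddof=1)  # z-score the predictor
    zy = (y - y.mean()) / y.std(ddof=1)  # z-score the outcome
    slope, _intercept = np.polyfit(zx, zy, 1)
    return float(slope)
```

A perfectly linear relationship yields β = 1, and for arbitrary data the value matches the Pearson correlation coefficient.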
Since the on-line QPA grade was composed of seven components, we further
investigated each component's contribution to the relationship presented above.
A stepwise multiple regression analysis of the seven components indicated that only
the 'cognitive level of questions' component contributes significantly to the
prediction of the final examination grade (β = 0.69, p < 0.01). Moreover, the 'cognitive
level of questions' component explains about 50% of the variance in final examination scores.
The relationship between the on-line QPA scores, especially the 'cognitive level of
questions' component, and the final examination grade might have an alternative
explanation. It can be argued that this finding is merely an artefact of students with
high cognitive abilities succeeding in both the on-line QPA and the final examination.
To refine the analysis, we therefore calculated a partial correlation coefficient,
using the students' GMAT scores (an MBA program entry requirement) as a surrogate
control variable representing prior knowledge and cognitive abilities. The results
show a statistically significant correlation (r = 0.36, p < 0.01). This indicates that
even beyond the effects of students' prior knowledge or abilities, students who were
actively engaged in the on-line question-posing and peer-assessment activity received
higher scores on their final examination than their peers.
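A first-order partial correlation of this kind removes the control variable's linear contribution from both measures before correlating them. The sketch below is illustrative; the variable names and data are ours, not the study's:

```python
import numpy as np

def partial_corr(x, y, z):
    """First-order partial correlation of x and y, controlling for z."""
    r = np.corrcoef(np.vstack([x, y, z]))  # 3x3 pairwise correlation matrix
    rxy, rxz, ryz = r[0, 1], r[0, 2], r[1, 2]
    return float((rxy - rxz * ryz) / np.sqrt((1 - rxz ** 2) * (1 - ryz ** 2)))
```

If x and y are identical, the partial correlation stays 1 regardless of the control variable; in general it shrinks (or grows) the raw correlation according to how much of it the control variable accounts for.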
4.3. Students' feedback regarding the use of QSIA as a web-based learning and assessing environment
In this study, students were actively engaged in question posing as well as self-,
peer- and achievement-assessment, all administered by QSIA. In order to evaluate
the use of QSIA and to answer the third research question, a 5-point Likert-type
feedback questionnaire was used. It contained 12 statements concerning the use of
QSIA during the semester as a learning environment and 4 concerning its use as an
on-line assessment tool. The questionnaires were administered at the beginning of the
following semester, and only 46 students (65% of the research population)
responded. The feedback results concerning the use of QSIA as an on-line learning
environment are presented in Table 7 (5 = strongly agree, 1 = strongly disagree).
On average, students reported moderate attitudes towards QSIA as a web-based
learning environment. They indicated enjoying the use of QSIA during the course
and noted they would like to use the system in other courses. When asked about the
learning aspects of QSIA, students indicated a variety of opinions. They agreed
with the statement that they were engaged in active learning while using QSIA, and
also agreed that QSIA encourages individual learning. The students moderately
agreed that the use of QSIA improved their learning and conceptual understanding.
The students disagreed with the statement that the use of QSIA encouraged team
learning. By their reports, students were not concerned with data security issues, but
did note that they had encountered some technical problems.
Table 7
Feedback questionnaire results concerning the use of QSIA as an on-line learning environment

Statements concerning the use of QSIA as a learning environment      Mean    s.d.
1. I enjoyed using the QSIA during the course.                       3.54    0.91
2. I was engaged in active learning while using the QSIA.            3.35    1.12
3. I encountered technical problems while using the QSIA.            3.30    1.09
4. I would like to use QSIA in other courses.                        3.24    0.87
5. QSIA encouraged individual learning.                              3.11    1.04
6. Using the QSIA improved my learning.                              3.04    1.07
7. My conceptual understanding improved while using QSIA.            3.04    1.17
8. QSIA encouraged critical thinking of the learning material.       2.87    1.09
9. Using QSIA improved my understanding of the learning material.    2.80    1.11
10. I became aware of my misconceptions while using the QSIA.        2.43    0.93
11. The QSIA system encouraged team learning.                        2.04    0.89
12. I was concerned about security issues when I used QSIA.          1.61    0.80

Total feedback                                                       2.87    0.59

N = 46.
Table 8
Feedback questionnaire results concerning the use of QSIA as an assessment tool

Statements concerning the use of QSIA as an assessment tool          Mean    s.d.
1. After taking a test, I would like to receive an immediate
   feedback on my performance and my final grade.                    4.46    0.72
2. I enjoyed using QSIA as an assessment tool.                       3.78    0.92
3. I would like to use QSIA in other courses as an assessment tool.  3.41    0.93
4. I encountered technical problems while using the QSIA as an
   assessment tool.                                                  3.26    1.25

Total feedback                                                       3.72    0.56

N = 46.
The feedback results concerning the use of QSIA as an on-line assessment tool are
presented in Table 8 (5 = strongly agree, 1 = strongly disagree).
Table 8 shows a positive attitude towards QSIA as a web-based assessment
tool. The students expressed a strong interest in self-monitoring: receiving
immediate feedback and observing grades are important aspects of on-line
learning and testing, as perceived by the students. The students enjoyed the
use of QSIA as an assessment tool and noted they would like to use the system in
other courses. These two statements received higher means here than the
corresponding statements in Table 7. Encountering technical problems
received a similar mean in both tables. Overall, the total feedback mean
of this section (3.72 ± 0.56) is much higher than the total feedback mean of the
previous section, QSIA as a learning environment (2.87 ± 0.59). This indicates
that the students perceive QSIA more as an assessment tool than as a learning
environment.
No significant differences were found between genders regarding their attitudes
towards the use of QSIA as a web-based environment for learning and assessing, and
no significant correlation was found between the students’ attitudes towards the use
of QSIA and their final examination scores.
5. Summary and discussion
Some instructors expect learning to remain unchanged from the forms it had when
they were students. However, both technology and teaching paradigms are evolving,
and so should the learning environment (Dillon, 1990). This paper describes a new
approach to assessment that we believe holds promise for reshaping the way learning
outcomes are measured in higher education. This approach includes question-posing
as well as self-, peer- and achievement-assessments, all administered by QSIA—a
computerized on-line system for assessment and sharing of knowledge. This study
provides evidence that question-posing and peer-assessment can serve as both
learning and assessment tools in higher education by encouraging students to carry
out active learning, constructive criticism and knowledge sharing. We propose the
On-line QPA (Question Posing Assignment) as a methodology and QSIA as a tool to
serve as an alternative learning and assessing process in higher education.
Question-posing capability can be used effectively as an alternative evaluation tool
for assessing the extent to which students understand and analyse a topic (Dori and
Herscovitz, 1999). Using web-based on-line tools such as QSIA can be implemented
efficiently on a large scale with large numbers of students. We report a significant
correlation between the cognitive level of questions developed by students and their
performance on an independent, objective and separately administered traditional
final examination. Students who contributed higher-level questions received higher
grades on their final examination.
Rafaeli and Ravid (2003) found a relation between information sharing and team
profit. This project indicates a similarly strong connection between the social
behavior (contribution of knowledge) and the cognitive benefits for the students.
On-line sharing of information or knowledge has a positive impact on learning
outcomes. Findings indicated that even controlling for the students' prior knowledge
or abilities, those who were highly engaged in question-posing and peer-assessment
activity received higher scores on their final examination than their peers.
The research findings support the claim made by other researchers that
question posing can be regarded as a component of high level thinking skills and as a
stage in the problem-solving process (Ashmore et al., 1979; Shepardson and Pizzini,
1991). The major contribution here is the demonstration of an on-line method of
doing this.
This research suggests that on-line question-posing activity may enhance
meaningful learning and that on-line peer-assessment activity may enhance
communities of learners. These activities support both development of the individual
learner and a community of learners by enabling knowledge sharing. Knowledge
sharing was supported in this study by the two components of the on-line QPA:
Knowledge Development and Knowledge Contribution. We suggest that knowledge
sharing comprises these two components: to share knowledge, one has first to develop
it and then to consent to, and act on, contributing it for the benefit of others.
The QSIA on-line system supports such a knowledge-sharing process.
Our approach expands the notion of ‘web-based testing’ to the broader ‘on-line
assessment’. The system described here goes beyond on-line testing and evaluation
of students’ performances to the creation and use of a platform that allows
various modes of assessment (self, peer and achievement) and supports knowledge
sharing.
The students who participated in the research noted positive attitudes towards
QSIA as a web-based learning and assessing environment. The students enjoyed
using the QSIA system during the course and noted they would like to use the system
in other courses. They identified QSIA as a system that enhances active and
individual learning. These findings contrast with those of Dabbagh (2000), who
received negative comments on on-line quizzes from her students. While Dabbagh
found that on-line quizzes were perceived as anxiety-producing and cumbersome in
terms of on-line requirements, we have shown here that an on-line system can yield
more favorable responses.
Although students were actively engaged in contributing knowledge and sharing
information, they did not describe QSIA as a system that encourages team learning.
This indicates that students still see team learning in the traditional way: working in
small groups and conducting face-to-face meetings. This notion is supported by
Tang (1991), who noted that when people work collaboratively but not face-to-face,
many interaction resources are disrupted. On-line educational systems that
are developed in collaboration between computer science researchers and
educational researchers can support interactions and enhance collaboration
(Carroll et al., 2003; Rafaeli et al., 2003).
In this study, most students contributed more questions than the minimum
required; in this respect, they were willing to share knowledge with their
classmates. However, students were less active in providing rankings. Only a few
students participated in ranking their peers' questions; they were not willing to
assess or criticize their classmates' work. This result echoes results reported by
Williams (1992), whose students felt that peer assessment might be construed as
inappropriate criticism of one's friends and colleagues. Students' reaction to peer
assessment is not always positive; therefore, the adoption of peer assessment needs
to be tempered with recognition of its virtues and limitations.
In the quest to improve learning in higher education, and in accordance with the
trend of integrating on-line systems there, further use of the on-line QPA
administered via on-line testing mechanisms such as QSIA may provide intriguing
opportunities. On-line assessment holds promise for educational benefits
and for improving the way achievement is measured. This is especially so if
knowledge sharing is harnessed to aid the process.
Acknowledgements
The authors wish to thank Caesarea Edmond Benjamin de Rothschild
Foundation, Institute for Interdisciplinary Applications of Computer Science and
IUCEL—the Israeli Inter-University Center for E-Learning for supporting this
research.
The authors are grateful to the Center for the Study of the Information Society at
the University of Haifa, Israel. The authors also wish to thank Alumit Wolfowitz for
her valuable contribution in supporting students with QSIA.
References
Ashmore, A.D., Frazer, M.J., Casey, R.J., 1979. Problem solving and problem solving networks in
chemistry. Journal of Chemical Education 56, 377–379.
Barak, M., 2003. Exploiting an online testing system to go beyond the administration of tests. AACE
E-Learn Conference, Phoenix, Arizona, November 2003, pp. 1487–1490.
Beichner, R., Wilkinson, J., Gastineau, L., Engelhardt, P., Gjertsen, M., Hazen, M., Ritchie, L., Risley, J.,
2000. Education research using Web-based assessment systems. Available: http://www.physics.ncsu.
edu/people/faculty.html (2002, July).
Birenbaum, M., Dochy, F., 1996. Alternatives in Assessment of Achievements, Learning Processes and
Prior Knowledge. Kluwer Academic Publishers, Boston.
Bloom, B.S., Engelhart, M.B., Furst, E.J., Hill, W.H., Krathwohl, D.R., 1956. Taxonomy of Educational
Objectives: The Classification of Educational Goals, Handbook 1: The Cognitive Domain. Longmans
Green, New York.
Bohlin, K.E., 2000. Schooling of desire. Journal of Education 182 (2), 69–79.
Bruner, J.S., 1990. Acts of Meaning. Harvard University Press, Cambridge.
Carroll, J.M., Neale, D.C., Isenhour, P.L., Rosson, M.B., McCrickard, D.S., 2003. Notification and
awareness: synchronizing task-oriented collaborative activity. International Journal of Human–
Computer Studies 58, 605–632.
Conway, R., Kember, D., Sivan, A., Wu, M., 1993. Peer assessment of an individual’s contribution to a
group project. Assessment and Evaluation in Higher Education 18, 45–56.
Dabbagh, N., 2000. Multiple assessment in an online graduate course: an effectiveness evaluation. In:
Mann, B.L. (Ed.), Perspectives in Web Course Management. Canadian Scholars’ Press Inc, Toronto,
pp. 179–197.
Dillon, J.T., 1990. The Practice of Questioning. Routledge, London.
Dori, Y.J., Barak, M., Adir, N., 2003. A web-based chemistry course as a means to foster freshmen
learning. Journal of Chemical Education 80 (9), 1084–1092.
Dori, Y.J., Herscovitz, O., 1999. Question-posing capability as an alternative evaluation method: analysis
of an environmental case study. Journal of Research in Science Teaching 36, 411–430.
Hamilton, L.S., Klein, S.P., Lorie, W., 2000. Using Web-Based Testing for Large-Scale Assessment. Rand
Education, Santa Monica.
Johnson, T.R., 2000. The GMAT registrant survey: a retrospective. Selections 16, 20–25.
Larisey, M.M., 1994. Student self assessment: a tool for learning. Adult Learning 5 (6), 9–10.
Marbach-Ad, G., Sokolove, P.G., 2000. Can undergraduate biology students learn to ask higher level
questions? Journal of Research in Science Teaching 37, 854–870.
McDonald, A.S., 2002. The impact of individual differences on the equivalence of computer-based and
paper-and-pencil educational assessments. Computers & Education 39, 299–312.
Potelle, H., Rouet, J.F., 2003. Effect of content representation and readers’ prior knowledge
on the comprehension of hypertext. International Journal of Human–Computer Studies 58,
327–345.
Rafaeli, S., Barak, M., Dan-Gur, Y., Toch, E., 2003. QSIA—A Web-based environment for learning,
assessing and knowledge sharing in communities. Computers & Education, in press. Available online:
http://www.sciencedirect.com/
Rafaeli, S., Ravid, G., 1997. Online, Web-based learning environment for an information system course:
Access Logs, Linearity and Performance. ISECON ’97, Orlando, Florida, pp. 92–99.
Rafaeli, S., Ravid, G., 2003. Information sharing as enabler for the virtual team: an experimental
approach to assessing the role of electronic mail in disintermediation. Information Systems Journal 13,
191–206.
Rafaeli, S., Tractinsky, N., 1989. Computerized tests and time: measuring, limiting and providing visual
cues for time in computerized tests. Behavior and Information Technology 8, 335–353.
Rafaeli, S., Tractinsky, N., 1991. Time in computerized tests: a multi-trait multi-method investigation of
general knowledge and mathematical reasoning in online examinations. Computers in Human
Behavior 7, 123–142.
Robertson, S., Reese, K., 1999. A virtual library for building community and sharing knowledge.
International Journal of Human–Computer Studies 51, 663–685.
Shepardson, D.P., Pizzini, E.L., 1991. Questioning levels of junior high school science textbooks and their
implications for learning textual information. Science Education 75, 673–682.
Shodell, M., 1995. The question-driven classroom: student questions as course curriculum on biology. The
American Biology Teacher 57, 278–281.
Sluijsmans, D., Moerkerke, G., van-Merrienboer, J., Dochy, F., 2001. Peer assessment in problem based
learning. Studies in Educational Evaluation 27, 153–173.
Tamir, P., 1996. Science assessing. In: Birenbaum, M., Dochy, F. (Eds.), Alternatives in Assessment of
Achievements, Learning Processes and Prior Knowledge. Kluwer Academic Publishers, Boston,
pp. 93–129.
Tang, J.C., 1991. Findings from observational studies of collaborative work. International Journal of
Man–Machine Studies 34, 143–160.
Williams, E., 1992. Student attitudes towards approaches to learning and assessment. Assessment and
Evaluation in Higher Education 17, 45–58.
Zevenbergen, R., 2001. Peer assessment of student constructed projects: assessment alternatives in
pre-service mathematics education. Journal of Mathematics Teacher Education 4, 95–113.
Zohar, A., Dori, Y.J., 2002. Higher order thinking skills and low achieving students: are they mutually
exclusive? The Journal of the Learning Sciences 12 (2), 145–182.