Author(s): Richard J. Fendler, Craig Ruff and Milind Shrikhande
Source: Journal of Financial Education, Vol. 37, No. 3/4 (Fall/Winter 2011), pp. 45-63
Published by: Financial Education Association
Stable URL: http://www.jstor.org/stable/41948666

Online Versus In-class Teaching:
Learning Levels Explain Student Performance

Richard J. Fendler, Craig Ruff and Milind Shrikhande


Georgia State University

Several earlier studies in the pedagogical literature have reached widely
diverging conclusions regarding the efficacy of online versus in-class instruction.
These studies have taken an aggregate view of the two instruction settings. We
enter the debate by developing a perspective based on different learning levels. In
this paper, we examine student performance in the online and in-class
instruction settings at different learning levels as classified in Bloom's taxonomy
and conclude that learning outcomes differ between these two settings
primarily because they differ at the higher learning levels. We find that
including learning levels clarifies the picture considerably and opens avenues for
the scholarship of teaching and learning along paths untrodden by previous
researchers.

INTRODUCTION

Although many empirical studies support the "no significant difference" phenomenon
between online and traditional in-class learning with regard to overall student
performance [see Russell (1999)], robust validation of the efficacy of e-education at all
levels of learning has not been adequately established. Bloom (1956) identifies a spectrum
of learning levels, from knowledge to comprehension to application to analysis to
synthesis to evaluation. In this spectrum, knowledge, the basic recall of material, is
considered the lowest level of learning, and evaluation, which involves value judgment
using definite criteria, is the highest level of learning.
The purpose of this study is to rigorously investigate whether there are any
significant differences in student performance at different learning levels between those
students taking a college introductory course in finance online versus those taking the
exact same course in a regular classroom setting. Significant differences along the
learning level spectrum are found, indicating greater attention needs to be given to which
courses, and perhaps even which levels of education (i.e., grade school, high school,
college, graduate school), should be offered in a traditional setting versus in an online
environment.

LITERATURE REVIEW

In the book by Russell (1999), which is based on a bibliography of 355 studies that
were conducted between 1928 and 1998, the author concludes that there is no significant
difference in learning outcomes from technology aided instruction. Russell's work has
significant implications for distance learning classes that are based on the internet. The
main outcome of the study is that different educational technologies do not matter but
that students, the instructor, and course design are the most relevant factors. More recent
studies by Dellana, Collins and West (2000), Iverson, Colky and Cyboran (2005), and
Jones, Moeeni and Ruby (2005), which specifically compare student learning outcomes
between online and traditional courses, measured by student performance on some
common assessment instrument, also find no significant difference in the learning
outcomes of online versus in-class students.
Bertus, Gropper and Hinkelmann (2006) examine student performance in traditional
versus distance learning classes for graduate finance majors. They find that the
performance of these two groups of students is significantly different. Specifically, when
controlling for several student characteristics, they find that students in distance learning
classes perform significantly better than in-class students. Connolly, Mac Arthur,
Stansfield and McLellan (2007), in a three year comparison study of online versus face-to-
face delivery for three graduate-level computing classes, also report that online students
consistently outperform face-to-face students. Other studies that suggest online students
score significantly better than traditional classroom students include Dutton, Dutton and
Perry (2001), Hiltz (1997), and Shoemaker and Navarro (2000).
On the other end of the spectrum, multiple studies report that online students
underperform relative to traditional, in-class students. Brown and Liedholm (2002) ask
whether the web can replace the classroom in principles of microeconomics courses.
Interestingly, they find that students in the virtual classes, in spite of having better
characteristics, performed significantly worse on the examinations compared to the in-
class students. Further, they found that this difference was most significant for exam
questions that required students to apply basic concepts in more sophisticated ways. At
the same time, this difference was least pronounced in the case of basic learning tasks,
such as knowing definitions or recognizing important concepts.
Controlling for factors such as GPA, gender, age, pretest scores, SAT scores, math
background, and reported study hours, Anstine and Skidmore (2005) find that average
exam scores for online students are significantly lower than for traditional students. In
a study of online versus in-class students for a microeconomics course and for a
macroeconomics course, Bennet, Padgham, McCarty and Carter (2007) find that
traditional students performed better in the micro class, yet online students performed
better in the macro class. They postulate that the difference in performance between the
two courses may be due to the fact that the micro course is more quantitative than the
macro course, which goes along with the findings of Brown and Liedholm (2002) that
online students may experience difficulty applying concepts in more sophisticated ways.

Other studies that report superior performance for traditional relative to online students
include Terry, Owens and Macy (2001), Ponzurick, France and Logar (2000) and Vamosi,
Pierce and Slotkin (2004).
To summarize, there is little consensus about the relative efficacy of online versus
traditional, in-class learning, especially among recent studies. Findings range from online
learning being significantly better, to equivalent, to significantly worse. These divergent
results may be a function of study design, teaching methodology, delivery systems, or
differences in student learning styles. Indeed, a rapidly growing body of literature
addresses the question of how best to teach an online course. However, with the
exception of Brown and Liedholm (2002),¹ scant attention has been given to relative
learning outcomes at different learning levels between online and in-class settings.
If online versus in-class instruction is more (less) effective at one learning level than
another, then at least some of these divergent results may be a function of the learning
level of the instrument used for comparison purposes. Specifically, if, as we show in this
study, online students perform worse at higher levels of learning and/or perform better
at lower levels of learning, then studies that use a comparison instrument more heavily
weighted to high learning levels will find superior in-class performance and studies
that use a comparison instrument more heavily weighted to low learning levels will find
superior online performance. At a minimum, this study effectively introduces the notion
that learning levels cannot be ignored in future online versus in-class comparison studies.

Learning Oriented Approach

The major goal of this research is to identify possible differences in student learning
outcomes across different learning levels (as defined by Bloom's taxonomy), for online
versus in-class teaching modes. Specifically, we want to determine whether student
performance at different learning levels in an online, introductory course in finance is
better than, equal to, or worse than performance in the same course taught in a
traditional classroom among similar groups of undergraduate students. By similar groups
we mean groups matched on prior ability, as measured by GPA, and basic demographic
data such as gender, race and chosen major.
Bloom's (1956) taxonomy is a hierarchy of learning levels within the cognitive
domain ranging from simple recall of data to abstract thinking and judgment. The levels
of learning identified by Bloom are like a pyramid, or step function, where the higher
levels of learning build on the base of the lower levels.
To have well-rounded learning, one must not only be taught at the lower levels of
learning in this taxonomy [knowledge, understanding and application] but also aspire to
be taught, as the AACSB recommends, at the higher levels of learning [analysis, synthesis
and evaluation].

DATA

The core finance course in the undergraduate program at the Robinson College of
Business (Georgia State University) is a common course that is taught in many different
sections per semester. Sections are offered on different days or day combinations and at
different times. Seven or eight instructors, who cover a highly standardized curriculum,
typically teach the course. The course is coordinated purposefully across all sections, with
a common syllabus, common learning objectives, a common textbook, weekly instructor
meetings, a faculty coordinator, and a common, cumulative final exam. The topics
covered in this course are financial statement analysis, time value of money, stock and
bond valuation and capital budgeting.
Typically there are 10 to 12 sections of the course taught per semester. Every
semester for the past six years, one of the sections of the course has been taught online.
All other sections are taught in a regular, traditional classroom setting. The regular
classroom based semester-long course is taught through a series of lectures. The
maximum enrollment in the traditional classroom sections is 40 students.

The online section of the course has been taught by one particular faculty member
every semester for the past six years. He taught the course for over 12 years in a
traditional classroom setting and continues to teach other finance courses using the
traditional method. This faculty member is the course coordinator as well as one of the
authors of the textbook used in the course. The maximum enrollment in the online
section is 75 students in the fall and 50 students in the spring. The section is extremely
popular among students, typically achieving maximum enrollment on the first day of
registration.
The online section is highly structured and refined. The course is taught using
WebCT/uLearn as a delivery platform. At the beginning of every month, students are
given a daily calendar of events showing suggested reading assignments, suggested end
of chapter problem assignments, and all online events. Solutions to all end of chapter
assignments are posted on the course homepage.
Students take an online quiz every week. Quizzes open at 8:00 am on Monday and
close at midnight on the following Sunday. Quizzes consist of ten to fifteen multiple-
choice questions or open-ended problems. A quiz can be taken as many times as the
student wants and only the highest score counts; however, due to question alternates and
the use of a random number generator in problems, successive attempts are different.

There are a total of 12 quizzes in the course and the lowest two quiz scores are dropped.
The quiz average counts as only 10 percent of the final course grade; thus, the student's
quiz average represents more of an effort grade as opposed to a measure of knowledge or
understanding.
Online students must also complete three 20 question problem sets, mainly
consisting of open-ended problems. The problem set questions are posted on WebCT;
students print a hard copy of the questions, work out solutions to all questions and enter
their final numerical answers online one week after the problem set is posted. Online
students also take three online exams. These consist of 25 to 30 questions that are similar
to quiz and problem set questions. Online exams have a time limit of 150 minutes, and
they can only be taken once during a 72-hour time window (over a weekend). Every
question on an exam has numerous alternates and/or random number elements so that
no two students take the same exam.
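To illustrate how question alternates and random number elements can yield a different but equivalent exam for every student, the following minimal Python sketch generates randomized variants of a loan-payment problem. The function name, parameter ranges and seeding scheme are hypothetical; they are not drawn from the authors' actual WebCT/uLearn item bank.

    import random

    def make_payment_question(seed):
        """Generate one variant of a loan-payment problem (hypothetical illustration)."""
        rng = random.Random(seed)
        principal = rng.randrange(10_000, 30_001, 1_000)   # randomized loan amount
        annual_rate = rng.choice([0.06, 0.09, 0.12])       # randomized nominal annual rate
        years = rng.choice([3, 4, 5])
        r, n = annual_rate / 12, years * 12
        payment = principal * r / (1 - (1 + r) ** -n)      # standard annuity payment formula
        text = (f"You borrow ${principal:,} at {annual_rate:.0%} nominal annual interest, "
                f"repaid monthly over {years} years. What is the monthly payment?")
        return text, round(payment, 2)

    # A different seed (for example, a student identifier) yields a different problem,
    # so no two students see exactly the same numbers.
    question, answer = make_payment_question(seed=12345)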

Every aspect of the course is taught online. That is, there is no face-to-face
instructor-student interaction, except for an occasional office visit. The main forms of
student-instructor interaction are weekly chat sessions and email. Chat sessions lasting
up to 2 hours are held every week. Attendance is optional; a transcript of the session is
posted immediately after the session ends. About one-half of the registered students
regularly attend chat sessions; however, nearly all students claim to read the posted
transcripts. All chats begin with a question/answer session and most involve the
instructor demonstrating the solution of problems that are distributed prior to the
session. Occasionally the instructor will lecture on a topic or attempt to discuss some
current event; however, it is difficult for the instructor to express enthusiasm for
the subject in a chat forum, and student interest in these types of sessions is generally
low.

The instructor sends global email announcements to students every few weeks (for
example, 6 global email announcements were sent during the 15 week 2006 fall semester
session). There is frequent individual communication between student and instructor
throughout the term. The instructor checks his email multiple times throughout the day
every day of the week (including weekends and during breaks). During the same term
mentioned above, the instructor reported replying to over 453 emails from students,
most within hours of receipt.
Traditional classroom based sections of the course have fewer and shorter quizzes
and problem sets. There are two midterm exams, though these typically differ in
structure from the online exams described above. Traditional exams are handed out in
class, completed by students during the regular class period, turned in and hand graded
by the instructor. These exams can have multiple-choice questions, problems where work
can be shown and even essay questions. Partial credit can be, and usually is, assigned for
work that is shown or for essay answers. Instructors usually provide feedback on the
exams while grading and go over the exams when they are returned to students.
Obviously, evaluation instruments used during the semester for the traditional
classroom based sections of the course and the online section differ widely. Comparing
relative performance on any of these instruments would be meaningless. However, there
is an evaluation instrument that is perfectly common between the different course
delivery methods - the comprehensive, common final exam. All students, regardless of
whether they take the course in-class or online, take a common final exam in a regular
classroom with an instructor proctor on the same day at the same time. Students are not
allowed to use a book, notes or formula sheets while taking the exam (this rule applies
to online students as well as regular classroom students).
All instructors teaching the course write questions for the common final exam.
Usually about 40 questions are written. All submitted questions are compiled and the
instructors then meet and select 25 questions for the common final exam. The questions
are carefully evaluated and edited as needed.
For the fall 2006 and spring 2007 semesters, the final exam questions were
categorized according to learning levels and the exams were structured such that 5
questions could be classified as knowledge, 5 as comprehension, 5 as application, 5 as
analysis and 5 as synthesis.² Twenty of the questions were multiple-choice (with five
answer choices) and five (those classified as synthesis) were open-ended problems where
students were told they could receive possible partial credit if they showed their work.
Additionally, for the multiple-choice questions, one answer was identified as the correct
answer, one was identified as being partially correct, demonstrating at least partial
understanding, and the other three answer choices were identified as absolutely incorrect
and demonstrating little to no understanding.
Multiple-choice questions were scored with four points awarded for a correct answer,
two points for a partially correct answer, and no points for an absolutely incorrect
answer. A total score was then assigned to each student for each learning level category.
For example, a student who selected the correct multiple-choice answer for all five of the
knowledge-classified questions on the exam received a KNOWLEDGE score of 20. Someone who
selected the correct answer for three of the knowledge-classified questions, the partially
correct answer for one, and an absolutely incorrect answer for one received a KNOWLEDGE
score of 14. Finally, answers and work shown on the open-ended problems were graded by
the authors of this paper according to a developed rubric, which resulted in a similar
scoring system for the open-ended (synthesis) question group.
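As a concrete illustration of this scoring scheme, the short Python sketch below totals points per learning level for one student's answer sheet. The question identifiers, answer keys and student responses are hypothetical; only the 4/2/0 point weights come from the description above.

    # Scoring scheme described above: 4 points for the correct choice, 2 points for the
    # designated partially correct choice, 0 points otherwise; points are totaled per level.
    QUESTIONS = {
        "K1": {"level": "knowledge", "correct": "A", "partial": "D"},
        "K2": {"level": "knowledge", "correct": "C", "partial": "B"},
        "C1": {"level": "comprehension", "correct": "A", "partial": "B"},
    }

    def level_scores(answers):
        """Return total points per learning level for one student's answer sheet."""
        totals = {}
        for qid, meta in QUESTIONS.items():
            chosen = answers.get(qid)
            if chosen == meta["correct"]:
                points = 4
            elif chosen == meta["partial"]:
                points = 2
            else:
                points = 0
            totals[meta["level"]] = totals.get(meta["level"], 0) + points
        return totals

    print(level_scores({"K1": "A", "K2": "B", "C1": "E"}))
    # {'knowledge': 6, 'comprehension': 0}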
To complete our data set, we collected data on GPA, gender and major for each
student in our study since these variables have been shown in numerous previous studies
to be important determinants of student performance. We categorize the majors as
qualitative or quantitative since these categories are conjectured to have a differential
impact on student performance. Finance, by nature, is a quantitative subject. Other
majors were classified as qualitative or quantitative as follows:

The grouping was initially made based on an ex ante judgment of the authors and
subsequently verified by observing the correlation of the students' examination scores
with our classification of the different majors.3
Our final data set consists of 105 students who took the class in a regular classroom
setting and 99 students who took the course online.4 The students in the regular
classroom setting group were randomly selected from the nearly 900 students who took
the course over the two-semester period, such that the proportion of students relative to
the total was the same for both groups (online vs. in-class) in both semesters.
In the following section we compare the scores (as described above) for the online
group and for the in-class group on the different learning level classifications (i.e.,
knowledge, comprehension, application, analysis and synthesis).

Study Design and Descriptive Data

The basic design of our study is summarized in the following chart. Final exam
questions were classified according to the Bloom learning levels as Knowledge,
Understanding, Application, Analysis or Synthesis. We computed overall grades for each
of these learning levels with partial credit awarded for second best answers and without
partial credit awarded.

[Chart: the five learning levels - Knowledge, Understanding, Application, Analysis and
Synthesis - with scores compared level by level between the In-Class Group and the
Online Class Group.]

Summary statistics for GPA, gender and major for the online group, the in-class
group and overall are shown in the table below:

Table 1. Data Summary Stats: Number of Observations, GPA, Gender and Major

                           IN-CLASS      ONLINE     OVERALL

Number of Students            105           99         204
GPA - Mean                   2.92         2.99        2.95
GPA - Std. Dev.             0.547        0.418       0.489
Male                        45.7%        42.4%       44.1%
Female                      54.3%        57.6%       55.9%
Quantitative Major          45.7%        36.4%       41.2%
Qualitative Major           50.5%        62.6%       56.4%
Undecided                    3.8%         1.0%        2.4%

As expected, the average GPA of the online students and the in-class students is
statistically identical. Also, consistent with an observed trend in business schools that has
persisted for years, there are significantly more females in our sample than males.
Whereas these findings are not particularly interesting, we do note the following two
curious relationships between the online and the in-class student groups.
First, there are more females in the online class than in the traditional class. This
relationship has persisted every semester over each of the past five years that we have
been evaluating data for the online course. It is possible that female students feel less
intimidated in a math-based online course than in a traditional classroom setting. Or,
equally likely, since we are an urban university, it is possible that some female students
feel more secure taking the course online than attending class on the downtown
campus. Second, we note that the online section has significantly more qualitative majors
than the in-class sections. This observation has also persisted over the past five years.
Since this course is a required core course for all business majors, perhaps qualitative
majors consider this course to be less relevant to their major and therefore treat the
convenience factor of taking the class online as primary in their decision process. This
explanation, however, is mere conjecture for which we have no specific data or
supporting evidence.
Raw exam score averages are shown in Tables 2a and 2b for no partial credit and
partial credit, respectively.

Table 2a. Raw Exam Score Statistics - No Partial Credit

                                   IN-CLASS                  ONLINE
                               Mean     Std. Dev.       Mean     Std. Dev.

Total Exam Score -
  All Questions               57.1%       17.7%         55.6%      16.1%
Knowledge Questions Score     48.9%         -           46.5%        -
Understand Questions Score    75.4%       25.4%         78.0%      22.3%
Application Questions Score   68.9%         -           70.2%        -
Analysis Questions Score      59.2%       27.1%         61.2%      27.2%
Synthesis Questions Score     33.1%         -             -        25.1%

Table 2b. Raw Exam Score Statistics - Partial Credit Awarded

                                   IN-CLASS                  ONLINE
                               Mean     Std. Dev.       Mean     Std. Dev.

Total Exam Score -
  All Questions               66.9%       16.2%         66.6%      14.6%
Knowledge Questions Score     65.2%         -           63.0%        -
Understand Questions Score      -           -             -          -
Application Questions Score     -           -             -          -
Analysis Questions Score        -         24.4%         65.0%      25.2%
Synthesis Questions Score       -           -           53.1%      22.0%

As we move to higher learning levels, we find that performance progressively declines
in a systematic fashion, whether partial credit is awarded or not. There is, however,
one exception to this rule which, surprisingly, is at the lowest learning level, the
knowledge questions. A little bit of reflection suggests that
most of the Knowledge questions on the exam involve material covered during the first
weeks of class and never discussed again throughout the course. Naturally, retention
becomes a major issue. Perhaps this aberration can be explained as a low level of
retention at this lowest level of learning. The fact that the average grade on the
Knowledge questions is closer to the average grade on the Understand questions when
partial credit is awarded, as compared to when partial credit is not awarded, seems to
strengthen the retention explanation. Students recall some of the basic knowledge level
information covered in the first weeks of class, but their recall is weak, resulting in
confusion between the correct answer and the next best answer.

Table 3. Average GPA per Score Tercile at Each Learning Level

                 Highest-Score Tercile    Middle Tercile     Lowest-Score Tercile
                  In-Class    Online     In-Class  Online     In-Class    Online

Knowledge           3.12       3.11        2.90     2.98        2.73       2.88
Understand          3.31       3.24        2.85     2.97        2.59       2.76
Application         3.22       3.23        2.99     2.93        2.54       2.81
Analysis              -          -           -        -           -          -
Synthesis           3.22       3.27        2.97     2.89        2.55       2.81

Results and Significant Findings

To test the validity of our assessment instrument we first checked the relationship
between relative performance on each learning level category and GPA. We divided the
sample into terciles based on scores by learning level on the cumulative final exam. As
shown in Table 3, the average GPA in each tercile at every learning level follows the
exact same order as dictated by student performance in these terciles for both the online
class as well as for the in-class sections. These results suggest that the assessment
instrument tracks student performance consistently with their proven competence levels
prior to taking this class, thus confirming the validity of our assessment instrument.
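For readers who wish to replicate this kind of validity check on their own data, a minimal sketch using pandas is shown below; the data frame, column names and scores are hypothetical stand-ins for the student records described above.

    import pandas as pd

    # Hypothetical student records: a final-exam score at one learning level, GPA,
    # and the delivery mode; the paper's actual data are not reproduced here.
    df = pd.DataFrame({
        "analysis_score": [20, 16, 14, 12, 10, 18, 14, 8, 6],
        "gpa":            [3.6, 3.4, 3.0, 2.9, 2.7, 3.5, 3.1, 2.4, 2.2],
        "mode":           ["online"] * 5 + ["in-class"] * 4,
    })

    # Split each delivery mode into score terciles, then average GPA within each tercile.
    df["tercile"] = (
        df.groupby("mode")["analysis_score"]
          .transform(lambda s: pd.qcut(s, 3, labels=["bottom", "middle", "top"]))
    )
    print(df.groupby(["mode", "tercile"])["gpa"].mean())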
Table 4 compares the student performance and learning outcomes between the
online and in-class settings at five different learning levels as classified under Bloom's
taxonomy. The most striking result here is that the overall performance and learning
outcomes of the online and in-class groups cannot be differentiated from each other (t-
stat. score = -0.88, without partial credit; t-stat. score = -0.39, with partial credit).
However, at the highest learning level, the students' learning outcomes are different from
each other (t-stat. score = -3.25, without partial credit; t-stat. score = -2.21, with partial
credit), even though they are indistinguishable at all the other learning levels in Bloom's
taxonomy. The diverse conclusions of earlier studies about in-class versus online student
performance and learning outcomes can be explained by taking a closer look at how
performance varies across the levels of learning, as demonstrated in this table.
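A two-sample comparison of this kind can be reproduced in a few lines of Python. The sketch below uses hypothetical per-student synthesis scores and applies Welch's unequal-variance t-test, since the paper does not state which variance assumption was used.

    import numpy as np
    from scipy import stats

    # Hypothetical per-student synthesis scores (percent of points earned) by delivery mode.
    online_synthesis  = np.array([20, 35, 10, 45, 25, 15, 30, 0, 40, 20], dtype=float)
    inclass_synthesis = np.array([40, 55, 25, 60, 35, 30, 45, 20, 50, 35], dtype=float)

    # Two-sample t-test for a difference in mean scores (online minus in-class).
    t_stat, p_value = stats.ttest_ind(online_synthesis, inclass_synthesis, equal_var=False)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")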
To further investigate these observations we analyzed the data in a regression
framework controlling for GPA, major and gender. Our regression results are presented
in Table 5.
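A regression of this general form can be sketched with statsmodels as follows; the data frame and variable names (online, gpa, male, quant_major) are hypothetical illustrations of the controls described above, not the authors' actual coding.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical student-level records; variable names are illustrative only.
    df = pd.DataFrame({
        "synthesis_score": [0.20, 0.45, 0.10, 0.60, 0.35, 0.55, 0.25, 0.50],
        "online":          [1, 1, 1, 1, 0, 0, 0, 0],        # 1 = online section
        "gpa":             [2.8, 3.4, 2.5, 3.7, 3.0, 3.5, 2.7, 3.2],
        "male":            [1, 0, 1, 0, 1, 0, 0, 1],
        "quant_major":     [0, 1, 0, 1, 1, 1, 0, 0],
    })

    # OLS of a learning-level score on delivery mode, controlling for GPA, gender and
    # major type, in the spirit of the Table 5 regressions.
    model = smf.ols("synthesis_score ~ online + gpa + male + quant_major", data=df).fit()
    print(model.params)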

Table 4. T-Test Statistics of Test for Difference in Means (Online versus In-Class)

                            Without Partial Credit            With Partial Credit
Learning Level of        Mean      Mean      Diff.         Mean      Mean      Diff.
Exam Questions           Online    In-Class  T-Test Stat.  Online    In-Class  T-Test Stat.

Knowledge                46.5%     48.9%     -0.76         63.0%     65.2%     -0.93
Understanding            78.0%     75.4%       -             -         -         -
Application              70.2%     68.9%      0.18           -         -         -
Analysis                   -         -         -             -         -         -
Synthesis                  -         -       -3.25           -         -       -2.21
Overall                  55.6%     57.1%     -0.88         66.0%     66.9%     -0.39

Table 5. Regression Analysis Results⁵

Overall                 -0.02595   -0.02682    0.07232   -0.03617    0.20228
Knowledge Questions      0.18804   -0.02664    0.05754   -0.04047    0.10294
Understand Questions     0.04278    0.00591    0.05153    0.02239    0.23434
Application Questions    0.01447   -0.00293    0.06204   -0.00751    0.22542
Analysis Questions      -0.08530    0.00834    0.06978   -0.09297    0.24073
                         (-0.79)    (0.24)     (1.94)    (-2.70)     (6.50)
Synthesis Questions     -0.28976   -0.11881    0.12072   -0.06227    0.20799
                         (-2.60)    (-3.34)    (3.25)    (-1.75)     (5.42)

(t-statistics in parentheses)

Consistent with our observations in Table 4, the online variable coefficient is
insignificant in all regressions except for the Synthesis Questions group. For this group,
the coefficient is significantly negative. As for the power of the model, observe that
overall, when we do not break up the data by learning levels, R-squared is a convincing
41.82%, where major and GPA are strong predictors of student performance. The R-
squared values are still high when we analyze the data by learning levels. Thus,
regression analysis again confirms the intuition captured earlier, specifically, that the
online students do significantly worse than the in-class students at the highest 'synthesis'
level of learning.
Note that the one exception for relatively strong R-squared values occurs for the
Knowledge Questions regression. As discussed earlier in the paper, we believe this
aberration at the lowest learning level, which is also observed in Table 4, can be
attributed to the information retention issue. The very low R-squared value at the lowest
learning level reported in Table 5 is consistent with students randomly guessing the
answer due to poor memory retention of material covered in the early weeks of the
semester.

CONCLUSION

A rapidly increasing number of academic courses, and in some cases entire degree
programs, are being offered online, from grade school to graduate school. Although few
argue with the fact that online education is efficient, convenient and cost effective, the
question of whether the learning that takes places in an online class is at least equivalent
to the learning that occurs in a traditional, in-class setting is far from settled. In fact, in
the academic literature, answers to this question span the entire spectrum. Where
learning is measured by performance on some common evaluation instrument, such as
a single, cumulative final exam in the course, one group of research studies finds no
significant difference, another concludes that online learning is significantly superior to
in-class learning, and still another reports that online learning is significantly inferior to
the learning that occurs in an in-class setting.
We argue that a possible explanation for this wide divergence is a missing variable;
specifically, learning level. To date, student performance at different learning levels has
been largely ignored in studies comparing online and in-class teaching settings. Our
study sheds light on this crucial aspect in ascertaining the distinction between online and
in-class settings based on student performance at different learning levels using Bloom's
taxonomy.
Indeed, our findings suggest that whereas online students perform similarly to in-class
students at the lower levels of learning, online students perform significantly worse at
the highest level of learning, whether partial credit is awarded or not. Thus, even though
overall performance of online students and in-class students does not differ from each
other, on a cumulative exam specifically designed to test at all the learning levels, the
online students suffer at the AACSB-desired highest learning level.
Our study shows that care must be taken when comparing student performance in
the in-class and online settings. If instruments used to evaluate learning outcomes are
designed such that they focus on all learning levels equally, then drawing conclusions
based on exam or course performance becomes more effective. In fact, since many college-
level examinations focus inordinately on knowledge and comprehension, some recent
empirical studies finding no significant difference between student performance in the
in-class setting and student performance in the online setting may be more a
consequence of the instrument than a vindication of the effectiveness of online learning.
As possible avenues for future research, one way to enable students in the online
setting to improve their performance at the higher synthesis level of learning would be
to adapt the matrix framework for effective instruction in differentiated settings
suggested by Heacox (2002) based on combining Bloom's taxonomy with Gardner's
(2000) theory of multiple intelligences among learners.⁶ Another very interesting area
of future research would involve shifting the focus from success at answering exam
questions at different learning levels to a diagnostic approach consistent with Perry's
scheme of intellectual development. Perry's fundamental idea is that college students
go through a natural stage of intellectual growth, from believing that there are simple
right and wrong answers to understanding that acquiring knowledge is actually a very
complex process. The critical issue is whether or not on-line classes, which may lack the
traditional give-and-take of a classroom, stunt college students' natural intellectual
development. To take an extreme example, imagine that there are two otherwise
identical students in terms of living experiences, campus activities, etc. The only
difference between these two students is that one student completes all of her courses
online while the other student only takes traditional in-class courses. Would the two
students have reached differing levels of intellectual development at the end of their
college careers? This research can be conducted using well-established measures of
intellectual development.

ENDNOTES

1 Although Brown and Liedholm (2002) note that they observe a difference in
performance relative to question type, they do not classify questions by learning levels
nor do they directly test differences at different learning levels.
2 See Appendix A for a detailed description of the process used to classify questions
into different learning level categories.
3 The major, Risk Management and Insurance (RMI), may seem misplaced. In the
college, the RMI major tends to be less quantitative and mainly focused on insurance. On
the other hand, the Actuarial Science majors are, as a group, very quantitative.
4 Although the online class began with an enrollment of 75 in the fall and 50 in the
spring, a total of 20 students withdrew from the online class before the official semester
drop date. The online class always has a higher drop rate than the traditional class, a
relationship which is consistently reported in the online education literature. Thus,
originally there were 105 student exams selected for this group. After we coded all of the
data, however, we realized that 6 of the exams in this group were variance final exams
(four from the fall and 2 from the spring). A variance exam is an exam that is taken at a
time different from the common final exam time. If a student has a specific conflict with
the final exam time, the student can apply to the department chairman for an exam
variance. If granted, the student must take the final exam prior to the official common
exam time. To reduce the possibility of cheating, the variance exam differs from the
official common final exam. Thus, we dropped these exams from our sample, resulting
in a final tally for the online class group of 99 students.
5 The dependent variable for the regressions reported in Table 5 is score without
partial credit. We also ran regressions on score with partial credit, but did not report
these separately because the significance levels of all coefficients were essentially
identical to what is reported in Table 5.
6 A nice summary is available in Heacox (2002): "...every student has strengths in
thinking and learning. Students learn and produce with greater ease when they're using
an area of strength. Asking students to work in ways in which they're less able helps
them strengthen those intelligences and widen their repertoire. ...the more variety you
offer students in the ways you ask them to learn and show what they have learned, the
greater the likelihood of reaching more students."
7 We thank an anonymous referee for introducing us to Perry's work on student
learning levels. See Perry (1970).

REFERENCES

Anstine, J. and M. Skidmore. "A small sample study of traditional and online courses
with sample selection adjustment." Journal of Economic Education, 36 (2), 2005. 107-
128.

Bennet, D., G. Padgham, C. McCarty, and M. S. Carter. "Teaching Principles of
Economics: Internet vs. Traditional Classroom Instruction." Journal of Economics and
Economic Education Research, Volume 8, Number 1, 2007. 21-31.
Bertus, M., D. Gropper, and C. Hinkelmann. "Distance Education and MBA Student
Performance in Finance Courses." Journal of Financial Education, Vol. 32, Fall 2006.
25-36.

Bloom, Benjamin. Taxonomy of Educational Objectives, the classification of educational
goals - Handbook I: Cognitive Domain, New York, McKay, 1956.
Brown, B. and C. Liedholm. "Can Web Courses Replace the Classroom in Principles of
Microeconomics?" American Economic Review. Vol. 92, No. 2, May 2002. 444-448.
Connolly, T., E. Mac Arthur, M. Stansfield, and E. McLellan. "A quasi-experimental study
of three online learning courses in computing." Computers & Education, Volume 49,
Issue 2, September 2007. 345-359.
Dellana, S., W. Collins and D. West. "Online education in a management science course -
Effectiveness and performance factors." Journal of Education for Business, 76 (1),
2000. 43-47.

Dutton, J., M. Dutton and J. Perry. "Do Online Students Perform As Well As Lecture
Students?" Journal of Engineering Education, Vol. 90, No. 1, January 2001. 131-136.
Gardner, H. Intelligence Reframed: Multiple Intelligences for the 21st Century. New
York, Basic Books, 2000.
Heacox, Diane. Differentiating Instruction in the Regular Classroom: How to Reach and
Teach All Learners, Grades 3-12. Minneapolis, MN: Free Spirit Publishing, 2002.
Hiltz, S. R., "Impacts of College-level Courses Via Asynchronous Learning Networks:
Some Preliminary Results," Journal of Asynchronous Learning Networks, vol. 1, no.
2, 1997. 1-19.

Iverson, K., D. Colky and V. Cyboran. "E-Learning Takes the Lead: An Empirical
Investigation of Learner Differences in Online and Classroom Delivery."
Performance Improvement Quarterly. Vol. 18, Iss. 4, Spring 2005. 5-14.
Jones, K., F. Moeeni and P. Ruby. "Comparing web-based content delivery and
instructor-led learning in a telecommunications course." Journal of Information
Systems Education, 16 (3), 2005. 265-270.
Perry, W. G. Forms of Intellectual and Ethical Development in the College years: A
Scheme. New York: Holt, Rinehart, and Winston, Inc., 1970.
Ponzurick, T., K. France and C. Logar. "Delivering graduate marketing education: An
analysis of face-to-face versus distance education." Journal of Marketing Education.
22 (3), 2000. 180-187.
Russell, Thomas. The No Significant Difference Phenomenon. Office of Instructional
Telecommunications, (1999), North Carolina State University.
Shoemaker, J. and P. Navarro. "Policy issues in the teaching of economics in cyberspace:
research design, course design, and research results." Contemporary Economic
Policy 18 (3), 2000. 359-366.
Terry, N., J. Owens and A. Macy. "Student performance in the virtual versus traditional
classroom." Journal of the Academy of Business Education, 2 (1), 2001. 1-4.
Vamosi, A., B. G. Pierce and M. H. Slotkin. "Distance learning in an accounting
principles course-student satisfaction and perceptions of efficacy." Journal of
Education for Business, 79 (6), 2004. 360-366.

Appendix A

How Exam Questions were Categorized into Different Learning Levels

All exam questions on the final exam were written so that they fell into one of five
learning level categories: knowledge, comprehension, application, analysis, or synthesis.
Obviously, the classification of questions into learning levels is the most subjective aspect
of our study. Therefore, we tried to use a process that was as precise and robust as
possible.
First, each instructor for the course wrote and submitted to the course coordinator
suggested multiple choice questions for the final exam. A total of 50 questions were
submitted. Each question was required to have five answer choices.
Then, each of the three authors of the study independently classified all submitted
questions according to the following Appendix Table 1 guidelines:

Appendix Table 1

Category        Question Characteristics

Knowledge       Simple multiple choice questions
Comprehension   Conceptual multiple choice questions, or simple one-step multiple
                choice problems
Application     Multiple choice problems with one or more intermediate steps
Analysis        Two step multiple choice problems where an answer to step one is
                used to compute the correct answer to step two
Synthesis       Open ended problems where students must derive the correct
                answer. These problems are multiple step with extraneous
                information included in the question. Students were instructed to
                show their work and circle their final numerical answer.

On questions where our classification differed, we either rewrote the question to


reach unanimous agreement or we dropped the question from the group. We then
selected, from the group we unanimously agreed upon, 25 questions for the final exam
such that five questions fit into each category. The synthesis questions were originally
submitted as multiple choice questions, but we removed the answer choices to make
them open ended problems. Finally, the 25 selected questions were circulated back
among the instructor group for final editing to make sure that all questions, answer
choices and problems were clear and uniform in style.
An example of each question type, a brief explanation of how the specific question
fits the basic question characteristics, the correct answer and a note concerning the
process we used to classify an answer choice as a "partial credit" answer are shown in
Appendix Table 2 below:

Appendix Table 2. Example Final Exam Questions


Learning Level: Knowledge

Illustrative Question: Simple multiple choice question


The primary goal of a publicly-owned firm interested in serving its stockholders
should be to:
A. maximize shareholder wealth
B. create jobs
C. promote social good
D. maximize net income

E. minimize risk

Correct Answer and Explanation of Partial Credit Answer:


The correct answer is A. The next best answer is D since maximizing net income is
an aspect (though not the only aspect) of maximizing shareholder wealth.

Illustrative Question: Conceptual multiple choice question
Suppose there is a financial security that promises to give you $2,000 ten years
from today. All else constant, for a given nominal interest rate, a change from
quarterly compounding to annual compounding will cause the current price of this
security to

A. Increase
B. Decrease
C. Remain the same
D. Either increase or decrease depending on the number of years until the money
is to be received
E. None of the above

Correct Answer and Explanation of Partial Credit Answer:


This is a "what if" question. If "x" changes, what happens to "y"? The correct answer
is A. The next best answer is B.
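To see the mechanics behind this answer, the short calculation below compares the two present values using an assumed 8 percent nominal rate; the rate is purely illustrative, since the question deliberately leaves it unspecified.

    # Present value of $2,000 due in 10 years under quarterly vs. annual compounding,
    # using an assumed 8 percent nominal rate purely for illustration.
    fv, years, nominal = 2_000, 10, 0.08

    pv_quarterly = fv / (1 + nominal / 4) ** (4 * years)   # more frequent compounding
    pv_annual    = fv / (1 + nominal) ** years             # less frequent compounding

    print(round(pv_quarterly, 2), round(pv_annual, 2))     # 905.78 926.39: the price rises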

Illustrative Question: Simple one-step multiple choice problem


What is the future value at the end of year 15 of $1,250 deposited today given an
interest rate of 10 percent p.a.?
A. $4,422.78
B. $5,743.72
C. $4,487.59
D. $5,221.56
E. $5,147.22

Correct Answer and Explanation of Partial Credit Answer:


This is a one step multiple choice problem. The correct answer is D. The next best answer
is B, which is the future value if a student sets N to 16 years (counting today as one year
plus the next 15 years). The other three answers are random numbers.
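A quick numerical check of the two relevant answer choices (a sketch; on the exam the problem would be worked by hand or with a financial calculator):

    # Future value of $1,250 at 10 percent per annum, matching the answer choices above.
    fv_15 = 1_250 * 1.10 ** 15   # correct 15-year horizon -> 5,221.56 (choice D)
    fv_16 = 1_250 * 1.10 ** 16   # off-by-one 16-year horizon -> 5,743.72 (choice B)
    print(round(fv_15, 2), round(fv_16, 2))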

Illustrative Question: Multiple choice problems with one or more intermediate


steps
You are considering buying a new car. The sticker price is $20,000 and you have
$5,000 to put toward a down payment. If you can negotiate a nominal annual
interest rate of 12 percent and you wish to pay for the car over a 3-year period,
what is your monthly car payment?
A. $458.94
B. $581.19
C. $520.44
D. $457.89
E. $498.21

Correct Answer and Explanation of Partial Credit Answer:
Solving this problem involves three intermediate steps: first, the down payment must
be subtracted from the sticker price to get the amount borrowed, second, the number
of periods is 36 (three years of monthly payments), and third, the annual interest rate
must be converted to a monthly rate to compute a monthly payment. The correct
answer is E. The next best answer is C, which is derived as the annual payment on
a $15,000 loan (with an interest rate of 12 percent and number of periods equal to 3)
divided by 12. The other three answers are random numbers.
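The three steps can be checked with a few lines of Python (a sketch of the standard annuity arithmetic, not the authors' grading materials):

    # Monthly payment on the amount financed: the $20,000 sticker price less $5,000 down.
    principal = 20_000 - 5_000
    monthly_rate, n_months = 0.12 / 12, 3 * 12

    payment = principal * monthly_rate / (1 - (1 + monthly_rate) ** -n_months)
    print(round(payment, 2))                 # 498.21, answer E

    # The 'next best' distractor: the annual payment on the same loan divided by 12.
    annual_payment = principal * 0.12 / (1 - 1.12 ** -3)
    print(round(annual_payment / 12, 2))     # 520.44, answer C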

Learning Level: Analysis

Illustrative Question: Two step multiple choice problems where an answer to step
one is used to compute the correct answer to step two
Exactly four (4) years ago, Bill borrowed $250,000 to buy a house. His mortgage
loan has a 30 year term with monthly payments. The annual nominal interest rate
on the loan is 6.75 percent. Assuming that Bill has made all required payments on
the loan to date, what is the payoff on his loan today (that is, immediately after
making the 48th payment on the loan)?
A. $237,755.49
B. $239,478.44
C. $238,176.83
D. $236,875.71
E. None of the answers listed above is within $50 of the correct answer.

Correct Answer and Explanation of Partial Credit Answer:


Must first compute the monthly payment on the loan and then use that monthly
payment figure to calculate the loan payoff. The correct answer is C. The next best
answer is A, which is the payoff on the loan if the payment is an annual payment
instead of a monthly payment. Answers B and D are random numbers. Answer
choice E is included on all of the application questions.
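The two-step arithmetic can be verified with the following sketch (standard amortization formulas, not the authors' solution key):

    # Remaining balance after 48 monthly payments on a $250,000, 30-year, 6.75% mortgage.
    principal, r = 250_000, 0.0675 / 12
    n_total, n_paid = 30 * 12, 48

    payment = principal * r / (1 - (1 + r) ** -n_total)            # step 1: monthly payment
    payoff  = payment * (1 - (1 + r) ** -(n_total - n_paid)) / r   # step 2: PV of remaining payments
    print(round(payment, 2), round(payoff, 2))                     # about 1,621.50 and 238,177 (choice C)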

Learning Level: Synthesis

Illustrative Question: Open ended problems where students must derive the
correct answer. These problems are multiple step with extraneous information
included in the question. Students were instructed to show their work and circle
their final numerical answer.
Gwen-Aga, Inc. is a software manufacturer specializing in small business
operations. The company is considering introducing new alternative software. The
company's chief financial officer has collected the following information about the
proposed product:

- The project has an anticipated economic life of 8 years.
- R&D costs for development of the new software were $22 million.
- The company will have to purchase new computer equipment to produce the new
  product. The new equipment will cost the company $43 million to purchase and
  install. The equipment will be depreciated on a straight line basis to a $7 million
  salvage value over its 8 year project life.
- If the company goes ahead with the new product, it will have an effect on the
  company's net working capital. At the outset (i.e. at t=0), inventory will increase by
  $1.5 million and accounts payable will increase by $500,000. At t=8, the net working
  capital will be recovered after the project is completed.
- The new software is expected to generate sales revenue of $110 million per year for
  each of the next 8 years. Operating costs, excluding management salaries, are expected
  to be $40 million per year. New software experts will be hired and it is estimated that
  these experts will be paid salaries totaling $5 million per year.
- Because the new software is similar to some of Gwen-Aga's existing software products,
  sales in the existing products will decrease by $10 million per year, once the new
  product gets to the market.
- The company's interest expense each year will be $3 million.
- The company's tax rate is 40 percent.

Calculate Gwen-Aga's operating cash flow in year 2. (Show your work in the space
provided for possible partial credit. Record your final numerical answer on the proper
line on the answer sheet).

Correct Answer and Explanation of Partial Credit Answer:


Solving this problem involves multiple steps and the problem includes extraneous
information. The student must solve the problem and can show work that was hand
graded by the authors of the paper to allow for partial credit to be awarded for use
of the correct procedure but with an incorrect final numerical answer.
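One standard way to carry out this calculation is sketched below; it is not the authors' rubric (which the paper does not reproduce), and it assumes the usual textbook treatment: R&D is sunk, interest is a financing rather than operating flow, and the lost sales on existing products are an incremental cost of the project.

    # Operating cash flow for year 2 under standard capital-budgeting assumptions:
    # OCF = (revenue - cash operating costs - depreciation) * (1 - tax rate) + depreciation.
    revenue, op_costs, salaries, lost_sales = 110e6, 40e6, 5e6, 10e6
    depreciation = (43e6 - 7e6) / 8          # straight line to the $7 million salvage value
    tax_rate = 0.40

    ebit = revenue - op_costs - salaries - lost_sales - depreciation
    ocf_year2 = ebit * (1 - tax_rate) + depreciation
    print(ocf_year2 / 1e6)                   # about 34.8 (million dollars) under these assumptions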
