BLENDED LEARNING
IN THE AGE OF SOCIAL CHANGE
AND INNOVATION
Proceedings of the
3rd World Conference on Blended Learning
Agnieszka Palalas
Helmi Norman
Przemyslaw Pawluk (Eds.)
INTERNATIONAL ASSOCIATION FOR BLENDED LEARNING
http://www.iabl.org
ISBN: 978-618-82543-3-6
Main Title: Blended Learning in the Age of Social Change and Innovation
Subtitle: Proceedings of the 3rd World Conference on Blended Learning
Editors: Agnieszka Palalas, Helmi Norman & Przemyslaw Pawluk (Eds.)
Place of Publication: Greece
Publisher: International Association for Blended Learning
Table of Contents
Papers
Mobile the Efficacy of Blended Learning Models of Teacher Professional Development .................. 1
Susan Ruckdeschel
Why OER for Blended Learning 2017 ....................................................................................................... 9
Rory McGreal
Unraveling the Multidimensional Structure of Information Literacy for Educators .......................... 13
Kamran Ahmadpour
Different Forms of Assessment in a Pronunciation MOOC – Reliability and Pedagogical
Implications .................................................................................................................................................. 34
Martyna Marciniak, Michal B. Paradowski and Meina Zhu
Blended Learning in Primary School - Looking for a New School Formula ....................................... 42
Dorota Janczak
How to Organize Blended Learning Support in Higher Education ..................................................... 46
Janina van Hees
The Use of Mobile Educational Application (MobiEko) as a Supplementary Tool
for Learning ................................................................................................................................................... 51
Mohamad Siri Muslimin, Norazah Mohd Nordin and Ahmad Zamri Mansor
Dronagogy: A Framework of Drone-based Learning for Higher Education in the
Fourth Industrial Revolution ..................................................................................................................... 55
Helmi Norman, Norazah Nordin, Mohamed Amin Embi, Hafiz Zaini and Mohamed Ally
Reconfiguring Blended K-12 Professional Learning Through the BOLT Initiative ........................... 63
Constance Blomgren
A Proposed Blended Educational Framework for Administration of Enterprises in
Nowadays’ Greek Financial Crisis ............................................................................................................. 68
Thalia Vasiliadou, Evgenia Papadopoulou and Avgoustos Tsinakos
Create a blended mobile learning space with Whatsapp ......................................................................... 76
Alice Gasparini
Mindfulness in Online and Blended Learning: Collective Autoethnography ...................................... 84
Agnieszka Palalas, Anastasia Mavraki, Kokkoni Drampala and Anna Krassa
Effective Use of Online Tools in Engineering Classes ........................................................................... 97
Yasemin Bayyurt and Feza Kerestecioglu
Investigating the Reasons for Low Level of Interaction in a Blended Course .................................. 102
Aysegül Salli and Ülker Vanci Osam
Different Forms of Assessment in a Pronunciation
MOOC – Reliability and Pedagogical Implications
Martyna Marciniak
Institute of Applied
Linguistics, University of
Warsaw, Poland
martyna.marciniak@student.
uw.edu.pl
Michał B. Paradowski
Institute of Applied
Linguistics, University of
Warsaw, Poland
m.b.paradowski@uw.edu.pl
Meina Zhu
Department of Instructional
Systems Technology, Indiana
University-Bloomington, USA
meinzhu@umail.iu.edu
ABSTRACT
Peer assessment has long been used as an alternative to instructor assessment of students’ learning. Yet, its receivers are
often skeptical about the effectiveness and validity of the evaluation (e.g. Strijbos, Narciss & Dünnebier, 2010; Kolowich,
2013; Formanek et al., 2017; Meek, Blakemore & Marks, 2017). Still, other studies (e.g. Cho & Schunn, 2007; Gielen et
al., 2010; Ashton & Davies, 2015) have found peer grading to be reliable and valid when accompanied by proper guidance,
and that when used appropriately, it may benefit both the learners who receive the feedback and those who provide it
(Dochy, Segers & Sluijsmans, 1999; Barak & Rafaeli, 2004).
Nowadays peer assessment remains an element vital to the existence of massive open online courses (MOOCs), and is
widely recognized by the research community as a topic which needs to be investigated in detail and improved in the future.
Massive open online courses whose primary focus is second language learning (LMOOCs) are organized by various
institutions around the world. Nevertheless, publications addressing issues related to this type of course are fairly scarce
(cf. Bárcena & Martín-Monje, 2015).
Pronunciation routinely accounts for a major share of communication breakdowns in non-native speaker interactions as
well as communication between native and non-native speakers (cf. e.g. Paradowski, 2013; Pawlas & Paradowski, under
review). Yet, in many language classrooms its teaching is brushed off in favor of imparting other skills. Luckily this
shortage is increasingly being addressed with the ready availability of CALL. We present a small case study of peer
assessment reliability in the context of a Japanese pronunciation MOOC offered by one of the popular online providers.
A phonetic analysis of the first author’s speech recordings has been carried out using Praat software (Boersma & Weenink,
2017) in order to assess the accuracy of feedback obtained from course participants. On its basis, an evaluation of the
pronunciation has been made and then compared with assessment provided by peers, a TA involved in the course, and an
independent Japanese native speaker teacher.
Although the peers’ comments conveyed a general idea about progress, their feedback was not sufficiently detailed. More
reliable was the assessment by the TA. Still, an evaluation completed by an independent Japanese native speaker showed
that a person not involved in any way in the MOOC was easily able to make even more observations. Thus, assessment
appeared objective and reliable only after triangulating all the sources of feedback.
The study revealed that peer assessment may not produce reliable results if the process of evaluation is not sufficiently
facilitated; namely, when there are no explicit guidelines and preparatory training exercises provided for the participants.
The peer evaluation was difficult to perform in a helpful manner since the assignments lacked clearly constructed rubrics.
Thus, future language courses, particularly those that concentrate on productive skills such as speaking, ought to implement
clear rubrics together with a grading tutorial.
Author Keywords
peer assessment, validity, reliability, language MOOCs (LMOOCs), pronunciation
PEER ASSESSMENT IN MOOCS
Peer assessment can be defined as “the process of a learner marking an assessment of another learner, for the purposes of
feedback and/or as a contribution to the final grade” (Mason & Rennie, 2006:91). Its main advantage is that it stimulates
learners to adopt the role of a person who grades work, thereby making them take time to reflect on the topic (ibid.).
Nevertheless, the scholars state that it is crucial to inform everyone taking part in the process about its established goals as
well as provide instructions on how and what to assess, because only on condition that these issues are understood can peer
assessment be a valuable experience.
Peer assessment has long been used as an alternative to instructor assessment of students’ learning. However, its receivers
are often sceptical about the effectiveness and validity of the evaluation (Strijbos, Narciss & Dünnebier, 2010; Kolowich,
2013; Formanek, Wenger, Buxner, Impey & Sonam, 2017; Meek, Blakemore & Marks, 2017). Still other studies (e.g. Cho
& Schunn, 2007; Gielen, Peeters, Dochy, Onghena & Struyven, 2010; Ashton & Davies, 2015) have found peer grading to
be reliable and valid when accompanied by proper guidance. It has also been argued that when used appropriately, it may
benefit both the recipients and the providers of the feedback (Dochy, Segers & Sluijsmans, 1999; Barak & Rafaeli, 2004).
Peer assessment has been particularly vital to the existence of massive open online courses (MOOCs). The notion was first
introduced in 2008, when Stephen Downes from the National Research Council of Canada and George Siemens of the
Technology Enhanced Knowledge Research Institute at Athabasca University launched their Connectivism and Connective
Knowledge course, presently known as CCK08 (Harber, 2014a:37). The person to use the label for the first time was David
Cormier from the University of Prince Edward Island, who called the courses “MOOCs” in a talk with the course designers
(op. cit.:39). Since that moment the number of courses offered as well as participants enrolling in them started growing
rapidly; as indicated in a report compiled by the HarvardX Research Committee at Harvard University and the Office of
Digital Learning at MIT, in the first year after the two institutions jointly launched the edX platform (from autumn
2012 to summer 2013), the number of registrations equalled 841,687 with 597,692 individual users, 43,196 of whom
successfully completed courses (Ho et al., 2014:2). George Siemens himself commented on these high figures stating that
even though the courses eventually opened by his team attracted around 20,000 registrants in total, “it’s hardly a blip on
the Coursera scale (where student numbers in excess of 100,000 seems to be the norm)” (Siemens, 2012). As a consequence,
in an article published in The New York Times, 2012 was proclaimed “The Year of the MOOC” (Pappano, 2012). Bearing
in mind that the increasing worldwide interest in MOOCs is believed to continue in the future (Bárcena & Martín-Monje,
2015:2; “The return of the MOOC”, 2017), it appears reasonable to consider such courses as a promising field of study.
Massive open online courses offer learning materials which can be accessed through the Internet. Pappano (2012) states
that although MOOCs are usually free of charge and readily available to anyone who wishes to access them without
prerequisites, participants cannot expect the course creators to guide them through the learning process at all times. Thus,
the overall experience that students are going to get is based to a great extent on the design of the course and its mechanics.
At the core of MOOCs are instructional videos usually not longer than a dozen minutes (op. cit.). The courses also include
tests that check participants’ comprehension, homework assignments, final quizzes, and forums which enable the learners
to communicate with one another as well as with the staff. As Elena Bárcena and Elena Martín-Monje point out in their
(2015) publication Language MOOCs: Providing Learning, Transcending Boundaries, virtually any subject can be
rendered into a MOOC, which attests to the format’s universality and versatility.
The reasons for introducing the peer assessment system into MOOCs are of a practical nature. According to Kulkarni et al.
(2013:3), open-ended assignments are difficult for a machine to check, and thus normally require a human assessor.
This view is supported by other researchers, for instance Bachelet et al. (2015). What is more, engaging people
other than course staff in grading participants’ work is simply inevitable given the very high
numbers of learners (Harber 2014b:69). As far as the methods of providing feedback are concerned, Harber gives examples
of MOOCs during which self- or peer-grading was employed with use of rubrics that included criteria of evaluation, but
he also warns that this method has its drawbacks; for instance, it requires a certain level of language proficiency and scoring
abilities from those performing the assessment (op. cit.:72). Stressing the importance of peer assessment as a potentially
efficient tool in massive online courses, Lackner et al. (2014) include this element in their checklist which has been
designed as a valuable tool for MOOC creators. However, at the same time they underline that rules of peer review should
be simple and made known to all the parties involved in the process (op. cit.:4).
Language MOOCs
Educational technology has also made many inroads in foreign language education (Paradowski, 2015:38). Yet, while other
fields have been better represented and analysed, publications addressing foreign language courses (LMOOCs) are still
fairly scarce (Bárcena & Martín-Monje, 2015). This paper is an attempt towards filling the gap by analysing the
effectiveness of peer feedback in a pronunciation LMOOC.
Massive open online courses whose primary focus is a foreign language (LMOOCs for short) are organised by various
institutions, and, as one might expect, the majority concern languages with the highest numbers of speakers worldwide,
namely English and Spanish (Bárcena & Martín-Monje, 2015:6). In the opinion of Maggie Sokolik, the type of assessment
chosen for an LMOOC should match the unique goals agreed on by the course designers, but at the same time the author
admits that grading open-ended assignments such as essays or spoken responses is difficult for numerous reasons. She
also considers peer assessment, and notes that grading in this case requires “a rubric developed by the instructor” on
the basis of which learners “assess [each other’s work] on a number of points” (Sokolik, 2015:24). However, this method
also has disadvantages of its own: the participants may not be competent enough to use peer assessment effectively,
some biases might be displayed (for instance with regard to the place of origin of those being evaluated), and language
proficiency can play a significant role in one’s capability of providing meaningful feedback. Nevertheless, the author
concludes that the best solution could be a mixture of different types of assessment, such as “auto-scored multiple-choice
or text-input items, in tandem with self-evaluation, and an effective discussion mechanism” (op. cit.:25).
PRONUNCIATION
An area of language that routinely accounts for a substantial share of communication breakdowns in non-native speaker
interactions as well as communication between native and non-native speakers is pronunciation (Paradowski, 2013; Pawlas
& Paradowski, under review). Yet, in many language classrooms its teaching is brushed off in favour of imparting other
skills. One solution here is computer-assisted language learning (CALL), including LMOOCs.
THE MOOC
This case study is based on the first author’s experience with a Japanese pronunciation MOOC offered by one of the popular
online providers. During the course, the participants needed to complete one pronunciation assignment each week. They
were supposed to record their own version of a short dialogue in Japanese and upload it to the platform. The recordings
were subsequently assessed by peers, although the last and longest of the recordings was graded and commented on by
a teaching assistant instead of other participants.
Data triangulation
For the purpose of data triangulation, the analysis relied on four sources: i) a phonetic analysis of the first author’s speech
recordings with Praat (Boersma & Weenink, 2017), ii) assessment of the same stimuli, provided by peers, iii) feedback
from a TA involved in the course; iv) commentary by an independent Japanese native speaker teacher not involved in the
course. The primary foci of the analyses were three aspects of Japanese phonology: i) word accent, ii) intonation, and iii)
length of long vowels.
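The pitch measurements underlying analyses of word accent and intonation come from Praat. As an illustration of the general idea, a fundamental-frequency estimate can be sketched with plain autocorrelation. The code below is a didactic simplification of ours, not the algorithm Praat actually implements (Praat's autocorrelation method adds windowing, interpolation and voicing decisions), and the function name is assumed:

```python
import numpy as np

def estimate_f0(signal: np.ndarray, sr: int, fmin: float = 75.0, fmax: float = 500.0) -> float:
    """Rough fundamental-frequency estimate via autocorrelation.

    A simplified sketch of what a pitch tracker does: find the lag at which
    the signal best correlates with itself within a plausible pitch range.
    """
    sig = signal - signal.mean()
    ac = np.correlate(sig, sig, mode="full")[len(sig) - 1:]  # one-sided autocorrelation
    lag_min, lag_max = int(sr / fmax), int(sr / fmin)        # search window in samples
    lag = lag_min + int(np.argmax(ac[lag_min:lag_max]))      # strongest period
    return sr / lag

# synthetic "vowel": 150 Hz fundamental with two weaker harmonics
sr = 16000
t = np.arange(int(0.1 * sr)) / sr
vowel = (np.sin(2 * np.pi * 150 * t)
         + 0.5 * np.sin(2 * np.pi * 300 * t)
         + 0.25 * np.sin(2 * np.pi * 450 * t))
print(f"{estimate_f0(vowel, sr):.0f} Hz")  # close to 150 Hz
```

Tracking such estimates over successive short frames yields the pitch contours compared in the figures below; vowel length, in turn, is simply the duration of the segmented vowel interval.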
Summary of errors
Table 1 presents all the types of the first author’s mistakes which were revealed in the phonetic analysis along with the
number of occurrences and examples. The highlighted morae indicate the problem area. In total, 28 discrepancies were
detected between the original recordings and the author’s renditions. 19 mistakes concerned word accent and 9 intonation.
Within the two categories, the errors have been arranged by their gravity – the greater the probability that the mistake could
result in a misunderstanding in communication, the higher it appears in the table.
Type of error                                        No. of occurrences   Examples
1. Word accent – rise and fall of pitch switched     7                    kasa, demo, muzukashii
2. Word accent – rise of pitch on the wrong mora     2                    suggoku, ame
3. Word accent – fall of pitch on the wrong mora     1                    tanoshimi-ni
4. Word accent – unnecessary rise of pitch           5                    nen-niwa, kyō-wa
5. Word accent – unnecessary fall of pitch           3                    tōkyō-de, shadōingu
6. Word accent – rise of pitch missing               1                    natta
7. Intonation – excessive rise of pitch              1                    gozaimasu
8. Intonation – fall of pitch instead of a rise      4                    desu-ne
9. Intonation – rise of pitch instead of a fall      2                    shiterun desu, shimashita
10. Intonation – excessive fall of pitch             1                    desu-ne
11. Intonation – fall of pitch missing               1                    tsukurimashita

Table 1. Summary of pronunciation errors
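The word-accent categories in Table 1 can be thought of as comparisons between the model's and the learner's mora-level pitch patterns. The sketch below is our illustrative simplification, not the procedure used in the study: it labels a mismatch between two H(igh)/L(ow) strings with a category in the spirit of Table 1.

```python
def classify_accent_error(model: str, learner: str) -> str:
    """Compare mora-level pitch patterns (strings of 'H'/'L') of equal length."""
    if len(model) != len(learner):
        raise ValueError("patterns must cover the same morae")
    if learner == model:
        return "correct"
    # contour fully inverted: every H <-> L swapped (Table 1, type 1)
    if learner.translate(str.maketrans("HL", "LH")) == model:
        return "rise and fall of pitch switched"
    extra = [i for i, (m, l) in enumerate(zip(model, learner)) if m == "L" and l == "H"]
    missing = [i for i, (m, l) in enumerate(zip(model, learner)) if m == "H" and l == "L"]
    if extra and not missing:
        return "unnecessary rise of pitch"
    if missing and not extra:
        return "unnecessary fall of pitch"
    return "rise/fall of pitch on the wrong mora"

# demo, model accent HL, produced as LH -> fully switched contour
print(classify_accent_error("HL", "LH"))  # rise and fall of pitch switched
```

The same comparison covers the TA's later remark that a three-mora word with target accent HLL was produced as LHH, which this sketch likewise classifies as a switched contour.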
Provided below are two illustrative examples of the mistakes. Fig. 1 reveals a rise of pitch on su in the second unit of the
model recording, which is missing from the researcher’s rendition, where an increase in pitch takes place on the subsequent
mora go. In Fig. 2, while in the model version the utterance ends with a noticeable increase of pitch on mora ne, in the
researcher’s rendition the intonation is falling.
Figure 1. Word accent example: Tōkyō orinpikku (1) suggoku (2) tanoshimi-ni (3)
Figure 2. Intonation example: Desu-ne
Length of long vowels
In total, the recordings contain 38 long vowel sounds, 19 each in the original and the researcher’s versions. In the vast
majority of cases (34), these are long ō sounds. There are also 2 ū and 2 ā sounds. The average length of long vowels in
the original recordings equals 0.164 s. The duration of the longest sound is 0.261 s and the shortest 0.104 s. The average
duration of long vowels in the researcher’s versions equals 0.173 s. The duration of the longest sound is 0.279 s and the
shortest 0.046 s.
For each sound, the difference in duration between the two versions (original and researcher’s) has also been calculated.
The average difference equals 0.044 s. The biggest difference in measurements of the same sound is 0.125 s and the smallest
0.006 s. The median difference equals 0.032 s. There are 6 instances in which the original extended vowel sound is longer
than its equivalent in the researcher’s rendition, with the differences between 0.027 s and 0.088 s. The researcher’s
realisation is longer than the model in 13 cases, with the differences between 0.006 s and 0.125 s. A summary of all the
long vowel durations, measured in seconds, is provided in Table 2.
Word      Target   Actual production   Difference
omedetō   0.104    0.145               0.041
arigatō   0.151    0.122               0.029
nijū      0.138    0.101               0.037
tōkyō     0.168    0.174               0.006
tōkyō     0.117    0.195               0.078
sō        0.147    0.176               0.029
tōkyō     0.177    0.090               0.087
tōkyō     0.170    0.193               0.023
ohayō     0.133    0.159               0.026
ohayō     0.151    0.276               0.125
kyō       0.209    0.226               0.017
sō        0.167    0.210               0.043
kinō      0.261    0.279               0.018
benkyō    0.134    0.046               0.088
dō        0.201    0.216               0.015
benkyō    0.158    0.088               0.070
hontō     0.152    0.202               0.050
nōto      0.199    0.172               0.027
kādo      0.186    0.218               0.032

Table 2. Length of long vowels (in seconds)
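The descriptive statistics reported in this section can be re-derived directly from the Table 2 data; the snippet below is a quick consistency check using only the standard library:

```python
from statistics import mean, median

# (word, target duration, actual production) in seconds, copied from Table 2
rows = [
    ("omedetō", 0.104, 0.145), ("arigatō", 0.151, 0.122), ("nijū", 0.138, 0.101),
    ("tōkyō", 0.168, 0.174), ("tōkyō", 0.117, 0.195), ("sō", 0.147, 0.176),
    ("tōkyō", 0.177, 0.090), ("tōkyō", 0.170, 0.193), ("ohayō", 0.133, 0.159),
    ("ohayō", 0.151, 0.276), ("kyō", 0.209, 0.226), ("sō", 0.167, 0.210),
    ("kinō", 0.261, 0.279), ("benkyō", 0.134, 0.046), ("dō", 0.201, 0.216),
    ("benkyō", 0.158, 0.088), ("hontō", 0.152, 0.202), ("nōto", 0.199, 0.172),
    ("kādo", 0.186, 0.218),
]
targets = [t for _, t, _ in rows]
actuals = [a for _, _, a in rows]
diffs = [abs(t - a) for _, t, a in rows]

print(round(mean(targets), 3), round(mean(actuals), 3))  # 0.164 0.173  average durations
print(round(mean(diffs), 3), round(median(diffs), 3))    # 0.044 0.032  mean/median difference
print(sum(t > a for t, a in zip(targets, actuals)),      # 6   original longer
      sum(a > t for t, a in zip(targets, actuals)))      # 13  production longer
```

With 19 measured differences, the median is simply the 10th value in sorted order.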
Peer assessment
Table 3 presents the content of the comments by peers who evaluated the recordings of the researcher’s pronunciation.
Very good pronunciation, you sounded almost identical to the example. Keep up the good work :)
Dobrze. [Good.]
i think it is good but needs more security for talk! gambatte ne!
[Keep up the good work!]
It was easy to understand.
Sounds great!
悪くないと思います [I think it is not bad]
Everything sounds great except the second "Tokyo".
Instead of とうきょう [tōkyō], it sounded like とっきょう [tokkyō].
Good morning, you have a good pronunciation, just a little intonation.
sounds clear and accent is good.
Accent on 雨 [ame] is not correct.
Good fluency and intonation.
excellent rhyme and pitch. after listening to yours I see where I made my errors.
あれ [are] sounds as if it is only one mora. Otherwise amazing.
Well done.
Very good, great accent, great intonation.
good
頑張ろう [Keep up the good work]
発音は適切で、意味がわかる [Pronunciation is appropriate and meaning understandable]
Good.
I think your pronunciation was good.
内容がよく伝わりました。とてもきれいな発音でした。
[You have conveyed the message well. Very clear pronunciation.]
Great!
good
GOOD
Table 3. Peer assessment
Compared with the results of the phonetic analyses, the peers failed to notice (or chose not to write about) many of the
mistakes in the assignments, concerning both pitch accent and intonation. Very few peers stated the precise nature of the
pronunciation errors; the remaining feedback was formulated rather vaguely. It seems that the co-participants did not listen
carefully enough to pick up all the errors, but were satisfied if a speech sample was comprehensible on the whole.
Instead, a trait shared by quite a few comments was encouragement for the participant to study hard and try her best. Such
willingness to support one another and build a community spirit among the participants was strongest at the beginning
of the course.
TA’s feedback
The opinion cited below was offered by a teaching assistant as assessment of the last, sixth assignment, and constituted the
instructor feedback offered in the course:
“It was very good pronunciation that communicated your intention. The accent for "ええ" [ee] and "でも" [demo] was
not correct. It sounded like “LH”. The correct accent is “HL”. The accent for "だから" [dakara] was not correct. It
sounded like “LHH”. The correct accent is “HLL”. Be careful of pitch. "ノート" [nōto] sounded like "ノト" [noto].
Be careful of the long vowel sound. Keep trying your best to practice your pronunciation!”
In comparison with the results of the phonetic analysis, the TA pointed out 3 out of the 7 word accent mistakes committed
by the researcher. As far as intonation is concerned, even though no remarks have been made by the teacher, there were 6
inaccuracies found between the original and the researcher’s versions in the phonetic analyses. As far as the long vowels
are concerned, the TA mentioned one sound whose duration was too short. In this case, the difference in length between
both versions was 0.038 s. However, in the same assignment there were 5 instances in which the difference was even higher
– in 3 of them it was actually the researcher’s vowel that was longer, and these cases were not referred to by the TA. For
this reason, the pitch graphs were investigated once again. It transpires that although the difference in vowel length was
not among the largest and the pitch pattern in the researcher’s version was generally correct, her rise of pitch on mora
no may have been too sharp, thus creating the impression of the vowel being too short. In the
original version, pitch increases more smoothly. As observed subsequently by the native speaker, this too might have
contributed to the impression that the researcher switched to stress accent instead of pitch accent.
Assessment by a native Japanese speaker
Below is a complete commentary by a native Japanese speaker, a lecturer of Japanese literature and language teaching:
“In the Japanese language word accent is carried out by lowering voice pitch. However, in the researcher’s case there is no
pitch accent. Using the so-called stress accent, she incorrectly accentuates mora ma, which results in an unnatural
pronunciation.
“There is a mistake in the pronunciation of mora wa. The researcher pronounces it as if it were a diphthong.
“The pitch should fall after mora so. In the researcher’s version it rises after this mora.
“Benkyō should be pronounced with a long vowel at the end. In the researcher’s version the last mora seems to be missing
and the word sounds like benkyo.
“Here we can observe a common problem with intonation. The researcher uses a rising pitch instead of a falling one, which
is why the interjection ee sounds as if it were a question.
“The pronunciation of both utterances is correct.
“There is an accent error in this utterance as well. According to the accentuation rules of the conjunction dakara, it should
be pronounced with the pitch falling after mora da. The researcher’s pitch is low on mora da and it rises after this mora.
“The voiceless alveolo-palatal sibilant /ɕ/ and vowel /i/, which are represented in kana by the symbol し, are wrongly
realised by the researcher as /si/.
“The pronunciation of the utterance is correct.”
The native speaker’s feedback addresses pronunciation as a whole, and thus covers some aspects that were not
taken into consideration in the phonetic analyses. Consequently, his remarks shed new light on the issues discussed.
To begin with, the native speaker pointed to a total of 5 pitch accent errors, 3 of which had also been noticed by the
researcher. The remaining 2 mistakes consisted in switching from pitch accent to stress accent within a phrase, an
inaccuracy the researcher was not aware that she was guilty of. Surprisingly, the academic pointed to only 1 intonation
mistake throughout all the recordings. Apart from that, he discovered 2 flaws in the length of long vowels, which in his
opinion were too short. In these cases, the difference between the duration of the original and researcher’s vowels equalled
0.087 s and 0.088 s. Still, there were 2 other instances in which the original vowel was longer by a comparable margin
which were not spotted by the native speaker. What is more, the academic disregarded the opposite cases, in which the
researcher pronounced the vowels longer than the original. These might have been less noticeable; however, in one case
the difference equalled 0.125 s, which is considerably more than the ones he pointed out.
Among the additional elements identified by the native speaker, the most important issue was inappropriate articulation of
particular morae, namely wa, re and shi. This is an issue even some Japanese people struggle with. Furthermore, he
pointed out 2 errors which were fairly obvious yet had not been detected by the researcher, that is mistakenly replacing one
mora with another and deleting one mora at the end of a verb. The academic also stated that the researcher’s pronunciation
was good in 5 utterances, despite the phonetic analysis revealing deviations.
The numbers in Table 4 indicate how many of the specific errors were found in the particular analyses.
Type of mistake                    Phonetic analysis   Native Japanese speaker   Peer assessment   TA
Rise and fall of pitch switched    7                   2                         X                 3
Rise of pitch on the wrong mora    2                   X                         1                 X
Fall of pitch on the wrong mora    1                   X                         X                 X
Unnecessary rise of pitch          5                   1                         X                 X
Unnecessary fall of pitch          3                   1                         X                 X
Rise of pitch missing              1                   X                         X                 X
Excessive rise of pitch            1                   X                         X                 X
Fall of pitch instead of a rise    4                   X                         X                 X
Rise of pitch instead of a fall    2                   X                         X                 X
Excessive fall of pitch            1                   X                         X                 X
Fall of pitch missing              1                   X                         X                 X

Table 4. Summary of mistakes
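One way to read Table 4 is as the recall of each assessor relative to the phonetic analysis. The percentages below are our illustrative aggregation (the study itself reports only raw counts, and peers also caught errors outside these categories, as Table 5 shows):

```python
# per-category error counts from Table 4, top to bottom ("X" mapped to 0)
phonetic = [7, 2, 1, 5, 3, 1, 1, 4, 2, 1, 1]   # 28 errors in total
found = {
    "native speaker":  [2, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0],
    "peer assessment": [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    "TA":              [3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
}
total = sum(phonetic)
for assessor, counts in found.items():
    # share of instrumentally detected errors that this assessor also flagged
    print(f"{assessor}: {sum(counts)}/{total} ({sum(counts) / total:.0%})")
```

Even as a coarse indicator, the figures make the central point of the study concrete: no single human source of feedback, taken alone, recovers more than a small fraction of the instrumentally detected errors.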
Type of mistake                         Native Japanese speaker         Peer assessment   TA
Gemination instead of a long vowel      1 (東京 tōkyō)                   1 (東京 tōkyō)     X
Wrong articulation of morae             6 (れ re, し shi, は wa, ん n)    X                 X
Wrong mora pronounced                   1 (kattokunakya)                X                 X
Mora missing                            1 (chau-yo)                     X                 X
Stress accent instead of pitch accent   2 (gozaimasu)                   X                 X
Long vowel too short                    1 (benkyō)                      X                 1 (nōto)

Table 5. Additional errors found in the assessment but not in the phonetic analysis
SUMMARY OF THE FINDINGS
In sum, while peers’ comments conveyed a general idea about progress, the feedback was not sufficiently detailed. Much
more reliable was the assessment by the TA (in line with common observations that people can learn much more from
those who are more experienced and better educated than from similarly ignorant peers; Paradowski, 2015:46). However,
the commentary by the independent Japanese native speaker indicates that a person not involved in the MOOC is easily
able to make even more observations. The take-home message is thus that assessment is objective and reliable only after
triangulating all the available sources of feedback.
PEDAGOGICAL IMPLICATIONS
The findings have important pedagogical implications. They demonstrate that peer assessment may not produce reliable
and helpful results when there are no explicit guidelines and preparatory training exercises provided for the participants.
Consequently, future language courses, particularly those that concentrate on productive skills such as speaking, ought to
implement clearly constructed rubrics together with a grading tutorial.
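As a concrete illustration of that recommendation, a minimal analytic rubric for a pronunciation assignment might look as follows. The criteria, descriptors and 0-2 scale are hypothetical examples of ours, not taken from the course under study:

```python
# Hypothetical peer-assessment rubric for a pronunciation recording task.
RUBRIC = {
    "word accent":  "Pitch rises and falls on the same morae as in the model recording.",
    "intonation":   "Utterance-final pitch movement (rise or fall) matches the model.",
    "vowel length": "Long vowels are audibly longer than short ones, as in the model.",
}
SCALE = {2: "matches the model", 1: "minor deviations", 0: "impedes understanding"}

def score(ratings: dict) -> int:
    """Total a peer's ratings; every criterion must be rated on the 0-2 scale."""
    if set(ratings) != set(RUBRIC):
        raise ValueError("rate every criterion, and only those in the rubric")
    if any(r not in SCALE for r in ratings.values()):
        raise ValueError("ratings must use the 0-2 scale")
    return sum(ratings.values())

print(score({"word accent": 2, "intonation": 1, "vowel length": 2}))  # 5
```

Forcing a rating for every criterion is precisely what rules out the vague "Good." comments observed in Table 3, and the verbal descriptors give peers the shared reference points that a grading tutorial would then rehearse.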
REFERENCES
Ashton, S. & Davies, R. S. (2015). Using scaffolded rubrics to improve peer assessment in a MOOC writing course.
Distance Education, 36(3), 312-334. doi: 10.1080/01587919.2015.1081733
Bachelet, R., Zongo, D., & Bourelle, A. (2015, May). Does peer grading work? How to implement and improve it?
Comparing instructor and peer assessment in MOOC GdP. In European MOOCs Stakeholders Summit 2015 Proceedings
Papers, pp. 224-233.
Barak, M. & Rafaeli, S. (2004). On-line question-posing and peer-assessment as means for web-based knowledge sharing
in learning. International Journal of Human-Computer Studies, 61(1), 84-103. doi: 10.1016/j.ijhcs.2003.12.005
Bárcena, E. & Martín-Monje, E. (2015). Introduction. Language MOOCs: an emerging field. In E. Martín-Monje & E.
Bárcena (Eds.), Language MOOCs: Providing Learning, Transcending Boundaries (pp. 1-15). Berlin: De Gruyter.
https://www.degruyter.com/view/books/9783110422504/9783110422504.1/9783110422504.1.xml
Boersma, P. & Weenink, D. (2017). Praat: doing phonetics by computer [computer software]. Version 6.0.
http://www.fon.hum.uva.nl/praat/
Cho, K. & MacArthur, C. (2010). Student revision with peer and expert reviewing. Learning and Instruction, 20(4), 328-338. doi: 10.1016/j.learninstruc.2009.08.006
Cho, K. & Schunn, C. D. (2007). Scaffolded writing and rewriting in the discipline: A web- based reciprocal peer review
system. Computers and Education, 48(3), 409-426. doi: 10.1016/j.compedu.2005.02.004
Dochy, F., Segers, M. & Sluijsmans, D. (1999). The use of self, peer and co-assessment in higher education: A review.
Studies in Higher Education, 24(3), 331-350. doi: 10.1080/03075079912331379935
Formanek, M., Wenger, M. C., Buxner, S. R., Impey, C. D. & Sonam, T. (2017). Insights about large-scale online peer
assessment from an analysis of an astronomy MOOC. Computers & Education, 113, 243-262. doi:
10.1016/j.compedu.2017.05.019
Gielen, S., Peeters, E., Dochy, F., Onghena, P. & Struyven, K. (2010). Improving the effectiveness of peer feedback for
learning. Learning and Instruction, 20(4), 304-315. doi: 10.1016/j.learninstruc.2009.08.007
Harber, J. (2014a). Where did MOOCs come from? In same, MOOCs. Cambridge, MA: MIT Press, pp. 19-46.
Harber, J. (2014b). What makes a MOOC? In same, MOOCs. Cambridge, MA: MIT Press, pp. 47-88.
Ho, A. D., Nesterko, S., Seaton, D. T., Mullaney, T., Waldo, J., & Chuang, I. (2014). HarvardX and MITx: The first year
of open online courses. (HarvardX and MITx Working Paper No. 1). Retrieved from
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2381263
Huszcza, R., Ikushima, M. & Majewski, J. (2003). Fonetyka i prozodia. In M. Melanowicz & J. Linde-Usiekniewicz (Eds.),
Gramatyka japońska (pp. 17-114). Kraków: Wydawnictwo Uniwersytetu Jagiellońskiego.
Kolowich, S. (2013, Mar 18). The professors behind the MOOC hype. The Chronicle of Higher Education, 18.
http://www.chronicle.com/article/The-Professors-Behind-the-MOOC/137905
Kulkarni, C., Pang Wei, K., Le, H., Chia, D., Papadopoulos, K., Cheng, J., Koller, D., & Klemmer, S.R. (2013). Peer and
self-assessment in massive online classes. ACM Transactions on Computer-Human Interaction, 20(6). doi:
10.1145/2505057
Lackner, E., Kopp, M., & Ebner, M. (2014). How to MOOC? – A pedagogical guideline for practitioners. In Roceanu, I.
(Ed.), Proceedings of the 10th International Scientific Conference “eLearning and Software for Education”, Bucharest,
April 24-25, 2014. Bucharest: Editura Universitatii Nationale de Aparare “Carol I”, pp. 215-222. doi: 10.12753/2066-026X-14-030
Mason, R., & Rennie, F. (2006). Peer assessment. In same, Elearning: The Key Concepts. New York: Routledge, 90-91.
Meek, S. E. M., Blakemore, L. & Marks, L. (2017). Is peer review an appropriate form of assessment in a MOOC? Student
participation and performance in formative peer review. Assessment & Evaluation in Higher Education, 42(6), 1000-1013. doi: 10.1080/02602938.2016.1221052
Omuro, K., Baba, R., Miyazono, H., Usagawa, T. & Egawa, Y. (1996). The perception of morae in long vowels:
Comparison among Japanese, Korean, and English speakers. The Journal of the Acoustical Society of America, 100(4),
2726. doi: 10.1121/1.416186
Pappano, L. (2012, Nov 2). The year of the MOOC. The New York Times. http://www.nytimes.com/2012/11/04/education/edlife/massive-open-online-courses-are-multiplying-at-a-rapid-pace.html
Paradowski, M. B. (2013). [Review of the book Nature and Practical Implications of English used as a Lingua Franca.
Barbara Seidlhofer]. The Interpreter and Translator Trainer, 7(2) [Special Issue: English as a Lingua Franca.
Implications for Translator and Interpreter Education], 312-320. doi: 10.1080/13556509.2013.10798856
Paradowski, M. B. (2015). Holes in SOLEs: Re-examining the role of EdTech and ‘minimally invasive education’ in
foreign language learning and teaching. English Lingua Journal 1(1), 37–60.
Pawlas, E. & Paradowski, M. B. (under review). Communication breakdowns in ELF conversations: Causes, coping
strategies, and implications for the classroom.
Siemens, G. (2012, Jun 3). What is the theory that underpins our moocs? [Web log comment]. Retrieved from
http://www.elearnspace.org/blog/2012/06/03/what-is-the-theory-that-underpins-our-moocs/
Sokolik, M. (2015). What constitutes an effective language MOOC? In E. Martín-Monje & E. Bárcena (Eds.), Language MOOCs: Providing Learning, Transcending Boundaries (pp. 16-32). Berlin: De Gruyter. https://www.degruyter.com/view/books/9783110422504/9783110422504.2/9783110422504.2.xml
Strijbos, J. W., Narciss, S. & Dünnebier, K. (2010). Peer feedback content and sender's competence level in academic
writing revision tasks: Are they critical for feedback perceptions and efficiency? Learning and Instruction, 20(4), 291-303. doi: 10.1016/j.learninstruc.2009.08.008
The return of the MOOC. Established education providers v new contenders (2017, Jan 12). The Economist.
http://www.economist.com/news/special-report/21714173-alternative-providers-education-must-solve-problems-cost-and
Venditti, J. J. (2005). The J_ToBI model of Japanese intonation. In S.-A. Jun (Ed.), Prosodic Typology: The Phonology of Intonation and Phrasing (pp. 172-200). Oxford: Oxford University Press. doi: 10.1093/acprof:oso/9780199249633.003.0007