World Applied Sciences Journal 21 (Special Issue of Studies in Language Teaching and Learning): 117-124, 2013
ISSN 1818-4952
© IDOSI Publications, 2013
DOI: 10.5829/idosi.wasj.2013.21.sltl.2145

Quality Benchmarking for Online Writing Course: A Malaysian Case Study

1Nuraihan Mat Daud and 2Mohammed Hakim Farrah
1Kulliyyah of Languages and Management, International Islamic University Malaysia
2Faculty of Arts, Hebron University, Hebron, West Bank, Palestine

Abstract: The demand for online programmes is increasing and more institutions are opting to offer online courses. To make confident judgments concerning the efficiency of this mode of delivery, it is necessary to carry out an evaluation. This paper discusses an evaluation conducted on an online English for Academic Writing course offered by a public university in Malaysia and the reaction of the institution to the findings. An adapted version of the Institute of Higher Education Policy (IHEP 2000) benchmarks was used to determine whether the course met the quality benchmarks for online learning. This was complemented by interviews with the teachers. Results show that support was lacking and that management of the online mode needed to be improved. Steps were then taken to improve the programme by addressing the weaknesses identified in the study. The study also shows that it is not possible to evaluate the results of the implementation within a short period of time: the process of change takes time, especially where the adoption of technology is concerned.

Key words: Benchmarks · Quality measures · Online learning · Writing

INTRODUCTION

The availability of the Internet gives education providers the opportunity to explore its application in their institutions. Where the teaching of language is concerned, many studies have found that it has a positive impact on learning. For example, [1-5] have reported the use of a wide variety of discourse structures by students communicating electronically. The use of the Internet, however, is not devoid of limitations and criticisms. These limitations may arise from technological, managerial or pedagogical problems [6, 7]. Research shows that online instruction can be an effective means of teaching if certain guidelines are observed and reviews are performed [8-12]. Decisions can be made on the basis of an evaluation to improve learning and online delivery [13]. [14] observe that with thorough preparation and evaluation processes, costly mistakes can be avoided (p. 14). [15] identify user satisfaction as one of the key factors in the successful implementation of e-learning programmes. User satisfaction, however, is not limited to the learners. It is also important to consider the instructors, as they are the facilitators in the learning process.

This paper discusses an evaluation conducted on an online writing programme at a public university in Malaysia to investigate whether the course met quality measures and to examine aspects that may need improvement. The study focuses on the instructors since their perceptions can influence their practice [16]. Where relevant, students' opinions were also sought to get their perspectives on the same issues. The long-term impact of the action plans is also discussed.

Background to the Study: This study was conducted at the Centre for Languages and Pre-University Academic Development (CELPAD), International Islamic University Malaysia. The Centre offers Arabic, Malay, Mandarin, Japanese, French and English courses to equip students with the necessary language skills to perform well in their academic subjects. One of the compulsory courses offered by the Centre is English for Academic Purposes (EAP). This course is meant to develop students' academic writing skills. Upon completion of this course, they should be able to compose an argumentative research paper.
Students are guided to write in an argumentative mode through a variety of tasks set throughout the semester. In the year 2000, the Centre embarked on a blended approach in the teaching of this course. The main reason for this decision was to ease manpower problems: there were insufficient teaching staff to cater for the large number of students requiring the course. On average, forty sections were offered in each regular semester, with about thirty students per class. Since face-to-face teaching was still the preferred method, it was allocated more time than the online mode: two of the six contact hours were allocated to online teaching. The online option, however, was made compulsory for all students only in Semester II, 2004. This study examined whether the two online hours met the IHEP benchmarks for online learning.

The online component of the EAP course provides students with a variety of teaching suggestions and links to improve their general academic writing skills and to encourage them to be independent learners. These materials are intended to motivate students to manage their time wisely. The course is divided into ten modules. Each module begins with a text that relates to the module, followed by a series of tasks. The tasks are for self-study and help students practise what they have learnt in the face-to-face sessions. The course induces students to use the Internet as a tool in preparing and writing their papers. Students can also access a virtual resource room. Throughout the course, students are directed to useful sites to aid their learning process.

Corresponding Author: Nuraihan Mat Daud, Kulliyyah of Languages and Management, International Islamic University Malaysia, Kuala Lumpur, Malaysia. E-mail: nuraihan@iium.edu.my
In the face-to-face sessions, students are engaged in tasks meant to enhance their academic writing skills under the supervision of their instructors. Assessment of the course is based on the outline of the research paper, an oral presentation of the research paper outline, a 2500-word research paper and a final examination. In the online mode, students are instructed to email drafts of their paper to their facilitator's folder; the facilitator then guides them in improving and rewriting the final paper [17]. The facilitator is to ensure that all students are engaged in the given tasks in the online module and is to initiate discussions via the discussion board and the chat room. To access the module, students are given the site address and the password to the programme.

Objectives of the Study: The objective of this study is to analyse the extent to which the online mode of the English for Academic Writing course meets international-level benchmarks, particularly the IHEP 2000 Benchmarks. The study focuses on the following benchmarks: institutional support; course development; teaching and learning; course structure; student support; faculty support; and evaluation and assessment.

MATERIALS AND METHOD

Evaluation of the Online Programme: After six years of implementation, it was not known whether the course had achieved a certain standard where online learning was concerned. To determine whether it had met an international standard, the benchmarks developed by the USA Institute for Higher Education Policy (IHEP) [9] were used to evaluate the course. The Institute had conducted a case study comprising three phases to review the guidelines that existed at the time and the benchmarks that dealt with best practices in distance learning [18]. The study resulted in a list of twenty-four benchmarks for measuring the quality of Internet-based distance education programmes. The Institute considers these benchmarks crucial measures to help institutions, faculty and students judge the quality of Internet-based distance education. Several of these benchmarks were grouped together because they addressed the same issue. The resulting categories are: (a) institutional support; (b) course development; (c) teaching and learning; (d) course structure; (e) student support; (f) faculty support; and (g) evaluation and assessment.

The IHEP 2000 benchmarks [9] have been used to measure the quality of online programmes in a number of institutions. The Technology Task Force (TTF) of the University of Medicine and Dentistry of New Jersey selected these benchmarks as the basis for evaluating its distance learning programme and found that the standardized alpha indicated a high degree of internal consistency [19]. Similar evaluations using the same benchmarks were reported by [20], [21] and [22]. However, the nature of those programmes was not the same as the one in this study, since theirs were distance learning programmes. Apart from the nature of the programme, none of the studies mentioned evaluated the quality of a writing course.

The case study approach was adopted to allow the researchers to gain a better understanding of the phenomenon [23]. This approach involves the use of a wide range of methodologies; the survey and interview methods were the primary instruments for data collection in this study. [24] posits that a single case is acceptable provided that the objective is met. He suggests that case studies are favourable when contemporary events are investigated and when behaviour cannot be controlled. [25] proposes the use of a case study approach for reporting the work of teachers who develop ICT-facilitated learning. Interview is one of the most frequently used methods in the case study approach.
It has the advantage of putting the interviewer in direct contact with the people involved in the research [26]. Data from the interviews serve to triangulate the data collected from the survey based on the IHEP 2000 Benchmarks. Such triangulation allows different aspects of the problems to be investigated [27].

Of the 30 teachers who taught the course, 15 were selected to participate in this study. These teachers had more than a year of experience in the online mode of teaching. A total of 30 students were also interviewed in this study. Their perspectives are presented only where necessary, since the focus of this study was on the teachers. Questionnaires based on the IHEP 2000 benchmarks were distributed in 2006 to elicit how the teachers felt about the online option of the English for Academic Writing course. Instructors responded to the statements on a Likert scale from 1 (strongly disagree) to 5 (strongly agree). The instructors were also interviewed; eight agreed to be audio-taped while the others declined. Data from the audio-taped interviews were transcribed, and a selection of the comments is presented in the analysis section. Several measures were taken by the Centre based on the data obtained from the questionnaire survey, and the response to these changes was noted by the management team.

Validity and Reliability: To determine construct validity, a factor analysis was performed using a Varimax rotation procedure. The rotated matrix for the seven factors was obtained using SPSS, Version 11.0. The analysis shows that the items fell into seven major constructs, as grouped in the IHEP 2000 benchmarks. The reliability of the questionnaire is reported based on Rasch analysis procedures. Table 1 shows that the item reliability estimate is 0.85, which is an acceptable value in the Rasch model of measurement [28]. Rasch measurement also calculates a person reliability estimate.
Table 2 shows that the person reliability estimate is very high at 0.92 [28]. This means that the measures produced by the instrument are highly reliable. The reliability coefficient of each of the scales in the study is tabulated in Table 3. The table shows that the Cronbach Alpha Coefficient of each of the scales is considerably high.

To see which benchmarks were met by the Centre as perceived by the instructors, the mean difficulty was calculated for each of the criteria. Based on these means, the seven IHEP 2000 benchmarks could be ordered on a scale according to the degree to which they were met. Figure 1 presents the overall perception of the participants based on the seven benchmarks. It shows that, to the instructors, the Centre excelled in teaching and learning, course structure and course development. The Centre, however, was seen as struggling to meet the institutional support, faculty support, evaluation and assessment and student support benchmarks.

Analysis of Survey: Of the thirty instructors who taught the course, twenty-eight responded to the questionnaire survey. The descriptive statistics revealed that 64% of the respondents were female and that 70% had more than 10 years of teaching experience. When the respondents were asked to list the computer courses they had attended, less than 47% reported that they had attended computer literacy courses, 58% had attended Computer Assisted Language Learning (CALL) courses and more than 78% had undergone training to teach the online English for Academic Writing course at the Centre. Less than 11% mentioned that they had attended web-design, networking, multimedia, courseware and IT courses. When the respondents were asked to rate their computer skills, 50% reported average computer skills, 39% reported good computer skills and only 11% reported very good computer skills.
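The internal-consistency coefficients of the kind reported in Table 3, and the scale means used to rank the benchmarks, follow from standard formulas. A minimal sketch in Python illustrates the computation; the response matrix below is illustrative, not the study's data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of Likert scores.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 6 respondents x 4 items, 1-5 Likert scale.
scale = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 4, 5],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 2, 3, 3],
])

alpha = cronbach_alpha(scale)
print(f"alpha = {alpha:.2f}")              # internal consistency of the scale
print(f"scale mean = {scale.mean():.2f}")  # mean rating, as used to rank benchmarks
```

For each of the seven benchmark scales, the same computation would be run on that scale's columns of the questionnaire data; the Rasch person and item reliability estimates in Tables 1 and 2 come from dedicated Rasch software rather than this formula.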
Analysis of Each Benchmark: In addition to using the Cronbach Alpha Coefficient to indicate the reliability of the instrument, Rasch analysis procedures were also adopted to determine the relative level at which each benchmark was met.

Table 1: Summary of item estimates
              Estimate   Error   Infit MnSq   Outfit MnSq
Mean          0.00       0.26    1.01         1.00
S.D.          0.73       0.03    0.32         0.35
Max.          1.02       0.31    1.72         1.86
Min.          -1.10      0.23    0.57         0.55
Reliability of estimate: 0.85

Table 2: Summary of person estimates
              Estimate   Error   Infit MnSq   Outfit MnSq
Mean          -0.05      0.24    0.99         1.00
S.D.          0.96       0.04    0.57         0.62
Max.          1.63       0.34    2.72         2.91
Min.          -2.58      0.21    0.20         0.18
Reliability of estimate: 0.92

Table 3: Reliability analysis – scale (Alpha)
Scale                                    No. of Items   Alpha Value
Institutional Support Benchmarks         4              .84
Faculty Support Benchmarks               5              .80
Evaluation and Assessment Benchmarks     5              .86
Student Support Benchmarks               4              .75
Course Development Benchmarks            4              .71
Course Structure Benchmarks              5              .81
Teaching and Learning Benchmarks         6              .87

Teaching and Learning Benchmarks: This category was rated highly by the instructors, particularly in terms of the feedback given to the students. They claimed that it was provided in a timely manner and that it was constructive and non-threatening. One of the teachers stated:

Students like to email me especially if they are adult learners and shy to express themselves in class. It is not embarrassing to email somebody, so I use email.

Another said:

We use discussion rooms, chat rooms, emails, online folders. The students use the online folders to submit their assignments.

Since the questions concerned their own teaching practice, the instructors may have been biased, as their responses would reflect on their own teaching style. Hence students' opinions were also sought to confirm the claim. When asked, the students seemed to have conflicting views on this issue. One student explained:

We cannot learn from our mistakes in this online programme. When I submit my assignments I get a grade but no feedback.

A student from a different class gave the opposite view:

My instructor is very good at responding quickly to my assignments and giving beneficial feedback.

The above reflects that some instructors provided feedback but others did not. In general, students expected the instructors to provide feedback using the online facilities available.

Course Structure Benchmarks: Overall, the instructors perceived that the Centre had met the benchmarks for course structure. Of the five items in this category, the benchmark 'Before starting, students are advised about the programme' received the lowest rating. An instructor commented:

The students are in the dark when they register for the course.

Four other benchmarks in this category received a high rating from the respondents. Two of these are the availability of course outlines and the clarity of the learning outcomes for the course. The instructors may not have perceived these as problematic, since a university-level committee had been established to ensure that the course outlines of all courses offered by the university were of an acceptable standard. The same applies to the benchmark that addresses library resources, including the virtual library, since this facility is provided by the University library and was easily accessible on the Internet. Hence it is expected that they met this benchmark. The instructors also perceived that they did not have problems where setting guidelines for assignment completion was concerned. Students expressed similar views in the interviews, indicating that the guidelines were clear. A satisfied student stated:

I feel that I actually learn more because I have the flexibility to decide the best time of day to work on the course.

Course Development Benchmarks: The course development benchmarks include those activities meant for the development of courseware. The instructors rated the benchmark for periodical review of instructional materials very low. However, benchmarks that address the course itself received high ratings. These include the existence of guidelines on the minimum standards of the course, the technology used for course delivery and the course design. In the interview, an instructor claimed:

This is actually an achievement... We actually produced a lot of materials and we have it. We have done it. I think we are improving all the time.

The Head of the Division mentioned that they had content specialists and IT specialists working on the programme. A faculty member, however, pointed out that though they were qualified language experts they were not technical experts. He felt that they needed to attend more computer courses.

Faculty Support Benchmarks: Faculty support is important in ensuring the success of an innovation. Yet in this study the staff felt that they did not get the support they needed. All benchmarks in this category received a low rating from the instructors. The instructors teaching the online course did not feel fully supported by the Centre: they felt that peer monitoring resources, technical assistance and written resources were lacking. One of the instructors commented:

When I need support, no one gives me support. We are left alone.

Where technical assistance was concerned, another instructor stated:

Most of the time the technicians are not there. You go out, cry, nobody comes.

Workshops were actually conducted by the Centre to provide in-service training to the instructors. However, they were hardly attended. According to the trainer:

The maximum that I had turned-up was three teachers and they usually came late. I can show you teachers here who have never been to the training. Those who had never got the course guideline because it is with me (sic).

Members of staff were also asked about their lack of attendance at the workshops; the reasons they gave included schedule clashes, lack of interest, no instruction from the Head and no monitoring of attendance. Some felt that the available information was sufficient for them to teach the course.

Student Support Benchmarks: The four benchmarks relating to student support received mixed reactions from the instructors. As a whole, they perceived that the Centre did not meet the benchmarks for student support. An instructor mentioned that:

Some students might need some extra lessons in basic computing skills and on how to use the Internet.

The instructors felt that the Centre had problems in ensuring that technical assistance was easily available to the students throughout the duration of the course. In fact, this benchmark received the lowest rating when all items were compared. The instructors complained that the technicians were always unavailable whenever they were needed. The Centre also did not seem to provide a structured system to address students' complaints. Where addressing students' complaints was concerned, 17 (56.7%) of the students interviewed mentioned that they sent emails and asked questions but the answers from the instructors were not adequate, whereas the other students (43.3%) said that their complaints were addressed adequately.

Evaluation and Assessment Benchmarks: The evaluation and assessment benchmarks include those policies and procedures that address how the institution evaluates the online course. The instructors perceived that the Centre failed to meet the benchmarks for evaluation and assessment. The most problematic area seemed to be the use of data on enrolment, cost and successful/innovative uses of technology to evaluate programme effectiveness. In fact, from the instructors' responses, evaluation and assessment of the programme were certainly lacking at the Centre. According to the instructors, although the course has been in place since 2000, no formal study had been done on the success of the programme, which was not the case [17]. This in itself indicates that there was a lack of communication among the faculty members.

Institutional Support Benchmarks: This category received the lowest rating from the instructors. They felt that the most problematic area with regard to these benchmarks was the provision of electronic security measures to ensure the integrity and validity of information. One of the teachers was concerned about the lack of security measures:

Well, in terms of security, I do not think we have any.

He claimed that the chat room was not well protected. It was also observed that, in addition to the reliability of the technology delivery system, the Centre had difficulties in meeting the benchmark for support for building and maintaining the online education infrastructure. Moreover, the Centre seemed to have problems in providing measures to ensure quality standards. An instructor commented:

We do not really have a body to look into this online learning concept… We need financial resources to upgrade but there is no approval from the upper management.

RESULTS AND DISCUSSION

This study revealed that the instructors perceived that the Centre had difficulties in meeting some of the benchmarks, particularly the evaluation and assessment, faculty support, institutional support and student support benchmarks. This finding is similar to that of [20], who found that the quality standards for faculty support, evaluation and assessment and course development at a comprehensive university in Northwest Wisconsin were not met. In a study conducted in Hong Kong [22], the institutions did not meet the benchmarks for course development, student support, faculty support and evaluation and assessment. Different findings, however, were reported at other institutions: [21], for example, found that most of the state universities in Florida met the institutionally-controlled benchmarks: institutional support, student support, faculty support and evaluation and assessment.

One important contribution of this study to the Centre is the identification of issues that needed to be addressed in order to improve delivery of the online course. The online mode of teaching was temporarily suspended and re-introduced in Semester II, 2008/9, with improvements in the way it was conducted. The findings of this study were used by the management team to improve the online mode of teaching. Steps were taken to review the course to make it suitable for the blended mode of teaching. Periodical review of the courses offered was made one of the quality objectives of the Centre. The development of students' ICT skills was also included as one of the learning outcomes of the course.

Members of staff were required to attend workshops on matters relating to technology integration into the curriculum. Measures were taken by the management to ensure that all staff attended the programmes: staff to be trained were identified before each workshop, and a suitable time was chosen to ensure that the workshop did not clash with their other duties. Staff were taught how to upload teaching materials to the learning management system (LMS), include relevant links, make announcements, organise forums, create quizzes and use the learning tracks. Research grants were also provided for research on the effectiveness of the innovation, and technical and financial support was provided by the management to improve delivery.

Decisions at university level also gave an indication that the university was taking this innovation seriously. Beginning Semester I of the 2009/10 session, one of the items in the student rating of teaching effectiveness survey was whether the teacher used the LMS to enhance students' learning. This survey is conducted every semester and is often used to determine promotion, reappointment and merit pay for academics. Another action taken at university level, the provision of a computer literacy course to new students, also helped to overcome some of the obstacles identified by this study.

In short, the instructors' responses to each benchmark helped the Centre identify which problems were to be addressed first in its effort to improve the language services offered to its students. In the year 2010, the Centre was awarded the Most Active Faculty Award in ICT Integration in the Classroom by the University. Its staff also won the Most Innovative and the Most Active User Awards from the University. The rigorous workshops conducted for the staff made it possible for the Centre to extend the online mode of delivery to all the other language courses it offers. In 2011, a total of 11 articles on the use of technology in language teaching were written by the staff and published by the University. By 2012, the blended mode was fully implemented, and the development of e-content is now a continuous effort at the Centre.

CONCLUSION

The study highlighted the need to evaluate an online programme to inform the institution of the strengths and weaknesses of its online courses. It serves to show whether the programme has achieved certain acceptable standards. The findings can be used to guide the institution in improving its online programmes. The study points to the need to give greater thought to factors other than pedagogical aspects when a decision to adopt the online approach is made.
The study also highlights the importance of management commitment in offering quality programmes. Success comes with planning, and time is needed before the success of an implementation can be seen.

ACKNOWLEDGEMENT

We would like to thank all the instructors who took part in the study and the International Islamic University Malaysia for funding this project.

REFERENCES

1. Warschauer, Mark, 1998. Interaction, negotiation and computer-mediated learning. In Educational Technology in Language Learning: Theoretical reflection and practical applications, Eds. Darleguy, V., A. Ding and M. Svensson. Lyon, France: National Institute of Applied Science, Centre of Language Resources, pp: 125-136.
2. Sotillo, S.M., 2002. Discourse functions and syntactic complexity in synchronous and asynchronous communication. Language Learning and Technology, 4(1): 82-119.
3. Toyoda, E. and R. Harrison, 2000. Categorization of text chat communication between learners and native speakers of Japanese. Language Learning and Technology, 6(1): 82-99.
4. Ho, Wai-Chung, 2004. Use of information technology and music learning in the search for quality education. British Journal of Educational Technology, 35(1): 57-87.
5. Abdul Ghani, Rozina, 2005. Communication patterns of a computer-mediated classroom discussion: a case study of intermediate ESL/EFL students in IIUM. PhD Thesis, International Islamic University Malaysia, Kuala Lumpur.
6. Eynon, Rebecca, 2008. The use of the world wide web in learning and teaching in higher education: reality and rhetoric. Innovations in Education and Teaching International, 45(1): 15-23.
7. Dawson, Kara, 2008. The role of teacher inquiry in helping prospective teachers untangle the complexities of technology use in classrooms. Journal of Computing in Teacher Education (JCTE), 24(1): 5-12.
8. Hiltz, S.R., 1994. The Virtual Classroom: Learning Without Limits via Computer Networks. Norwood, NJ: Ablex, pp: 384.
9. The Institute for Higher Education Policy, 2000. Quality on the Line: Benchmarks for Success in Internet-Based Distance Education. Washington: The Institute for Higher Education Policy, pp: 37.
10. Sellani, R.J. and W. Harrington, 2002. Addressing administrative/faculty conflict in an academic online environment. The Internet and Higher Education, 5(2): 131-145.
11. King, F.B., 2002. A virtual student: not an ordinary Joe. The Internet and Higher Education, 5(2): 157-166.
12. McGorry, S.Y., 2002. Online, but on target? Internet-based MBA courses: A case study. The Internet and Higher Education, 5(2): 167-175.
13. Macdonald, Ranald, 2006. The use of evaluation to improve practice in learning and teaching. Innovations in Education and Teaching International, 43(1): 3-13.
14. Palloff, Rena M. and Keith Pratt, 2001. Lessons from the Cyberspace Classroom: The Realities of Online Teaching. San Francisco: John Wiley and Sons, pp: 204.
15. Chen, Nian-Shing, Kan-Ming Lin and Kinshuk, 2008. Analysing users' satisfaction with e-learning using a negative critical incidents approach. Innovations in Education and Teaching International, 45(2): 115-126.
16. Hardré, Patricia L. and David W. Sullivan, 2008. Teacher perceptions and individual differences: How they influence rural teachers' motivating strategies. Teaching and Teacher Education: An International Journal of Research and Studies, 24(8): 2059-2075.
17. Nuraihan Mat Daud and Ainol Marziah Zubairi, 2006. Online and offline writing course. In Online Teaching and Learning, Eds. Kabilan, M.K., N. Abdul Razak and M.A. Embi. Pulau Pinang: Penerbit Universiti Sains Malaysia, pp: 203.
18. Phipps, R.A. and J.P. Merisotis, 2000. Quality on the Line: Benchmarks for Success in Internet-Based Distance Education. Washington, D.C.: Institute for Higher Education Policy. Available online at http://www.ihep.org/Publications/publicationsdetail.cfm?id=69 (accessed 17 October 2011).
19. Scanlon, Craig L., 2003. Reliability and validity of a student scale for assessing the quality of Internet-based distance learning. Online Journal of Distance Learning Administration, VI(III). Available online at: http://www.westga.edu/~distance/ojdla/fall63/scanlan63.html (accessed 17 October 2011).
20. Hensrud, F.C., 2001. Quality measures in online distance education at a small comprehensive university. PhD Thesis, University of Minnesota.
21. Sparrow, J.L.V., 2002. Online education at nine state universities in Florida. EdD Thesis, University of Central Florida.
22. Yeung, D., 2003. Toward an effective quality assurance model of web-based learning: the perspective of academic staff. Turkish Online Journal of Distance Education, 4(1): 1-13. Available online at: tojde.anadolu.edu.tr/tojde9/articles/web_based_learning.htm (accessed 12 August 2008).
23. Stake, R.E., 1994. Identification of the case. In Handbook of Qualitative Research, Eds. Denzin, N.K. and Y.S. Lincoln. Thousand Oaks, CA: Sage, pp: 236-247.
24. Yin, R.K., 2003. Case Study Research: Design and Methods, 3rd Edn. Thousand Oaks, CA: Sage, pp: 217.
25. Lyons, Howard, 2009. Case study research methodology for publishing developments in ICT-facilitated learning in higher education – a prescriptive approach. Innovations in Education and Teaching International, 46(1): 27-39.
26. Walker, Rob, 1985. Doing Research – A Handbook for Teachers. Cambridge: Methuen, pp: 212.
27. Mason, J., 2002. Qualitative Researching. London: Sage Publications Ltd, pp: 223.
28. Bond, T.G. and C.M. Fox, 2001. Applying the Rasch Model: Fundamental Measurement in the Human Sciences. London: Lawrence Erlbaum Associates.