
Editorial: Volume 29 Issue 6

Australasian Journal of Educational Technology, 2013, 29(6)

We are pleased to present a new issue of AJET, and in this editorial we would like to discuss some of the challenges involved in undertaking and reporting on experimental research in education, and in educational technology specifically. The first challenge relates to the need to find the right balance between internal and external validity in the research design, while the second relates to the need for clarity about the likely causes of learning effects: technology or learning design.

A typical experimental research design in an educational technology context is one where two or more learning conditions are compared with respect to their effect on learning outcomes or on an aspect of the learning process. For example, a study might be designed to compare the learning outcomes of a group of students who undertook a synchronous group discussion mediated by a web conferencing tool with those who undertook the same activity face-to-face, without technology mediation. Such a study might, for example, be undertaken to explore a more general theoretical or conceptual idea about the affordances and constraints of online synchronous communication software. A key design goal in such a study is to control variables in such a way as to minimise the possibility of confounding variables, which could lead to misinterpretation of results. In this example, the characteristics of the students in terms of their prior experience, aptitude and motivation, the time of day at which the activity occurred, and the teaching support provided during the activity could all be confounding variables if not controlled. One approach to minimising confounding variables in such a study is to carry it out under ‘laboratory’ conditions, where volunteer students undertake the learning activity at a set time in a set place and are randomly allocated to one of the two learning conditions. A criticism by Reeves (1995) and others is that, in an attempt to control the potentially confounding variables, many such studies explore the use of a particular technology outside of its intended context or with learners who have no real reason for engaging in the learning process. Reeves (1995) proposes instead that research should be carried out in a natural setting, using participants who have a reason to achieve the intended learning outcomes.

Ross and Morrison (1989) differentiate between ‘developmental’ research, which “is oriented toward improving technology as an instructional tool”, and ‘basic’ research, which is “oriented towards furthering our understanding of how these applications affect learning and motivation” (p. 20). Basic research, which normally uses experimental methods, attempts to maintain high internal validity by controlling variables and eliminating extraneous factors. Developmental research, however, which is normally carried out in an authentic context, can often have greater external validity, because its results can more readily be related to real-life applications. They argue (and we agree) that there is a need for both types of research. It is important, however, in writing up the results of educational technology research, to make explicit the goals of the research and to include within the discussion recognition of any limitations of the approach taken, whether in terms of the internal validity or robustness of the research design, or the external validity or wider applicability of the findings.
The second issue we would like to raise with regard to experimental research in educational technology is the question of the likely cause of any expected improvement in learning outcomes under a particular technology-supported learning condition. The above example, comparing collaboration mediated by technology with face-to-face collaboration, illustrates a classic type of research design in educational technology research, in which learning through or supported by technology is compared with learning without technology support. These types of studies have been the subject of criticism from a number of researchers, most notably Clark (1983, 1994). Clark (1983) uses evidence from a meta-analysis of educational media research to argue that there is no connection between media (or technology) and learning, and that on balance the numerous studies that have been undertaken suggest there is no significant difference between learning using different learning media. Most importantly, he argues that studies that report a difference have in fact confounded the media with the teaching method. That is, they have compared the teaching of a concept using a particular type of media (or technology) and a particular learning design with an alternative design using a different type of media. He argues that the results indicate that it is the difference in learning design, rather than the media, that is the main factor in any difference found. He does, however, acknowledge that by focussing on certain attributes of technology in conjunction with certain learning design characteristics, consistent learning effects can be found. Importantly, he argues that although such studies demonstrate that certain technology-supported learning activities result in learning benefits, they do not demonstrate that these activities and technologies are uniquely required to achieve the learning benefit; rather, the learning benefit could always be achieved using a different technology or without technology.

Kozma (1994), in criticising the stance taken by Clark, argues that identifying a technology or learning design that optimally or uniquely leads to a learning outcome is not important. Rather, he argues that the goal of the researcher should be to demonstrate that in certain circumstances, for certain learners, certain learning outcomes can be achieved more or less effectively using a particular learning design and technology than using an alternative, and to explore why this is the case. The clear documentation of this educational situation will then allow educators to make a judgement about whether similar resources will be appropriate in their own learning situation. Kozma (1994) also argues that it is unlikely that a certain learning design or technology attribute will ever be found to lead to the achievement of a particular learning outcome for all learners. Differences in learners’ aptitudes, motivations and prior experience ensure that there will always be differences in the learning outcomes obtained from any learning experience. Consequently, if a particular technology attribute, in the context of a particular learning design, can be found to result in statistically significant learning improvements, then that provides a strong argument for its use by educators, even though we can never guarantee that learning will occur.
We concur with Clark’s (1983) argument that aspects of media (or technology) cannot be viewed in isolation from the learning design within which they are applied. We see the role of technology as ‘affording’ particular learning activities which can ultimately result in learning, rather than the technology itself directly influencing learning (see Dalgarno & Lee, 2010). However, we also accept Kozma’s (1994) argument that, because the learning experiences required to achieve particular learning outcomes will vary for different learners in different learning situations, researchers need to move away from questions about whether technology influences learning, towards questions about the ways in which we can use the capabilities of technology to influence learning for particular students, within particular learning designs and learning contexts. There is certainly a place for controlled experimental studies which are theoretically informed and demonstrate in a robust way that particular technology-supported learning activities have learning advantages over alternative activities. However, there is also a place for authentic studies, which demonstrate that a certain technology-supported activity has significant learning benefits for a particular outcome in a particular learning situation. Readers will look for studies that demonstrate the successful application of technology-supported activities for the achievement of learning outcomes similar to the ones they desire, and in learning situations similar to their own, even if the learning benefits have not been demonstrated through a controlled experiment. If certain techniques can be found to be advantageous in one situation, then there is hope that they can be found to be advantageous in another.

This issue of AJET starts with a paper on the very topical subject of why students choose to use, or not to use, both online and face-to-face learning materials and sessions when they attend university. The second paper addresses a similar issue from a different angle: Parkes, Reading and Stein present an interesting study of e-learning competencies and where differences may occur between what is needed at university in theory and what is needed in practice. The following paper, by Staines and Lauchs, presents an investigation of another hot topic: how Facebook can be used with students to engage them in university learning. The next two papers both focus on 3D environments. Garrett and McMahon consider skills transfer, a perennial consideration for educators, after learning in a 3D simulation environment, while Yilmaz, Topu, Goktas and Coban present a study of factors that affect motivation and social presence in 3D virtual worlds. Pekerti presents an experimental study of the effects of augmenting instructional text with pictorial information. The next three studies all touch in some way on teacher education: the first, by Walta and Nicholas, considers how mobile and associated technologies can support the development of a community of inquiry; Koh presents a rubric, based on the well-known TPACK framework, for assessing teachers’ lesson plans; and Ma and O’Toole present a study of various stakeholders’ attitudes towards video-enhanced problem-based learning scenarios. The final paper in this issue presents a study of the ways in which knowledge management abilities can be improved through the use of personal blog portfolios.
In this, the final issue of AJET for 2013, we would like to extend particular thanks to AJET’s Associate Editors, who have worked tirelessly throughout the year with manuscripts, authors and reviewers. We would also like to acknowledge the extensive group of reviewers for AJET, who graciously provide both their time and expertise to the journal, and to our field more generally. In short, without the generous volunteer work of these people we would never be able to produce AJET.

Barney Dalgarno, Sue Bennett and Gregor Kennedy
Lead Editors, Australasian Journal of Educational Technology

References

Clark, R. E. (1983). Reconsidering research on learning from media. Review of Educational Research, 53(4), 445-459.

Clark, R. E. (1994). Media will never influence learning. Educational Technology Research and Development, 42(2), 21-29.

Dalgarno, B., & Lee, M. J. W. (2010). What are the learning affordances of 3D virtual environments? British Journal of Educational Technology, 41(1), 10-32.

Kozma, R. B. (1994). Will media influence learning? Reframing the debate. Educational Technology Research and Development, 42(2), 7-19.

Reeves, T. C. (1995). Questioning the questions of instructional technology research. Invited Peter Dean Lecture presented for the Division of Learning and Performance Environments (DLPE) at the 1995 national convention of the Association for Educational Communications and Technology (AECT), Anaheim, CA, USA, February 8-12, 1995. Retrieved December 17, 2003, from http://www.gsu.edu/~wwwitr/docs/dean/index.html

Ross, S. M., & Morrison, G. R. (1989). In search of a happy medium in instructional technology research: Issues concerning external validity, media replication, and learner control. Educational Technology Research and Development, 37(1), 19-33.