Viewpoint
Abstract
Much has been written about insufficient user involvement in the design of eHealth applications, the lack of evidence demonstrating impact, and the difficulties these create for adoption. Part of the problem lies in the differing languages, cultures, motives, and operational constraints of the producers and evaluators of eHealth systems and services. This paper reflects on the benefits of and barriers to interdisciplinary collaboration in eHealth, focusing particularly on the relationship between software developers and health services researchers. It argues that the common pattern of silo or parallel working may be ameliorated by developing mutual awareness of and respect for each other's methods, epistemologies, and contextual drivers and by recognizing and harnessing potential synergies. Similarities and differences between the models and techniques used in the two communities are highlighted in order to illustrate the potential for integrated approaches and the strengths of unique paradigms. By sharing information about our research approaches and actively collaborating in the process of design and evaluation, the aim of achieving technologies that are truly user-informed, fit for context, high quality, and of demonstrated value is more likely to be realized. This may involve embracing new ways of working jointly that are unfamiliar to the stakeholders involved and that challenge disciplinary conventions. It also has policy implications for agencies commissioning research and development in this area.
doi:10.2196/jmir.9.2.e15
Aims and Origin of This Article
This paper presents a personal viewpoint based on a nonsystematic review of the literature and on the experience of observing and participating in the design, evaluation, and analysis of health informatics interventions. It originated as a briefing document, intended to foster shared understanding and to help plan a program of formative and evaluative work, for members of a multidisciplinary team of clinicians, researchers, and software designers. The paper draws on existing literature advocating interdisciplinary methods in medical informatics but focuses on generating a dialogue between software developers and researchers working in this area.
The article begins by considering the increasing heterogeneity of the field, the need for multiple research perspectives, and the implications of scientific subcultures; it discusses the importance of research for ensuring that new eHealth technologies are adopted and effective; it highlights common concepts and methods in software design and health services research; and it then considers the benefits, challenges, and facilitators to interdisciplinary collaboration.
For the purposes of this paper, the term eHealth is used broadly as a synonym for health informatics or medical informatics, and the term health services research is used to encompass health technology assessment and health systems research.
Out of the Basement: Changing Stakeholders in Medical Informatics
Not so long ago, medical informatics was largely the preserve of computing professionals and managers due to its focus on aspects of information technology "hidden" beneath the surface of health care organizations, such as operating systems, architectures, and databases. While epidemiologists were quick to harness the potential of electronic patient records for research and disease surveillance, it was the growth in practice-based computing during the 1990s that increased awareness of information technologies among clinical stakeholders and saw their gradual integration into the processes of care. Among the general public, awareness of eHealth, as the field is becoming known [ ], has burgeoned since the turn of the 21st century, paralleling access to the Internet and the proliferation of Web-based health and lifestyle resources. This is reflected at the policy level, where governments have become increasingly interested in the potential of information and communications technologies to improve the organization and delivery of health services and to support patient empowerment for self-care [ ]. In the United Kingdom, for example, the National Health Service (NHS) National Programme for Information Technology is gradually bringing the health service into people's homes via initiatives such as NHSDirectOnline and NHSHealthSpace, which offer not only information but also opportunities for electronic consulting and personal health care organization (eg, records, appointments) [ ].

Reflecting this societal trend, academic involvement in medical informatics has become ever more interdisciplinary, with growing participation by the social, economic, and legal sciences (eg, around managing change, ethics, and cost-effectiveness) and the emergence of translational fields such as bioinformatics that promise to challenge existing medical models. At the same time, the boundaries between scientific, policy, and commercial areas of research and development are becoming grayer, as academia and industry respond to government funding opportunities and the policy community responds to emerging evidence and new technologies.
Growing use of the term health informatics, in preference to medical informatics, also reflects a shift toward inclusiveness.
The accompanying table represents this shifting landscape in terms of the stakeholders, technical focus, disciplinary drivers, and objectives of medical informatics practice and research. It is not intended as a comprehensive chronological account of the field's evolution, although the domains reflect observations from previous analyses of the literature [ ]. A key change has been the increasing breadth and complexity of the field, not only in terms of new technologies but also in terms of the perspectives being brought to bear in planning, understanding, and evaluating these technologies.

Scientific Subcultures and Their Implications for Interdisciplinary Working
The increasing heterogeneity of the eHealth field raises challenges for interdisciplinary working and the translation of research to policy and practice. These challenges have to do with the management of nonshared concepts and languages and the values ascribed to different forms of scientific and technological endeavor, within what may be termed the knowledge economy of eHealth.
Despite popular stereotypes about "the" scientific paradigm, different disciplines have evolved uniquely over time, each with its own theoretical or applied stance, criteria for appraising quality, and ways of working. Although we agree on basic principles of objectivity and methodological adherence, the way in which we see the world and approach new research problems is affected by a host of contextual and historical factors that are specific to individual disciplines, making truly interdisciplinary working difficult to achieve [ ].

Unpicking all these factors is beyond the scope of a single paper; however, a useful guide is to be found in the comparison of two important areas of activity that are central to modern eHealth—namely, software design and health services research (HSR). The reason for concentrating on these is that managing their relationship is fundamental to ensuring that eHealth innovations achieve their potential to improve the quality, efficiency, and safety of patient care. This paper focuses on the software development process, with particular reference to user-centered design methods. Its origins lie in the author's growing appreciation of the nature, value, and limitations of the evaluation methods used in the software engineering community and of the lack of awareness of these among health services researchers evaluating eHealth resources. While several high-profile documents have explored potential synergies between HSR and the broad field of medical informatics, and these have undoubtedly contributed to quality improvements in some areas, their potential impact across the wider eHealth landscape is far from being realized [ - ]. In practice, many eHealth software developments, and the HSR projects associated with them, take place in the context of short contractual episodes, where neither developers nor health services researchers have the time or incentive to engage in cross-disciplinary learning. As a result, developers and researchers of eHealth regularly work in parallel universes, each regarding the other's domain of activity as separate and neglecting the potential for useful interaction.

The Need for More Research in Development
Although developing technical solutions remains central to medical informatics, recent years have seen a growing emphasis on identifying and resolving barriers to implementation. Particular attention has been devoted to understanding so-called people and organizational factors, such as stakeholder resistance to change and the appropriate integration of new technologies into work patterns [ , ]. Two key themes have emerged from this discourse, which have direct relevance for the potential effectiveness of eHealth innovations:

- The clinical appropriateness and usability of eHealth technologies have been compromised by insufficient end-user engagement in the design process.
- The effectiveness of emerging eHealth technologies in improving the processes or outcomes of health care is unproven.
To consider the first theme, while there is general consensus among software designers on the importance of engaging users in software design and testing, commercial drivers and a historical focus on product development have meant that this has often been inadequate in the past, resulting in top-down developments whose problems may emerge only after rollout. The health care sector has been particularly prone to such problems in recent years, and there are numerous examples of potentially useful systems that have failed or been abandoned due to unanticipated technical, human, or organizational issues [ - ]. Design flaws can affect the ease of use and reliability of systems and may even be dangerous, creating ill-feeling and reducing clinicians' willingness to use emerging systems, software, and hardware in practice [ , ]. Even seemingly minor problems with usability or conceptual fit can destabilize the implementation of otherwise highly engineered and valid technologies. The discussion that follows illustrates how developers are rising to this challenge.

To consider the second theme, that eHealth technologies are unproven, while research in this area is burgeoning, there remains little reliable evidence to demonstrate the measurable impact, risks, or cost-effectiveness of eHealth innovations, except in a modest number of application areas [ - ]. This creates uncertainty and hence a reluctance on the part of clinicians and policy makers to implement such technologies. Where rigorous research designs have been employed, this has often been in the context of academic studies in which the future sustainability or generalizability of the products being evaluated cannot be assured. Indeed, a recent systematic review of health information technologies demonstrated that of 257 published evaluations, a staggering one quarter emanated from four academic institutions that implemented internally developed systems, while only nine reported on commercially developed systems [ ].

Tackling these problems requires the application of joint thinking between practitioners in the two fields so as to ensure high-quality, user-informed products of demonstrated effectiveness. However, cultural divides between the traditional software developer and health researcher communities have inhibited this process.
An Evolutionary Snapshot
Software development represents an application of computer science, a field rooted in engineering and mathematics. Although it has drawn on philosophy (eg, semantics, logic) and social science (eg, human-computer interface research, social technology studies), its historical focus has been on building machines and the software they require, albeit with ever more complex digital innovations such as the Internet and intelligent agents. This focus on product development has led to a close alliance with the business and service sectors, and, although basic science is highly valued, there has been an understandable emphasis on applied research and development within university curricula. Within the workforce itself, economic drivers prioritize the production of resources that meet key functionality criteria and client-defined requirements within commercially viable time frames. Evaluation often takes a lower priority, and rapid application development using small convenience samples of users is common [ ].

HSR is an interdisciplinary field concerned with the scientific study of the structure, processes, and effects of health services, technologies, and policies. It harnesses traditionally medical research approaches from epidemiology and clinical science, alongside the social and economic sciences, utilizing a mixture of quantitative and qualitative methodologies appropriate to the specific problem under investigation. It is closely allied to the evidence-based medicine movement, which holds that clinical practice should be driven by evidence of what interventions work best and for whom. As well as measuring impacts, such research is also about enquiry. For example, it may explore the needs of particular stakeholder groups or demographic patterns of health and health care utilization in order to identify the place of a potential new intervention, or examine the reasons why an intervention is more easily adopted or more effective in different contexts. A defining characteristic of this field is the strong emphasis on methodological rigor. From randomized controlled trials to qualitative case studies, the focus is on detailed planning and recording of procedures and on transparent, theoretically informed participant sampling and data analysis. This area is less influenced by commercial drivers, although there is a strong emphasis on research that addresses health service policy needs.
Thus, software design is mainly concerned with developing interventions and HSR with evaluating them. On closer inspection, however, the reality is not so clear-cut. In fact, much of HSR is geared toward informing the design of new interventions, including eHealth technologies, while rigorous software design encompasses evaluation processes that would be very familiar to health services researchers.
However, within these two communities there has been a mutual lack of awareness of each other's theoretical stance, motives, and modus operandi, exacerbated by differences in language, epistemologies, and the representation of concepts. This reflects the origins of the two disciplines and the funding environment, which place different expectations on eHealth design and research projects.
Compatibilities in Models and Methods of Software Development and Health Services Research
While various academic approaches have been applied to the study of software design and diffusion (eg, in the management literature), in the context of this paper the compatibilities between process models of software development and HSR are particularly relevant. These compatibilities illustrate the importance of exploration and evaluation for informing developments and quality improvements in both domains, the value of user engagement in this process, and the natural progression to assessment of the effects.
Lifecycle Models as an Exemplar from Software Engineering
Within software engineering, numerous models have been proposed to describe the process by which products should be designed and tested to ensure they are fit for purpose in the intended setting [ ]. Particular parallels with HSR can be found in a category known as lifecycle models, the most common of which are the Waterfall, Spiral, and Star models, referring to the sequence and pattern of substages involved [ - ]. Of these, the Spiral and Star models are frequently advocated due to their ability to cope with iteration and complexity, although in practice the more sequential Waterfall method is often used [ ]. All of these illustrate the codependence of development and evaluation, while the Spiral and Star models emphasize iterative design. Although they have been slow to evolve, software design and development methodologies now almost universally include user-developer interaction for requirements determination, testing, and acceptance activities; indeed, this is a central feature of the Star Model. The accompanying figure illustrates the model of user-centered design of the International Organization for Standardization (ISO) that is increasingly adopted when developing interactive systems [ ]. The critical feature of this and other approaches to user-centered design (or usability engineering) is the emphasis on determining users' needs of the system, understanding the context in which the system will be delivered, and designing products from the ground up rather than based on developers' preconceptions or rigid procurement briefs. Such methods are increasingly being advocated, and their successful use is being reported in the medical research literature [ , ]. In some development settings, the user has taken a further step toward the center of the design process; for example, a paradigm employed in the defense sector uses software to directly involve users in developing their own problem-solving intelligent agents [ ].
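To make the shape of these iterative models concrete, the sketch below expresses an ISO 13407-style cycle as a simple loop: understand the context of use, produce a design solution, and evaluate it against users' requirements until they are satisfied. It is a minimal illustration only; the function names, the Requirement structure, and the iteration cap are invented for the example rather than drawn from the standard.

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    """One user need elicited during context-of-use analysis (hypothetical structure)."""
    description: str
    satisfied: bool = False

def user_centered_design(elicit, design, evaluate, max_iterations=10):
    """Iterate ISO-style design activities until users' requirements are met.

    The callbacks stand in for real project activities:
      elicit()              -- understand and specify the context of use
      design(reqs)          -- produce a design solution (prototype)
      evaluate(proto, reqs) -- test with users, marking requirements satisfied
    """
    requirements = elicit()
    for _ in range(max_iterations):
        prototype = design(requirements)
        requirements = evaluate(prototype, requirements)
        if all(r.satisfied for r in requirements):
            return prototype  # the design now meets users' stated needs
    raise RuntimeError("Requirements still unmet; revisit the context of use")
```

The point of the loop is the one the lifecycle models share: evaluation is not a terminal stage but the condition that drives each further round of design.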
Phased and Iterative Models of Health Services Research

There is little awareness of software lifecycle models among typical health services researchers, yet these are highly compatible with phased approaches to drug development and the evaluation of complex interventions in health care, which emphasize the need for exploratory, explanatory, and pragmatic phases, as illustrated in the accompanying figures [ ]. Particular parallels can be seen in the stages of concept formation, needs assessment, and evaluation in the intended setting.

Similarly, models of user-centered design bear a close resemblance to iterative HSR models such as Action Research [ ] and Continuous Quality Improvement [ ], examples of which are given in the accompanying figures. These also conceive of a cycle or series of cycles through which users' needs are assessed, interventions developed, problems identified, and changes made to the intervention or the management of its delivery. Indeed, these models are advocated within both the health care and software development arenas [ , ].
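The family resemblance between these cycles and the software models above can be stated in a few lines of code. The sketch below is a hypothetical abstraction, not taken from any cited source: the same loop reads as a Plan-Do-Study-Act quality improvement cycle or as a prototype-test-refine usability cycle, with only the concrete callbacks differing.

```python
def improvement_cycle(plan, do, study, act, cycles=3):
    """One abstract iterative cycle shared by both traditions.

    Read it as Plan-Do-Study-Act (Continuous Quality Improvement) or as
    draft -> user test -> analyze -> redesign (user-centered design).
    """
    state = plan()                      # define the change / draft the prototype
    for _ in range(cycles):
        observations = do(state)        # deliver the service / run the usability session
        findings = study(observations)  # analyze outcomes / identify usability problems
        state = act(state, findings)    # adjust the intervention / refine the design
    return state
```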
Overlapping Research Techniques in Health Services Research and Software Design

As well as the parallels between overarching process models, there is considerable overlap in the precise research techniques used in both software development and HSR. While the terminology varies, user-centered design methods such as requirements gathering, observation of task walk-throughs, think-aloud protocols, and group-based feedback are similar to HSR methods such as needs assessment, participant observation, semistructured interviews, and focus groups; indeed, these terms are regularly used to describe activities in information technology labs, although the way in which they are applied may differ. Examples of techniques used in both fields are provided in the table below to illustrate areas of overlap and divergence. Differences in the portfolios of methods reflect the somewhat dissimilar (although overlapping) goals of evaluation within software engineering and HSR: the former focusing on optimizing product design and fitness for purpose, and the latter on exploring new phenomena, generating hypotheses, demonstrating impact, or informing policy. Most noteworthy is the absence of rigorous impact assessment (controlled trials) within the scope of software engineering, in contrast to its high status within HSR. An important differentiating feature not reflected in the table is the heavy emphasis on theoretical sampling and meticulous, time-intensive approaches to qualitative data analysis within HSR. This contrasts with the more rapid identification of needs and responses common in development projects and the often unstructured and iterative nature of the design process. Nevertheless, software engineering can also involve quantitative usability techniques that draw on cognitive psychology. When these are employed, it is often in a highly systematic manner, involving multiple measurements and theoretically based analysis, although the objectives are best met with in-depth studies of small numbers of users. These methods have great value for the understanding of human errors and information processing, and, although there is little knowledge of them within the HSR community, they are increasingly being reported in medically indexed journals [ - ].

| Software and Usability Engineering | Health Services Research |
| --- | --- |
| Needs assessment (conceptual, formative): requirements gathering; assessment of prototypes/simulations; user interviews | Needs assessment (conceptual, formative): interviews; document analysis; telephone or postal surveys; focus groups; observation; discrete choice simulations |
| Assessment (primarily formative): heuristic evaluation; cognitive walk-throughs; formal usability inspection; pluralistic walk-throughs; feature inspection; consistency inspection; standards inspection; guideline checklists; thinking aloud protocol; prototyping; co-discovery methods; question-asking protocol; performance measurement; gaze tracking; ethnographic study/field observations; surveys; questionnaires; journaled sessions; self-reporting logs; remote usage observation; screen snapshots; blind voting; card sorting; archetypal research; action research | Assessment (formative or summative): experimental and quasi-experimental designs (eg, randomized controlled trial; controlled before-and-after study; interrupted time series; case-control study; cost-benefit analysis); qualitative outcomes assessment: rigorous qualitative data analysis using sociological methods (eg, ethnographic studies); observation/exploration: remote (eg, epidemiological, records-based) or direct (eg, participant or nonparticipant); participative evaluation (eg, action research/continuous quality improvement) |

Note: This is a nonexhaustive list drawing on several taxonomies, designed merely to illustrate some of the common and distinctive techniques used in both disciplines.
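One entry in the left-hand column, performance measurement, lends itself to a worked example. The sketch below uses invented session records to compute the kind of simple quantitative indicators (task completion rate, time on task, error counts) a usability engineer might report; it follows no specific cited protocol, and the task name and figures are hypothetical.

```python
from statistics import mean

# Hypothetical usability-session records: one entry per user-task attempt.
sessions = [
    {"user": "u1", "task": "order_repeat_prescription", "completed": True,  "seconds": 74,  "errors": 1},
    {"user": "u2", "task": "order_repeat_prescription", "completed": True,  "seconds": 51,  "errors": 0},
    {"user": "u3", "task": "order_repeat_prescription", "completed": False, "seconds": 180, "errors": 4},
]

completion_rate = sum(s["completed"] for s in sessions) / len(sessions)
mean_time_success = mean(s["seconds"] for s in sessions if s["completed"])
total_errors = sum(s["errors"] for s in sessions)

print(f"Task completion rate: {completion_rate:.0%}")                         # 67%
print(f"Mean time on task (successful attempts): {mean_time_success:.0f} s")  # 62 s
print(f"Errors observed across attempts: {total_errors}")                     # 5
```

With only three users the numbers are obviously illustrative; the contrast with HSR lies in sample size and sampling logic, not in the arithmetic.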
Integrated Models in Medical Informatics
Within the interdisciplinary field of medical informatics, hybrid models have appeared that draw on both traditions, an example of which is offered in the accompanying figure. Importantly, there is a growing acceptance that evaluation should ideally be approached as a longitudinal process occurring through a series of overlapping and iterative stages relevant to the maturity of the technology in its lifecycle, from initial conception to rollout. Various authors have attempted to represent these stages and to provide taxonomies of research methods appropriate to each [ ]; however, three broad phases of activity may be discerned. The first involves drafting new interventions based on an assessment of stakeholder needs and theory, and testing these with content experts and users to ensure they fulfill these needs and are technically robust (concept and prototype evaluation). The second involves assessing the impact of the innovations on the processes and outcomes of care in selected target settings, including hard measures such as efficiency, clinical status, cost, and error rates; softer measures of attitudes and satisfaction; and qualitative outcomes (outcomes evaluation). The third involves evaluating systems after rollout (pragmatic evaluation), for example, to assess variations in uptake, reported errors, technical problems, stakeholder satisfaction, or longer-term impacts on process or outcome indicators. At each of these stages, the model should allow the results of the research to inform continuous quality improvements. In practice, key stages are often neglected, reducing both the quality and adoption of new eHealth products.
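As a compact restatement of the three phases, the sketch below pairs each with the question it asks and some candidate methods drawn from the discussion above. The grouping paraphrases this section; it is not a formal taxonomy from the cited literature.

```python
# Illustrative mapping of evaluation phase -> guiding question and candidate methods,
# paraphrasing the three broad phases described in the text.
EVALUATION_PHASES = {
    "concept and prototype evaluation": {
        "asks": "Does the draft intervention meet stakeholder needs and work technically?",
        "methods": ["needs assessment", "expert review", "usability testing", "prototyping"],
    },
    "outcomes evaluation": {
        "asks": "Does the innovation improve the processes or outcomes of care?",
        "methods": ["randomized controlled trial", "before-and-after study",
                    "cost analysis", "qualitative outcomes assessment"],
    },
    "pragmatic evaluation": {
        "asks": "How does the system perform after rollout, and where does it vary?",
        "methods": ["uptake monitoring", "error and incident reporting",
                    "stakeholder satisfaction surveys", "long-term indicator tracking"],
    },
}

for phase, detail in EVALUATION_PHASES.items():
    print(f"{phase}: {detail['asks']}")
```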
Benefits, Challenges, and Facilitators of Interdisciplinary Collaboration

Potential Benefits
The types of conceptual and methodological commonalities outlined above demonstrate that collaborative working between health services researchers and software developers is both possible and appropriate. In addition to producing better and safer interventions [ , , ], effective collaboration will strengthen the quality of evaluations and enhance the evidence base in this area, thus facilitating policy and purchasing decisions. While involvement in formal research may seem like a hindrance to developers, particularly in the commercial environment, economic as well as intellectual benefits may accrue from demonstrating that systems are effective, cost-effective, and safe, as well as technically robust, accessible, and usable [ ]. Indeed, the value of so-called evidence-based business is increasingly recognized in the technology sector [ ]. At the same time, rigorous qualitative studies can demonstrate the acceptability and utility of new tools to users, as well as features of the setting or implementation methods that may influence their adoption. For health services researchers, the ability to enter the world of the developer presents valuable opportunities to influence the scoping, design, and evaluation processes used to develop electronic health care interventions that may then be subjected to clinical trials, thus ensuring conceptual fit and minimizing the risk of confounding by suboptimal functionality or usability [ ]. It also offers a new skill set that may help researchers to recognize cognitive barriers to technology adoption and thus aid the interpretation of descriptive or evaluative data. An added benefit for both groups is the increased ability to publish that comes from adopting systematic and replicable sampling and analytic methods in the course of user-centered design, thus facilitating dissemination to both audiences [ ].

Challenges
There is a need to move the current agenda away from a state of parallel working, which is common in multidisciplinary projects involving the computing and medical sciences, to one of truly interdisciplinary working. This requires an appreciation of each other's terminologies, goals, and methods and the sharing of experiential learning about the benefits and limitations of alternative approaches. It also calls for a new breed of transdisciplinary experts familiar with both skill sets and able to combine them in novel ways to achieve maximum value for eHealth research and development. Overcoming cultural and methodological divisions between disciplines represents a major challenge. There is a natural inclination to remain true to concepts and methods that may have taken many generations to evolve, and questions arise around how far to take joint approaches before compromising each discipline's ability to demonstrate its specific expertise [ ]. There is also a fundamental tension between the need to innovate, which may require conceptual leaps of faith and rapid development, and the pressure to adopt methodologically robust standards of scoping, sampling, and evaluation that may be time-consuming and of questionable value at the early stages of prototyping. This can create antagonism and defensiveness in both camps, thus inhibiting potential synergies. Successful interdisciplinarity therefore requires the establishment of trust and mutual respect in addition to methodological pluralism, and this represents a challenge, particularly where the opportunity to become embedded in the other's world is not available. Importantly, different approaches will be appropriate for addressing different objectives in different settings and at different stages of software maturity, and a challenge for project leaders and commissioners is to develop a deep enough understanding of multiple methods to be able to tailor them appropriately. For example, controlled trials may be ideal for studying the impact of eHealth systems on measures of clinical outcome or efficiency, but they are poorly suited to exploring social, contextual, or technical barriers to adoption and will have little to offer developers designing a new Web interface. Conversely, think-aloud methods may be extremely useful for assessing the usability of a decision support tool but say very little about its clinical validity or effectiveness [ - ]. The value attributed to different forms of evidence thus varies depending on the context in which it is used, although adherence to high standards of data collection, analysis, and reporting is a universal objective. It should also be recognized that academic incentives favoring controlled studies (eg, research funding, impact ratings) may create a conflict for health services researchers wishing to engage in applied and collaborative projects.

Facilitators
The move toward more holistic training in medical informatics advocated by bodies such as the American Medical Informatics Association represents one step toward achieving these goals [ , ], and there is evidence of a trend toward increasing pluralism in the objects of evaluation projects, which may signal a move toward greater interdisciplinarity [ ]. Pockets of transdisciplinary working are emerging as eHealth becomes a target of research, for example, within academic units of human-computer interaction and science and technology studies, while the field of information science has a long tradition of research exploring socioeconomic and organizational influences on technology development and adoption, from which eHealth researchers and developers have much to learn [ ]. Nevertheless, few individuals working on eHealth projects have received formal cross-disciplinary training, and many are doing so as part of a broader portfolio of projects (often on short-term contracts), restricting their motivation to invest in learning the methods and modus operandi of their disciplinary counterparts.

There is a need to influence potential funders, who have traditionally held different expectations for design and evaluation projects in terms of expected outputs (eg, new products vs new knowledge) and methodologies (eg, user-centered design vs studies of clinical impact) and who may underestimate the value of unfamiliar approaches. Importantly, it is essential for those commissioning new eHealth products to be aware of the value of high-quality evaluation during the development process and to allow the time and resources for this to be built into the project. While research agencies are coming to recognize the need to pay attention to usability engineering and other software design methodologies when developing eHealth tools for research, the message of added value needs to be more widely communicated. This is particularly so in view of the revenue currently being invested in health-related websites, many of which are of poor quality and unknown effectiveness [ ], and the vast expenditure being devoted to eHealth technologies by governments worldwide [ ]. Without this understanding, those commissioning products in this area will continue to be unwittingly complicit in suboptimal design, while those commissioning evaluation will risk poor value for money if the questions asked are inappropriate or the research methods not suited to answering them [ - ].

Conclusions
Designing effective eHealth systems and services requires the application of expertise from diverse fields and will benefit from interdisciplinary collaboration. This may be eased by increasing familiarity with each other's terminologies, theoretical bases, and research methods, with the ultimate objective of achieving transdisciplinary working. There is sufficient overlap in the techniques and concepts employed within the software design and HSR communities to make this a reality, but doing so requires the development of mutual trust and respect for each other's aims, epistemologies, and contextual drivers, as well as a willingness to step outside traditional working boundaries. New funding strategies that recognize the value of alternative methodologies and of joint working between developers and evaluators are also called for.
This paper has merely scratched the surface of a wider debate on the value of interdisciplinarity for improving the quality and effectiveness of eHealth, although it is hoped that by highlighting the potential synergies between HSR and software development it will help to provoke constructive dialogue between these two communities. Maximizing the potential of eHealth also requires the involvement of a wider constituency of disciplinary experts, including social, management, and legal scientists, all of whom have a stake in the field. Interdisciplinary networks, such as the one managed by the author, offer one means of addressing this need.
Acknowledgments
The author would like to thank Professors Denis Protti, Jeremy Wyatt, and Peter Gregor for their constructive comments on the draft.
Conflicts of Interest
None declared.
Multimedia Appendix
PowerPoint slides of MedNet 2006 presentation [PPT file (MS PowerPoint), 1008 KB]
References
- Pagliari C, Sloan D, Gregor P, Sullivan F, Detmer D, Kahan JP, et al. What is eHealth (4): a scoping exercise to map the field. J Med Internet Res 2005;7(1):e9 [FREE Full text] [Medline] [CrossRef]
- McConnell H. International efforts in implementing national health information infrastructure and electronic health records. World Hosp Health Serv 2004;40(1):33-37, 39-40, 50-52. [Medline]
- Department of Health. Delivering 21st Century IT Support for the NHS. National Strategic Programme. London, UK: Department of Health; 2002. URL: http://www.medgraphics.cam.ac.uk/Downloads/NHS_0007.PDF [accessed 2007 May 11] [WebCite Cache]
- Weingart P. Interdisciplinarity: the paradoxical discourse. In: Weingart P, Stehr N, editors. Practising Interdisciplinarity. Toronto, ON: University of Toronto Press; 2000:25-42.
- Corn M, Rudzinski KA, Cahn MA. Bridging the gap in medical informatics and health services research: workshop results and next steps. J Am Med Inform Assoc 2002 Mar;9(2):140-143 [FREE Full text] [Medline] [CrossRef]
- Ammenwerth E, Brender J, Nykänen P, Prokosch HU, Rigby M, Talmon J; HIS-EVAL Workshop Participants. Visions and strategies to improve evaluation of health information systems. Reflections and lessons based on the HIS-EVAL workshop in Innsbruck. Int J Med Inform 2004 Jun 30;73(6):479-491. [Medline] [CrossRef]
- Gagnon MP, Scott RE. Striving for evidence in e-health evaluation: lessons from health technology assessment. J Telemed Telecare 2005;11(Suppl 2):S34-S36. [Medline] [CrossRef]
- Henderson J, Noell J, Reeves T, Robinson T, Strecher V. Developers and evaluation of interactive health communication applications. The Science Panel on Interactive Communications and Health. Am J Prev Med 1999 Jan;16(1):30-34. [Medline] [CrossRef]
- Shortliffe EH, Garber AM. Training synergies between medical informatics and health services research: successes and challenges. J Am Med Inform Assoc 2002 Mar;9(2):133-139 [FREE Full text] [Medline] [CrossRef]
- Mandl KD, Lee TH. Integrating medical informatics and health services research: the need for dual training at the clinical health systems and policy levels. J Am Med Inform Assoc 2002 Mar;9(2):127-132 [FREE Full text] [Medline] [CrossRef]
- Grigsby J, Brega AG, Devore PA. The evaluation of telemedicine and health services research. Telemed J E Health 2005 Jun;11(3):317-328. [Medline] [CrossRef]
- Kaplan B. Evaluating informatics applications--some alternative approaches: theory, social interactionism, and call for methodological pluralism. Int J Med Inform 2001 Nov;64(1):39-56. [Medline] [CrossRef]
- Kaplan B, Brennan PF, Dowling AF, Friedman CP, Peel V. Toward an informatics research agenda: key people and organizational issues. J Am Med Inform Assoc 2001;8(3):235-241 [FREE Full text] [Medline]
- Karsh BT. Beyond usability: designing effective technology implementation systems to promote patient safety. Qual Saf Health Care 2004 Oct;13(5):388-394 [FREE Full text] [Medline] [CrossRef]
- Southon G, Sauer C, Dampney K. Lessons from a failed information systems initiative: issues for complex organisations. Int J Med Inform 1999 Jul;55(1):33-46. [Medline] [CrossRef]
- Stumpf SH, Zalunardo RR, Chen RJ. Barriers to telemedicine implementation. Usually it's not technology issues that undermine a project--it's everything else. Healthc Inform 2002 Apr;19(4):45-48. [Medline]
- Lorenzi NM. Beyond the gadgets. BMJ 2004 May 15;328(7449):1146-1147 [FREE Full text] [Medline] [CrossRef]
- Ammenwerth E, Shaw NT. Bad health informatics can kill--is evaluation the answer? Methods Inf Med 2005;44(1):1-3. [Medline]
- Dumay ACM, Freriks G. Quality management issues for medical ICT. Stud Health Technol Inform 2004;103:93-100. [Medline]
- Johnston D, Pan E, Middleton B. The demonstrated value of healthcare IT: decision makers need hard evidence on all potential benefits. Healthc Inform 2003:93-96.
- Pagliari C, Gregor P, Sullivan F, Sloan D, McGillivray S, Ricketts I, et al. Literature review and conceptual map of the area of e-health. Presentation at: Fourth National SDO Conference; April 27, London, UK. 2005. URL: http://www.lshtm.ac.uk/hsru/sdo/files/adhoc/conference-2005-pagliari.pdf [accessed 2007 May 11] [WebCite Cache]
- Chaudhry B, Wang J, Wu S, Maglione M, Mojica W, Roth E, et al. Systematic review: impact of health information technology on quality, efficiency, and costs of medical care. Ann Intern Med 2006 May 16;144(10):742-752 [FREE Full text] [Medline]
- Fitzgerald B. An empirical investigation into the adoption of systems development methodologies. Information & Management 1998;34:317-328 [FREE Full text] [CrossRef]
- Khoo B. A Survey of Major Software Design Methodologies. Khoo Homepage. URL: http://userpages.umbc.edu/~khoo/survey2.html [accessed 2007 May 11] [WebCite Cache]
- Royce WW. Managing the Development of Large Software Systems. Proceedings of IEEE WESCON. 1970 Aug. p. 1-9 URL: http://www.cs.umd.edu/class/spring2003/cmsc838p/Process/waterfall.pdf [accessed 2007 May 14] [WebCite Cache]
- Boehm BW. A spiral model of software development and enhancement. Computer. 1988. p. 61-72 URL: http://www.sce.carleton.ca/faculty/ajila/4106-5006/Spiral%20Model%20Boehm.pdf [accessed 2007 May 14] [WebCite Cache]
- Hartson HR, Hix D. Toward empirically derived methodologies and tools for human-computer interface development. Int J Man Mach Stud 1989;31(4):477-494. [CrossRef]
- Laplante PA, Neill CJ. ACM Queue. 2004 Feb;1(10). URL: http://www.acmqueue.com/modules.php?name=Content&pa=showpage&pid=110 [accessed 2007 May 14] [WebCite Cache]
- International Organization for Standardization (ISO). ISO 13407:1999 (2004 version). Human-centred design processes for interactive systems. URL: http://www.iso.org/iso/en/CatalogueDetailPage.CatalogueDetail?CSNUMBER=21197 [accessed 2007 May 14] [WebCite Cache]
- Thursky KA, Mahemoff M. User-centered design techniques for a computerised antibiotic decision support system in an intensive care unit. Int J Med Inform 2006 Sep 1. [Medline] [CrossRef]
- Waller A, Franklin V, Pagliari C, Greene S. Participatory design of a text message scheduling system to support young people with diabetes. Health Informatics J 2006 Dec;12(4):304-318. [Medline] [CrossRef]
- Bowman M, Lopez A, Donlon K, Tecuci G. Teaching Intelligent Agents: Software Design Methodology. Crosstalk: J Defense Software Engineering. 2001 Jun. (6) p. 10-14 URL: http://www.stsc.hill.af.mil/crossTalk/2001/06/bowman.html [accessed 2007 May 14] [WebCite Cache]
- Campbell M, Fitzpatrick R, Haines A, Kinmonth AL, Sandercock P, Spiegelhalter D, et al. Framework for design and evaluation of complex interventions to improve health. BMJ 2000 Sep 16;321(7262):694-696 [FREE Full text] [Medline] [CrossRef]
- Zuber-Skerritt O. Models for action research. In: Pinchen S, Passfield R, editors. Moving On: Creative Applications of Action Learning and Action Research. Queensland, Australia: Action Research, Action Learning and Process Management; 1995:3-29.
- Deming WE. Out of the Crisis. Cambridge, MA: MIT Press; 2000.
- How to Improve. Institute for Healthcare Improvement. URL: http://www.ihi.org/IHI/Topics/Improvement/ImprovementMethods/HowToImprove/ [accessed 2007 May 14] [WebCite Cache]
- Lewis W. Software Testing and Continuous Quality Improvement. 2nd edition. Boca Raton, FL: CRC Press; 2000.
- Kushniruk AW, Patel VL, Cimino JJ. Usability testing in medical informatics: cognitive approaches to evaluation of information systems and user interfaces. Proc AMIA Annu Fall Symp 1997:218-222. [Medline]
- Jaspers MWM, Steen T. Cognitive engineering in interface design. Stud Health Technol Inform 2002;90:123-127. [Medline]
- Jaspers MWM, Steen T, Van Den Bos C, Geenen M. The think aloud method: a guide to user interface design. Int J Med Inform 2004 Nov;73(11-12):781-795. [Medline] [CrossRef]
- Friedman CP, Wyatt JC. Evaluation methods in biomedical informatics (Health informatics). New York: Springer-Verlag; 2006.
- Coiera E, Westbrook J, Wyatt J. The safety and quality of decision support systems. Methods Inf Med 2006;45(Suppl 1):20-25. [Medline]
- Department of Trade and Industry. Competing in the Global Economy: The Innovation Challenge. Innovation Report. London, UK: HMSO; 2003. URL: http://www.dti.gov.uk/files/file12093.pdf [accessed 2007 May 14] [WebCite Cache]
- Montgomery AA, Fahey T, Peters TJ, Macintosh C, Sharp DJ. Evaluation of computer based clinical decision support system and risk chart for management of hypertension in primary care: randomised controlled trial. BMJ 2000 Mar 11;320(7236):686-690 [FREE Full text] [Medline] [CrossRef]
- Machan C, Ammenwerth E, Bodner T. Publication bias in medical informatics evaluation literature: recognizing the problem, its impact and the causes. Conference paper. 51. Jahrestagung der Deutschen Gesellschaft für Medizinische Informatik, Biometrie und Epidemiologie; September 14, 2006. URL: http://www.gmds2006.de/Abstracts/129.pdf [accessed 2007 May 14] [WebCite Cache]
- Heathfield H, Pitty D, Hanka R. Evaluating information technology in health care: barriers and challenges. BMJ 1998 Jun 27;316(7149):1959-1961 [FREE Full text] [Medline]
- Gustafson DH, Wyatt JC. Evaluation of ehealth systems and services. BMJ 2004 May 15;328(7449):1150 [FREE Full text] [Medline] [CrossRef]
- Ammenwerth E, De Keizer N. An inventory of evaluation studies of information technology in health care: trends in evaluation research 1982-2002. Methods Inf Med 2005;44(1):44-56. [Medline]
- Ewusi-Mensah K. Software Development Failures. Cambridge, MA: MIT Press; 2003.
- Eysenbach G, Powell J, Kuss O, Sa ER. Empirical studies assessing the quality of health information for consumers on the world wide web: a systematic review. JAMA 2002;287(20):2691-2700 [FREE Full text] [Medline] [CrossRef]
- Anderson GF, Frogner BK, Johns RA, Reinhardt UE. Health care spending and use of information technology in OECD countries. Health Aff (Millwood) 2006 May;25(3):819-831. [Medline] [CrossRef]
- Clegg M. Commissioning evaluation: is there an enlightened approach? Proceedings of the UK Evaluation Society Conference; December. 2002. URL: http://www.evaluation.org.uk/conference/Conf%20presentations%202002/Clegg.pdf [accessed 2007 May 14] [WebCite Cache]
- Pagliari C. Implementing the National Programme for IT: what can we learn from the Scottish experience? Inform Prim Care 2005;13(2):105-111. [Medline]
- DG Enterprise and Industry of the European Commission. Supporting the Monitoring and Evaluation of Innovation Programmes. A Study for DG Enterprise and Industry. Final Report. Brussels-Luxembourg: DG Enterprise and Industry of the European Commission; 2006. URL: ftp://ftp.cordis.europa.eu/pub/innovation-policy/studies/smeip_finalreport_master2.pdf [accessed 2007 May 14]
Abbreviations
HSR: health services research
Edited by G Eysenbach; submitted 09.11.06; peer-reviewed by E Ammenwerth, J Aarts; comments to author 14.02.07; revised version received 19.04.07; accepted 04.05.07; published 27.05.07
Copyright© Claudia Pagliari. Originally published in the Journal of Medical Internet Research (http://www.jmir.org, 27.05.2007). Except where otherwise noted, articles published in the Journal of Medical Internet Research are distributed under the terms of the Creative Commons Attribution License (http://www.creativecommons.org/licenses/by/2.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited, including full bibliographic details and the URL (see "please cite as" above), and this statement is included.