Environment and Planning A 1997, volume 29, pages 771-787

Software for qualitative research: 1. Prospectus and overview

M A Crang
Department of Geography, Durham University, Science Laboratories, South Road, Durham DH1 3LE, England; e-mail: m.a.crang@durham.ac.uk
A C Hudson
Department of Geography, Cambridge University, Downing Place, Cambridge CB2 3EN, England; e-mail: achl005@hermes.cam.ac.uk
S M Reimer
School of Geography and Earth Resources, University of Hull, Cottingham Road, Hull HU6 7RX, England; e-mail: s.m.reimer@geo.hull.ac.uk
S J Hinchliffe
Department of Geography, Keele University, Keele, Staffs ST5 5BG, England; e-mail: s.j.hinchliffe@keele.ac.uk
Received 20 May 1996; in revised form 29 October 1996

Abstract. In recent years there has been growing interest in the use of computers within qualitative geography. In this paper we review the types of software packages that have been adopted and outline some of their distinctive features. We discuss the intellectual and institutional reasons for the interest in the software and highlight the ways in which such reasons have shaped the use made of these packages. We argue that only a contextual account of how packages are adopted, adapted, and used can explain the situation in geography. Furthermore we suggest that the archaeologies underlying the packages—their theoretical presuppositions—are remarkably homogeneous and need to be clearly understood before deciding how the packages might be used. We outline how some of these presuppositions have affected the ways in which the packages have been used, and develop—from our own experiences—some points about informal networks of adoption and institutional contexts. The point of this is to suggest the minimal role played by formal software guides and manuals in choosing whether and how to use a package. The paper outlines the current 'state of play' and raises issues of future use to be addressed in a second paper on this theme. Our intention is neither to sell a particular package, nor to say "to do X, use package Y", because such recommendations are often misleading. Rather, our aim is to provoke discussion about the use of software packages in qualitative geography.

Introduction
It seems an appropriate moment for a commentary on the forms of software designed to aid the interpretation of qualitative material, for three intersecting reasons. First, there has been a rapid expansion of available software during the last five or so years. Software has become increasingly user-friendly and compatible with the machines and operating environments used by qualitative geographers. In short, these programs have ceased to be the province of experts using arcane programming languages and large processors. Second, and relatedly, an increasing number of geographers are adopting such technologies, for a variety of reasons and in a variety of ways which we address later. Third, the use of such software has reopened debates, if they can be called that, over the reliability and scientific attributes of qualitative methods. Therefore, in this paper and in a second (Hinchliffe et al, 1997) we seek to address and respond to what we see as an interesting situation. We do not seek to proclaim some unproblematic answer, nor any or all software as the answer. We regard this reaction to new technologies as unhealthy and unhelpful.
All too often technology enters the social sciences as a deus ex machina, credited with almost divine powers to remould disciplines and solve theoretical problems (for example, Openshaw, 1991; 1995; 1996). Instead, in these papers, we interrogate the machines and the way they function and seek to provoke discussion about the role of computers in qualitative geography.

There has been a boom in the use of computer-aided qualitative data analysis software (hereafter CAQDAS), with a mushrooming of seminars, symposia, and projects sponsored by the Economic and Social Research Council. Such academic booms involve both a quantitative expansion and a constriction in prior freedoms (Morris, 1988a). CAQDAS has become a buzzword in some quarters, in both a pejorative and supportive sense. It has become something of a 'good thing', but it has also become a symbol for hopes and fears about qualitative research. As Latour (1987) suggests, 'scientific' procedures become standardised and boxed—turned into a single word, a single thing—at the moment where an orthodoxy and power are inscribed in a technique. We would suggest that this is now happening with CAQDAS. From being an area of openness and some personal choice, it is becoming an area where new technological gatekeepers are becoming ever more important (compare Shapin and Schaffer, 1985). Perhaps as a result, software technologies look set to become the required grammar through which qualitative analysis is spoken. We shall return to the effects of this later.

Meanwhile, given the methodological concern common in qualitative work, it seems surprising that there has not been a study of the way CAQDAS is moving through geography and the particular effects it may have within the discipline. The use of qualitative software within geography would seem to raise some fears particular to the discipline. This is, in part, because as a technology to aid interpretation of qualitative materials, these programs come from outside human geography. This is not some sort of disciplinary xenophobia. Rather it is to highlight that, in a discipline where information technology (IT) has been the province of nonqualitative geography, qualitative geographers have treated CAQDAS with a certain healthy scepticism and a degree of naivete. We shall argue, however, that such reactions risk taking these technologies as 'immutable mobiles', viewing these packages as things which transfer procedures and ideas from one domain to another without examining what is going on inside them.

To this end, we try to help researchers to make sense of, and make sensible decisions about, these technologies. We have divided the issues into two papers: in this paper we account for the current 'state of play' in geography; in the second we address the possibilities and problems opened up by these new technologies. As a collective project we have sought to utilise our diverse relationships to CAQDAS to highlight the specificities of use and abuse of programs. In this first paper we strive to address the questions of "where are we?" and "how did we get here?" We begin with a discussion—curiously absent from geographical journals—of approaches to the analysis and interpretation of qualitative materials. We then sketch the range of software and give some indication about their intended uses.
Next, we examine the ways in which the software has been taken up within geography and explore the networks that it has formed, the routes along which it has travelled, and the webs that sustain it. Finally, returning to a science studies approach, we suggest how CAQDAS, as a technology, has become invested with certain ideas and meanings. In this paper we begin with a detached viewpoint, an overview or survey, for the sake of brevity. However, we hope to reveal the constructed nature of this position in later sections where we highlight the importance of the specific routes through which we, as researchers, have engaged with particular pieces of software. We argue that getting the most out of these 'tools' involves refusing to view these programs as invariant devices. Rather, we should be aware of their specific possibilities in specific research contexts.

Styles and expectations: the sense of a gap
A fundamental question faced by all researchers is: "what can I justifiably say?" In contrast to more quantitative approaches, qualitative researchers have not produced a definitive answer to this question. CAQDAS promises to help researchers to work out what we can say; to codify the process of making sense of qualitative material; to fill the gap sensed by qualitative researchers. Perhaps some of our uneasiness with the packages is that geographers have not been altogether sure that they need this order, especially because qualitative methods and analysis have often been adopted in an ad hoc manner, matching particular and often idiosyncratic styles to particular research situations. So, although researchers are lured to the packages by the offer of increased ease of ordering and a structured approach to organising (field or other) materials, there is a worry that this marks too much order. The ordering of research seems equally and alternately a promise and a menace. This is especially so in the context of budgetary constraints and the development of courses to learn particular software, where the choice of software packages may have important implications within institutions about whether interpretive style will remain such an idiosyncratic matter. Choosing a package for an institution or research project has effects on the types of analytical styles and approaches that may then be encouraged (see the appendix).(1) Given that this raises the thorny issue of standardisation in research practice, we are perhaps reenacting the skirmishes between more freeform 'interpretation' and scientised 'analysis'.

In this section we want to put these moves into perspective, by outlining some of the more formal approaches from which geographers have drawn inspiration. Our purpose is not to outline the detailed practicalities of each approach, but to consider how these approaches have been used, and in what ways they have created a sense of a gap that CAQDAS promises to fill. We do not provide a comprehensive catalogue of all conceivable approaches [as Tesch (1991) managed to identify at least 43 of those]; instead we want to highlight what geographers have sought through different approaches. To do this it may help to think of three broad approaches to interpretation: theoretically selective analysis; analytically inductive analysis or 'grounded theory'; and formal structural analysis.

(1) This might sound a little determinist at this point, but there is no doubt that the more sophisticated packages are meant to 'read' in certain ways. We expand on these 'ways' later in this paper and return to the question of determinism in the second paper.
We are not suggesting these categories are some menu from which to choose, nor that they are all equally represented in different fields, but that they offer some purchase on groups of approaches. At the root of our inquiry is the suggestion of Richards and Richards (1991, page 39) that much of the interest in CAQDAS involves an increased questioning of the criteria for judging the reliability of our methods. In the past such reliability has often boiled down to either getting lots of 'field materials' or lots of different materials ('triangulation'). Although both stratagems are potentially aided by the sorts of packages which concern us here, these packages sometimes promise rigour through the approach taken to analysis itself. We want to probe why and how these ideas emerged, examining the three analytical approaches in turn.

Theoretically selective analysis
A good deal of qualitative geography employs the 'informant quotation' as the most visible sign of close analysis of qualitative materials. Generally it is clear that a particular excerpt has been selected for its pertinence to the argument of the paper, for its apposite and illustrative contents or style. There are many possible reasons for using quotations, but in this analytical style a quotation is selected according to the theory of the paper or book. Quotations inform through illustration, 'showing', rather than 'telling about'. At the same time they give a brief taste of the real world in the rarefied portals of academic writing.(2) Quotations have the rhetorical effect of saying "I was there", as well as the evidentiary effect of saying "this is what happened" (Atkinson, 1991). Too often in geography, this illustrative function has been relied on in an unexamined fashion. The reader must trust that these do 'represent', in unspecified ways, a wider reality. We know full well that the normal selection criteria are shaped by the pressures of word limits, the need for pithy examples which do not sprawl into unrelated fields, and the discovery of particularly evocative turns of phrase. We do not argue otherwise, but do suggest that this has confused the functions served by such illustrative materials. Indeed the most successful occasions for this sort of use are perhaps those termed by Morris (1988b) "exemplary allegories", where the logic is not one of numerical representativeness but theoretical fitness. Illustrative material is chosen and interpreted in the light of how it helps to throw into relief certain important issues, as in the trope of an opening vignette out of which the significant elements are then teased. There is, in the extreme version of this, no reason why such vignettes could not be fictional. This may be acceptable and a very productive approach; perhaps the obvious example is the construction of Weberian 'ideal types'. However, we must then be clear that such materials are not 'empirical' informants that can be turned to in an evidentiary manner. Bearing these points in mind, we turn to the second analytical style which says more, perhaps, about how qualitative researchers turn their materials into evidence.

(2) In contrast, compare Pahl's argument that "if I had attempted to demonstrate the divisions of labour by using single quotations from the many different and diverse households that were studied, it would have done violence to the complexity that the interviews revealed" (1984, page 278).
Grounded theory
Whatever the final appearance of their texts, most qualitative researchers spend a long time sifting, sorting, rereading, and analysing their materials, in an attempt to extract important elements or significant features. If these processes do not form a large part of the published output they certainly occupy a considerable amount of time. Perhaps the dominant mode of qualitative analysis in geography is one of analytic induction broadly conceived or, more specifically, grounded theory (see Glaser and Strauss, 1967; Strauss, 1987). Most geographers utilise, perhaps unknowingly, ideas of grounded theory in a fairly catholic manner. It is not our purpose here to debate the theoretical differences of the 'analytically inductive' approach developed by Glaser against the merits of 'grounded theory' propounded by Strauss (compare Huber, 1995; Lonkila, 1995). It is sufficient to say that each looks for the logics of respondents and provides ways in which to develop categorisations based on qualitative materials, ways of coding statements that then produce a logically coherent way of interpreting the reasons and significance of statements as part of a larger schema. In this schema, initial materials are fragmented into component issues embedded in statements and are then recontextualised according to the logics thus inferred. Quotations in final academic products thus come to exemplify not so much, or not only, the informant, but the interpretative logic of the analysis.

Formally, this tends to be done by 'coding' or 'categorising' statements and then relating statements between codes and categories in order to make visible the reasoning behind particular attitudes and beliefs. For example, in a project on international banking an excerpt from a transcript in which an interviewee said "we have the power to enact legislation for the Bahamas" might be coded as sovereignty; related to a section—"our laws are different from Cayman's"—coded Bahamas versus Cayman; and the relationship or link between these two ideas labelled as place. It is this lengthy and time-consuming process that most CAQDAS set out to aid and speed up. Specifically, it is often suggested that computers allow more material to be processed, so raising the 'methodological ceilings' on the amount of material that can be analysed (Richards and Richards, 1994). This quest to 'enlarge the database' arises, we believe, from the intersection of a set of relations and conditions around this form of interpreting qualitative materials. For the moment it is worth mentioning that not all researchers, and not all programmers, share this rationale for extending the size of a study (for example, Seidel, 1991).

Formal structural approaches
Perhaps less common, though influential, is the more textual linguistic model of interpretation. Although this often appears to veer towards the ineffable—the sudden scholarly insight—the idea of studying both literature and landscapes as texts has given a certain momentum to studying formal linguistic structure. Perhaps thankfully, textual analysis came to geography after the high water mark of formal structuralism and without much of the baggage of linguistics.
Still, it is apparent that materials are studied for, say, their narrative form, the tropes they employ and/or the metaphors and metonymy in their construction. For example, one might explore the ways in which different sets of actors (private bankers, central bankers, politicians) talk about an event (the development of the Bahamas offshore financial centre) to examine whether they agree on the key events or turning points, where they tell a story of individual heroism, teamwork, or accidental development, and which other actors they feel played important roles. This style of analysis is inspired by approaches to history that focus on the writing and construction of the world, as well as ethnographies focusing on the textual construction of a sense of realism. As we have suggested, few accounts have explicitly adopted structural models (indeed, most concur with the poststructuralist critiques of such models), but implicitly at least they look to the structures that generate meaning. In such a sense they focus upon the rules of discourse—the rules that semiotics suggests produce meaning—far more commonly than the contents of utterances.

Our purpose here is not to judge the merits of different approaches, but to suggest how they may map into different 'needs' and requirements for CAQDAS, to suggest how CAQDAS may (seem to) fill the gap sensed by qualitative researchers. It is worth stressing that our focus here is upon grounded theory, because it appears to be symptomatic of the sorts of software adopted in geography and in many ways expresses the limits of vision for such software as currently utilised.

Overview of software types
A recent and influential survey of CAQDAS by Weitzman and Miles (1995) attempted a comprehensive assessment of the relative strengths and weaknesses of various packages. They divided programs into four broad categories: text-searching databases; text-coding and text-retrieval programs; more sophisticated text-organising packages; and those which claim to use elements of artificial intelligence in developing theories based on textual material. Although this seems a useful categorisation, we feel that the last category has so far had a minimal impact, and in many ways obscures the greater possibilities and different directions of some hypertext writing packages. Further, the degree of overlap between the second and third categories is such that we would prefer to regard them as a continuum. So we shall follow the broad outlines laid out by Weitzman and Miles but will attempt to tailor the survey specifically to geography.

Text-search databases
The first and simplest software application is one that anyone who uses a word-processing package will be familiar with; it is the ability to search for specific words or phrases. The existence of a search function within word-processing packages might seem to obviate the need for specialist software, but we would suggest this is not the case. Such an argument misses the speed advantages of specialist software. In contrast to the pedestrian pace of word processors, specialist text-searching software can search through several hundred pages of text in less than a minute and then display every occurrence as a complete report.
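To make this concrete, the fragment below is our own minimal sketch, in Python, of the kind of whole-corpus searching such packages perform; the file locations and the size of the context 'window' are invented for the purposes of illustration, and no package is implemented in quite this way. The corpus is read into memory once and each hit is reported with a little surrounding text, in the keyword-in-context style discussed in the next paragraph.

```python
# A minimal, illustrative sketch (not any package's actual code) of the kind of
# whole-corpus search described above. File locations and the context-window
# size are invented for the example.
import glob
import re
from pathlib import Path

def search_corpus(pattern, paths, window=60):
    """Read every transcript into memory once, then report each match with a
    little surrounding text (a 'key word in context' style listing)."""
    corpus = {p: Path(p).read_text(encoding="utf-8") for p in paths}
    report = []
    for path, text in corpus.items():
        for match in re.finditer(pattern, text, flags=re.IGNORECASE):
            snippet = text[max(0, match.start() - window):match.end() + window]
            report.append((path, match.start(), " ".join(snippet.split())))
    return report

# For example: every occurrence of 'sovereignty' across a folder of transcripts.
for path, pos, context in search_corpus(r"sovereignty", glob.glob("transcripts/*.txt")):
    print(f"{path} @ {pos}: ...{context}...")
```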
Text-searching programs have had a wide application in literary fields to produce what are known as KWICs (key word in context), where the search word or phrase is displayed with a section of (con)text around it, an amount which varies from program to program. For example, one might use a text-searching program to extract all the sentences or paragraphs containing the word 'sovereignty', or the phrase 'sovereignty to enact our laws'. We are willing to hazard that most qualitative geographers will treat these systems with the scepticism which their limitations deserve. Searching in this way can be useful in large, heterogeneous bodies of research materials (such as a library catalogue or archive) but their use seems somewhat limited in qualitative geography. One reason is that we suspect (and our own experiences tell us this is so) that, if you have conducted interviews and transcribed them laboriously for hours, a text searcher is unlikely to turn up many surprises. What gives these packages their speed is the way in which they manipulate data: they take all the text into the system as a database, rather than leaving it as discrete files to look through in turn. An important consequence of this form of data handling is that many of the more 'sophisticated' programs (discussed below) suggest they can do this form of searching but are in fact much slower than their 'simpler' rivals, because many work by loading and then saving each case rather than treating them all as a single database. As a result, we shall urge, and continue to urge, a 'horses for courses' approach. Programs such as the Oxford Concordance Programme (Oxford Computing Services) offer extremely rapid search capacities but are limited because the material has to be specially formatted and then a different set used for anything else. On the other hand programs such as Textbase Alpha and Kwalitan allow these search functions, and in the case of the latter then allow you to go to the original source of any individual incident to look at it in context, and also allow a coding of materials as below.

Coders, retrievers, and concept builders
Mentioning the latter two packages leads us on to programs that are oriented around an approach of coding and retrieving. This forms the central practical plank of much analysis and most programs. The exact theoretical principles often vary slightly, but are generally derived from grounded theory (Glaser and Strauss, 1967; Strauss, 1987) to such an extent that this is becoming something of a common denominator among programs (Coffey et al, 1996; Lonkila, 1995).(3) Essentially the process of analysis informing these programs is the coding of texts, by categorising segments according to topics derived either from the participants or from the theoretical perspective of the researcher (for more details see, for instance, Agar, 1986; Cook and Crang, 1995; Crang, 1997). 'Traditionally', coding was managed by physically cutting and pasting material in photocopies to form stacks and piles of segments physically grouped according to topic in what Agar (1986) called the "long couch or short hall" approach (we return to this in the second paper).

(3) As Lynn Richards, one of the co-designers of Nud.ist, put it, "grounded theory is the bumper sticker we all carry" (Seminar 1995). We might add there was also a minor scramble to gain endorsement of programs from Anselm Strauss—who had an array in his office but himself remained uncommitted (personal communication from Mike Fisher, University of Bristol).
The vast majority of CAQDAS programs aim to facilitate this process, but there are important differences in approach. Essentially, the computer is being used to serve as an automated clerk, or a rather fast and flexible filing system (Richards and Richards, 1991). The computer, in various ways, is used to attach a code of some sort to a segment of text and allow its retrieval. For example, having coded a pile of interview transcripts relating to the development of the Bahamas offshore financial centre with codes including sovereignty and place, the user could retrieve all the segments with one, or both, of these code words attached and collect them together. Such a recontextualisation of these stories about sovereignty and place might then help the user to think about the importance of those terms and their relationship in the development of the Bahamas. Thus, although many fears expressed about the use of software focus upon the alleged 'automation' of the process, it is important to emphasise that at this level the programs simply allow researchers to assign electronically codes which they would traditionally have put on paper. The machines do not do the analysis.(4)

(4) We are a little reticent in describing this as a simple mimicking of former procedures but we address this unease further in the second paper. The point we are making here is that these packages do not serve as automated analysers. An illustration of the point is the demo version of Atlas/ti which carries an option to 'magically code' all items of significance and interest. When chosen, the package cheerily informs the user to dream on.

The central dynamic in the evolution of these sorts of programs is the development of functions dealing with filing. For instance, The Ethnograph originated as a cluster of ten separate programs gradually linked together, each performing some task of assigning codes to segments, or search routines for codes. What grew was the range of functions to search among the codes: to retrieve all similarly labelled sections; then to perform Boolean searches; then to search for overlaps between categories (see Seidel and Clark, 1984). The principal differences between these programs emerge in the way they envisage the categorising of material. What follows below then is an attempt to look at specific functions, at what we might call the archaeologies of these programs. Although we shall tend to consider specific programs along with characteristic functions, this should not be taken to imply either that those programs only have that capacity, or that only those programs have that capacity. In short, this description is not designed to be an all-encompassing tabulation of functions [see Weitzman and Miles (1995) for a more comprehensive overview, and the appendix for a more concise overview]. Although there is not room or scope to compare all the functions of all the programs, it is appropriate here to draw out some of the different visions animating them. They are all based on the working practices of their designers, by and large qualitative researchers in their own right. Each program thus bears the imprint of how these researchers have approached their analyses.

Thus The Ethnograph maintains an idea of grounded theory, whereby the user is encouraged to print out transcripts and doodle down the side (that is, 'open coding'). The program then allows the translation of these doodlings into a set of codes. Lines down the side of the text mark the overlaps of codes and allow a nesting of codes and subcodes. Kwalitan echoes this process, though not the display system. In essence, Kwalitan involves the development of categories of increasing abstraction and includes a way of organising categories into a hierarchy, and adjusting that hierarchy directly.
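Before turning to how the individual packages present retrieved segments, it may help to make the code-and-retrieve logic itself concrete. The sketch below is our own illustration, in Python, with invented data and helper names; it shows segments of text carrying sets of codes, and retrieval as a simple Boolean search over those codes. It illustrates the clerical logic the packages automate, not the internal workings of any particular program.

```python
# A toy sketch of code-and-retrieve, continuing the Bahamas example from the
# grounded theory discussion above. The data and helper names are invented;
# this is not a description of how The Ethnograph, Kwalitan, or any other
# package stores its material.
from dataclasses import dataclass, field

@dataclass
class Segment:
    source: str                                # which transcript it comes from
    text: str                                  # the segment of text itself
    codes: set = field(default_factory=set)    # codes attached by the researcher

segments = [
    Segment("interview_01", "we have the power to enact legislation for the Bahamas",
            {"sovereignty"}),
    Segment("interview_01", "our laws are different from Cayman's",
            {"sovereignty", "Bahamas versus Cayman"}),
    Segment("interview_02", "offshore business has brought jobs to Nassau",
            {"local benefits"}),
]

def retrieve_any(segments, codes):
    """Segments carrying at least one of the given codes (a Boolean OR search)."""
    return [s for s in segments if s.codes & set(codes)]

def retrieve_all(segments, codes):
    """Segments carrying every one of the given codes (a Boolean AND search)."""
    return [s for s in segments if set(codes) <= s.codes]

# Everything coded 'sovereignty' or 'place', then only the segments carrying
# both 'sovereignty' and 'Bahamas versus Cayman'.
for seg in retrieve_any(segments, {"sovereignty", "place"}):
    print(seg.source, "->", seg.text)
for seg in retrieve_all(segments, {"sovereignty", "Bahamas versus Cayman"}):
    print(seg.source, "->", seg.text)
```

The packages differ, as the following paragraphs describe, in how such codes are organised and displayed rather than in this basic filing operation.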
The Ethnograph works well to retrieve segments of text, which are initially decontextualised from the original interview, but arranged such that all the similarly coded segments appear together. It is perhaps misleading to use the term decontextualised as this represents a recontextualisation which very usefully allows the juxtaposition of segments of any length, from several lines to several pages long [but see Dey (1995) and issues we raise in the second paper]. Similarly, The Ethnograph also offers the possibility of sorting materials by 'face sheets', into which are encoded, for example, the sociodemographic variables of the interviewee, such that searching can be done by gender or age as well as the content of the material. This might be useful, to return to our example of the development of the Bahamas offshore financial centre, to see whether there is a pattern of expatriates and locals expressing different opinions about the benefits derived locally from hosting offshore financial activity.

A slightly different idea can be seen in programs such as Textbase Alpha or HyperQual2. Both of these programs have explicit functions to integrate a range of different materials; they both allow the precoding of prestructured data, such as structured interview or questionnaire responses. Hence it is possible to do a preliminary sorting of answers to specific questions, as well as incorporate freeform materials such as interviews or field notes. HyperQual2 does this on the basis of HyperCard for Macs, sorting types of material into stacks of electronic cards, including one for secondary source material such as official reports or press releases, which can include both the content as well as other relevant information on publication and so forth. Textbase Alpha and HyperQual2 therefore facilitate triangulation within the programs. This may be a useful feature, if your project will involve triangulation, because in some of the other packages this kind of cross-referencing and corroborating evidence has been left out of the package and treated as a separate stage of analysis.

Nud.ist and HyperRESEARCH, two of the most popular and biggest Windows packages, have a slightly different emphasis. HyperRESEARCH has a friendly interface and uses a simple mechanics for coding that is gradually becoming common to virtually all programs; simply highlight the chosen text, click on the chosen code, and the segment of text has been coded. Additionally, however, HyperRESEARCH organises materials by 'cases' that then include individual interviews or a series of interviews or field notes. This allows for a more complex arrangement of multiple-sourced material, from different locations perhaps, a feature which may be of particular interest to geographers. A further—and to our mind more dubious—feature is the 'hypothesis testing' function offered by HyperRESEARCH. In fact, this function does not really test hypotheses, but rather provides a way of labelling cases.
Thus any case where a particular set of occurrences is noted can be labelled as a 'type'. Although we can see some use in this, as a means of grouping informants by their concerns for example, we worry that HyperRESEARCH is marketed in terms of 'inter-rater reliability' and the possibility of making formal, that is, replicable, generalisations about findings (Hesse-Beiber and Dupuis, 1995, page 135; Hesse-Beiber et al, 1991, page 305).

Nud.ist has had a long evolution and it too starts from a system that is especially sensitive to multiple sources of material. Indeed it has been modified to allow multiple workers on the same project to collate their efforts. Again, it has a process of indexing, whereby codes gain in abstraction. This is apparent in one of Nud.ist's most obvious features, a hierarchical tree diagram of categories. This allows it to make the bold claim to 'theorise', albeit in a different way from HyperRESEARCH. First, in obvious accordance to the precepts of grounded theory, its hierarchies could be used to move from emic to etic codes, from codes contained within the data to more abstract codes from the researcher's head or theoretical perspective.(5)

(5) However, this hierarchical pattern seems to work less well with theorising (though it obviously does not exclude it) than with a more empirical categorisation. Thus the 'demo version' illustrates separating materials by the empirical categories of 'mothers', 'carers', and 'children', followed by a segmentation within each group by their associated empirical behaviours.

Additionally and more importantly—like Kwalitan, HyperQual2, Hypersoft, and later versions of The Ethnograph, but unlike HyperRESEARCH—Nud.ist encourages the use of memos attached to segments of text. Researchers are encouraged to record their working ideas, contextual information about what they felt during the interview, or notes about significance. It is, then, possible to code and search through these memos in a cumulative process which allows ideas and comments to be incorporated into the original material. The program thus attempts to set grounded theory as a process of constant reflection against the way many researchers claim to be using grounded theory when they are just 'coding' (Coffey et al, 1996, paragraph 7.6; Lonkila, 1995). In order to fit the designers' interpretation of grounded theory, the package is set up to encourage the 'multiple indexing' (that is, categorising) of text segments. Nud.ist encourages overlaps and allows rapid resorting and shifting, in part because the designers felt that manual methods made this process so slow that there was an unhealthy pressure to limit the categories used at too early a stage (Richards and Richards, 1991). Nud.ist is designed for the continual development and refining of an indexing structure by reincorporating search results into the coding of material. This refining of ideas is possible because of the extensive searching capacities of Nud.ist. So, at a first cut, one might find all the instances where mothers talk about employment, after which this subset could be searched for subthemes common to just this group—mothers talking about employment of a part-time nature, perhaps, and for themes that do not wholly overlap or occur close to each other. There are certain limitations, however, to a program which is organised around hierarchical categorisation and progressive revisions.
For instance, to adapt a branching, condensing, and bifurcating flowchart of incidents to the graphical tree in Nud.ist requires some dextrous rethinking. Similarly, attempts to think about opposition and differential relationships between types of codes sometimes map a little awkwardly onto the tree-diagram approach.

An alternative animating spirit can be found in Hypersoft, which is another Mac-based HyperCard system (Dey, 1993; 1995). Hypersoft has memoing functions and the capacity to produce graphics to 'test' ideas about relationships in the material (about which we have the same reservations as the 'testing' in HyperRESEARCH). Hypersoft is able to revise codings, and has the capacity to use initial retrievals and memos to build into later searches. Although it has a code and retrieve mechanism, it discourages the building of hierarchies of codes; any such hierarchies have to be based in the researcher's head rather than in the machine. What it offers in addition is an ability to sort material not just via these codes but through embedding electronic links directly between segments of material (see also Cordingley, 1991, page 169). That is, instead of looking for meaning by comparing bits of text, Hypersoft looks for meanings through links between them. This is a different way of thinking with materials, where the researcher can trace connections through the various materials. The program then allows a (somewhat crude) graphical presentation of the structure of these links.

The graphical representation of linkages between codes is taken further by the program Atlas/ti. This program has appeared perhaps rather more recently than most others and from the start was structured around a graphical user interface, operating with icons and pictures rather than command lines.(6)

(6) We make this point both to suggest its attention to graphical styles and also because, until summer 1996, the Atlas/ti interface was incompatible with Windows or other common 'graphical environments'. Because Atlas/ti used its own specific graphical interface, installation required some technical work and you often needed to reboot your system before switching to a word-processing package.

Atlas/ti allows the user to code and retrieve, and further to make notes and memos about specific codes, documents, segments, or indeed 'free floating' notes unconnected to the text. Each memo can then be assigned a status as a 'summary', 'critique', or whatever. The real point of interest is how this process is then organised around graphical links, representing what the designer terms "semantic links". Thus one click of the mouse calls up a graphical presentation of the relationship of codes to text, and another click takes you to any segment of text depicted as relating to that code. Or, you can trace the links of text to memos, memos to codes, codes to codes, and any permutation of this. Most interestingly you can then draw new links or move the graphics around to show different relationships, and the links you make on the drawings affect the links as embedded in the system. You can drag a code and all the related bits of text over to another and create a higher level category to link both or you can separate them. The graphic interface provides another means to manipulate the material. This interface will format links into a tree diagram if you wish, but unlike Nud.ist it does not demand this.
Instead, the program provides the flexibility to move elements on the diagram directly with the mouse and specify the nature of the links [so the codes can be arranged as anything from a flow diagram through to a semiotic square should the researcher wish (compare Crang, 1997)].(7)

(7) These graphical representations can at present be exported as pictures that can be incorporated into later publications. The version currently beta-testing promises to allow their export in HTML format.

Atlas/ti thus takes an approach allowing a flexible movement from a 'textual level' of concern with the primary texts to the various possible structural relationships between codes, their 'families' or groups, and their logical relationships expressed through the graphical networks (Muhr, 1991, page 359). It also allows the materials to be entered and stored as an assemblage of all the 'primary texts' along with their associated notes, memos, and codes, in a 'hermeneutic unit'. One of the advantages of this is that it allows multiresearcher teams to work on the same materials, where each researcher's work creates new entities in the hermeneutic unit that are annotated by date and creator. This terminology perhaps shows some of the program's origins in that it specialises in the hermeneutic processes of comparing texts, drawing links and differences. Thus, although the designer has specifically used some processes from grounded theory—and indeed terms such as axial coding—it seems just as, if not more, embedded in other approaches to texts. Thus it takes up the hypertext linking idea, not only allowing bridges between codes and memos to the text to be generated but also enabling the researcher to map out itineraries through the texts themselves and represent them graphically. For anyone working with narrative modes of analysis, for example, Atlas/ti provides an alternative approach to the 'delinearisation' of text into coded segments, allowing the linking of segments and the specification of the sort of links between them (Muhr, 1991, page 357). Working in this way with Atlas/ti seems to move analysis away from the approaches that have been the focus of most of the packages we have mentioned. The program clearly opens up possibilities for working with more textual or linguistic styles of analysis.

Delimiting CAQDAS
This last function then perhaps displays the symptomatic limits of these programs even as it expresses the possibilities opened up by a more hermeneutic approach to analysis (compare Coffey et al, 1996). The limitation is that all the programs currently spreading through geography come from one social science background. Perhaps this limitation is best expressed by thinking counterfactually—so the issue is not what tasks software has been designed for, but rather to see which possibilities have not been developed. Thus, symptomatically—and we are not sure whether this is a good or bad thing—there has been no adoption of GIS-type systems into CAQDAS (compare Gilbert, 1995). To date, no program offers to connect qualitative materials to spatial sociodemographic data, though some programs might allow spatial sources to be factored in.(8) Likewise, no CAQDAS has emulated the systems being developed by local authorities and libraries to link pictures, places, and descriptions into virtual palimpsests about landscapes.
Equally, although CAQDAS programs offer powerful ways of looking at texts they do not really allow the user to organise these verbal materials around, say, corporate structures nor generally to record in detail the provenance of a document. This emphasis also means that, although some programs allow the coding of off-line material,(9) and some programs (Hypersoft and Atlas/ti) can produce graphical representations, the overwhelming concern is with the spoken word.(10)

(8) Several programs (for instance, Nud.ist, Atlas/ti, and Textbase Alpha) do, however, allow the processing of structured materials or the generation of statistical output that could then be introduced into packages such as SPSS (SPSS Inc., Chicago, IL). Even so, it has to be pointed out that some of the designers are quite critical of what use this might be and regard it as a very limited option (Muhr, 1995).
(9) For example, Nud.ist can attach codes to videotape, audiotape, or pages of material not directly entered. In relation to videotape, Nud.ist does not store the video materials, but assigns codes to specific time periods that can be recalled using software to link the computer to a videoplayer such as C-Video.
(10) The latest version of Atlas/ti also looks set to allow images to be incorporated as the primary material, with codes attached to them, or links to other texts or images. The designer specifically thought of interpreting paintings or nonverbal behaviour (Muhr, personal communication). It also promises an improved graphical output.

One limitation discouraging development in this area is that the standard textual 'product' of research, and indeed hard copy produced during research, is very poor at retaining hypertext links (Cordingley, 1991, page 175). The linking of text and images in this manner is very common on the World Wide Web, the function of which is not analysis but rather representation. In contrast, all the CAQDAS programs considered in this paper have been seen as tools to help produce 'results' in different ways, results which are then taken elsewhere to be 'written up'. The possibility of hypertext writing programs surely begins to raise a possibility of writing through, rather than writing up, in novel and constructive ways, and opens up at least the potential for new possibilities of interpretation and even the inclusion of reader interaction (Coffey et al, 1996, paragraph 8.7; Gilbert, 1995; Landow, 1992). Although hypertext writing is speculative and poses problems for the form taken by research 'output', we must be alert to the possibilities of linking 'analysis' to writing. Indeed, the next revision of Atlas/ti promises to include a 'professional' version that will allow the exporting of 'findings' as hypertext files (Muhr, 1996, personal communication).

Thinking of these future possibilities returns us to consideration of how CAQDAS has so far been used in geography and what desires have stimulated and been generated by its adoption. Our aim here is to demonstrate that no amount of abstract listing of program functions will give the reader a sense of what is used to achieve what ends. We want to show how and why packages were adopted and adapted, and through this to think about the current context of their use, the 'state of play' in the field of CAQDAS in geography. In recontextualising in this way we hope to be able to think further about the future possibilities of using CAQDAS in geography.

Networks of adoption
Each of the authors has a different story to tell about their experiences with CAQDAS. However, there are important similarities which can be drawn out of our diverse tales. In this section we seek to indicate, first, that the ways in which these technologies were adopted can affect the ways in which research projects are developed; and second, that the ways in which these technologies are used are shaped by, and to some extent shape, the contexts of research.
To ignore this contextualisation would, we believe, ignore some vital clues to understanding the current status of CAQDAS in geography and the ways in which the use of CAQDAS might develop in the future.

Each of us first encountered CAQDAS as fledgling researchers, conducting research towards our PhDs. We were all conducting research which included an element of intensive qualitative research. We all had a 'sense of a gap', feeling unsure of what we were supposed to do next. What should we do now with the qualitative data we had so laboriously collected through all the tribulations of interviews, groupwork, and ethnographic fieldwork? Each of us had a feeling of uncertainty, and a desire to maintain some rigour and order in our analysis. Amidst the chaos and disorder of transcripts and field notes we all felt the need for some way to order or systematise our analysis and interpretation, not only to systematise our thoughts but our practices—at least typing in codes or notes produced something tangible to show for a day wrestling with our materials. Such a 'product' was reassuring given the absence of structure in the PhD process. This sense of a gap came before our encounters with CAQDAS, but once we became aware of the availability of such packages they did seem to promise a degree of order.

Our introductions to CAQDAS were haphazard and informal: Mike Crang and Steve Hinchliffe stumbled across a package in their department; Suzy Reimer and Steve then became departmental neighbours elsewhere; during a visit back to his former department, Alan Hudson bumped into Mike and a colleague using different software. Our adoption of CAQDAS was not driven by any training programme or systematic search of the possibilities offered by different packages. Rather, adoption took place through chance encounters in social networks.

It would be fair to say that our views about CAQDAS now differ: from both our pre-CAQDAS days and between each of us. Some of us are more sceptical than others, in part because we used the software in different ways, adopting and adapting packages for our own requirements. The use of such software has helped us to cope with masses of qualitative material; to order the process of analysis; to juxtapose creatively different voices from the field; and to remember events and ideas which we might otherwise have forgotten. However, we are also concerned about the downsides of CAQDAS: particularly that it provides an illusory order; and that it encourages the researcher to collect more material rather than think more creatively with existing material. So, we remain somewhat ambivalent about CAQDAS. We are also more realistic in our expectations.
It is clear that such packages do not provide a miracle solution for the problems of qualitative research—the researcher still has to work out the appropriate level of abstraction, for instance—and many of the functions they offer are unlikely to be used by any one researcher.

The use we made of CAQDAS as individuals was very much shaped by the institutional context in which we found ourselves, and we, in turn, have shaped the institutional context. In one case no sooner had CAQDAS been stumbled upon than it was being incorporated as part of a mandatory 'research training' programme. Through our interest in and experiences with CAQDAS we have, largely unintentionally, contributed to the CAQDAS bandwagon. Somewhere down the line, and in differing institutions, being known as someone who uses software has become a not entirely welcome badge. For nonqualitative geographers it has become in some ways talismanic that this is what we do. If we have so far noted what our use of CAQDAS meant to us, it is clear it may have meant other things to other people. It is for these reasons that we now move to reflect on the ways in which the use of computers is constructed in the wider academy. This part of the paper links what we have said so far about the adoption and adaptation of software in research projects with the concerns that we raise in the second paper.

The social construction of a good thing
It is important to address the desires and fears that surround CAQDAS. We have already encountered the ways in which the programs provided a structure to which fledgling researchers could cling and the ways in which the practices required, coding of materials and entering codes, provided a tangibility to the process of interpretation. CAQDAS gave analysis both a sense of direction and a product. Moreover—reflected in the reactions of others—it is possible to tease out the fears and desires pinned on this software. For example, it is still not uncommon for us to encounter rank fear and scepticism from some qualitative researchers. This is not so much pure luddism as a fear that mechanisation will reduce all the interpretative scope of qualitative work—that the machines represent either an attempt to quantify the material (as concordance search software can), or that they automate the process in such a way as to deny the complexities and nuances in the materials. It should be clear we do not see these as necessary consequences. Even so, it is true that there is a desire in some quarters for a technique to make "respectable scientific data" out of fieldwork materials (Richards and Richards, 1991, page 40; compare Hesse-Beiber and Dupuis, 1995). Although the doubters often fear the computer as the Trojan horse of a shallow 'objective' science, some enthusiasts want the credibility to be gained from using that notion. We see the fear that the software will crudely automate analysis in the name of some spurious scientised ideal as misplaced—or at least reliant on a particular conception of the software or science. However, in the eyes of many (and as already noted), CAQDAS has become somewhat of a talisman—people without knowledge of the processes can now label us through the technique and the technique through the software. From the first moment Mike used The Ethnograph, he became familiar with hydrologists looking across from other terminals and saying "so that is your programme", his otherwise nebulous connection to their geography being fixed through the technology.
Although this labelling of a researcher through the software he or she uses may be irritating, in other ways it has formed one of the hopes of qualitative researchers and their institutions. Certainly, one of the attractions of CAQDAS is the legitimacy of an established system adopted from the broader social sciences. Equally the process of interpretation can now be boxed with the software and placed in a discrete sentence in research grant application forms. Instead of difficult theoretical explanation, research proposals can say simply "field materials will be interpreted using [for instance] HyperRESEARCH software". Equally, one point of resistance to teaching undergraduates qualitative methods, at one institution, was a result of it being seen as reducing the 'scientific' content of the course, which in turn had to be squared against the lesser amounts of money received for arts as opposed to science undergraduates. A physical geographer was quick to see that laboratory classes on CAQDAS could be 'scientific content'. Never mind that teaching a package may detract from thinking about the topic studied (Tallerico, 1991, page 282). CAQDAS has rapidly come to be seen as a good thing to have—even if it is never used as suggested. Meanwhile, discerning histories of science suggest how possessing specific technologies becomes the way in which access to scientific knowledge is controlled and spread. Thus departments have a stake in becoming centres which can supply this knowledge. It is not too hard to see that what is currently a 'bonus' on applications for research funds could quickly become obligatory in the incessant interinstitutional competition being promoted in Britain at the moment.

The adoption of this software also plays into some of the more deep-seated fears and desires of qualitative research. Some designers recount their motivation for developing software in terms of how manually coding material committed much valuable time to clerical work, while seeming to postpone the 'real' analysis (Richards and Richards, 1994, page 148). This seems to stem from a common view that coding is graft, hard labour, and neither as 'exciting' as the field nor truly academic. It is thus something willingly consigned to a black box. Such a view suggests that the work of coding is a time constraint to the possibilities of dealing with more material. In some senses we suspect this plays to a desire for 'validity' by getting more 'data'. A contrary view is one that stresses the conceptual thought in coding, that this is a period of developing ideas and making connections. Thus the developer of The Ethnograph terms the quest for expanding 'data' a computer "madness" (Seidel, 1991), and suggests that what is required are programs that enable us to think more clearly and deeply about our existing materials. That is, speeding up clerical processes and refining 'data management' should lead to more thought rather than filling the time with more material. It is quite clear then that The Ethnograph is designed to encourage the building of more complex systems of categories around a relatively small set of materials whereas Nud.ist leans towards the multiauthor, larger project. One of the central reasons for writing these papers, then, was a desire both to weaken some of the myths surrounding these programs and to help a balanced assessment of their relative merits. What we see as possible at the moment is a rapid ossifying of a random pattern of adoption.
Many institutions are currently launching research training courses or new master's courses, and are also looking at buying one or other CAQDAS package. We hope, therefore, that this has been a timely intervention for, at a time when there is a creative plurality of strategies for fieldwork and writing, it is possible to see this software as driving a new analytical orthodoxy (Coffey et al, 1996, paragraph 1.4).

Conclusions
Our conclusions of this first cut through CAQDAS and geography are necessarily tentative. They aim to clarify where we shall start a second piece reconceptualising how this software is being and can be deployed. We have been trying to show the polarities at work in present ideas, each feeding off its opposite: there is a fear of mechanisation, based on a particular view of what software entails; a promise of rigour based on a particular view of science; an institutional patterning based on serendipity yet aspiring to play on systematic appeals. Each reaction and strategy implies its opposite. We want to look for a way round this process. To do this we believe it is essential to look at—as Derrida might put it—the mis-semination of knowledge about these programs, about how they are used in ways that cannot be simply read from the manuals [indeed, which suggests how little the manuals are ever read (see Fielding and Lee, 1995)] and the social relations that make their use significant. We are convinced that greater attention to these issues will help to clarify the process of interpreting qualitative materials. We are, however, fully aware that this should not take the form of a new fetishism of the technical possibilities of ordering and storing data (Richards and Richards, 1991, page 53). It is easy to become bewitched by the abstract possibilities presented by CAQDAS, offering new functions, new configurations, and new analytic tools. Instead we suggest that productive engagement and understanding will emerge from the embedded uses of these technologies which we shall explore further in the second paper on this theme.

Acknowledgements. Parts of this paper are based on a Symposium on Computers and Qualitative Geography held at Durham University, July 1995 and sponsored by the Social and Cultural Geography Study Group; some parts were presented at the International Conference of Social Sciences and Information Technology at the University of Stirling, September 1995; the authors owe these audiences their gratitude for sharpening these ideas. We would like to thank the three referees for their amendments and suggestions. Any flaws remain the responsibility of the authors.

References
Agar M, 1986 Speaking of Ethnography (Sage, London)
Agar M, 1991, "The right brain strikes back", in Using Computers in Qualitative Research Eds N Fielding, R Lee (Sage, London) pp 181-194
Atkinson P, 1990 The Ethnographic Imagination (Routledge, London)
Coffey A, Holbrook B, Atkinson P, 1996, "Qualitative data analysis: technologies and representations" Sociological Research Online 1(1) http://www.socresonline.org.uk/socresonline/1/1/4.html
Cook I, Crang M, 1995 CATMOG 58, Doing Ethnographies (GeoBooks, Department of Environmental Sciences, University of East Anglia, Norwich)
Cook I, Crang M, 1995 CATMOG 58. Doing Ethnographies (GeoBooks, Department of Environmental Sciences, University of East Anglia, Norwich)
Cordingley E, 1991, "The upside and downside of hypertext tools: the KANT example", in Using Computers in Qualitative Research Eds N Fielding, R Lee (Sage, London) pp 164-180
Crang M, 1997, "Qualitative analysis techniques", in Methods in Human Geography Eds R Flowerdew, D Martin (Longman, Harlow, Essex) chapter 13, forthcoming
Dey I, 1993 Qualitative Data Analysis (Routledge, London)
Dey I, 1995, "Reducing fragmentation in qualitative research", in Computer-aided Qualitative Data Analysis: Theory, Methods and Practice Ed. U Kelle (Sage, London) pp 69-79
Fielding N, Lee R, 1995, "User's experience of qualitative data analysis software", in Computer-aided Qualitative Data Analysis: Theory, Methods and Practice Ed. U Kelle (Sage, London) pp 29-40
Gilbert D, 1995, "Between two cultures: geography, computing and the humanities" Ecumene 2 1-14
Glaser B, Strauss A, 1967 The Discovery of Grounded Theory: Strategies for Qualitative Research (Aldine, Chicago, IL)
Hesse-Biber S, Dupuis P, 1995, "Hypothesis testing in computer aided qualitative data analysis", in Computer-aided Qualitative Data Analysis: Theory, Methods and Practice Ed. U Kelle (Sage, London) pp 129-135
Hesse-Biber S, Dupuis P, Kinder T, 1991, "HyperRESEARCH: a computer programme for the analysis of qualitative data with an emphasis on hypothesis testing and multi-media analysis" Qualitative Sociology 14 289-306
Hinchliffe S J, Crang M A, Reimer S M, Hudson A C, 1997, "Software for qualitative research: 2. Some thoughts on 'aiding' analysis" Environment and Planning A 29 forthcoming
Huber G, 1995, "Qualitative hypothesis examination and theory building", in Computer-aided Qualitative Data Analysis: Theory, Methods and Practice Ed. U Kelle (Sage, London) pp 136-151
Landow G P, 1992 Hypertext: The Convergence of Contemporary Critical Theory and Technology (Johns Hopkins University Press, Baltimore, MD)
Latour B, 1987 Science in Action: How to Follow Scientists and Engineers through Society (Harvard University Press, Cambridge, MA)
Lonkila M, 1995, "Grounded theory as an emerging paradigm for computer-assisted qualitative data analysis", in Computer-aided Qualitative Data Analysis: Theory, Methods and Practice Ed. U Kelle (Sage, London) pp 41-51
Morris M, 1988a, "The banality of cultural studies" Discourse 10(2) 3-29
Morris M, 1988b, "Things to do with shopping centres", in Grafts: Feminist Cultural Criticism Ed. S Sheridan (Verso, London) pp 193-225
Muhr T, 1991, "Atlas/ti—a prototype for the support of interpretation" Qualitative Sociology 14 349-371
Muhr T, 1995 Online Manual Demo Version LIE February 1995, diskette, Scientific Software Development, Trautenstrasse 12, D-10717 Berlin
Openshaw S, 1991, "A view of the GIS crisis in geography, or, using GIS to put Humpty-Dumpty back together again" Environment and Planning A 23 621-628
Openshaw S, 1995, "Human systems modelling as a new grand challenge area in science" Environment and Planning A 27 159-164
Openshaw S, 1996, "Fuzzy logic as a new scientific paradigm for doing geography" Environment and Planning A 28 761-768
Pahl R E, 1984 Divisions of Labour (Basil Blackwell, Oxford)
Richards L, Richards T, 1991, "The transformation of qualitative methods: computational paradigms and research process", in Using Computers in Qualitative Research Eds N Fielding, R Lee (Sage, London) pp 38-53
Richards L, Richards T, 1994, "From filing cabinet to computer", in Analyzing Qualitative Data Eds A Bryman, R Burgess (Routledge, London) pp 146-172
Seidel J, 1991, "Method and madness in computer applications", in Using Computers in Qualitative Research Eds N Fielding, R Lee (Sage, London) pp 107-116
Seidel J, Clark J, 1984, "The Ethnograph: a computer program for the analysis of qualitative data" Qualitative Sociology 7 110-125
Shapin S, Schaffer S, 1985 Leviathan and the Air-pump: Hobbes, Boyle and the Experimental Life (Princeton University Press, Princeton, NJ)
Strauss A, 1987 Qualitative Data Analysis for Social Scientists (Cambridge University Press, Cambridge)
Tallerico M, 1991, "Applications of qualitative analysis software: a view from the field" Qualitative Sociology 14 275-285
Tesch R, 1991 Qualitative Techniques and Software Tools (Falmer Press, Lewes, Sussex)
Weitzman E, Miles M, 1995 Computer Programmes for Qualitative Data Analysis (Sage, London)

APPENDIX
Technical specifications for software
The following table is an abbreviated guide to some of the different technical details and possibilities of some of the software packages available for qualitative analysis. Readers seeking more detailed cross-comparisons of this sort should refer to Weitzman and Miles (1995), which tabulates 24 programs by 75 categories. Such a large table, while needed to account for the different features available, risks being too complex to comprehend. There are then two cautions necessary. First, any table—even Weitzman and Miles's—is incomplete, because software is developing all the time. Currently we know both of new versions being developed of programs listed here that may have very different characteristics, and of new software altogether being written. Second, we do not believe that most software is or can be chosen as though from a recipe book to preordered specifications. Therefore, we cannot say "to perform function X, use software Y"; the possible permutations are too complex. In addition, we are wary of seemingly definitive lists which can so easily conceal an agenda behind their 'neutral' tabulations. We hope this indicative list may provide a starting point for readers to think about different possibilities.

Table A1. Guide to software packages (software: designer and/or distributor).
Sonar: Virginia Systems Software Services Inc., 5509 West Bay Court, Midlothian, VA 23112, USA
Textbase Alpha: B Summerland, distributed by Qualitative Research Management, 73-425 Hilltop Road, Desert Hot Springs, CA, USA
Kwalitan: V Peters, Postbus 9104, 6500 HE Nijmegen, The Netherlands
HyperRESEARCH: ResearchWare Inc., PO Box 1258, Randolph, MA 02368-1258, USA
Hypersoft: Ian Dey, Department of Social Policy, University of Edinburgh, AFB George Square, Edinburgh EH8 9LL, Scotland
Nud.ist: Qualitative Solutions and Research, La Trobe University, Melbourne, Australia, distributed by Sage Publishing
Atlas/ti: T Muhr, Scientific Software Development, Trautenstrasse 12, D-10717 Berlin, Germany
The Ethnograph: Qualis Research Associates, PO Box 2070, Amherst, MA 01004, USA
HyperQual2: R Padilla, 3327 N Dakota, Chandler, AZ 85224, USA
MetaDesign: Meta Software Corporation, 125 Cambridge Park Drive, Cambridge, MA 02140, USA
[The remaining columns of the table rate each package on operating system (a), structured data (b), statistical output (c), KWIC (d), coding or retrieval (e), memoing (f), concept pictures (g), and hypertext (h); the individual ratings are not reproduced here.]
a Which platform (DOS, Mac, Windows) is the package designed for? This does not take into account the increasing ability of Macintoshes to emulate PCs.
b Can the package work with prestructured data, for example, data taken from questionnaires, as well as unstructured material?
c Can the package produce statistical reports? Mostly these reports work with prestructured data, but they can be developed to work with unstructured data, to assess the statistical relationships between codes, for instance (how often were 'sovereignty' and 'place' assigned to the same chunk of text?). Most programs which have this facility are SPSS compatible. HyperRESEARCH does this under a different rubric, a 'hypothesis testing' routine.
d Can the package retrieve key words in context? In fact, all the packages can do this. Our assessment refers to the speed and flexibility of this function. The rating ◑ refers to a limited capacity, whereas ● indicates a high-performance feature.
e Does the package allow the user to tag sections of text with user-defined codes and to organise searches using these codes?
f Does the package allow the user to make notes which are electronically attached to chunks of text during the course of analysis and which can be used later to develop working ideas?
g Can the package produce a graphical representation of the relationship between codes?
h Can the package link different pieces of text using hyperlinks?
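To make the facilities described in notes c, d, and e more concrete, the following is a minimal sketch in Python of keyword-in-context retrieval and of counting how often pairs of codes are assigned to the same chunk of text. It is an illustration of the general idea only, not code from any of the packages listed; the function names, chunk identifiers, and sample data are hypothetical.

from collections import defaultdict
import itertools
import re

def kwic(text, keyword, window=30):
    """Return each occurrence of `keyword` with `window` characters of context."""
    hits = []
    for match in re.finditer(re.escape(keyword), text, flags=re.IGNORECASE):
        start = max(match.start() - window, 0)
        end = min(match.end() + window, len(text))
        hits.append(text[start:end])
    return hits

def code_cooccurrence(coded_chunks):
    """Count how often each pair of codes is attached to the same chunk.

    `coded_chunks` maps a chunk identifier to the set of codes assigned to it.
    """
    counts = defaultdict(int)
    for codes in coded_chunks.values():
        for pair in itertools.combinations(sorted(codes), 2):
            counts[pair] += 1
    return dict(counts)

if __name__ == "__main__":
    interview = ("The question of sovereignty kept returning to a sense of "
                 "place, and of place as something more than territory.")
    print(kwic(interview, "place"))

    # Hypothetical coding of three interview chunks.
    chunks = {
        "int1:chunk1": {"sovereignty", "place"},
        "int1:chunk2": {"place", "identity"},
        "int2:chunk1": {"sovereignty", "place"},
    }
    # e.g. ('place', 'sovereignty') -> 2, echoing the example question in note c.
    print(code_cooccurrence(chunks))

The point of the sketch is simply that both facilities are clerical conveniences built over the same underlying structure of coded chunks of text; the interpretive work of deciding what counts as a chunk and which codes to assign remains with the researcher.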