
Devil in the detail: using a pupil questionnaire survey in an evaluation of out-of-school classes for gifted and talented children

Mike Lambert*
University of Wolverhampton, Walsall, UK

The use of questionnaires to evaluate educational initiatives is widespread, but often problematic. This paper examines four aspects of an evaluation survey carried out with very able pupils attending out-of-school classes: ethics, design, bias and interpretation. There is a particular focus on the interpretation of pupils’ answers to open questions. Conclusions are drawn from this analysis which will help teachers and others to take a careful and critical approach to their use of questionnaires in educational evaluation.

Keywords: evaluation; questionnaires; analysis; gifted and talented; out-of-school

Context

I’m a haunter of the devil, but I hunger after God. (Gamaliel Bradford, ‘Hunger’)

The growth of out-of-school classes for very able pupils, called ‘advanced learning centres’, has been encouraged by the Department for Education and Skills through its ‘Excellence in Cities’ programme. The network of centres now attracts over 1,500 children a year, most (but not all) near the end of their primary schooling. The pupils attend these ‘ALCs’ on Saturday mornings or after school, completing courses in advanced mathematics, English, ICT or a range of other subjects.

A national evaluation of the centres was recently completed (Lambert 2006). The evaluation methodology included a pupil survey using a four-page questionnaire. Nearly 800 pupils responded to this survey. The survey process was monitored using a research journal, which tracked the design of the questionnaire and the collation and analysis of pupils’ responses. This paper examines some of the dilemmas deliberated in this journal and considers ways in which some aspects could have destabilised outcomes.

Questionnaires

Questionnaires attract a bad press. They are prone to distortion and are inappropriate for understanding human beings (Pring 2000). They avoid problems of context, discourse and meaning and may generate large amounts of data of dubious value; ambiguities and misunderstandings in the questions may not be detected (Robson 2002). ‘Self-reports on behaviour are not always reliable’ (Muijs 2004, 45), and respondents are susceptible to the temptation to give the ‘socially desirable response’ (52). They may reflect transitory behaviours and feelings (Tymms 1999). Complex realities of children’s lives are reduced to scores on instruments and questionnaires, counts of individual behaviours, behaviours in contrived settings: ‘observing children and coming away with nothing but numbers ... has told us little about the day-to-day interactions of children’ (Graue and Walsh 1998, 4).

My own experiences added to this uncomplimentary picture. I found the instrument fixed and immovable. It had none of the flexibility available to interviewers and observers, who can change and refine their approach during the research process. I could not see how respondents were responding – were they taking it seriously? – or how the questionnaire was being administered – were the administrators taking it seriously? I became conscious too that ‘the answer [to questions] … reflects a myriad of impinging forces, only some of which may be readily apparent’ (Peterson 2000, 10). The open questions, which required pupils to write one or more sentences in response, troubled me most.
I was struck by the triple process – pupils were interpreting experiences in their own minds, and my questionnaire questions, and I was interpreting their answers. These were three unsteady stepping-stones, and the investigation could fall off any of them. There was an inevitable reductionism in turning the qualitative data of these responses into categories determined by myself and counting their occurrence (Spencer, Ritchie and O’Connor 2003). I realised too that my presumptions and understandings as an adult and educator did not always match those of children and the ‘being educated’: ‘As adults we bring to our encounters with children a particular package of attitudes and feelings’ (Greene and Hill 2005, 8).

For all this negativity, there are positive sides to questionnaire surveys too. Muijs (2004) found them well suited for descriptive studies or for looking at relationships between variables, particularly for canvassing opinions and feelings. Punch (2003) highlighted their substantive and accumulative contribution to knowledge, their appropriateness when time and other resources are limited, and their common use as an organised and systematic way to collect information, meaning that they can be well understood by those administering and responding to them. Robson (2002) acknowledged that they could be a straightforward and low-cost approach to the study of attitudes, values, beliefs and motives. Indeed I found the survey process was manageable in relation to my normal full-time work and the practical need for the most part to investigate at a distance from the activity being evaluated. As a relatively new researcher too, the approach proved a useful entry to research methods as a whole and to handling numerical, quantitative methods in particular.

I found that children enjoyed the questionnaires, and the methodology met the ‘desirability of matching child to method’ (Greene and Hill 2005, 17). To a certain extent at least, the survey acknowledged the idea that children have voices, they express opinions, observe and judge (Scott 2000). Most significantly for me, the process was useful because it engaged the wide range of people whose activity was being evaluated, far more so than did the interviews and observation which were also part of the investigation. The whole network of centres got to know about the evaluation and had the opportunity to be involved in it – coordinators enabling it, teachers administering it, pupils responding to it. Many showed concern that things should be done correctly and comprehensively. They wanted to know the outcomes – for their own centre, and how these compared nationally. Furthermore, the opinions expressed in the questionnaires had a direct permanence, in the respondents’ own handwriting – primary evidence which could be scrutinised, argued over and interpreted. I had lots of it, and analysis highlighted commonalities and nuances which would not have been evident from analysis of a smaller number.

My questionnaire

The key issues for my evaluation were drawn from an earlier analysis of reports drawn up by centre personnel themselves (Lambert 2004) and from concerns of the network’s steering committee. They related to equality of access to the centre network; pupils’ enjoyment of, engagement with and learning from the centre sessions; and the extent to which pupils’ centre learning was evident in the results of their end-of-Key Stage 2 Standard Assessment Tests (SATs).
Through the survey and its four-page questionnaire, pupils themselves had a role in answering some of these questions. This approach paid regard to children’s views about their education (Department for Education and Skills 2004), drew on the experiences of those most closely involved in the learning processes of the centres, and acknowledged the kind of gap identified by Gentry, Rizza and Owen (2002) between the perceptions of pupils and their teachers about educational activity. Nevertheless I had to recognise that the research agenda was my own, designed and interpreted ‘in terms of adult discourses about children’s development’ (Woodhead and Faulkner 2000, 12).

The questionnaire had three main parts. The first – Sections A, B, C and D – asked questions about pupils’ perceptions of their work and learning at their out-of-school centre. The second part – Section E – asked for details about the pupils themselves – their age, gender, ethnicity and so on. The last part – Section F – was a request for permission from the pupils for me to approach their school for their SATs and other results.

Most questions in the questionnaire were ‘closed’ statements. Examples were: ‘I look forward to sessions’; ‘The work I do at the centre is easy’; ‘My work at the centre helps me to prepare for my SATs’. Pupils were asked to estimate, by ticking a box, the extent to which the statement occurred: ‘Always’; ‘Usually’; ‘Sometimes’; ‘Rarely’; or ‘Never’. Section D questions were more ‘open’. There were four of these questions – about pupils’ work at the centre; about its level of difficulty; about its relation to work at school; and an opportunity to write anything else which the pupil might wish to say.

In research terms this was a simple ‘mixed methodology’, where the researcher collects two kinds of data, quantitative and qualitative. The quantitative data – numerical data to be counted – came from the tick-box sections; the qualitative data – written text containing perceptions and ideas – came from the open ‘D’ questions. Cresswell (2003) called this simultaneous approach a ‘concurrent nested strategy’ (218). A decision which needs to be made with this dual approach is how to analyse both kinds of outcome in an integrated way, so that outcomes from each can be compared and synthesised with each other. For this I decided to code the qualitative data according to a self-designed framework of categories, giving particular ideas particular codes, then counting the incidence of each idea. As a result I ended up with quantitative data from all sections of the questionnaire. I then counted frequencies of responses from all sections using the data-analysis computer program SPSS©, and cross-tabulated these frequencies according to a range of variables.
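To make this coding-and-counting step concrete, here is a minimal sketch of the idea in Python with pandas. It is an illustration only: the evaluation’s analysis was carried out in SPSS, and the category codes, column names and sample data below are all hypothetical.

```python
# Illustrative sketch (not the author's SPSS procedure): code open
# responses against a category framework, then count and cross-tabulate.
import pandas as pd

# A small, hypothetical coding frame: code -> description.
CODING_FRAME = {
    "ENJ_SUBJECT": "enjoyment of the subject of the classes",
    "ENJ_FRIENDS": "enjoyment of meeting other pupils",
    "USE_ICT_SKILLS": "finds ICT skills useful",
    "DIFF_UNDERSTAND": "difficulty understanding what to do",
}

# Each coded response: respondent details plus one or more codes
# (a complex response could attract several codings).
responses = pd.DataFrame([
    {"pupil": 1, "gender": "F", "subject": "English",
     "codes": ["ENJ_SUBJECT", "ENJ_FRIENDS"]},
    {"pupil": 2, "gender": "M", "subject": "ICT",
     "codes": ["USE_ICT_SKILLS"]},
    {"pupil": 3, "gender": "F", "subject": "ICT",
     "codes": ["USE_ICT_SKILLS", "DIFF_UNDERSTAND"]},
])

# One row per (pupil, code), so each coded idea can be counted separately.
coded = responses.explode("codes")

# Frequency of each coded idea across the sample.
print(coded["codes"].value_counts())

# Cross-tabulation of codes against a background variable, analogous
# to the cross-tabulations run across the questionnaire's variables.
print(pd.crosstab(coded["codes"], coded["gender"]))
```

Run over real data, the same two steps – separating out multi-coded responses, then counting and cross-tabulating – yield the frequencies by gender, subject, attendance or other variables referred to later in this paper.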
This paper examines selectively four problematic aspects of survey detail: ethics, design, bias and interpretation. Lewis and Lindsay (2000) have suggested that questionnaire surveys receive relatively little attention in the literature, particularly in relation to children. This paper may address that deficiency a little.

Ethics

Ethical considerations require that research should not cause harm to those involved and that everything should be done with their ‘informed consent’. This entails allowing individuals to make a knowledgeable choice as to whether they wish to participate, by giving full information about the investigation and allowing them to volunteer freely to take part (Jones and Tannock 2000).

Several factors complicated the apparently straightforward process. The first was that I was one or two steps removed from my respondents, the pupils. My communications about the questionnaire were with what Oliver (2003, 44) and others have called their ‘gatekeepers’: centre coordinators and teachers who liaised with the pupils. These gatekeepers had to be informed and encouraged to allow pupil-respondents to consent or decline without coercion. Yet educational culture is generally that pupils do what teachers ask them to do – I wondered to what extent they could then ‘freely volunteer’ to participate.

The second complication was deciding how and from whom consent needed to be sought. I wrote a letter to all pupils, informing them about the questionnaire and how their responses would be treated. This letter presumed their consent unless they actively informed their teacher or myself that they did not wish to take part – none took this option. I considered whether parents and carers needed to give their ‘informed consent’ too. The literature suggested that for older pupils at least they did not: ‘A parent cannot consent to research on behalf of a competent child’ (Masson 2000, 39). Nevertheless, choosing discretion ahead of valour, I wrote also to all parents and carers, offering them the chance to withdraw their child from the process, if that is what they wished – again none did so. The result was a pack of materials sent to each centre, with the request to send letters to children and their parents a week ahead of when the questionnaire would be completed. I suspect that the pack and the task were not always welcomed by very busy centre teachers, although most seemed to take the process efficiently in their stride.

A third complication related to confidentiality. I had to ask for pupils’ names on the questionnaire in order subsequently to approach their primary schools for their SATs results. I provided an envelope for each pupil, with instructions to seal their completed questionnaire inside. Nevertheless any overly inquisitive teacher could fairly easily find out what had been written. Because of this possibility, I could not assure pupils that their responses would be fully confidential, only that ‘your teachers have been asked not to look at your completed forms’. This may of course have affected what they wrote, although some frank comments suggested that it was not a worry for all. A related concern was the one child who indicated in responses some distress at a particular piece of work – I could not compromise the assumption of confidentiality by feeding this back to the centre involved.

Design

The design of the questionnaire aimed to ensure as far as possible that pupils would respond clearly, accurately and informatively. The five-point ‘Always’ – ‘Never’ rating scale was intended to give pupils a clear reference to their regular and recurrent centre sessions. Alternate shaded and unshaded lines for each question (as are often used in the publication of football league tables) lessened a tendency for pupils to tick two responses in one line then to omit the next. The question order reflected the principal themes of the survey, with the most important questions – Section C – in the middle, as recommended by Peterson (2000).
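The tick-box answers on this five-point scale become countable data once the points are given an order. A minimal sketch, assuming a pandas encoding (again purely illustrative – the evaluation itself used SPSS, and the example statement and answers are hypothetical):

```python
# Hedged sketch: preparing 'Always'-'Never' tick-box answers for counting.
import pandas as pd

# The five-point scale, from least to most frequent.
SCALE = ["Never", "Rarely", "Sometimes", "Usually", "Always"]

# Hypothetical answers to one closed statement,
# e.g. 'I look forward to sessions'.
answers = pd.Series(["Always", "Usually", "Sometimes", "Usually", "Always"])

# An ordered categorical keeps the scale's order in any table or summary;
# an answer outside the five points would become missing (NaN) and so
# stand out during data entry checks.
encoded = pd.Series(pd.Categorical(answers, categories=SCALE, ordered=True))
print(encoded.value_counts(sort=False))  # counts listed in scale order
```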
Placing the open questions in Section D after the closed questions in Sections A to C allowed pupils to get involved in the questionnaire before writing at greater length. However, this did lead to ‘response spread’, with the earlier closed questions clearly influencing the ideas offered in answer to the open questions. The double check on the date of completion – in the questionnaire itself and at its end – proved useful where one was unclear, as did the request for the pupils’ date of birth and for their school year. A request for their signature at the end was greatly appreciated by pupils – some were impressed that they, not their parents, were being asked to provide this.

The youngest respondent was in Year 1 (she completed her questionnaire independently); the oldest was in Year 10. This wide age-range called for a degree of simplicity and straightforwardness in the questions, and a direct relation to children’s experiences and language. My piloting of the questionnaire with two groups highlighted the pitfalls, and the need to use ideas and words with which respondents are already familiar (Griesel, Swart-Kruger and Chawla 2004). For instance, it proved problematical asking pupils about free school meals (interpreted by some as taking sandwiches) or disability (did this include wearing glasses? asked one). Some adult phraseology sounded awkward to pupils – the phrase in the pilot ‘what do you find valuable?’ was abandoned. My request for pupils’ ‘full name’ resulted in some four- or five-name responses – asking for ‘your name’ was sufficient.

Of the questions which remained in the final version, it was the open ‘D’ questions which presented the most difficulties. They started awkwardly, carelessly perhaps, with the first question, D1: ‘What do you enjoy or find useful?’ This phraseology was an attempt to open dialogue, to paint the overall picture. It is probably the way many teachers talk. But it was essentially a double-barrelled question, one of the faults which Peterson (2000) warned against. Many pupils treated it as such and answered both parts:

I like looking at each other’s webpages I also find this useful.

I enjoy the challenge and the things I learn are useful.

It meant that some responses were ambiguous:

Making website.

Did this mean that the child enjoyed it or found it useful? Other responses answered one part of the question, most commonly indicating enjoyment, less commonly indicating benefit. I wondered if this choice indicated some kind of priority in respondents’ minds:

I enjoy meeting new people at the ALC.

I find using computers useful.

I was left wondering if this question clarified or confused. Its duality helped the manageability of the questionnaire, and meant that pupils gave a large number of ideas in response. Yet this did not balance (as recommended by Rubin and Rubin 2005) with its more concise counterpart, Question D2: ‘What did you find difficult?’, to which few pupils offered more than one suggestion. The ambiguity of some responses which the question caused made it difficult to separate out perceptions of enjoyment on the one hand and benefit on the other. I ended up analysing under the general concept of ‘appreciation’, and gauging the extent and focus of this more general concept.

There were similar tensions in Question D3. This asked: ‘How is your ALC work similar to what you do at school?
How is it different?’ Some pupils felt obliged, sensibly perhaps, to provide an answer to both questions:

It is the same because it’s ICT, it’s different because it is harder.

Others took it as an indication to state more directly the extent either of the difference or of the similarity they perceived:

We do completely different subjects in maths at my school.

It’s basically the same.

In a few responses it was not clear if the child was indicating an aspect which was similar or different. Again I was left wondering if I should have provided two separate questions rather than putting both elements in one.

Bias

I was conscious of a number of biases in the sample of pupils responding to the survey. Most crucially, as in most such evaluations, the survey only covered pupils who had attended their centre through to the end. Those who had dropped out were missing, as were some of those with less regular attendance who might have been absent on the day their questionnaire was completed. The sample was therefore greatly biased in favour of regular attenders and to the disadvantage of those who, for whatever reason, had dropped out of the classes along the way.

Within the questionnaire itself, the open D questions were the area most affected by other aspects of bias. The open questions favoured articulate pupils. A complex response attracted up to five codings, simpler ones just one. Pupils attending English centres wrote more than pupils attending centres for other subjects; girls wrote more than boys. Older pupils generally (but not always) wrote with greater complexity than younger pupils, although there were also considerable differences within the same age groups. Those who claimed regular centre attendance wrote more than those whose attendance was less frequent. It seemed too that pupils may have had differing times in which to complete their questionnaires. Limited time may have been a reason for shorter responses, such as ‘nothing’ or ‘everything’; pupils with more time may have been able to write more fully.

The questions themselves seemed to have biases too. The implication of Questions D1 and D2 was that there would be things which pupils enjoyed (D1) and things they found difficult (D2), but not that there might be things they did not enjoy – there was no open question about this. Question D1 itself – ‘What do you enjoy or find useful?’ – linked enjoyment with learning, and perhaps biased responses towards making this link too. I seemed here and elsewhere in the questionnaire to be leading respondents along some pre-determined paths.

Interpretation

Cresswell (2003) highlighted the importance of language in research as a direct instrument of measurement and emphasised how terms must be applied uniformly and consistently. Indeed, most difficulties came with interpretation of responses, particularly again to the Section D questions, and sometimes awkward decisions needed to be made to allocate codes to what pupils had written. Consultation with colleagues on these proved essential in providing objective eyes and a measure of triangulation in this decision-making. These are some areas of difficulty encountered in interpretation of responses:

(1) Sometimes responses were simply ambiguous or unclear:

I like the break times that’s it.

Coding enjoyment of break-time was straightforward, but what did ‘that’s it’ mean?
Was it that she only enjoyed break-time, or (with the stress on ‘that’s’) that break-time was what she enjoyed, though she may have enjoyed other things as well?

I enjoy learning new things and being able to use the things I have learned at school.

(2) Specific words created problems:

Being able to do more things.

Did ‘able’ mean having the capability to do more, or being allowed to do more?

It is quite different to school because we never make webpages.

Did ‘quite’ mean a little or completely?

The work we do is very different and always enjoyable.

Did ‘different’ mean varied, or new, or not the same as school?

(3) Sometimes responses were clear but needed interpretation for coding:

Drama rules!

I coded this as enjoyment of the subject of the classes.

It is different because at school we mainly do just drawing.

I interpreted this as meaning that the work at the centre in the subject (art) had greater variety than that at school.

Write when you feel OK, don’t write if you can’t get in the right mood.

I coded this as having freedom and choice.

(4) It sometimes seemed that the more interesting the response, the more difficult it was to find an adequate coding:

Sometimes I find it difficult when the teachs say dont do that but I now they whont to help.

I coded this as having difficulty understanding what to do. But this was a creative arts centre – the coding seemed limiting and perhaps inappropriate in relation to the child’s activity.

When the staff say really hard words that I don’t understand when I ask them what it means, they make it even more difficult.

The coding allocated related to difficulty understanding the work, yet this seemed insufficient for this important perception.

(5) Sometimes two or more elements within one response could be coded individually, but causal and other links between elements were lost:

The teachers are always kind and friendly and that makes me want to carry on ALC.

Nothing is difficult because we have got people to help us.

The lessons with Mr. X are difficult but I learn more.

Codings were given for both elements in each of these responses, but the relationship between them was not captured in the coding system.

(6) In some cases there was tension between different aspects of the same response:

It is a lot different because at school we don’t have Microsoft office.

The child’s assertion that it is ‘a lot different’ may only be partially justified by citing just the use of a different computer programme.

[I enjoyed] Being able to get help and doing it yourself.

One part of the response may be contradicted by the other.

This process of interpretation was a balancing act, especially when analysing in a numerical way essentially open, qualitative data. It had to cover a range of responses, from those which were simple, factual and one-dimensional, to those which were complex, multi-dimensional and more creative. The coding framework needed to be specific enough to identify differences, but general enough to give the overall picture – a trade-off between detail and the ability to generalise highlighted by Scott and Usher (1999). The point of balance needed to relate to the evaluation’s aims and to the desired specificity of its outcomes.

Looking at the data as a whole, I was conscious of how the coding categories which I had designed themselves influenced the quantitative outcomes. When using very specific criteria, the number of responses was small and their cumulative voice was weak. More general criteria for a category, or the grouping of several criteria under a single coding, meant that numbers were greater, and that those perceptions were more evident in the outcomes of this part of the evaluation. Invariably too there was a temptation – not always satisfied – to find a home for each element of each response within the coding system. For standard, commonly-expressed responses this was relatively simple. For the unusual, complex, striking or idiosyncratic response, the match to a category was less straightforward. The coding system hid the complexity of thought, and carried the constant risk of misinterpretation. The common responses governed the outcomes, as one would expect, but the less common or the difficult-to-analyse lost impact in the evaluation’s final outcomes.
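The effect of this granularity choice is easy to demonstrate. A hedged sketch, reusing the hypothetical codes from the earlier example and rolling several specific codes up into two broader categories:

```python
# Illustrative sketch of the granularity trade-off: grouping several
# specific codes under one broader coding raises the counts for that
# category. All code names and data here are hypothetical.
import pandas as pd

# Hypothetical rollup: specific code -> broader category.
ROLLUP = {
    "ENJ_SUBJECT": "APPRECIATION",
    "ENJ_FRIENDS": "APPRECIATION",
    "USE_ICT_SKILLS": "APPRECIATION",
    "DIFF_HARD_WORDS": "DIFFICULTY",
    "DIFF_UNDERSTAND": "DIFFICULTY",
}

# Hypothetical coded elements drawn from open responses.
codes = pd.Series(["ENJ_SUBJECT", "ENJ_FRIENDS", "USE_ICT_SKILLS",
                   "DIFF_HARD_WORDS", "ENJ_SUBJECT"])

# Under the specific frame: many small categories, each with a weak
# cumulative voice in the reported outcomes.
print(codes.value_counts())

# Under the broader frame: fewer, larger categories, more visible in
# the evaluation's final outcomes.
print(codes.map(ROLLUP).value_counts())
```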
Conclusion

What advice can be drawn from the survey experience?

In terms of ethics, establishing credibility is crucial. Materials need to be clear, concise and well presented. Procedures must be realistic and show appreciation of potential difficulties and concerns. The researcher should be always open to direct contact from any source – ‘gatekeeper’, pupil or parent – and indeed should invite contact in every communication. When administering questionnaires oneself, one needs to behave appropriately, respecting the teacher’s authority and being sensitive about interacting with pupils themselves.

In terms of design, a key aspect is to try everything out beforehand – not just completion of the questionnaire itself, but the whole set of procedures associated with it: the sending of materials, the administration of questionnaires, the subsequent analysis of data. This is not just a case of ironing out difficulties, but also a process of improving the research approach so that better data and more valuable findings result.

When recognising bias, I learnt a great deal by entering data on SPSS myself, and not delegating the task to others. Much is gained from having this close personal familiarity with the data. Analysis of responses across a large sample exposes the distinctions and variations of which one is only partially aware when designing and piloting the questionnaire. This acquaintance with the data helps the process of interpretation also. Not that it makes every decision an easy one – on the contrary, it highlights new meanings, unexpected tensions and more problematic dilemmas, all of which would remain hidden with less assiduous contact. However, this familiarity allows decisions to be made in a more informed, balanced and critical way.

Questionnaires as a research instrument can appear deceptively simple. Behind their inviting exterior, however, lies the same richness of complexity and dilemma as with other approaches to investigation and evaluation. In research we are looking for some kind of ideal, some kind of reality or truth. For the survey researcher, engaged in this quest, it is indeed in the detail where the devil lies.

References

Cresswell, J.W. 2003. Research design: qualitative, quantitative and mixed methods approaches. 2nd ed. London: Sage.

Department for Education and Skills. 2004. Every Child Matters: change for children. Nottingham: Department for Education and Skills.

Gentry, M., M.G. Rizza, and S.V. Owen. 2002. Examining perceptions of challenge and choice in classrooms: The relationship between teachers and their students and comparisons between gifted students and other students. Gifted Child Quarterly 46, no. 2: 145–55.
Graue, M.E., and D.J. Walsh. 1998. Studying children in context: theories, methods and ethics. London: Sage.

Greene, S., and M. Hill. 2005. Researching children’s experience: methods and methodological issues. In Researching children’s experience: approaches and methods, eds. S. Greene, and D. Hogan, 1–21. London: Sage.

Griesel, D., J. Swart-Kruger, and L. Chawla. 2004. Children in South Africa can make a difference: an assessment of ‘Growing Up in Cities’ in Johannesburg. In The reality of research with children and young children, eds. V. Lewis, M. Kellett, C. Robinson, S. Fraser, and S. Ding, 277–301. London: Sage.

Jones, C., and J. Tannock. 2000. A matter of life and death: A reflective account of two examples of practitioner research into children’s understanding and experience of death and bereavement. In Researching children’s perspectives, eds. A. Lewis, and G. Lindsay, 86–97. Buckingham: Open University Press.

Lambert, M. 2004. The impact on learning of out-of-school ‘advanced learning centres’ for gifted and talented pupils: A meta-evaluation for the DfES and the National Primary Trust. Wolverhampton: University of Wolverhampton.

——. 2006. Evaluation of ‘advanced learning centres’ for gifted and talented pupils. Research Report 742. London: Department for Education and Skills.

Lewis, A., and G. Lindsay. 2000. Emerging issues. In Researching children’s perspectives, eds. A. Lewis, and G. Lindsay, 189–97. Buckingham: Open University Press.

Masson, J. 2000. Researching children’s perspectives: legal issues. In Researching children’s perspectives, eds. A. Lewis, and G. Lindsay, 34–45. Buckingham: Open University Press.

Muijs, D. 2004. Doing quantitative research in education with SPSS. London: Sage.

Oliver, P. 2003. The student’s guide to research ethics. Maidenhead: Open University Press.

Peterson, R.A. 2000. Constructing effective questionnaires. London: Sage.

Pring, R. 2000. Philosophy of educational research. London: Continuum.

Punch, K.F. 2003. Survey research: The basics. London: Sage.

Robson, C. 2002. Real world research: a resource for social scientists and practitioner-researchers. 2nd ed. Oxford: Blackwell.

Rubin, H.J., and I.S. Rubin. 2005. Qualitative interviewing: the art of hearing data. 2nd ed. London: Sage.

Scott, D., and R. Usher. 1999. Researching education: data, methods and theory in educational enquiry. London: Cassell.

Scott, J. 2000. Children as respondents: the challenge for quantitative methods. In Research with children: perspectives and practices, eds. P. Christensen, and A. James, 98–119. London: Routledge.

Spencer, L., J. Ritchie, and W. O’Connor. 2003. Analysis: practices, principles and processes. In Qualitative research practice: a guide for social science students and researchers, eds. J. Ritchie, and J. Lewis, 199–218. London: Sage.

Tymms, P. 1999. Baseline assessment and monitoring in primary schools: achievements, attitudes and value-added indicators. London: David Fulton.

Woodhead, M., and D. Faulkner. 2000. Subjects, objects or participants? Dilemmas of psychological research with children. In Research with children: perspectives and practices, eds. P. Christensen, and A. James, 9–35. London: Routledge.

___________

This is an electronic version of an article published in Education 3-13. This journal is available online at: www.tandfonline.com. The full reference for the article is: Lambert, M.
(2008) Devil in the detail: using a pupil questionnaire survey in an evaluation of out-of-school classes for gifted and talented children. Education 3-13, 36(1), February 2008, 69–78. Its URL is: http://www.tandfonline.com/openurl?genre=article&issn=03004279&volume=36&issue=1&spage=69

*Email: m.lambert@wlv.ac.uk
Profile: http://www.wlv.ac.uk/default.aspx?page=13209