Article

Research Use in School District Central Office Decision Making: A Case Study

Elizabeth N. Farley-Ripple

Educational Management Administration & Leadership
40(6) 786–806
© The Author(s) 2012
Reprints and permission: sagepub.co.uk/journalsPermissions.nav
DOI: 10.1177/1741143212456912
emal.sagepub.com
Abstract
The current educational policy climate in the USA places immense pressure on school district
central offices to use evidence to inform their decisions in order to improve student learning. In
light of both the expectations of evidence-based decision making and the significance of central
offices in supporting teaching and learning, there is remarkably little understanding of whether,
how and why central office decision makers use research evidence to support educational decisions.
Through an embedded case study of Hamilton School District and three central office decisions,
this research examines the role of research in central office decisions, focusing on how research is
used, what research resources are used and the factors that influence use. Evidence of limited
instrumental and political uses of research in comparison to conceptual and symbolic use,
preferences for practitioner-oriented resources, and the importance of research attributes,
organizational context and culture, and decision-maker characteristics are presented. Findings suggest a need for strategies to improve instrumental use, including reconsidering the production and
dissemination of research, facilitating the flow of knowledge within the central office, and further
examination of conceptual uses of research.
Keywords
central office, evidence-based decision-making, research use, school districts
Introduction
Significant research has established the importance of school district central offices in supporting
and engendering change in education (Corcoran et al., 2001; Datnow and Castellano, 2003; Elmore
and Burney 1997; Honig et al., 2009, 2010; McLaughlin and Talbert, 2003; Marsh et al., 2005;
Massell, 2000; Supovitz, 2006) and in supporting student achievement (see MacIver and
Farley-Ripple [2008] for a review). As a result of its relationship to teaching and learning, the
consequences of decision making in central offices have become increasingly significant. This is
reflected in current policy, which mandates that decisions be informed by evidence.

Corresponding author:
Elizabeth N. Farley-Ripple, Willard Hall, School of Education, University of Delaware, Newark, DE 19716, USA
Email: enfr@udel.edu
The United States Department of Education’s federal data reporting guidelines clearly state that,
‘the accountability provisions included in the No Child Left Behind Act of 2001 (NCLB) significantly
increased the urgency for states, local school district central offices, and schools to produce accurate,
reliable, high-quality educational data’.1 Further, when faced with decisions about school reform,
districts are expected to search for and interpret evidence about program effectiveness, emphasizing
educational programs and practices that have been ‘clearly demonstrated to be effective through rigorous scientific research’ (Desktop reference xiii). The legislation also includes several sections in
which the use of social science research should inform instructional decision making while other
portions of the legislation discuss the role of evaluation and data collection in educational accountability. Furthermore, the law directs districts on the use of these evaluations within their own
decisions, namely to improve, assess the effectiveness of, and determine continuation of funding for
programs. Additionally, federal guidance on implementing NCLB, specifically in regard to school-wide improvement programs, suggests that districts considering programs developed
by outside agencies should ‘insist on seeing solid, research-based evidence of a program’s success’.2
The USA, however, is not alone in emphasizing evidence-based decision making in education.
Connections between education and economic growth, concern with financial accountability, and
the availability and quality of educational research have elevated the issue globally (OECD, 2007).
Efforts to increase the role of research evidence in educational decision making have emerged in
places such as Canada, Finland, Japan, Singapore and the UK (reviewed in OECD, 2007;
Bransford et al., 2009; Cooper et al., 2009). These initiatives, often labeled under knowledge mobilization, are not embedded in or mandated by national education policy, as they are in the USA, nor
do they explicitly focus on central offices (or comparable levels of the system). However, they are
a response to pressure for accountability and effectiveness in education, an important commonality
across the USA and other cases.
US federal mandates privilege data and research as valid forms of evidence for use in decision
making, yet research on central office decision making is only beginning to emerge. The role of
data in decision making has received great attention in recent research and practice (Datnow
et al., 2007; Kerr et al., 2006; Massell, 2001; Wayman and Cho, 2007); however, less attention has
been paid to the role of research evidence, particularly at the central office level. In light of both the
expectations of evidence-based decision making and the growing significance of central offices in
supporting teaching and learning, there is remarkably little understanding of whether, how and
why central office decision makers use research evidence to support educational decisions. A
review of the literature conducted by Honig and Coburn (2008) observed a substantial body of
work advocating or prescribing processes for utilizing research and evaluation evidence in
education but far fewer studies that examine use empirically. Within that body of literature, few
studies focus on the process of use or variation in use within the central office.
The purpose of this study, therefore, is to build upon existing literature by exploring the role of
research in district central office curricular and instructional decision making through an
embedded case study of three decisions. Through intensive interviews and observations conducted
over the course of one school year, this research seeks to examine how research evidence is used in
central office decision making, what kinds of research inform decisions and what factors influence
whether and how research is used. Findings reveal limited instrumental and political uses of
research evidence in comparison to conceptual and symbolic, as well as a preference for
practitioner-oriented research resources.
Background
The issue of evidence use is not new, and this research builds upon 30 years of work that
began with knowledge utilization in the 1970s and continues with evidence-based and
data-driven decision-making research today. The present study draws on this body of work
in conceptualizing ‘research’ and ‘use’ and in understanding the complex factors which
influence evidence use in education.
‘Research’ and ‘Use’
Over several decades of research, the literature has employed a number of terms relating to the use of
evidence in decision making. Within the knowledge utilization literature, ‘knowledge’ is described
as social science research, policy research or analysis, or evaluation. Current federal mandates, such
as NCLB, described earlier, emphasize social science research and evaluation as evidence in decision making, yet it is unclear whether these forms are valued in educational decision making and
whether there is a shared understanding of what constitutes research evidence across stakeholders
(Bransford et al., 2009). Therefore, for the purposes of this study, ‘research’ is left broadly defined,
and how decision makers identify and value research is one of the foci of this project.
Research, even under this broad conceptualization, is not the only form of evidence that can inform decision
making. Recent research on data-driven decision making focuses more on quantitative data and
synthesized information resulting from such data. Decision makers also tend to use what Kennedy
(1982b: 1–2) calls ‘working knowledge’, defined as ‘the entire array of beliefs, assumptions,
interests, and experiences that influences the behavior of individuals at work’. Evidence can also
be anecdotal, information collected through casual conversation, and similar ‘senses’ of the
problem. While the purpose of this study is to focus on the role of research evidence, it is important
to consider these other forms in order to understand alternatives available to decision makers as
well as the relationship between research and these other forms.
With respect to 'use', research has identified four distinct purposes of evidence use: instrumental, conceptual, political and symbolic. Instrumental use is found where respondents are able to cite
or document specific ways in which evidence was used in decision making. For example,
Coburn and Talbert’s (2006) examination of conceptions of evidence in school districts finds four
types of purposes for evidence use: meeting accountability demands, informing program and
policy decisions, monitoring student progress to inform placement decisions and monitoring student progress to inform instructional practices (Coburn and Talbert, 2006: 12). Conceptual use
describes gradual shifts in policymakers’ awareness and re-orientation of their basic perspectives,
meaning that use can occur even if direct application of evidence does not. This ‘enlightenment’
function (Weiss and Bucuvalas, 1980) of research ‘contributes to the policy process indirectly and
over time by shaping more general interpretations and understandings of issues and gradually altering the working assumptions and concepts of policymakers’ (Porter, 1997: 34). Huberman (1990)
articulates political use as relating to the manipulation of evidence to attain specific power or profit
goals, such as a political gain. In one study, central office administrators appear to use evidence to
build political support for particular improvement efforts (Corcoran et al., 2001). Closely related to
the idea of political use is symbolic use. According to Coburn et al. (2009), ‘Reference to research
findings in such general terms is a common observation in recent literature on evidence-based decision-making.’ Furthermore, the authors found it was actually more common for district personnel
to use research studies, data, or general claims that ‘research says’ to justify, persuade, and bring
legitimacy to potential solutions that were already favored or even enacted. Such use, they argue, is
symbolic in nature.

Table 1. Summary of research on factors shaping use

Characteristics of evidence
- Source of evidence (internally or externally produced) (Caplan et al., 1975; Corcoran et al., 2001; Fillos and Bailey, 1978; Kean, 1980; Nelson, 1987; Supovitz and Klein, 2003; Weiss and Bucuvalas, 1977)
- Accessibility of information (Corcoran et al., 2001; Gross et al., 2005; Honig, 2003; Roberts and Smith, 1982; West and Rhoton, 1994)
- Format and complexity (Reichardt, 2000; West and Rhoton, 1994)
- Relevance to policy or decision needs (Maynard, 2006; Supovitz and Klein, 2003; West and Rhoton, 1994)
- Ambiguity of findings (Hannaway, 1989; March, 1994)
- Whether findings are consistent with previous beliefs (Birkeland et al., 2005; Corcoran et al., 2001; David, 1981; Weiss et al., 2005)

Characteristics of organizational context
- Organizational structure of central office (David, 1981; Hannaway, 1989; Meyer and Scott, 1983; Rowan, 1986; Spillane, 1998)
- Organizational politics (David, 1981; Kerr et al., 2006)
- Demands on decision-makers' time (Gross et al., 2005; Supovitz and Klein, 2003; Wayman and Stringfield, 2006)
- Culture and norms of decision making (Corcoran et al., 2001; Honig, 2003; Rich and Oh, 1993; West and Rhoton, 1994)
- Financial or personnel capacity to search for and use evidence (Supovitz and Klein, 2003; West and Rhoton, 1994; WestEd, 2002)

Characteristics of decision makers
- Technical capacity to understand and apply evidence (Reichardt, 2000; Supovitz and Klein, 2003; West and Rhoton, 1994)
- Pre-existing cognitive frameworks that mediate interpretation and search for information (Honig and Coburn, 2006; Kennedy, 1982b)
- Beliefs about forms of evidence and their value (Coburn, 2001; Coburn and Talbert, 2006; Corcoran et al., 2001; David, 1981; Fillos and Bailey, 1978; Light et al., 2005)
Both evidence and use are complicated concepts, and a review of their definitions in the literature is useful in framing this research. This discussion has illustrated that evidence refers to a
number of types of information, ranging from anecdotal data to formal evaluation and research.
Furthermore, use includes a range of possibilities: instrumental, conceptual, political and symbolic.
Establishing the range and definition of evidence and use clarifies the key concepts of this study.
Research on Factors Shaping Use
Research on evidence-based decision making often focuses on factors influencing whether or how
evidence is used. Table 1 briefly summarizes research on these factors, which generally fall in
three categories: characteristics of evidence, characteristics of the organizational context and characteristics of decision makers.
Research Evidence in School District Central Offices
A review of the literature establishes a theoretical framework in which we identify a complex set of
‘evidence’, ‘uses’ and factors (evidence, organizational context, decision makers) that influence
the decision-making process. This review, however, is informed by research across several
disciplines and contexts, rather than by empirical studies of central offices. Honig and
Coburn’s (2008) review reveals that there are limited empirical studies which examine social
science research or evaluation evidence use in central offices. Such studies offer evidence of
the political uses of research evidence by central office administrators and in central office
decision-making (Corcoran et al., 2001; Kennedy, 1982a, 1984), limited or constrained search
(Kerr et al., 2005), use as mediated by interpretation (Kennedy, 1982b, 1984; Spillane, 2000),
and barriers to use associated with characteristics of evidence (Coburn and Talbert, 2006;
Corcoran et al., 2001; Kean, 1980; Kennedy, 1982a; West and Rhoton, 1994), organizational
context and politics (David, 1981; Kerr et al., 2006; Spillane and Thompson, 1997) and decision makers (Bickel and Cooley, 1985; Coburn and Talbert, 2006; Corcoran et al., 2001; Kennedy, 1982a, 1982b; Spillane, 2000). These findings represent a small body of evidence on
which to draw conclusions about research use in central office decision making. Honig and
Coburn (2008) call for further research on evidence use in central offices noting in particular
the need to explore variation in processes within a central office, further examination of processes associated with use, and the conditions under which factors shaping evidence use support or hinder use.
This article responds to this call and seeks to understand the role of research evidence in central
office decision making through an embedded case study addressing the following research questions:

1. How does research evidence inform central office curricular and instructional decisions in instrumental, conceptual, political and symbolic ways?
2. What research evidence is used and valued in central office decision-making?
3. What factors shape research use in central office decision-making?
Research Methodology
Research was conducted in Hamilton School District, a highly diverse district that straddles urban
and suburban settings, during the 2006/7 school year. The district served approximately
16,000 students in 30 schools, with a student population that was 50 percent minority (non-white),
more than 40 percent free/reduced lunch eligible (a measure of socio-economic status) and was
growing in the percentage of English-language learners. This site was selected based on its large
size relative to surrounding districts, the diversity of its student population, its tenuous accountability status and its willingness to participate.
This research is designed as an embedded case study (Yin, 1993) in which the primary unit of analysis is the school district central office and the sub-units of analysis are decisions within the school
district. Case study is an appropriate approach for exploring all three research questions because it enables
the study of complex social phenomena, and the embedded design permits an examination of
instrumental use of research evidence while controlling for important organizational and political
characteristics of the district that might influence decision making or evidence use.
The data analysed in the present study were collected as part of a research project focusing on
broader issues of decision-making processes and evidence use in school districts and have been
reanalysed to focus on processes specifically related to the use of research evidence. As noted earlier,
related literature broadly defines ‘research’ to include social science research, policy analysis and
evaluation. However, it is unclear what central-office decision makers considered to be ‘research’.
Therefore, for the purposes of this study, ‘research evidence’ is considered in its broadest sense and
includes any reference to a ‘study’, ‘report’, ‘research’ or ‘evaluation’, as well as ‘findings’,
‘results’ or ‘analyses’ from those sources of evidence. In direct dialogue with participants, the
researcher was careful not to define the term ‘research’ in order to prevent biasing responses;
rather, interviewees were asked what research resources they used as a means of unearthing their
own conceptions of this type of evidence.
Data include sustained observations of central office meetings, in-depth interviews and
document analysis. Observations of central office meetings included division meetings for
curriculum and instruction, professional development and school services as well as
inter-divisional meetings for the school improvement planning committee, strategic planning
council, textbook adoption committee, district administrative retreat and public school board
meetings. A total of 34 observations were conducted. Observations were focused on
references to evidence (broadly construed and informed by the literature reviewed earlier),
decisions with the potential to impact teaching and learning, and factors previously found
to influence use as reviewed above. However, notes were as comprehensive as possible so
as not to exclude other potentially valuable data from being considered. Observations revealed
the dynamics of school district decision making, the types of evidence used, and when and by
whom evidence was used.
In order to examine how individuals search for and interpret evidence and why certain forms of
evidence are included in decisions while others are excluded, in-depth interviews were used to
complement observations. The researcher conducted 19 interviews with district personnel and one
interview with a partner responsible for data management. Interviewees were selected based on
their role in district curricular and instructional decisions and their membership on pertinent
district-level committees. Interviews were semi-structured to ensure that the same questions were
asked of each interviewee but were also made relevant to the work of each administrator as well as
the content of observations conducted in committee meetings.
Document analysis produced contextual information about the structure and culture of Hamilton School District. Documents collected fall into three categories: contextual and background
information (including NCLB Report Cards, district profiles, school board agendas); documents
used or created as part of the decision-making process (including meeting agendas, PowerPoint
presentations, school improvement plans); and examples of evidence used by decision makers
(including departmental data reports, achievement summary reports). Documents were provided
at the discretion of and with permission of the superintendent, but no request for information was
ever denied.
All data were entered into NVivo for coding and analysis, except for hard copies of documents that, at the time, were not able to be coded electronically. After an initial read of all
the data, three decisions were identified for in-depth analysis to address the first research
question and were selected on the basis of the following criteria: (1) the timing of the decision
and the ability to follow the decision from beginning to completion; (2) the ability to triangulate data about the decision across multiple participants and/or observations; (3) the decisions’ potential to impact teaching and learning; and (4) the extent to which decisions were a
response to accountability policy and status. A preliminary coding framework was developed
based on existing literature to identify forms of evidence, use and factors influencing use.
Data were re-examined in an iterative process, and emerging patterns were coded for analysis. Coding and analysis of the three decisions were conducted in a fashion consistent with
cross-case synthesis, a technique which treats each individual case study as a separate study
and aggregates findings across a number of cases (Yin, 2003).
The Research Context: Hamilton School District and the Three Decision Cases
Under NCLB, districts are rated annually on the basis of student assessment data. Between 2003
and 2007, Hamilton School District’s accountability ratings teetered back and forth between
meeting federal and state goals and being identified as in need of improvement. The district’s tenuous status created a sense of urgency among district personnel, and decision-makers attribute
many of the changes in district culture and practice to the high-stakes environment. When asked
about current challenges the district faces in terms of meeting the demands of accountability
policy, decision-maker responses were almost exclusively limited to curricular and instructional
issues. They cited meeting the needs of academically diverse students, providing appropriate professional development to improve instruction, and getting teachers to see both the need for
and their role in creating change as some of the difficulties the district encounters. These challenges, coupled with accountability ratings and the resulting sense of urgency, prompted a cultural
shift toward what is described by district decision-makers as ‘outcome-oriented’, ‘streamlined’ or
‘business’-style with a renewed focus on instruction.
This new district culture is primarily driven by the strategic plan. Prior to 2006/7, Hamilton School District had no
such guiding document, so the existence of a plan in and of itself represents a significant change in
direction for the district. The strategic plan is 'a call for change at all levels of the organization, and
across all departments, all with the purpose of increasing the system’s effectiveness in a way that is
consistent with a clearly defined mission’. In practice, this cultural shift was evidenced in four
characteristics of the organization’s new practice: instruction as a priority, urgency, collaboration,
and efficiency and effectiveness.
Instruction. The Strategic Plan clearly illustrates the centrality of instruction for both the district
mission and for each and every department. As one decision maker stated, ‘Now there’s a
focus everybody understands, that we are looking at instructional practices.’
Urgency. Urgency, a term that emerged from a book discussed by the strategic planning council,
is expressed throughout the strategic plan and in planning council meetings, most explicitly
in statements about the need for change.
Collaboration and participation. The strategic plan demands ‘coordinated operations within and
across departments’, and decision makers clearly articulated that one of the biggest, and most
valuable, changes as a result of the cultural shift in the district has been improved communication and interaction between branches of the central office.
Efficiency and effectiveness. The twin goals of efficiency and effectiveness are evident in several district actions: the establishment of the office of research and evaluation; data-driven
decision making as one of the plan’s priorities; instructional technology initiatives for assessment and data use; the district’s relationship to an external data management organization;
and massive data collection efforts (including standardized assessment data, formative
assessment data, demographic and special service data, parent surveys, teacher surveys,
teacher qualification data, instructional evaluation tools, and professional development
workshop offerings, participation, and evaluations).
In the 2006/7 school year, Hamilton School District experienced significant pressure under state
and federal accountability policy, and decision makers believed that their greatest challenges in
meeting those demands were related to curriculum and instruction. In order to examine instrumental use of research evidence, three curricular and instructional decisions were selected: the
reorganization of the delivery of professional development, a district-wide high school textbook
adoption and the redesign and implementation of school improvement plans.
Decision 1: Overhauling professional development. During a number of school improvement
planning and strategic planning council meetings, as well as in departmental meetings, the director
of professional development mentioned concern for how professional development is delivered in
Hamilton School District. His concerns were primarily related to both the consistency of workshops and the appropriateness of workshop content in meeting the district’s instructional goals.
In a department meeting, he stated:
We are going to change the way we do [professional development]. If we really want instruction to
change, we do something like 8 days a month, four schools a day. Each month, they’ll train people
in different strategies, such as differentiated instruction, data-driven decision-making, best practices,
etc. Everyone in the district would get the same thing, so administrators could walk around and say,
‘I know you’ve had training in . . . ’ It will make it a hundred times more effective. Teachers have
to attend meetings as it is in their contract. Now we can see if it is happening in classrooms, especially
with the drop-ins from the road tool. We’ll still need ELL3 and other special trainings as they have
different needs.
Having identified the problem, the director of professional development began developing a solution or plan of action. It appeared from observation and interviews that the development of this
plan occurred outside of the strategic planning or school improvement planning processes and was
constructed within the division of professional development. Informal conversations with other
division heads were a source of input in the plan as well. The director of professional development
ultimately put forth a proposal for changing the system of delivery in the district which was
adopted in April. Like the development of the plan, the adoption occurred outside of regular and
formal district meetings. The superintendent was the final authority on whether or not to adopt this
change, and once he expressed approval, it circulated to the various divisions in the district.
Decision 2: High school textbook adoption. The process of adopting a new textbook was
organized and supervised by the director of curriculum and instruction, and the committee
consisted of teachers from each high school, representing each grade level as well as special populations (such as special education and English-language learners). The process consisted of six
meetings which included an introductory meeting, four publisher presentations and one session
in which the decision would be made. The director of curriculum and instruction explained the
selection of publishers:
They are the top four. I mean they sell more textbooks than anybody else. . . . Because financially they
are going to give you a better deal. And be able to offer more incentives because for what we buy, we
get almost an equal amount back in perks. So we wanted, you know, to deal with somebody who is big
enough to do that. And if you were to go through the contents of what is in the book, you would find
that there is a lot of similarity in the stories and what not that are included.
A key feature of this process is the evaluation form teachers were to use to rate the product of each
publisher, which included a determination of whether ‘the instructional content is consistent with
research findings and child development’.
On the evening of the final presentation, the director of curriculum and instruction had tallied
the ratings for each of the publishers but would not reveal the results until after a group
discussion and would do so only if the conversation failed to result in a consensus. The teachers
debated each of the four options and many of the same concerns and opinions were expressed
during this conversation, resulting in a fairly extensive pro and con list for each publisher. At the
end of the discussion, the group decided on one publisher.
Decision 3: School improvement planning. Prior to 2006/7, school improvement plans were
produced for compliance purposes only, a ‘huge volume that would then sit on the shelf’. In light
of the school district’s status as in need of improvement in 2004/5, it was determined that the
school improvement plan 'was not an effective way to drive instruction or to improve instruction'.
Subsequently, the school improvement planning committee was established, reflecting what one
decision-maker articulated as ‘the need to streamline our school improvement process’. The format
of the plans as they were implemented for the 2006/7 school year included:
- Academic needs identified at the district level using state assessment data, school climate needs identified through discipline data, and parent involvement needs determined by the district through an unknown method.
- A requirement to select three school objectives based on accountability requirements.
- Selecting (from a specified list) a strategy for meeting each objective, paired with a timeline and person responsible for implementing that strategy.
- Selecting (from a specified list) a data measure for determining progress toward the objective.
Once school leaders completed their plans, school improvement committee members were
assigned to each school for ‘monitoring and mentoring’ during implementation of the plan. The
committee then held weekly meetings in which members shared reports or data on SIP implementation and discussed any problems in the design of the plan or its implementation. Issues arose as a
result of changes in accountability policy, through discussions of routinely shared data or through
the mentoring and monitoring of schools. In response to the problems identified through
accountability policy, professional experience, and focus groups, the committee decided to make
a number of changes to the plan for the next year, including expanding the scope of the plan to
include additional grade levels and subject areas, and adding new options for instructional needs,
strategies and measures.
Findings and Discussion
An analysis of the three decisions and the role of research evidence in broader decision processes in
Hamilton School District permit a deep examination of the use – and absence of use – of research
in curricular and instructional decision making. In this section, evidence from Hamilton School
District is examined as it relates to the three research questions guiding this work.
Evidence of Use
Instrumental Use. Instrumental use, defined as directly informing a specific decision, did not occur
frequently in the decisions observed. The single instance of instrumental use occurred in the
professional development decision, and use occurred in both the problem identification and search
phases of the process. The director identified the professional development delivery as a problem
by using multiple sources of information, which included education research. In both interviews
and observations, he clearly stated that ‘the research says’ that professional development must
be systemic if it is to make a difference in instruction, most frequently referring to the research
literature on professional development. For example, in a department meeting, he stated, ‘If we
keep going with the way we are doing it, there will be no change. All the research out there says
professional development should be systemic.’ Other sources of information were used, including
school improvement plans, professional development evaluations and ‘common sense’. The director also appeared to incorporate research evidence into the design of the new plan, explaining that the
research emphasizes systemic delivery as the means to achieve effective professional development. However, division heads providing additional input into the new plan did not make any reference to research evidence, suggesting that the director was alone in using research to inform this
decision.
Observations of instrumental research use in the other decisions were noticeably absent, even
when opportunities to use research arose. In the case of the high school textbook adoption, the
search for options did not include research or evaluation, as the selection of publishers was
driven by both experience and financial considerations. Additionally, only three publishers specifically mentioned how research influenced their product. Those that did linked
research to either design issues (for example, the size of the font or number of columns) or spoke
in general terms (for example, ‘the research has shown us what was wrong’ or ‘Trust me, the
research is there’). Further, in the committee’s subsequent conversations there appeared to be
widespread disregard for the value of the evaluation tool, observed as random assignment of
ratings, sharing or copying ratings and rating based on incorrect or little information. Dismissal
of the research-related criteria was observed specifically, including comments such as ‘how
would we know?’ and ‘how’s that going to affect teaching anyway?’ Teacher evaluations of
textbook presentations also appeared to be primarily concerned with issues drawn from their
classroom experiences. For example, one teacher was extremely concerned with reference
citation guidelines, another with homonyms, and another with acquiring a separate grammar
textbook. It did not appear that any concerns reflected district-wide data-based issues that needed
to be addressed through the textbook adoption. In the final discussion resulting in the choice of a textbook, no reference to research evidence was made.
In the school improvement planning process, the need for redesign was identified through anecdotal evidence that the plans were not being implemented in Hamilton School District schools, rather than
through any evaluation. Within the new improvement plans, achievement or performance targets
were set based on policy goals, and instructional strategies for meeting those goals were identified
by the committee without explicit mention of research or evaluation of their effectiveness. As the
new plans were rolled out and problems were identified, on two occasions members of this committee discussed a need to focus on getting principals in classrooms. To prompt action on this issue,
the Superintendent referred to the ‘Kentucky study’ as evidence that learning improves when leaders spend more time in classrooms. However, in adjustments made for upcoming years, no specific
action was taken on this issue.
Conceptual Use. Conceptual use describes gradual shifts in terms of policymakers’ awareness and a
re-orientation of their basic perspectives that may occur even if direct application of findings does
not. In the three curricular and instructional decisions, conceptual use is also evident only in the
professional development example. The director of professional development’s understanding
of effective delivery and how information should be disseminated to teachers and schools is heavily influenced by what, by his account, the ‘research says’. This new perspective, as discussed earlier, is reflected in his subsequent identification of a problem as well as in the design of the new
delivery strategy.
Conceptual use was also observed as being part of the broader district decision process. A
second very pertinent example comes from the superintendent, who refers to the ‘Kentucky study’
in multiple strategic planning council and school improvement planning meetings as evidence that
led him to emphasize instructional over management leadership in the central office and in schools
– a feature of both the new strategic plan and the changing district culture.
It’s invaluable, because if you’re gonna truly use data, the research generates the data, and you heard
me say, ‘You’re all here. I’m not one into trying to figure out what works. If there’s something out there
. . . ’ I mean, we have, like, creative ideas that we generate on our own, but if a study shows, for example, like the Kentucky one did – which was very dramatic – the principal spent 75% of his or her time in
the classroom; performance results jumped 50–100%; that is incredible! Plus, it’s also logical. Because
what I just said earlier about us now monitoring what’s happening; we’re monitoring the principals to
see if they are indeed observing what they should be observing and monitoring what the teachers are
doing . . . if principals are doing that, there are things (that) will happen for kids, ‘cause they’re gonna
know whether the teachers are teaching what they’re supposed to even if they know how to teach;
whether classrooms are being managed properly and all that stuff.
While these examples provide evidence of conceptual use, they are the only ones directly
observed. This is likely due to the challenges in ascertaining conceptual use through observations
and interviews. By definition, such use is indirect and shapes decision-makers’ beliefs about an
issue – processes that are not obvious to an outside observer and that may be unknown even to the decision-makers themselves.
In an attempt to learn more about conceptual use, district administrators were asked if they
could recall an instance where they learned something from research or data that changed their
minds about an educational issue. Responses varied in terms of the type of evidence reported as
well as how that evidence was used conceptually. Some examples include:
- Learning through doctoral work that any given educational outcome can be attributed to a complex set of independent variables, rather than a single factor.
- Attending conferences or visiting websites to ‘keep fresh’ on relevant educational issues.
- Considering a combination of research and observations of how other districts are working on inclusion in special education to judge the district’s progress and need for change.
- Broadly looking at all of the district-collected data to get a sense of the ‘big picture’.
- ‘It’s always happening; it’s just not one point. Do I think what we’re doing in curriculum – looking at data, and all of a sudden there are a few points that I’ve changed my mind with.’
- The impact of an anecdotal example of twins, one of whom picked up reading through whole language while the other learned through phonics, on the philosophy that children learn differently and no one solution will always work.
- Learning how to effectively instruct students with diverse needs from personal classroom experience grouping students hetero- and homogeneously.
These illustrations of conceptual use represent a wide range of evidence as well as purposes. Decision makers were as likely to refer to research as they were to other forms (for example,
data or professional knowledge) – a significantly different pattern than found with instrumental use
where research played a very limited role. Furthermore, the purposes of use, even within the conceptual category, are diverse, ranging from influencing beliefs about student learning, to helping
decision-makers stay informed of current and emerging best practices. However, though there is
evidence of conceptual use of research, it is not possible to draw conclusions about the frequency or quality of such use.
Political and Symbolic Use. Like conceptual use, political and symbolic use can be difficult to identify
without specific knowledge of how evidence was obtained or knowledge of decision-makers’
intentions. As such, political uses of evidence were not often directly observed in Hamilton School
District. Examples of research used to secure support for a position in the three curricular and
instructional decisions came from the publishers of the language arts textbooks, who did so in promoting the quality of their product. However, as research was not used as a criterion in deciding
between textbooks, there is no apparent impact of political research use on this decision. Outside
of the three decisions, political use of research evidence was not observed, though examples of the
political use of other evidence, including quantitative data, were noted.
Symbolic use describes vague or general references to research, as well as invoking research
after a decision has already been made. Therefore, symbolic use is a sort of measure of how deeply
decision makers are engaging with evidence, as well as the level to which the evidence actually
informs the decision process as it occurs. As such, symbolic use can also describe the nature of
either instrumental or conceptual use. Data from the three curricular and instructional decisions
reveal that symbolic use of research occurred in the professional development decision and, to a
lesser extent, in the textbook adoption. The director of professional development’s frequent reference to what the ‘research says’ and difficulty articulating specific studies or findings are evidence
of this type of symbolic use. Similarly, presentations by textbook publishers who did refer to the
research, did so in vague and general ways: ‘Trust me. The research is there.’ Outside of the three
decisions, the phrase ‘research says’ was invoked, for example, in the need for a campaign to roll
out new data-use technology by the strategic planning council, and the term ‘research-based’ was
used to describe strategies presented at the district retreat. In these cases, there were no specific
details (for example, author, source, findings) cited or discussed.
Findings from Hamilton School District suggest that research evidence played a limited role in
school district curricular and instructional decision-making, whether in the three decision cases or
in broader district processes. Instrumental use was observed only in one decision, and this was later
shown to be largely symbolic. Opportunities for additional instrumental use existed in the other
decisions but did not result in use, the reasons for which are explored in the next section. Conceptual use was documented, and the various ways in which evidence informs decision-maker perspectives are a promising indicator that research may have a meaningful yet indirect role in decision making. However, difficulty in measuring and observing conceptual use prevents drawing significant conclusions about frequency or quality. Finally, though political uses were rarely observed,
there were multiple occasions in which symbolic use was observed in two of the three decisions
and in larger district processes.
Evidence on Research Evidence
The second research question is a broader question about the types of research resources central
office decision makers use or consider useful. In interviews and observations of district meetings,
including and beyond those used to analyse the three curricular and instructional decisions, administrators mentioned a number of sources of research (broadly defined). The researcher was careful
not to define the term ‘research’ in order to prevent biasing responses; rather, interviewees were
asked what research resources they valued as a means of unearthing their conceptions of research
as well as the tools they use to access it. Responses indicated that research resources ranged from widely read newspapers to issue-based websites to academic research books. Some examples include the Association for Supervision and Curriculum Development (ASCD) website, the state university’s Center for Applied Linguistics, the state technology conference, Schoolteacher (Lortie, 1975), Education Week, the state department of education and the Education Commission of the States. In Table 2, I summarize the observed frequency of resource types.4

Table 2. Research resources mentioned in Hamilton School District, listed in descending order of reference frequency (high to low)

- Professional periodicals
- Professional organizations
- Professional journals
- Conferences
- Advocacy, issue-based or policy organizations
- Internet
- Leadership books
- News periodicals
- Research organizations
- Academic books
- Academic journals
- Specific research studies
Noticeable omissions from this list are district program evaluations and education research clearinghouses (such as Education Resources Information Center or the What Works Clearinghouse). In no
interview did decision-makers mention formal evaluations of district programs. Rather, division heads
indicated that they were primarily responsible for evaluating their own programs in their area. The
director of curriculum and instruction indicated a desire to focus more on evaluation but that her department lacked sufficient staff to do the data collection and analysis. The lack of reference to clearinghouses, designed specifically to support the use of research evidence in education, indicates that
decision makers either do not value or do not know about those resources.
In short, when decision makers used research evidence in curricular and instructional decisions,
they referred primarily to practitioner or professionally oriented resources. Decision makers did
not make reference to specific articles, so further investigation of the type of research that informs
decision-making is not possible.
Evidence of Factors Shaping Use
Literature on knowledge utilization, decision making and evidence-based decision making offer
empirical evidence about what factors may shape what and how evidence is used, categorized
earlier as pertaining to characteristics of research, organizational context and decision makers.
Analysis of data from Hamilton School District demonstrated that several of these factors are
influential in research use.
An immediately apparent pattern, noted earlier, is that decision-makers seem to prefer
practitioner-oriented sources over academic publications. Administrators with whom I spoke
indicated that they frequently looked at resources pertaining specifically to their area of work. One
decision-maker discusses her choices for finding research:
ASCD is one of the ones, and I look at ASCD because I get it online every morning to see what’s
happening in the rest of the country, to find out what’s what. ED Week, but a lot of that is looking
at the front page, or the first couple pages, or whatever, because after you dig a little deeper.
Professional journals, School Leadership which is one that comes out through administrative professional organizations and things like that. So they would probably, and a lot of those are focused around
the same issues that we’re dealing with right now.
One explanation for the general preference for practitioner resources is suggested in the quote
above: relevance. These resources may address issues that are more germane to the problems
school districts currently face. In fact, knowledge utilization and evidence-based decision-making literature note that the lack of relevance is often a source for the disconnect between research and practice, with academic work failing to address salient issues in a timely manner or not addressing them at all.
Another facet of relevance influences not what research is used, but whether it is valued at all.
One decision maker points out a limitation of using research to inform educational decisions:
You can have all the research. If you do not understand how a child learns, and how that particular child
learns and if you haven’t’ worked with that child and does the child understand how they learn, and do
you understand? And does the child know what to do to learn. If that piece isn’t there, nothing’s going
to change.
In this comment, relevance pertains not simply to whether research is on point with the educational
issue in question, but rather its appropriateness for the individual student. Here we see not only characteristics of research – in this case, relevance – having an impact, but also decision-maker values.
Evidence-based decision-making literature also identifies accessibility as a factor, and while
district level decision-makers identified a number of mediums by which they access research, they
also noted a major impediment: time. Given the complex roles and multiple responsibilities borne
by district administrators, it is no surprise that few decision makers were able to search, read, and
employ education research in their day-to-day work. However, several administrators acknowledge the value of such research and would ‘like to make it a greater priority’:
Because I think sometimes in terms of the organization side, other districts that are equally large or
larger have dealt with some of the issues of policies and whatnot and so just taking the time and say
‘Well instead of recreating the wheel, if I did some more research, or put aside time on this particular
topic to research it, I might be saving myself work in the long run.’
There are, however, some means of circumventing the problem of time. One district decision
maker emphasizes the value of technology, namely the Internet, in enabling her access to education
research:
But quite frankly technology and doing web searches is many times much more efficient than having to
going through all these pages and what not, so that that would be a primary source to get information.
She continues, recognizing that using the Internet has its problems, specifically expressing concern
for the ability to find research to support any position on an issue:
. . . I believe that with the access that we have to the web, that you can take a position on almost anything and you can either affirm it, or disclaim it. It’s out there, and I guess the one that comes to mind is
about standards based Math programs, and there are some ultraconservative websites that think they’re
terrible. And then you can go to these other websites, and they think they’re wonderful. So it’s sort of
like, whatever your opinion is, we now have a resource to validate it. That’s where we are, and a lot of
that I think it was always there, but it’s more accessible now because of technology.
The consequences of this downside of technology may be confusion about which strategies or programs are, in fact, effective, as well as misuse of research to achieve preferred outcomes – a form
of political use.
In addition to technology, the collaborative nature of district culture, as well as the roles and
experiences of some decision makers, have created a common practice of information-sharing
across the central office. For instance, the director of research and evaluation reads and summarizes research for the superintendent, who, like other administrators, ‘doesn’t have the time to
read all these studies’, and other decision makers mentioned occasions on which she also provided
them with research relevant to their areas of practice. In part this information-sharing is built into
the structure of the organization (it is a responsibility of the division of research and evaluation),
but it is also a function of decision-makers’ personal experiences. Many decision makers had completed or were currently working toward doctoral degrees, and several acknowledged the
impact of their participation in these programs on their familiarity with research resources.
Whether a product of organizational structure or decision-maker experience, the practice of
information sharing is fostered through the district culture of collaboration. For instance,
decision-makers also mentioned occasions in which they shared information when they
encountered research relevant to someone else’s practice.
From this discussion, two sets of factors must be teased out. First are factors that support and
encourage research use in central office decision making. The strongest case can be made for
organizational culture, most notably the emphasis on collaboration which supports the sharing and
dissemination of research, and the organizational structure, which created a position in which
research use is part of the job description. It also appears that the wide range of mediums by which
research can be communicated – including practitioner and professional publications as well as the
role of technology – truly makes research accessible to even the busiest of district administrators.
Finally, it appears that decision-maker skills and experiences can support research use, particularly
as it pertains to both their role in the organization and the opportunities to engage with research in
higher education.
The second set of factors consists of those that constrain or even prevent research use in decision
making. Chief among these is lack of time. Most decision makers recognized the value of going to
the research for instrumental purposes, but often had no time. However, the missed opportunities to
use research in the textbook and school improvement planning decisions do not seem to suggest
time was an issue. For the textbook, organizational resource constraints – financial in particular
– set the parameters for which publishers the committee would consider. Teachers’ disregard for
the evaluation tool and the ultimate dismissal of the tool altogether, however, suggest decision
makers had other evidence on which to base their decision – their own experience and judgment.
Members of the school improvement planning committee appear to have used this same evidence
when not explicitly utilizing research evidence to inform the set of instructional strategies available
in the new plan. Therefore, it may be that research use is constrained not by the absence of time but
rather the availability of alternative, perhaps preferable, evidence. The choice to use anecdotal or
professional judgment over research is one, it could be argued, that reflects decision-maker values
and the entrenchment of certain practices within the organization.
Conclusion and Implications
This analysis reveals several important conclusions about research use in Hamilton School District,
with meaningful implications for research, policy and practice. First, evidence shows few direct, or
instrumental, uses of research in district-level curricular and instructional decision making. This
translates to missed opportunities for districts to benefit from available research that potentially
informs problem identification, offers lessons learned on program implementation, provides
criteria on which to base choice among solutions, and perhaps most importantly, presents a range
of solutions available to address educational problems. This last point warrants additional
emphasis. Without the instrumental use of research in the search for or design of curricular and
instructional solutions, districts are likely to reinvent the wheel each time they address a problem.
Given financial, human resource and time constraints, as well as the wealth of educational research
produced to date, such practice is both inefficient and unnecessary.
Second, observed conceptual uses of research are promising. They suggest that educational
research does indeed impact decision making, if not in ways that are easily documented. However,
research establishes that such use: ‘consists of something other than the accumulation of new information . . . conceptual use is a formative process in which evidence is acted on by the user. It is
sorted, sifted, and interpreted; it is transformed into implications and translated into inferences’
(Kennedy, 1984: 225). Furthermore, Kennedy (1984) argues that once the evidence is interpreted
and inference is drawn, it is no longer the evidence but rather those subjective interpretations that
inform decisions. We should therefore hesitate to consider conceptual use of research in districts
like Hamilton to be evidence of influence. Rather, significant research attention needs to be paid to
understanding the mediating relationship between conceptual use, ‘working knowledge’, and the
professional judgment by which many educational decisions are made.
The third conclusion pertains to symbolic use of evidence, observed primarily in the form of
‘research says’ statements. Symbolic use appears to capture the depth of decision maker engagement with research and, subsequently, the level to which the evidence actually informs the decision
process. Symbolic use of research is therefore highly consequential for the potential impact of
research on policy. Utilization of research in this way can confuse our understanding of
evidence-based decision making. Although it may appear that educational research is informing decisions, this is not necessarily the case. Rather, symbolic use may mask a lack of individual understanding of the research or its use for compliance or appearance only. The result is uninformed, and
possibly inappropriate, decisions that affect children.
Findings presented here suggest that symbolic use may be a decision-maker capacity issue,
requiring a revision of pre-service and in-service preparation of educational leaders. Evidence also
points to organizational capacity. For instance, a lack of time to search for or access relevant
research may mean that decision-makers find supporting evidence after the decision has been
made, as this is less time-consuming than searching for and considering all evidence up front.
Similarly, this lack of time may result in the many instances of ‘research says’. Time is identified
as the most critical factor, though mechanisms for circumventing the time issue have been noted
(for example, the Internet, information-sharing practices). Still, it may be that in facilitating
research use, these mechanisms actually support symbolic use. For example, relying on others for
identification and interpretation of research is problematic, for the reasons identified in the
conclusions about conceptual use. Similarly, using the internet as a tool for simplifying access is
not without consequence, as a decision maker noted above, for how that information is used
(symbolically or politically). Again, the relationship between these issues and the practice of
research use in education calls for significant additional research.
Ultimately, these findings expose a weakness in current conceptualization of ‘use’ as well as in its
measurement. It is apparent that operationalizing ‘use’ as reference to research evidence is insufficient in documenting its role in decision-making. Such simple measurement neglects the complexity
of the process of use, which includes identification, interpretation, incorporation, and application of
research findings. A more sophisticated understanding and operational definition of use is needed if
we are to fully understand the impact of research on decisions that shape teaching and learning.
A fourth conclusion that emerges from the evidence is that decision makers expressed a
preference for practitioner-oriented research resources. These practices reveal that decision makers
engage in, even prefer, what one might consider ‘technologically local’ search, defined by
Rosenkopf and Almeida (2003) as occurring when decision makers use sources that offer expertise
or information similar to their own knowledge. A critical reason for the continued gap between
research and practice is that central office decision makers use a limited set of resources which may
not include all the pertinent or available literature. As such, researchers seeking to change the educational discourse, inform and improve practice, and ultimately bridge the gap must reconsider two
dimensions of their practice: (1) audience and dissemination; and (2) relevance to current needs.
To effectively ‘get the word out’, researchers should consider publication in resources likely to be
used by decision-makers – which includes print but in today’s context, may include alternative
media as well – and incentives (for example, academic advancement) should be aligned to this
goal. Additionally, decision makers highlighted relevance and timeliness of research as reasons for
preferring professionally oriented publications, suggesting that literature found in other sources is not particularly salient to their needs. Whether or not this is in fact true is beside the point.
Rather, if decision-makers perceive that, in general, educational research has no application to
them, then efforts to prove otherwise should be made if research is to truly inform practice.
The findings of this study suggest directions for research and practice laid out above but are also
consequential to the prospects for educational change. Observations of Hamilton School District
reveal practice of ‘local’ search, limited instrumental use of research, unclear conceptual use, and
frequent symbolic use. What does this mean? These practices may mean that (1) many potential
solutions or directions for improving student learning are not identified or acknowledged and
(2) the research that is identified may offer solutions that are not likely to differ substantially from
current practice. If decision makers continue to use the same ‘toolbox’ to solve problems, an
opportunity for meaningful change is passed over.
An important consideration, as evidenced in this analysis, is the impact of the organization – in
this case the school district central office – on research use. If we are to draw conclusions about the
relationship between research use, decision making, and the prospects for meaningful educational
change, then identifying the factors that support use, and subsequently opportunities for change
and improvement, is critical. Most important is the identification of malleable factors, that is, those
which we can shape through policy and practice. Decision making in Hamilton School District
reveals that the structure and the culture of the organization influence research use. In particular,
decision-maker collaboration, information-sharing practices, and designated responsibilities for
summarizing and disseminating research and evaluation create opportunities for decision makers
to utilize research in their curricular and instructional decision making. An important direction for
additional research, then, is to more deeply and systematically explore the structural and cultural
characteristics of organizations that influence research use in education. Such an endeavor will enable
practitioners and policymakers to cultivate the positive factors and eliminate the negative, thereby
enhancing their prospects for research-informed change.
In conclusion, the purpose of this analysis is to examine how research evidence is used in
practice in school district central offices. Limited to the case of Hamilton School District, these
findings may not generalize to the larger practices of central offices in the USA or elsewhere. However, this discussion unpacks decisions likely to be made in any educational context and reveals
behaviors, constraints and consequences related to the use of research – a practice increasingly
expected of educational systems worldwide. In this sense the findings are instructive for efforts
within the US as well as for other national efforts. Finally, the intent of the study is not to judge Hamilton or similar school districts but rather, by bringing current practice to light, to create transparency and an opportunity to improve. As such, this analysis serves as a call to action for both
research and practice to further understand and improve the role of research evidence in school
district decision-making for the improvement of teaching and learning for all.
Notes
1. http://www.ed.gov/policy/elsec/guid/standardsassessment/nclbdataguidance.doc
2. http://www.ed.gov/policy/elsec/guid/designingswpguid.doc
3. ELL is a typical reference to English-language learners.
4. Frequency is determined by the number of individuals referencing these sources as well as the number of
times they appeared in discussions I observed. Low frequency indicates the source was mentioned by only
one or two administrators, mid frequency indicates it was mentioned by a few administrators and/or
emerged in district meetings, and high frequency indicates it was mentioned by most or all administrators
and/or figured prominently in district meetings. More detailed quantification of the references is deemed
inappropriate because such figures would be misleading: for instance, there is no sound way to decide
whether a resource discussed by a group should count as a single reference or as one reference per
individual, with no ability to determine whether each individual would have identified the resource
independently.
References
Birkeland S, Murphy-Graham E and Weiss CH (2005) Good reasons for ignoring good evaluation: The case
of the drug abuse resistance education (D.A.R.E.) program. Evaluation and Program Planning 28(3): 247–256.
Bransford JD, Stipek DJ, Vye NJ, et al. (2009) The Role of Research in Educational Improvement. Cambridge, MA: Harvard Education Press.
Caplan N, Morrison A and Stambaugh RJ (1975) The Use of Social Science Knowledge in Policy Decisions
at the National Level. Ann Arbor, MI: The Institute for Social Research.
Coburn CE (2001) Collective sensemaking about reading: how teachers mediate reading policy in their
professional communities. Educational Evaluation and Policy Analysis 23(2): 145–170.
Coburn CE and Talbert JE (2006) Conceptions of evidence use in school districts: mapping the terrain.
American Journal of Education 112(4): 469–495.
Coburn CE, Toure J and Yamashita M (2009) Evidence, interpretation, and persuasion: instructional decision
making in the district central office. Teachers College Record 111(4): 1115–1161.
Cooper A, Levin B and Campbell C (2009) The growing (but still limited) importance of evidence in education policy and practice. Journal of Educational Change 10: 159–171.
Corcoran T, Fuhrman SH and Belcher CL (2001) The district role in instructional improvement. Phi Delta
Kappan 83(1): 78–84.
Datnow A and Castellano M (2003) Leadership and success for all. In: Murphy J and Datnow A (eds)
Leadership for School Reform: Lessons From Comprehensive School Reform Designs. Thousand Oaks,
CA: Corwin Press, 187–208.
Datnow A, Park V and Wohlstetter P (2007) Achieving with Data: How High-Performing School Systems Use
Data to Improve Instruction for Elementary Students. Los Angeles, CA: University of Southern California,
Rossier School of Education, Center on Educational Governance.
David JL (1981) Local uses of title I evaluations. Educational Evaluation and Policy Analysis 3(1): 27–39.
Elmore RF and Burney D (1997) Investing in Teacher Learning: Staff Development and Instructional
Improvement in Community School District #2, New York City. New York: National Commission on
Teaching and America’s Future and the Consortium for Policy Research in Education.
Fillos RM and Bailey WJ (1978) Survey of Delaware Superintendents’ KPU. Newark, DE: University of
Delaware.
Gross B, Kirst M, Holland D, et al. (2005) Got you under my spell? How accountability policy is changing and
not changing decision making in high schools. In: Holding High Hopes: How High Schools Respond to
State Accountability Policies. Philadelphia, PA: Consortium for Policy Research in Education, 43–80.
Hannaway J (1989) Managers Managing: The Workings of an Administrative System. New York: Oxford University Press.
Honig MI (2003) Building policy from practice: central office administrators’ roles and capacity in collaborative policy implementation. Educational Administration Quarterly 39(3): 292–338.
Honig MI and Coburn C (2008) Evidence-based decision-making in school district central offices: toward a
policy and research agenda. Educational Policy 22(4): 578–608.
Honig MI, Lorton JS and Copland MA (2009) Urban district central office transformation for teaching and
learning improvement: beyond a zero-sum game. Yearbook of the National Society for the Study of Education 108(1): 56–83.
Honig MI, Copland MA, Lorton JA, et al. (2010) Central Office Transformation for Districtwide Teaching
and Learning Improvement: A Report to the Wallace Foundation. Seattle, WA: The Center For Teaching
and Policy, University of Washington.
Huberman M (1990) Linkage between researchers and practitioners: a qualitative study. American Educational Research Journal 27(2): 363–391.
Kean M (1980) Research and Evaluation in Urban Educational Policy, No 67. ERIC/CUE Diversity Series.
Kennedy MM (1982a) Evidence and decision. In: Kennedy MM (ed.) Working Knowledge and Other Essays.
Cambridge, MA: Huron Institute, 59–103.
Kennedy MM (1982b) Working knowledge. In: Kennedy MM (ed.) Working Knowledge and Other Essays.
Cambridge, MA: Huron Institute, 1–28.
Kennedy MM (1984) How evidence alters understanding and decisions. Educational Evaluation and Policy
Analysis 6(3): 207–226.
Kerr KA, Marsh JA, Ikemoto GS, et al. (2006) Strategies to promote data use for instructional improvement:
actions, outcomes, and lessons from three urban districts. American Journal of Education 112(4):
496–520.
Levin B, Sa C, Cooper A, et al. (2009) Research Use and its Impact in Secondary Schools. Toronto: CEA/
OISE Collaborative Mixed Methods Research Project Interim Report.
Light D, Honey M, Henze J, et al. (2005) Linking Data and Learning: The Grow Network Study. New York:
Education Development Center, Inc.
Lortie D (1975) Schoolteacher. Chicago, IL: University of Chicago Press.
MacIver MA and Farley-Ripple EN (2008) Bringing the District back in. Alexandria, VA: Educational
Research Service.
March JG (1994) A Primer on Decision Making. New York: The Free Press.
Marsh JA, Kerr KA, Ikemoto GS, et al. (2005) The Role of the District in Fostering Instructional
Improvement: Lessons From Three Urban Districts Partnered with the Institute for Learning. Santa
Monica, CA: RAND Corporation.
Massell D (2000) The District Role in Building Capacity: Four Strategies. Philadelphia, PA: Consortium for
Policy Research in Education, University of Pennsylvania. CPRE Policy Brief No. RB-32.
Massell D (2001) The theory and practice of using data to build capacity: state and local strategies and their
effects. In: Fuhrman SH (ed.) From the Capitol to the Classroom: Standards-Based Reform in the States.
Chicago, IL: University of Chicago Press, 148–169.
Maynard RA (2006) Presidential address. Evidence-based decision-making: what will it take for the decision
makers to care? Journal of Policy Analysis and Management 25(2): 249–265.
McLaughlin MW and Talbert J (2003) Reforming Districts: How Districts Support School Reform. Seattle,
WA: University of Washington.
Meyer J and Scott WR (1983) Organizational Environments: Ritual and Rationality. Beverly Hills, CA: SAGE.
Oh CH (1997) Issues for the new thinking of knowledge utilization: introductory remarks. Knowledge and
Policy 10(3): 3–10.
Organisation for Economic Co-operation and Development (OECD) (2007) Knowledge Management: Evidence and Education: Linking Research and Practice. Paris: OECD.
Porter RW and Hicks I (1997) Knowledge utilization and the process of policy formation: towards a framework for action. In: Chapman DW, Mahlck LO and Smulders AEM (eds) From Planning to Action:
Government Initiatives for Improving School-Level Practice. Paris: UNESCO International Institute for Educational Planning, 32–67.
Reichardt R (2000) The State’s Role in Supporting Data-Driven Decision-Making: A View of Wyoming.
Aurora, CO: Mid-Continent Research for Education and Learning.
Rich RF and Oh CH (1993) The utilization of policy research. In: Nagel S (ed.) Encyclopedia of Policy Studies. New York: Marcel Dekker, 69–94.
Roberts JME and Smith SC (1982) Instructional Improvement: A System-Wide Approach. Philadelphia, PA:
Research for Better Schools.
Rosenkopf L and Almeida P (2003) Overcoming local search through alliances and mobility. Management
Science 49(6): 751–766.
Rowan B (1986) Rationality and Reality in Instructional Management: Results from a Survey of Districts in
the Far West. San Francisco, CA: Far West Laboratory for Educational Research and Development.
Spillane JP (1998) State policy and the non-monolithic nature of the local school district: organizational and
professional considerations. American Educational Research Journal 35(1): 33–63.
Spillane JP (2000) Cognition and policy implementation: district policymakers and the reform of mathematics
education. Cognition and Instruction 18(2): 141–179.
Spillane JP and Thompson CL (1997) Reconstructing conceptions of local capacity: the local education
agency’s capacity for ambitious instructional reform. Educational Evaluation and Policy Analysis
19(2): 185–203.
Supovitz JA (2006) The Case for District-Based Reform: Leading, Building, and Sustaining School
Improvement. Cambridge, MA: Harvard Education Press.
Supovitz JA and Klein V (2003) Mapping a Course for Improved Student Learning: How Innovative Schools
Systematically Use Student Performance Data to Guide Improvement. Philadelphia, PA: Consortium for
Policy Research in Education, University of Pennsylvania.
Wayman JC and Stringfield S (2006) Technology-supported involvement of entire faculties in examination of
student data for instructional improvement. American Journal of Education 112(4): 549–571.
Wayman J, Cho V and Johnston MT (2007) The Data-Informed District: A District-Wide Evaluation of Data
Use in the Natrona County School District. Austin, TX: University of Texas.
Weiss CH and Bucuvalas MJ (1977) The challenge of social research to decision making. In: Weiss CH (ed.)
Using Social Research in Public Policymaking. Lexington, MA: Lexington Books, 213–230.
Weiss CH and Bucuvalas MJ (1980) Social Science Research and Decision-Making. New York: Columbia
University Press.
Weiss CH, Murphy-Graham E and Birkeland S (2005) An alternate route to policy influence: how evaluations
affect D.A.R.E. American Journal of Evaluation 26(1): 12–30.
West RF and Rhoton C (1994) School district administrators’ perceptions of educational research and barriers
to research utilization. ERS Spectrum 12(1): 23–30.
WestEd (2002) Improving Districts: Systems that Support Learning. San Francisco, CA: WestEd.
Yin R (2003) Case Study Research: Design and Methods. Thousand Oaks, CA: SAGE.
Biographical Note
Elizabeth Farley-Ripple is an Assistant Professor in the School of Education at the University of
Delaware. She earned her PhD in Education Policy from the University of Pennsylvania. Her
research interests lie in the area of evidence-based decision making, leadership and education
policy.