Integrating and Advancing Policy & Program Implementation Research

Jodi Sandfort, Associate Professor, Humphrey School of Public Affairs, University of Minnesota
Stephen Roll, Doctoral candidate, Glenn School of Public Affairs, The Ohio State University
Stephanie Moulton, Associate Professor, Glenn School of Public Affairs, The Ohio State University

October 23, 2013

Paper presented at the Association for Public Policy Analysis and Management annual conference, Washington, DC, November 7-9, 2013.

Within our field, the topic of policy implementation has a complex and controversial intellectual history (deLeon & deLeon, 2002; Klijn, 2005; O'Toole, 2004). On the one hand, the implementation of public policies and programs can be viewed as a component of mainstream public administration and management research. Topics such as human resources, budgeting practices, performance measurement, or privatization strategies provide insights about how agencies contribute to (or detract from) successful policy outcomes (Brown, Potoski, & Van Slyke, 2006; Fredrickson & Fredrickson, 2007; Moynihan, 2008). Yet the advent of a new generation of public policy schools in the 1970s proclaimed implementation a "new" topic of scholarly exploration; rather than starting from the organization as the unit of analysis, this stream of scholarship started with a specific policy, isolating the managerial dimensions of policy or program implementation in relation to other causal factors (Bardach 1977; Mazmanian and Sabatier 1989; Pressman and Wildavsky 1973). During the 1980s and 1990s, significant attention went to developing a generalizable model of policy and program implementation (DeLeon 1999; Goggin et al. 1990; O'Toole 1993). However, this scholarly attention has faded in the last ten to fifteen years within the core disciplines of public affairs and political science. At the same time, the growing demand by policymakers for the adoption of evidence-based practices in education, health care delivery, and children and youth services has caused research on implementation in these services to grow. Saetren (2005) offered some evidence of this trend: during the period 1985-2003, 72% of the nearly 2,500 scholarly publications referencing "policy implementation" were published in journals outside of public administration, policy, and political science.

In this paper, we embark upon a similar investigation, trying to learn more about the state of research on policy and program implementation through a systematic analysis of scholarly articles published over the last ten years. We deepen the analysis by applying an insight gleaned from prior public affairs scholarship (Berman 1981; Hill and Hupe 2008; Kiser & Ostrom 1982; Lynn, Heinrich, and Hill 2001; Robichau and Lynn Jr. 2009; Van Meter & Van Horn 1975): studying program and policy implementation requires a multi-level analysis, because the field, organizational, and frontline settings each have a unique impact on shaping the implementation process. In this paper, we describe the scope, approach, and focus of policy and program implementation research. In our analysis, we pay particular attention to whether or not existing studies reflect the reality that implementation takes place at these multiple levels. While this is acknowledged conceptually in the literature, we know less about the extent to which implementation research reports findings relevant to multiple levels.
Promisingly, we find that one in four studies produces findings that cross multiple levels of the implementation system, and nearly half contribute findings relevant to the specific program under study. However, representation across levels is not equal, and our analysis suggests that greater attention to, and integration of, organizational and frontline factors in studies of policy and program implementation may be warranted.

The Study of Program and Policy Implementation

Reviews of previous literature on policy implementation often start with Pressman and Wildavsky's 1973 book, Implementation. However, scholarship informing implementation began years before in fields such as public administration, political science, sociology, and economics. While not explicitly focused on a specific policy or program as the unit of analysis, these fields have provided insights about public sector management, political systems, institutions, and exchange systems critical to an understanding of policy implementation. A classic example from sociology is Phillip Selznick's (1949) analysis of the Tennessee Valley Authority (TVA), where he observed that the goals and outcomes of the federal economic development initiative were shaped substantially by the local implementation context, where cooptation by local leaders occurred.

While implementation questions had been previously explored in other fields, the Great Society programs and the subsequent growth of government interventions in the 1960s and 1970s spawned increasing attention to policy as the unit of analysis. Policy analyses were launched in an effort to document the effectiveness (or ineffectiveness) of government interventions and thus secure (or eliminate) continued funding. In the process of analyzing problems and putting forward policy solutions, policy scholars realized that solutions needed to be implemented and, thus, investigated this process (Allison 1972; Easton 1979; Pressman and Wildavsky 1973). This shift not only altered the unit of analysis but also signified an important normative change. Conventional public administration was seen as focused on bureau politics and process, offering little relevance to public service delivery carried out through different instruments and an array of institutions. Political scientists weathered their share of critique from the emerging implementation scholars as well; sequential linear models that depicted implementation as merely a phase in the policy process were challenged (Sabatier and Jenkins-Smith 1993). As traditional public administration and political science approaches were questioned, new schools of public policy and public affairs were launched in many major universities (Lynn 1996), with new attention paid to policy and program implementation. (Attention to this topic continues to be strong in these schools, judging by their curricula: in our current scan of courses in fifty-two public policy and public affairs programs around the globe, including the top 20 programs in the United States, the majority (60%) have courses directly related to policy and program implementation.)

The first generation of implementation research was predominantly case studies taking a "top-down" approach to understanding and improving implementation. For example, Pressman and Wildavsky's (1973) book provided a case study of the local implementation of a federal economic development program in Oakland, CA. One of their key insights was the "complexity of joint action": the multitude of actors with different missions and timelines created implementation challenges, and common approaches to policy design were too indirect, requiring too much negotiation among diverse actors.
Their prescriptions for improvement, therefore, were tied to better policy design from policymakers at the top to structure the local implementation context. Others offered similar top-down advice for better policy design (Bardach 1977; Mazmanian and Sabatier 1989; Van Meter and Van Horn 1975; Sabatier and Mazmanian 1980), stressing the importance of clear goals, limiting the scope of change, and restricting the number of actors to improve the likelihood of effectiveness.

As might be expected, other scholars pushed back against the top-down approach, emphasizing that it was neither technically feasible nor politically viable for policymakers to comprehensively structure implementation (Berman 1978, 1981; Elmore 1979-80; Lipsky 1980). For example, Berman (1978, 1981) observed wide variation in the same policy in different local (micro-implementation) environments, requiring policy designs that could be adapted to local conditions to prevent policy failure, particularly for unclear technologies. These "bottom-up" scholars often called for a mapping of the local implementation context and the relationships between actors (Elmore 1979, 1982; Hjern and Porter 1981) that could then describe the incentive structures and behaviors on the ground. Rather than an either/or approach, contingencies were offered: top-down designs might be more applicable when technology and goals are clear and the environment is tightly coupled; otherwise, a bottom-up approach might be most appropriate (Elmore 1982; Berman 1978; Sabatier 1986).

By the late 1980s, a plethora of variables had been identified at the top (policy design) and bottom (context) that might affect implementation outcomes; however, there was no comprehensive framework or theoretical approach by which to make sense of these variables (Goggin 1986; O'Toole 1986). Thus, the next phase in policy implementation research was marked by frameworks and techniques to integrate factors affecting implementation and to specify when certain types of factors would be more (or less) important (Goggin et al. 1990; Matland 1995; Rothstein 1998; Sabatier 1988; Schneider and Ingram 1990). Perhaps the most comprehensive framework was proposed by Matland (1995), who analyzed the conditions under which implementation success should be driven by fidelity to a policy design (which would then lead to intended policy outcomes), versus evolving from the implementation process itself (e.g., through experimentation) in order to achieve positive outcomes.

The focus on implementation within the core disciplines of public affairs began to dissipate in the 1990s, leading some to conclude that interest in the subject had declined (DeLeon 1999), or that the focus was no longer useful compared to more precise research questions, constructs, and methods for analyzing complex systems (O'Toole 2000). However, research related to implementation continues to evolve, and three trends are particularly worth mentioning. First, some scholars pushed to focus on coordinating mechanisms or policy tools (Schneider and Ingram 1990; Salamon 2002). These authors take the coordinating mechanisms as the unit of analysis (e.g.,
contracts, grants, subsidies, incentives) rather than a policy or program; they seek to identify systematic variation in these tools of government to inform their appropriate use in different situations (Salamon 2002). While heuristically this approach has merit in explaining the different "levers" involved in implementation processes, research has demonstrated that there continues to be much variation in outcomes that cannot be explained by the tools (Blair 2002; Romzek and Johnston 2002; Sandfort, Selden, and Sowa 2008; Twombly and Boris 1999). Further, the choice of a particular tool is endogenous to the implementation environment. The mechanisms used to coordinate action are specified not only by the technology, but also by power distributions (e.g., Matland 1995) and the institutional environment (e.g., Berman 1980; Elmore 1982). Because the tool is an endogenous variable, it is not sufficient to focus primarily on coordinating mechanisms to describe or predict implementation outcomes. Thus, while the tools approach helps clarify one dimension of the implementation context (coordination), it does not sufficiently replace the broader implementation research agenda.

Second, while many scholars stressed that implementation systems are multi-level and multi-actor (e.g., Berman 1978; Hall and O'Toole 2000), more mainstream public management moved from a concern with governmental agencies to consider multi-level governance and networks (Agranoff and McGuire 2003; Frederickson 2005; Lynn et al. 2001; Milward and Provan 2003). The focus of this scholarship is broad and less centered on specific policies or programs than the original implementation research. However, the recognition that implementation takes place within multi-level systems complicates some of the prior frameworks that have sought to make sense of the implementation environment. For example, although conflict and ambiguity are critical dimensions affecting implementation outcomes (Matland 1995), these factors exist not only in policy formation but also within implementing organizations and at the frontlines. For implementation research to be useful, effort is needed to more appropriately integrate the multiple levels within the system; our analysis of the existing scholarly research over the last ten years moves us in that direction.

A final relevant trend is the blossoming of implementation studies in other research fields (Nilsen et al. 2013; Saetren 2005). In areas such as medicine, community psychology, early childhood development, youth and family programs, and education, researchers have made considerable inroads in developing models and methods for studying implementation. These investigations have modest scope, but a new section of the American Psychological Association, a new journal (Implementation Science), and a biennial conference sponsored by the Global Implementation Initiative bespeak the growth of this area of research. In the introductory volume of Implementation Science, the editors clarified their charge (Eccles & Mittman, 2006:1): "Implementation research is the scientific study of methods to promote the systematic uptake of research findings and other evidence-based practices into routine practice, and, hence, to improve the quality and effectiveness of ... services and care." Like the original policy and program implementation studies, however, many models and theories have developed; a recent analysis identified sixty-one different models being used to explore innovation dissemination and implementation (Tabak et al. 2012).
Recent reviews are seeking to develop integrative frameworks or conceptual models (Durlak and DuPre 2008; Fixsen et al. 2005; Greenhalgh et al. 2004; Meyers, Durlak, and Wandersman 2012). Yet there is recognition that scant attention is paid to the policy environment; while it is widely recognized as significant, only 13 percent of the models in Tabak and colleagues' catalogue incorporate policy activities. In the widely used "Consolidated Framework for Implementation Research," the policy environment is merely referred to as the "outer setting" (Damschroder et al. 2009). In spite of this fairly limited conception, a number of federal agencies in the Departments of Education, Health and Human Services, and Veterans Affairs are investing in the implementation science approach to scale evidence-based interventions because, for policymakers interested in affecting outcomes, knowledge about implementation is essential.

In summary, issues of policy and program implementation are even more pressing today than they were in the 1970s. In this analysis, we probe the paradox that Saetren (2005) articulates: public affairs scholars believe interest in implementation research has faded, and yet there is a thriving research field exploring implementation questions important for public and nonprofit managers and policymakers. Rather than a discrete body of literature within public affairs or political science, research on implementation today is more heterogeneous and spread across a variety of fields of study. One way to make sense of this diverse literature is to classify its findings according to the level of analysis, probing the extent to which implementation research is focused on the frontlines, organizations, and/or policy systems.

Multi-Level Framework

Our analysis is informed by the notion that significant policy and program implementation activities occur at differentiated levels of a larger policy system (Berman 1981; Hill and Hupe 2008; Robichau and Lynn Jr. 2009; Sabatier and Jenkins-Smith 1993); a multi-level framework is therefore appropriate. Implementation influences results at the frontlines, where the policy system interacts with the target population, such as children, taxpayers, employers, or business owners. At this micro level, many factors may be significant, such as target groups' composition and attitudes, staff background and experiences, and the things that structure their interaction, such as paperwork or technology (Lipsky 1980; Sandfort 2003; Watkins-Hayes 2009). These interactions are directly shaped by other factors at the organizational level. Service organizations' resources, structures, cultures, and competing programmatic responsibilities often determine the most prudent way an agency responds to policy and program implementation pressures (Lin 2000; Spillane 1998). Authorizing agencies also shape implementation through the administrative rules adopted, funding instruments selected, and performance definitions (Berman 1978; Wilson 1991). These organizational-level factors are found at the meso level of implementation systems. Finally, at the policy field level, other macro-level factors come into focus (Milward and Wamsley 1984; Sandfort 2010; Stone and Sandfort 2009; Weible, Sabatier, and McQueen 2009; other scholars have referred to this level as the "policy subsystem"). The institutions, laws, and tools used to address a given type of issue in the past shape the way implementation problems and solutions are understood. Networks among professionals, concentrations of power, and resources also are significant.
To explore the significance of a multi-level system approach throughout the existing research, our analysis categorizes research findings as relevant to deepening understanding at the micro, meso, and/or macro levels. By differentiating among these levels, we move closer to understanding the state of research about policy and program implementation and point toward new avenues for renewed investigation within public affairs.

Research Questions and Methodology

Our central research question is, "What is the scope, approach, and focus of the scientific field of policy and program implementation research?" We are specifically interested in how the multi-level framework suggested by policy and program implementation researchers appears in the extant literature. To explore this question, we based our approach loosely on that undertaken by Saetren (2005), who examined the development of policy implementation research from 1933 to 2003. We drew our sample from journals listed for 2004-2013 in the Expanded Social Science Citation Index in the Web of Science, which covers over 8,500 major journals. From a potential population of over 14 million articles published over the years relevant to our study, we included any article containing the phrase "policy implementation" or "program implementation" in the title, abstract, or author-supplied keywords, using HistCite software. (Initially, we sought to identify sources using criteria that included the words "policy" or "program" and the word "implementation," as well as variations on these words, anywhere in the abstract, title, or keywords; this initial scan yielded roughly 25,000 articles, a number too large to reasonably manage.) In the course of our subsequent coding, we found 436 articles that, while including the key terms in their title, abstract, or keywords, were not really focused on the topic. This is undoubtedly because of the more conventional use of the term "implementation": for example, articles focused on the implementation of computer programs, or those mentioning "implementation" informally as a descriptor. We omitted these articles, as well as a negligible number (4) of articles that did not include an abstract, for a total of 1,375 articles in our full sample; the screening logic is sketched in code below.

We begin our review of this literature with a bibliometric analysis, exploring the concentration of publication venues as an indicator of scholarly communication in a field. In Saetren's (2005) treatment of the scholarly implementation field, he includes policy, public administration, and political science journals in his sample as "core" public affairs venues. We first adopted his approach in coding our sample and found trends similar to what he reported ten years ago: in our analysis, 9 percent of the studies were published in his "core journals" for public affairs compared to 12 percent in 2003, 14 percent were in his "near-core" journals compared to 16 percent, and 77 percent were in non-core journals compared to 72 percent. Yet we wanted to strengthen this analysis on a number of key dimensions. First, we utilize a more objective assessment of journals, relying upon the Web of Knowledge (ISI) to identify journals officially classified as "public administration" or "political science" to constitute our notion of "public affairs core" journals. Second, because of our understanding of the larger trends of implementation scholarship, we wanted to explore the growing attention to micro-implementation dynamics found in implementation science.
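The following is a minimal sketch of the screening step described above, assuming article records exported from the citation index as Python dictionaries with "title," "abstract," and "keywords" fields; those field names are our own illustrative assumption, not the actual HistCite workflow.

    # Minimal sketch of the abstract-screening step (illustrative only).
    # Assumes records are dicts with "title", "abstract", and "keywords"
    # fields; these field names are assumptions, not the HistCite format.

    PHRASES = ("policy implementation", "program implementation")

    def mentions_phrase(record):
        """True if either exact phrase appears in title, abstract, or keywords."""
        text = " ".join([
            record.get("title", ""),
            record.get("abstract", ""),
            " ".join(record.get("keywords", [])),
        ]).lower()
        return any(phrase in text for phrase in PHRASES)

    def screen(records):
        """Apply the inclusion rules described in the text."""
        sample = []
        for record in records:
            if not record.get("abstract"):     # drop the few articles lacking abstracts
                continue
            if not mentions_phrase(record):    # require an exact phrase match
                continue
            sample.append(record)
        return sample

The 436 off-topic articles (for example, those about implementing computer programs) were identified and removed during hand coding rather than by any automated rule of this kind.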
To capture this trend, we identified the ten journals most frequently cited as significant outlets in numerous literature reviews (Durlak and DuPre 2008; Fixsen et al. 2005; Meyers et al. 2012), designating them as "implementation science core" journals. (They are: American Journal of Community Psychology; Health Education and Behavior; American Journal of Evaluation; Health Education Research; Journal of Primary Prevention; Prevention Science; Implementation Science; Journal of Community Psychology; Children & Youth Services Review; and American Journal of Public Health.)

Given the larger scholarly trends, we also wanted to conduct a more detailed analysis of the full sample of articles. To enable closer coding, we utilized NVivo software to differentiate the abstracts across a number of key elements. We developed the descriptive coding scheme described here, adjusting it after an initial review of 30 percent of the articles in our sample so that it accurately and comprehensively describes the sample. While we did all coding from the abstracts, occasionally the keywords and titles of the articles were used to inform the coding when the abstract was unclear.

Consistent with our interest in the multi-level framework, the findings in each article are coded as relevant to the "program-specific," "front-line," "organizational," or "policy-field" levels, or as "general implementation" findings. Program findings include any findings directly related to the evaluation of a specific program or policy in terms of its outcomes or impacts; these are likely the least generalizable outside the context of the specific study. Front-line factors are those relevant to understanding frontline staff, clients, or their interactions, as well as any findings about how these dynamics affect policy. Organizational factors are those organizational characteristics significant in the implementation process, such as managerial characteristics, culture, capacity, resources, or facilities. Policy field factors are those with large-scale implications for policy, networks, or the general structure of the policy system. General implementation factors are those focused on the improvement of implementation knowledge, either at a general level or within the context of a nation or state. We determined our coding of findings in relation to the focus and/or the design of the study. By this standard, a study using a survey to assess teachers' responses to the implementation of a mentoring program would be coded as presenting "front-line" findings; if the abstract also mentioned potential broader implications for the education field, it would not be coded at the policy-field level, as those claims would be largely speculative and beyond the scope of the actual analysis within the study.
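As a descriptive restatement, the coding frame and its design-based decision rule can be summarized as in the sketch below. The category descriptions paraphrase the definitions above, and the worked example encodes the teacher-survey illustration; the data structure itself is our illustration, not the actual NVivo codebook.

    # Restatement of the findings-level coding frame (illustrative,
    # not the actual NVivo codebook).
    FINDING_LEVELS = {
        "program-specific": "outcomes or impacts of the specific program or policy studied",
        "front-line": "frontline staff, clients, and their interactions (micro level)",
        "organizational": "managerial characteristics, culture, capacity, resources (meso level)",
        "policy-field": "networks, large-scale policy, structure of the policy system (macro level)",
        "general": "implementation knowledge in general, or within a nation or state",
    }

    # Worked example: a survey of teachers about a mentoring program.
    # The abstract asserts both front-line and policy-field relevance,
    # but the design only sustains front-line claims, so only that
    # level is coded.
    claims_in_abstract = {"front-line", "policy-field"}
    supported_by_design = {"front-line"}
    coded_levels = claims_in_abstract & supported_by_design
    assert coded_levels == {"front-line"}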
There were a number of other descriptive dimensions of interest. We coded articles for their geographic focus to be able to describe the sites of investigation. We coded specific policy and program content areas: health, education, environment, social welfare, general implementation, crime, agriculture, city and regional planning, energy, transportation, science and technology, food, international development, monetary policy, international relations, private sector implementation, and miscellaneous. While most of these areas are relatively clear, articles coded as "general implementation" are those that did not focus on a specific content area but rather on the topic of implementation itself; for example, a study of how American federalism inhibits efficient policy implementation would fit the "general implementation" code. Furthermore, for interdisciplinary content areas, we base our coding on the dependent variable or focus of the study. For example, while a school vaccination program may happen in an educational setting, it is coded as "health" rather than "education" since the outcome of interest is student vaccination. While we allowed abstracts to be coded into multiple content areas, most fell within a single area.

We also are interested in a number of target populations expected to have some substantial presence in the implementation literature: children, the disabled, the elderly, the medically vulnerable, parents and families, racial and ethnic groups, and those in poverty. Not all articles in the sample are coded to a specific target population, and some were coded into more than one of these categories, as most of the target population definitions are not mutually exclusive. (The keywords we searched for were often synonyms for the target population, such as "aged" or "senior" for the elderly target, or "youth" or "adolescent" for the child target. Where appropriate, we also included a number of specific words associated with the targets based on an initial review of the abstract sample: for the medically vulnerable target, these included references to specific diseases or ailments commonly referenced in the literature, such as HIV, malaria, or malnourishment, while for racial or ethnic minorities we searched for specific groups, such as African-American or Latino/Latina.)

We also examine research type, differentiating "conceptual" from "empirical" studies. Conceptual studies focus on implementation without any accompanying data or specific implementation case. Empirical studies are those that use some source of data or specific real-world instances of policy or program implementation as their focus; we further subdivide these based on their methods of analysis and research design. The methods include "quantitative," "qualitative," and "mixed" studies that use both qualitative and quantitative approaches. Qualitative methods were identified by an abstract referencing any research design focused on the collection of generally unstructured data: open-ended surveys, interviews, ethnographies, and content analyses were common approaches coded as qualitative research. Quantitative methods were identified by references to surveys, positivist research designs, or statistical analysis; typical quantitative designs involved experiments, secondary data analysis, or closed-ended surveys. The criterion for a study being included in one of these frames was an explicit reference to some qualitative or quantitative data. A substantial number of the empirical abstracts (32%) do not include any explicit reference to qualitative or quantitative methods and are coded as unclear. From our review, this group is largely comprised of descriptive case studies without any apparent rigorous study design; we drop these cases for our exploration of the scientific focus of this literature.
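As a descriptive illustration, the method-coding rule just described can be summarized as follows. The cue lists are illustrative stand-ins for judgments that human coders actually made while reading each abstract.

    # Keyword-cue sketch of the qualitative/quantitative/mixed/unclear coding.
    # Cue lists are illustrative; the actual coding was done by human readers.
    QUALITATIVE_CUES = ("open-ended", "interview", "focus group",
                        "ethnograph", "content analysis")
    QUANTITATIVE_CUES = ("experiment", "statistical", "secondary data",
                         "closed-ended", "regression")

    def classify_methods(abstract: str) -> str:
        """Return 'qualitative', 'quantitative', 'mixed', or 'unclear'."""
        text = abstract.lower()
        qual = any(cue in text for cue in QUALITATIVE_CUES)
        quant = any(cue in text for cue in QUANTITATIVE_CUES)
        if qual and quant:
            return "mixed"
        if qual:
            return "qualitative"
        if quant:
            return "quantitative"
        return "unclear"  # roughly 32% of empirical abstracts fell here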
We also coded mentions of specific research designs, including case studies, experimental designs, content analyses, ethnographies, and literature reviews (studies were only coded as having a literature review design if the review was the sole focus), and of specific research methods, including interviews, surveys, secondary data analyses, observations, and modeling or simulation. Studies could fit into multiple code frames if they included several different research methods. Finally, we were also interested in the degree to which implementation articles were focused on evaluating or analyzing specific public policies, and coded the abstracts based on whether they made explicit reference to a particular public policy as a focus of their analysis.

Through this detailed review of the sample, we discovered that a significant portion (25%) of the articles did not meet basic standards of scientific rigor. They were neither conceptual nor empirical, but rather most often provided general case description without any explicit methodological frame or design. While our first results below cover the full sample of 1,375 articles, which includes these descriptive accounts, our more in-depth analysis of research approach and findings focuses on the 1,033 articles that apply basic social science methodology.

Results & Discussion

To update our understanding of the major contours of policy and program implementation research, we unpack our research question in stages, first describing the scope and focus of all published research. We then limit the sample to only those studies using structured research methods to further explore the findings across the multiple levels of policy and program implementation.

What is the scope and focus of published research on policy and program implementation?

In his article, Saetren (2005) documents 3,523 research articles published in the field over the seventy-year period he studies, seventy percent of them between 1985 and 2003. The final five-year period covered in his sample reveals significant growth in the number of articles published. In the ten years since 2003, our sample includes 1,375 articles, further evidence of this expansion. Taken together, there is a sizable body of research literature exploring policy and program implementation in scholarly journals. Together with the other indicators noted earlier, such as the development of new journals and research associations, this evidence suggests a robust field of inquiry.

To get a deeper understanding of the scope of the field, we sought to understand the distribution of publications by outlet. Table 1 presents the comparison between publication outlets: twelve percent of the articles are found in core public affairs journals and six percent in implementation science core journals. It is notable that none of the journals we classify as implementation science core overlap with the ISI public affairs core journals; this suggests that this strand of implementation scholarship is developing parallel to the extant public affairs research.
When one adjusts for the much broader pool of public affairs journals (over 190 are included in the ISI classification), it is also striking that the ten journals designated as implementation science core are capturing a comparably larger share of articles focused on policy and program implementation.

Table 1

Geographic focus is another dimension of research scope. Table 2 reports on the geographic focus of program and policy implementation research; while the majority of studies explore conditions in the United States and North America, this literature is certainly international. Twenty-one percent of articles focus on European contexts (predominantly Western Europe), twelve percent focus on Asia, eight percent on Africa, and another eight percent on international policy or programs. In the last ten years, then, this literature reflects concern for implementation dynamics around the globe.

Table 2

As noted above, unlike more general organizational management studies, implementation research is not focused on changing the operations of networks, organizations, and frontlines in general, but upon doing so in relation to particular problems found in specific content areas. Table 3 illustrates the distribution of our sample across 17 distinct content categories. Health is the most common, with nearly half of the sample (48 percent) coming from this arena. Education (19%), environment (11%), and social welfare (9%) are the next most common focus areas; a handful of papers concentrate on the other areas. While Saetren's (2005) assessment of 1933-2003 publications also finds health, education, environment, and social welfare to be the most common topical areas, health comprised only 24% of his sample during that period, with a comparable number of articles focused on education. This suggests the growth of implementation studies focused on health care, again consistent with the more general picture painted earlier about the focus of the implementation science research stream.

Table 3

Another dimension of scope is whether or not the research includes mention of a specific public policy, including legislation, executive orders, or agency mandates. Historically, conceptualizations of policy implementation take enacted public policies as their starting point. However, others have rightly argued that much of policy implementation takes place on the ground, with ongoing evolution of programs. Thus, we explore the extent to which published research on policy and program implementation includes explicit mention of a particular public policy in the abstract or keywords (suggesting that formal policy is central to the analysis). Of those mentioning specific policies, we further identify the level of government for the policy (federal, state, local, or international). Of the articles in our sample, nearly a quarter of the abstracts mention a particular policy. Articles focused on agriculture or the environment had the greatest focus on specific public policies, with over 40% of the abstracts coded in each field making reference to specific policies.
Abstracts in areas with a relatively heavy program-specific focus, such as health or education, have a smaller proportion of studies dedicated to explicit policy evaluations, with only 23% of education abstracts and 14% of health abstracts referencing explicit policies. Social welfare abstracts fall in the middle of the aforementioned fields, with almost 30% of social welfare articles making explicit mention of public policies.

Because policy and program implementation is fundamentally about making change in some condition, we also were interested in research focused on target groups. While the possible target groups are diverse, in this description we focus on children, the disabled, the elderly, the medically vulnerable, parents and families, the poor, and racial or ethnic minority groups. Thirty-four percent of the articles on program and policy implementation focused their attention on services reaching one of these groups. Most common were the medically vulnerable (15%), followed by children (13%) and parents/family (7%), not surprising in research in which the majority of studies focus on either health or education. (Interestingly, only four percent focus on programs or policies assisting the poor and three percent on racial or ethnic minority groups; even smaller shares focus on the elderly (2%) or the disabled (1%).)

As the history of this scholarly field and recent trends suggest, there is apt to be a diversity of approaches to conducting research about policy and program implementation, and we wanted to describe this diversity in our sample. We first differentiated between conceptual studies and those working with some type of data or information. A small portion of the articles (6%) were conceptual; a typical conceptual study might investigate the potential impact of implementing a policy tool, such as the influence carbon tax schemes might have on international trade dynamics. We then explored in more detail the research approach among the empirical studies. Dominated by the health care content area, the most prevalent research methods were quantitative (35%), using surveys or secondary data and statistical analysis methods; in fact, experimental designs, population surveys, and secondary data analysis are the most common research methods used in the whole sample. Qualitative research methods, such as interviews, focus groups, and ethnographic methods, are found in twenty percent of the articles and are particularly used in education, environment, and social welfare topics, where they are deployed as frequently as quantitative methods. A small number of articles, six percent of the whole sample, use a research approach that mixes qualitative and quantitative methods.

Significantly, a sizable group of program and policy implementation articles did not clearly specify a standard research method or design in the article abstracts. The most sizable group was the one in four articles (24.8%) that did not deploy systematic research methods at all but merely described a particular case of policy and program implementation. This result raises concern about the rigor of a subgroup of these publications; some topics with a low concentration of policy and program implementation studies (agriculture, city and regional planning, international development and international relations, science and technology, and transportation) had a high prevalence of such descriptive case-based articles.
In contrast, health and education, where fifty-eight percent of all articles were published, had a low incidence of these types of articles. This suggests that basic standards of social science are expected in the areas in which larger numbers of scholars are investigating program and policy implementation. In the rest of our analysis and exploration of the multi-level framework, we drop these articles and focus on those applying conventional social science designs and methods (n = 1,033). Practically, this omission increases the overall significance of research concentrating on health, as over half of the remaining rigorous articles focus on that content area. For that reason, we present this analysis by content area, to illuminate important distinctions that might otherwise be obscured.

As Table 4 illustrates, the approaches vary significantly across these areas, suggesting field-based research practices. For example, it is interesting to see that while experimental designs are used relatively frequently in health and crime, they are not common in many other topics. Although case studies are a common approach, their relative proportion in any field varies from none to one in two (in studies of private business). Because the codes are not exclusive (e.g., each study could use multiple methods), each cell of this table should be understood in reference to the total number of research articles focused on that content area. For example, 20 percent of the studies of policy and program implementation focused on agriculture are case studies, seven percent are content analyses, and seven percent are systematic literature reviews; thirteen percent involve interviews, twenty-seven percent analyze secondary data, and none use systematic observation.

Table 4

These results provide context for scholars conducting implementation research in particular content areas, helping to illuminate the dominant research designs and methods used in the existing literature. They also highlight the significant variation in research approaches being used throughout the field. As noted earlier, while experimental designs are the most common research design, they are being used only in selected fields; in fact, none of the research methods we coded for are utilized across all of the content areas of research. In response to this variation, some implementation science researchers are creating shared collections of data and measures, such as that being developed by the Seattle Implementation Research Collaborative, to bring more alignment on these issues (see http://www.seattleimplementation.org/wp-content/uploads/2011/08/SIRC_IRP-Update_2013.pdf, retrieved September 10, 2013).

What is the focus of the scientific field of policy and program implementation research?

To probe more deeply the manifestation of a multi-level framework in implementation research, we explored study findings relevant to four different levels. Given that implementation occurs at multiple levels, and that scholars have long recognized this phenomenon, it would be desirable for research to reflect this multi-level reality in the reporting of results, to inspire informed implementation practice. Overall, 24 percent of the studies in our sample of rigorous studies present findings relevant to multiple levels of implementation systems. Almost half of the rigorous studies (49 percent) report program findings directly related to a particular program or policy and its results.
This finding challenges a criticism often made of implementation research: that it loses sight of ultimate results, whether in system change or target group conditions. Almost four hundred of the studies (37 percent) also include findings related to the macro policy-field level: issues relevant to networks, large-scale policy, or the general structure of the system. Smaller numbers of articles include findings relevant to frontline conditions (19 percent) or organizational factors (16 percent), such as managerial characteristics, culture, or capacity. Like all dimensions of this analysis, though, there is important variation in these patterns by content area.

Table 5 presents the level of findings by selected content areas (those with more than 100 publications). As is true throughout this analysis, the majority of studies cluster in health and education, where program-specific findings predominate. As noted earlier, considerable implementation research is now focused on examinations of particular health- or education-related interventions. Many social welfare studies (39%) also report findings relevant to particular interventions. While such findings hold substantive implications, the results are often not particularly generalizable to other implementation questions. For example, relatively low proportions of implementation studies in education, environment, health care, or social welfare report results in terms of organizational-level factors (managerial actions, organizational culture or capacity, or resources). However, this level of implementation is substantively important in structuring the very terms of policy and program delivery. We also are interested in whether or not findings are reported across the multi-level framework. At least one in five studies in these content areas reports multi-level findings in its abstract. The descriptive statistics also suggest that the education and health fields are dominated by program-specific implementation studies, whereas environmental studies report policy-field, macro-level results (chi-squared analysis reveals statistically significant differences across these categories of comparison). Social welfare implementation studies appear to report a more diverse array of findings (differences not statistically significant); furthermore, 38 percent of social welfare implementation studies report findings that cross the program, frontline, organizational, or policy field levels.

Table 5

As noted earlier, there are significant variations in the research approaches used across this literature. Table 6 presents information about these differences, again by considering the results presented in the articles in relation to a multi-level framework. While some article abstracts were ambiguous about their research approach, our analysis highlights that researchers are deploying particular methods to generate particular types of results in their studies of implementation. Among the studies reporting program-specific findings, for example, quantitative methods clearly predominate; more particularly, 39 percent of program-specific results come from the use of experimental designs, again likely concentrated in the health area.
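Group comparisons like those reported for Tables 5 and 6 rest on chi-squared tests of independence, for example between content area and level of findings. A minimal sketch using scipy follows; the cell counts are hypothetical placeholders, not our actual tabulations.

    # Chi-squared test of independence: content area x finding level.
    # The counts below are hypothetical placeholders, not our actual cells.
    from scipy.stats import chi2_contingency

    #        program, frontline, organizational, policy-field
    table = [
        [300, 90, 70, 120],  # health
        [110, 40, 30, 55],   # education
        [25, 10, 12, 75],    # environment
    ]
    chi2, p_value, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p_value:.4g}")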
In contrast to this quantitative dominance among program-specific findings, studies presenting findings relevant to the frontline or organizational levels of practice rely predominantly upon qualitative (or mixed) research approaches. Among studies reporting findings relevant to frontline conditions, 29 percent use case studies and 25 percent rely upon interview methods. Among studies reporting organizationally relevant findings, 29 percent use either interviews or surveys. Findings related to macro policy-field conditions are split between quantitative and qualitative approaches, but these studies also reported their research approach more ambiguously in article abstracts.

Table 6

Overall, this analysis highlights the diverse research being conducted about policy and program implementation over the last ten years. While it seems a vibrant area of scholarship, it is notable how distinct content areas pursue unique methods of research with findings targeted at different levels of analysis. This raises questions about the overall orientation towards building generalizable knowledge about implementation processes useful for professionals operating at distinct levels within a larger implementation system. In our future analysis of these themes, we plan to explore in more detail articles seeking to provide evidence about the multiple levels within implementation systems. Although this population-level analysis suggests some themes, we are interested in the degree to which particular implementation dynamics of resource adequacy, coordination, culture, service technology, and change are addressed and explored.

Conclusions

Our analysis clearly reinforces Saetren's (2005) conclusion that scholarly research has moved from the traditional disciplines of political science, public administration, and public policy and is now more concentrated in content areas, such as health and education, concerned with examining implementation dynamics in relation to program-level results. It is a research community exploring policy and program implementation around the world using a wide array of research designs and methods. Specifically, we find that in the ten-year period between 2003 and 2013, only 12 percent of published articles on "policy implementation" or "program implementation" appear within the 190 journals classified as public affairs journals by the ISI Web of Science. Further, new journals focused specifically on "implementation science" have been growing, publishing research relevant to policy and program implementation more generally. And scholarship is by no means limited to the U.S.: while the U.S. is the most common setting (representing 39 percent of articles), Europe is close behind (one in five studies), with Asia also representing more than 12 percent of published studies.

The primary contribution of our analysis, however, is to describe the extent to which findings from research on policy and program implementation inform a multi-level framework of governance. Researchers in public affairs have acknowledged that implementation takes place within a multi-level system including front lines, organizations, and policy fields (Berman 1978; Hill and Hupe 2008; Lynn et al. 2001; Robichau and Lynn Jr. 2009). Criticisms of policy implementation research suggest that it is either too broad, without policy-specific relevance, or too narrow, focused so singularly on a particular program that it is not generalizable outside of the particular context.
To the extent that policy implementation research produces findings at various levels, particularly multiple levels within the same analysis, such research may be able to provide policy-specific relevance while still contributing to generalizable knowledge. Promisingly, we find that one in four studies produces findings that cross multiple levels of the governance framework, and nearly half contribute findings relevant to the specific program under study. However, while more than one-third of studies contribute findings at the policy field level of analysis, the frontline and organizational levels comprise only 19 and 16 percent of findings, respectively. This suggests that a greater integration of organizational and frontline factors into studies of policy and program implementation may be warranted.

Finally, our analysis also explores contributions to multi-level implementation findings by policy field and methodological approach. As might be expected, findings from environmental policy studies are concentrated at the policy field level (76 percent), while findings from the health field are concentrated at the program level (65 percent). This makes sense from a practical perspective; most environmental policies involve numerous field-level actors to bring about change, while health policies and programs may be implemented on a smaller scale within a specific clinic or health facility. However, cross-fertilization can occur when environmental policy researchers model some of the front-line and organizational factors, and health policy researchers consider policy field factors. Our analysis suggests that there is more room for growth in this regard. In terms of methodology, we find that while program-level findings are more likely to result from quantitative analyses (67 percent), organizational, frontline, and policy findings are more likely to employ qualitative methods. For example, nearly half (47 percent) of studies contributing organizational-level findings employ qualitative methods. This suggests that while quantitative analysis may be more feasible (and desirable) for program evaluations, contributing to an understanding of organizational and front-line factors may require a qualitative or mixed-methods approach. Thus, while a push for quantitative, experimental research may be beneficial for producing relevant program findings, multiple methods, including qualitative analyses, are likely needed to inform more generalizable findings across policy levels.

While our study offers exploratory insights, it is important to keep its limitations in mind. Our study is based on a bibliometric and content analysis of published abstracts employing the terms "policy implementation" or "program implementation." Our findings could reflect, as some suggest, that the term "implementation" has become less popular within public policy and management (Nilsen et al. 2013). There are likely numerous studies published in public affairs journals that contribute insights to implementation but do not employ the term when describing their central focus in published abstracts. However, the purpose of our analysis is not to identify all relevant findings but rather to catalogue the scope and focus of published research in which implementation explicitly characterizes the core focus. Further, our review only includes scholarly articles published in academic peer-reviewed outlets, although valuable implementation studies are conducted by research firms under government contracts.
However, our approach provides a systematic way to analyze the published and peer-reviewed research. The past decade has witnessed a steady flow of research engaging the topics of policy implementation and program implementation. This research is not concentrated within public affairs or a specific field of study, but is rather spread across different disciplines and policy fields. In this paper, we engage a multi-level framework to sort study findings across levels, identifying the extent to which research findings are program specific or occur at the frontline, organizational, or field level. We find evidence of findings across levels, with variation by methodology and program type. Future research is needed to better unpack the substantive contributions of findings across levels.

References

Agranoff, Robert, and Michael McGuire. 2003. Collaborative Public Management: New Strategies for Local Governments. Washington, DC: Georgetown University Press.
Allison, Graham. 1972. Essence of Decision: Explaining the Cuban Missile Crisis. Boston, MA: Little, Brown.
Bardach, E. 1977. The Implementation Game. Cambridge: MIT Press.
Berman, P. 1981. "Educational Change: An Implementation Paradigm." Pp. 253–86 in Improving Schools: Using What We Know, edited by R. Lehming and M. Kane. Beverly Hills, CA: Sage.
Berman, Paul. 1978. The Study of Micro and Macro Implementation of Social Policy. Santa Monica, CA: Rand Corporation.
Blair, Robert. 2002. "Policy Tools Theory and Implementation Networks: Understanding State Enterprise Zone Partnerships." Journal of Public Administration Research and Theory 12(2):161–90.
Damschroder, Laura J. et al. 2009. "Fostering Implementation of Health Services Research Findings into Practice: A Consolidated Framework for Advancing Implementation Science." Implementation Science 4:50.
DeLeon, Peter. 1999. "The Missing Link Revisited: Contemporary Implementation Research." Policy Studies Review 16(3/4):311–38.
Durlak, Joseph A., and Emily P. DuPre. 2008. "Implementation Matters: A Review of Research on the Influence of Implementation on Program Outcomes and the Factors Affecting Implementation." American Journal of Community Psychology 41(3-4):327–50.
Easton, D. 1979. A Systems Analysis of Political Life. Chicago: University of Chicago Press.
Elmore, Richard F. 1979-80. "Backward Mapping: Implementation Research and Policy Decisions." Political Science Quarterly 94(4):601–16.
Fixsen, Dean L. et al. 2005. Implementation Research: A Synthesis of the Literature. Tampa, FL: University of South Florida.
Frederickson, George. 2005. "Whatever Happened to Public Administration? Governance, Governance Everywhere." Pp. 282–304 in The Oxford Handbook of Public Management. Oxford: Oxford University Press.
Goggin, M. L. 1986. "The 'Too Few Cases/Too Many Variables' Problem in Implementation Research." Political Research Quarterly 39(2):328–47.
Goggin, M. L., A. O. M. Bowman, J. P. Lester, and L. J. O'Toole. 1990. "Studying the Dynamics of Public Policy Implementation: A Third-Generation Approach." Pp. 181–97 in Implementation and the Policy Process: Opening Up the Black Box, edited by D. J. Palumbo and D. J. Calista. New York: Greenwood Press.
Greenhalgh, Trisha, Glenn Robert, Fraser MacFarlane, Paul Bate, and Olivia Kyriakidou. 2004. "Diffusion of Innovations in Service Organizations: Systematic Review and Recommendations." The Milbank Quarterly 82(4):581–629.
Hall, T. E., and Laurence O'Toole. 2000. "Structures for Policy Implementation: An Analysis of National Legislation, 1965-1966 and 1993-1994." Administration & Society 31(6):667–86.
Hill, Michael, and Peter Hupe. 2008. Implementing Public Policy: An Introduction to the Study of Operational Governance. 2nd ed. London: Sage Publications.
Lin, Ann Chih. 2000. Reform in the Making: The Implementation of Social Policy in Prison. Princeton, NJ: Princeton University Press.
Lipsky, Michael. 1980. Street-Level Bureaucracy: Dilemmas of the Individual in Public Services. New York, NY: Russell Sage Foundation.
Lynn, Laurence E. 1996. Public Management as Art, Science and Profession. Chatham, NJ: Chatham House.
Lynn, Laurence E. Jr., Carolyn J. Heinrich, and Carolyn J. Hill. 2001. Improving Governance: A New Logic for Empirical Research. Washington, DC: Georgetown University Press.
Matland, Richard E. 1995. "Synthesizing the Implementation Literature: The Ambiguity-Conflict Model of Policy Implementation." Journal of Public Administration Research & Theory 5(2):145–74.
Mazmanian, D., and P. Sabatier. 1989. Implementation and Public Policy. Lanham: University Press of America.
Meyers, Duncan C., Joseph A. Durlak, and Abraham Wandersman. 2012. "The Quality Implementation Framework: A Synthesis of Critical Steps in the Implementation Process." American Journal of Community Psychology 50(3-4):462–80.
Milward, H. Brinton, and Keith G. Provan. 2003. "Managing Networks Effectively."
Milward, H. Brinton, and Gary L. Wamsley. 1984. "Policy Subsystems, Networks, and the Tools of Public Management." Pp. 3–25 in Public Policy Formation, edited by Robert Eyestone. Boulder: Westview Press.
Nilsen, Per, Christian Ståhl, Kerstin Roback, and Paul Cairney. 2013. "Never the Twain Shall Meet? A Comparison of Implementation Science and Policy Implementation Research." Implementation Science 8:63.
O'Toole, Laurence J. Jr. 1986. "Policy Recommendations for Multi-Actor Implementation: An Assessment of the Field." Journal of Public Policy 6(3):181–210.
O'Toole, Laurence J. Jr. 1993. "Interorganizational Policy Studies: Lessons Drawn from Implementation Research." Journal of Public Administration Research and Theory 3(2):232–51.
O'Toole, Laurence J. Jr. 2000. "Research on Policy Implementation: Assessment and Prospects." Journal of Public Administration Research & Theory 10(2):263–88.
O'Toole, Laurence J. Jr. 2004. "The Theory-Practice Issue in Policy Implementation Research." Public Administration 82(2):309–29.
Pressman, J., and A. Wildavsky. 1973. Implementation: How Great Expectations in Washington Are Dashed in Oakland; or, Why It's Amazing that Federal Programs Work at All. Berkeley: University of California Press.
Robichau, Robbie Waters, and Laurence E. Lynn Jr. 2009. "The Implementation of Public Policy: Still the Missing Link." Policy Studies Journal 37(1):21–36.
Romzek, Barbara S., and Jocelyn M. Johnston. 2002. "Effective Contract Implementation and Management: A Preliminary Model." Journal of Public Administration Research and Theory 12(3):423–53.
Rothstein, B. 1998. Just Institutions Matter: The Moral and Political Logic of the Universal Welfare State. Cambridge: Cambridge University Press.
Sabatier, Paul A. 1988. "An Advocacy Coalition Framework of Policy Change and the Role of Policy-Oriented Learning Therein." Policy Sciences 21(2-3):129–68.
Sabatier, Paul A., and Hank C. Jenkins-Smith, eds. 1993. Policy Change and Learning: An Advocacy Coalition Approach. Boulder, CO: Westview Press.
Sabatier, Paul, and Daniel Mazmanian. 1980. "The Implementation of Public Policy: A Framework of Analysis." Policy Studies Journal 8(4):538–60.
Saetren, Harald. 2005. "Facts and Myths about Research on Public Policy Implementation: Out-of-Fashion, Allegedly Dead, But Still Very Much Alive and Relevant." Policy Studies Journal 33(4):559–82.
Salamon, Lester, ed. 2002. The Tools of Government: A Guide to the New Governance. Oxford, United Kingdom: Oxford University Press.
Sandfort, Jodi R. 2003. "Exploring the Structuration of Technology within Human Service Organizations." Administration & Society 34(6):605–31.
Sandfort, Jodi R. 2010. "Nonprofits within Policy Fields." Journal of Policy Analysis & Management 29(3):637–44.
Sandfort, Jodi, Sally Coleman Selden, and Jessica Sowa. 2008. "Do the Tools Used by Government Influence Organizational Performance? An Examination of Early Childhood Education Policy Implementation." American Review of Public Administration 38(4):412–38.
Schneider, Anne, and Helen Ingram. 1990. "Behavioral Assumptions of Policy Tools." The Journal of Politics 52(2):510–29.
Selznick, Phillip. 1949. TVA and the Grass Roots: A Study of Politics and Organization. Berkeley, CA: University of California Press.
Spillane, James P. 1998. "A Cognitive Perspective on the Role of the Local Educational Agency in Implementing Instructional Policy: Accounting for Local Variability." Educational Administration Quarterly 34(1):31–57.
Stone, Melissa, and Jodi R. Sandfort. 2009. "Building a Policy Fields Framework to Inform Research in Nonprofit Organizations." Nonprofit and Voluntary Sector Quarterly 38(6):1054–75.
Tabak, Rachel G., Elaine C. Khoong, David A. Chambers, and Ross C. Brownson. 2012. "Bridging Research and Practice: Models for Dissemination and Implementation Research." American Journal of Preventive Medicine 43(3):337–50.
Twombly, Eric, and Elizabeth Boris. 1999. "The Use of Vouchers in the Provision of Human Services and the Potential Implications for Nonprofit Human Service Organizations." Paper prep.
Van Meter, Donald S., and Carl E. Van Horn. 1975. "The Policy Implementation Process: A Conceptual Framework." Administration & Society 6(4):445–88.
Watkins-Hayes, Celeste. 2009. The New Welfare Bureaucrats. Chicago, IL: University of Chicago Press.
Weible, Christopher M., Paul A. Sabatier, and Kelly McQueen. 2009. "Themes and Variations: Taking Stock of the Advocacy Coalition Framework." Policy Studies Journal 37(1):121–40.
Wilson, James Q. 1991. Bureaucracy: What Government Agencies Do and Why They Do It. New York: Basic Books.