The Science of Science Policy: A Handbook
About this ebook
Basic scientific research and technological development have had an enormous impact on innovation, economic growth, and social well-being. Yet science policy debates have long been dominated by advocates for particular scientific fields or missions. In the absence of a deeper understanding of the changing framework in which innovation occurs, policymakers cannot predict how best to make and manage investments to exploit our most promising and important opportunities.
Since 2005, a science of science policy has developed rapidly in response to policymakers' increased demands for better tools and the social sciences' capacity to provide them. The Science of Science Policy: A Handbook brings together some of the best and brightest minds working in science policy to explore the foundations of an evidence-based platform for the field.
The contributions in this book provide an overview of the current state of the science of science policy from three angles: theoretical, empirical, and policy in practice. They offer perspectives from the broader social science, behavioral science, and policy communities on the fascinating challenges and prospects in this evolving arena. Drawing on domestic and international experiences, the text delivers insights about the critical questions that create a demand for a science of science policy.
The Science of Science Policy
A Handbook
Edited by Kaye Husbands Fealing, Julia I. Lane, John H. Marburger III, and Stephanie S. Shipp
STANFORD BUSINESS BOOKS
An Imprint of Stanford University Press
Stanford, California
Stanford University Press
Stanford, California
©2011 by the Board of Trustees of the Leland Stanford Junior University. All rights reserved.
No part of this book may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying and recording, or in any information storage or retrieval system without the prior written permission of Stanford University Press.
Special discounts for bulk quantities of Stanford Business Books are available to corporations, professional associations, and other organizations. For details and discount information, contact the special sales department of Stanford University Press. Tel: (650) 736-1782, Fax: (650) 736-1784
Any opinions, findings, and conclusions or recommendations expressed in this book are those of the authors and do not necessarily reflect the views of the institutions they represent.
Printed in the United States of America on acid-free, archival-quality paper
Library of Congress Cataloging-in-Publication Data
The science of science policy : a handbook / edited by Kaye Husbands Fealing . . . [et al.].
p. cm.
Includes bibliographical references and index.
ISBN 978-0-8047-7078-1 (cloth : alk. paper)
1. Science and state—United States. I. Fealing, Kaye Husbands.
Q127.U6S3189 2011
338.9′26—dc22 2010035621
Typeset by Westchester Book Group in 10/15 Minion Pro
E-book ISBN: 978-0-8047-8160-2
Contents
Acknowledgments
1 Editors’ Introduction
Kaye Husbands Fealing, Julia I. Lane, John H. Marburger III, and Stephanie S. Shipp
2 Why Policy Implementation Needs a Science of Science Policy
John H. Marburger III
PART ONE: The Theory of Science Policy
Editors’ Overview
3 Politics and the Science of Science Policy
Harvey M. Sapolsky and Mark Zachary Taylor
4 Sociology and the Science of Science Policy
Walter W. Powell, Jason Owen-Smith, and Laurel Smith-Doerr
5 The Economics of Science and Technology Policy
Richard B. Freeman
6 A Situated Cognition View of Innovation with Implications for Innovation Policy
John S. Gero
7 Technically Focused Policy Analysis
M. Granger Morgan
8 Science of Science and Innovation Policy: The Emerging Community of Practice
Irwin Feller
9 Developing a Science of Innovation Policy Internationally
Fred Gault
PART TWO: Empirical Science Policy—Measurement and Data Issues
Editors’ Overview
10 Analysis of Public Research, Industrial R&D, and Commercial Innovation: Measurement Issues Underlying the Science of Science Policy
Adam B. Jaffe
11 The Current State of Data on the Science and Engineering Workforce, Entrepreneurship, and Innovation in the United States
E. J. Reedy, Michael S. Teitelbaum, and Robert E. Litan
12 Legacy and New Databases for Linking Innovation to Impact
Lynne Zucker and Michael Darby
13 A Vision of Data and Analytics for the Science of Science Policy
Jim Thomas and Susan Albers Mohrman
PART THREE: Practical Science Policy
Editors’ Overview
14 Science Policy: A Federal Budgeting View
Kei Koizumi
15 The Problem of Political Design in Federal Innovation Organization
William B. Bonvillian
16 Science Policy and the Congress
David Goldston
17 Institutional Ecology and the Social Outcomes of Scientific Research
Daniel Sarewitz
18 Science Policy in a Complex World: Lessons from the European Experience
Janez Potočnik
Contributors
Index
Acknowledgments
The editors would like to acknowledge the contributions of Bill Valdez, who provided valuable insights and contributed greatly to the editing and structure of this book. We are also grateful to the National Science and Technology Council's (NSTC) Science of Science Policy Interagency Group, whose continued support of and engagement with the very real practical issues have been critical to the revitalization of the field.
We would also like to recognize the invaluable assistance of the editorial staff at Stanford University Press, Margo Beth Crouppen and Jessica Walsh, as well as of Laura Yerhot, who did an enormous amount of work to help us prepare the manuscript.
Jim Thomas passed away suddenly while the book was being produced. We are deeply saddened by the loss of a visionary researcher, a colleague, and a friend. We are honored that his vision of the application of Visual Analytics to the Science of Science Policy is part of this book.
The Science of Science Policy
1
Editors’ Introduction
Kaye Husbands Fealing, Julia I. Lane, John H. Marburger III, and Stephanie S. Shipp
1. Introduction
Federally funded basic and applied scientific research has had an enormous impact on innovation, economic growth, and social well-being—but some has not. Determining which federally funded research projects yield results and which do not would seem to be a subject of high national interest, particularly since the government invests more than $140 billion annually in basic and applied research. Yet science policy debates are typically dominated not by a thoughtful, evidence-based analysis of the likely merits of different investments but by advocates for particular scientific fields or missions. Policy decisions are strongly influenced by past practice or data trends that may be out of date or have limited relevance to the current situation. In the absence of a deeper understanding of the changing framework in which innovation occurs, policymakers do not have the capacity to predict how best to make and manage investments to exploit the most promising and important opportunities.
This lack of analytical capacity in science policy sits in sharp contrast to other policy fields, such as workforce, health, and education. Debate in these fields is informed by the rich availability of data, high-quality analysis of the relative impact of different interventions, and often computational models that allow for prospective analyses. The results have been impressive. For example, in workforce policy, the evaluation of the impact of education and training programs has been transformed by careful attention to issues such as selection bias and the development of appropriate counterfactuals. The analysis of data about geographic differences in health care costs and health care outcomes has featured prominently in guiding health policy debates. And education policy has moved from a "spend more money and launch a thousand pilot projects" imperative to a more systematic analysis of programs that work and that could promote local and national reform efforts.
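The attention to counterfactuals described above can be sketched in miniature. The sketch below uses invented numbers and a deliberately simple difference-in-means estimator; real program evaluations are far more involved, but the core idea is the same: random assignment makes the control group a valid counterfactual, sidestepping selection bias.

```python
import random

random.seed(0)

# Hypothetical baseline outcomes for 1,000 people (all numbers invented).
baseline = [random.gauss(50.0, 10.0) for _ in range(1000)]
random.shuffle(baseline)  # random assignment to treatment or control

# A true program effect of +5 is built in for the treated group.
treated = [y + 5.0 for y in baseline[:500]]
control = baseline[500:]

# Because assignment was random, the simple difference in means is an
# unbiased estimate of the program effect (here, close to 5).
effect = sum(treated) / len(treated) - sum(control) / len(control)
print(f"estimated program effect: {effect:.2f}")
```

Comparing the treated group to its own pre-program outcomes instead would confound the program effect with whatever drove people into the program — the selection-bias problem the text refers to.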
Each of those efforts, however, has benefited from an understanding of the systems that are being analyzed. In the case of science policy, no such agreement currently exists. Past efforts to analyze the innovation system and the effect that federal research has on it have typically focused on institutions (federal agencies, universities, companies, etc.) and/or outputs (bibliometrics, patents, funding levels, production of PhDs, etc.). Absent is a systems-level construct within which those institutions and outputs function, and a recognition that science and technology innovations are created not by institutions but by people, often working in complex social networks. This social dynamic, as well as the complex system-level interactions that result, is the subject of increasing academic scrutiny. Science magazine recently devoted a special section to complex systems and networks and referenced studies that examined complex socioeconomic systems, meta-network analysis, scale-free networks, and other analytical techniques that could be used to understand the innovation system.¹
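The network techniques mentioned above can be illustrated with a minimal sketch. The co-authorship records below are hypothetical, and degree centrality is only one of many measures such studies use, but it shows how people, rather than institutions, become the unit of analysis.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical co-authorship records: each paper lists its authors.
papers = [
    ["Alice", "Bob"],
    ["Alice", "Carol", "Dave"],
    ["Bob", "Carol"],
    ["Eve", "Dave"],
]

# Build an undirected collaboration network: an edge links two
# researchers who have co-authored at least one paper.
edges = set()
for authors in papers:
    for a, b in combinations(sorted(authors), 2):
        edges.add((a, b))

degree = defaultdict(int)
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

# Normalized degree centrality: the share of other researchers each
# person is directly connected to.
n = len(degree)
centrality = {name: d / (n - 1) for name, d in degree.items()}
for name, c in sorted(centrality.items(), key=lambda item: -item[1]):
    print(f"{name}: {c:.2f}")
```

On real data — grant records, publication databases — the same construction reveals the social structure of a research field and how funding flows through it.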
There is no fundamental reason why a science policy infrastructure cannot be developed that is as firmly grounded in evidence and analysis as those of the workforce, health, and education domains. It is true that doing so is difficult: the institutional and political environment is complex, and the scientific discovery process is noisy and uncertain. Yet scientists should be excited, not deterred, by interesting but hard problems. And the history of the scientific advancement of other policy fields, with their studies of equally complex, noisy, and uncertain processes, is evidence that such efforts can succeed. Indeed, an interdisciplinary and international community of practice is emerging to advance the scientific basis of science policy through the development of data collection, theoretical frameworks, models, and tools. Its advocates envision that future policy decisions can be made on the basis of empirically validated hypotheses and informed judgment.
There are fundamental reasons why it is becoming critical to develop such an evidence base. One is that the White House is requiring agencies to do so: the joint Office of Management and Budget (OMB)/Office of Science and Technology Policy (OSTP) R&D Priorities memo issued in preparation for the FY2011 budget asks agencies to develop outcome-oriented goals for their science and technology activities, establish procedures and timelines for evaluating the performance of these activities, and target investments toward high-performing programs: "Agencies should develop 'science of science policy' tools that can improve management of their research and development portfolios and better assess the impact of their science and technology investments. Sound science should inform policy decisions, and agencies should invest in relevant science and technology as appropriate."²
Another is the looming imperative to document the impact of the nearly $20 billion in R&D investments embodied in the 2009 American Recovery and Reinvestment Act (ARRA). As Kei Koizumi points out in his chapter:
Policymakers and evaluators can demonstrate easily the short-term economic effects of highway projects, of which there are billions of dollars worth in the Recovery Act; miles of asphalt poured, construction jobs created, and dollars introduced into local economies are well developed and easily produced measures for these investments. But what are the similar indicators for R&D investments?
Finally, the federal budget environment is likely to be extremely competitive for the foreseeable future. For a case to be made that investments in science have value relative to investments in education, health, or the workforce, an analytical and empirical link has to be made between those investments and policy-relevant outcomes. It is likely that that link will need to be made at multiple levels, since the macro link between R&D investments and economic growth is less convincing given the international evidence provided by the Japanese and Swedish experience.³
The federal agencies have begun to respond in two ways. One is to advance the theoretical and empirical research frontier through investigator-initiated research and new data collection. The second is to develop a federal community of practice among the seventeen science agencies involved in funding and administering science research.
In the former case, by mid-2010 the National Science Foundation's (NSF) Science of Science & Innovation Policy (SciSIP) program had made over ninety awards to social scientists and domain scientists. Ten of these are explicitly to use the ARRA stimulus as a way to examine the impact of science investments. The SciSIP program, through the Division of Science Resources Statistics, is also investing in the development and collection of new surveys to better inform the biennial Science and Engineering Indicators that are the basis for many policy decisions. This includes the new Business R&D Innovation Survey, which involves a complete redesign of the collection of R&D data, as well as the collection of innovation data.
In the second case, the National Science and Technology Council (NSTC) established, under the Social, Behavioral and Economic Sciences Subcommittee of the Committee on Science, the Science of Science Policy Interagency Task Group (SOSP ITG). This task group produced a road map for federal investments⁴ and held a major international conference to highlight the findings in that road map.
Both the SciSIP program and the SOSP ITG have worked to foster a community of practice in a number of ways. The interagency group has organized major annual workshops on the implementation of science policy. A flourishing Listserv for the exchange of ideas and information has been established. And a new SOSP ITG/SciSIP website has been developed,⁵ which has begun to provide an institutional basis for the development of a community of practice.
Of course, SOSP will not solve all science policy problems. It is intended to provide an intellectual framework upon which to make decisions. Indeed, as Goldston notes in his chapter:
Science of Science Policy research will never be definitive, and Congress certainly always would and should draw on more than social science results in making its decisions. But there is plenty of room to improve the current state of affairs. In other areas of policy—macroeconomics, health care, environmental protection, to name a few—there is at least a semblance of an ability to project the outputs that will result from a given set of inputs, and a range of studies to draw on in discussing what has worked and what has failed. Reaching a similar level of understanding for science policy would be a welcome change, if hardly a panacea.
2. What the Science of Science Policy Entails
One of the aims of recent science of science policy activities is to develop the evidentiary basis for decision making by policy practitioners. There is also an organic development or reshaping of frameworks that pushes the boundaries of discovery in several fields and disciplines. While some debate whether the science of science policy is itself a discipline, there is wide agreement that there is a coalescing community of practice, which Feller, in his chapter, describes as "a distributed association of policymakers (public and private) and researchers in a variety of fields and disciplines." This community is interdisciplinary and includes economics, engineering, the history of science, operations research, physics, political science, psychology, and sociology—and this list is not exhaustive.⁶
Federal science investments are driven by a political context, so the insights provided by political scientists are critical. Sapolsky and Taylor argue in their chapter that
governments support the advancement of science and technology (S&T) mostly through their support of specific missions such as defense or health, and it is the politics of these missions, and the many contextual goals of government, that determines the rate and direction of its research and development investments. Governments can also affect the supply and demand conditions for science and technology outside the budgetary process via regulatory regimes, anti-trust, taxes, standards, etc.
Understanding the institutional and sociological environment is also critical, which is why sociologists make an important contribution. Powell, Owen-Smith, and Smith-Doerr indicate in their chapter that the sociological science of science policy will theorize the link between the origins and later trajectories of social systems that will provide guidance for policymakers eager to intervene.
The economics of science policy is evolving beyond the initial constructs of macroeconomic linkages of inputs and productivity outcomes. Recent models utilize network analysis, bibliometric tools, and behavioral models to uncover latent relationships between the levels and rates of new scientific discoveries and the financial, human capital, organizational, and infrastructural inputs. While these models have historically made important contributions to policy decisions, Feller, Jaffe, and Freeman each caution in this volume that there is a need to understand the limitations of incentive structures and the requirement for careful empirical analysis to understand the system of scientific knowledge creation. Morgan, in his chapter, describes several systems modeling approaches, some of which originate outside of the social sciences. This migration and synthesis of ideas is precisely what creates a dynamic community of practice.
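The macroeconomic linkage of inputs to outcomes that these newer models move beyond can itself be sketched in its simplest form: a least-squares fit of an output indicator against R&D spending. The figures below are invented, and a one-variable fit like this is exactly the kind of coarse construct the chapter authors caution against; it illustrates the starting point, not the state of the art.

```python
# Hypothetical R&D inputs (e.g., $B) and an outcome indicator
# (e.g., publication counts, in thousands). All numbers invented.
spending = [10.0, 12.0, 15.0, 18.0, 22.0]
outputs = [40.0, 45.0, 55.0, 63.0, 78.0]

n = len(spending)
mean_x = sum(spending) / n
mean_y = sum(outputs) / n

# Ordinary least squares by hand: slope = cov(x, y) / var(x).
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(spending, outputs))
sxx = sum((x - mean_x) ** 2 for x in spending)
slope = sxy / sxx
intercept = mean_y - slope * mean_x

print(f"estimated output per unit of spending: {slope:.2f}")
```

A fit like this says nothing about lags, selection, spillovers, or the networked structure of knowledge creation — which is precisely why the field has moved toward the richer behavioral and network models described above.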
One area of the science of science policy that is often overlooked is the conceptualization of scientific development at the cognitive level. This very micro-examination of science policy is an emerging field, with collaboration between psychologists and engineers. Both disciplines are eager to understand the elements of the creative process. Gero describes frameworks that are used to understand creative cognitive processes, which may lead to new ideas that are marketable—innovation.
And, of course, science investments are ultimately predicated on contributing to innovation. Gault’s chapter connects the work on the understanding of the science system to the need for work on delivering value to the market in the form of new goods and services and contributing to economic growth and social welfare.
3. The Need for the Handbook
Our review of the science policy curricula and syllabi in major research programs suggests that the emerging field lacks a cornerstone document that describes the current state of the art from both a practitioner and an academic point of view.
This handbook is intended to fill this gap by providing in-depth, scholarly essays authored by leading scientists and policy practitioners. We recognize that the field has multiple dimensions, and as such, this book is divided into three sections: theoretical issues, data and measurement, and policy in practice. Each author has been asked to provide a survey of a different aspect of the field, based on his or her domain expertise, which explores the plausible foundations of an evidence-based platform for science policy. The interdisciplinary nature of such a platform is evident from the nature of the questions asked by the authors: What are the essential elements of creativity and innovation, and how can they be defined to serve a truly scientific approach to policy? How can the technical workforce be quantified and modeled—what is its likely future, and how does it respond to the multiple forces that could be targets of policy? What is the impact of globalization on creativity and productivity in the science and engineering fields? What are the optimal roles of government and private investments in R&D, and how do their different outcomes influence R&D and innovative activities? As such, the contributors span a variety of disciplines, including economics, sociology, psychology, and political science.
It is worth noting that this handbook focuses on the science of science policy, which we feel is an understudied and underresearched area. There has been a great deal more research on the science of innovation policy, although, inevitably, some of that research is alluded to in different chapters. In addition, the focus is on U.S. federal science policy. We recognize that there are vibrant and important research areas that study both business R&D investments and regional science and innovation policies. And while managers of large research enterprises, such as Microsoft, and state agencies face substantial resource allocation decisions, our sense is that these decisions are fundamentally different from those in the federal science arena. And, although the science of science policy has garnered important attention on the international stage, it is impossible to do full justice to the complexity of the international issues—that deserves another volume in its own right.
4. Concluding Goals
We hope that this handbook will contribute to the overarching goal for science policy, namely, "the development of common, high-quality data resources and interpretive frameworks, a corps of professionals trained in science policy methods and issues, and a network of high-quality communication and discussion that can encompass all science policy stakeholders."⁷ As such, the purpose of the book is to provide
1. an overview of the current state of the science of science policy in four key social science areas: economics, sociology, political science, and psychology;
2. a perspective from the broader social and behavioral science community on the interesting scientific challenges and opportunities in this emerging field;
3. a review of the empirical—measurement and data—challenges inherent in describing and assessing the scientific enterprise; and
4. a perspective from the federal science and policy community on the critical science policy questions that create the demand for a science of science policy.
Notes
1. Science, July 24, 2009, pp. 405–432.
2. M-09-27, Memorandum for the Heads of Executive Departments and Agencies, August 4, 2009.
3. Julia Lane, "Assessing the Impact of Science Funding," Science 324, no. 5932 (June 5, 2009): 1273–1275, DOI: 10.1126/science.1175335.
4. The Science of Science Policy: A Federal Research Roadmap, November 2008.
5. See scienceofsciencepolicy.net
6. For example, all of these areas are represented among the SciSIP awardees; see www.scienceofsciencepolicy.net/scisipmembers.aspx
7. See Marburger, Chap. 2, in this book.
2
Why Policy Implementation Needs a Science of Science Policy
John H. Marburger III
1. Introduction
My perspective on the needs of science policymakers was strongly influenced by my experience as science advisor to the president and director of the Office of Science and Technology Policy (OSTP) during the two terms of President George W. Bush’s administration (2001–2009). Watching policy evolve during an era of dramatic economic change, deep political divisions, and high-visibility science issues made me aware of weaknesses in the process that will not be remedied by simple measures. Science policy studies will illuminate the problems and undoubtedly propose options for addressing them but will not by themselves solve them. The growth of science policy studies as an academic discipline nevertheless provides intriguing opportunities for steady improvement in the management of the national science and technology enterprise.
2. Science Policy Challenges in the Executive Branch
The structure and responsibilities of the OSTP have changed very little since the office was established by Congress in 1976, and its basic form had been in place since President Eisenhower appointed the first full-time science advisor in 1957. The advisors have always played two roles: (1) advising the president and his other senior policy officials on all technical matters that reach the executive level, and (2) coordinating, prioritizing, and evaluating science and technology programs throughout the executive branch of government. A third responsibility is obvious but rarely discussed: the responsibility, shared with many others, of seeing policies through to their successful implementation. Each of these roles and responsibilities has its challenges, but none is more difficult than the third. The nature of this difficulty severely constrains strategies to overcome it, but the "science of science policy" movement is a potentially powerful tool for policy implementation. To appreciate why requires some background on the federal decision-making machinery. Science policy has many dimensions, but here I will focus specifically on the implementation of initiatives that require authorization and appropriations by Congress.
Policies are guides to action. Therefore strategies for implementation are nearly always embedded in policy proposals, often implicitly. The actions required to transform a policy idea into a desired result occur in stages. The early stages are successive expansions of the group of agents and stakeholders whose endorsement is needed to launch the initiative. Later stages focus on the management of the program, feedback of information about its success or failure to the policy level, and subsequent policy actions responsive to the feedback. Together these stages comprise the natural cycle of planning, implementation, evaluation, and improvement that applies to all systematic efforts to accomplish defined objectives. Science projects normally occur within a larger framework administered by an organization or a governmental agency. My interest here is in frameworks for science and technology programs at the highest policy level and the early-stage actions required to launch them.
The complexity of the U.S. federal science establishment is notorious. The executive branch carries out the business of government through a large number of departments and agencies, many of which today have a research arm and an employee in the role of chief scientist. Nineteen of these organizations are designated by an executive order as members of the interagency National Science and Technology Council (NSTC), managed by the OSTP. Twenty-five of them participate in the interagency National Nanotechnology Initiative (NNI), and thirteen each participate in the Global Change Research Program (GCRP) and the Networking and Information Technology Research and Development (NITRD) program.¹ Among the fifteen departments and fifty-six "Independent Establishments and Government Corporations" listed in the current edition of the U.S. Government Manual, only one, the National Science Foundation (NSF), is fully devoted to the conduct of science, including research fellowships and science education.² All of the other science organizations are therefore embedded within larger departments in which they compete with other functions for money, space, personnel, and the attention of their department secretary or administrator. Two of the largest science agencies, NASA (National Aeronautics and Space Administration) and NSF, do not report to a cabinet-level administrator and rely on the OSTP to make their case in White House policy-making processes.
The dispersion of research through such a large number of agencies was a weakness already recognized in the 1940s by Vannevar Bush, who urged consolidation into a single basic research agency.³ That effort led ultimately to the creation of the National Science Foundation in 1950, but the consolidation included only a small fraction of the then-existing federal research portfolio. The bureaus that became the National Institutes of Health (NIH), NASA, the Department of Energy (DOE), and the Department of Defense (DOD) research entities remained separate and today have research budgets comparable to or greater than the NSF. Many smaller science agencies within cabinet departments, such as Commerce, Agriculture, and Interior, also remained separate. The challenge of managing multiple science enterprises in the executive branch motivated the development of bureaucratic machinery in the 1950s to avoid duplication, fill gaps, and preserve capabilities serving multiple agencies.⁴ The White House Office of Management and Budget (OMB) has the greatest authority in this role, and the OSTP works with the OMB to establish priorities and develop programs and budgets for the science and technology portions of all of the agencies.
The OMB itself is divided into five relatively independent divisions (four during my service), each of which manages a significant portion of the overall science and technology activity.⁵ This creates a challenge within the executive branch for initiatives that cut across the major science agencies. The NIH, NSF, DOE, and DOD research budgets are each developed in separate OMB divisions. Budget officials work hard to protect their independence, and they attempt to insulate their decisions from other White House policy offices. Major policy decisions are made through a deliberative process among White House and cabinet officials—always including the OMB—that narrows issues and choices for ultimate action by the president. Only a few issues, however, can receive such high-level attention, and most decisions about science and technology policy are negotiated within agencies and among the various White House policy offices.
This executive branch machinery is complex but reasonably well defined and understood by the bureaucracy. However, it is not always well understood by the political appointees in each agency, whose tenure is often less than a single four-year presidential term. Their involvement in the process adds to the element of randomness always present in agency responsiveness to presidential direction, but the political appointees also reduce the impedance mismatch between the volatile political leadership and the cultural inertia of the bureaucracy. Designing high-level policies within the executive branch so they will actually be implemented requires detailed knowledge of the political and bureaucratic cultures of the agencies that will be responsible for implementation. Because these cultures depend on personal qualities of the agency leadership and on the specific historical paths that produced the present state of the agencies, policy analysis and design are not well-defined tasks. For this and other reasons common to complex organizations, the behavior of the agencies is not sufficiently predictable to guarantee that a policy, once launched by presidential directive or executive order, will follow an anticipated trajectory. The usual government remedy for the consequences of this uncertainty is to establish stronger coordinating organizations at the top, such as national coordinating offices (e.g., for the NNI and the NITRD), czars, or presidential commissions. The OSTP itself has become increasingly effective in this role through refinement of the NSTC structure over several administrations. Notwithstanding these arrangements, the agency line management is on the job continually and has more resources than the relatively small Executive Office of the President to influence the action environment. I had this phenomenon in mind when I described the origins of the Bush administration's vision for space exploration to the 2009 Augustine Committee that reviewed NASA's human space flight plans: "[T]he final [space] policy document was a compromise between contrasting policy perspectives offered by NASA and by the White House policy advisors. In subsequent presentations to Congress and to the public, NASA representatives emphasized the NASA view of the Vision, which began to appear even during the policy formation process through leaks to the media serving the space community."⁶
3. Legislative Impact on Science Policy Implementation
The legislative branch has no executive machinery to resolve the random forces that influence its own operations. It responds to the president's budget proposals with two dozen very independent appropriations subcommittees, as well as a large number of authorizing committees and subcommittees. No organization, such as the OMB or the OSTP, monitors or attempts to enforce policy consistency across the hundreds of bills passed in each Congress, much less the thousands of bills that are introduced and debated. Offices such as the Congressional Budget Office (CBO), the Government Accountability Office (GAO), and the Congressional Research Service (CRS) are informational only and have no authority over the 535 members of Congress. These offices are influential, however, through the quality and perceived objectivity of the information and analyses they produce. The CBO analyses are particularly successful in fostering a consensus on the financial aspects of legislative proposals.
Legislative funding for science and technology programs originates in nine of the twelve appropriations subcommittees in each chamber, for none of which is science the sole or even the majority category of funding. The big five science agencies—NIH, NSF, DOE, NASA, and DOD—are funded by four different appropriations subcommittees. Each subcommittee has its own staff, whose members' voices are more influential than those of many executive branch policymakers in establishing priorities and programs among the executive agencies. And each subcommittee is a target for its own army of advocates, lobbyists, and activist individuals whose influence is difficult to trace but highly significant.⁷ Sections of bills are often drafted by lobbyists or by constituents of a subcommittee member or chairperson. The subcommittees are substantially stovepiped, with little incentive to coordinate action except on highly visible multiagency issues such as climate change or energy policy. The authorization and appropriations bills give surprisingly specific and sometimes conflicting direction to agencies, substantially and routinely invading the president's constitutional prerogative to manage the executive branch.
This complex and unpredictable field of action leads to inefficiencies and perverse distributions of resources that create a continual irritant, if not a threat, to America's otherwise very strong research and development (R&D) enterprise. One extreme example is the tortuous history of the initiative to enhance U.S. economic competitiveness in the second Bush administration. In 2005 a wide consensus developed within the U.S. science and technology community that U.S. economic competitiveness was threatened by neglect of the nation's innovation ecology.⁸ The president responded with the American Competitiveness Initiative (ACI) in his 2006 budget proposal to Congress.⁹ The 110th Congress authorized its own response (largely consistent with but more generous than the ACI) in the America COMPETES Act of 2007 (ACA).¹⁰ Congress, to the great surprise and consternation of the community, failed to fund the program because of a stalemate with the president over his insistence that the total budget (not just for R&D) not exceed his top line. For three years the initiative languished, until the Bush administration expired and the 111th Congress substantially funded the initiative, and much more, through the American Recovery and Reinvestment Act of 2009. In subsequent budget submissions, the Obama administration has generally supported the main provisions of the ACI and the ACA. During the final Bush administration years, science budgets reflected the priorities of the appropriations committees, not the executive branch and its scientific advisory panels. Politics played a dominant role in this saga, but other factors also were significant, including the fact that the ACI and, to a lesser extent, the ACA identified explicit priorities. Major science agencies such as NASA and the NIH were not included in the initiative. Consequently, Congress was somewhat insulated from criticism for its failure to act, because important science constituencies excluded from the initiative remained silent. During this period Congress continued to add large earmarked amounts to the R&D budgets, but few were in the prioritized programs.
In the longer run the competitiveness campaign resulted in important changes in the pattern of appropriations for science, as did the earlier campaign to double the NIH budget in the 1990s. In both cases the late stages played out in a new administration affiliated with a different political party, which suggests that a sufficiently broad, bipartisan campaign can succeed regardless of which party is in power. Such broad consensus is difficult to achieve, which is why it usually occurs only in the face of a national crisis: notable federal R&D funding spikes occurred during World War II and after the 1957 Soviet Sputnik launch, while others followed the oil embargo in the 1970s and perceived cold war urgencies (e.g., President Reagan's Strategic Defense Initiative) in the 1980s. The NIH and competitiveness initiatives were not propelled by similarly dramatic events, but champions for both campaigns developed cases based on disturbing trends, including lagging rates of R&D investment compared to other countries, discouragement or lack of preparation of potential young scientists, and shortsighted abandonment of basic research in favor of applied research and development.¹¹ These and similar arguments were taken up by advocacy organizations, some of which, such as Research America¹² and the Task Force on the Future of American Innovation,¹³ were formed for the purpose. Prominent figures were recruited, op-eds were written, conferences and summits were held, and, ultimately, government responded. In the absence of a precipitating event, the advocacy communities worked to create a sense of national crisis to motivate the process.
4. Toward a Firmer Foundation for Science Policy
I too participated in the competitiveness campaign in my official capacity, encouraging an early report by the President's Council of Advisors on Science and Technology in 2002, overseeing the OSTP's role in crafting the ACI, and giving many supporting speeches. My direct experience with basic and applied research programs in diverse fields over four decades convinced me of the importance of these goals, but I was uneasy regarding the case put forward by the advocacy community. I thought the disturbing trends needed attention, but I was not convinced either that they would lead to the feared consequences or that the proposed remedies would work as advertised. On some issues, such as the status and future of the scientific workforce, there were deep uncertainties (Do we have too many scientists/engineers in field X, or too few?).¹⁴ My policy speeches from 2005 onward expressed my frustration over the inadequacy of data and analytical tools commensurate with science policymaking in a rapidly changing environment.¹⁵
Given the complex and unpredictable systems of executive branch agencies and congressional subcommittees, no deployment of czars, commissions, or congressional offices will guarantee that rational and coherent policy proposals will be implemented. Congress departs reliably from the status quo only in response to widely perceived national crises, or when impressed with a broad consensus among multiple constituencies. If the consensus is created by advocacy alone, then there is no assurance that the proposed solution will achieve the desired end, even if the problems it addresses are real. Moreover, advocacy-based consensus has never reached across all of the fields of technical endeavor that draw funding from the overall R&D pot. Attempts to prioritize among fields or agencies are extremely rare and never well received by the scientific community. Past campaigns focused on selected fields that were perceived to be relevant to the crisis at hand and ignored the others.
Can a sustained science policy consensus develop that is strong enough to influence the government machinery and wide enough to encompass all of the disparate but linked technical endeavors that federal funds support? There is some hope. The National Academies (NAS) offer high-quality advice in every relevant technical field, and the products of the National Research Council (NRC) carry substantial weight with all of the actors in the complex process described earlier. The NRC reports avoid blind advocacy but are nevertheless assembled by teams of scientists and engineers, nearly always in response to a specific narrow charter negotiated by the agency requesting, and funding, the study. Even a report as focused as Gathering Storm avoided specificity regarding the relative importance of different fields of science. When NAS president Frank Press urged his colleagues in 1988 not to leave key priority decisions to Congress, he was roundly criticized by his own community.¹⁶ The only source of high-level policy analysis that is relatively free of the biases of advocacy and self-interest is the community of social scientists and others who analyze science and technology policy as an academic field of study. And it is in the growth of this community and its products that the greatest hope lies for developing rational and objective policy perspectives that all parties to the national process can draw upon. The collective endeavor of this community is what I understand to be the science of science policy.
I became acutely aware of the inadequacy of available science policy tools following the terrorist attacks of September 11, 2001. These attacks sparked a strong patriotic response in the science and engineering communities. Along with the desire to respond aggressively to terrorism came a wave of uneasiness about the impact on science of demands for increased homeland security. These included an immediate tightening of visas for students and visiting scientists; regulations on handling "select agents," or substances of likely interest to terrorists; concern over the release even of nonclassified research results that might assist terrorism; and the possible diversion of funds from existing science programs to new efforts related to homeland security. All of this was new to the nation's technical communities, and sorting out the issues and options consumed huge amounts of time in studies and meetings. Among the many policy questions I was asked at the time was one raised by the National Science Board (NSB).
The NSB’s Task Force on National Workforce Policies for Science and Engineering invited me to address its June 2002 meeting on the topic "Impact of Security Policies on the Science and Engineering Workforce."
This was a reasonable request given the nature of the new policies, but it created a dilemma for me. Although I had no information on which to base an estimate of the impact, the prevailing wisdom in the academic community was that it would be negative. I could think of reasons to reinforce that conclusion, but I was also aware of the complexity of the technical workforce picture, which was changing rapidly because of globalization and profound development in China and India. Should I speculate? My extensive experience in research university and national laboratory administration gave me the confidence to offer an opinion. Or should I point instead to the much larger issue of our ignorance about such impacts and what we would need to remove it? To quote directly from my notes for that meeting:
The fact is, I do not know what the impact of security policies will be on the science and engineering workforce. Part of the reason for this—the least important part—is that the security policies are in a state of flux. Another part is that the impact will be psychological as well as instrumental, and psychology is not part of our predictive model. The most important factor, however, is that there is no reliable predictive model for workforce response to any particular driving force, such as a change in policy affecting student visas.
If there are such models, they seem to be implicit in the types of data we collect and the manner we choose to portray them. When I see graphs and tables relating to workforce, I have the impression they are answers to questions whose significance is either so well known to experts that no further discussion is required, or so completely buried in history that no further discussion is possible. I understand the need to collect the same data year after year so comparisons can be made and changes depicted accurately in the course of time. But I am not at all confident that the right questions are being asked or answered to provide guidance for action. We have workforce data that I do not understand how to use, and we have workforce questions whose answers would seem to require more than merely data.
My idea at the time was that the National Science Board, which oversees the production of the important Science and Engineering Indicators report,¹⁷ should consider the task of building a new workforce model that might make it possible to answer questions such as the one they asked me: What do we expect from a technical workforce model?
My response to the board follows:
I know what I expect from a model. I expect it to give policy guidance. I want to be able to assess the impact of a change of policy on the technical workforce. . . . What is the impact of a student loan forgiveness program? Of a scholarship program? Of a change in the compensation structure for researchers, faculty members, technical staff? Of an increase in sponsored research funds in some field? Of a change in graduation rates in certain fields among certain sociological groups? Ask all these questions with respect to area of technical skill, and with respect to the nation in which the changes are postulated to occur. It must be a global model, because the workforce we are speaking of has global mobility. It must take into account the effect of incentives, and the correlation of this effect with sociological parameters.
Above all, the model cannot be simply an extrapolation based on historical time-series data. The technical workforce is responding to factors that are changing too rapidly to be captured by historical data. And yet the model does not have to predict everything with perfect accuracy. What we need is the ability to estimate specific effects from specific causes under reasonable assumptions about the future. . . . Does it make sense for us to launch a project to model the global workforce with the aim of producing policy guidance? We need an action-oriented workforce project that seeks to define the technical workforce problem in a broad way, and to exploit the power of modern information technology to produce tools for policy guidance.¹⁸
I knew at the time that this was not a task the National Science Board was prepared to undertake, but I wanted to signal my concern about an issue that seemed to threaten the credibility of all policy advice.