Part III. Methods for collecting, analysing and reporting statistics on business
innovation
This chapter provides guidance on methodologies for collecting data on business innovation,
based on the concepts and definitions introduced in previous chapters. The guidance is aimed
at producers of statistical data on innovation as well as advanced users who need to
understand how innovation data are produced. While acknowledging other potential sources,
this chapter focuses on the use of business innovation surveys to collect data on different
dimensions of innovation-related activities and outcomes within the firm, along with other
contextual information. The guidance within this chapter covers the full life cycle of data
collection, including setting the objectives and scope of business innovation surveys; identifying
the target population; questionnaire design; sampling procedures; data collection methods
and survey protocols; post-survey data processing; and the dissemination of statistical outputs.
9.1. Introduction
9.1. This chapter provides guidance on methodologies for collecting data on business
innovation. As noted in Chapter 2, methodological guidance for the collection of data on
innovation is an essential part of the measurement framework for innovation. Data on
innovation can be obtained through object-based methods such as new product announcements
on line or in trade journals (Kleinknecht, Reijnen and Smits, 1993), and from expert assessments
of innovations (Harris, 1988). Other sources of innovation data include annual corporate
reports, websites, social surveys of employee educational achievement, reports to regional,
national and supranational organisations that fund research and experimental development
(R&D) or innovation, reports to organisations that give out innovation prizes, university
knowledge transfer offices that collect data on contract research funded by firms and the
licensing of university intellectual property, business registers, administrative sources, and
surveys of entrepreneurship, R&D and information and communication technology (ICT)
use. Many of these existing and potential future sources may have “big data” attributes,
namely they are too large or complex to be handled by conventional tools and techniques.
9.2. Although useful for different purposes, these data sources all have limitations.
Many do not provide representative coverage of innovation at either the industry or national
level because the data are based on self-selection: only firms that choose to make a product
announcement, apply for R&D funding, or license knowledge from universities are
included. Information from business registers and social, entrepreneurship, and R&D
surveys is often incomplete, covering only one facet of innovation. Corporate annual
reports and websites are inconsistent in their coverage of innovation activities, although
web-scraping techniques can automate searches for innovation activities on documents
posted on line and may be an increasingly valuable source of innovation data in the future.
Two additional limitations are that none of these sources provide consistent, comparable
data on the full range of innovation strategies and activities undertaken by all firms, as
discussed in Chapters 3 to 8, and many of these sources cannot be accurately linked to other
sources. Currently, the only source for a complete set of consistent and linkable data is a
dedicated innovation survey based on a business register.
9.3. The goal of a business innovation survey is to obtain high-quality data on innovation
within firms from authoritative respondents such as the chief executive officer or senior
managers. A variety of factors influence the attainment of this goal, including coverage of
the target population, the frequency of data collection, question and questionnaire design
and testing, the construction of the survey sample frame, the methods used to implement
the survey (including the identification of an appropriate respondent within the surveyed
unit) and post-survey data processing. All of these topics are relevant to national statistical
organisations (NSOs) and to international organisations and researchers with an interest in
collecting data on innovation activities through surveys and analysing them.
9.4. Business innovation surveys that are conducted by NSOs within the framework of
national business statistics must follow national practices in questionnaire and survey
design. The recommendations in this chapter cover best practices that should be attainable
by most NSOs. Surveys implemented outside of official statistical frameworks, such as by
international organisations or academics, will benefit from following the recommendations
in this chapter (OECD, 2015a). However, resource and legal restrictions can make it
difficult for organisations to implement all best practices.
9.5. The decision on the types of data to collect in a survey should be taken in
consultation with data users, including policy analysts, business managers and consultants,
academics, and others. The main users of surveys conducted by NSOs are policy makers and
policy analysts and consequently the choice of questions should be made after consultations
with those government departments and agencies responsible for innovation and business
development. Surveys developed by academics could also benefit from consultations with
governments or businesses.
9.6. The purpose(s) of data collection, for instance to construct national or regional
indicators or for use in research, will largely influence survey methodology choices. The
sample can be smaller if only indicators at the national level are required, whereas a larger
sample is necessary if users require data on sub-populations, longitudinal panel data, or
data on rare innovation phenomena. In addition, the purpose of the survey will have a strong
influence on the types of questions to be included in the survey questionnaire.
9.7. This manual contains more suggestions for questions on innovation than can be
included in a single survey. Chapters 3 to 8 and Chapter 10 recommend key questions for
collection on a regular basis and supplementary questions for inclusion on an occasional
basis within innovation survey questionnaires. Occasional questions based on the supplementary
recommendations or on other sections of the manual can be included in one-off modules
that focus on specific topics or in separate, specialised surveys. The recommendations in
this chapter are relevant to full innovation surveys, specialised surveys, and to innovation
modules included in other surveys.
9.8. This chapter provides more details on best practice survey methods than previous
editions of this manual. Many readers from NSOs will be familiar with these practices and
do not require detailed guidance on a range of issues. However, this edition is designed to
serve NSOs and other producers and users of innovation data globally. Readers from some
of these organisations may therefore find the details in this chapter of value to their work.
In addition to this chapter, other sources of generic guidelines for business surveys include
Willeboordse (ed.) (1997) and Snijkers et al. (eds.) (2013). Complementary material to this
manual’s online edition will provide relevant links to current and recent survey practices and
examples of experiments with new methods for data collection (http://oe.cd/oslomanual).
9.9. The chapter is structured as follows: Section 9.2 covers the target population and
other basic characteristics of relevance to innovation surveys. Questionnaire and question
design are discussed in section 9.3. A number of survey methodology issues are discussed
in the subsequent sections including sampling (section 9.4), data collection methods
(section 9.5), survey protocol (section 9.6) and post-survey processing (section 9.7). The
chapter concludes with a brief review of issues regarding the publication and dissemination
of results from innovation surveys (section 9.8).
Statistical unit
9.16. A statistical unit is an entity about which information is sought and for which
statistics are ultimately compiled; in other words, it is the institutional unit of interest for
the intended purpose of collecting innovation statistics. A statistical unit can be an observation
unit for which information is received and statistics are compiled, or an analytical unit
which is created by splitting or combining observation units with the help of estimations or
imputations in order to supply more detailed or homogeneous data than would otherwise
be possible (UN, 2007; OECD, 2015b).
9.17. The need to delineate statistical units arises in the case of large and complex
economic entities that are active in different industry classes, or have units located in
different geographical areas. There are several types of statistical units according to their
ownership, control linkages, homogeneity of economic activity, and their location, namely
enterprise groups, enterprises, establishments (a unit in a single location with a single
productive activity), and kind-of-activity units (KAUs, the part of a unit that engages in only one kind of productive
activity) (see OECD [2015b: Box 3.1] for more details). The choice of the statistical unit
and the methodology used to collect data are strongly influenced by the purpose of
innovation statistics, the existence of records of innovation activity within the unit, and the
ability of respondents to provide the information of interest.
9.18. The statistical unit in business surveys is generally the enterprise, defined in the
SNA as the smallest combination of legal units with “autonomy in respect of financial and
investment decision-making, as well as authority and responsibility for allocating resources
for the production of goods and services” (EC et al., 2009; OECD, 2015b: Box 3.1).
9.19. Descriptive identification variables should be obtained for all statistical units in the
target population for a business innovation survey. These variables are usually available
from statistical business registers and include, for each statistical unit, an identification
code, the geographic location, the kind of economic activity undertaken, and the unit size.
Additional information on the economic or legal organisation of a statistical unit, as well
as its ownership and public or private status, can help to make the survey process more
effective and efficient.
Reporting units
9.20. The reporting unit (i.e. the “level” within the business from which the required
data are collected) will vary from country to country (and potentially within a country),
depending on institutional structures, the legal framework for data collection, traditions,
national priorities, survey resources and ad hoc agreements with the business enterprises
surveyed. As such, the reporting unit may differ from the required statistical unit. It may
be necessary to combine, split, or complement (using interpolation or estimation) the
information provided by reporting units to align with the desired statistical unit.
9.21. Corporations can be made up of multiple establishments and enterprises, but for
many small and medium-sized enterprises (SMEs) the establishment and the enterprise are
usually identical. For enterprises with heterogeneous economic activities, it may be necessary
for regional policy interests to collect data for KAUs, or for establishments. However,
sampling establishments or KAUs requires careful attention to prevent double counting
during data aggregation.
9.22. When information is only available at higher levels of aggregation such as the
enterprise group, NSOs may need to engage with these units to obtain disaggregated data,
for instance by requesting information by jurisdiction and economic activity. This will
allow better interoperability with other economic statistics.
9.23. The enterprise group can play a prominent role as a reporting unit if questionnaires
are completed or responses approved by a central administrative office. In the case of
holding companies, a number of different approaches can be used, for example, asking the
holding company to report on the innovation activities of enterprises in specific industries,
or forwarding the questionnaire, or relevant sections, to other parts of the company.
9.24. Although policy interests or practical considerations may require innovation data
at the level of establishments, KAUs, and enterprise groups, it is recommended, wherever
possible, to collect data at the enterprise level to permit international comparisons. When
this is not possible, careful attention is required when collecting and reporting data on
innovation activities and expenditures, as well as linkage-related information, that may not
be additive at different levels of aggregation, especially in the case of multinational enterprises (MNEs). Furthermore,
innovation activities can be part of complex global value chains that involve dispersed
suppliers and production processes for goods and services, often located in different
countries. Therefore, it is important to correctly identify whenever possible statistical units
active in global value chains (see Chapter 7) in order to improve compatibility with other
data sources (such as foreign investment and trade surveys).
limited to surveys in only a few countries. Any ongoing efforts should provide better
guidance for innovation measurement in the future.
9.31. A number of economic activities are not generally recommended for data collection
by business innovation surveys and should be excluded from international comparisons of
business innovation. From an international comparison perspective, sections O (Public
administration), P (Education), Q (Human health and social work), R (Arts, entertainment
and recreation) and division 94 of section S (Membership organisations) are not recommended
for inclusion because of the dominant or large role of government or private non-profit
institutions serving households in the provision of these services in many countries.
However, there may be domestic policy demands for extending the coverage of national
surveys to firms active in these areas, for example if a significant proportion of units active
in this area in the country are business enterprises, or if such firms are entitled to receive
public support for their innovation activities.
9.32. Other sections recommended for exclusion are dominated by actors engaged in
non-market activities and therefore outside the scope of this manual, namely section T
(Households) and section U (Extraterritorial bodies).
Unit size
9.33. Although innovation activity is generally more extensive and more frequently
reported by larger firms, units of all sizes have the potential to be innovation-active and
should be part of the scope of business innovation surveys. However, smaller business
units, particularly those with a higher degree of informality (e.g. not incorporated as companies,
exempt from or not declaring some taxes, etc.), are more likely to be missing from statistical
business registers. The relative importance of such units can be higher in countries in earlier
stages of development. Comparing data for countries with different types of registers for
small firms and with varying degrees of output being generated in the informal economy
can therefore present challenges. An additional challenge, noted in Chapter 3, stems from
adequately interpreting innovation data for recently created firms, for which a substantial
number of activities can be deemed to be new to the firm.
9.34. Therefore, for international comparisons, it is recommended to limit the scope of
the target population to comprise all statistical business units with ten or more persons
employed and to use average headcounts for size categories. Depending on user interest
and resources, surveys can also include units with fewer than ten persons employed,
particularly in high technology and knowledge-intensive service industries. This group is
likely to include start-ups and spin-offs of considerable policy interest (see Chapter 3).
relationships between innovation activities and outcomes. Relevant outcomes include changes
in productivity, employment, exports and revenue.
9.39. Selected innovation questions may be added occasionally to other surveys to assist
in improving, updating and maintaining the innovation survey frame.
9.46. The quality advantages of short observation periods and the potential interpretation
advantages of longer observation periods may be combined through the construction of a
longitudinal panel linking firms in consecutive cross-sectional innovation surveys (see
subsection 9.4.3 below). For example, if the underlying data have a one-year observation
period, the innovation status of firms over a two- or three-year period can be calculated
from data for firms with observations over two or three consecutive annual
observation periods. Additional assumptions and efforts would be required to deal with
instances where repeated observations are not available for all firms in the sample, for
example due to attrition, or the use of sampling methods to reduce the burden on some
types of respondents (e.g. SMEs). A strong argument in favour of a longitudinal panel
survey design is that it enhances the range of possible analyses of causal relationships
between innovation activities and outcomes (see subsection 9.4.3 below).
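For illustration only, the following Python sketch shows one way to derive a multi-year innovation status from annual panel observations, under the simplifying assumption of one record per firm and year; the data layout and field names are hypothetical examples and are not prescribed by this manual.

```python
# Illustrative sketch: deriving multi-year innovation status from annual panel data.
# Assumes a simple layout of one record per firm and year; field names are hypothetical.

from typing import Dict, Optional, Tuple

def window_innovation_status(
    annual_status: Dict[Tuple[str, int], bool],  # (firm_id, year) -> innovated in that year?
    firm_id: str,
    start_year: int,
    window: int = 3,
) -> Optional[bool]:
    """Return True if the firm reported at least one innovation in any year of the
    window, False if it reported none, and None if any year is missing (e.g. due to
    attrition or rotating samples), in which case imputation or reweighting is needed."""
    yearly = [annual_status.get((firm_id, start_year + i)) for i in range(window)]
    if any(status is None for status in yearly):
        return None  # incomplete panel: handle separately
    return any(yearly)

# Example: a firm observed in 2019-2021 that innovated only in 2020
panel = {("F001", 2019): False, ("F001", 2020): True, ("F001", 2021): False}
print(window_innovation_status(panel, "F001", 2019))  # True
```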
9.47. Observation periods that are longer than the frequency of data collection can affect
comparisons of results from consecutive surveys. In such cases, it can be difficult to
determine if changes in results over time are mainly due to innovation activities in the non-
overlapping period or if they are influenced by activities in the period of overlap with the
previous survey. Spurious serial correlation could therefore be introduced as a result.
9.48. At the time of publication of this manual, the observation period used by countries
varies between one and three years. This reduces international comparability for key
indicators such as the incidence of innovation and the rate of collaboration with other
actors. Although there is currently no consensus on what should be the optimal length of
the generic observation period (other than a three-year maximum limit), convergence towards
a common observation period would considerably improve international comparability. It is
therefore recommended to conduct, through concerted efforts, additional experimentation
on the effects of different lengths for the observation period and the use of panel data to
address interpretation issues. The results of these experiments would assist efforts to reach
international agreement on the most appropriate length for the observation period.
9.55. Data quality can be improved by reducing respondent fatigue and by maintaining a
motivation to provide good answers. Both fatigue and motivation are influenced by question
length, but motivation can be improved by questions that are relevant and interesting to the
respondent. The latter is particularly important for respondents from non-innovative units,
who need to find the questionnaire relevant and of interest; otherwise, they are less likely
to respond. Therefore, all questions should ideally be relevant to all units in all industries
(Tourangeau, Rips and Rasinski, 2000).
9.56. “Satisficing” refers to respondent behaviours to reduce the time and effort required
to complete an online or printed questionnaire. These include abandoning the survey before it
is completed (premature termination), skipping questions, non-differentiation (when respondents
give the identical response category to all sub-questions in a question, for example
answering “slightly important” to all sub-questions in a grid question), and speeding
through the questionnaire (Barge and Gehlbach, 2012; Downes-Le Guin et al., 2012). The
main strategies for minimising satisficing are to ensure that the questions are of interest to
all respondents and to minimise the length of the questionnaire. Non-differentiation can be
reduced by limiting the number of sub-questions in a grid to no more than seven (Couper
et al., 2013). Grid questions with more than seven sub-questions can be split into several
subgroups. For instance, a grid question with ten sub-questions could be organised around
one theme with six sub-questions, and a second theme with four.
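As a purely illustrative aid, the Python sketch below flags possible non-differentiation in a grid question by checking whether a respondent gave the identical response to every answered sub-question; the response codes and the minimum number of answered items are hypothetical choices.

```python
# Illustrative sketch: flagging possible non-differentiation ("straight-lining") in a
# grid question. The response codes and the threshold for answered items are hypothetical.

from typing import List, Optional

def flags_non_differentiation(grid_answers: List[Optional[int]], min_items: int = 4) -> bool:
    """Flag a respondent who gave the same ordinal response to every answered
    sub-question of a grid, provided enough items were answered to be meaningful."""
    answered = [a for a in grid_answers if a is not None]
    return len(answered) >= min_items and len(set(answered)) == 1

# Example: ordinal codes 1-4 for "not used" to "highly important"
respondents = {"R01": [2, 2, 2, 2, 2, 2, 2], "R02": [1, 3, 2, None, 4, 2, 1]}
flagged = {rid: flags_non_differentiation(answers) for rid, answers in respondents.items()}
print(flagged)  # {'R01': True, 'R02': False}
```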
Filters
9.60. Filters and skip instructions direct respondents to different parts of a questionnaire,
depending on their answers to the filter questions. Filters can be helpful for reducing
response burden, particularly in complex questionnaires. Conversely, filters can encourage
satisficing behaviour whereby respondents answer “no” to a filter question to avoid
completing additional questions.
9.61. The need for filters and skip instructions can be minimised, for instance by
designing questions that can be answered by all units, regardless of their innovation status.
This can provide additional information of value to policy and to data analysis. However,
filters are necessary in some situations, such as when a series of questions are only relevant
to respondents that report one or more product innovations.
9.62. The online format permits automatic skips as a result of a filter, raising concerns
that respondents who reply to an online questionnaire could provide different results from
those replying to a printed version, which allows them to see skipped questions and change
their mind if they decide that those skipped questions are relevant. When both online and
printed questionnaires are used, the online version can use “greying” for skipped questions
so that the questions are visible to respondents. This could improve comparability with the
printed version. If paradata – i.e. the data about the process by which surveys are filled in
– are collected in an online survey (see section 9.5 below), each respondent’s path through
the questionnaire can be evaluated to determine if greying has any effect on behaviour, for
instance if respondents backtrack to change an earlier response.
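The following Python sketch illustrates, with a hypothetical event log, how basic paradata can be summarised to compare per-question response times and count backtracking events; the field names and figures are illustrative only.

```python
# Illustrative sketch of a simple paradata check: comparing per-question response times
# and counting backtracking events. The event log format is a hypothetical example.

from collections import defaultdict
from statistics import median

# Each event: (respondent_id, question_id, seconds_spent, backtracked_to_earlier_question)
events = [
    ("R01", "Q5_filter", 12.0, False),
    ("R01", "Q6_grid", 95.0, True),
    ("R02", "Q5_filter", 9.0, False),
    ("R02", "Q6_grid", 41.0, False),
]

times = defaultdict(list)
backtracks = defaultdict(int)
for _, question, seconds, backtracked in events:
    times[question].append(seconds)
    if backtracked:
        backtracks[question] += 1

for question in times:
    print(question, "median seconds:", median(times[question]),
          "backtracks:", backtracks[question])
```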
Question order
9.63. A respondent’s understanding of a question can be influenced by information
obtained from questions placed earlier in the questionnaire. Adding or deleting a question
can therefore influence subsequent answers and reduce comparability with previous surveys
or with surveys conducted in other jurisdictions.
9.64. Questions on activities that are relevant to all units regardless of their innovation
status should be placed before questions on innovation and exclude references to innovation.
This applies to potential questions on business capabilities (see Chapter 5).
9.65. Wherever possible, questions should be arranged by theme so that questions on a
similar topic are grouped together. For instance, questions on knowledge sourcing activities
and collaboration for innovation should be co-located. Questions on the contribution of
external actors to a specific type of innovation (product or business process) need to be
located in the section relating to that type of innovation.
9.4. Sampling
9.77. The frame population should be based on the reference year of the innovation
survey. Changes to units during the reference period can affect the frame population,
including changes in industrial classifications (ISIC codes), new units created during the
period, mergers, splits of units, and units that ceased activities during the reference year.
9.78. NSOs generally draw on an up-to-date official business register, established for
statistical purposes, to construct the sample frame. Other organisations interested in conducting
innovation surveys may not have access to this business register. The alternative is to use
privately maintained business registers, but these are often less up to date than the official
business register and can therefore contain errors in the assigned ISIC industry and number
of employed persons. The representativeness of private registers can also be reduced if the
data depend on firms responding to a questionnaire, or if the register does not collect data
for some industries. When an official business register is not used to construct the sampling
frame, survey questionnaires should always include questions to verify the size and sector
of the responding unit. Units that do not meet the requirements for the sample should be
excluded during data editing.
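By way of illustration, the Python sketch below screens responding units against the survey scope using their self-reported activity and size; the scope rules, ISIC sections and field names shown are placeholders and should be replaced by the scope actually adopted for the survey.

```python
# Illustrative sketch: screening responses against the survey scope when the frame comes
# from a private register. Field names and scope rules are hypothetical examples.

IN_SCOPE_SECTIONS = {"B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M", "N"}
MIN_PERSONS_EMPLOYED = 10

responses = [
    {"unit_id": "U1", "isic_section": "C", "persons_employed": 35},
    {"unit_id": "U2", "isic_section": "P", "persons_employed": 120},  # out-of-scope activity
    {"unit_id": "U3", "isic_section": "J", "persons_employed": 6},    # below size threshold
]

def in_scope(unit: dict) -> bool:
    """Keep only units whose self-reported activity and size meet the survey scope."""
    return (unit["isic_section"] in IN_SCOPE_SECTIONS
            and unit["persons_employed"] >= MIN_PERSONS_EMPLOYED)

kept = [u for u in responses if in_scope(u)]
excluded = [u["unit_id"] for u in responses if not in_scope(u)]
print("kept:", [u["unit_id"] for u in kept], "excluded during editing:", excluded)
```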
Stratified sampling
9.81. A simple random sample (one sampling fraction for all sampled units of a target
population) is an inefficient method of estimating the value of a variable within a desired
confidence level for all strata because a large sample will be necessary to provide sufficient
sampling power for strata with only a few units or where variables of interest are less
prevalent. It is therefore more efficient to use different sampling fractions for strata that are
determined by unit size and economic activity.
9.82. The optimal sample size for stratified sample surveys depends on the desired level
of precision in the estimates and the extent to which individual variables will be combined
in tabulated results. The sample size should also be adjusted to reflect the expected survey
non-response rate, the expected misclassification rate for units, and other deficiencies in
the survey frame used for sampling.
9.83. The target sample size can be calculated using a target precision or confidence level
and data on the number of units, the size of the units and the variability of the main variables
of interest for the stratum. The variance of each variable can be estimated from previous
surveys or, for new variables, from the results of a pilot survey. In general, the necessary
sample fraction will decrease with the number of units in the population, increase with the
size of the units and the variability of the population value, and increase with the expected
non-response rate.
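As an illustration of this calculation for a proportion, the following Python sketch applies a standard finite population correction and inflates the result for expected non-response; the input values are examples, not recommended parameters.

```python
# A minimal sketch of a per-stratum sample size calculation for estimating a proportion,
# using a standard finite population correction and an adjustment for expected
# non-response. Inputs (precision, expected proportion, response rate) are illustrative.

import math

def stratum_sample_size(N: int, p: float = 0.5, margin: float = 0.05,
                        z: float = 1.96, response_rate: float = 0.7) -> int:
    """N: units in the stratum; p: expected proportion (0.5 maximises variance);
    margin: target half-width of the confidence interval; z: critical value;
    response_rate: expected unit response rate used to inflate the sample."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2          # infinite-population size
    n = n0 / (1 + (n0 - 1) / N)                        # finite population correction
    return min(N, math.ceil(n / response_rate))        # inflate for non-response, cap at N

# Example: a stratum of 400 medium-sized manufacturing units
print(stratum_sample_size(N=400, p=0.4, margin=0.05, response_rate=0.65))
```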
9.84. It is recommended to use higher sampling fractions for heterogeneous strata (high
variability in variables of interest) and for smaller ones. The sampling fractions should be
100% in strata with only a few units, for instance when there are only a few large units in
an industry or region. The size of the units could also be taken into consideration by using
the probability proportional to size (pps) sampling approach, which reduces the sampling
fractions in strata with smaller units. Alternatively, the units in each stratum can be sorted
by size or turnover and sampled systematically. Different sampling methods can be used
for different strata.
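The Python sketch below illustrates one possible implementation of systematic sampling within a stratum after sorting units by size, one of the options mentioned above; the unit data, sample size and random seed are arbitrary examples.

```python
# Illustrative sketch of systematic sampling within a stratum after sorting units by size.

import random

def systematic_sample(units, n, size_key="persons_employed", seed=1):
    """Sort the stratum by size, then take every k-th unit from a random start so that
    the sample is spread across the size distribution."""
    ordered = sorted(units, key=lambda u: u[size_key], reverse=True)
    k = len(ordered) / n                       # sampling interval (may be fractional)
    random.seed(seed)
    start = random.uniform(0, k)
    picks = {int(start + i * k) for i in range(n)}
    return [ordered[i] for i in sorted(picks)]

stratum = [{"unit_id": f"U{i:03d}", "persons_employed": e}
           for i, e in enumerate([12, 480, 55, 30, 250, 18, 75, 140, 22, 60])]
for unit in systematic_sample(stratum, n=4):
    print(unit)
```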
9.85. Stratification of the population should produce strata that are as homogeneous as
possible in terms of innovation activities. Given that the innovation activities of units differ
substantially by industry and unit size, it is recommended to use principal economic activity
and size to construct strata. In addition, stratification by region can be required to meet
policy needs. The potential need for age-based strata should also be explored.
9.86. The recommended size strata by persons employed are as follows:
small units: 10 to 49
medium units: 50 to 249
large units: 250+.
9.87. Depending on national characteristics, strata for units with fewer than 10 persons
employed, and for units with 500 or more, can also be constructed, but international comparability requires
the ability to accurately replicate the above three size strata.
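For illustration, the following Python sketch maps average headcounts to the recommended size strata, with the optional national strata shown as a variant; the labels themselves are examples only.

```python
# A minimal sketch mapping average headcount to the recommended size strata; the
# optional extra strata (fewer than 10, 500 or more) are shown as a national variant.

def size_stratum(persons_employed: float, national_detail: bool = False) -> str:
    """Return the internationally comparable size stratum; optionally split out
    additional national strata while keeping the three core bands recoverable."""
    if persons_employed < 10:
        return "below 10 (outside the core international scope)"
    if persons_employed < 50:
        return "small (10-49)"
    if persons_employed < 250:
        return "medium (50-249)"
    if national_detail and persons_employed >= 500:
        return "large (250+): 500 or more"
    return "large (250+)"

for headcount in (8, 25, 120, 300, 750):
    print(headcount, "->", size_stratum(headcount, national_detail=True))
```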
9.88. The stratification of units by main economic activity should be based on the most
recent ISIC or nationally equivalent industrial classifications. The optimal classification level
(section, division, group or class) largely depends on national circumstances that influence the
degree of precision required for reporting. For example, an economy specialised in wood
production would benefit from a separate stratum for this activity (division 16 of section
C, ISIC Rev.4), whereas a country where policy is targeting tourism for growth might
create separate strata for division 55 (Accommodation) of section I, for division 56 (Food
services) of section I, and for section R (Arts, entertainment and recreation). Sampling strata
should not be over-aggregated because this reduces homogeneity within each stratum.
9.94. Four main methods can be used to conduct surveys: online, postal, computer-
assisted telephone interviewing (CATI), and computer-assisted personal interviewing (CAPI
or face-to-face interviewing). Online and postal surveys rely on the respondent reading the
questionnaire, with a visual interface that is influenced by the questionnaire layout. CATI
and face-to-face surveys are aural, with the questions read out to the respondent, although
a face-to-face interviewer can provide printed questions to a respondent if needed.
9.95. The last decade has seen a shift from postal to online surveys in many countries.
Most countries that use an online format as their primary survey method also provide a
printed questionnaire as an alternative, offered either as a downloadable file (via a link in
an e-mail or on the survey site) or via the post.
9.96. The choice of which survey method to use depends on costs and potential differences
in response rates and data quality. Recent experimental research has found few significant
differences in either the quality of responses or response rates between printed and
online surveys (Saunders, 2012). However, this research has mostly focused on households
and has rarely evaluated surveys of business managers. Research on different survey
methods, particularly in comparison to online formats, is almost entirely based on surveys
of university students or participants in commercial web panels. It would therefore be
helpful to have more research on the effects of different methods for business surveys.
practice protocol consists of posting a cover letter and a printed copy of the questionnaire
to the respondent, followed by two or three mailed follow-up reminders to non-respondents
and telephone reminders if needed.
9.98. Postal surveys make it easy for respondents to quickly view the entire questionnaire
to assess its length, question topics, and its relevance. If necessary, a printed questionnaire
can be easily shared among more than one respondent, for instance if a separate person
from accounting is required to complete the section on innovation expenditures (see section
9.6 below on multiple respondents). A printed questionnaire with filter questions requires
that respondents carefully follow instructions on which question to answer next.
time data, such as the time required to respond to specific questions, sections, or to the
entire survey (Olson and Parkhurst, 2013). Paradata can be analysed to identify best
practices that minimise undesirable respondent behaviour such as premature termination or
satisficing, questions that are difficult for respondents to understand (for instance if
question response times are considerably longer than the average for a question of similar
type), and if late respondents are more likely than early ones to speed through a
questionnaire, thereby reducing data quality (Belfo and Sousa, 2011; Fan and Yan, 2010;
Revilla and Ochoa, 2015).
9.104. It is recommended to collect paradata when using online surveys in order to
identify issues with question design and questionnaire layout.
9.109. The survey protocol consists of all activities to implement the questionnaire,
including contacting respondents, obtaining completed questionnaires, and following up
with non-respondents. The protocol should be decided in advance and designed to ensure
that all respondents have an equal chance of replying to the questionnaire, since the goal is
to maximise the response rate. Nonetheless, the optimum survey protocol is likely to vary
by country.
9.6.4. Non-response
9.114. Unit non-response occurs when a sampled unit does not reply at all. This can occur
if the surveying institute cannot reach the reporting unit or if the reporting unit refuses to
answer. Item non-response refers to missing answers to a specific question and is equal to
the percentage of missing answers among the responding units. Item non-response rates are
frequently higher for quantitative questions than for questions using nominal or ordinal
response categories.
9.115. Unit and item non-response are only minor issues if missing responses are randomly
distributed over all units sampled and over all questions. When unit non-responses are
random, statistical power can be maintained by increasing the sampling fraction. When
item non-responses are random, simple weighting methods can be used to estimate the
population value of a variable. However, both types of non-response can be subject to bias.
For example, managers from non-innovative units could be less likely to reply because they
find the questionnaire of little relevance, resulting in an overestimate of the share of innovative
units in the population. Or, managers of innovative units could be less likely to reply due
to time constraints.
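The Python sketch below illustrates a simple weighting-class adjustment in which design weights are inflated by the inverse of the stratum response rate, under the assumption that non-response is random within strata; the figures are examples, and real adjustments may rely on more refined response models or the non-response survey discussed below.

```python
# Illustrative sketch of a weighting-class adjustment for unit non-response: design
# weights within each stratum are inflated by the inverse of the stratum response rate.
# Numbers are examples only.

strata = {
    # stratum: (population units N, sampled units n, responding units r)
    "small_manufacturing": (2000, 400, 240),
    "large_manufacturing": (60, 60, 51),
}

for name, (N, n, r) in strata.items():
    design_weight = N / n                       # inverse of the sampling fraction
    adjusted_weight = design_weight * n / r     # inflate for non-response within the stratum
    print(f"{name}: design weight {design_weight:.2f}, "
          f"non-response adjusted weight {adjusted_weight:.2f}")
```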
9.123. The non-response survey questionnaire must be short (no more than one printed
page) and take no more than two to three minutes to complete. The key questions should
replicate, word for word, “yes or no” questions in the main survey on innovation outputs
(product innovations and business process innovations) and for some of the innovation
activities (for instance R&D, engineering, design and other creative work activities, etc.).
If not available from other sources, the non-response survey needs to include questions on
the unit’s economic activity and size.
9.124. Non-response surveys are usually conducted by CATI, which provides the
advantage of speed and can obtain high response rates for a short questionnaire, as long as
all firms in the sample have a working contact telephone number. The disadvantage of a
CATI survey as a follow-up to a postal or online survey is that short telephone surveys in
some countries could be more likely than the original survey to elicit positive responses for
questions on innovation activities and outputs. The experience in this regard has been
mixed, with different countries obtaining different results. More experimental research on
the comparability of business survey methods is recommended.
9.125. Data processing involves checks for errors, imputation of missing values and the
calculation of weighting coefficients.
Relational checks
9.130. These evaluate the relationship between two variables and can identify hard and
soft errors. Hard errors occur when a relationship must be wrong, for instance if percentages
do not sum to 100% or if the number of reported persons employed with a tertiary education
exceeds the total reported number of persons employed. Other relational checks identify
soft errors where a response could be wrong. For instance, a unit with ten persons employed
could report EUR 10 million of innovation expenditures. This is possible, but unlikely.
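As an illustration, the following Python sketch applies two hard checks and one soft check to a single record; the variable names and the soft-error threshold are hypothetical.

```python
# A minimal sketch of hard and soft relational checks on a single record; variable
# names and the soft-error threshold are hypothetical.

def relational_checks(record: dict) -> dict:
    """Return lists of hard errors (relationship must be wrong) and soft errors
    (response is possible but unlikely and should be queried or reviewed)."""
    hard, soft = [], []
    shares = record.get("innovation_expenditure_shares", {})
    if shares and abs(sum(shares.values()) - 100.0) > 0.5:
        hard.append("expenditure shares do not sum to 100%")
    if record["tertiary_educated"] > record["persons_employed"]:
        hard.append("tertiary-educated exceeds total persons employed")
    spend_per_person = record["innovation_expenditure_eur"] / record["persons_employed"]
    if spend_per_person > 500_000:   # possible but unlikely: flag for follow-up
        soft.append("innovation expenditure per person employed is unusually high")
    return {"hard": hard, "soft": soft}

unit = {"persons_employed": 10, "tertiary_educated": 4,
        "innovation_expenditure_eur": 10_000_000,
        "innovation_expenditure_shares": {"R&D": 60.0, "other": 40.0}}
print(relational_checks(unit))
```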
9.141. Innovation surveys are used to produce tables of innovation statistics and indicators
and in econometric analyses of a variety of topics concerning innovation. The production
of statistics and indicators requires using population weights to produce representative
results for the target population. Most innovation surveys use a probability sample for many
strata. Surveys can create two types of errors for indicators: random errors due to the
random process used to select the units, and systematic errors containing all non-random
errors (bias). The probability of random errors should be provided with the results by
including the confidence intervals, standard errors and coefficients of variation where
applicable. Confidence limits span the true but unknown values in the survey population
with a given probability. If possible, data quality reports should also provide an evaluation
of non-random errors.
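For illustration, the Python sketch below computes a stratified estimate of the share of innovative firms together with a normal-approximation confidence interval; the stratum figures are examples, and production estimates would also reflect non-response adjustments and domain estimation.

```python
# Illustrative sketch: a stratified estimate of the share of innovative firms with a
# normal-approximation confidence interval. Stratum figures are examples only.

import math

strata = {
    # stratum: (population N_h, responding units n_h, innovative respondents)
    "small":  (5000, 300, 120),
    "medium": (1200, 200, 110),
    "large":  (150, 150, 105),
}

N = sum(N_h for N_h, _, _ in strata.values())
estimate, variance = 0.0, 0.0
for N_h, n_h, innovators in strata.values():
    p_h = innovators / n_h
    estimate += (N_h / N) * p_h
    # finite-population-corrected variance contribution of the stratum
    variance += (N_h / N) ** 2 * (1 - n_h / N_h) * p_h * (1 - p_h) / (n_h - 1)

half_width = 1.96 * math.sqrt(variance)
print(f"share of innovative firms: {estimate:.3f} "
      f"(95% CI {estimate - half_width:.3f} to {estimate + half_width:.3f})")
```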
References
Barge, S. and H. Gehlbach (2012), “Using the theory of satisficing to evaluate the quality of survey
data”, Research in Higher Education, Vol. 53/2, pp. 182-200.
Belfo, F.P. and R.D. Sousa (2011), “A web survey implementation framework: evidence-based design
practices”, conference paper for the 6th Mediterranean Conference on Information Systems,
MCIS 2011, Limassol, 3-5 September, http://aisel.aisnet.org/mcis2011/43/.
Cirera, X. and S. Muzi (2016), “Measuring firm-level innovation using short questionnaires: Evidence
from an experiment”, Policy Research Working Papers, No. 7696, World Bank Group.
Couper, M.P. et al. (2013), “The design of grids in web surveys”, Social Science Computer Review,
Vol. 31/3, pp. 322-345.
Downes-Le Guin, T. et al. (2012), “Myths and realities of respondent engagement in online surveys”,
International Journal of Market Research, Vol. 54/5, pp. 613-633.
Dykema, J. et al. (2013), “Effects of e-mailed versus mailed invitations and incentives on response rates,
data quality, and costs in a web survey of university faculty”, Social Science Computer Review,
Vol. 31/3, pp. 359-370.
EC et al. (2009), System of National Accounts 2008, United Nations, New York,
https://unstats.un.org/unsd/nationalaccount/docs/sna2008.pdf.
Fan, W. and Z. Yan (2010), “Factors affecting response rates of a web survey: A systematic review”,
Computers in Human Behavior, Vol. 26/2, pp. 132-139.
Galesic, M. and M. Bosnjak (2009), “Effects of questionnaire length on participation and indicators of
response quality in a web survey”, Public Opinion Quarterly, Vol. 73/2, pp. 349-360.
Galindo-Rueda, F. and A. Van Cruysen (2016), “Testing innovation survey concepts, definitions and
questions: Findings from cognitive interviews with business managers”, OECD Science, Technology
and Innovation Technical Papers, OECD Publishing, Paris, http://oe.cd/innocognitive.
Harkness, J.A. et al. (eds.) (2010), Survey Methods in Multicultural, Multinational, and Multiregional
Contexts, Wiley Series in Survey Methodology, John Wiley & Sons, Hoboken.
Harris, R.I.D. (1988), “Technological change and regional development in the UK: Evidence from the
SPRU database on innovations”, Regional Studies, Vol. 22/5, pp. 361-374.
Hoskens, M. et al. (2016), “State of the art insights in capturing, measuring and reporting firm-level
innovation indicators”, paper for the OECD Blue Sky 2016 Forum, www.oecd.org/sti/069%20-
%20Measuring%20innovation_ECOOM%20August%202016.pdf.
Kleinknecht, A., J.O.N. Reijnen and W. Smits (1993), “Collecting literature-based innovation output
indicators: The experience in the Netherlands”, in New Concepts in Innovation Output Measurement,
Palgrave Macmillan, London, pp. 42-84.
Millar, M.M. and D.A. Dillman (2011), “Improving response to web and mixed-mode surveys”, Public
Opinion Quarterly, Vol. 75/2, pp. 249-269, https://doi.org/10.1093/poq/nfr003.
OECD (2015a), Recommendation of the OECD Council on Good Statistical Practice, OECD, Paris,
www.oecd.org/statistics/good-practice-toolkit/Brochure-Good-Stat-Practices.pdf.
OECD (2015b), Frascati Manual 2015: Guidelines for Collecting and Reporting Data on Research and
Experimental Development, The Measurement of Scientific, Technological and Innovation Activities,
OECD Publishing, Paris, http://oe.cd/frascati.
Olson, K. and B. Parkhurst (2013), “Collecting paradata for measurement error evaluations”, in
Improving Surveys with Paradata: Analytic Uses of Process Information, John Wiley & Sons,
Hoboken, pp. 43-72.
Revilla, M. and C. Ochoa (2015), “What are the links in a web survey among response time, quality and
auto-evaluation of the efforts done?”, Social Science Computer Review, Vol. 33/1, pp. 97-114,
https://doi.org/10.1177/0894439314531214.
Saunders, M.N.K. (2012), “Web versus mail: The influence of survey distribution mode on employees’
response”, Field Methods, Vol. 24/1, pp. 56-73.
Snijkers, G. and D.K. Willimack (2011), “The missing link: From concepts to questions in economic
surveys”, paper presented at the 2nd European Establishment Statistics Workshop (EESW11),
Neuchâtel, Switzerland, Sept. 12-14.
Snijkers, G. et al. (eds.) (2013), Designing and Conducting Business Surveys, Wiley Series in Survey
Methodology, John Wiley & Sons, Hoboken.
Tourangeau, R., L.J. Rips and K. Rasinski (2000), The Psychology of Survey Response, Cambridge
University Press, Cambridge.
UN (2008), International Standard Industrial Classification of All Economic Activities (ISIC),
Revision 4, United Nations, New York,
https://unstats.un.org/unsd/publications/catalogue?selectID=396.
UN (2007), Statistical Units, United Nations, New York,
http://unstats.un.org/unsd/isdts/docs/StatisticalUnits.pdf.
Wilhelmsen, L. (2012), “A question of context: Assessing the impact of a separate innovation survey and
of response rate on the measurement of innovation activity in Norway”, Documents, No. 51/2012, Statistics
Norway, Oslo, www.ssb.no/a/english/publikasjoner/pdf/doc_201251_en/doc_201251_en.pdf.
Willeboordse, A. (ed.) (1997), Handbook on Design and Implementation of Business Surveys, Eurostat,
Luxembourg, http://ec.europa.eu/eurostat/ramon/statmanuals/files/Handbook%20on%20surveys.pdf.
Willis, G.B. (2015), Analysis of the Cognitive Interview in Questionnaire Design, Oxford University
Press, Oxford.
Willis, G.B. (2005), Cognitive Interviewing: A Tool for Improving Questionnaire Design, SAGE
Publications.
Zhang, X.C. et al. (2017), “Survey method matters: Online/offline questionnaires and face-to-face or
telephone interviews differ”, Computers in Human Behavior, Vol. 71, pp. 172-180.
10.1. Introduction
10.1. The object approach to innovation measurement collects data on a single, “focal”
innovation (the object of the study), in contrast to the subject approach, which focuses on
the firm and collects data on all its innovation activities (the subject) (see Chapter 2). The
main purpose of the object approach is not to produce aggregate innovation statistics but to
collect data for analytical and research purposes. The method can also provide useful
information for quality assurance purposes on how respondents interpret questions on
innovation and whether they over-, under- or misreport innovation.
10.2. The object method can identify focal innovations through expert evaluations, or through
announcements of innovations in trade publications (Kleinknecht and Reijnen, 1993; Santarelli
and Piergiovanni, 1996; Townsend, 1981) or online sources (company websites, reports,
investor announcements, etc.). An alternative is to incorporate
the object approach within a subject-based innovation survey. In addition to questions on all
of the firm’s innovation activities, a module of questions can focus on a single innovation.
DeBresson and Murray (1984) were the first to use a version of this method as part of an
innovation survey in Canada. More recently, this approach has been used in business enterprise
surveys, for instance by Statistics Canada and the Japanese Statistical Office, academic
researchers in Australia (O’Brien et al., 2015, 2014) and the United States (Arora, Cohen and
Walsh, 2016), and in surveys of innovation in the Government sector (Arundel et al., 2016).
10.3. The inclusion of the object method within a subject-based innovation survey has
several advantages over the use of experts or announcements to identify focal innovations.
First, it can obtain information on a focal innovation for a representative sample of all
innovative firms, whereas other methods will be prone to self-selection biases. Second, it
can collect data on all types of innovations. Using experts or announcements to identify
innovations will produce a bias towards successful product innovations. Third, it can collect
information on innovations that are new to the firm only, or not sufficiently novel to be
reported on line or in trade journals. It is therefore recommended, where cost-effective, to
collect data on a focal innovation through representative surveys.
10.12. The first option has several advantages. The question is usually well understood by
respondents and the innovation is memorable, which ensures that respondents can answer
questions about it. In addition, the most important innovation is relevant to many areas of
research, such as on the factors that lead to success. Leaving the first option open to all
types of innovations can collect useful data on the types of innovations that firms find
important. It can also identify innovation inputs that are likely to be of high value to a firm.
For instance, a respondent could give a moderate importance ranking to universities as a
source of knowledge for all innovation activities, but the use of this source for its most
important innovation would indicate that the value of knowledge from universities could
vary by the type of innovation.
10.13. The second option requires respondents to have a good knowledge of the development
cost for different innovations. The third and fourth options are variants of the first option,
but limited to either product or business process innovations and therefore will not be
relevant to firms that did not introduce an innovation of that type. The fifth option is useful
for research that requires a random selection of all types of innovations.
10.14. Unless there are good research reasons for using a different option, the first option
is recommended because it is better understood by respondents and is relevant to all firms.
Furthermore, the first option is useful for research into the types of innovations with the
largest expected economic benefits to the firm. These results can be used to construct aggregate
indicators by industry, firm size, or other firm characteristic on the types of innovations
(i.e. product or business process innovations) that respondents find of greatest economic
value to their firm.
10.15. Cognitive testing shows that respondents are able to identify their most important
innovation as defined by its actual or expected contribution to the firm’s economic performance.
For small and medium-sized enterprises (SMEs), there is usually one innovation that stands
out from all others. Respondents from firms with many different innovations (often, but
not always, large firms) can find it difficult to identify a single innovation that stands out in
comparison with the rest, but this does not affect their ability to select a single innovation
and answer subsequent questions about it. Respondents from firms with many innovations
are still likely to find it easier to answer questions on a focal innovation than to summarise
results for multiple innovations.
10.16. If resources permit, written information in an open-ended description of the most
important innovation can be coded and analysed to assess how respondents interpret
questions on the types of innovation and the novelty of the innovation (Arundel, O’Brien
and Torugsa, 2013; Cirera and Muzi, 2016; EBRD, 2014). This requires written information
to be coded by experts, but text mining software tools can significantly reduce coding
costs. Textual data on novelty can also be used to estimate if respondents understood the
questionnaire definition of an innovation (Bloch and Bugge, 2016).
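As a simplified illustration of such coding, the Python sketch below assigns open-ended descriptions to broad innovation types using keyword lists; the keywords are placeholders, and in practice coding should be validated by experts or supported by dedicated text-mining tools.

```python
# A minimal sketch of rule-based coding of open-ended descriptions of a focal innovation
# into broad types. Keyword lists are illustrative placeholders only.

KEYWORDS = {
    "product innovation": ["product", "good", "service launched", "new model"],
    "business process innovation": ["process", "logistics", "production method",
                                    "software for operations", "workflow"],
}

def code_description(text: str) -> list:
    """Return the candidate innovation types whose keywords appear in the description."""
    lowered = text.lower()
    return [label for label, words in KEYWORDS.items()
            if any(word in lowered for word in words)] or ["unclassified"]

print(code_description("We introduced a new workflow and scheduling software for operations."))
print(code_description("Launch of an improved consumer good with recycled materials."))
```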
10.18. Subject-based innovation surveys that include an object-based module should place
such a module after all other innovation questions in order to ensure that respondents do not
confuse questions about all innovation activities with questions limited to a focal innovation.
10.25. Some of these questions could ask for data on activities before the observation
period, such as the question on calendar months or total expenditures, but this is only likely
to be relevant for major innovations.
the contribution of internal and external actors to the development of the focal
innovation, in order to identify potential success factors (subsection 10.3.4)
an outcome measure such as the innovation sales share for a focal product innovation
or cost savings from a focal business process innovation (subsection 10.3.6).
10.38. Supplementary topics for data collection using an object-based module include:
use of IP rights for the focal innovation (subsection 10.3.3)
obstacles to innovation (subsection 10.3.5)
use of government support policies (subsection 10.3.5).
References
Arora, A., W.M. Cohen and J.P. Walsh (2016), “The acquisition and commercialization of invention in
American manufacturing: Incidence and impact”, Research Policy, Vol. 45/6, pp. 1113-1128.
Arundel, A. et al. (2016), “Management and service innovations in Australian and New Zealand
universities: Preliminary report of descriptive results”, Australian Innovation Research Centre
(University of Tasmania) and LH Martin Institute (University of Melbourne).
Arundel, A., K. O’Brien and A. Torugsa (2013), “How firm managers understand innovation:
Implications for the design of innovation surveys” in Handbook of Innovation Indicators and
Measurement, Edward Elgar, Cheltenham, pp. 88-108.
Bloch, C. and M. Bugge (2016), “Between bricolage and breakthroughs – Framing the many faces of
public sector innovation”, Public Money & Management, Vol. 36/4, pp. 281-288.
Cirera, X. and S. Muzi (2016), “Measuring firm-level innovation using short questionnaires: Evidence
from an experiment”, Policy Research Working Papers, No. 7696, World Bank Group.
DeBresson, C. and B. Murray (1984), “Innovation in Canada – A retrospective survey: 1945-1978”,
Cooperative Research Unit on Science and Technology (CRUST), New Westminster.
EBRD (2014), Transition Report 2014: Innovation in Transition, European Bank for Reconstruction and
Development, London.
Kleinknecht, A. and J.O.N. Reijnen (1993), “Towards literature-based innovation output indicators”,
Structural Change and Economic Dynamics, Vol. 4/1, pp. 199-207.
O’Brien, K. et al. (2015), “New evidence on the frequency, impacts and costs of activities to develop
innovations in Australian businesses: Results from a 2015 pilot study”, report to the Commonwealth,
Department of Industry, Innovation and Science, Australian Innovation Research Centre (University
of Tasmania), Hobart, www.utas.edu.au/__data/assets/pdf_file/0009/772857/AIRC-Pilot-survey-
report-for-DIS_Dec_2015.pdf.
O’Brien, K. et al. (2014), “Lessons from high capability innovators: Results from the 2013 Tasmanian
Innovation Census”, Australian Innovation Research Centre (University of Tasmania), Hobart.
Santarelli, E. and R. Piergiovanni (1996), “Analyzing literature-based innovation output indicators: The
Italian experience”, Research Policy, Vol. 25/5, pp. 689-711.
Townsend, J. (1981), “Science innovation in Britain since 1945”, SPRU Occasional Paper Series,
No. 16, Science Policy Research Unit (SPRU), University of Sussex, Brighton.
Chapter 11. Use of innovation data for statistical indicators and analysis
This chapter provides guidance on the use of innovation data for constructing indicators
as well as statistical and econometric analysis. The chapter provides a blueprint for the
production of innovation indicators by thematic areas, drawing on the recommendations
in previous chapters. Although targeted to official organisations and other users of
innovation data, such as policy analysts and academics, the guidance in this chapter
also seeks to promote better understanding among innovation data producers about
how their data are or might be used. The chapter provides suggestions for future
experimentation and the use of innovation data in policy analysis and evaluation. The
ultimate objective is to ensure that innovation data, indicators and analysis provide
useful information for decision makers in government and industry while ensuring that
trust and confidentiality are preserved.
11.1. Introduction
11.1. Innovation data can be used to construct indicators and for multivariate analysis of
innovation behaviour and performance. Innovation indicators provide statistical information
on innovation activities, innovations, the circumstances under which innovations emerge,
and the consequences of innovations for innovative firms and for the economy. These
indicators are useful for exploratory analysis of innovation activities, for tracking innovation
performance over time and for comparing the innovation performance of countries, regions,
and industries. Multivariate analysis can identify the significance of different factors that
drive innovation decisions, outputs and outcomes. Indicators are more accessible to the
general public and to many policy makers than multivariate analysis and are often used in
media coverage of innovation issues. This can influence public and policy discussions on
innovation and create demand for additional information.
11.2. This chapter provides guidance on the production, use, and limitations of innovation
indicators, both for official organisations and for other users of innovation data, such as
policy analysts and academics who wish to better understand innovation indicators or
produce new indicators themselves. The discussion of multivariate analyses is relevant to
researchers with access to microdata on innovation and to policy analysts. The chapter also
includes suggestions for future experimentation. The ultimate objective is to ensure that
innovation data, indicators and analysis provide useful information for decision makers in
both government and industry, as discussed in Chapters 1 and 2.
11.3. Most of the discussion in this chapter focuses on data collected through innovation
surveys (see Chapter 9). However, the guidelines and suggestions for indicators and
analysis also apply to data obtained from other sources. For some topics, data from other
sources can substantially improve analysis, such as for research on the effects of innovation
activities on outcomes (see Chapter 8) or the effect of the firm’s external environment on
innovation (see Chapters 6 and 7).
11.4. Section 11.2 below introduces the concepts of statistical data and indicators relating
to business innovation, and discusses desirable properties and the main data resources available.
Section 11.3 covers methodologies for constructing innovation indicators and aggregating
them using dashboards, scoreboards and composite indexes. Section 11.4 presents a blueprint
for the production of innovation indicators by thematic areas, drawing on the recommendations
in previous chapters. Section 11.5 covers multivariate analyses of innovation data, with a
focus on the analysis of innovation outcomes and policy evaluation.
11.2.1. What are innovation indicators and what are they for?
11.5. An innovation indicator is a statistical summary measure of an innovation phenomenon
(activity, output, expenditure, etc.) observed in a population or a sample thereof for a specified
time or place. Indicators are usually corrected (or standardised) to permit comparisons
across units that differ in size or other characteristics. For example, an aggregate indicator
for national innovation expenditures as a percentage of gross domestic product (GDP)
corrects for the size of different economies (Eurostat, 2014; UNECE, 2000).
11.6. Official statistics are produced by organisations that are part of a national statistical
system (NSS) or by international organisations. An NSS produces official statistics for
government. These statistics are usually compiled within a legal framework and in accordance
with basic principles that ensure minimum professional standards, independence and objectivity.
Organisations that are part of an NSS can also publish unofficial statistics, such as the
results of experimental surveys. Statistics about innovation and related phenomena have
progressively become a core element of the NSS of many countries, even when not compiled
by national statistical organisations (NSOs).
11.7. Innovation indicators can be constructed from multiple data sources, including
some that were not explicitly designed to support the statistical measurement of innovation.
Relevant sources for constructing innovation indicators include innovation and related
surveys, administrative data, trade publications, the Internet, etc. (see Chapter 9). The use
of multiple data sources to construct innovation indicators is likely to increase in the future
due to the growing abundance of data generated or made available on line and through other
digital environments. The increasing ability to automate the collection, codification and
analysis of data is another key factor expanding the possibilities for data sourcing strategies.
11.8. Although increasingly used within companies and for other purposes, indicators of
business innovation, especially those from official sources, are usually designed to inform
policy and societal discussions, for example to monitor progress towards a related policy
target (National Research Council, 2014). Indicators themselves can also influence
business behaviour, including how managers respond to surveys. An evaluation of multiple
innovation indicators, along with other types of information, can assist users in better
understanding a wider range of innovation phenomena.
Basic principles
11.10. In line with general statistical principles (UN, 2004), business innovation statistics
must be useful and made publicly available on an impartial basis. It is recommended that
NSOs and other agencies that collect innovation data use a consistent schema for presenting
aggregated results and apply this to data obtained from business innovation surveys. The
data should be disaggregated by industry and firm size, as long as confidentiality and quality
requirements are met. These data are the basic building blocks for constructing indicators.
International comparisons
11.11. User interest in benchmarking requires internationally comparable statistics. The
adoption by statistical agencies of the concepts, classifications and methods contained in
this manual will further promote comparability. Country participation in periodic data
reporting exercises to international organisations such as Eurostat, the OECD and the
United Nations can also contribute to building comparable innovation data.
11.12. As discussed in Chapter 9, international comparability of innovation indicators
based on survey data can be reduced by differences in survey design and implementation
(Wilhelmsen, 2012). These include differences between mandatory and voluntary surveys,
survey and questionnaire design, follow-up practices, and the length of the observation period.
Innovation indicators based on other types of data sources are also subject to comparability
problems, for example in terms of coverage and reporting incentives.
11.13. Another factor affecting comparability stems from national differences in innovation
characteristics, such as the average novelty of innovations and the predominant types of
markets served by firms. These contextual differences also call for caution in interpreting
indicator data for multiple countries.
11.14. Some of the issues caused by differences in methodology or innovation characteristics
can be addressed through data analysis. For example, a country with a one-year observation
period can use panel data (if available) to estimate indicators for a three-year period.
Other research has developed “profile” indicators (see subsection 3.6.2) that can improve the
comparability of indicator data across countries.
International resources
11.16. Box 11.1 lists three sources of internationally comparable indicators on innovation
that follow, in whole or in part, Oslo Manual guidelines and are available at the time of
publishing this manual.
Box 11.1. Major resources for international innovation data using Oslo Manual guidelines
11.19. Common characteristics for aggregation include the country and region where the
firm is located and characteristics of the firm itself, such as its industry and size (using size
categories such as 10 to 49 persons employed, etc.). Aggregation of business-level data
requires an understanding of the underlying statistical data and the ability to unequivocally
assign a firm to a given category. For example, regional indicators require an ability to
assign or apportion a firm or its activities to a region. Establishment data are easily assigned
to a single region, but enterprises can be active in several regions, requiring spatial
imputation methods to divide activities between regions.
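The following sketch illustrates one possible apportionment of an enterprise's innovation activity across regions, using the regional distribution of its employees as the apportionment key; the data, variable names and choice of key are illustrative assumptions rather than recommended practice.

```python
# Minimal sketch: apportioning an enterprise's activity across regions by
# employment shares (hypothetical data and variable names).
import pandas as pd

# One row per enterprise-region pair, with the enterprise's regional employment.
regional_presence = pd.DataFrame({
    "enterprise_id": [1, 1, 2],
    "region": ["North", "South", "North"],
    "employees_in_region": [80, 20, 50],
})
enterprise = pd.DataFrame({
    "enterprise_id": [1, 2],
    "innovation_expenditure": [1000.0, 400.0],
})

merged = regional_presence.merge(enterprise, on="enterprise_id")
merged["share"] = merged["employees_in_region"] / merged.groupby(
    "enterprise_id")["employees_in_region"].transform("sum")
merged["regional_innovation_expenditure"] = (
    merged["share"] * merged["innovation_expenditure"]
)

regional = merged.groupby("region", as_index=False)[
    "regional_innovation_expenditure"].sum()
print(regional)
```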
11.20. Indicators at a low level of aggregation can provide detailed information that is of
greater value to policy or understanding than aggregated indicators alone. For example, an
indicator for the share of firms by industry with a product innovation will provide more
useful information than an indicator for all industries combined.
Table 11.2. Descriptive statistics and methods for constructing innovation indicators
Several statistical procedures, ranging from simple addition to factor analysis, can be used
to construct indicators from the underlying data (see Table 11.2).
11.22. Many indicators are calculated as averages, sums, or maximum values across a
range of variables (see Table 11.2). These methods are useful for summarising related nominal,
ordinal, or categorical variables that are commonly found in innovation surveys. For example,
a firm that reports at least one type of innovation out of a list of eight innovation types (two
products and six business processes) is defined as an innovative firm. This derived variable
can be used to construct an aggregate indicator for the average share of innovative firms by
industry. This is an example of an indicator where only one positive value out of multiple
variables is required for the indicator to be positive. The opposite is an indicator that is only
positive when a firm gives a positive response to all relevant variables.
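A minimal sketch of this derivation is given below, using hypothetical unweighted microdata and illustrative variable names; in practice, survey weights would be applied when computing aggregate shares.

```python
# Minimal sketch: deriving an "innovative firm" flag from eight nominal innovation
# variables and aggregating it into a share by industry (hypothetical microdata).
import pandas as pd

firms = pd.DataFrame({
    "industry": ["C10", "C10", "J62", "J62"],
    # 1 = innovation type reported during the observation period, 0 = not reported
    "product_good": [1, 0, 0, 1], "product_service": [0, 0, 1, 0],
    "bp_production": [0, 0, 0, 0], "bp_distribution": [0, 0, 0, 1],
    "bp_marketing": [0, 1, 0, 0], "bp_ict": [0, 0, 0, 0],
    "bp_admin": [0, 0, 0, 0], "bp_development": [0, 0, 1, 0],
})
innovation_vars = firms.columns.drop("industry")

# A firm is innovative if at least one of the eight types is reported (maximum = 1).
firms["innovative"] = firms[innovation_vars].max(axis=1)

# Aggregate indicator: share of innovative firms by industry (in %).
share_by_industry = firms.groupby("industry")["innovative"].mean() * 100
print(share_by_industry)
```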
11.23. Composite indicators are another method for reducing dimensionality. They combine
multiple indicators into a single index based on an underlying conceptual model (OECD/JRC,
2008). Composite indicators can combine indicators for the same dimension (for instance
total expenditures on different types of innovation activities), or indicators measured along
multiple dimensions (for example indicators of framework conditions, innovation investments,
innovation activities, and innovation impacts).
11.24. The number of dimensions can also be reduced through statistical methods such as
cluster analysis and principal component analysis. Several studies have applied these
techniques to microdata to identify typologies of innovation behaviour and to assess the
extent to which different types of behaviour can predict innovation outcomes (de Jong and
Marsili, 2006; Frenz and Lambert, 2012; OECD, 2013).
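The following sketch illustrates this type of dimensionality reduction on simulated microdata, combining principal component analysis with k-means clustering; the variables, number of components and number of clusters are illustrative assumptions.

```python
# Minimal sketch: principal component analysis followed by k-means clustering to
# identify a typology of innovation behaviour (simulated microdata).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
# Columns could represent, e.g., R&D engagement, design activity, training,
# external collaboration and IP use (all illustrative).
microdata = rng.random((300, 5))

# Reduce the five innovation variables to two principal components.
components = PCA(n_components=2).fit_transform(microdata)

# Group firms into three behavioural clusters in the reduced space.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(components)
print(np.bincount(clusters))   # number of firms assigned to each behavioural type
```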
Composite innovation indexes, presented in scoreboards that rank the performance of countries
or regions, were developed to address the limitations of dashboards. They are mostly produced
by consultants, research institutes, think tanks and policy institutions that lack access to
microdata, with the composite indexes constructed by aggregating existing indicators.
11.28. Compared to simple indicators used in dashboards, the construction of composite
innovation indexes requires two additional steps, illustrated in the sketch after this list:
The normalisation of multiple indicators, measured on different scales (nominal,
counts, percentages, expenditures, etc.), into a single scale. Normalisation can be
based on standard deviations, the min-max method, or other options.
The aggregation of normalised indicators into one or more composite indexes. The
aggregation can give an identical weight to all normalised indicators or use different
weights. The weighting determines the relative contribution of each indicator to the
composite index.
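The following sketch illustrates the two steps with hypothetical indicators, min-max normalisation and equal weights; neither the indicators nor the weighting scheme should be read as a recommendation.

```python
# Minimal sketch of the two steps: min-max normalisation of indicators measured
# on different scales, followed by weighted aggregation into a composite index
# (hypothetical indicators and weights).
import pandas as pd

scores = pd.DataFrame({
    "country": ["A", "B", "C"],
    "rd_intensity": [1.2, 3.0, 0.4],               # % of GDP
    "share_innovative_firms": [45.0, 60.0, 30.0],  # %
    "trademark_apps_per_capita": [120, 300, 60],   # count per million inhabitants
}).set_index("country")

# Step 1: min-max normalisation to the [0, 1] range.
normalised = (scores - scores.min()) / (scores.max() - scores.min())

# Step 2: weighted aggregation; equal weights here, but the chosen weighting
# determines each indicator's contribution to the composite index.
weights = {"rd_intensity": 1/3, "share_innovative_firms": 1/3,
           "trademark_apps_per_capita": 1/3}
composite = sum(normalised[col] * w for col, w in weights.items())
print(composite.sort_values(ascending=False))
```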
11.29. Composite indexes offer several advantages over simple indicators, but they also
pose challenges (OECD/JRC, 2008). The main advantages are a reduction in the number of
indicators and simplicity, both of which are desirable attributes that facilitate communication
with a wider user base (i.e. policy makers, media, and citizens). The disadvantages of
composite indexes are as follows:
With few exceptions, the theoretical basis for a composite index is limited. This
can result in problematic combinations of indicators, such as indicators for inputs
and outputs.
At most, only the aggregate covariance structure of the underlying indicators can be used
to build the composite index, and this information is often not used at all.
The relative importance or weighting of different indicators is often dependent on
the subjective views of those constructing the composite index. Factors that are
minor contributors to innovation can be given as much weight as major ones.
Aside from basic normalisation, structural differences between countries are seldom
taken into account when calculating composite performance indexes.
Aggregation results in a loss of detail, which can hide potential weaknesses and
increase the difficulty in identifying remedial action.
11.30. Due to these disadvantages, composite indexes need to be accompanied by
guidance on how to interpret them. Otherwise, they can mislead readers into supporting
simple solutions to complex policy issues.
11.31. The various innovation dashboards, scoreboards and composite indexes that are
currently available change frequently. Box 11.2 provides examples that have been published
on a regular basis.
11.32. The combination of a lack of innovation data for many countries, plus concerns
over the comparability of innovation survey data, has meant that many innovation rankings
rely on widely available indicators that capture only a fraction of innovation activities, such
as R&D expenditures or IP rights registrations, at the expense of other relevant dimensions.
11.35. This section provides guidelines on the types of innovation indicators that can be
produced by NSOs and other organisations with access to innovation microdata. Many of
these indicators are in widespread use and based on data collected in accordance with
previous editions of this manual. Indicators are also suggested for new types of data discussed
in Chapters 3 to 8. Other types of indicators can be constructed to respond to changes in
user needs or when new data become available.
11.36. Producers of innovation indicators can use answers to the following questions to
guide the construction and presentation of indicators:
What do users want to know and why? What are the relevant concepts?
What indicators are most suitable for representing a concept of interest?
What available data are appropriate for constructing an indicator?
What do users need to know to interpret an indicator?
11.37. The relevance of a given set of indicators depends on user needs and how the
indicators are used (OECD, 2010). Indicators are useful for identifying differences in
innovation activities across categories of interest, such as industry or firm size, or to track
performance over time. However, indicators should not be used to identify causal relationships,
such as the factors that influence innovation performance. Identifying such relationships requires analytical methods,
as described in section 11.5 below.
11.39. Table 11.4 provides a list of proposed indicators for measuring the incidence of
innovation that can be mostly produced using nominal data from innovation surveys, as
discussed in Chapter 3. These indicators describe the innovation status of firms and the
characteristics of their innovations.
Note: All indicators refer to activities within the survey observation period. Indicators for innovation rates can
also be calculated as shares of employment or turnover, for instance the share of total employees that work for
an innovative firm, or the share of total sales earned by innovative firms. Unless otherwise noted with an “*”
before a computation note, all indicators can be computed using all firms, innovation-active firms only, or
innovative firms only as the denominator. See section 3.5 for a definition of firm types.
Notes: Indicators derived from Table 4.1 refer to the survey observation period. Expenditure indicators derived
from Table 4.2 and Table 4.3 only refer to the survey reference period. Unless otherwise noted with an “*”
before a computation note, all indicators can be computed using all firms, innovation-active firms only, or
innovative firms only as the denominator. See section 3.5 for a definition of firm types.
11.41. Table 11.6 lists potential indicators of business capabilities for innovation following
Chapter 5. All indicators of innovation capability are relevant to all firms, regardless of
their innovation status. The microdata can also be used to generate synthetic indexes on the
propensity of firms to innovate.
Notes: All indicators refer to activities within the survey observation period. All indicators can be computed
using all firms, innovation-active firms only, or innovative firms only as the denominator. See section 3.5 for a
definition of firm types.
11.42. Table 11.7 provides indicators of knowledge flows for innovation, following
guidance in Chapter 6 on both inbound and outbound flows. With a few exceptions, most
of these indicators are relevant to all firms.
Note: All indicators refer to activities within the survey observation period. Indicators on the role of other
parties in the firm’s innovations are included in Table 11.4 above. Unless otherwise noted with an “*” before a
computation note, all indicators can be computed using all firms, innovation-active firms only, or innovative
firms only as the denominator. See section 3.5 for a definition of firm types.
11.43. Table 11.8 provides a list of indicators for external factors that can potentially
influence innovation, as discussed in Chapter 7. With the exception of drivers of innovation,
all of these indicators can be calculated for all firms.
Note: All indicators refer to activities within the survey observation period. Unless otherwise noted with an “*”
before a computation note, all indicators can be computed using all firms, innovation-active firms only, or
innovative firms only as the denominator. See section 3.5 for a definition of firm types.
11.44. Table 11.9 lists simple outcome (or objective) indicators, based on either nominal
or ordinal survey questions, as proposed in Chapter 8. The objectives are applicable to all
innovation-active firms, while questions on outcomes are only relevant to innovative firms.
1. These indicators can be calculated by thematic area (e.g. production efficiency, markets, environment, etc.).
Note: All indicators refer to activities within the survey observation period. Unless otherwise noted with an “*”
before a computation note, all indicators can be computed using all firms, innovation-active firms only, or
innovative firms only as the denominator. See section 3.5 for a definition of firm types.
As disclosure depends on a firm’s policies and strategy, there may be large gaps between a
firm’s scientific and technological outputs and what it decides to disclose.
11.51. Indicators of innovation intensity (summing all innovation expenditures and dividing
by total expenditures) can be calculated at the level of industry, region, and country.
Intensity indicators avoid the need to standardise by measures of firm size.
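The sketch below illustrates the calculation of such an intensity indicator at the industry level from hypothetical, unweighted microdata; variable names are illustrative assumptions.

```python
# Minimal sketch: industry-level innovation intensity, computed as the sum of
# innovation expenditures divided by the corresponding total expenditures
# (hypothetical microdata; survey weights omitted for brevity).
import pandas as pd

firms = pd.DataFrame({
    "industry": ["C10", "C10", "J62"],
    "innovation_expenditure": [50.0, 0.0, 200.0],
    "total_expenditure": [1000.0, 400.0, 900.0],
})

by_industry = firms.groupby("industry")[
    ["innovation_expenditure", "total_expenditure"]].sum()
by_industry["innovation_intensity"] = (
    by_industry["innovation_expenditure"] / by_industry["total_expenditure"]
)
print(by_industry)
```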
11.61. Policy and business decisions can benefit from a thorough understanding of the
factors that affect the performance of an innovation system. Innovation indicators provide
useful information on the current state of the system, including bottlenecks, deficiencies
and weaknesses, and can help track changes over time. However, this is insufficient:
decision makers also need to know how conditions in one part of the system influence other
parts, and how the system works to create outcomes of interest, including the effects of
policy interventions.
11.62. This section examines how innovation data can be used to evaluate the links between
innovation, capability-building activities, and outcomes of interest (Mairesse and Mohnen,
2010). Relevant research has extensively covered productivity (Hall, 2011; Harrison et al., 2014),
management (Bloom and Van Reenen, 2007), employment effects (Griffith et al., 2006),
knowledge sourcing (Laursen and Salter, 2006), profitability (Geroski, Machin and Van
Reenen, 1993), market share and market value (Blundell, Griffith and Van Reenen, 1999),
competition (Aghion et al., 2005), and policy impacts (Czarnitzki, Hanel and Rosa, 2011).
11.70. In innovation policy design, the innovation logic model as described in Figure 11.1
is a useful tool for identifying what is presumed to be necessary for the achievement of
desired outcomes. Measurement can capture evidence of events, conditions and behaviours
that can be treated as proxies of potential inputs and outputs of the innovation process.
Outcomes can be measured directly or indirectly. The evaluation of innovation policy using
innovation data is discussed below.
11.75. Other conditions can increase the difficulty of identifying causality. In research on
knowledge flows, linkages across actors and the importance of both intended and unintended
knowledge diffusion can create challenges for identifying the effect of specific knowledge
sources on outcomes. Important channels could exist for which there are no data. As noted
in Chapter 6, the analysis of knowledge flows would benefit from social network graphs of
the business enterprise to help identify the most relevant channels. A statistical implication
of highly connected innovation systems is that the observed values are not independently
distributed: competition and collaboration generate outcome dependences across firms that
affect estimation outcomes.
11.76. Furthermore, dynamic effects require time series data and an appropriate model of
evolving relationships in an innovation system, for example between inputs in a given
period (t) and outputs in later periods (t+1). In some industries, economic results are only
obtained after several years of investment in innovation. Dynamic analysis could also
require data on changes in the actors in an innovation system, for instance through mergers
and acquisitions. Business deaths can create a strong selection effect, with only surviving
businesses available for analysis.
Matching estimators
11.77. Complementing regression analysis, matching is a method that can be used for
estimating the average effect of business innovation decisions as well as policy interventions
(see subsection 11.5.3 below). Matching imposes no functional form specifications on the
data but assumes that there is a set of observed characteristics such that outcomes are
independent of the treatment conditional on those characteristics (Todd, 2010). Under this
assumption, the impact of innovation activity on an outcome of interest can be estimated
by comparing the performance of innovators with a weighted average of the performance of
non-innovators. The weights need to replicate the observable characteristics of the innovators
in the sample. Under some conditions, the weights can be derived from innovation probabilities
predicted with a discrete choice model (matching on innovation propensity scores).
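The following sketch illustrates the general logic of propensity score matching on simulated data, using a logit model for the propensity score and nearest-neighbour matching of innovators to non-innovators; it is a simplified illustration only, omitting the balancing diagnostics and standard-error adjustments required in applied work.

```python
# Minimal sketch of propensity-score matching: innovation status as the "treatment",
# a logit model for the propensity score, and nearest-neighbour matching of
# innovators to non-innovators (simulated data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 2))                                   # observed firm characteristics
treated = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0]))).astype(bool)  # innovation status
outcome = 0.5 * treated + X @ np.array([0.3, 0.2]) + rng.normal(scale=0.5, size=n)

# Step 1: propensity score from a discrete-choice (logit) model.
pscore = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: match each innovator to the non-innovator with the closest propensity score.
nn = NearestNeighbors(n_neighbors=1).fit(pscore[~treated].reshape(-1, 1))
_, idx = nn.kneighbors(pscore[treated].reshape(-1, 1))
matched_controls = outcome[~treated][idx.ravel()]

# Average treatment effect on the treated (ATT): mean gap over matched pairs.
att = outcome[treated].mean() - matched_controls.mean()
print(f"Estimated ATT: {att:.3f}")
```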
11.78. In many cases, there can be systematic differences between the outcomes of treated
and untreated groups, even after conditioning on observables, which could lead to a violation
of the identification conditions required for matching. Independence assumptions can be
more valid for changes in the variable of interest over time. When longitudinal data are
available, the “difference in differences” method can be used. An example is an analysis of
productivity growth that compares firms that introduced innovations in the reference period
with those that did not. Further bias reduction can be attained by using information on past
innovation and economic performance.
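A minimal sketch of such a difference-in-differences comparison, using a hypothetical two-period panel, is given below.

```python
# Minimal sketch of a difference-in-differences comparison using longitudinal data
# (hypothetical panel with pre- and post-period productivity).
import pandas as pd

panel = pd.DataFrame({
    "firm": [1, 2, 3, 4],
    "innovator": [True, True, False, False],
    "productivity_pre": [100.0, 90.0, 95.0, 110.0],
    "productivity_post": [115.0, 101.0, 99.0, 113.0],
})

panel["growth"] = panel["productivity_post"] - panel["productivity_pre"]

# Difference in differences: change among innovators minus change among non-innovators.
did = (panel.loc[panel["innovator"], "growth"].mean()
       - panel.loc[~panel["innovator"], "growth"].mean())
print(f"Difference-in-differences estimate: {did:.2f}")
```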
11.79. Matching estimators and related regression analysis are particularly useful for the
analysis of reduced-form causal relationship models. Reduced-form models have fewer
requirements than structural models, but are less informative in articulating the mechanisms
that underpin the relationship between different variables.
The CDM model (Crépon, Duguet and Mairesse, 1998) provides a structural framework that explains
productivity by innovation output and corrects for the selectivity and endogeneity inherent
in survey data. It includes the following sub-models (Criscuolo, 2009), illustrated in the
simplified sketch after this list:
1. Propensity among all firms to undertake innovation: This key step requires good
quality information on all firms. This requirement provides a motivation for
collecting data from all firms, regardless of their innovation status, as recommended
in Chapters 4 and 5.
2. Intensity of innovation effort among innovation-active firms: The model recognises
that there is an underlying degree of innovation effort for each firm that is only
observed among those that undertake innovation activities. Therefore, the model
controls for the selective nature of the sample.
3. Scale of innovation output: This is observed only for innovative firms. This model
uses the predicted level of innovation effort identified in model 2 and a control for
the self-selected nature of the sample.
4. Relationship between labour productivity and innovation effort: This is estimated
by incorporating information about the drivers of the innovation outcome variable
(using its predicted value) and the selective nature of the sample.
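The following deliberately simplified sketch illustrates the sequence of the four sub-models on simulated data; unlike a full CDM estimation, it omits the selection-correction terms, exclusion restrictions and standard-error adjustments needed for consistent estimates.

```python
# Deliberately simplified sketch of the four CDM sub-models on simulated data.
# It follows the sequence described above but omits the selection corrections
# (e.g. inverse Mills ratios) used in full CDM estimations.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000
size = rng.normal(size=n)                  # firm characteristic (e.g. log employment)
X = sm.add_constant(size)

# Simulated innovation propensity, effort, output and labour productivity.
innovates = (0.5 * size + rng.normal(size=n)) > 0
effort = np.where(innovates, 1 + 0.8 * size + rng.normal(size=n), np.nan)
output = np.where(innovates, 0.6 * effort + rng.normal(size=n), np.nan)
productivity = 2 + 0.4 * np.nan_to_num(output) + 0.3 * size + rng.normal(size=n)

# 1. Propensity to innovate (probit on all firms).
m1 = sm.Probit(innovates.astype(float), X).fit(disp=False)

# 2. Innovation effort among innovation-active firms only.
m2 = sm.OLS(effort[innovates], X[innovates]).fit()

# 3. Innovation output, using predicted effort for innovative firms.
effort_hat = m2.predict(X[innovates])
m3 = sm.OLS(output[innovates], sm.add_constant(effort_hat)).fit()

# 4. Labour productivity as a function of predicted innovation output.
output_hat = np.zeros(n)
output_hat[innovates] = m3.predict(sm.add_constant(effort_hat))
m4 = sm.OLS(productivity, sm.add_constant(np.column_stack([output_hat, size]))).fit()
print(m1.params, m4.params)
```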
11.81. Policy variables can be included in a CDM model, provided they display sufficient
variability in the sample and satisfy the independence assumptions (including no self-
selection bias) required for identification.
11.82. The CDM framework has been further developed to work with repeated cross-
sectional and panel data, increasing the value of consistent longitudinal data at the micro
level. Data and modelling methods require additional development before CDM and CDM-
related frameworks can fully address several questions of interest, such as the competing
roles of R&D versus non-R&D types of innovation activity, or the relative importance or
complementarity of innovation activities versus generic competence and capability development
activities. Improvements in data quality for variables on non-R&D activities and capabilities
would facilitate the use of extended CDM models.
Selection into policy support is rarely random: programme managers may have incentives to
select businesses that would have performed well even in the absence of support, and businesses
themselves have incentives to apply according to their potential to benefit from policy
support after taking into account potential costs.
11.86. The diagonal arrow in Figure 11.2 shows which empirical comparisons are possible
and why they do not necessarily represent causal effects or impacts when the treated and
non-treated groups differ in ways that are related to the outcomes (i.e. when confounding
variables are not controlled for).
Figure 11.2. The innovation policy evaluation problem in identifying causal effects
[Figure not reproduced: the outcomes of supported firms can be observed, but the counterfactual outcome of the same firms had support not been provided cannot be observed, so the true impact of treatment among supported firms is not directly observable.]
Source: Based on Rubin (1974), “Estimating causal effects of treatments in randomized and nonrandomized studies”.
11.89. Randomisation eliminates selection bias, so that both groups are comparable and
any differences between them are the result of the intervention. Randomised trials are
sometimes viewed as politically unfeasible because potential beneficiaries are excluded
from treatment, at least temporarily. However, randomisation can often be justified on the
basis of its potential for policy learning when uncertainty is largest. Furthermore, a selection
procedure is required in the presence of budgetary resource limitations that prevent all firms
from benefiting from innovation support.
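The sketch below illustrates, with simulated data, why random assignment allows a simple comparison of mean outcomes between supported and non-supported firms to serve as an estimate of the average effect of support.

```python
# Minimal sketch: with randomised assignment of support among eligible firms, a
# simple comparison of mean outcomes between supported and non-supported firms
# estimates the average treatment effect (simulated data).
import numpy as np

rng = np.random.default_rng(2)
n = 2000
baseline = rng.normal(size=n)            # unobserved firm heterogeneity
supported = rng.random(n) < 0.5          # random assignment, e.g. under budget limits
outcome = baseline + 0.3 * supported + rng.normal(scale=0.5, size=n)

effect = outcome[supported].mean() - outcome[~supported].mean()
print(f"Estimated average effect of support: {effect:.3f}")
```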
Procedures
11.93. With few exceptions, NSOs do not have a mandate to conduct policy evaluations.
However, it is widely accepted that their infrastructures can greatly facilitate such work
under conditions that do not contravene the confidentiality obligations to businesses reporting
data for statistical purposes. Evaluations are usually left to academics, researchers or
consultants with experience in causal analysis as well as the independence to make critical
comments on public policy issues. This requires providing researchers with access to microdata
under sufficiently secure conditions (see subsection 9.8.2). There have been considerable
advances in minimising the burden associated with secure access to microdata for analysis.
Of note, international organisations such as the Inter-American Development Bank have
contributed to comparative analysis by requiring the development of adequate and accessible
microdata as a condition of funding for an innovation (or related) survey.
11.94. Government agencies that commission policy evaluations using innovation and
other related survey data require basic capabilities in evaluation methodologies in order to
scrutinise and assess the methodologies used by contractors or researchers and to interpret
and communicate the results. Replicability is an important requirement for ensuring quality,
and the programming code used for statistical analysis should thus be included as one of the
evaluation’s deliverables. Linked databases that are created for publicly funded evaluation
studies should also be safely stored and made available to other researchers after a
reasonable time lapse, as long as they do not include confidential data.
11.6. Conclusions
11.102. This chapter has reviewed a number of issues relating to the use of innovation data for
constructing indicators as well as in statistical and econometric analysis. The recommendations
in this chapter are aimed not only at those producing indicators in an official capacity, but
also at other interested users of innovation data. The chapter seeks to guide the work of
those involved in the design, production and use of innovation indicators. It also contributes
to addressing a broader range of user evidence needs that cannot be met by indicators alone.
The chapter has thus described methods for analysing innovation data, with a focus on
assessing the impacts of innovation and the empirical evaluation of government innovation
policies. It is intended to guide existing data collection and analysis, as well as to encourage
future experimentation which will enhance the quality, visibility, and usefulness of data
and indicators derived from innovation surveys, a key objective of this manual.
References
Aghion, P. et al. (2005), “Competition and innovation: An inverted-U relationship”, The Quarterly
Journal of Economics, Vol. 120/2, pp. 701-728.
Arundel, A. and H. Hollanders (2008), “Innovation scoreboards: Indicators and policy use” in Innovation
Policy in Europe: Measurement and Strategy, Edward Elgar, Cheltenham, pp. 29-52.
Arundel, A. and H. Hollanders (2005), “EXIS: An Exploratory Approach to Innovation Scoreboards”,
European Trend Chart on Innovation, DG Enterprise, European Commission, Brussels,
http://digitalarchive.maastrichtuniversity.nl/fedora/get/guid:25cbd28f-efcf-4850-a43c-
ab25393fcca7/ASSET1 (accessed on 9 August 2018).
Bartelsman, E.J., E. Hagsten and M. Polder (2017), “Micro Moments Database for cross-country analysis
of ICT, innovation, and economic outcomes”, Tinbergen Institute Discussion Papers, No. 2017-
003/VI, http://dx.doi.org/10.2139/ssrn.2898860.
Bloch, C. and V. López-Bassols (2009), “Innovation indicators”, in Innovation in Firms: A
Microeconomic Perspective, OECD Publishing, Paris, https://doi.org/10.1787/9789264056213-en.
Bloom, N. and J. Van Reenen (2007), “Measuring and explaining management practices across
countries”, The Quarterly Journal of Economics, Vol. 122/4, pp. 1351-1408.
Blundell, R., R. Griffith and J. Van Reenen (1999), “Market share, market value and innovation in a
panel of British manufacturing firms”, The Review of Economic Studies, Vol. 66/3, pp. 529-554.
Crépon, B., E. Duguet and J. Mairesse (1998), “Research, innovation and productivity: An econometric
analysis at the firm level”, Economics of Innovation and New Technology, Vol. 7/2, pp. 115-158.
Crespi, G. and P. Zuñiga (2010), “Innovation and productivity: Evidence from six Latin American countries”,
IDB Working Papers, No. IDB-WP-218, Inter-American Development Bank, Washington DC.
Criscuolo, C. (2009), “Innovation and productivity: Estimating the core model across 18 countries”, in
Innovation in Firms: A Microeconomic Perspective, OECD Publishing, Paris,
https://doi.org/10.1787/9789264056213-en.
Czarnitzki, D., P. Hanel and J.M. Rosa (2011), “Evaluating the impact of R&D tax credits on innovation:
A microeconometric study on Canadian firms”, Research Policy, Vol. 40/2, pp. 217-229.
de Jong, J.P.J. and O. Marsili (2006), “The fruit flies of innovations: A taxonomy of innovative small
firms”, Research Policy, Vol. 35/2, pp. 213-229.
EC (2010), Elements for the Setting-up of Headline Indicators for Innovation in Support of the Europe
2020 Strategy, Report of the High Level Panel on the Measurement of Innovation, DG Research and
Innovation, European Commission, Brussels.
Edovald, T. and T. Firpo (2016), “Running randomised controlled trials in innovation, entrepreneurship
and growth: An introductory guide”, Innovation Growth Lab, Nesta, London,
https://media.nesta.org.uk/documents/a_guide_to_rcts_-_igl_09aKzWa.pdf (accessed on 9 August 2018).
Eurostat (2014), Glossary of Statistical Terms, http://ec.europa.eu/eurostat/statistics-
explained/index.php/Glossary:Statistical_indicator (accessed on 9 August 2018).
Frenz, M. and R. Lambert (2012), “Mixed modes of innovation: An empiric approach to capturing firms’
innovation behaviour”, OECD Science, Technology and Industry Working Papers, No. 2012/06,
OECD Publishing, Paris, https://doi.org/10.1787/5k8x6l0bp3bp-en.
Galindo-Rueda, F. and V. Millot (2015), “Measuring design and its role in innovation”, OECD Science,
Technology and Industry Working Papers, No. 2015/01, OECD Publishing, Paris,
https://doi.org/10.1787/5js7p6lj6zq6-en.
Gault, F. (ed.) (2013), Handbook of Innovation Indicators and Measurement, Edward Elgar, Cheltenham.
Geroski, P., S. Machin and J. Van Reenen (1993), “The profitability of innovating firms”, The RAND
Journal of Economics, Vol. 24/2, pp. 198-211.
Griffith, R. et al. (2006), “Innovation and productivity across four European countries”, Oxford Review
of Economic Policy, Vol. 22/4, pp. 483-498.
Griliches, Z. (1990), “Patent statistics as economic indicators: A survey”, Journal of Economic
Literature, Vol. 28/4, pp. 1661-1707.
Hall, B.H. (2011), “Innovation and productivity”, NBER Working Papers, No. 17178, National Bureau of
Economic Research (NBER), Cambridge, MA, www.nber.org/papers/w17178.
Harrison, R. et al. (2014), “Does innovation stimulate employment? A firm-level analysis using
comparable micro-data from four European countries”, International Journal of Industrial
Organization, Vol. 35, pp. 29-43.
Hill, C.T. (2013), “US innovation strategy and policy: An indicators perspective”, in Handbook of
Innovation Indicators and Measurement, Edward Elgar, Cheltenham, pp. 333-346.
Hollanders, H. and N. Janz (2013), “Scoreboards and indicator reports”, in Handbook of Innovation
Indicators and Measurement, Edward Elgar, Cheltenham, pp. 279-297.
Laursen, K. and A. Salter (2006), “Open for innovation: The role of openness in explaining innovation
performance among UK manufacturing firms”, Strategic Management Journal, Vol. 27/2, pp. 131-150.
Lööf, H., J. Mairesse and P. Mohnen (2016), “CDM 20 years after”, CESIS Electronic Working Papers,
No. 442, Centre of Excellence for Science and Innovation Studies (CESIS), KTH Royal Institute of
Technology, Stockholm, https://static.sys.kth.se/itm/wp/cesis/cesiswp442.pdf.
Mairesse, J. and P. Mohnen (2010), “Using innovation surveys for econometric analysis”, in Handbook
of the Economics of Innovation, Vol. 2, Elsevier.
McLaughlin, J.A. and G.B. Jordan (1999), “Logic models: A tool for telling your program’s performance
story”, Evaluation and Program Planning, Vol. 22/1, pp. 65-72.
National Research Council (2014), Capturing Change in Science, Technology, and Innovation: Improving
Indicators to Inform Policy, National Academies Press, Washington, DC, https://doi.org/10.17226/18606.
Nesta (2016), “Experimental innovation and growth policy: Why do we need it?”, Innovation Growth Lab,
Nesta, London,
https://media.nesta.org.uk/documents/experimental_innovation_and_growth_policy_why_do_we_nee
d_it.pdf (accessed on 9 August 2018).
OECD (2015), Frascati Manual 2015: Guidelines for Collecting and Reporting Data on Research and
Experimental Development, The Measurement of Scientific, Technological and Innovation Activities,
OECD Publishing, Paris, http://oe.cd/frascati.
OECD (2013), “Knowledge networks and markets”, OECD Science, Technology and Industry Policy
Papers, No. 7, OECD Publishing, Paris, https://doi.org/10.1787/5k44wzw9q5zv-en.
OECD (2010), Measuring Innovation: A New Perspective, OECD Publishing, Paris,
https://doi.org/10.1787/9789264059474-en.
OECD (2009a), OECD Patent Statistics Manual, OECD Publishing, Paris,
https://doi.org/10.1787/9789264056442-en.
OECD (2009b), Innovation in Firms: A Microeconomic Perspective, OECD Publishing, Paris,
https://doi.org/10.1787/9789264056213-en.
OECD/JRC (2008), Handbook on Constructing Composite Indicators - Methodology and User Guide,
OECD Publishing, Paris, www.oecd.org/sdd/42495745.pdf.
OECD and SCImago Research Group (CSIC) (2016), Compendium of Bibliometric Science Indicators,
OECD, Paris, www.oecd.org/sti/inno/Bibliometrics-Compendium.pdf.
Rubin, D.B. (1974), “Estimating causal effects of treatments in randomized and nonrandomized studies”,
Journal of Educational Psychology, Vol. 66/5, pp. 688-701.
Tether, B. (2001), “Identifying innovation, innovators, and innovation behaviours: A critical assessment
of the Community Innovation Survey (CIS)”, CRIC Discussion Papers, No. 48, Centre for Research
on Innovation and Competition, University of Manchester, Manchester.
Todd, P.E. (2010), “Matching estimators”, in Microeconometrics, The New Palgrave Economics
Collection, Palgrave Macmillan, London, pp. 108-121.
UN (2004), Implementation of the Fundamental Principles of Official Statistics; Report of the Secretary-
General, E/CN.3/2004/21, UN Statistical Commission, New York,
https://unstats.un.org/unsd/statcom/doc04/2004-21e.pdf.
UNECE (2000), “Terminology on statistical metadata”, Statistical Standards and Studies, No. 53,
Conference of European Statisticians, UN Statistical Commission and UN Economic Commission for
Europe, Geneva, www.unece.org/fileadmin/DAM/stats/publications/53metadaterminology.pdf.
Wilhelmsen, L. (2012), “A question of context: Assessing the impact of a separate innovation survey and
of response rate on the measurement of innovation activity in Norway”, Documents, No. 51/2012, Statistics
Norway, Oslo, www.ssb.no/a/english/publikasjoner/pdf/doc_201251_en/doc_201251_en.pdf.
Glossary of terms
Activities relating to the acquisition or lease of tangible assets This includes the purchase, lease, or acquisition through a takeover of buildings, machinery, equipment, or the in-house production of such goods for own-use. The acquisition or lease of tangible assets can be innovation activities in their own right, such as when a firm
purchases equipment with significantly different characteristics than the existing equipment
that it uses for its business processes. The acquisition of tangible capital goods is generally
not an innovation activity if it is for replacement or capital-widening investments that are
unchanged, or with only minor changes compared to the firm’s existing stock of tangible
capital. The lease or rental of tangible assets is an innovation activity if these assets are
required for the development of product or business process innovations.
Administrative data Administrative data is the set of units and data derived from an administrative source such
as business registers or tax files.
Affiliated firm Affiliated firms include holding, subsidiary or associated companies located in the domestic
country or abroad. See also Enterprise group.
Artificial intelligence (AI) Artificial intelligence (AI) describes the activity and outcome of developing computer
systems that mimic human thought processes, reasoning and behaviour.
Asset An asset is a store of value that represents a benefit or series of benefits accruing to the
economic owner by holding or using the asset over a period of time. Both financial and
non-financial assets are relevant to innovation. Fixed assets are the result of production
activities and are used repeatedly or continuously in production processes for more than
one year.
Big data Data that are too large or complex to be handled by conventional data processing tools and
techniques.
Brand equity activities See Marketing and brand equity activities.
Business capabilities Business capabilities include the knowledge, competencies and resources that a firm
accumulates over time and draws upon in the pursuit of its objectives. The skills and
abilities of a firm's workforce are a particularly critical part of innovation-relevant business
capabilities.
Business enterprise sector The Business enterprise sector comprises:
• All resident corporations, including legally incorporated enterprises, regardless of the
residence of their shareholders. This includes quasi-corporations, i.e. units capable of
generating a profit or other financial gain for their owners, recognised by law as separate
legal entities from their owners, and set up for the purpose of engaging in market
production at prices that are economically significant.
• The unincorporated branches of non-resident enterprises deemed to be resident and part
of this sector because they are engaged in production on the economic territory on a long-
term basis.
• All resident non-profit institutions that are market producers of goods or services or serve
businesses.
Business innovation A business innovation is a new or improved product or business process (or combination
thereof) that differs significantly from the firm's previous products or business processes
and that has been introduced on the market or brought into use by the firm.
Business innovation activities See Innovation activities (business).
Business model innovation Business model innovation relates to changes in a firm’s core business processes as well
as in the main products that it sells, currently or in the future.
Business process innovation A business process innovation is a new or improved business process for one or more business functions that differs significantly from the firm’s previous business processes and
that has been brought into use by the firm. The characteristics of an improved business
function include greater efficacy, resource efficiency, reliability and resilience, affordability,
and convenience and usability for those involved in the business process, either external or
internal to the firm. Business process innovations are implemented when they are brought
into use by the firm in its internal or outward-facing operations. Business process
innovations include the following functional categories:
• production of goods and services
• distribution and logistics
• marketing and sales
• information and communication systems
• administration and management
• product and business process development.
Business strategy A business strategy includes the formulation of goals and the identification of policies to
reach these goals. Strategic goals cover the intended outcomes over the mid- and long-
term (excluding the goal of profitability, which is shared by all firms). Strategic policies or
plans include how a firm creates a competitive advantage or a “unique selling proposition”.
Capital expenditures Capital expenditures are the annual gross amount paid for the acquisition of fixed assets
and the costs of internally developing fixed assets. These include gross expenditures on
land and buildings, machinery, instruments, transport equipment and other equipment, as
well as intellectual property products. See also Current expenditures.
CDM model The CDM model (based on the initials of the three authors’ names, Crépon, Duguet and
Mairesse) is an econometric model widely used in empirical research on innovation and
productivity. The CDM framework provides a structural model that explains productivity by
innovation output and corrects for the selectivity and endogeneity inherent in survey data.
Cloud computing Cloud systems and applications are digital storage and computing resources remotely
available on-demand via the Internet.
Cognitive testing Cognitive testing is a methodology developed by psychologists and survey researchers
which collects verbal information on survey responses. It is used to evaluate the ability of a
question (or group of questions) to measure constructs as intended by the researcher and
if respondents can provide reasonably accurate responses.
Co-innovation Co-innovation, or “coupled open innovation”, occurs when collaboration between two or
more partners results in an innovation.
Collaboration Collaboration requires co-ordinated activity across different parties to address a jointly
defined problem, with all partners contributing. Collaboration requires the explicit definition
of common objectives and it may include agreement over the distribution of inputs, risks
and potential benefits. Collaboration can create new knowledge, but it does not need to
result in an innovation. See also Co-operation.
Community Innovation Survey (CIS) The Community Innovation Survey (CIS) is a harmonised survey of innovation in enterprises co-ordinated by Eurostat and currently carried out every two years in EU
member states and several European Statistical System (ESS) member countries.
Composite indicator A composite indicator compiles multiple indicators into a single index based on an
underlying conceptual model in a manner which reflects the dimensions or structure of the
phenomena being measured. See also Indicator.
Computer-assisted personal interviewing (CAPI) Computer-assisted personal interviewing (CAPI) is a method of data collection in which an interviewer uses a computer to display questions and accept responses during a face-to-face interview.
Computer-assisted telephone interviewing (CATI) Computer-assisted telephone interviewing (CATI) is a method of data collection by telephone with questions displayed on a computer and responses entered directly into a computer.
Co-operation Co-operation occurs when two or more participants agree to take responsibility for a task or
series of tasks and information is shared between the parties to facilitate the agreement.
See also Collaboration.
Corporations The System of National Accounts (SNA) Corporations sector consists of corporations that
are principally engaged in the production of market goods and services. This manual
adopts the convention of referring to this sector as the Business enterprise sector, in line
with the terminology adopted in the OECD’s Frascati Manual.
Counterfactual In impact evaluation, the counterfactual refers to what would have happened to potential
beneficiaries in the absence of an intervention. Impacts can thus be estimated as the
difference between potential outcomes under observed and unobserved counterfactual
treatments. An example is estimating the causal impacts of a policy “treatment” to support
innovation activities. The researcher cannot directly observe the counterfactuals: for
supported firms, what would have been their performance if they had not been supported,
and similarly with non-supported firms.
Cross-sectional survey A cross-sectional survey collects data to make inferences about a population of interest (or
subset) at a specific point in time.
Current expenditures Current expenditures include all costs for labour, materials, services and other inputs to the
production process that are consumed within less than one year, and the costs for leasing
fixed assets. See also Capital expenditures.
Design Design is defined as an innovation activity aimed at planning and designing procedures,
technical specifications and other user and functional characteristics for new products and
business processes. Design includes a wide range of activities to develop a new or modified
function, form or appearance for goods, services or processes, including business processes
to be used by the firm itself. Most design (and other creative work) activities are innovation
activities, with the exception of minor design changes that do not meet the requirements for
an innovation, such as producing an existing product in a new colour. Design capabilities
include the following: (i) engineering design; (ii) product design; and (iii) design thinking.
Design Ladder The Design Ladder is a tool developed by the Danish Design Centre for illustrating and
rating a company's use of design. The Design Ladder is based on the hypothesis that there
is a positive link between higher earnings, placing a greater emphasis on design methods
in the early stages of development and giving design a more strategic position in the
company’s overall business strategy. The four steps are: (i) non-design; (ii) design as form-
giving; (iii) design as process; and (iv) design as strategy.
Design thinking Design thinking is a systematic methodology for the design process that uses design
methods to identify needs, define problems, generate ideas, develop prototypes and test
solutions. It can be used for the design of systems, goods, and services. Collecting data on
design thinking is of value to policy because the methodology can support the innovation
activities of both service and manufacturing firms, resulting in improvements to
competitiveness and economic outcomes.
Diffusion (innovation) Innovation diffusion encompasses both the process by which ideas underpinning product
and business process innovations spread (innovation knowledge diffusion), and the adoption
of such products, or business processes by other firms (innovation output diffusion).
Digital-based innovations Digital-based innovations include product or business process innovations that contain
ICTs, as well as innovations that rely to a significant degree on information and
communication technologies (ICTs) for their development or implementation.
Digital platforms Digital platforms are information and communication technology-enabled mechanisms that
connect and integrate producers and users in online environments. They often form an
ecosystem in which goods and services are requested, developed and sold, and data
generated and exchanged.
Digitalisation Digitalisation is the application or increase in use of digital technologies by an organisation,
industry, country, etc. It refers to how digitisation affects the economy or society. See also
Digitisation.
Digitisation Digitisation is the conversion of an analogue signal conveying information (e.g. sound,
image, printed text) to binary bits. See also Digitalisation.
Dynamic managerial capabilities Dynamic managerial capabilities refer to the ability of managers to organise an effective response to internal and external challenges. Dynamic managerial capabilities include the
following three main dimensions: (i) managerial cognition; (ii) managerial social capital; and
(iii) managerial human capital.
Employee training activities Employee training includes all activities that are paid for or subsidised by the firm to
develop knowledge and skills required for the specific trade, occupation or vocation of a
firm’s employees. Employee training includes on-the-job training and job-related education
at training and educational institutions. Examples of training as an innovation activity
include training personnel to use innovations, such as new software logistical systems or
new equipment; and training relevant to the implementation of an innovation, such as
instructing marketing personnel or customers on the features of a product innovation.
Engineering, design and other creative work activities Engineering, design and other creative work cover experimental and creative activities that may be closely related to research and experimental development (R&D), but do not meet all of the five R&D criteria. These include follow-up or auxiliary activities of R&D, or
activities that are performed independently from R&D. Engineering involves production and
quality control procedures, methods and standards. Design includes a wide range of
activities to develop a new or modified function, form or appearance for goods, services or
processes, including business processes to be used by the firm itself. Other creative work
includes all activities for gaining new knowledge or applying knowledge in a novel way that
do not meet the specific novelty and uncertainty (also relating to non-obviousness)
requirements for R&D. Most design and other creative work are innovation activities, with
the exception of minor design changes that do not meet the requirements for an innovation.
Many engineering activities are not innovation activities, such as day-to-day production and
quality control procedures for existing processes.
Enterprise An enterprise is the smallest combination of legal units with autonomy in respect of
financial and investment decision-making, as well as authority and responsibility for
allocating resources for the production of goods and services. The term enterprise may
refer to a corporation, a quasi-corporation, a non-profit institution or an unincorporated
enterprise. It is used throughout this manual to refer specifically to business enterprises.
See also Business enterprise sector.
Enterprise group A set of enterprises controlled by the group head, which is a parent legal unit that is not
controlled either directly or indirectly by any other legal unit. See also Enterprise.
Establishment An establishment is an enterprise, or part of an enterprise, that is situated in a single
location and in which only a single productive activity is carried out or in which the principal
productive activity accounts for most of the value added. See also Enterprise.
Extramural innovation expenditure Expenditures for innovation activities carried out by third parties on behalf of the firm, including extramural R&D expenditure.
Extramural R&D Extramural research and experimental development (R&D) is any R&D performed outside
of the statistical unit about which information is being reported. Extramural R&D is
considered an innovation activity alongside intramural R&D. See also Intramural R&D.
Firm Informal term used in this manual to refer to business enterprises. See also Enterprise.
Filters Filters and skip instructions direct respondents to different parts of a questionnaire,
depending on their answers to the filter questions. Filters can be helpful for reducing
response burden, particularly in complex questionnaires, but they can also encourage
satisficing behaviour.
Focal innovation Data collection using the object-based method can focus on a firm’s single, “focal”
innovation. This is usually defined as the firm’s most important innovation in terms of some
measurable criteria (e.g. the innovation’s actual or expected contribution to the firm’s
performance, the one with the highest innovation expenditures, the one with the greatest
contribution to sales), but can also be the firm’s most recent innovation.
Follow-on activities Follow-on activities are efforts undertaken by firms for users of an innovation after its
implementation, but within the observation period. These include marketing activities,
employee training, and after-sales services. These follow-on activities can be critical for the
success of an innovation, but they are not included in the definition of an innovation activity.
Framework conditions Broader set of contextual factors related to the external environment that facilitate or hinder
business activities in a given country. These usually include the regulatory environment,
taxation, competition, product and labour markets, institutions, human capital,
infrastructure, standards, etc.
Full-time equivalent (FTE) Full-time equivalent (FTE) is the ratio of working hours actually spent on an activity during a
specific reference period (usually a calendar year) divided by the total number of hours
conventionally worked in the same period.
General government (sector) General government consists of institutional units that, in addition to meeting their political and regulatory responsibilities, redistribute income and wealth and produce services and
goods for individual or collective consumption, mainly on a non-market basis. The General
government sector also includes non-profit institutions controlled by the government.
Global value chains Pattern of organisation of production involving international trade and investment flows
whereby the different stages of the production process are located across different
countries.
Goods Goods are physical, produced objects for which a demand exists, over which ownership
rights can be established and whose ownership can be transferred from one institutional
unit to another by engaging in transactions on markets. See also Products.
Government support programmes Government support programmes represent direct or indirect transfers of resources to firms. Support can be of a financial nature or may be provided in kind. This support may
come directly from government authorities or indirectly, for example when consumers are
subsidised to purchase specific products. Innovation-related activities and outcomes are
common targets of government support.
Households Households are institutional units consisting of one or more individuals. In the System of
National Accounts, individuals must belong to only one household. The principal functions
of households are to supply labour, to undertake final consumption and, as entrepreneurs,
to produce market goods and services.
Implementation Implementation refers to the point in time when a significantly different new or improved
product or business process is first made available for use. In the case of product
innovation, this refers to its market introduction, while for business process innovations it
relates to their first use within the firm.
Imputation Imputation is a post-survey adjustment method for dealing with item non-response. A
replacement value is assigned for specific data items where the response is missing or
unusable. Various methods can be used for imputation including mean value, hot-/cold-
deck, nearest-neighbour techniques and regression. See also Item non-response.
Informal sector (or economy) The informal sector is broadly characterised as consisting of units engaged in the production of goods or services with the primary objective of generating employment and
incomes to the persons concerned. These units typically operate at a low level of
organisation, with little or no division between labour and capital as factors of production
and on a small scale.
Indicator An indicator is a variable that purports to represent the performance of different units along
some dimension. Its value is generated through a process that simplifies raw data about
complex phenomena in order to compare similar units of analysis across time or location.
See also Innovation indicator.
Industry An industry consists of a group of establishments engaged in the same, or similar, kinds of
activity. See also ISIC.
Innovation An innovation is a new or improved product or process (or combination thereof) that differs
significantly from the unit’s previous products or processes and that has been made
available to potential users (product) or brought into use by the unit (process).
Innovation-active firm An innovation-active firm is engaged at some time during the observation period in one or
more activities to develop or implement new or improved products or business processes
for an intended use. Both innovative and non-innovative firms can be innovation-active
during an observation period. See also Innovation status.
Innovation activities Institutional units can undertake a series of actions with the intention to develop
innovations. This can require dedicated resources and engagement in specific activities,
including policies, processes and procedures. See also Innovation activities (business).
Innovation activities (business) Business innovation activities include all developmental, financial and commercial activities undertaken by a firm that are intended to result in an innovation for the firm. They include:
• research and experimental development (R&D) activities
• engineering, design and other creative work activities
• marketing and brand equity activities
• intellectual property (IP) related activities
• employee training activities
• software development and database activities
• activities related to the acquisition or lease of tangible assets
• innovation management activities.
Innovation activities can result in an innovation, be ongoing, postponed or abandoned.
Innovation barriers and drivers Internal or external factors that hamper or incentivise business innovation efforts. Depending on the context, an external factor can act as a driver of innovation or as a
barrier to innovation.
Innovation expenditure (business) Economic cost of innovation activities undertaken by a firm or group of firms. Expenditure can be intramural (activities carried out in-house) or extramural (carried out by third parties
on behalf of the firm). See also Innovation activities (business).
Innovation indicator An innovation indicator is a statistical summary measure of an innovation phenomenon
(activity, output, expenditure, etc.) observed in a population or a sample thereof for a
specified time or place. Indicators are usually corrected (or standardised) to permit
comparisons across units that differ in size or other characteristics. See also Indicator.
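As a hedged illustration (hypothetical figures and variable names), a size-corrected indicator can be obtained by relating an innovation measure to a measure of unit size, such as employment:

    # Hypothetical size-standardised indicator: innovation expenditure per employee
    innovation_expenditure = 450_000     # assumed figure, in national currency units
    number_of_employees = 150            # assumed firm size
    expenditure_per_employee = innovation_expenditure / number_of_employees
    print(expenditure_per_employee)      # -> 3000.0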
Innovation management Innovation management includes all systematic activities to plan, govern and control
internal and external resources for innovation. This includes how resources for innovation
are allocated, the organisation of responsibilities and decision-making among employees,
the management of collaboration with external partners, the integration of external inputs
into a firm’s innovation activities, and activities to monitor the results of innovation and to
support learning from experience.
Innovation objectives Innovation objectives consist of a firm’s identifiable goals that reflect its motives and
underlying strategies with respect to its innovation efforts. The objectives can concern the
characteristics of the innovation itself, such as its specifications, or its market and
economic objectives.
Innovation outcomes Innovation outcomes are the observed effects of innovations, including the extent to which
a firm’s objectives are met and the broader effects of innovation on other organisations, the
economy, society, and the environment. These can also include unexpected effects that
were not identified among the firm’s initial objectives (e.g. spillovers and other
externalities).
Innovation project An innovation project is a set of activities that are organised and managed for a specific
purpose and with their own objectives, resources and expected outcomes. Information on
innovation projects can complement other qualitative and quantitative data on innovation activities.
Innovation sales share The innovation sales share indicator is the share of a firm’s total sales in the reference year
that is due to product innovations. It is an indicator of the economic significance of product
innovations at the level of the innovative firm.
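A minimal sketch of the calculation, using hypothetical figures:

    # Innovation sales share: sales due to product innovations divided by
    # total sales in the reference year (hypothetical figures)
    sales_from_product_innovations = 2_500_000
    total_sales = 10_000_000
    innovation_sales_share = sales_from_product_innovations / total_sales
    print(f"{innovation_sales_share:.0%}")   # -> 25%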
Innovation status The innovation status of a firm is defined on the basis of its engagement in innovation
activities and its introduction of one or more innovations over the observation period of a
data collection exercise. See also Innovative firm and Innovation-active firm.
Innovative firm An innovative firm reports one or more innovations within the observation period. This
applies equally to a firm that is individually or jointly responsible for an innovation. The term
“innovative” is only used in the manual in this context. See also Innovation status.
Institutional unit An institutional unit is defined in the System of National Accounts as “an economic entity
that is capable, in its own right, of owning assets, incurring liabilities, and engaging in
economic activities and transactions with other entities.” Institutional units can undertake a
series of actions with the intention to develop innovations.
Intangible assets See Knowledge-based capital.
Intellectual property (IP) Intellectual property (IP) refers to creations of the mind such as inventions; literary and
artistic works; and symbols, names and images used in commerce. See also Intellectual
property rights.
Intellectual property (IP) Intellectual property (IP) related activities include the protection or exploitation of
related activities knowledge, often created through research and experimental development (R&D), software
development, and engineering, design and other creative work. IP activities include all
administrative and legal work to apply for, register, document, manage, trade, license-out,
market and enforce a firm’s own intellectual property rights (IPRs), all activities to acquire
IPRs from other organisations such as through licensing-in or the outright purchase of IP,
and activities to sell IP to third parties. IP activities for ideas, inventions and new or
improved products or business processes developed during the observation period are
innovation activities. See also Intellectual property and Intellectual property rights.
Intellectual property Intellectual property products (IPPs) are the result of research, development, investigation
products (IPPs) or innovation leading to knowledge that the developers can market or use to their own
benefit in production because use of the knowledge is restricted by means of legal or other
protection. They include:
• research and experimental development (R&D)
• mineral exploration and evaluation
• computer software and databases
• entertainment, literary and artistic originals
• other intellectual property products.
Intellectual property rights Intellectual property rights (IPRs) are legal rights over intellectual property. See also
(IPRs) Intellectual property.
International Standard The International Standard Industrial Classification of All Economic Activities (ISIC) consists
Industrial Classification of of a coherent and consistent classification structure of economic activities based on a set of
All Economic Activities internationally agreed concepts, definitions, principles and classification rules. It provides a
(ISIC) comprehensive framework within which economic data can be collected and reported in a
format that is designed for purposes of economic analysis, decision-taking and policy-
making. The scope of ISIC in general covers productive activities, i.e. economic activities
within the production boundary of the System of National Accounts (SNA). The
classification is used to classify statistical units, such as establishments or enterprises,
according to the economic activity in which they mainly engage. The most recent version is
ISIC Revision 4.
Intramural R&D Intramural research and experimental development (R&D) expenditures are all current
expenditures plus gross fixed capital expenditures for R&D performed within a statistical
unit. Intramural R&D is an innovation activity alongside extramural R&D. See also
Extramural R&D.
ISO 50500 International Organization for Standardization (ISO) standards on innovation management
fundamentals and vocabulary developed by the ISO/TC 279 Technical Committee. The
definitions of innovation and innovation management in the Oslo Manual are aligned with
those used by ISO.
Item non-response When a sampled unit responds to a questionnaire incompletely.
Kind-of-activity unit (KAU) A kind-of-activity unit (KAU) is an enterprise, or a part of an enterprise, that engages in only
one kind of productive activity or in which the principal productive activity accounts for most
of the value added. See also Enterprise.
Knowledge Knowledge refers to an understanding of information and the ability to use information for
different purposes.
Knowledge-based capital Knowledge-based capital (KBC) comprises intangible assets that create future benefits.
(KBC) It covers software and databases, intellectual property products, and economic
competencies (including brand equity, firm-specific human capital and organisational capital).
Software, databases and intellectual property products are currently recognised by the
System of National Accounts as produced assets. See also Intellectual property products.
Knowledge-capturing Knowledge-capturing products concern the provision, storage, communication and
products dissemination of information, advice and entertainment in such a way that the consuming
unit can access the knowledge repeatedly.
Knowledge flows Knowledge flows refer to inbound and outbound exchanges of knowledge, through market
transactions as well as non-market means. Knowledge flows encompass both deliberate
and accidental transmission of knowledge.
Knowledge management Knowledge management is the co-ordination of all activities by an organisation to direct,
control, capture, use, and share knowledge within and outside its boundaries.
Knowledge network A knowledge network consists of the knowledge-based interactions or linkages shared by a
group of firms and possibly other actors. It includes knowledge elements, repositories and
agents that search for, transmit and create knowledge. These are interconnected by
relationships that enable, shape or constrain the acquisition, transfer and creation of
knowledge. Knowledge networks contain two main components: the type of knowledge and
the actors that receive, supply or exchange knowledge.
Logic model A logic model is a tool used by funders, managers, and evaluators of programmes to
represent the sequence of impacts and evaluate the effectiveness of a programme.
Longitudinal survey A longitudinal survey collects data on the same units (panel) over multiple time periods.
Management capabilities Management capabilities can influence a firm’s ability to undertake innovation activities,
introduce innovations and generate innovation outcomes. For the purpose of innovation,
two key areas are considered: (i) a firm’s competitive strategy; and (ii) the organisational
and managerial capabilities used to implement this strategy. See also Managerial
capabilities.
Managerial capabilities Managerial capabilities include all of a firm’s internal abilities, capacities, and competences
that can be used to mobilise, command and exploit resources in order to meet the firm’s
strategic goals. These capabilities typically relate to managing people; intangible, physical
and financial capital; and knowledge. Capabilities concern both internal processes and
external relations. Managerial capabilities are a specific subset of organisational capabilities
that relate to the ability of managers to organise change. See also Management capabilities.
Marketing and brand equity Marketing and brand equity activities include market research and market testing; methods
activities for pricing, product placement and product promotion; product advertising; the promotion of
products at trade fairs or exhibitions; and the development of marketing strategies.
Marketing activities for existing products are only innovation activities if the marketing
practice is itself an innovation.
Marketing innovation Type of innovation used in the previous edition of this Manual. These innovations are now
mostly subsumed under business process innovation, except for innovations in product design,
which are included under product innovation.
Metadata Metadata are data that define and describe other data. This includes information
on the procedure used to collect data, sampling methods, procedures for dealing with non-
response, and quality indicators.
Moments (statistical) Statistical indicators providing information on the shape of the distribution of a dataset.
Examples include the mean and the variance.
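As an illustrative sketch with hypothetical survey values, the first two moments can be computed as follows:

    # Mean (first moment) and variance (second central moment), hypothetical values
    import statistics

    values = [4.0, 7.5, 3.2, 9.1, 5.8]
    print(statistics.mean(values))         # mean
    print(statistics.pvariance(values))    # population variance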
Multinational enterprise A multinational enterprise (MNE) refers to a parent company resident in a country and its
(MNE) majority-owned affiliates located abroad, which are labelled controlled affiliates abroad.
MNEs are also referred to as global enterprise groups. See also Enterprise group.
New-to-firm (NTF) Lowest threshold for innovation in terms of novelty, referring to a first-time use or
innovation implementation by a firm. A new-to-firm (NTF) innovation can also be new-to-market (NTM)
(or world), but not vice versa. If an innovation is NTF but not NTM (e.g. when adopting
existing products or business processes – as long as they differ significantly from what the
firm offered or used previously – with little or no modification), it is referred to as “NTF
only”. See also New-to-market innovation.
New-to-market (NTM) An innovation by a firm that has not been available in the market(s) served by the firm.
innovation New-to-market innovations represent a higher threshold for innovation than new-to-firm
innovations in terms of novelty. See also New-to-firm innovation.
Nominal variable Categorical variable with no intrinsic ordering. See also Ordinal variable.
Non-innovative firm A non-innovative firm is one that does not report an innovation within the observation
period. A non-innovative firm can still be innovation-active if it had one or more ongoing,
suspended, abandoned or completed innovation activities that did not result in an
innovation during the observation period. See also Innovative firm.
Non-profit institution (NPI) Non-profit institutions (NPIs) are legal or social entities created for the purpose of
producing goods and services, whose status does not permit them to be a source of
income, profit or other financial gain for the units that establish, control or finance them.
They can be engaged in market or non-market production.
Non-profit institutions Non-profit institutions serving households (NPISHs) are legal entities that are principally
serving households engaged in the production of non-market services for households or the community at large
(NPISHs) and whose main resource is from voluntary contributions. If controlled by government, they
are part of the General government sector. If controlled by firms, they are assigned to the
Business enterprise sector. See also Non-profit institution.
Non-response survey A non-response survey is a survey that aims to identify likely significant differences between
responding and non-responding units and to obtain information on why non-responding
units did not answer. See also Unit non-response.
Novelty Novelty is a dimension used to assess whether a product or business process is
“significantly different” from previous ones and, if so, whether it can be considered an innovation.
The first and most widely used approach to determine the novelty of a firm’s innovations is
to compare these with the state of the art in the market or industry in which the firm
operates. The second option is to assess the potential for an innovation to transform (or
create) a market, which can provide a possible indicator for the incidence of radical or
disruptive innovation. A final option for product innovations is to measure the observed
change in sales over the observation period or to ask directly about the expected future
effect of these innovations on competitiveness.
Object-based approach The object-based approach to innovation measurement collects data on a single, focal innovation
(the object of the study). See also Subject-based approach.
Observation period The observation period is the length of time covered by a question in a survey. See also
Reference period.
Open innovation Open innovation denotes the flow of innovation-relevant knowledge across the boundaries
of individual organisations. This notion of “openness” does not necessarily imply that
knowledge is free of charge or exempt from use restrictions.
Ordinal variable An ordinal variable is a categorical variable for which the values are ordered. See also
Nominal variable.
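The distinction can be illustrated with a hypothetical example: a firm's sector of activity has no intrinsic ordering (nominal), whereas an importance rating scale does (ordinal):

    # Nominal vs. ordinal variables (hypothetical survey categories)
    nominal_sector = {"manufacturing", "services", "construction"}   # unordered categories
    ordinal_importance = ["not important", "slightly important",
                          "important", "very important"]             # ordered categories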
Organisational capabilities See Managerial capabilities.
Organisational innovation Type of innovation used in the previous edition of this Manual; it is now subsumed under
business process innovation.
Panel A panel is the subset of units that are repeatedly sampled over two or more iterations of a
longitudinal survey. See also Longitudinal survey.
Paradata Paradata refers to the data about the process by which surveys are filled in. Paradata can
be analysed to identify best practices that minimise undesirable respondent behaviour such
as premature termination or satisficing, in order to improve future iterations of the survey
instrument.
Product A product is a good or service (including knowledge-capturing products as well as
combinations of goods and services) that results from a process of production. See also
Goods and Services.
Product innovation A product innovation is a new or improved good or service that differs significantly from the
firm’s previous goods or services and that has been introduced on the market. Product
innovations must provide significant improvements to one or more characteristics or
performance specifications. See also Product.
Production processes Production processes (or production activities) are defined in the System of National
Accounts as all activities, under the control of an institutional unit, that use inputs of labour,
capital, goods and services to produce outputs of goods and services. These activities are
the focus of innovation analysis.
Public sector The public sector includes all institutions controlled by government, including public
business enterprises. The latter should not be confused with publicly listed (and traded)
corporations. The public sector is a broader concept than the General government sector.
Public infrastructure Public infrastructure can be defined by government ownership or by government control
through direct regulation. The technical and economic characteristics of public
infrastructure strongly influence the functional capabilities, development and performance
of an economy, hence the inclusion of public infrastructure as an external factor that can
influence innovation. Public infrastructure includes areas such as transport, energy,
information and communication technology, waste management, water supply, knowledge
infrastructure, and health.
Public research institution Although there is no formal definition of a public research institution (PRI) (sometimes also
(PRI) referred to as a public research organisation), it must meet two criteria: (i) it performs
research and experimental development as a primary economic activity (research); and (ii)
it is controlled by government. Private non-profit research institutes are therefore excluded.
Reference period The reference period is the final year of the overall survey observation period and is used
as the effective observation period for collecting interval level data items, such as
expenditures or the number of employed persons. See also Observation period.
Regulation Regulation refers to the implementation of rules by public authorities and governmental
bodies to influence market activity and the behaviour of private actors in the economy. A
wide variety of regulations can affect the innovation activities of firms, industries and
economies.
Reporting unit The reporting unit refers to the “level” within the business from which the required data are
collected. The reporting unit may differ from the required statistical unit.
Research and experimental Research and experimental development (R&D) comprise creative and systematic work
development (R&D) undertaken in order to increase the stock of knowledge – including knowledge of
humankind, culture and society – and to devise new applications of available knowledge.
Sampling fraction The sampling fraction is the ratio of the sample size to the population size.
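As a minimal sketch with hypothetical figures:

    # Sampling fraction: ratio of sample size to population size (hypothetical figures)
    sample_size = 400
    population_size = 12_000
    sampling_fraction = sample_size / population_size
    print(round(sampling_fraction, 3))   # -> 0.033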
Satisficing Satisficing refers to respondent behaviours to reduce the time and effort required to
complete an online or printed questionnaire. These include abandoning the survey before it
is completed (premature termination), skipping questions, non-differentiation (when
respondents give the identical response category to all sub-questions in a question, for
example answering “slightly important” to all sub-questions in a grid question), and
speeding through the questionnaire.
Services Services are the result of a production activity that changes the conditions of the
consuming units, or facilitates the exchange of products or financial assets. They cannot be
traded separately from their production. Services can also include some knowledge-
capturing products. See also Products.
Social innovation Innovations defined by their (social) objectives to improve the welfare of individuals or
communities.
Software development and Software development and database activities include:
database activities
• The in-house development and purchase of computer software, programme descriptions
and supporting materials for both systems and applications software (including standard
software packages, customised software solutions and software embedded in products or
equipment).
• The acquisition, in-house development and analysis of computer databases and other
computerised information, including the collection and analysis of data in proprietary
computer databases and data obtained from publicly available reports or the Internet.
Technical expertise Technical expertise consists of a firm’s knowledge of and ability to use technology. This
knowledge is derived from the skills and qualifications of its employees, including its
engineering and technical workforce, accumulated experience in using the technology, the
use of capital goods containing the technology, and control over the relevant intellectual
property. See also Technology.
Technology Technology refers to the state of knowledge on how to convert resources into outputs. This
includes the practical use and application to business processes or products of technical
methods, systems, devices, skills and practices.
Training See Employee training activities.
Unit non-response When a sampled unit that is contacted does not respond to a survey.
User innovation User innovation refers to activities whereby consumers or end-users modify a firm’s
products, with or without the firm’s consent, or when users develop entirely new products.
Value creation The existence of opportunity costs implies the likely intention to pursue some form of value
creation (or value preservation) by the actors responsible for an innovation activity. Value is
therefore an implicit goal of innovation, but cannot be guaranteed on an ex ante basis. The
realisation of the value of an innovation is uncertain and can only be fully assessed
sometime after its implementation. The value of an innovation can also evolve over time
and provide different types of benefits to different stakeholders.
ISBN 978-92-64-30455-0