Technology Research Explained
March 2007
1 Introduction
1.1 Background
Research method is a relevant topic for anybody performing research of some kind, from basic research to applied, industrial research. There exists an abundance of literature prescribing how the researcher should work, what methods the researcher should use, and so on. Explanations of research method typically take as their starting point the kinds of research performed within the natural or social sciences, conveying guidelines to researchers of these disciplines.
What, then, about engineering? May the research method established within the classical sciences be adopted by ship designers, bridge constructors, computer scientists and other technologists? Or is there an essential difference when it comes to approaches and problem formulation? By shedding light on these questions, the authors wish to contribute to a better understood and more commonly accepted research method for technologists.
Honouring the ancient philosophers and scientists and their successors, we here use the term classical research method for what is above called the scientific method. The term classical not only refers to the Greek and Roman cultures of Antiquity, but also means “standard”, “traditional” or “authoritative” [1]. Classical research is research focusing on the world around us, seeking new knowledge about nature, space, the human body, society, etc. The researcher asks: What is the real world like?
Technology research takes another starting point. The word technology stems from the Greek word techne, meaning art or skill [1]. Technology may be defined as “the study, mastery and utilization of manufacturing methods and industrial arts” [3]. Objects manufactured by human beings will in the following be called artefacts. The technology researcher tries to create artefacts that are better than those that already exist, e.g. quicker, more useful, or more dependable [4]. On this basis we define technology research like this:
The technology researcher seeks principles and ideas for the manufacture of new and better artefacts, which may be materials, machines, medicines, oil production methods, computer programs, etc. The basic question is: How to produce a new or improved artefact?
As we have now drawn the most important divide between classical research and technology research, it is relevant to ask how these two variants relate to basic research and to applied research. By basic research we mean research that seeks new knowledge for its own sake, without regard to any particular application, while applied research seeks knowledge in order to solve practical problems.
While classical research is heavily rooted in basic research, technology research is more often used within applied research (Figure 1). Note that some classical research is in fact applied (A-C in the figure); for instance, thermodynamics was developed in order to increase the efficiency of steam engines. On the other hand, technology research may also reside within the area of basic research (B-T in the figure). As an example, imagine a beautiful sculpture whose creation was made possible by technology research on materials. Another example is Rubik's cube, a puzzle that became highly popular among both children and adults in the 1980s. In these examples, technology research is a means to create artefacts that do not solve practical problems, but which are beautiful or amusing.
A theory can never be finally proven; it lives until it is refuted. An example of a refuted theory is Ptolemy's doctrine of the Earth as the centre of the universe [5]. (Claudius Ptolemy, geographer, astronomer and astrologer, probably lived in Alexandria about AD 90–168; his geocentric model had massive support among both Europeans and Arabs for more than a thousand years.)
Investigations may thus support (confirm, strengthen) a hypothesis, but cannot provide an ultimate proof. On the other hand, would it be possible to find an ultimate proof to the contrary? That question leads us to falsifiability, an important principle in hypothesis evaluation. A hypothesis is falsifiable if it is possible to reject (falsify) it through observations. Otherwise, the hypothesis is unfit for use within research. This principle implies that a hypothesis (or theory) that has been confirmed by many observations must still be rejected if a counter-example is found. Karl Popper (Austrian philosopher of science, 1902–1994) was an advocate of falsifiability. According to Popper, evaluation should not attempt to verify a theory or hypothesis, but rather strive to falsify it. Only theories passing such tests should have the right to survive. Furthermore, Popper asserted that evaluation cannot give ultimate answers, and that hypotheses consequently always remain assumptions. Researchers should therefore not try to avoid criticism, but accept that their hypotheses may be rejected [6, pages 98–104]. The standpoints of Popper have been much referred to and debated. One of the objections points to the possible situation of a sound hypothesis resting on a fallible theory. In such a case, the fallible theory would contribute to the formulation of erroneous predictions, which in turn would imply an incorrect rejection of the hypothesis [2, pages 30–33].
Semmelweis' prediction was: if childbed fever is caused by infectious material from the autopsy room, the women will not fall ill if the birth assistants wash their hands with chlorinated lime in advance. This prediction turned out to be true, and Semmelweis' hypothesis was confirmed. (Another part of this story was the disapproval of Semmelweis' explanation by the medical establishment of the time: firstly, his hypothesis contradicted the prevailing view of the causes of disease; secondly, physicians refused to accept that they had contributed to the deaths of post-natal women.)
1. Problem analysis – The researcher identifies a need for new or better theory. This need has
arisen due to lack of theory, or because the researcher has found a deviation between present
theory and the real world.
2. Innovation – This is the creative part of research. In this step, new questions are created, and
good ideas are hatched. Sometimes, the innovation is moderate, but still useful, e.g. when
existing theory is applied to another domain. Or, the innovation may be on the cutting edge, as
when Einstein elaborated his theory of relativity. The researcher works according to the
overall hypothesis that the new explanation agrees with reality.
3. Evaluation – The researcher makes investigations to find out whether the new tentative
explanation (the hypothesis) may be true. Taking the hypothesis as a basis, the researcher
formulates predictions about the real world and then checks whether these predictions come
true. If the observations agree with the predictions, the researcher can argue that the new
explanation agrees with reality. Repeated confirmations of the hypothesis may cause its
“promotion” to theory.
Sometimes it turns out that observations reject rather than confirm the hypotheses. Far from being in vain, such investigations may also yield new knowledge, or encourage new iterations of the research cycle: problem analysis – innovation – evaluation.
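To make the iterative character of the cycle concrete, the following Python sketch runs the three phases in a loop over a deliberately trivial toy problem. All data, names and candidate hypotheses here are hypothetical illustrations, not part of the method itself:

# A minimal sketch of the research cycle over toy data. The "real world" is a
# small list of observations; innovation proposes a tentative explanation,
# and evaluation checks the prediction derived from it.

real_world = [2, 4, 6, 8, 10]   # observations in need of an explanation

def innovate(iteration):
    # Innovation: propose the tentative explanation "every value is a multiple of k".
    return iteration + 1         # candidate k grows with each cycle

def evaluate(k):
    # Evaluation: confront the prediction with the observations.
    return all(value % k == 0 for value in real_world)

for iteration in range(10):      # repeated cycles of the three phases
    k = innovate(iteration)
    if k > 1 and evaluate(k):
        print(f"Hypothesis survives: every observed value is a multiple of {k}")
        break
    # A rejected hypothesis is not wasted; it motivates the next iteration.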
The researcher starts out by collecting requirements for the artefact. For example, the artefact may be required to tolerate a certain pressure, yield more dependable analysis results, or manage a complex problem within a certain time limit. Such requirements are stated by existing users (in case the artefact needs improvement) or by new or potential users (in case this type of artefact does not yet exist). In addition, the researcher collects requirements and viewpoints from other stakeholders, such as those who are going to maintain the artefact or make money on it.
In order to evaluate the overall hypothesis, the researcher has to formulate sub-hypotheses about the properties of the artefact, for example that the artefact is easy to learn to use. However, not even such hypotheses can be tested straight away. Therefore, the researcher uses predictions, which are statements about the artefact's behaviour under certain circumstances. Predictions may be derived from the need and from the posed requirements. It is essential to formulate the predictions in such a way that they are falsifiable (refer to the discussion at the end of section 2.2). Consider two predictions, P1 and P2, where P1 claims that the artefact requires “a short training time”, while P2 states a concrete, measurable limit on the training time. P1 cannot be falsified straight away, because one first needs an agreement on what “a short training time” is. On the other hand, it is much simpler to find out whether P2 is false.
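To make the contrast concrete, the following Python sketch shows how a measurable prediction like P2 can be confronted with observations and thereby falsified, while a vague claim like P1 cannot. The two-hour limit and the measured training times are assumed example values, not taken from the text:

# A falsifiable prediction ("the median training time is at most 2 hours")
# can be rejected by measurements; "the training time is short" cannot.

from statistics import median

def p2_holds(training_times_hours, limit_hours=2.0):
    # A False result falsifies the prediction; the limit is an assumed example.
    return median(training_times_hours) <= limit_hours

# Hypothetical observations from ten test users (hours of training needed).
observations = [1.5, 2.5, 1.0, 3.0, 2.0, 1.8, 2.2, 1.2, 2.8, 1.6]
print("P2 confirmed" if p2_holds(observations) else "P2 falsified")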
Predictions form the starting point of evaluation (hypothesis testing). To the researcher, evaluation
may be just as challenging as the manufacturing of the artefact. Chapter 6 gives an overview of
evaluation strategies.
Technology research does not always produce artefacts that are complete from a user's point of view. It is common to make a so-called functional prototype, which must be able to undergo the necessary evaluation. If the prototype looks promising, it can later be developed into a complete, saleable product. Such finalization is typically done by people other than researchers.
[Figure: The main questions of technology research. Innovation answers “How to make an artefact that satisfies the need?” and results in an artefact; evaluation answers “How to show that the artefact satisfies the need?” and results in an argumentation.]
1. Problem analysis – The researcher maps a potential need for a new or improved artefact by
interacting with possible users and other stakeholders.
2. Innovation – The researcher tries to construct an artefact that satisfies the potential need. The
overall hypothesis is that the artefact satisfies this need.
3. Evaluation – Based on the potential need, the researcher formulates predictions about the
artefact and checks whether these predictions come true. If the results are positive, the
researcher may argue that the artefact satisfies the need.
Just as in classical research, positive evaluation results confirm the hypothesis, but they do not prove anything. Negative or unsatisfactory evaluation results certainly impair the hypothesis, but nevertheless stimulate new iterations of the research cycle: problem analysis – innovation – evaluation.
Let us consider an example: a large company needs to rebuild its computerized system for materiel procurement. It turns out that no standardized system is able to satisfy the needs of the users in this company. Nor do other companies of a similar type (known to “our” company) possess such a system. Therefore, it is necessary to build important parts of the procurement system from scratch. The company's IT professionals co-operate with colleagues in order to specify the system to be made. After several cycles of programming, testing and new requirements, the system is ready for use. It turns out that the majority of users are satisfied with the result, because they have got simpler work routines, a better overview of the procurements, and more time to spend on tasks that were formerly put aside.
In the above example, the result was a success in that the artefact, the new procurement system, satisfied the need that had been identified in the company. Let us now try to answer the three questions that were asked initially:
1. Does the artefact represent new knowledge? As far as we know, the new procurement system may be unique in the sense of “one of a kind”. That is, however, not important. The essential questions are (a) whether any of the properties of this system (e.g. its design or architecture) represents new insight, and (b) whether this insight may become important for others who are to make similar systems. This introduces the next question, that of relevance.
2. Is this new knowledge of interest to others? The new procurement system as such may have
no interest beyond the company that owns it. All the same, the system or some of its
components may build on principles that are new and transferable to other settings. In that
case, one can argue that the new knowledge is relevant.
3. Are the results and the new knowledge documented in a way that enables re-examination? Success stories need documentation in order to be trustworthy. Research requires documentation; the work shall be so well described that others may repeat and verify it. Any issue that might cast doubt on the knowledge and the results must be accounted for and discussed.
Results satisfying the three criteria above represent research and should as such be published. The purpose of publication is to make the results known among other researchers and among the general public. In this way, the results can be debated and criticized, and may contribute to further research and/or development. Research dissemination is therefore of great importance to the research community and to society at large.
• The medicine shall be more effective than other medicines for the same illness.
• The medicine shall give fewer side effects than comparable medicines.
These requirements are concretized and transformed into a more detailed recipe for the new
medicine. After some time, the new medicine is tried on volunteers, and their reactions to the
medicine. After some time, the new medicine is tried on volunteers, and their reactions to the medicine are carefully registered. What is tested in this case is thus the impact of a new artefact on the real world, i.e., on humans with a certain illness. The result of the test is compared to the need that formed the starting point: Is the new medicine more effective, and does it give fewer side effects? Thus, we conclude that this is technology research, even when humans or other living beings are used in the evaluation. (We assume that the researcher measures the results in a proper way in order to avoid bias. Within medicine, it is common practice to use double-blind experiments with control groups: neither the patient nor his/her physician knows whether the remedy given is actually the medicine or an ineffective substitute.)
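As an illustration of the double-blind principle, here is a minimal Python sketch in which volunteers are randomly assigned the medicine or a placebo behind coded labels; all names, codes and the seed are hypothetical:

import random

def assign_double_blind(volunteers, seed=42):
    # Returns a blinded assignment plus a key table that stays sealed until
    # the trial ends, so neither patients nor physicians know the groups.
    rng = random.Random(seed)
    assignment = {}   # volunteer -> coded remedy label (visible to everybody)
    key_table = {}    # code -> "medicine" or "placebo" (kept sealed)
    for i, person in enumerate(volunteers):
        code = f"remedy-{i:03d}"
        assignment[person] = code
        key_table[code] = rng.choice(["medicine", "placebo"])
    return assignment, key_table

blinded, sealed_key = assign_double_blind(["anna", "bjorn", "clara", "dag"])
print(blinded)  # the key table is opened first when the results are analysed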
This apparent divergence need not represent any conflict between the two parties. On the contrary,
it is assumed that the researcher’s view and the stakeholder’s view contribute to a common goal: a
new or improved artefact that benefits the stakeholder.
Table 1: The main elements of classical research and technology research

                                   Classical research                Technology research
Solution should be compared to …   Relevant part of the real world   Relevant need
Overall hypothesis                 The new explanations agree        The new artefact satisfies
                                   with reality                      the need
The starting point of either variant is a need. In the classical variant there is a need for a new
theory, while in the technology variant there is a need for a new artefact. The solution sought in
classical research is new explanations qualifying for new theory. The solution sought in
technology research is the new (or improved) artefact. To check whether the solution comes up to
expectations, one has to compare it to something:
• In the classical variant, one invents new explanations and compares these to a relevant part
of the real world. The overall hypothesis is that the new explanations agree with reality.
• In the technology variant, one invents a new artefact and compares it to the potential need.
The overall hypothesis is that the new artefact satisfies the need.
In both variants, the overall hypothesis is of the form “B solves the problem A”. A is the need for new theory, or the need for a new artefact. B is a new explanation, or a new artefact, which satisfies the identified need. We have already mentioned that hypotheses are evaluated by means of predictions. How to evaluate predictions is the subject of chapter 6.
5 Action Research
5.1 Introduction
In the literature, action research is described as research and/or development directed towards the improvement of processes or systems within organizations. The goal is to reduce or eliminate organizational problems by improving the organization. The action researcher brings change into the organization by intervening in it and then observing the effects of the changes. Action research differs from other kinds of research in that the researcher and the researcher's activities are included in the research object. In other words, the researcher and the object under study are not clearly separated.
Action research originated within social psychology in the middle of the twentieth century (see for instance [7], [8]). It has since been used mainly within social research and medicine. Over time, several variants of action research have developed, among them action learning, participative observation, and clinical fieldwork [9]. Action research has been criticized for producing much action and little research [10]. Even though the name “action research” contains the term “research”, many of its activities come close to development, and the work of action researchers often borders on consultancy. Therefore, action research is not necessarily research in the sense that we have defined it (section 3.4). Action research may rightly be called research only when it provides new knowledge that is of interest to others, and that is documented in a way that enables re-examination.
In the following, we shall go through action research as defined by Davison et al. [11] and Baskerville et al. [9], and then examine the relation between action research and technology research.
It is important for the participants to have a common understanding of what is going to be done, and to be aware of the advantages and disadvantages for the organization in which the method will be used. The co-operation should therefore be laid down in a written agreement. Besides containing the client's consent, the agreement should, inter alia, make clear the interests of both the researcher and the client in the work to be done.
H1: Action A will make the staff more stable.
P1: Next year, there will be a 50% reduction in the number of employees leaving their positions in order to join other enterprises.
Predictions within action research are usually of the form: in situation S and under circumstances F, G and H, the actions A, B and C are expected to result in X, Y and Z. According to Davison et al. [11], such predictions form the theory in canonical action research, as opposed to action learning, which lacks theory.
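Such schematic predictions can be recorded as a small data structure before the actions are performed and checked afterwards. The following Python sketch uses hypothetical field names and restates P1 above as the example:

from dataclasses import dataclass, field

@dataclass
class ActionPrediction:
    # "In situation S, under circumstances F, G and H, the actions A, B and C
    # are expected to result in X, Y and Z."
    situation: str
    circumstances: list = field(default_factory=list)
    actions: list = field(default_factory=list)
    expected_results: list = field(default_factory=list)

p1 = ActionPrediction(
    situation="unstable staff in the enterprise",
    circumstances=["otherwise unchanged labour market next year"],
    actions=["perform action A"],
    expected_results=["50% reduction in employees leaving for other enterprises"],
)
print(p1)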
The actions may certainly affect the individuals of the organization, e.g. change their roles and
responsibilities, or require the employees to develop new skills. The structures and systems of the
organization may be altered as well. Therefore, a thorough assessment should be carried out
before the situation is altered, and also after the change has been completed.
The action researcher and the persons involved have to go through the results systematically and critically in order to find out what they have learned. The first question to ask is whether the actions performed had the desired effect. If so, it is necessary to find out whether the original problem has been solved. If it turns out that the problem has been solved, they have to find out whether the solution was brought about by the performed actions or was due to other causes. Another kind of learning concerns action research as a framework: to what degree it proved useful for this type of problem, and how it should be adjusted in the light of this particular experience.
[Figure 5: The five phases of action research (1: Diagnose, 2: Plan actions, 3: Implement actions, 4: Evaluate effect, 5: Describe learning) compared to the three main phases of classical research and technology research: diagnosis corresponds to problem analysis, action planning to innovation, and performing actions, evaluating the effect and describing the learning together correspond to evaluation.]
Figure 6 depicts action research in the three phases we already know. The problem analysis
identifies the need for improvement in the organization, resulting in a description of this need.
The innovation phase results in an action plan. The evaluation results in an argument for the
validity of the overall hypothesis, which is that the actions imply the desired improvement in the
organization.
[Figure 6: Action research depicted in the three main phases. Problem analysis answers “What kind of improvement does the organisation need?” and results in a description of the need; innovation answers “How to satisfy the need for improvement?” and results in an action plan.]
Table 2 compares the main elements of technology research with the main elements of canonical action research. The starting point of either is a
need. In technology research the need is for a new artefact, while action research addresses an improvement need in an organization. The solution in technology research is a new artefact satisfying the need; in action research, the solution is an improved organization that satisfies the need. We notice that, in both variants, the overall hypothesis is of the form “B solves the problem A”: B is either a new artefact or an improved organization, satisfying the need that has been identified.
Table 2: The main elements of technology research and of (canonical) action research

                                   Technology research              Action research
Solution should be compared to …   Relevant need                    Relevant improvement need
Overall hypothesis                 The new artefact satisfies       The actions result in an improved
                                   the need                         organization satisfying the need
The action plan, which represents the solution within action research, often aims at introducing or changing a production method, an accounting system, a customer relations system, the personnel management, the organization chart or anything else influencing the organization's efficiency, profitability or the like. The intention is to improve the organization. To achieve this goal, the action researcher first intervenes in the organization's activities, e.g. by introducing a new method, and afterwards examines both his/her own behaviour and its effects.
Action research poses restrictions on the choice of evaluation strategies. Chapter 6 is about evaluation in general, and section 7.4 gives a special account of evaluation strategies within action research.
6 Evaluation
• laboratory experiment – giving the researcher a large degree of control and the possibility
to isolate the variables to be examined;
• experimental simulation – laboratory test simulating a relevant part of the real world;
• field experiment – experiment carried out in a natural environment, but in which the
researcher intervenes and manipulates a certain factor;
• field study – direct observation of “natural” systems, with little or no interference from the
researcher;
• computer simulation – operating on a model of a given system;
• non-empirical evidence – argumentation based on logical reasoning;
• survey – collection of information from a broad and carefully selected group of informants;
and
• qualitative interview – collection of information from a few selected individuals. The
answers are more precise than those of a survey, but cannot be generalized to the same
degree.
Figure 7 shows the eight strategies in a circle, divided into four groups. In the following, we shall take a closer look at the properties of these strategies and discuss which factors should determine the choice of strategy.
[Figure 7: The eight evaluation strategies arranged in a circle divided into four quadrants (I–IV), with the properties precision, realism and generality placed far apart around the circle. Laboratory experiment and experimental simulation lie nearest precision, field experiment and field study nearest realism, and survey and non-empirical evidence nearest generality, with qualitative interview and computer simulation in between.]
Obviously, the best choice of strategy would be one scoring high on generality, precision and realism all at once. That choice is, however, impossible in practice. The figure depicts this fact by placing the three properties far apart on the circle. We notice that laboratory experiments score highest on precision, and that field studies have the greatest realism. The greatest generality is found in surveys and non-empirical evidence. The solution must therefore be to choose several strategies that complement each other. When choosing strategies, the researcher has to decide among other things:
• Is the strategy feasible? Time and cost are two important constraints when it comes to selecting an evaluation strategy. In addition, there is the availability of the individuals who are supposed to participate in the evaluation. An experiment requires thorough planning and usually involves many people, and is therefore a costly strategy. The other extreme is computer simulation: involving no human subjects, this strategy may be cheap and quick to carry out, where possible and relevant.
• How to ensure that a measurement really measures the property it is supposed to measure? The important thing is to isolate the property to be measured, and then account for all other factors that might influence the result. This topic is discussed in both textbooks and papers (e.g. [13], [14]).
• What is needed to falsify the prediction? Evaluation is worth nothing if a positive result can be given in advance, i.e. if falsification is impossible. Therefore, it is important to choose strategies that may, eventually, cause the prediction to be rejected, even if rejection would imply that the artefact is a failure.
Evaluation strategies may be regarded as tools by which the researcher can examine whether the predictions are true. These tools give various possibilities, but also constraints. It is, for example, not feasible to test a system's functions by means of a qualitative interview. Hence, a dependency exists: when the researcher has chosen his/her tools (strategies), the investigations (predictions) have also been chosen, at least to a certain degree. That is because the chosen evaluation strategies must have the potential to falsify the predictions. If not, the researcher either has to reformulate the predictions or choose other evaluation strategies.
7 Evaluation in Practice
The following sub-sections present evaluation in two practical cases, the first one comprising several different evaluation strategies (section 7.2), and the second one concerning evaluation within action research (section 7.4).
• a survey – in which the questions are formulated in advance, facilitating comparison of the
answers. Surveys do not require much time of each informant and may therefore be
distributed to many persons;
• qualitative interviews – which are much less rigid and yield more detailed and precise information than a survey can provide. Performing qualitative interviews requires relatively much time and should therefore be limited to a few selected individuals.
Such collection of information about other control rooms may serve as an evaluation of those control rooms, or rather, of the hypotheses underlying their design. At the same time, this information provides important input to the next research cycle about control rooms. It is precisely here, in the transition to the next cycle, that our example comes in.
Let us now imagine that the relevant information has been collected, and that the new control room has been planned. One may save time and money by first building an artificial control room and experimenting with it until everybody involved is content. Then one may go ahead and build the real control room. In the artificial control room, IT systems are used to simulate reality. Two types of simulation may be relevant:
• experimental simulation – simulating the reactor and the processes around it. The operators
of the artificial control room interact through IT systems with a reactor simulator instead of
a real reactor. This is a technique often used to optimize control rooms with regard to
potential accident scenarios;
• computer simulation – simulating the operators in addition to the reactor and its related processes. Simulating the operators means translating human actions into actions performed by a computer. The person responsible for the simulation composes several action sequences in advance, each sequence simulating one type of human behaviour.
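As an illustration of the latter strategy, the following Python sketch replays two pre-composed action sequences, each simulating one type of operator behaviour, against a deliberately simplistic toy reactor model; all names and numbers are assumptions made for the sake of the example:

def run_sequence(actions, temperature=300.0):
    # Apply a pre-composed operator action sequence to a toy reactor model.
    effects = {"raise_rods": +25.0, "lower_rods": -25.0, "wait": +5.0}
    for action in actions:
        temperature += effects[action]
    return temperature

# Two hypothetical behaviour types, composed in advance.
cautious = ["lower_rods", "wait", "wait", "lower_rods"]
aggressive = ["raise_rods", "raise_rods", "wait"]

for name, sequence in [("cautious", cautious), ("aggressive", aggressive)]:
    print(name, "-> final temperature:", run_sequence(sequence))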
Efficient Reactor
The fuel rods in the reactor have to be constructed in such a way that they give optimal effect. New types of fuel rods are tested in a laboratory before they are put into operation. Laboratory experiments give the researcher the opportunity to test one variable at a time while keeping the other variables constant.
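The following Python sketch illustrates this principle of varying one variable at a time; the factors, levels and the response function are hypothetical stand-ins for real laboratory measurements:

def response(enrichment, rod_spacing_cm, coolant_flow):
    # Toy stand-in for a measured laboratory response; not a physical model.
    return 10 * enrichment - (rod_spacing_cm - 2.0) ** 2 + 0.5 * coolant_flow

baseline = {"enrichment": 3.0, "rod_spacing_cm": 2.0, "coolant_flow": 1.0}

# Vary one factor at a time, keeping the other variables at their baseline.
for factor, levels in [("enrichment", [2.5, 3.0, 3.5]),
                       ("rod_spacing_cm", [1.5, 2.0, 2.5])]:
    for level in levels:
        trial = dict(baseline, **{factor: level})
        print(f"{factor}={level}: response={response(**trial):.2f}")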
A research team has developed a new method for risk analysis. This method includes a new graphical language for expressing assets, threats, vulnerabilities, probabilities, etc. During the project, the research team performed a first iteration resulting in the first version of the method. In this iteration, the team carried out simple field experiments or experimental simulations in order to evaluate the method. This approach was not action research, but ordinary technology research. Later on, the method has been further developed and evaluated in several large companies. These new iterations have been carried out as action research. The researcher's intervention has been to introduce the new risk analysis method in an organization, adapting it to new requirements and local needs. These new requirements and needs have often been expressed during the risk analysis meetings. Thus, the researcher's capability to improvise has had a great impact on the result, together with the other participants' goodwill and receptiveness to new approaches. A survey has been central in the evaluation, querying both the appropriateness of the method and the behaviour of the researcher during the risk analysis meetings. Afterwards, the researcher has critically gone through the results of the survey together with his/her own experiences from the meeting. The result of each iteration has been a report suggesting improvements to the method, both regarding the development of the language and regarding the practical use of the method in risk analysis meetings.
8 Conclusion
In classical research, technology research and action research alike, the starting point is an overall hypothesis of the form “B solves the problem A”. In classical research, A is a need for new theory; in technology research, A is a need for a new artefact; and in action research, A is a need to improve an organization's processes or systems. In all three variants, the overall hypothesis has to be specialized into more concrete hypotheses, which in turn are used as the basis for predictions. A prediction is a statement about what will happen (under given circumstances) if the hypothesis is true. The researcher tests the predictions by means of various evaluation strategies, which should be combined in order to yield credible results.
Thus, classical research, technology research and action research are closely related. All three variants are performed in iterations of the three main phases: problem analysis, innovation and evaluation.
Action research may be understood as a special case of technology research, in which the artefact
is an organization, and in which the researcher forms part of the research object.
The main message of this report is that technology research does follow a principled method, and that this method has many points of resemblance with that of classical research. Furthermore, the authors wish to inspire technology researchers to use this method more deliberately, in the belief that the method makes technology research more efficient. This is because the method reminds the researcher of what to do, when to do it, and why.
However, things do not always proceed smoothly. Premises may fail, hypotheses may have to be rejected along the way, and conditions that nobody had thought of may be discovered. In such situations, the work may seem to have come to a dead end. But in moments like that, the research method may come to one's rescue: to unravel the tangle, the researcher should start a new iteration by reformulating the problem and asking what the actual need really is.
9 Acknowledgements
The authors of this report have benefited greatly from the kind support and input of our colleagues Randi Eidsmo Reinertsen and Erik Wold. Moreover, students at SINTEF's internal course on technology research have provided much valuable feedback on early versions of the report. Thanks to all!
Some aspects of this work have been funded by the project MODELWARE (FP6-IST 511731).
MODELWARE was a project co-funded by the European Commission under the "Information
Society Technologies" Sixth Framework Programme (2002–2006). Information included in this
document reflects only the authors' views. The European Community is not liable for any use that
may be made of the information contained herein.
References