Open Access (CC BY 4.0). Published by Oldenbourg Wissenschaftsverlag, December 6, 2023

The Method Radar: a way to organize methods for technology development with participation in mind

  • Marc Schmidt, Stefan Bente, Bruno Baruque Zanón and Ana María Lara Palma
From the journal i-com

Abstract

Identifying appropriate methods for any process, such as uncovering the needs of juveniles in social welfare or designing interactive systems, requires intensive research and generally a categorization system that brings methods into a systematic order. These taxonomy systems strongly influence later method usage and steer the thinking of researchers and practitioners alike in a specific direction from the start. Making participation visible in such taxonomy systems therefore directly affects later method usage and makes participation more visible and easier to apply. This article presents the Method Radar, a visual categorization of methods that uses the ladder of participation to make participation visible right at the beginning of any method selection. The Method Radar builds on the radar representation established in the technology sector, which allows a multi-dimensional classification. In addition, an implementation and a systematic process for categorizing methods are presented. The approach can be used for any form of method categorization in which participation is supposed to be considered.

1 Introduction

Methods are a crucial part of any process, whether for technology development or something else. They are used across domains by researchers as well as practitioners. Today there is such a vast number of methods that the question is often not how to find a method to tackle a problem, but how to decide on the right one from a mountain of methods, as symbolized in Figure 1.

Figure 1: Symbolic picture of stimulus overload from sheer amount of methods [1].

Categorization systems address this problem by putting methods on a specific topic into a structured form that simplifies decision-making, often supported by a visual representation. Such a categorization system must always be based on aspects that determine how methods should be classified, forcing the user to think about these aspects first. From this it can be concluded that these categorization systems are not neutral, but help to shape the choice of method. Allport [2] already made clear that categorical thinking is an inevitable tendency of the human mind and therefore a core part of our thinking process.

This paper presents a categorization via a Method Radar, a representation originally used for ranking technology maturity, that takes participation as its leading metric and thus directly invites users to consider participation in their planning. It makes participation visible at an early stage instead of only evaluating it at the end of a process (in contrast to the approaches mentioned in Section 2). In a first step, the scientific state of structuring participation in levels is presented, and a Method Radar is developed based on the monitoring radar described by Schuh and Klappert [3], an established model for ranking technologies according to their maturity (see [4, 5] or [6]). A concrete implementation of such a radar representation is Thoughtworks’ technology radar [7].

Participation, as defined by Hella von Unger [8], is described as: “Participation means, first of all, the individual or collective participation of people in decisions that are relevant to them, to their lives and, more narrowly, to their health.” The important aspect of this definition is that it addresses the individuals to whom the decisions are relevant. In the case of technology development, the later users are the participants who need to be able to make decisions. Problems arise when a power gap exists between the creators of such technology and the users (e.g. between adults and children). The Method Radar was initially developed for professionals working in institutions of social welfare that work with juveniles. In this environment, a constant power gap exists, as the professionals are authorized to issue directives and are responsible for the juveniles. Making participation visible at an early stage helps these professionals take the existing power relationship into account. It should be mentioned, however, that high levels of participation are not always desirable (which will be elaborated upon later in the discussion section).

Participation has such importance in research that the whole field of participatory design has emerged. It can be described as an “approach towards computer system design in which the people destined to use the system play a critical role in designing it” [9]. It is an area that by nature spans multiple disciplines, including the social sciences, design, and software engineering, which makes it a natural place for transfer innovations. As mentioned above, this paper combines a categorization system primarily used in software engineering with a participation model commonly discussed in the social sciences.

Following Bratteteig et al. [10], participatory design can be split into three guiding aspects that form its core. The first is having a say, which describes the sharing of decision-making power between the researchers, future users, and other stakeholders in the process. The second is the process of mutual learning between them. The third is the collaborative development of technologies and practices. The Method Radar highlights methods that especially support the first aspect of sharing and exchanging in the process.

All of these points come together in the question: “How can methods for participatory technology development be structured according to their degree of participation?” This contribution proposes a visual representation, in the form of the Method Radar, that addresses this question by illustrating the level of participation as well as a broad categorization into process phases based on the double diamond (Discover, Define, Develop, Deliver). The Method Radar was developed to support participative technology development within institutions of social welfare. It therefore primarily addresses professionals who work in such institutions as users, and people (mostly juveniles) within these institutions as the target group. It should, however, be applicable to any domain that requires participation to address power gaps.

This paper starts in Section 2 by summarising the current state of the art, split into related research on participation and on radar representations. Section 3 presents the Method Radar and its design principles, and Section 4 gives a brief view of the implementation. Sections 5 and 6 discuss the existing constraints, limitations, and future directions. Section 7 concludes the paper.

2 Related work

2.1 Models of participation

With the ‘ladder of participation’, Arnstein [11] designed in 1969 an early stage model of participation, which was meant to visualize the exchange between citizens and politicians and to recognize possible “empty forms of participation” [12]. This ladder divides the cooperation between decision-makers of urban projects and affected citizens into eight stages, from active manipulation to a process steered by citizens. It is shown in Figure 2 and explained below. These stages are not to be assumed as complete or comprehensive [11].

  1. Manipulation

    describes a form of non-participation in which, however, the appearance of participation is created from the outside, e.g. through pseudo-committees with no real decision-making power.

  2. Therapy

    describes a form of non-participation in which the values and opinions of the “participants” are changed to match the desired result, instead of the other way round.

  3. Informing

    describes the first and most important step towards participation. You can make decisions only when you know your rights, duties, and options.

  4. Consultation

    describes the process of asking the target audience for their opinion, for example through surveys.

  5. Placation

    describes the first level where some degree of influence from the target audience is visible. For example, by electing representatives to accompany the process or to provide feedback.

  6. Partnership

    describes a power redistribution between all parties. Decision-making is made in collaborative groups.

  7. Delegation

    describes decision-making authority in the hands of the target audience.

  8. Citizen Control

    describes that the whole process is in the hands of the individuals for which the decisions are relevant.

Figure 2: Ladder of participation following Arnstein [13].

A later interpretation based on Arnstein’s [11] model was presented by Roger Hart [14]. This interpretation explicitly addresses the participation of children, who in many countries are excluded from state democratic processes (e.g. elections). Roger Hart considers this state of affairs to be insufficient and not in accordance with the applicable conventions of children’s rights. In order to provide a means of discussion, he developed his own form of the ‘ladder of participation’. This is meant to serve as a linguistic metaphor to be able to talk about the involvement of children. The ladder is shown in Figure 3. The consideration of Hart’s model is interesting in that he himself wrote a critical reflection on his ladder of participation in 2008 [16]. This critique is discussed in more detail later on in the discussion section.

Figure 3: Ladder of participation following Hart [15] (an adaptation of [14]).

There is also the participation pyramid from Straßburger and Rieger [17], which focuses on the following aspects:

  1. Who initiated the process, and who bears responsibility for its further course?

  2. How far do the possibilities of co-determination extend?

  3. What enables and promotes co-determination or hinders or limits it?

The seven levels of the pyramid span the spectrum between minimum participation and decision-making power, whereby the higher the level, the greater the degree of participation. The left side of the pyramid shows the professionals’ view of participation processes, and the right side the view of the citizens (or more concretely: the addressees). A special feature is the half pyramid tip, since this degree of participation can only happen in civic self-direction. According to Straßburger and Rieger [17], the central question for classifying the degree of participation should be: “How are decisions made? With or without the citizens and addressees?” The answer to this question can be classified into different levels.

It should also be mentioned that Chung and Lounsbury [18] have created a model that represents the interaction between researchers and participants in eight levels. There are also larger models, such as the twelve-stage model for citizen participation from Trojan [19]. Finally, one of the most interesting models is the Kreis der Entscheidung [20], which can be translated as the circle of decision (this translation will be used in later references). Its interesting aspect is the circular representation of the ladder model. This circular model fits perfectly into a radar representation and is therefore used later on, as described in Section 3. It can be seen in Figure 4.

To compare the different levels of participation presented by Arnstein [11], Hart [14], and Wright [20], a comparison is given in Table 1. It shows the similarities between the models but also a slight shift in the definition of each level. To make the comparison easier, the original names from the circle of decision, which are in German, can be found in Figure 4, but appear translated into English in the comparison table (Table 1).

In contrast to the models of participation, which evaluate methods after their usage, there is also a simpler approach used by Stickdorn et al. [22]. They categorize methods beforehand into non-participative and participative approaches, differentiating only by whether there is interaction between researchers and a target group. For example, a method like cultural probe kits [22, p. 124] is listed as non-participative because the researchers do not interact face-to-face with the target group, whereas within the ladder of participation the probe kit can have participative aspects, as the participants give feedback through the probe kit.

Figure 4: Circles of decision (translated by the author) [21].

Table 1:

Comparison between the participation levels from Arnstein [11], Hart [14] and the circle of decision.

Arnstein        | Hart                           | Circle of decision
Citizen control | People-initiated               | Full decision-making authority
Delegation      | People-initiated, but directed |
Partnership     | Shared decisions               | Partial decision-making authority
Placation       |                                | Codetermination
Consultation    | Consulted                      | Involvement
                |                                | Consultation
Informing       | Informed                       | Information
Therapy         | Decoration                     | Not involved
Manipulation    | Manipulation                   | Not involved

2.2 Technology radar

There are numerous established instruments in the IT industry that assess and classify trends, technologies, and products. “In a broader sense, the goal of a radar or map is to gain a competitive advantage by identifying opportunities and threats emerging from technological developments at an early stage and to provide the technological capabilities needed to face these challenges”, writes Rohrbeck [5], who also classifies different goals and approaches in technology scouting.

Visualizations for such scouting studies appear in various standardized styles. Among the most well-known are the Gartner Magic Quadrant [23] and some of its competitors from other established consultancy companies. They use the so-called “Boston square” visualization, consisting of a 2 × 2 matrix in which technologies or products are shown as symbols. This allows two main dimensions (x- and y-axis) to be shown, plus additional information in the symbol shape, size, or color.

Also well known in the IT industry is the Gartner Hype Cycle [24], basically a graph with an upward slope, a peak, a trough, and then a plateau at middle height. Trends, technologies, or products are shown as symbols on the “hype cycle”, visualizing the maturity and perception of each contestant.

The most versatile of these display styles is the radar chart. A radar chart in general has a spider-web structure, allowing it to show several related market areas as segments in the circle. In addition, concentric rings allow a classification according to maturity levels or similar categories. Trends, technologies, and products can then be displayed by symbols in varying shapes and colors. A more in-depth description of the fundamental function of the radar representation used in the technology sector can be found in the work from Schuh and Klappert [3], who explain it under the name “monitoring radar”. Although there are alternative visualizations (e.g. scored profiles [25] or technology curves [26]), this is an intuitive and well-established way of representing information.

The most prominent technology radar, according to Kroß et al. [27], is the Thoughtworks technology radar [7].[1] The IT consulting company Thoughtworks evaluates the technologies it uses every six months and structures them in a so-called technology radar. It is a tool to get a broad overview of many currently used technologies and the state each of them is in.

The basic representation used is a structure that looks similar to a radar screen, which is where the name comes from. The radar contains a set of technologies, each represented as one dot in the system (hereafter called a blip). This can be seen in Figure 5, which shows a radar from 2016 that could still be presented in a single image. The radar primarily shows the maturity of a technology. This is done using rings, where each ring represents one maturity level, from “hold” (technologies that should be avoided) through “assess” (could be interesting but has not yet been tested by Thoughtworks) and “trial” (has been tested and is a good solution to consider in the right situation) to the highest level, “adopt” (should be used if appropriate). In addition to this maturity grade, the radar is split into quadrants to roughly divide the technologies into thematic areas. Finally, blips that are new on the radar are highlighted by their shape.

Figure 5: Full overview of technology radar from Thoughtworks [30].

In Figure 6 a detailed version of one quadrant is shown.

Figure 6: Detail view of one quadrant in Thoughtworks technology radar [7].

Radar representations are discussed in many publications for a variety of reasons. Berndt and Mietzner [4] address how a technology radar can help facilitate knowledge and technology transfer, giving a good example of what a radar visualization has to offer. Rohrbeck [5] also presents a radar, similar to the Cisco technology radar, but for technology scouting. Boe-Lillegraven and Monterde [6] explore how and why foresight can influence ways of processing information based on the Cisco technology radar. Their paper conducted a survey and identified the most important reasons for using a technology radar as the following:

  1. “It helps us communicate”

  2. “It is good for business”

  3. “It brings new perspectives”

  4. “We like the process”

  5. “It helps carry out our task”

These reasons show the importance a categorization system has for any kind of process, as it shapes not only the personal perspective but also the communication among those involved.

3 Method Radar

Based on the previously mentioned literature, the Method Radar can be considered a transfer innovation that takes the Thoughtworks technology radar and adapts it by applying the participation levels described by Wright et al. [20] in the circle of decision.

3.1 Transforming technology radar to Method Radar

Starting with the technology radar as shown in Figure 7, the four maturity grades (hold, assess, trial, and adopt) are replaced by the seven levels of participation (not involved – nicht beteiligt, information – Information, consultation – Anhörung, involvement – Einbeziehung, codetermination – Mitbestimmung, partial decision-making authority – teilw. Entscheidungskompetenz, full decision-making authority – Entscheidungsmacht) based on the circle of decision, as shown in Figure 4. The four quadrants (techniques, tools, platforms, and frameworks) are no longer appropriate categories for the content, as methods need a different kind of categorization. For this categorization, the well-known phases of the double diamond [22, p. 88] (Discover, Define, Develop, Deliver) are used. A translated version in the form of the categories “Entdecken, Definieren, Testen, Entwickeln” is planned for the (German) target audience. These phases are exchangeable with different categories depending on the type of methods that are used. In this case, the design focuses on methods for participative technology development, which can be categorized by the double diamond.

Figure 7: Technology radar from Thoughtworks (newer version) [7].

The concept art of the resulting Method Radar can be seen in Figure 8. It shows the radar separated into seven rings with the previously mentioned levels (in German) and four quadrants based on the double diamond (which will be translated to German in the implementation). The coloring of the radar itself and of the blips is exchangeable to match the surrounding content. The different blip colors are intended for existing methods (light green), newly developed methods (dark green), and the currently selected element (red).

Figure 8: Method Radar.

3.2 Adding methods to the radar

Adding methods to the radar requires categorizing them into a level of participation. The following question structure, also visualized in Figure 9, should aid in determining the right ring in the radar; a sketch of this decision flow as code is given after Figure 9.

  1. Is it clear who should be participating?

    Who should benefit from the usage of the method and should they be part of it?

    e.g. Is it about the participation of a social work professional, a young person in an institution, or a scientist?

    For technology development always choose the perspective of the target audience for which the technology is developed.

  2. Is any type of exchange with the target group planned?

    Pure observing or watching footage for example does not create any exchange with the group itself.

  3. Is the exchange direct?

    Exchanging through a third party filters the real opinion of the target audience through that third party.

    E.g. asking professionals of social work which technology would help the juveniles most: the answer will always be what the professionals think is best for the juveniles.

  4. Is the target group informed enough to make decisions?

    Participation can only happen when the participants can make informed decisions. Every decision without information can be interpreted as non-participative.

  5. Is the target audience consulted for their opinion?

    The first step in participation is to ask. This could be starting a method by asking participants what technologies they think might be helpful to them.

  6. Is the target group consulted on the findings/needs/plans identified?

    The next level of participation is to not only ask for opinions but also get feedback on the findings. This ensures that the conclusions drawn are also accurate.

  7. Are representatives of the target group involved in the process?

    Representatives from the group can become part of the process and help at least partly to get the target’s opinion.

  8. Does the target group make decisions in at least one, not insignificant subarea?

    An important question is which decisions the target group is able to make. These decisions need to be important, otherwise it is false participation.

    E.g. building a house and asking the future residents what color the hallway should have would not be significant.

  9. Does the target group make all significant decisions?

    If this is the case, the target audience is in the primary driver role, while the others are just supporting them.

  10. Does the target group initiate and take responsibility for the action itself?

    The highest level of participation is when a group initiates something by themselves. This is not represented in the radar, as it is rarely the case when applying methods.

Figure 9: Flowchart for adding methods to the radar.
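One way to read this decision flow is as a small function that maps the answers to a ring. The property names and the handling of the first four questions are assumptions for illustration; the same walk through the questions is done by hand for several example methods later in this section.

```javascript
// One possible reading of the flowchart in Figure 9 as code.
// The property names of `answers` are illustrative; each holds a yes/no
// answer given by the person categorizing a method.
function participationLevel(answers) {
  if (!answers.targetGroupClear) return "not involved";        // question 1
  if (!answers.exchangePlanned) return "not involved";         // question 2
  if (!answers.exchangeDirect) return "not involved";          // question 3
  if (!answers.groupInformed) return "not involved";           // question 4
  if (!answers.opinionAsked) return "information";             // question 5
  if (!answers.consultedOnFindings) return "consultation";     // question 6
  if (!answers.representativesInvolved) return "involvement";  // question 7
  if (!answers.decidesSignificantSubarea) return "codetermination";                      // question 8
  if (!answers.decidesAllSignificantIssues) return "partial decision-making authority";  // question 9
  // Question 10 (the group initiates and takes responsibility itself) lies
  // beyond the radar's highest ring and is therefore not modelled here.
  return "full decision-making authority";
}
```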

Besides the described level of participation, which is represented as the rings of the radar, there are also the quadrants. To sort methods into the quadrants, the primary focus of the method should be considered and assigned to one of the four categories. They can be described as follows:

  1. Discover

    Methods that focus on discovering and understanding a domain to get insights into it are placed here.

  2. Define

    Methods that condense the discovered insights are listed here. These methods help to focus on the right things and define the steps to be taken to move on.

  3. Develop

    Methods to take the steps that have been previously defined to create a solution. This can be ideation, prototyping, testing, and much more.

  4. Deliver

    Methods that focus on wrapping things up, creating finished goods, and getting them out to the users.

To get a better understanding of the process of adding methods, some examples are given. While participation is in focus here, the sorting into the right quadrants is also done at the end of the section. The methods are taken from Stickdorn et al. [22]. The first example is “preparatory research” [22, p. 144]. This is a desk research method that tries to get an understanding of what might be the right questions in a topic. It is intended to get a process started and can be considered non-involvement. Following the flowchart provided in Figure 9, it would look as follows:

  1. Is it clear who should be participating?

    Depending on the topic that is researched, we may know the exact target group or have more of a rough idea. Although this question can certainly be discussed, it will become clear at the latest with the next questions that no participation is envisaged.

  2. Is any type of exchange planned?

    No, this method does not include any exchange with people besides the researchers.

In comparison, “participant observation” [22, p. 146] directly targets the right audience, as it involves understanding their environment. Depending on the exact definition, it can be just watching or also asking questions during the observation. It does not plan to bring the target group to a level where they can make informed decisions, and therefore it is in most cases non-involvement. Using the flowchart:

  1. Is it clear who should be participating?

    Yes, the target group we are observing.

  2. Is any type of exchange planned?

    As mentioned above, this can be argued, but again it is resolved by looking further ahead: the method does not intend to inform the target group enough to make informed decisions.

  3. Is the exchange direct?

    Most likely yes.

  4. Is the target group informed enough?

    Most likely no. They are regarded as objects of observation in the method.

A journey map [22, p. 152], on the other hand, starts off with communicating the scope of the map and its usage. It not only invites and asks the participants but also discusses every aspect of the journey with them through multiple iterations. This can be considered partial decision-making authority.

  1. Is it clear who should be participating?

    Yes, this is a requirement.

  2. Is any type of exchange planned?

    Yes, as part of the method.

  3. Is the exchange direct?

    Yes, the participants help form the journey.

  4. Is the target group informed enough?

    Yes, the method intends that the participants take part in the discussions to form the journey, which requires sufficient information.

    With this knowledge, we know we are at level one (information) on the radar or higher.

  5. Is the target audience asked for their opinion?

    Yes, as mentioned, they are not only asked, they actively take part in the discussion.

    We are now at level two (consultation) or higher.

  6. Is the group consulted on findings/needs/plans?

    Yes, as mentioned above, they form the findings together.

    We are now at level three (involvement) or higher.

  7. Are representatives involved in the process?

    Yes, as mentioned above.

    We are now at level four (codetermination) or higher.

  8. Does the group make significant decisions?

    Yes, they take part in shaping the whole journey, which can be considered significant in regard to the method. We are now at level five (partial decision-making authority) or higher.

  9. Does the group make all significant decisions?

    This is open to discussion, as depending on the execution the target audience can decide everything or can be restricted. Whether the method matches full decision-making authority or partial decision-making authority better depends on the strictness of the evaluation, which is to be determined by the user of the radar.

With these steps, the methods are categorized with regard to the participation level, which is represented by the rings of the radar. The Method Radar offers a second dimension for sorting the methods, which focuses on the phase in a process, based on the double diamond. The first two methods, “preparatory research” and “participant observation”, are intended to get a first understanding of a domain, its scope, and possible questions that need to be tackled. These two methods are part of the first quadrant, “discover”. A journey map, on the other hand, helps to sort all the insights and bring them into shape. It is part of the “define” step. Methods that focus on generating solutions, prototyping, and testing are sorted into the “develop” category, while the last quadrant, “deliver”, hosts methods that focus on creating a deliverable product and getting it into the world. More examples can be seen in the implemented radar described in Section 4.3.

4 Implementation

In the framework of the INTIA [31] project, the radar was implemented as code so that this or other modified radars can be visualized on a webpage (for better sharing). INTIA is a research project funded by the German Federal Ministry of Education and Research to develop methods and technologies for supporting everyday activities of young people with special needs. The Method Radar is implemented as an open-source web application, which can be embedded into existing websites or copied, making it easy to add the Method Radar to one’s own processes or projects and allowing a high level of flexibility. For this reason, the radar works with any basic HTML page that can use vanilla JavaScript. To provide the easiest possible solution for later users, the whole radar is contained in a single JavaScript file, which makes adding the radar to a webpage as simple as possible, as shown in the following usage section. All of this is implemented with the help of the D3.js [32] library. The implementation builds on the MIT-licensed Zalando technology radar [33], a simpler open-source version of the Thoughtworks technology radar. It was adapted to support more rings and quadrants than the original and, in this step, refactored with the previously mentioned D3.js to be cleaner and easier to maintain. The resulting radar file is clearly structured in eight sections (a structural sketch follows the list):

  1. Declaration of constants and variables

    Fixed values for minimal sizes and defaults; these are overwritten by the configuration.

  2. Helper functions

    Primarily math functions.

  3. Data processing

    Reading the JSON data and calculating the sizes of blips, segments, and sectors.

  4. Creating the structure

    Creates a div element with all needed child elements that later replaces the placeholder div in the HTML.

  5. Events

    Every interaction-related event like click or hover.

  6. D3 components

    Defining the functions that are later on called to generate the elements.

  7. Element generation

    Generates all elements based on the previously provided data and components and attaches them to their designated place in the div structure.

  8. Simulation

    A collision simulation with d3.
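As a rough orientation, the overall shape of such a single-file radar script could look like the following hypothetical skeleton, with the eight sections as comments (names are illustrative, not the actual INTIA code):

```javascript
// Hypothetical skeleton of the single radar file.
function methodRadar(config, structure, entries) {
  // 1. Constants and variables: minimal sizes and defaults, overwritten by the configuration.
  // 2. Helper functions: mostly math (angles, radii, positions).
  // 3. Data processing: read the JSON data, compute blip, segment, and sector sizes.
  // 4. Structure: build a div with all child elements that replaces the placeholder div.
  // 5. Events: click and hover interactions.
  // 6. D3 components: functions that later generate the visual elements.
  // 7. Element generation: create all elements and attach them to the div structure.
  // 8. Simulation: a D3 collision simulation that keeps blips from overlapping.
}
```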

The most interesting aspect of the implementation is ensuring that the blips fit into their assigned locations without overlapping with other areas. To understand this, it is important to know that each segment consists of the start and end angle of the sector in which the segment is located, and of the minimum and maximum radius of the ring in which the segment is located. First, the minimum distance that each blip has to keep from a border is calculated from the margin and blip size in the configuration: blipDistance = blipSize/2 + borderMargin. This blipDistance is then applied to the outer and inner radius of each segment to create a smaller segment in which blips are allowed to be placed (without touching the border). To simplify this calculation, it is only done once per ring and then inherited by each segment through its ring. To allow more or fewer than (four) quadrants, margins towards the sector borders also have to be calculated, based on the angle each segment has to its neighboring segment. This angle can be derived from the right triangle formed by the blip’s radial distance as one cathetus and the blipDistance as the other. With this angle, it is possible, similar to before, to create an inner area in which all blips have to stay. A sketch of this calculation is given below.
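The following sketch shows this segment-shrinking logic in JavaScript, under the assumption of illustrative variable and function names (the actual code in the repository may differ in detail):

```javascript
// Minimal sketch of the segment-shrinking logic described above.
const blipSize = 12;     // blip diameter in px (taken from the configuration)
const borderMargin = 2;  // minimum gap between a blip and any border in px

// Minimum distance the blip centre must keep from every border.
const blipDistance = blipSize / 2 + borderMargin;

// Shrink the radial extent of a ring so blips cannot touch the ring borders.
function radialBounds(ring) {
  return {
    minRadius: ring.innerRadius + blipDistance,
    maxRadius: ring.outerRadius - blipDistance,
  };
}

// Shrink the angular extent of a segment so blips cannot touch the sector
// borders. At a given radius, the angular margin follows from the right
// triangle formed by that radius and the blipDistance.
function angularBounds(segment, radius) {
  const angleMargin = Math.atan(blipDistance / radius);
  return {
    minAngle: segment.startAngle + angleMargin,
    maxAngle: segment.endAngle - angleMargin,
  };
}
```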

4.1 Usage

The Method Radar can be added to any standard HTML webpage. In the listings used as an example, it is added to a GitHub page [34]. GitHub Pages, by default, runs a static HTML site generated by the tool Jekyll [35]; wherever it matters for the usage, the listings mention what is different from vanilla HTML. As the radar is based on D3.js [32], it is required to add the library as a script. In addition, it is necessary to import the radar script itself and add a style sheet. All files can be imported as external references and later copied into the project to customize everything.

After adding the required files, the radar must be placed on the website. This is done by adding a div element at the location where the radar should later appear. This can be seen in Listing 1.

Listing 1

Add radar to html.
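A minimal sketch of such an embedding could look like the following, where the file names and the element id are placeholders rather than the actual ones used in Listing 1:

```html
<!-- Hypothetical embedding; actual file names and the id may differ. -->
<script src="https://d3js.org/d3.v7.min.js"></script>
<script src="/assets/js/method-radar.js"></script>
<link rel="stylesheet" href="/assets/css/method-radar.css">

<!-- Placeholder div that the radar script locates by its id and replaces. -->
<div id="method-radar"></div>
```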

Through the id of the div, the JavaScript locates the element and replaces it with the radar. The function that does this expects three parameters that define the radar: the configuration of the radar, its structure, and the entries. To allow usage in any webpage, the expected format for the three parameters is JSON [36]. Listing 2 shows how the files can be added to the radar.

Listing 2

LoadConfig.
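A minimal sketch of such a loading step, with a hypothetical entry function name (here methodRadar) and placeholder file names, might look as follows:

```javascript
// Hypothetical loading code; the function name and file paths are illustrative.
Promise.all([
  fetch("config.json").then((response) => response.json()),
  fetch("structure.json").then((response) => response.json()),
  fetch("entries.json").then((response) => response.json()),
]).then(([config, structure, entries]) => {
  // The radar script replaces the placeholder div with the rendered radar.
  methodRadar(config, structure, entries);
});
```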

After these few steps, the page can be rebuilt, and the radar will be visible. The next step is to add entries to the radar and customize it to fit one’s own page.

4.2 Configuration

The radar is configured through three files. The first file covers the radar’s configuration, e.g. coloring or naming; it contains four sections: one general, one for sectors, one for segments, and one for the blips. The second file defines the structure of the radar, i.e. how many rings and sectors the radar has; it also defines the possible states that blips can have, and it is possible to visualize the movement of blips between different versions of the radar. The last JSON file is the list of entries, in this case a list of methods.
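As an illustration, a structure file along these lines might look as follows. The key names are assumptions, and the ring and sector names are given here in their English translations:

```json
{
  "rings": [
    { "name": "Not involved" },
    { "name": "Information" },
    { "name": "Consultation" },
    { "name": "Involvement" },
    { "name": "Codetermination" },
    { "name": "Partial decision-making authority" },
    { "name": "Full decision-making authority" }
  ],
  "sectors": [
    { "name": "Discover" },
    { "name": "Define" },
    { "name": "Develop" },
    { "name": "Deliver" }
  ],
  "states": [
    { "name": "existing", "color": "lightgreen" },
    { "name": "new", "color": "darkgreen" },
    { "name": "selected", "color": "red" }
  ]
}
```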

4.3 Example

The final implementation of the Method Radar is available at https://intia.de/werkzeuge/methoden/ [31]. It consists of an explanation of the Method Radar, its concept, and how to use it. Besides the radar, a list of the methods is provided, allowing a user to read short abstracts of the methods and follow hyperlinks to a detailed description of each method and its application. Currently, not all methods sorted into the radar have a detailed description. The coloring of the blips is used here to distinguish methods that are already described on the website from those that are currently only categorized without a description. A screenshot of the site is shown in Figure 10.

Figure 10: Implementation of the method radar.

In this figure, a radar with twelve methods is shown, which have been used in the INTIA research project. They are sorted according to their usage in the project and are mostly in German; nevertheless, they are presented here as an example. The darker-colored methods are described on the webpage; the lighter-colored ones are not yet described there. The full list of methods is given in Table 2 in Section 6; one of them, analog prototyping, is described in more detail below.

Analog prototyping, for example, is an open approach intended to help give ideas a form and to discuss possible solutions. It consists of a set of tiles with different (geometric) shapes. The tiles can be used to lay out simple sequences: they can be taken in hand and arranged according to rules, which makes it easy to come up with ideas together and map them out. It is a method that focuses on helping a target group shape their ideas within the smart home domain (limited by the types of tiles), and it is intended to be usable by people who are not developers.

The tiles represent ideas in a simple programming language. To do this, the different tiles must be placed in a row so that a sentence is created. The programming tiles stand for corresponding actions; among them are tech tiles (e.g. sensors), action tiles (e.g. open/close), and sentence-structure tiles (e.g. if/then). The tiles can be placed together in groups or individually. Here is an example: all sentences start with a sentence-structure tile (in this example “If”). This is followed by a tech tile (“contact sensor”) and an action (“close”).

If the contact sensor is closed, then the smart lamp goes on.

Everything, from shaping the initial idea to laying the tiles, is done by the target group; the method relies entirely on them. Using the same approach as in Section 3.2:

  1. Is it clear who should be participating?

    Yes, this is a requirement.

  2. Is any type of exchange planned?

    Yes, as part of the method.

  3. Is the exchange direct?

    Yes, the participants exchange primarily with each other, and the researchers take part in the discussion only on request.

  4. Is the target group informed enough?

    Yes, the target group is briefed on the method beforehand and works on their own ideas.

    With this knowledge, we know we are at level one (information) on the radar or higher.

  5. Is the target audience asked for their opinion?

    Yes, as mentioned, they are not only asked, they are the primary part of the discussion.

    We are now at level two (consultation) or higher.

  6. Is the group consulted on findings/needs/plans?

    Yes, as mentioned above, they form the findings primarily by themselves.

    We are now at level three (involvement) or higher.

  7. Are representatives involved in the process?

    Yes, the whole process is shaped by the target group and is only limited in its domain by the hexagonal tiles.

    We are now at level four (codetermination) or higher.

  8. Does the group make significant decisions?

    Yes, they decide which ideas are developed further, how they take shape, and when a good solution has been found. We are now at level five (partial decision-making authority) or higher.

  9. Does the group make all significant decisions?

    This is open to discussion (similar to the journey map), as depending on the execution the target audience can decide everything or can be restricted. Whether the method matches full decision-making authority or partial decision-making authority better depends on the strictness of the evaluation, which is to be determined by the user of the radar.

5 Discussion

While the Method Radar offers an interesting approach, it comes with some limitations that need to be discussed.

First of all, the underlying ladder of participation is intended as a reflective model for evaluating work that has already been completed. This becomes apparent when classifying methods into the ladder: many methods span a range of possible participation levels that is only determined in actual use, which makes the categorization hard and error-prone.

This opens up the second point for discussion: the concrete method for classification. This is also mentioned by Wright et al. [37]: “The evaluation of participation on the basis of hierarchical models is increasingly viewed critically because individual degrees, levels or forms of participation must always be evaluated in the context of resources, settings and the target group [translated by the author].” This point was already mentioned before, but it is important because the proposed Method Radar does not claim to be an evaluation of a concretely executed method. The paper proposes a methodology for categorizing methods into the ladder model, but this proposal only provides guidelines for decision-making, as it is difficult to put the participation possibilities of methods into universal rules. It is up to the users to adapt the generic Method Radar to the individual needs of the target group. It is therefore important to find a common evaluation ground for adding methods after implementing the radar, but this does not pose the same challenges that evaluative approaches face.

A direct approach to improving the categorization quality would be to evaluate the participation after each use of a method. This would, in theory, bring the methods closer to their real level of participation with each usage. Considering the criticism mentioned by Wright et al. [37], it remains to be evaluated whether this matches the estimation made by professionals who have deeper knowledge of a target group.

The presented model is used in the domain of participative technology development, which is close to the domain of the original technology radar. Nevertheless, there seems to be no reason why it should only be usable in this domain, and restricting the presentation to this limited scope might be unnecessary.

Besides the comments focused on the presented Method Radar, it should also be mentioned that there is general criticism of the ladder of participation. Roger Hart has written a critical reflection [16] on his interpretation of the ladder model. The core contents of this critique are summarized below. Hart himself understood the ladder as a mere metaphor and never as the model it is applied as today. He therefore took a stand in his reflection report, in which he makes the following criticisms of his metaphor:

  1. The focus is purely on projects and ignores the everyday participation of children.

  2. It limits itself purely to describing the role of adults in the process.

  3. The ways in which adults organize/prepare for participation should be given equal consideration.

  4. The ladder is not meant to be a form of climbable steps (This is why, for example, there are comments about whether one should speak of forms rather than steps).

  5. Higher rungs in the ladder are not superior to those below.

  6. It is not a matter of children having the last word or making the final decisions.

  7. Children also have power relationships with each other (removing adults does not automatically create participation).

  8. Non-participation is not synonymous with ‘bad’ or morally reprehensible.

  9. The model is based on the Western world and does not correspond to all cultures (a culture of individualists vs. a culture of the collective).

Many of these bullet points also apply to the presented Method Radar and should be recognized by users to understand its limitations. Specifically, the points concern the Method Radar as follows:

  1. The focus of the Method Radar lies not on projects, but on methods. It is thus more lightweight than whole projects, but only partially helps in the everyday participation of children.

  2. As the Method Radar relies on the ladder of participation it does not improve the role description in comparison to the original.

  3. The Method Radar brings focus to participation in the method selection phase and thereby substantially addresses this point of criticism.

  4. The risk of focusing on the highest possible level of participation, as described by Hart, also exists with the Method Radar. The reason is that participation is already the focus of method selection and the model does not, on its own, provide an adequate warning of this risk.

    The ring representation was also used to reduce this effect as it does not directly encourage you to climb up.

  5. Making clear that higher levels of participation are not always better is not improved by the Method Radar itself.

  6. Similar to point two, the Method Radar does not improve on criticism points six to nine, as it relies on the classification given by the ladder of participation. For example, the Method Radar does not address the relationships within the target group.

6 Future directions

The presented Method Radar is already in use as part of the technology development research project INTIA to categorize methods for professionals who have already started a technology development process with the INTIA suitcase. A list of the sorted methods is shown in Table 2. Nevertheless, a more in-depth analysis of the implications of its use should be conducted. This would help to understand the thought process applied during the usage of the Method Radar and, therefore, whether it fulfills its goals. In addition to this direct evaluation of the radar, it should also be examined into which level of participation methods are sorted and what the actual participation after their usage is. The insights gained from this will help to better classify methods into the various participation levels.

For the concrete implementation, it is planned in the first instance to design the application with as few barriers as possible, to provide as many people as possible with a helpful tool. This is the only way to allow people to take the initiative for themselves. Besides increasing the accessibility of the radar, more functions such as sorting or filtering are still needed to make methods easier to find and to enrich the user experience.

In addition, it would be worthwhile to flip the evaluation by empowering the target group. The Method Radar provides an initial categorization based on the ladder of participation. The direct way of evaluation is to use one of the ladder of participation models to evaluate the usage and then improve the Method Radar. An alternative would be to let the target group estimate the level of participation after a method was used and compare it with the estimated participation level. This would give helpful insights and bring the Method Radar itself under the direct influence of the target group.

Table 2:

List of methods in Method Radar implementation (translated by the authors).

Number | Name                                                        | Source
1      | Smiley feedback                                             | INTIA
2      | Analog prototyping                                          | INTIA
3      | Phase model of inclusive prototype development in presence  | INTIA
4      | Escape game                                                 | INTIA
5      | Impulse cards                                               | INTIA
6      | Photo research                                              | [22]
7      | The superhero                                               | INTIA
8      | Interviews                                                  | [22]
9      | Invention memory                                            | INTIA
10     | Hospitation                                                 | [22]
11     | User journey                                                | [22]
12     | Needs cards                                                 | INTIA

7 Conclusions

This work presents a categorization system in the form of a radar for categorizing technology development methods. It allows categorization in one to three dimensions: rings, segments, and the color of blips. By using the circle of decision [20] (translated by the authors), an adaptation of Arnstein’s ladder of participation, as the rings, the user’s focus is concentrated on the participation aspect.

Compared to alternative approaches and categorization systems, this gives participation a high level of attention during method selection. This attention leads to a higher awareness of opportunities for participation.

A practical implementation of the presented radar is provided as an open-source website that anyone can use, extend, or adapt. The radar in use can be found at https://intia.de/werkzeuge/methoden/ [31]. The underlying implementation can be found in the associated GitHub project at https://github.com/th-koeln-intia/intia-portal [38]. For those who just want to use the radar implementation for alternative radar representations, a standalone version with a usage guide (in German) is also available at https://github.com/th-koeln-intia/method-radar [39].


Corresponding author: Marc Schmidt, Cologne Institute for Digital Ecosystems, Technische Hochschule Köln, Cologne, Germany. E-mail:

Award Identifier / Grant number: 13FH534SX7

Acknowledgments

The authors want to thank all partners involved in the development of the INTIA project.

  1. Research ethics: Not applicable.

  2. Informed consent: Not applicable.

  3. Author contributions: The authors have accepted responsibility for the entire content of this manuscript and approved its submission.

  4. Competing interests: The authors state no conflict of interest.

  5. Research funding: The project on which this paper is based was funded by the German Federal Ministry of Education and Research under the grant number 13FH534SX7. The responsibility for the content of this publication lies with the author.

  6. Data availability: Not applicable.

References

1. Piqsels. Post It Notes, 2021. https://www.piqsels.com/en/public-domain-photo-fffnv.

2. Allport, G. W. The Nature of Prejudice; Addison-Wesley: Boston, 1954; p. 537.

3. Schuh, G., Klappert, S. Technologiemanagement; Springer: Berlin, Heidelberg, 2011. https://doi.org/10.1007/978-3-642-12530-0.

4. Berndt, M., Mietzner, D. Facilitating knowledge and technology transfer via a technology radar as an open and collaborative tool. In New Perspectives in Technology Transfer: Theories, Concepts, and Practices in an Age of Complexity; Dana, M., Schultz, C., Eds.; Springer International Publishing, 2021; pp. 207–230. https://doi.org/10.1007/978-3-030-61477-5_12.

5. Rohrbeck, R. Harnessing a network of experts for competitive advantage: technology scouting in the ICT industry. R&D Manag. 2010, 40, 169–180. https://doi.org/10.1111/j.1467-9310.2010.00601.x.

6. Boe-Lillegraven, S., Monterde, S. Exploring the cognitive value of technology foresight. Technol. Forecast. Soc. Change 2015, 101, 62–82. https://doi.org/10.1016/j.techfore.2014.07.014.

7. Thoughtworks. Technology Radar | An Opinionated Guide to Technology Frontiers, [n.d.]. https://www.thoughtworks.com/en-de/radar.

8. von Unger, H. Zentrale Komponenten eines partizipativen Designs. In Partizipative Forschung: Einführung in die Forschungspraxis; von Unger, H., Ed.; Springer Fachmedien, 2014; pp. 35–49. https://doi.org/10.1007/978-3-658-01290-8_3.

9. Schuler, D., Namioka, A. Participatory Design: Principles and Practices; CRC Press: Boca Raton, 1993.

10. Bratteteig, T., Bødker, K., Dittrich, Y., Mogensen, P. H., Simonsen, J. Routledge International Handbook of Participatory Design; Routledge, 2013.

11. Arnstein, S. R. A ladder of citizen participation. J. Am. Plann. Assoc. 1969, 35, 216–224. https://doi.org/10.1080/01944366908977225.

12. Hartung, S., Wihofszky, P., Wright, M. T. Partizipative Forschung – ein Forschungsansatz für Gesundheit und seine Methoden. In Partizipative Forschung: Ein Forschungsansatz für Gesundheit und seine Methoden; Hartung, S., Wihofszky, P., Wright, M. T., Eds.; Springer Fachmedien: Wiesbaden, 2020; pp. 1–19. https://doi.org/10.1007/978-3-658-30361-7_1.

13. Citizens Handbook. The Citizens Handbook, 2019. https://www.citizenshandbook.org/arnsteinsladder.html.

14. Hart, R. Children’s Participation: From Tokenism to Citizenship; UNICEF International Child Development Centre: Florence, 1992.

15. Hart, R. Roger Hart’s Ladder of Children’s Participation, 2021. https://www.mefirst.org.uk/resource/arnsteins-ladder-of-participation/.

16. Hart, R. Stepping back from “the ladder”: reflections on a model of participatory work with children. In Participation and Learning; Springer, 2008; pp. 19–31. https://doi.org/10.1007/978-1-4020-6416-6_2.

17. Straßburger, G., Rieger, J. Partizipation kompakt: Für Studium, Lehre und Praxis sozialer Berufe, 2nd ed.; Beltz Juventa: Weinheim, 2019.

18. Chung, K., Lounsbury, D. W. The role of power, process, and relationships in participatory research for statewide HIV/AIDS programming. Soc. Sci. Med. 2006, 63, 2129–2140. https://doi.org/10.1016/j.socscimed.2006.04.035.

19. Trojan, A. Bürgerbeteiligung – Die 12-stufige Leiter der Beteiligung von Bürgern an lokalen Entscheidungsprozessen; Verlag für Akademische Schriften: Bad Homburg v. d. Höhe, 2001.

20. Wright, M. T., Block, M., von Unger, H. Partizipation in der Zusammenarbeit zwischen Zielgruppe, Projekt und Geldgeber. Gesundheitswesen 2008, 70, 748–754. https://doi.org/10.1055/s-0028-1102955.

21. Bethmann, A., Hilgenböcker, E., Wright, M. Partizipative Qualitätsentwicklung in der Prävention und Gesundheitsförderung. In Prävention und Gesundheitsförderung; Tiemann, M., Mohokum, M., Eds.; Springer, 2020; pp. 1–13. https://doi.org/10.1007/978-3-662-55793-8_119-1.

22. Stickdorn, M., Hormess, M. E., Lawrence, A., Schneider, J. This Is Service Design Doing: Applying Service Design Thinking in the Real World, 1st ed.; O’Reilly Media: Sebastopol, 2018.

23. Gartner, Inc. Magic Quadrant Research Methodology, 2022. https://www.gartner.com/en/research/methodologies/magic-quadrants-research.

24. Fenn, J., Raskino, M., Burton, B. Understanding Gartner’s Hype Cycles; Gartner: Stamford, ID G00251964, 2017; pp. 1–35.

25. Phaal, R., Farrukh, C., Probert, D. Technology management tools: concept, development and application. Technovation 2006, 26, 336–344. https://doi.org/10.1016/j.technovation.2005.02.001.

26. Schimpf, S., Faerber, M., Heubach, D., Korell, M., Lang-Koetz, C., Masior, J., Rettinger, A., Rummel, S., Jan, S., Schmitz, M., Sieber, V., Wichert, P., Wohlfart, L. Technologien frühzeitig erkennen, Nutzenpotenziale systematisch bewerten; Fraunhofer Verlag: Stuttgart, 2015.

27. Kroß, J., Bludau, P., Pretschner, A. Center for Code Excellence Trends – Eine Plattform für Trends in Softwaretechnologien. Inform. Spektrum 2020, 43, 417–424. https://doi.org/10.1007/s00287-020-01287-y.

28. Zalando. Zalando Tech Radar – 2022.10, [n.d.]. https://opensource.zalando.com/tech-radar/.

29. Pötzinger, D. Technologie-Radar für IT-Organisationen, [n.d.]. https://www.aoe.com/de/insights/technologie-radar-fuer-bessere-entscheidungen.html.

30. Der Hess. Technology Radar November 2016 by Thoughtworks, 2016. http://derhess.de/2016/11/22/technology-radar-november-2016-by-thoughtworks/.

31. INTIA. Methoden, [n.d.]. https://intia.de/werkzeuge/methoden/.

32. Bostock, M. D3.js – Data-Driven Documents, [n.d.]. https://d3js.org/.

33. Zalando. Zalando Technology Radar, 2022. https://github.com/zalando/tech-radar.

34. GitHub. GitHub Pages, [n.d.]. https://pages.github.com/.

35. Jekyll. Jekyll • Simple, Blog-Aware, Static Sites, [n.d.]. https://jekyllrb.com/.

36. Ecma International. ECMA-404 The JSON Data Interchange Standard, 2017. https://www.json.org/json-en.html.

37. Wright, M. T., von Unger, H., Block, M. Partizipation der Zielgruppe in der Gesundheitsförderung und Prävention; Hans Huber: Bern, Vol. 1, 2010; pp. 35–52.

38. Schmidt, M., Wilk, D. C., Hanna, T., Rösch, J., Janda, C., Kortekaas, C., Bente, S., Zorn, I., Kohls, C., Mager, B. INTIA Plattform, 2022. https://github.com/th-koeln-intia/intia-portal.

39. Schmidt, M., Klein, F., Wilk, D. C. INTIA Method Radar, 2022. https://github.com/th-koeln-intia/method-radar.

Received: 2023-03-14
Accepted: 2023-11-19
Published Online: 2023-12-06

© 2023 the author(s), published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.
