Abstract
Interaction-free usage (IfU) will be one of the quantitatively dominant forms of computer use in the future. In qualitative terms, this form of use will cover a wide range of applications, including software that supports communication and cooperation. Digital twins for cooperation and communication will be employed by individual users to maintain a variety of social networking activities. Generative AI will play a decisive role in this development by autonomously identifying user needs, replacing prompting as the predominant form of use with question-and-answer dialogs. These dialogs will also be used to preconfigure systems for IfU phases. The counterpart to IfU, which will become ever less frequent, is intervening interaction, where users intervene to explore and adjust the performance of AI-based systems in exceptional situations or to optimize them for future task handling.
1 A personal review of previews
Since the 1980s, our team has often faced the challenge of forecasting developments in the realm of Human-Computer Interaction (HCI) and collaborative computing, including areas such as Computer-Supported Cooperative Work (CSCW). As specialists in work-related IT support, we worked from the assumption that technology development is driven by industries seeking to improve economic efficiency, especially labor productivity. Consequently, we focused on innovations in the tools provided by management to increase worker productivity within organizations. This focus on tools, local organizations and the efficiency of task handling entailed some shortcomings for forecasting HCI development:
The tool perspective helped us anticipate phenomena such as digital photography. We also understood that computers would become a medium, but we framed this medium as a tool to support communication and coordination in organizations. As a result, we had a notable blind spot: we did not foresee social media and social networking, exemplified by platforms like Instagram.
By focusing on efficiency gains for work tasks in local organizations, we failed to anticipate the profound impact of IT-supported globalization on the labor landscape where workers in traditionally industrialized nations found themselves increasingly compelled to compete with a global workforce.
By assuming that technological innovation mainly follows the pattern of increasing economic efficiency in companies, we had difficulty foreseeing the shift towards consumer-oriented applications and its effects on HCI research. While early innovations in HCI were predominantly driven by advances in the workplace, this trend was reversed with the rise of the internet and web-based mobile applications, where private and consumer usage surpassed the technical level of many office environments in the working world.
The focus on HCI for supporting work in companies reached a watershed when, with the development of the World Wide Web, Web 2.0 and smartphones, more and more applications were developed to be used only occasionally. This break necessitated the systematic consideration of occasional users and posed significant new challenges for usability research and experience design. As consumers, users were increasingly confronted with a multitude of possible applications on the internet or on their smartphones, several of which they could run in parallel. Consumer orientation became a driving factor, with consumption behavior serving as an inexhaustible source of data for marketing. Marketing-oriented exploitation of HCI led to increasing efforts to observe and record people’s behavior and, subsequently, to growing capabilities for surveillance with corresponding societal impacts. 1
In addition, media communication, social media and networking have not only enabled and encouraged people to use multiple applications, but also to get and stay in touch with a large number of other people who may be spread all over the world. This trend could be observed as early as the 1980s in sociological studies on the increasing use of telephony. 2 , 3 One noticeable tendency that can be derived from these studies is people’s behavior of seeking distance in proximity and proximity in distance. With the emergence of social media, the concept of socio-technical systems introduced in the 1950s was challenged by a dissolution of boundaries: what was understood as a system could no longer be related to individual teams or companies. At the same time, the socio-technical orientation was given a new justification, but one in which social exchange takes place across companies and borders.
Our rather cautious predictions of the development speed of AI were mostly more appropriate than the exaggerated visions articulated in AI research. An example is the translation of spoken natural language, envisioned early on by AI researchers. 4 They also posited that AI should be adept at engaging in conversations resembling human dialogues as a kind of assistant, particularly for tasks like database research or hotel reservations. 5 These approaches were among the long-standing suggestions that AI could provide assistants that support people in their everyday tasks. However, even the conversational AI that has recently become available, such as ChatGPT, is not able to recognize the needs of users in a dialogue. The way ChatGPT is realized, based on large language models, does not yet allow it to pose questions that lead to a deeper understanding of users’ needs or characteristics. From these shortcomings and from experiences with past attempts to anticipate the development of HCI, we derive some consequences that will guide the following sections. To make more appropriate forecasts we must understand that:
Technology is driven more by the focus on consumer needs and their role as marketing addressees and data providers than by the pursuit of efficiency gains within companies.
Global distribution is more relevant than innovation in single companies.
The options for connecting with people and for using software applications and information sources, partially in parallel, are constantly expanding.
From a socio-technical perspective, the boundaries of social systems are blurring, and this expanded social context is a key driving factor.
The tool perspective is supplemented or even superseded by the media and network metaphors and by conversational agents.
The focus of HCI support shifts from job-oriented tasks to everyday tasks.
2 A possible focus: interaction-free usage
Our prediction for the future evolution of HCI is that the predominant HCI mode will be “no HCI” or, more precisely, the prevalence of phases in which usage takes place without fine-grained interaction as a continuous flow of control and response. HCI for exercising control will still play a role but will be reserved for situations where something goes wrong or where expectations emerge that are not met by the system.
We think that “interaction-free usage” (IfU) is a common phenomenon where, for example, machines, robots and autonomous vehicles start and run at least for a while without direct human control. However, IfU is only marginally described in the literature. 6 We define “interaction-free usage” as phases of usage during which none of the people who benefit from the system have to input data that are meant to intentionally and explicitly control or influence the system. The more often these phases occur during the use of a system and the longer they last, the more the application of this system is a case of IfU. IfU does not exclude reading, watching or listening, but it will reduce the need for continuous monitoring of IT-based processes; it is usually accompanied by phases of configuration, testing and re-adjustment.
The reasons why IfU will increasingly dominate are related to the phenomenon of the “invisible computer”, 7 as a result of which users rely on more and more computers without noticing them. In addition, in the future one or more users will benefit not from a single IT-based process but from a multitude of simultaneously running ones. Users will not be able to directly control all these processes because of their sheer number and/or complexity: they will lack sufficient resources to exercise continuous and detailed control of every process. A typical example is the smart home with several ongoing processes controlling air conditioning, lighting, shading of windows, watering of plants, alarm functions etc. 8 The computers behind it may be invisible, but the processes they produce should be visible enough 9 to allow users to be aware that they are functioning and to check their reliability or appropriateness. Other examples where people benefit from IT-driven processes are monitoring, advice and countermeasures in health care; notifications and warnings; monitoring of logistics and adaptation of transport systems and routes; and production surveillance. The processes in these contexts can be of a purely technical nature or the result of socio-technical integration by workflows in which phases of IfU are complemented by phases of task handling by people.
We suggest the following differentiation of cases of how IfU is enabled:
Implicit interaction, 10 where users’ actions are not primarily intended as input to a computerized system, but where such a system can interpret them as contextual changes that should trigger a certain process or the provision of a certain output. A simple example is a motion detector that controls lights: people approach a house and the light turns on although they had not intended their movement to function as system input (see the code sketch after this list).
Furthermore, changes in the system’s context that are not directly caused by users can be exploited to trigger automated processes. Examples are switching on lights when natural light wanes, or an autonomous vehicle reaching a road junction. While the change in the first example is independent of the system, the second is influenced by the autonomous process itself.
A simple case of IfU is an automated process that is merely started by the user, as with washing machines, where the system usually only has to monitor its internal states.
A specific case is processes that monitor data available via the internet – e.g. to present notifications if a certain product becomes available or if stock prices exceed certain thresholds.
A particular kind of context that can be exploited for IfU is changes in body-based parameters of a user, for health support or emotion detection.
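To make the common structure of these cases concrete, the following minimal sketch (in Python; all names and trigger conditions are hypothetical illustrations, not an existing API) shows how context changes, rather than explicit user input, can start automated processes:

```python
# Minimal sketch of context-triggered IfU: processes run without explicit
# user input; sensed context changes act as triggers.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Trigger:
    condition: Callable[[dict], bool]   # predicate over the sensed context
    action: Callable[[], None]          # process started without user input

def run_ifu_cycle(context: dict, triggers: list[Trigger]) -> None:
    """One evaluation cycle: no user interaction, only context interpretation."""
    for t in triggers:
        if t.condition(context):
            t.action()

triggers = [
    # Implicit interaction: movement not intended as input is interpreted as one.
    Trigger(lambda c: c["motion_detected"], lambda: print("light on")),
    # Context change independent of the user: natural light wanes.
    Trigger(lambda c: c["lux"] < 50, lambda: print("garden lights on")),
    # Internet monitoring: a stock price exceeds a configured threshold.
    Trigger(lambda c: c["stock_price"] > 120.0, lambda: print("notify user")),
]

run_ifu_cycle({"motion_detected": True, "lux": 30, "stock_price": 123.4}, triggers)
```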
There are several technological developments that will support the emergence of IfU. The further development of AI in the field of image and pattern recognition and the analysis of scenes will help to analyze the situational context and thus obviate user input. The detection of outliers based on machine learning will help to reduce the need for attentive monitoring of IT-based processes. AI supports not only the monitoring of the situational context, but also machine decision-making and action. A typical example is AI-based intelligent warehouse and logistics processes, where the need for human involvement is gradually reduced, both in routine tasks and in complex decisions by dispatchers. 11 Generative AI will eliminate much of the fine-grained editing in the creation of texts, websites, software or presentations by humans. This applies not only to business life, but also to the everyday lives of consumers: translations into other languages are produced without time-consuming research; letters, emails and presentations can be created without fine-grained editing. Numerous apps on smartphones take on monitoring tasks and suggest suitable measures to the user at the right time, such as terminating a contract. The associated completion of forms for this type of task can also be taken over.
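As an illustration of how outlier detection can replace attentive monitoring, the following sketch alerts the user only when a reading deviates strongly from the recent norm. A simple z-score serves as a stand-in for an ML-based detector; the values and names are hypothetical:

```python
# User attention is requested only for anomalies; routine readings pass silently.
import statistics

def needs_attention(history: list[float], reading: float, z: float = 3.0) -> bool:
    """True if the reading deviates more than z standard deviations from the norm."""
    mean = statistics.mean(history)
    std = statistics.stdev(history) or 1e-9
    return abs(reading - mean) / std > z

history = [20.1, 20.3, 19.9, 20.0, 20.2]
for reading in [20.1, 20.4, 27.5]:
    if needs_attention(history, reading):
        print(f"alert user: anomalous reading {reading}")   # exception handling only
```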
The main driving forces for the further development of IfU are likely to be societal rather than technical. The sociological discussion points to a multi-option society 12 and an increasing acceleration of aspects of our lives 13 in which we carry out several tasks and maintain contacts with other people in parallel – not only in business life, but also in everyday life. The provision of apps that enable IfU increases the possibility of pursuing more and more options. The IT industry has recognized and driven the need to be surrounded by multiple options for action and wide-ranging social interactions, and it offers software that produces marketing-relevant data while being used. IfU is a perfect ally for increasingly parallelized consumption processes.
These circumstances contribute to an increase in the extent of interaction-free phases when using IT. It should also be mentioned that there will be a fluid transition between interpreting the behavior of users as explicit control actions on the one hand and as implicit interaction on the other, as may be the case with eye movement or the sensing of brain activity. It is important to realize that IfU does not suddenly appear in its most developed form in a specific domain but results from a transition between different levels of proactivity, ranging from fully interactive to fully automated. A typical example is the development of autonomous vehicles. 14 At the lowest level, full control by the driver is merely supplemented by assistive functionality. At the next level, drivers need to be ready to take control when needed, and at the highest level we have IfU, where a driverless car could be sent to a specific location to pick someone up. In a more general model, Parasuraman, Sheridan and Wickens 15 present ten stages of such a transition.
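A schematic rendering of such a transition could look like the following sketch. It loosely follows the widely used driving-automation levels rather than the exact ten-stage model of Parasuraman et al.; the level names and the coarse granularity are illustrative assumptions:

```python
# Schematic levels of proactivity, from fully interactive to fully automated.
from enum import IntEnum

class ProactivityLevel(IntEnum):
    FULL_USER_CONTROL = 0   # driver does everything
    ASSISTED = 1            # assistive functionality supplements the driver
    SUPERVISED = 2          # system acts, user must be ready to take over
    CONDITIONAL_IFU = 3     # interaction-free phases in defined contexts
    FULL_IFU = 4            # driverless car can be sent to pick someone up

def is_ifu_phase(level: ProactivityLevel, context_is_covered: bool) -> bool:
    """IfU holds at the top level, and at the conditional level only in covered contexts."""
    if level == ProactivityLevel.FULL_IFU:
        return True
    return level == ProactivityLevel.CONDITIONAL_IFU and context_is_covered
```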
One counterpart to IfU is intervention user interfaces, as proposed by Schmidt and Herrmann 16 in the context of human-centered AI to keep the human in the loop. Schmidt requires that “… systems must make their state observable and provide ways for humans to see how to intervene and predict the outcome of their interventions” [17, p. 3]. Intervening interaction is a user activity that temporarily changes the behavior of an automated process. It can be considered an ad hoc change based on extraordinary, unplanned control by humans, which is effective only for a certain time slot or for a limited area of the automated processes (such as a certain parameter of an air conditioning system). Accordingly, intervention means that phases of IfU can be interrupted, either by stopping them for a while or by exceptional phases of fine-grained control. The possibility of intervention complements concepts such as explainability and trust calibration in the context of AI. Intervention should not only be possible in automated technical processes; it is also important for socio-technical workflows, where people must be able to veto AI-generated decisions instead of simply having to execute them. Intervention interfaces must provide robustness so that interventions can be easily started, completed and terminated, and so that their effect can be revised. Interventions can also be considered as input for the training of machine learning (ML) systems, leading to user-driven continuous improvement. 18
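The following sketch suggests what such an intervention could look like programmatically: a time-limited override of a single parameter that is logged as potential training input. All names are hypothetical, and the example is a sketch of the idea, not the interface proposed in the cited work:

```python
# Ad hoc intervention: effective only for a time slot, then automated behavior resumes.
import threading, time

class AutomatedProcess:
    """An automated process whose parameters can be temporarily overridden."""
    def __init__(self):
        self.params = {"target_temp": 21.0}
        self.intervention_log = []   # interventions can later serve as ML training input

    def intervene(self, param: str, value: float, duration_s: float) -> None:
        """Override one parameter for duration_s seconds, then revert automatically."""
        original = self.params[param]
        self.params[param] = value
        self.intervention_log.append((time.time(), param, original, value))
        threading.Timer(duration_s, self._revert, args=(param, original)).start()

    def _revert(self, param: str, original: float) -> None:
        self.params[param] = original   # the intervention's time slot has ended

ac = AutomatedProcess()
ac.intervene("target_temp", 18.0, duration_s=0.5)   # exceptional, temporary control
```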
As AI advances and improved intervenability allows users to mitigate a larger number of unsolicited effects, more phases of IfU will become possible. This is primarily a quantitative effect (see Section 5).
3 Interaction-free usage in the socio-technical context of collaboration and communication support
If we look at IT support for collaboration and communication between humans, it might seem questionable whether phases of interaction-free usage will be highly relevant in the future. A typical example is the usage of messenger apps: we can watch people spending a lot of time posting messages. We see people actively collaborating with each other by sending mails, providing slides, parts of software, drafts in all areas of design etc. However, many conversational tools and text editors already offer auto-completion of text. Why not provide drafts or even send messages that are produced by generative AI such as ChatGPT? One can imagine typical situations where one says goodbye to visitors and asks them to send a message once they have arrived home. Sending this type of message could eventually be taken over by AI as a result of implicit prompting based on the identification of context changes. Future users might only need to configure initially whether and in what style such a process of sending messages is started. Proposals for emails in the business context are already available.[1] We can imagine that a huge variety of tasks that contribute to collaboration with others might be handled by widely interaction-free processes, such as drafting slides, making plans or extracting short lists.
In the context of human-centered artificial intelligence (HCAI),[2] there are ongoing discussions about the role AI agents could play within teams in collaboration settings, such as human-AI teaming 20 or human-autonomy teaming. 21 The idea here is that AI can provide an autonomous agent that takes the role of a teammate, a collaboration partner or an assistant with whom one runs natural language dialogues to specify needs to be met. In this context, the question arises of what kind of tasks such an AI agent might take over, how it will be controlled, how trust calibration works etc. 22 – 24 By contrast, the case of AI being used mainly in the mode of IfU to take over a user’s collaborative or communicational tasks is slightly but decisively different. It is akin to using an agent that serves as someone’s cookie manager.[3] If an AI agent serves as a teammate or an assistant with whom one continuously talks, it might be easy to document what it has provided and what others – maybe human collaborators – have contributed. In the case of the cookie manager that works in the background, it is still the person using it who is the originator of certain decisions and who is responsible for them, similarly to cases where someone gives privacy consent or accepts ‘general terms and conditions’.
We call the constellation where someone lets an AI-driven, interaction-free process represent him- or herself within collaboration a ‘digital collaboration and conversation twin’ (digital CC-twin). We consider this a new quality of IfU that goes beyond the type of IfU already established as described in the previous section. Such a digital CC-twin will help users to stay in contact with numerous people. People will be challenged to decide how to realize their communicative contributions by choosing possibilities within the fluid transition between two poles: enjoying the experience of self-authored communicational exchange with others, or increasing the number of people and the extent of communicative actions within their social interaction by employing digital CC-twins. When people ask ChatGPT to draft their emails, this is an example of a possibility within this fluid transition. If the digital CC-twin is applied predominantly, the only remaining real experience would be watching the stream of message exchange containing one’s allegedly personal but AI-produced posts. One reason for such usage is that people do not want to lose options for maintaining possible collaboration partners by being inactive. The digital CC-twin corresponds to the tendency of “seeking proximity in the distance and distance in proximity”, 2 which is complemented by a trend we see within social networking constellations: a multiplicity of potential options for staying in contact with others is preferred to individual, concrete experiences. The question is to what extent such a CC-twin can be used without interaction. We can imagine using a messenger to keep in touch with someone by having it automatically produce messages such as congratulations, keeping others informed about events in one’s personal life and even generating simple communicative responses to greetings or announcements. The automated conversation could mirror the exchanges people have with their core social group. Only occasionally might humans intervene to add a non-routine exchange to a thread of conversation.
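A minimal sketch of how a digital CC-twin could operate under these assumptions follows. The function draft_message() stands in for a call to a generative model; the event names, the routine/non-routine split and all other names are hypothetical:

```python
# A context change acts as an implicit prompt; routine messages are sent
# interaction-free, non-routine ones are deferred to the user.
ROUTINE_EVENTS = {"arrived_home", "birthday", "anniversary"}

def draft_message(event: str, recipient: str) -> str:
    """Stand-in for a generative-model call that drafts a message."""
    return f"Hi {recipient}, just letting you know: {event.replace('_', ' ')}."

def cc_twin_handle(event: str, recipient: str, send, ask_user) -> None:
    message = draft_message(event, recipient)
    if event in ROUTINE_EVENTS:
        send(recipient, message)            # interaction-free phase
    else:
        ask_user(recipient, message)        # human confirms or edits the draft

cc_twin_handle("arrived_home", "Ada",
               send=lambda r, m: print("sent:", m),
               ask_user=lambda r, m: print("please review:", m))
```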
Another, more business-oriented example can be expected for documentation tasks in the healthcare sector. 25 AI agents can observe the processing of care tasks in order to derive implicit prompts for documenting the progress of care procedures. This is based on AI solutions that can both evaluate contextual cues and generate textual entries. It would enable the automatic completion of forms (when did what happen) as well as informal notes that help caregivers to improve or justify their decisions. 26 We assume that documentation work is generally a communicative task, as the documents can potentially be accessed by others. It makes a subtle difference whether such documentation tasks are delegated to AI as a team member or taken over by a digital CC-twin that acts as a representative of the person responsible for the documentation work.
Employing a digital CC-twin is like having somebody whom I allow and trust to stand in for me, and who can become active on my behalf to interact collaboratively with other humans. Here it becomes obvious that trust calibration is relevant. 24 Explainability 27 is a crucial addition to trust calibration, as it helps to maintain trust even when the system behaves unexpectedly – as long as it is able to explain its behavior. Whether IfU becomes real depends on the willingness of people to leave their communicative activities to an IfU-AI agent and on the trustworthiness of this agent. We assume that digital CC-twins will be used as long as the balance between effort and risk on the one hand and benefit on the other is positive. 9 If people cannot trust that the benefit will outweigh the risk, trust will decrease, they will feel a constant need for control, and they will switch to a mode where the AI agent only makes suggestions that need to be confirmed or corrected by a proactive user before a contribution is submitted. This could be less efficient than writing the post oneself. Overtrust will also be a problem, as an AI agent may act in a way that the human behind it may be embarrassed about afterwards, or that even has legal consequences such as claims for damages. It is therefore desirable – though possibly not envisaged by future providers – for a digital CC-twin to prompt the user from time to time to check the reliability of the system and readjust it if necessary. Consequently, trust calibration has to be supported, 24 and users must be able to subordinate digital twins to their own communication and cooperation habits, for example by being able to intervene in, configure and test them.
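How such a mode switch between autonomous operation and suggest-and-confirm could be driven by a calibrated trust estimate is sketched below. The scoring rule and the thresholds are illustrative assumptions, not a validated calibration method:

```python
# When calibrated trust falls below a threshold, the twin switches from
# autonomous sending to suggest-and-confirm; it also periodically prompts
# the user to re-check its reliability.
class TrustCalibratedTwin:
    def __init__(self, trust: float = 0.9, threshold: float = 0.7,
                 check_every: int = 50):
        self.trust = trust
        self.threshold = threshold
        self.check_every = check_every
        self.actions_since_check = 0

    def record_feedback(self, accepted: bool) -> None:
        # Simple exponential update; a real system would calibrate against outcomes.
        self.trust = 0.9 * self.trust + 0.1 * (1.0 if accepted else 0.0)

    def mode(self) -> str:
        self.actions_since_check += 1
        if self.actions_since_check >= self.check_every:
            self.actions_since_check = 0
            return "prompt_reliability_check"   # user is asked to test and readjust
        return "autonomous" if self.trust >= self.threshold else "suggest_and_confirm"
```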
4 The role of human actors: relations of symmetry and asymmetry
To employ IfU, users first must be able to configure an AI agent, to check its compatibility with their own preferences in advance through explorative usage, and to supervise it, especially if they allow themselves to be represented by the AI in exchanges with other humans. Obviously, phases of IfU need to be accompanied from time to time by interaction modes that require proactive contributions from the user. The subtle balance between proactive phases and IfU must be maintained and dynamically adapted to the contextual conditions of use, for example by considering the different levels of automation mentioned above. 15 Continuous feedback is required so that the user can understand what is being automated and with what impact. Providing this feedback without overtaxing the user’s attention is a challenge that must be overcome technically. Furthermore, it is also important that the industry is willing to proactively comply with legal and ethical regulations. 28 So far, it is difficult to imagine what suitable feedback mechanisms would look like. One solution for the digital CC-twin could be to view feedback as a collaborative task where trusted friends issue warnings when the twin exhibits undesirable behavior.
We consider the question of how much oversight and control is exercised while employing IfU as a question of whether the relationship between humans and AI agents will be more symmetric or asymmetric. 29 There are some examples that suggest symmetry. If the user can prompt AI to do something, AI could equally prompt the user, for example by asking for additional data or for a check of the correctness of entered data. It is remarkable that conversational AI systems do not currently prompt people to answer questions that would specify the users’ needs. Thus, user-driven prompt engineering is currently considered the dominant paradigm for using generative AI. 30 Presumably, this will not last. Once current dialogs with generative AI have built up a sufficient base of training material, these systems will be able to ask questions that guide users to describe their needs. A further example of symmetry would be if not only humans but also AI could intervene in how the other side handles a task, e.g. by providing critique. 31 Assistance, too, could be a reciprocal phenomenon: usually AI systems assist human users, but Kamar 32 proposes a type of hybrid system in which AI can ask humans for assistance. In the current discussion on HCAI, explainability is demanded as an essential characteristic of AI. Future developments will also establish symmetry if humans can be asked by AI to explain their behavior as a way for it to improve itself, for example after an intervention. These are tendencies that will establish symmetry between the human and AI sides, i.e. that will enable interactions between humans and AI where both carry out the same types of actions.
There are also examples of asymmetry, where certain activities are reserved exclusively for either humans or AI. For instance, testing by end-users, particularly after re-configuration has taken place, will be a case of asymmetry. Interventions could take place just for the purpose of analyzing how the system works and reacts. This is a form of testing, in the sense of asking what could happen if a parameter is temporarily changed. Herrmann and Pfeiffer 33 discuss this in the case of AI-based predictive maintenance. This kind of testing might – or at least should – be reserved for humans. Answering what-if questions may be difficult for human users, and it would be perceived as inappropriate if AI were to ask people trick questions to see how they react. Furthermore, we can imagine humans excluding an AI agent from a team, but it is hard to imagine this happening in reverse. Another example is persuasion: we can assume that AI systems are trained to persuade or nudge humans to do certain things, e.g. to drink enough water or to take a break after cognitively demanding tasks. But can we also imagine having to persuade an AI system to perform certain tasks? Furthermore, we will configure the reactions of a generative AI, especially a digital CC-twin, but will we allow for the same configuration options in reverse? It is more likely that we will allow the AI system to train humans, but we may not want to understand this as a process in which humans are configured by the AI.
Obviously, it will be important to maintain a final, decisive degree of asymmetry in the relationship between humans and AI to allow humans to exercise oversight. This is especially true for supervising, (re-)configuration and subsequent testing, at least to see whether the system meets relevant expectations. In the near future, test environments for checking the reliability of IfU in advance will be an important task for interface design. In addition, the formulation of prompts could increasingly be supplemented or replaced by configuration. Instead of requiring users to provide explicit prompts, users will be supported by AI agents that can use context changes as implicit prompts, enabling interaction-free delivery of content.
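As a sketch of what replacing explicit prompting by configuration might look like, the following example fills a profile once through an AI-driven question-and-answer dialog, after which context changes can act as implicit prompts. The dialog questions and all names are hypothetical:

```python
# The AI asks; the user answers once, instead of prompting for every task.
QUESTIONS = {
    "tone": "Should messages sound formal or casual?",
    "triggers": "Which context changes should the twin react to?",
}

def configure_by_dialog(answer) -> dict:
    """Build a configuration profile from an AI-driven question-answer dialog."""
    return {key: answer(question) for key, question in QUESTIONS.items()}

# In a real system `answer` would collect live user input, e.g. answer=input.
canned = {"Should messages sound formal or casual?": "casual",
          "Which context changes should the twin react to?": "arrived_home, birthday"}
profile = configure_by_dialog(canned.get)
print(profile)   # {'tone': 'casual', 'triggers': 'arrived_home, birthday'}
```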
What kind of task sharing will arise in dealing with computers under the conditions described above? In our opinion, two trends will emerge. On the one hand, tedious routine tasks will be performed by IfU. For example, the completion of documentation tasks could be delegated to an agent that represents the person who has to document his or her work. Selecting, reviewing and orienting this representative through configuration and testing then remains the responsibility of the human actor. This becomes clear, for example, in the case of writing medical assessments, for which the physician bears full responsibility. Automated creation of these letters is decisively limited by the individual qualities of each patient case. However, the aim is also to ensure that errors are largely avoided. The important task remains to continuously optimize the phases of IfU for the writing of medical assessments so as to maintain the personal perspective of the doctor and to take all individual patient circumstances into consideration. This optimization can be achieved through interventions on the one hand and question-and-answer dialogues triggered by AI on the other.
A second possible tendency will be that people will choose a niche in which they want to be better than AI, for example because this is the area in which their performance and skills can best develop. This may be the case in the professional or private sphere. It could be design tasks, for example, where individual style and creativity are important, thus increasing the chance of attracting customers. Interior design would be a suitable example of this. For the task handling process, it could be a typical procedure to first have an AI-based solution developed[4] and then see how one can significantly go beyond the level of the AI-generated solution with regard to individuality, creativity, regional characteristics, etc. In such a scenario, AI will be employed as a sparring partner for continuously developing one’s own skills. This will be the type of use that probably still offers the most opportunities to create positive experiences for the user. It remains to be seen whether gaming will represent such a niche or whether forms of IfU will also predominate where self-configured AI actors are sent into the field, essentially only to be monitored in their performance.
An ethical perspective justifies the demand that we ensure that people can distinguish between what has been generated by AI and what originates from humans. 28 From the perspective outlined here, however, it should be noted that trends such as the digital CC-twin would make this more difficult. In approaches where AI takes on the role of a team member, it is possible to keep a record of what has been done by which member. However, when configuring a representative that acts closely aligned with the user’s preferences and abilities, the transitions are fluid when it comes to the question of what comes from humans and what comes from AI.
5 Summary
Figure 1 summarizes our key hypotheses, presenting the strong hypothesis that interaction-free usage will become the quantitatively dominant mode of use. A weaker hypothesis would state that it is one important mode of usage among others of similar weight. For the strong hypothesis it is essential that context changes are analyzed by AI in order to control autonomous processes in accordance with users’ needs. For many phases of IT usage, users will be in a passive mode, in which they listen, watch or read text. Active behavior will be more like confirming a proposal or selecting between options. The strongly outlined ellipses indicate possible future developments. Contributions of AI systems will not be triggered primarily through prompting. Prompting as we currently know it will increasingly be replaced by AI-driven question-answer dialogs. Dialogs currently taking place with generative AI are creating the training material for this feature. Fine-grained direct manipulation will only take place in niches. In the context of AI and autonomous processes, direct manipulation will be implemented through the design of intervention interfaces that can be used for exception handling, testing and exploring system behavior. Interfaces that can be used to test the reliability of IfU after configuration will be of particular importance. From a qualitative point of view, IfU will also be implemented in the area of communication and cooperation support, in the form of digital CC-twins. They are activated implicitly by users as their representatives and subsequently analyze context changes to recognize when and in what form they should communicate or take on tasks as input for others. Employing digital CC-twins will not happen suddenly. It will develop gradually through stages of modes of use in which communicative contributions are proposed by AI and then adapted and confirmed by users, giving users the opportunity to find out whether and where they will let AI become a representative for conversational and collaborative tasks.
About the author
Thomas Herrmann is a senior professor of Information and Technology Management at the Institute of Applied Work Science (IAW), University of Bochum, Germany, where he has worked since 2004, and he is a fellow of the Faculty for Computer Science. He has developed, evaluated and refined methods that support the analysis and design of socio-technical processes. His research has focused on socio-technical systems in areas such as human-centered AI, smart factories, healthcare, creativity support, computer-supported collaboration, and knowledge management.
Research ethics: Not applicable.
Author contributions: The author has accepted responsibility for the entire content of this manuscript and approved its submission.
Competing interests: The author states no conflict of interest.
Research funding: None declared.
Data availability: Not applicable.
References
1. Zuboff, S. Big Other: Surveillance Capitalism and the Prospects of an Information Civilization. J. Inf. Technol. 2015, 30 (1), 75–89; https://doi.org/10.1057/jit.2015.5.
2. Forschungsgruppe Telekommunikation, Ed. Telefon und Gesellschaft, Bd. 1–3: Beiträge zu einer Soziologie der Telefonkommunikation; Spiess: Berlin, 1989.
3. Fielding, G.; Hartley, P. The Telephone: A Neglected Medium. In Studies in Communication; Cashdan, A.; Jordin, M., Eds.; Basil Blackwell: New York, 1987; pp. 110–124.
4. Wahlster, W. Verbmobil. In Grundlagen und Anwendungen der Künstlichen Intelligenz; Herzog, O.; Christaller, T.; Schütt, D., Eds.; Springer: Berlin, Heidelberg, 1993; pp. 393–402; https://doi.org/10.1007/978-3-642-78545-0_37.
5. Hoeppner, W.; Busemann, S.; Christaller, T.; Marburger, H.; Morik, K.; Nebel, B. Dialoging HAM-ANS: Commented Terminal Sessions with a Natural Language System. Memo ANS-23, Research Unit for Information Science and Artificial Intelligence; Univ. of Hamburg, 1984.
6. Herrmann, T.; Lentzsch, C.; Degeling, M. Intervention and EUD: A Combination for Appropriating Automated Processes. In End-User Development; Malizia, A.; Valtolina, S.; Morch, A.; Serrano, A.; Stratton, A., Eds.; Springer International Publishing: Cham, 2019; pp. 67–82; https://doi.org/10.1007/978-3-030-24781-2_5.
7. Norman, D. A. The Invisible Computer, 1999.
8. Hargreaves, T.; Wilson, C. Control of Smart Home Technologies, 2017; pp. 91–105; https://doi.org/10.1007/978-3-319-68018-7_6.
9. Herrmann, T.; Jahnke, I.; Nolte, A. A Problem-Based Approach to the Advancement of Heuristics for Socio-Technical Evaluation. Behav. Inf. Technol. 2021, 41, 1–23; https://doi.org/10.1080/0144929X.2021.1972157.
10. Schmidt, A. Implicit Human Computer Interaction through Context. Pers. Technol. 2000, 4 (2–3), 191–199; https://doi.org/10.1007/BF01324126.
11. Pandian, A. P. Artificial Intelligence Application in Smart Warehousing Environment for Automated Logistics. JAICN 2019, 2019 (2), 63–72; https://doi.org/10.36548/jaicn.2019.2.002.
12. Gross, P. Die Multioptionsgesellschaft, 1. Aufl.; Edition Suhrkamp, no. 1917; Suhrkamp: Frankfurt am Main, 1994.
13. Rosa, H. Social Acceleration: A New Theory of Modernity; Columbia University Press, 2013; https://doi.org/10.7312/rosa14834.
14. Kukkala, V. K.; Tunnell, J.; Pasricha, S.; Bradley, T. Advanced Driver-Assistance Systems: A Path toward Autonomous Vehicles. IEEE Consumer Electron. Mag. 2018, 7 (5), 18–25; https://doi.org/10.1109/MCE.2018.2828440.
15. Parasuraman, R.; Sheridan, T. B.; Wickens, C. D. A Model for Types and Levels of Human Interaction with Automation. IEEE Trans. Syst. Man Cybern. 2000, 30 (3), 286–297; https://doi.org/10.1109/3468.844354.
16. Schmidt, A.; Herrmann, T. Intervention User Interfaces: A New Interaction Paradigm for Automated Systems. Interactions 2017, 24 (5), 40–45; https://doi.org/10.1145/3121357.
17. Schmidt, A. Interactive Human Centered Artificial Intelligence: A Definition and Research Challenges. In Proceedings of the International Conference on Advanced Visual Interfaces; ACM: Salerno, Italy, 2020; pp. 1–4; https://doi.org/10.1145/3399715.3400873.
18. Amershi, S.; Cakmak, M.; Knox, W. B.; Kulesza, T. Power to the People: The Role of Humans in Interactive Machine Learning. AI Magazine 2014, 35 (4), 105–120; https://doi.org/10.1609/aimag.v35i4.2513.
19. Dellermann, D.; Calma, A.; Lipusch, N.; Weber, T.; Weigel, S.; Ebel, P. The Future of Human-AI Collaboration: A Taxonomy of Design Knowledge for Hybrid Intelligence Systems. In HICSS, 2019; https://doi.org/10.24251/HICSS.2019.034.
20. Dubey, A.; Abhinav, K.; Jain, S.; Arora, V.; Puttaveerana, A. HACO: A Framework for Developing Human-AI Teaming. In Proceedings of the 13th Innovations in Software Engineering Conference (formerly India Software Engineering Conference); ACM: Jabalpur, India, 2020; pp. 1–9; https://doi.org/10.1145/3385032.3385044.
21. O’Neill, T.; McNeese, N.; Barron, A.; Schelble, B. Human–Autonomy Teaming: A Review and Analysis of the Empirical Literature. Hum. Factors 2022, 64 (5), 904–938; https://doi.org/10.1177/0018720820960865.
22. Dwivedi, Y. K.; Kshetri, N.; Hughes, L.; Slade, E. L.; Jeyaraj, A.; Kar, A. K.; Baabdullah, A. M.; Koohang, A.; Raghavan, V.; Ahuja, M.; Albanna, H.; Albashrawi, M. A.; Al-Busaidi, A. S.; Balakrishnan, J.; Barlette, Y.; Basu, S.; Bose, I.; Brooks, L.; Buhalis, D.; Carter, L.; Chowdhury, S.; Crick, T.; Cunningham, S. W.; Davies, G. H.; Davison, R. M.; Dé, R.; Dennehy, D.; Duan, Y.; Dubey, R.; Dwivedi, R.; Edwards, J. S.; Flavián, C.; Gauld, R.; Grover, V.; Hu, M.-C.; Janssen, M.; Jones, P.; Junglas, I.; Khorana, S.; Kraus, S.; Larsen, K. R.; Latreille, P.; Laumer, S.; Malik, F. T.; Mardani, A.; Mariani, M.; Mithas, S.; Mogaji, E.; Nord, J. H.; O’Connor, S.; Okumus, F.; Pagani, M.; Pandey, N.; Papagiannidis, S.; Pappas, I. O.; Pathak, N.; Pries-Heje, J.; Raman, R.; Rana, N. P.; Rehm, S. V.; Ribeiro-Navarrete, S.; Richter, A.; Rowe, F.; Sarker, S.; Stahl, B. C.; Tiwari, M. K.; Van Der Aalst, W.; Venkatesh, V.; Viglia, G.; Wade, M.; Walton, P.; Wirtz, J.; Wright, R. Opinion Paper: ‘So what if ChatGPT Wrote it?’ Multidisciplinary Perspectives on Opportunities, Challenges and Implications of Generative Conversational AI for Research, Practice and Policy. Int. J. Inf. Manage. 2023, 71, 102642; https://doi.org/10.1016/j.ijinfomgt.2023.102642.
23. Herrmann, T. Calibrating the Coordination between Humans and AI by Analyzing the Socio-Technical Variety of Task Sharing. In HCI International 2023 – Late Breaking Posters; Communications in Computer and Information Science, Vol. 1958; Stephanidis, C.; Antona, M.; Ntoa, S.; Salvendy, G., Eds.; Springer Nature Switzerland: Cham, 2024; pp. 25–33; https://doi.org/10.1007/978-3-031-49215-0_4.
24. Okamura, K.; Yamada, S. Adaptive Trust Calibration for Human-AI Collaboration. PLoS One 2020, 15 (2), e0229132; https://doi.org/10.1371/journal.pone.0229132.
25. Ackermann, M. S.; Goggins, S. P.; Herrmann, T.; Prilla, M.; Stary, C. Designing Healthcare that Works – A Socio-Technical Approach; Academic Press: United Kingdom, United States, 2018; https://doi.org/10.1016/B978-0-12-812583-0.00011-0.
26. Jelonek, M.; Herrmann, T.; Ksoll, M.; Altmann, N. Ethnographically Derived Socio-Technical Analysis for Information System Support in Intensive Home Care. Complex Systems Informatics and Modeling Quarterly 2020, no. 22; https://doi.org/10.7250/csimq.2020-22.01.
27. Meske, C.; Bunde, E.; Schneider, J.; Gersch, M. Explainable Artificial Intelligence: Objectives, Stakeholders, and Future Research Opportunities. Inf. Syst. Manage. 2022, 39 (1), 53–63; https://doi.org/10.1080/10580530.2020.1849465.
28. European Commission, Directorate-General for Communications Networks, Content and Technology; High-Level Expert Group on Artificial Intelligence. Ethics Guidelines for Trustworthy AI, 2019. Available: https://data.europa.eu/doi/10.2759/346720 (accessed May 23, 2021).
29. Suchman, L. Agencies in Technology Design: Feminist Reconfigurations. In Machine Ethics and Robot Ethics, 1st ed.; Wallach, W.; Asaro, P., Eds.; Routledge, 2020; pp. 361–375; https://doi.org/10.4324/9781003074991-32.
30. Chen, B.; Zhang, Z.; Langrené, N.; Zhu, S. Unleashing the Potential of Prompt Engineering in Large Language Models: A Comprehensive Review, 2023. Available: http://arxiv.org/abs/2310.14735 (accessed Jan 16, 2024).
31. Fischer, G.; Nakakoji, K.; Ostwald, J.; Stahl, G.; Sumner, T. Embedding Critics in Design Environments. In Maybury, M. T.; Wahlster, W., Eds.; Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA, 1998; pp. 537–561. Available: www.sociotech-lit.de/FNOS98-Eci.pdf.
32. Kamar, E. Directions in Hybrid Intelligence: Complementing AI Systems with Human Intelligence. In IJCAI, 2016; pp. 4070–4073.
33. Herrmann, T.; Pfeiffer, S. Keeping the Organization in the Loop: A Socio-Technical Extension of Human-Centered Artificial Intelligence. AI & Soc. 2023, 38, 1523–1542; https://doi.org/10.1007/s00146-022-01391-5.
© 2024 the author(s), published by De Gruyter, Berlin/Boston
This work is licensed under the Creative Commons Attribution 4.0 International License.