
Chatbots to Support Young Adults’ Mental Health: An Exploratory Study of Acceptability

Published: 20 July 2022

Abstract

Despite the prevalence of mental health conditions, stigma, lack of awareness, and limited resources impede access to care, creating a need to improve mental health support. The recent surge in scientific and commercial interest in conversational agents and their potential to improve diagnosis and treatment points to a potentially fruitful direction in this respect, particularly for young adults who widely use such systems in other contexts. Yet, there is little research that considers the acceptability of conversational agents in mental health. This study, therefore, presents three research activities that explore whether conversational agents and, in particular, chatbots can be an acceptable solution in mental healthcare for young adults. First, a survey of young adults (in a university setting) provides an understanding of the landscape of mental health in this age group and of their views around mental health technology, including chatbots. Second, a literature review synthesises current evidence relating to the acceptability of mental health conversational agents and points to future research priorities. Third, interviews with counsellors who work with young adults, supported by a chatbot prototype and user-centred design techniques, reveal the perceived benefits and potential roles of mental health chatbots from the perspective of mental health professionals, while suggesting preconditions for the acceptability of the technology. Taken together, these research activities: provide evidence that chatbots are an acceptable solution for offering mental health support to young adults; identify specific challenges relating to both the technology and environment; and argue for the application of user-centred approaches during development of mental health chatbots and more systematic and rigorous evaluations of the resulting solutions.

1 Introduction

Mental health conditions are a growing global public health concern, being among the leading causes of disability [World Health Organization 2018]. Young people are particularly vulnerable, with suicide being the second most common cause of death globally and first in the UK among this age group [World Health Organization 2013; Office for National Statistics 2018], while the majority of mental health problems are established by the age of 25 [Kessler et al. 2005]. Moreover, suicide rates and the number of young adults experiencing mental health issues have risen in the past decade [Randall & Bewick 2016; Office for National Statistics 2018]. The prevalence of mental health conditions in young adults, and their upward trend, has been attributed to increasing financial and life pressures and uncertainty [Education Policy Institute 2018] and has been exacerbated by the COVID-19 pandemic [Mind Charity 2020]. Several effective mental health interventions exist, and national health services, further and higher educational institutions, and large employers often provide access to counselling, diagnosis and treatment. However, the majority of young adults with mental health conditions do not seek help [Macaskill 2013; Gorczynski et al. 2017], with several barriers having been found to impede their access to mental health care, including lack of awareness, stigma, and limited availability [Eisenberg, Golberstein, and Gollust 2007; Gulliver, Griffiths, and Christensen 2010; Czyz et al. 2013; Levin et al. 2016; D'Alfonso et al. 2017; House of Commons Committee of Public Accounts 2019; Jungmann et al. 2019].
Health organisations, researchers, and professionals have recognised the potential of technology to support and enhance mental health care [Foley & Woollard 2019], particularly of young people [D'Alfonso et al. 2017]. Mental health technologies are either designed as standalone interventions or as tools to complement the services or treatment provided by professionals. Such technologies include online resources, programmes and communities, and mobile phone applications (apps). Of these, websites provide an effective and inexpensive way to deliver information and advice [Levin et al. 2016; Toivonen et al. 2017]. There are also web-based services that offer access to peer support through online communities, resources for self-diagnosis and management, online courses, as well as to therapy sessions with counsellors, which can take place via text messaging, audio, or video. Mental health apps are rapidly expanding in number, functions offered, and popularity. Their popularity is largely driven by the prominence of smartphones in daily life, meaning that these apps may be deployed on a platform that is inherently more personalised, more multimedia-driven, and, most significantly, remains with the individual at all times. Such characteristics are argued to facilitate engagement, motivation, and adherence [Lui et al. 2017].
In recent years, there has also been renewed interest in conversational agents—for example, chatbots and digital/virtual assistants. Conversational agents refer to technology that enables user interactions by means of natural language; mainstream examples include Apple's Siri, Facebook's M, Google Assistant, and Amazon's Alexa. Chatbots, in particular, commonly support text-based conversation or clickable responses and are designed to look like instant messaging applications. Chatbots may be deployed on familiar platforms, such as Facebook and Skype, that people use for social communication with friends and family, whose interfaces are well understood, and which are particularly popular with young adults [Klopfenstein et al. 2017]. Driven by advances in the underlying technologies, conversational agents hold the promise of enabling natural, “human-like” interactions with the user [McTear et al. 2016] and have been successful in the domains of education and e-commerce. The question that naturally arises is whether conversational agents also have a place in mental health management. There is evidence that they may—a 2019 independent report on behalf of the UK Government singles out chatbots as a key technology poised to transform mental healthcare in the near future [Foley and Woollard 2019, part of “The Topol Review”1] and envisions chatbots as automated or semi-automated therapeutic and diagnostic tools.
However, numerous issues around the interaction between individuals with mental health difficulties and conversational agents in such contexts remain unknown and require exploration [de Barcelos Silva et al. 2020]; this is because the experience of users—and developers alike—with conversational agents is mostly derived from “task-oriented” interaction domains (e.g., booking a flight, ordering food, playing music, and controlling the heating), which are, across several dimensions, not comparable to the domain of mental health [Morris et al. 2018].
There is a growing number of mental health-oriented chatbots targeting a variety of mental health difficulties and with functions ranging from providing education and self-help techniques to offering diagnosis and counselling [Vaidyam et al. 2019]. Recent, preliminary research shows positive outcomes, particularly in terms of efficacy. For example, the Woebot chatbot reduced symptoms of stress and anxiety in two weeks [Fitzpatrick et al. 2017], while frequent users of the Wysa chatbot also reported lower levels of depression [Inkster et al. 2018]. Woebot and Wysa are examples of chatbots that use approved techniques, such as CBT, and mental health professionals are often part of their development or advisor teams. However, a 2018 systematic review of conversational agents in healthcare found that evaluations around user experience and perceptions, including acceptability, were scarce [Laranjo et al. 2018].
The use of chatbots in mental health is an emerging field of research, and they are an innovative and, arguably, disruptive technology. Unlike websites and mobile phone apps, conversational agents are a much less familiar technology to users, so their acceptability can only be speculated upon. Moreover, unlike conversational agents in other domains, in the domain of health, more serious concerns become pertinent; these relate to the performance of the chatbot, the safety of the user, and data security/privacy, all of which are anticipated to impact acceptability [Palanica et al. 2019; Nadarzynski et al. 2019]. Furthermore, the acceptability of AI-based technology, such as chatbots, can be undermined by lack of public trust and arguments that AI poses a threat to human employment [Aoki 2020]. It is, in fact, argued that acceptability is rarely considered when designing innovative technologies [Kim 2015]. Yet, acceptability in the domain of healthcare interventions is a necessary precondition for the effectiveness of the intervention [Sekhon et al. 2017] and, for this reason, it has been increasingly emphasised in the guidelines from health organisations (for example, the Medical Research Council in the UK). Acceptability considerations should encompass all stakeholders and user groups; the Topol report advocates the involvement of mental health patients and staff in the design process of chatbot applications to ensure that the technology is usable, accessible, and acceptable to them. This approach will mitigate the risk of the technology creating new barriers to care for patients and added burdens for staff [Foley and Woollard 2019; Topol Review 2019].
This article therefore aims to explore whether conversational agents, and, in particular, chatbots, present an acceptable solution to support young adults with mental health conditions. The article adopts the following definition of acceptability, developed by Sekhon, Cartwright and Francis [Sekhon et al. 2017, 2018]: “the extent to which people delivering or receiving a healthcare intervention consider it to be appropriate, based on anticipated or experienced cognitive and emotional responses to the intervention.” In particular, the study focuses on young adults, and counsellors in a university context, and their responses to a chatbot as a mental health intervention.
The study achieves its aim through three exploratory research activities (activities 1 and 3 were approved through the university ethics process):
(1)
Survey study with young adults to enable a better understanding of the “users” by exploring issues around their mental health and their experience with, and perceptions of, mental health technology, including chatbots.
(2)
A literature review to synthesise current empirical evidence relating to the acceptability of mental health chatbots.
(3)
Interviews with counsellors whose work is largely with young adults, based on their use of a chatbot prototype and user-centred design (UCD) methods, to produce insights into the acceptability of chatbots from the perspective of mental health professionals.

2 Survey Study with Young Adults

2.1 Methods

Previous research on acceptability of digital healthcare interventions advocates development guided by a profound understanding of the views and needs of the individuals that will use the intervention [Yardley et al. 2015; Apolinário-Hagen et al. 2017]. Therefore, the first research activity focuses on delivering a better understanding of the landscape of mental health in young adults and of their attitudes towards mental health technology, including chatbots.
The questionnaire used in the survey was administered to young adults at a UK university and consisted of two parts. Questions in the first part were derived from the 2017 NUS-USI Student Wellbeing Survey [NUS-USI 2017] and focused on mental health conditions that they have experienced, how and where they sought support, their perceptions of how helpful this support was, and access issues. While these questions were oriented towards students, they are valuable in the wider context of young adults, because the questions are broad and not university-related. The questions produced quantitative data. The questionnaire items in the survey's second part were designed to elicit quantitative and qualitative data on engagement with, and perceptions of, mental health technologies, including chatbots. Analysis of the qualitative statements was performed by two coders on the responses to two open questions, with high inter-coder agreement (>0.9). The complete questionnaire is included in Appendix A (Figure A1).
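As an indication of how the reported inter-coder agreement could be computed, the sketch below applies Cohen's kappa to two coders' category labels. It is illustrative only: the category labels are hypothetical examples drawn from the coding categories reported later in this section, and the article does not state which agreement statistic was used.
# Minimal, illustrative sketch of computing inter-coder agreement for the open-question coding.
# The labels below are hypothetical; the article reports only that agreement exceeded 0.9.
from sklearn.metrics import cohen_kappa_score

coder_1 = ["interactivity", "usability", "adaptive content", "professional support", "interactivity"]
coder_2 = ["interactivity", "usability", "adaptive content", "professional support", "adaptive content"]

kappa = cohen_kappa_score(coder_1, coder_2)
print(f"Cohen's kappa: {kappa:.2f}")  # ~0.74 for this toy data; the survey reports agreement above 0.9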

2.2 Results

One hundred and fifty respondents provided valid survey responses—83 (55.3%) were male and 67 (44.7%) were female. Ninety percent were aged between 18 and 24 (41% aged 18–20 and 49% aged 21–24), with the remaining 10% aged 25 years or older.

2.2.1 Mental Health of Young Adults.

A marginal majority of respondents (53%) reported experiences of “mental health worries in the past 12 months, regardless of whether they have been diagnosed or not.” Feelings of stress were experienced by 80% of respondents, followed by lack of energy or motivation (71%); feelings of being unhappy and down (55%); loss of interest in activities (52%); depression (50%); and anxiety (49%). Most importantly, only 11% of respondents reported no mental health worries in the previous 12 months (Figure 1). The results showed that mental health difficulties, or how these are manifested, are prevalent in the surveyed population. Many respondents, while having stated that they did not have mental health difficulties, also reported that they had experienced one or more of these feelings, suggesting that this group may not be able to recognise that these feelings are linked with mental health. This aligns with previous studies that argued that there is insufficient understanding around mental health conditions [Memon et al. 2016], while the most cited reason for not seeking help was the perception that it was not needed [Czyz et al. 2013].
Fig. 1. Feelings experienced by respondents in past year.
It was also concerning that almost half of the participants (49%) responded that they did not “know where to seek support, if they needed it”; lack of awareness of available support is consistently found to be a major barrier to mental health access [Gulliver et al. 2010].
When asked from whom they sought support for their mental health problems, the most frequent response was “a friend or a family member” (29%), followed by “doctor or GP” (21%) (Figure 2). In addition to service-level/structural barriers, young adults are found to hold social and self-stigmatising attitudes that inhibit them from seeking support from a specialist [Gulliver et al. 2010]. As such, they often prefer to either self-rely or to seek support from family and friends [Anderson et al. 2017; Rickwood and Braithwaite 1994].
Fig. 2. Mental health support used by respondents.
Respondents who had received any type of support for their mental health, professional support (i.e., from local/university counselling services, GPs, or health service professionals) or non-professional support (e.g., friends and family, websites, and apps), were asked about its effectiveness. Opinions appeared largely divided in terms of the effectiveness of professional support; for example, 46% found the counselling services unhelpful (25% very unhelpful, 21% unhelpful), while 46% found them helpful (13% very helpful, 33% helpful) (see Figure 3).
Fig. 3. Helpfulness of mental health support received by respondents.
The next question probed the timeliness of the professional support. Respondents who had sought professional support were asked how long they waited to receive it. Fifty-six percent waited over a week; 17% waited over a month; and 9% waited over three months (Figure 4). This is consistent with previous studies reporting that a primary barrier to access is the lack of availability of professional help, which results in long waiting times [House of Commons Committee of Public Accounts 2019].
Fig. 4. Waiting times to access professional support.

2.2.2 Mental Health Technology and Chatbots.

The analysis in this section used quantitative and qualitative methods to identify how young adults engage with mental health technology, such as websites and mobile phone apps.
As Figure 3 shows, none of the respondents who used mental health apps found the technology helpful, while just 19% found websites and online forums helpful. This result may paint a negative picture around the potential of technology to support mental health care and resonates with research reporting poor perceptions of technology [Musiat et al. 2014; Apolinário-Hagen et al. 2017].
Thirty respondents had used an app or website designed to support their mental health; 56% of them had used an app, while 44% had used a website. The most commonly cited app was Headspace (20%), an app that provides meditation exercises for daily use. The full list of apps and websites mentioned by the respondents can be found in Appendix F.
However, the majority of respondents only engaged with the app or website once or twice a month (40%) or twice a year (33%) (Figure 5). This result echoes previous findings that respondents have not fully adopted current mental health technologies and is consistent with research that indicated that e-Health—in general—is characterised by a lack of long-term user engagement [Druss and Dimitropoulos 2013; Greenhalgh et al. 2017; Torous et al. 2018].
Fig. 5. Frequency of usage of mental health technology.
The respondents who had used mental health apps and websites were asked to name the functionality that they perceived as being most useful. Thirty short responses were collected and categorised into four groups of features. The categories of features, frequency, and examples of responses are provided in Table 1. The most cited useful features included learning about self-help therapeutic techniques, such as meditation, mindfulness, cognitive behavioural therapy (CBT), and breathing; receiving support in the app or information about available support, when they needed it; and accessing psychoeducational content, which enhanced respondents’ understanding of mental health conditions.
Table 1. Useful Features of Mental Health Technology
Category | Frequency | Examples of Respondents’ Statements
Self-Help Therapeutic Techniques | 11 | “meditation techniques”; “breathing techniques”; “I learned about self-help techniques such as exercising regularly, meditating, being mindfulness etc. that I could use to improve my mind, feelings and become more positive.”; “I found the self-help CBT useful”
Support Availability | 9 | “the app provided support availability”; “provided immediate access to support”; “quick access to support”; “the application had a lot of support functions”
Psychoeducational Content | 8 | “explanation on experiencing mental health worries was useful”; “information about my mental health concerns”
Search Functions | 2 | “search functions were useful”
These 30 participants were then asked to provide a short response to the question “what changes would improve the website or app you have used.” The responses fell neatly into four categories: interactivity; adaptive content; professional support; and usability. Some responses spanned more than one theme; for example, “more interactivity and guidance.” The categories and the associated frequencies and example statements are shown in Table 2.
Table 2. Proposed Improvements in Current Mental Health Technology
Category | Frequency | Examples of Respondents’ Statements
Interactivity | 11 | “more interactivity”; “interactive information”; “more interactive functions not just meditation techniques”
Adaptive Content | 9 | “guided meditation”; “more guided information”; “directed techniques to help with my mental health”
Professional Support | 8 | “opportunity to talk with a counsellor”; “human-computer and counsellor interaction”; “provide contact information and directions how to get that support”; “directions to services”
Usability | 6 | “less complicated to use”
The analysis indicated that respondents thought that technology should support more interactivity and “human”-like interactions. For example, one participant stated that “if the websites could provide a more interactive, one-to-one platform where I could communicate regularly, receive individual attention, like a friend/therapist, that would be great!,” while another suggested a hybrid, complementary approach of interactions with the “computer and counsellor.” As the first statement cited above suggests, the aspect of interactivity closely relates to adaptive or personalised content—participants expressed the need for applications that guide them through the content and adapt to them and their individual mental health needs, in terms of psychoeducation, information and support, and therapeutic techniques. For example, participants indicated that the apps should “provide directed techniques to help with [their] mental health” and “guided meditation features.” At the same time, better integration with professional services appeared important to participants who expressed the need to find and access “human” support through the application, but, again, with a degree of personalisation, at least in terms of location. For example, one participant stated that they would have liked “guided information to where nearby mental health services are.” The analysis also suggested that a number of respondents would like the website or app to be “easier to use,” echoing previous research that highlighted usability issues in mental health technologies.
Participants who had not engaged with a mental health app or website (N = 120) were asked if they would ever consider using one. A large majority (78%) responded that they would not consider it. This, as has been previously suggested, reflects poor perceptions and low anticipated acceptability of mental health technology.
The final survey questions focused on gauging participants’ familiarity with, and views towards, conversational agents. The analysis revealed that most participants (75%) had engaged with voice-activated digital assistants (such as Siri, Cortana, Google Assistant, or Alexa) and almost half (47%) had also interacted with text-based chatbots. These results suggest that this age group are likely to be familiar with conversational agents. According to “technology acceptance models” from information systems research, such as the TAM and UTAUT [Venkatesh et al. 2003], experience and familiarity predict the acceptability or adoption of a new technology. Specific to the domain of the present study, a scoping review by Apolinário-Hagen et al. [2017] concluded that these factors also inform the acceptability of mental health technologies.
Finally, when asked how comfortable they would feel talking with a chatbot about their mental health, 37% said that they would feel comfortable, 39% said that they would feel neither comfortable nor uncomfortable, while 24% responded that they would feel uncomfortable. This finding could be viewed in conjunction with a number of studies that have suggested that people feel comfortable disclosing sensitive or mental health information to a conversational agent [Ho et al. 2018; Lucas et al. 2014; Pickard et al. 2016; Yokotani et al. 2018]. In a recent study, young adults who used a mental health chatbot stated that the chatbot offered them a safe and anonymous space to talk about their mental health [Bae Brandtzæg et al. 2021].
To summarise this phase of the study, the first part of the survey confirmed that mental health is a major issue, with many young adults possibly unable to recognise signs of poor mental health. Lack of awareness of support services was found, and long waiting times were reported, which may explain issues with the perceived effectiveness of the support. The survey's second part provided insights into participants’ perceptions, actual use of, and intention to use mental health technology. The reported poor adoption of, and satisfaction with, current solutions raises serious issues relevant to the acceptability of the technology. Users were, however, able to identify useful features associated with the technology, such as learning self-help skills and being able to access support and information quickly. In addition, the analysis revealed essential or desirable characteristics that could increase a solution's acceptability, including interactivity, adaptive/personalised content, improved links to professional services, and usability. Finally, the results indicated that young adults are familiar with conversational agents and would generally feel comfortable with, or were neutral about, interacting with mental health chatbots. Familiarity and intention to use have been found to determine the acceptability and actual use of technology in several domains, including mental health [Apolinário-Hagen et al. 2017]. As such, these findings suggest that chatbots have the potential to be an acceptable mental health technology for young adults.

3 Literature Review On Acceptability of Mental Health Conversational Agents

3.1 Methods

Having identified acceptability as an important issue in relation to mental health technologies, and having found initial evidence of the potential of chatbots in this context, the study next turned to a review of the literature in the area. Before undertaking the review, recent and relevant systematic reviews were consulted to gauge current issues and identify open questions. Seven systematic reviews in fields relevant to mental health and conversational agents were identified and examined:
a 2017 review [Hoermann et al. 2017] of text-based mental health interventions that also investigated chatbot-based interventions;
a 2017 review [Provoost et al. 2017] that focused on Embodied Conversational Agents (ECAs) for mental disorders;
a 2018 review [Laranjo et al. 2018] that looked at conversational agents (chatbots and ECAs) in health care, including mental health;
a 2019 review that undertook an analysis of research related to chatbots and ECAs in mental health [Vaidyam et al. 2019];
a 2019 meta-analysis that focused on research, published up to 2017, in “human agents,” also covering chatbots and ECAs, in health care [Ma et al. 2019];
a 2019 systematic review that, similar to Laranjo et al. [2018], looked at conversational agents in health care [Montenegro et al. 2019];
a 2020 systematic review of chatbots and ECAs in mental health [Vaidyam et al. 2020].
All seven systematic reviews drew the following two-fold conclusion: there is great promise in the technology; however, evidence about its suitability is limited. Specifically, Hoermann et al. [2017] argue that several questions remain open, including whether a chatbot is more appropriate for particular mental health problems, for short- or long-term interventions, or as an adjunct or screening tool to streamline services. The scarcity of evidence results from conversational agents for mental health being an emerging area, with most applications in the early stages of development and evaluation [Laranjo et al. 2018; Provoost et al. 2017]. Moreover, evaluation typically focuses on effectiveness, efficacy, and feasibility (e.g., lower depression levels). In significantly fewer cases, studies may also report results around user experience, trust, expectations, attitudes, satisfaction, usability, engagement, and perceptions. These constructs relate to acceptability, so they constituted a good starting point for the review.
The review aimed to synthesise current evidence around the acceptability of mental health2 conversational agents, so only studies published in peer-reviewed journals between 1 January, 2014 and April 2020 were considered. Non-English studies, technical reports, student theses, and studies published in conference proceedings and books were excluded. The search terms used were {(“chatbot” OR “conversational” OR “agent” OR “dialog(ue) system”) AND “mental health”}, within the title, abstract, full text, keyword list, or references section of the article. Following a similar methodology to Laranjo et al. [2018], studies were included if they satisfied the following criteria:
(1)
They focused on individuals with mental health difficulties or mental health professionals.
(2)
They involved a “truly” conversational agent or chatbot: First, the user had to be able to provide unconstrained natural language input, so studies in which the user could only interact with the system through “Yes/No” answers or clicking/tapping a response from a predefined set of choices were excluded. Examples of excluded studies based on this criterion were Burton et al. [2016], Hirano et al. [2017], Gardiner et al. [2017], and Martínez-Miranda et al. [2019].3 Second, studies in which the responses of the system did not depend on the user input were also excluded. For example, in Tielman et al. [2017] and Sebastian & Richards [2017], virtual agents presented psychoeducational content but without processing the user input, and, in Lucas et al. [2017], a virtual human administered a mental health assessment and recorded the users’ responses to the questions for later consideration by human staff. Finally, Wizard of Oz studies were excluded, as system responses in those setups were generated by another human (for example, the study by Easton et al. [2019]).
(3)
They performed a user evaluation that measured any aspect linked to acceptability. Studies in which the user evaluation reported only health-related measures such as efficacy or effectiveness were therefore excluded—for example, reduced depression symptoms, as in the excluded study by Suganuma et al. [2018], or accuracy of diagnosis, as in the study of Jungmann et al. [2019]. Studies in which the evaluation focused on technical or performance aspects, such as task completion or speech recognition accuracy, were also excluded.

3.2 Results

This literature review focused on acceptability-related evaluation results of conversational agents in the domain of mental health care, with 13 studies satisfying the selection criteria.

3.2.1 Description of the Conversational Agents in the Reviewed Studies.

The conversational agents discussed in the selected studies were 10 chatbots and three ECAs, which may point to the recent, growing popularity of chatbots. Four of these chatbots were developed as standalone phone apps, two were deployed on an existing platform (such as Facebook or Slack), three were desktop applications, and one was a web app. All three ECAs were implemented as desktop applications. In 12 out of the 13 studies, the conversational agents aimed to assist individuals experiencing mental health difficulties, while one of them supported both individuals and clinicians. A variety of mental health conditions were targeted, with the majority of the conversational agents focusing on depression, anxiety, and stress. Most of these agents sought to treat or alleviate the symptoms of these conditions through psychotherapy and self-help skills training, while one of them also offered diagnosis. Given that the majority of conversational agents in the reviewed studies were chatbots, the interaction was text-based. The three ECAs reported in the studies supported speech communication, with two of them also capable of generating facial and gestural responses. The characteristics of the conversational agents described in the studies are presented in Table 3.
Table 3. Characteristics of the Conversational Agents in the Reviewed Studies
First Author, Year | Supporting | Type of CA (Name) | Mental Health Condition | Purpose of CA | Platform | Modality (Input/Output)
Gaffney et al. [2020] | individuals | Chatbot (MYLO) | mental health difficulties (not specified) | psychotherapy | standalone desktop computer application | text/text
Park et al. [2019] | individuals | Chatbot (Bonobot) | stress | reduce stress | standalone web application | text/text
Greer et al. [2019] | individuals | Chatbot (Vivibot) | depression, anxiety | self-help, positive psychology, CBT delivery | existing messaging platform (such as Facebook or Slack) | text/text
Sakurai et al. [2019] | individuals | ECA (VICA) | stress, anxiety | psychotherapy/counselling | standalone desktop computer application | speech/speech and visual
Inkster et al. [2018] | individuals | Chatbot (Wysa) | depression | improve well-being, reduce stress | standalone smartphone app, publicly available | text/text
Fulmer et al. [2018] | individuals | Chatbot (Tess) | depression, anxiety | reduce symptoms | existing messaging platform (such as Facebook or Slack) | text/text
Morris et al. [2018] | individuals | Chatbot (KokoBot) | depression, anxiety | empathic responses | standalone smartphone app, publicly available | text/text
Fitzpatrick et al. [2017] | individuals | Chatbot (Woebot) | depression, anxiety | psychotherapy, psychoeducation, support, CBT delivery | standalone smartphone app, publicly available | text/text
Ly et al. [2017] | individuals | Chatbot (Shim) | mental health difficulties (not specified) | self-help, positive psychology, CBT delivery | standalone smartphone app, publicly available | text/text
Tielman et al. [2017] | individuals, clinicians | ECA | PTSD | psychotherapy | standalone desktop computer application | speech/speech
Bresó et al. [2016] | individuals | ECA | depression | diagnosis, self-help | standalone desktop computer application | speech/speech and visual
Shinozaki et al. [2015] | individuals | Chatbot (CRECA) | stress, anxiety | psychotherapy | standalone desktop computer application | text/text
Gaffney et al. [2014] | individuals | Chatbot (MYLO) | depression, anxiety | psychotherapy | standalone desktop computer application | text/text

3.2.2 Acceptability Evaluation of the Conversational Agents in the Reviewed Studies.

There was a large diversity in the evaluation methods reported in the selected publications. Experimental parameters such as sample and duration varied widely, while there was a prevalence of quasi-experimental empirical evaluations over Randomised Control Trial (RCT) study designs. Different aspects of “acceptability” were investigated, including satisfaction, usability, engagement, self-awareness, helpfulness, and trust. To measure these acceptability-related aspects, the majority of studies relied on quantitative data derived from Likert-scale questionnaires or on metrics such as frequency of use and number of messages exchanged. Some studies also collected and analysed qualitative user feedback. Despite the differences in methodology, all studies reported positive outcomes, suggesting that conversational agents have the potential to support acceptable and enjoyable interactions.
The remainder of this section provides an overview of the studies with a focus on the evaluation of the conversational agent. The methods, measures, and results of the evaluations are also outlined in Table 4.
Table 4. Methods, Measures, and Results of the Acceptability Evaluation of the Conversational Agents in the Reviewed Studies
First Author, Year | User Evaluation Study Methods | Acceptability Evaluation Measures | Acceptability Evaluation Results
Gaffney et al. [2020] | Chatbot use over a two-week period (one or more interactions); helpfulness scale and follow-up semi-structured interviews about experience (focusing on helpfulness and usability); 15 participants. | Helpfulness scale; thematic analysis of interview responses. | Chatbot responses were helpful (improved awareness and offered new perspective); good usability.
Park et al. [2019] | One-off interaction with chatbot; follow-up semi-structured interviews with 30 participants. | Thematic analysis of interview responses. | Chatbot was perceived to be helpful (inspirational and encouraging self-reflection).
Greer et al. [2019] | 45 young adults who had been treated for cancer; RCT: group interacting with chatbot and control group with access to daily emotion ratings app and delayed access to chatbot; duration: four weeks. | Survey to collect perceived helpfulness rating (0–3) and open-ended feedback; engagement (frequency and duration of use). | Chatbot was perceived to be helpful and received positive feedback; higher engagement for chatbot group.
Sakurai et al. [2019] | Comparison between ELIZA and VICA, two age groups, 14 participants, within-subjects design. | Average number of statements per session, trust, and awareness (verbalisation and positive feeling); 7-point Likert scales. | VICA had higher engagement and more positive feelings, particularly for the older group.
Inkster et al. [2018] | Two months’ use, 129 participants; in-app responses to preformatted questions such as “Have I been able to help you?” from 95 participants, while 17 provided free-text feedback. | Thematic analysis of the qualitative in-app feedback. | Majority of users found the experience favourable and the tools and app helpful and encouraging; pronounced effect for more frequent users.
Fulmer et al. [2018] | RCT: three groups, 25 participants per group; access to chatbot for either two or four weeks; control group received link to eBook; participants responded to “what was the best/worst thing about your experience with Tess?” | Thematic analysis of the qualitative in-app feedback; no direct comparison between groups; number of messages as measure of engagement. | High level of engagement and higher overall satisfaction, emotional awareness, relevance to life, comfort, and learning compared to control group.
Morris et al. [2018] | 37,169 one-off interactions. | Users rated quality of response (good, ok, bad). | Majority of chatbot responses were rated favourably.
Fitzpatrick et al. [2017] | RCT: duration: two weeks; participants: 70 with depression and anxiety; two groups: Group 1: chatbot and Group 2: educational eBook. | Acceptability/usability Likert scale; qualitative user statements. | High overall satisfaction (4.3/5 Likert scale); high engagement.
Ly et al. [2017] | RCT: pilot, two weeks; 24 participants (two groups), non-clinical population. | Number of app opens per day; qualitative feedback from interviews with nine participants. | High engagement; positive perceptions about chatbot’s empathy, personality, and learning.
Tielman et al. [2017] | One day, four participants. | Recollection helpfulness Likert scale, ECA’s questions usefulness Likert scale, usability (SUS). | High usability.
Bresó et al. [2016] | Evaluation with 60 academics in sciences who viewed a set of videos showing agent-user interactions. | 5-point Likert questionnaire about usability (SUS), acceptability of content (activities proposed by the agent) and of the agent (appearance and behaviour), as well as free-text feedback. | High scores for usability, content, and facial responses of agent.
Shinozaki et al. [2015] | 14 weeks; within-subjects design where 12 participants interacted with each of ELIZA and CRECA once. | Trust scale, self-awareness scale, number of interactions. | More interactions, higher trust, and higher self-awareness with CRECA.
Gaffney et al. [2014] | Participants with distress randomly assigned to MYLO or ELIZA condition; comparison with ELIZA. | Helpfulness scale. | MYLO was rated more helpful than ELIZA.
In Fitzpatrick et al. [2017], users evaluating Woebot, a chatbot delivering psychotherapy and psychoeducational content to alleviate depression, reported higher levels of satisfaction and emotional awareness, compared to users who had access to an eBook. Users of the chatbot were also more engaged, using the chatbot much more frequently than eBook users. A similar approach was followed in Fulmer et al. [2018]; the content offered by the Tess chatbot, which, too, aimed to reduce depression and anxiety symptoms, was perceived as more relevant to everyday life and made users feel more comfortable with the therapeutic experience, compared to users accessing psychoeducational content in an eBook. High engagement with a mental health chatbot was also reported by Ly, Ly, & Andersson [2017]. Their qualitative analysis of user statements indicated that the most positively perceived aspects were the chatbot's empathy, personality, and learning. In Morris et al. [2018], KokoBot, a chatbot that can offer simple empathic responses, supported one-off interactions. Users rated the majority of its responses favourably. Interestingly, in a separate experiment, users rated responses generated by the chatbot less favourably than responses generated by a human, although, in reality, all responses were human-generated. The authors concluded that, when it comes to empathic interaction, there might always be prejudice against chatbots.
In Inkster et al. [2018], users of the Wysa chatbot, an app offering depression therapy, selected pre-formatted options to give feedback about the app, with the majority finding it helpful and encouraging. In Park et al. [2019], users who had engaged in a conversation with Bonobot, a motivational and stress management chatbot, were interviewed and gave positive feedback about the helpfulness of the chatbot's responses. Vivibot, a chatbot aiming to support the mental health of young adults after cancer treatment, was also rated as helpful by its users [Greer et al. 2019]. In Gaffney et al. [2014], a chatbot offering counselling, called MYLO, was rated as significantly more helpful than an “ELIZA-like” chatbot. In their most recent study [Gaffney et al. 2020], users interacted with MYLO over a two-week period and evaluated it for helpfulness and usability; user interviews pointed to benefits, such as improved awareness and perspective. In Shinozaki et al. [2015], CRECA, a chatbot also offering counselling, was evaluated using two Likert-scale questionnaires that measured “trust” (defined by the authors as “feeling of harmony and reliance on counselor, including empathic understanding,” using questions such as “I was able to talk to the agent comfortably”) and “self-awareness” (defined as “perception of counseling effectiveness” and “feelings of being able to put one's difficulties into words,” using questions such as “I was able to have more positive feelings” and “I was able to clarify the problem that I had”). The chatbot was rated more positively than a second, ELIZA-like conversational agent [Shah et al. 2016]. In Sakurai et al. [2019], VICA—a speech-enabled Embodied Conversational Agent (ECA) offering counselling and a successor of the CRECA chatbot—was more positively perceived by older participants in terms of trust and awareness compared to CRECA and another ELIZA-like agent.
In Tielman et al. [2017], patients suffering from post-traumatic stress disorder used a system for reconstructing memories inside a virtual world through questions and answers with an ECA. The usability of the system overall was positively rated, based on System Usability Scale (SUS) scores [Brooke 1986], and the ECA's probes were also found useful. Bresó et al. [2016] presented an ECA capable of generating responses and emotions to diagnose and provide support for depression. The evaluation focused on usability and acceptability. Usability was measured using the SUS questionnaire. The acceptability questionnaire targeted the content of the application (for example, “I think the length of the sessions was adequate, allowing the user to complete the sessions on daily basis”) and the virtual agent itself (for example, “the virtual agent inspires trust” and “the behaviour of the virtual agent motivates the daily use of the PrevenDep system”). Qualitative statements were also captured and analysed. High ratings were reported for usability and acceptability, and user feedback was positive.

3.2.3 Discussion.

The research question that the review set out to address was whether conversational agents could present an acceptable solution for people with mental health difficulties and professionals in mental healthcare. Despite heterogeneity in terms of system characteristics and evaluation approaches, all of the studies reported overwhelmingly favourable user perceptions and experience. As such, this review has synthesised a body of evaluation findings that, taken together, can offer a clearly positive, albeit tentative, answer—chatbots can be an acceptable intervention to support mental health.
In addition, further questions and directions for future research have emerged through this review and are discussed below.
Establishing Concepts and Methods for Acceptability. Conversational agents in mental health care is an emerging field [Provoost et al. 2017; Laranjo et al. 2018]. As a result, and as the authors in the reviewed studies acknowledge, more research is needed to evaluate “acceptability.” The review revealed large variation in the “acceptability” aspects—which ranged from usability and trust to engagement and self-awareness—that each study evaluated and the methods that they employed to evaluate those aspects.
Only two of the selected studies fully or partly employed standardised instruments, while none of the reviewed studies used a validated questionnaire targeting acceptability, such as the Acceptability E-scale [Tariman et al. 2011]. The Acceptability E-scale was used in the excluded studies by Philip et al. [2017; 2020]. In these studies, an ECA that asked users “Yes/No” questions to diagnose depression was found to be highly acceptable [Philip et al. 2017] and trustworthy [Philip et al. 2020]. In fact, the review revealed that the concept of “acceptability” may also be misconstrued, as, for example, in one of the excluded studies, which evaluated the efficacy of an ECA (in particular, comparing well-being and depression scale scores pre- and post-intervention and between control and treatment groups) but referred to this as “acceptability” [Suganuma et al. 2018].
Most studies were quasi-experimental and relied on ad hoc techniques to measure acceptability. Therefore, it is recommended that future studies place more emphasis on the evaluation stage of system development and use a systematic approach to perform and report it. In particular, future studies should draw from the body of established methods to evaluate acceptability within the fields of clinical research and human-computer interaction. If conversational agents are to fulfil the prediction of transforming healthcare, standards and rigour in evaluation and reporting—matching the ones required for any clinical intervention—will be necessary [Laranjo et al. 2018].
Exploring Effects of Agent Characteristics and Purpose. This review looked at embodied and non-embodied agents that supported spoken and/or textual communication. It could be hypothesised that there are effects relating to embodiment and modality that may also co-vary with the type of mental health issue being targeted or intervention being used. For example, it has been argued that embodiment may be unsuitable for psychosis patients [Bickmore et al. 2010], while one of the reviewed studies suggests that verbal delivery of psychoeducation leads to more engagement than textual delivery [Tielman et al. 2017b]. As such, in addition to the questions in Hoermann et al. [2017], this review motivates further research questions regarding the individual and interaction effects of several variables, including intervention type and purpose, communication modality, platform, and embodiment. Future research should focus on creating sufficient knowledge around these factors so the question about the acceptability of mental health chatbots for individuals with mental health difficulties and clinicians can be fully addressed.
Involving Users in Scoping and Designing Chatbots. Large diversity was also observed in the characteristics, or scope, of each application; for example, in terms of purpose; whether they offer diagnosis, therapy, or education; whether they are complementary to professional support or standalone; and the deployment platform. Because of this heterogeneity, the results of these studies may not suffice to answer specific questions, such as those posed by Hoermann et al. [2017].
In the reviewed study by Greer et al. [2019], the content and delivery of the chatbot were guided by interviews and focus groups with the target user group. None of the remaining studies explicitly mention how the scope and requirements of the systems were derived. However, in a complex socio-technical system such as healthcare, the functions a conversational agent should and could take over, or simply support, should be thoroughly investigated and clearly delineated. Notable examples include the study by Easton et al. [2019], in which co-design workshops, facilitated by a Wizard-of-Oz setup, informed the development of a chatbot, and that by Chen et al. [2020], in which migrants and other stakeholders participated in chatbot co-design activities. Applying UCD in chatbot development can deliver a better understanding of user needs, which is key to facilitating uptake and future use of chatbots [Nadarzynski et al. 2019].
Evaluating Prior Perceptions of Users. All reviewed studies evaluated “acceptability” after users had experienced the application, and not anticipated acceptability. However, Gaffney et al. [2014] found that expectations towards chatbots could predict levels of engagement, perceived helpfulness, and clinical outcomes. Similarly, Nadarzynski et al. [2019] demonstrated that prior attitudes and perceptions of utility and trustworthiness correlated with acceptability of healthcare chatbots. These findings are largely congruent with several “technology acceptance models” (such as TAM and UTAUT) that demonstrate that perceptions and attitudes towards a new technology determine its future adoption [Garavand et al. 2016; Rahimi et al. 2018]. As such, prior perceptions of chatbots should be captured as part of the feasibility and evaluation phases of a chatbot.
Capturing Perceptions of Mental Health Professionals. None of the reviewed studies involved mental health professionals in the evaluations, not even the study in which clinicians were identified as a user group. To the authors’ knowledge, there is no study that has explored the perceptions of mental health professionals towards chatbots. However, clinicians’ perspectives and acceptance are found to be major factors underlying the adoption and sustainability of new technologies [Wade et al. 2014]. Recently, news about the national health provider in the UK using conversational agents to enable access to its services has raised questions from doctors and experts around effectiveness as well as ethics and data privacy.4,5 As such, focused investigation of clinicians’ perceptions should be prioritised in future research on mental health chatbots.

4 Interviews with Counsellors

Having noted that existing research omits consideration of the acceptability of chatbots from the perspective of mental health professionals, the third, and final, research activity reported in this article produced insights from interviews, anchored on the use of a chatbot prototype and UCD methods, with counsellors whose work is primarily with young adults at the study site university.

4.1 Methods

The 2017 Medical Technologies Evaluation Programme (MTEP) Process Guide of the National Institute for Health and Care Excellence (NICE) recommends that advice from expert advisers (health care professionals with experience of the condition) be sought when developing or appraising technology-based solutions. As stated in the guide, new technologies often have potential benefits and risks that are not yet fully described in the scientific literature. Expert advisers, even when not familiar with the technology, may provide advice and opinions based on their clinical or technical experience, and insights into the potential usefulness of the technology in the relevant care pathway, which may complement published evidence, particularly when this is limited [National Institute for Health and Care Excellence 2017]. The counsellors who took part in this study are experts who work directly with young adults to provide mental health interventions and, as such, are able to provide insights into the acceptability of a chatbot solution. To this end, in-depth, semi-structured interviews with counsellors were conducted.
Further, following the recommendation in the Topol Review, this research activity employed the user-centred design (UCD) techniques of personas, scenarios, cognitive walkthrough, and prototyping to support the interviews. The application of personas, scenarios, and walkthrough techniques is presented in Section 4.1.1.
Given that mental health chatbots are not a mainstream technology, a fully functioning chatbot prototype was developed for use in the study to enhance the quality of participants’ responses. The chatbot's development is briefly outlined in Section 4.1.2.
The semi-structured interviews relied on a set of questions based on the “Expert Adviser Questionnaire,” developed by NICE, to elicit expert opinions on new healthcare technologies. The interview's first part sought to understand the counsellor's familiarity with mental health technologies and capture their initial perceptions around conversational agents. Next, the chatbot prototype was demonstrated; this was facilitated by two client personas and the cognitive walkthrough method. The second part of the interview focused on the suitability of the chatbot application, perceived benefits for young adults with mental health conditions, and potential impact of this technology on current standards of care. The interview questions are included in Appendix B (Figure B.1).

4.1.1 Personas, Scenarios, and Cognitive Walkthrough.

A persona is a rich description of a typical user of the system under development that includes the user's goals, skills, attitudes, tasks, and environment. It is an amalgam of the characteristics of real users that are usually derived from a data-gathering activity. Personas are often used in conjunction with scenarios, which describe when, where, and how the interaction of the persona with the system takes place. Personas and scenarios are widely used, powerful techniques that help designers and developers better understand, and maintain their focus on, the real people who will be using the system, and their needs and goals [Preece et al. 2015, pp. 358–359, 379]. For the purposes of this study, two primary personas were developed based on the survey data presented in Section 2. The personas and scenarios are included in Appendix C (Figures C1 and C2).
Cognitive walkthrough is a technique in which experts evaluate a system from the point of view of the user (usually instantiated by a persona), by stepping through the sequence of actions needed for the users to complete a task in a given scenario and noting problems. As the experts “walk through” the action sequences, they attempt to answer a set of questions. More details about cognitive walkthroughs and how they are performed can be found in Nielsen [1994] and Preece et al. [2015]. The procedure followed in this study was as follows:
(1)
The counsellor reviewed two personas, the scenario, the task to be completed in the context of the scenario, and the sequence of actions to complete the task.
(2)
The counsellor walked through the action sequences using the prototype chatbot and, at each step, they answered the following questions: will the user know what to do?; will the user see how to do it?; will the user understand the feedback they get?
The output of the cognitive walkthroughs—that is, the responses of the counsellors to the questions—is included in Appendix D (Tables D1 and D2). Typically, the aim of the cognitive walkthrough technique is system evaluation, but, in this study, it was primarily used as an activity to enable the counsellors to develop an empirical understanding of what a chatbot is and the possibilities and characteristics of the technology so insights could be gained into the likely acceptability of chatbots in this context.

4.1.2 Chatbot Prototype Development.

As previously mentioned, in the UCD methodology, prototypes are used as a basis for interviews to ground data in a real context [Preece et al. 2015]. The design and implementation of the chatbot included typical features of mental health apps and chatbots, which were derived from reviews of mental health apps [Luxton et al. 2011; Turvey and Roberts 2015; Bakker et al. 2016; Lui et al. 2017]; the “useful features” identified by the survey reported in Section 2; and the literature review of mental health chatbots presented in Section 3. In particular, the chatbot prototype specification included the following functions:
Offering structured mental health assessment using standard diagnostic tools such as the Patient Health Questionnaire (PHQ), Perceived Stress Scale (PSS), and General Anxiety Disorder scale (GAD).
Offering self-help therapeutic skills (contingent on diagnosis) such as coaching in skills or activities that are shown to produce mental health benefits, such as Cognitive Behavioural Therapy skills, relaxation and breathing techniques, mindfulness and meditation, which users can practice at any time.
Offering targeted psychoeducational content (contingent on diagnosis) in the form of multimedia content pertaining to the diagnosed mental health condition, including symptomology, causes, prevalence, risk factors or triggers, and treatment options.
Enlisting support (contingent on diagnosis) by facilitating direct contact with a counsellor or professional services.
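To make the assessment function concrete, the Python sketch below shows how responses to a PHQ-9-style questionnaire could be totalled and mapped to the mild/severe bands used by the prototype. This is an illustration only, not the authors' implementation, and the cut-off of 10 separating the two bands is an assumption made purely for illustration (the published PHQ-9 bands are more fine-grained).

```python
# Illustrative sketch only: scoring a PHQ-9-style assessment and mapping the
# total to a severity band. The two-band split (mild/severe) mirrors the
# prototype description; the cut-off of 10 is an assumption for illustration.

PHQ9_OPTIONS = {  # standard PHQ-9 response scale
    "Not at all": 0,
    "Several days": 1,
    "More than half the days": 2,
    "Nearly every day": 3,
}

def score_phq9(responses):
    """Sum the item scores of the nine PHQ-9 responses."""
    return sum(PHQ9_OPTIONS[answer] for answer in responses)

def severity_band(total):
    """Map a PHQ-9 total (0-27) to the prototype's two severity bands."""
    return "severe" if total >= 10 else "mild"

# Example: answering "Several days" to seven items and "Not at all" to two
# items gives a total of 7, which falls in the mild band.
example = ["Several days"] * 7 + ["Not at all"] * 2
assert severity_band(score_phq9(example)) == "mild"
```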
The chatbot was developed using the Motion.ai development kit6 and was deployed on Facebook, a platform commonly used by the target group.
The user input, in the form of either clickable responses or natural language, determines the “path” of the conversation with the chatbot. The screenshots presented in Figures 6–10 exemplify the interaction with the user. The conversation starts with the chatbot prompting the user to enter a description of their emotional state or select a clickable response from: “Feeling stressed,” “Feeling depressed/unhappy,” “Feeling anxious/worried” (see Figure 6). Next, the chatbot engages the user in a dialogue that involves questions from the structured mental health assessment tools (the Perceived Stress Scale, General Anxiety Disorder form, and Patient Health Questionnaire) in an attempt to diagnose the mental health condition (stress/depression/anxiety) and its severity (mild/severe) (see Figure 7). Then, the chatbot provides psychoeducational content, self-help therapy skills training, or information about available professional support, and/or enlists support (including referring to, and booking appointments with, counsellors), as well as giving some basic empathic responses (see Figures 8–10).
Fig. 6.
Fig. 6. The chatbot introduces the service and prompts the user to type in a description of their emotional state or select one of the options.
Fig. 7.
Fig. 7. The chatbot initiates a dialogue to diagnose a mental health problem and its severity.
Fig. 8.
Fig. 8. Chatbot logs emotions and factors to refine assessment; provides psychoeducational content about the condition; and suggests guided cognitive behavioural therapy and mindfulness skills training.
Fig. 9.
Fig. 9. Chatbot provides self-help therapeutic skill training in the form of guided meditation and cognitive behavioural therapy.
Fig. 10.
Fig. 10. The chatbot enlists support for the user and books appointment with counsellor.
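The conversation “path” described above can be summarised as a small routing step from the user's opening choice to an assessment tool, followed by severity-dependent follow-up actions. The Python sketch below is a simplification offered only for illustration and is not the Motion.ai implementation; in particular, the rule that a counsellor referral is offered only for the severe band is an assumption, since the prototype makes all follow-up content contingent on the diagnosis.

```python
# Illustrative routing sketch only (not the Motion.ai implementation): map the
# opening clickable response to an assessment tool and choose follow-up
# actions from the assessed severity. Tool and option names come from the
# prototype description; the branching rule is an assumption for illustration.

FEELING_TO_TOOL = {
    "Feeling stressed": "Perceived Stress Scale",
    "Feeling depressed/unhappy": "Patient Health Questionnaire",
    "Feeling anxious/worried": "General Anxiety Disorder scale",
}

def follow_up(severity):
    """Follow-up content offered once the assessment is complete."""
    steps = ["psychoeducational content", "self-help therapeutic skills training"]
    if severity == "severe":
        # Assumed rule: offer a direct referral when the assessment suggests
        # a more severe condition.
        steps.append("enlist support (book an appointment with a counsellor)")
    return steps

choice = "Feeling anxious/worried"
print(f"Administering the {FEELING_TO_TOOL[choice]} ...")
print(follow_up("severe"))
```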

4.2 Results

Three counsellors (P1–P3) were interviewed (one male and two female). The face-to-face interviews lasted 50–60 minutes and used the questionnaire based on the “Expert Adviser Questionnaire,” developed by NICE, as a guide (see Section 4.1). The interviews were recorded and transcribed (direct quotes from the participants are italicised in this section) and the transcripts were analysed by two coders. The analysis followed Pope, Ziebland, and Mays's [2000] “framework approach,” which is both deductive and inductive; that is, the analysis starts deductively from pre-set aims and objectives (formulated as focused interview questions) and existing theory (concepts emerging from related literature), while also being “grounded” in the data, with the codes and concepts arising from the views of the participants. Using such a framework was appropriate, because the small sample size would have made a purely “grounded”/inductive thematic analysis much less reliable. The qualitative analysis procedure is outlined in Appendix E.
The questions that initiated the discussion aimed to explore the counsellors’ experience with mental health technology, including apps, websites and chatbots. The participants had not seen the prototype chatbot at this stage.

4.2.1 Perceptions of Mental Health Technology.

Each of the counsellors was familiar with “self-care” apps and websites and had used online chat rooms or tools to offer counselling. Apps, websites, and chat tools were said to be an effective way to deliver psychoeducational content. Moreover, they suggested that for some individuals experiencing a mental health problem, online interventions could be more appropriate than face-to-face therapy, addressing the problem of social stigma:
if [clients] do not have to look at you in the eye, it is a lot easier for them to talk about their negative experience […], if you're feeling ashamed or embarrassment you don't have to see the person's reaction […], you wouldn't be put off by the fact that I am three times older than you, ethnicity might not be apparent. (P3)
However, P2 found that people would not engage with chat tools consistently, and P3 pointed out that they may not be suitable for everyone and expressed confidentiality concerns, noting “it didn't feel very safe. Some of the protocols about the technology and confidentiality were not thought out correctly.” P3 also stated that the web-based resource that they “would recommend and make use of is the Big White Wall,” but they also mentioned that their clients did not find it “comfortable” to use. These observations regarding poor engagement and usability are in line with the results of the survey.

4.2.2 Perceptions of Conversational Agents.

Each of the counsellors was familiar with speech-enabled conversational agents, such as Siri, Alexa, and Cortana. However, their previous experiences with the technology had not been positive:
With Siri, [I] find it quite frustrating because it does not always or often pick up on what you are asking unless you are very specific. In the past that didn't work well with language and accent, so the error rate was quite high. (P1)
I am not sure about Siri, it's still not picking up people's languages. (P2)
Indeed, at the moment, general-purpose commercial digital assistants such as Siri and Google Assistant are not designed, and are not able, to recognise and respond appropriately to statements about mental health [Torous et al. 2018; Miner et al. 2016].
Yet, the counsellors predicted that such agents could be an acceptable technology for young adults, because they are “digital natives”:
Of course, [the technology] is still very new; and I can see with [this] generation, it would be easier to take it for granted and build it into systems. I would imagine most […] would be happy to embrace and make good use of conversation interfaces. (P3)
The counsellors perceived useful roles for conversational agents for their clients with mental health issues: a personal digital assistant that would encourage clients to do “the everyday things” and “get them up in the morning,” which “people with anxiety or depression struggle to do” (P3); or, according to P2, since young adults “have their phones on their hands all the time,” a chatbot that they can access at all times through the phone for support and use “as their initial contact” before being directed to professional services would be valuable.

4.2.3 Perceived Benefits of Mental Health Chatbots.

Next, the prototype chatbot was demonstrated through the cognitive walkthrough activity, and the discussion about acceptability continued. Potential benefits of chatbots were identified and are presented below.
Education and Awareness. All counsellors suggested that they anticipated that chatbots would enable their clients to become “more aware of their mental health condition and [that] they can take some action about it” (P3).
This observation aligns with empirical research that found that even simple Q&A with basic CAs can help individuals understand their symptoms and promote help-seeking behaviour [Farzanfar and Finkelstein 2012].
P2 and P3 agreed that the deployment of chatbots on existing social media platforms could facilitate awareness.
Indeed, linking to results from the survey, young adults may not be able or willing (because of the associated stigma) to recognise symptoms of mental health problems and may lack awareness of the available support and where they can find information:
Of course, you can find the information on a website, if you only know where to go, and only if you type the right keywords into Google. (P1)
P1 and P2 suggested that chatbots may also benefit families and concerned friends by helping them to identify symptoms or understand the mental illness of a young adult.
Proactive and Just-in-time Access to Care. All counsellors suggested that chatbots would be able to offer “proactive” support, contrasting them with counselling services that were characterised as a “reactive service.” Indeed, a significant advantage of chatbots is their availability and immediacy of support:
We have services that are available Monday to Friday office hours, so having something that is available evening, during the night, and even in the weekends that is always there is one of the real advantages of such tools, because mental issues don't crop up only between 9–5. (P2)
Integration and Collaboration with Mental Health Services. All counsellors said that they would recommend chatbots for “mild to moderate mental health conditions” and saw chatbots as complementing their work, but they did not deem them capable of substituting for counsellors.
They identified certain areas of counsellor activity that could be taken over by a chatbot, with the chatbot acting as the “initial contact” (P2), logging data about the client, providing them with information and self-care guidance, and referring them to further support, but also as the regular contact “prompting [clients] to look after [their] mental health.” As such, chatbots should be integrated with existing services and “linked with what we already have on offer.” (P2)
But there may also be circumstances in which chatbots can be used “in the absence of a therapist” for the “odd therapy session but not the whole therapy” (P3), or they may even be a more acceptable solution for some cases. This view is supported by empirical research that reports that CAs may be beneficial for those who have difficulty disclosing information to, and building relationships with, clinicians [Farzanfar and Finkelstein 2012].
According to P1 and P2, the technology could reduce demand on the services, which are already overwhelmed. However, P3 suggested that by increasing awareness and access, chatbots would lead to an increase in demand, “because at the end of the day [the technology] is not replacing counsellors, it is enhancing [the services], which is not a bad thing.”
Interactivity and Empathy. All participants agreed that chatbots could be a more acceptable solution than current mental health technology, because they rely on conversation to offer information and support:
I suppose it [is more suitable] and maybe we are heading more towards interactivity, [and] decision-based conversations, [because] people like to take their time and communicate their way. If you can just say it or type it and get an instant response or recognition, that is accurate, [and] that doesn't sound foreign, it can replace [other mental health technology]. (P3)
Interactivity (feedback and understanding of user input) has indeed been found to increase user engagement and positive outcomes [Cavanagh and Millings 2013; Scholten et al. 2017]. Moreover, P2 explained that “it is a lot quicker, a lot simpler, and people would be more engaged with [it], because it is dealing with their issues as they go along and asks them questions.” Indeed, “interactivity and usability” was one of the “areas for improvement” associated with mental health apps and websites in the survey, and was seen as accounting for their low engagement. Moreover, P3 also suggested that chatbots would appear more empathetic than other technologies. Empathy is an essential element binding the relationship between therapists and clients [Paiva et al. 2017] and is a predictor of treatment outcomes [Elliott et al. 2011; Nienhuis et al. 2018], and chatbots can easily simulate human empathetic techniques, such as active listening [Morris et al. 2018].

4.2.4 Potential Barriers to Acceptability.

During the discussion, the counsellors also raised concerns about mental health chatbots that could be seen as barriers to their acceptability.
Technology Limitations. The counsellors expressed doubt about the maturity of the natural language processing technology, mentioning that the success of mental health chatbots would depend on “recognition that is accurate.” (P3)
Personalisation. Moreover, P3 pointed out that chatbot responses need to be tailored to each user, stating that “effective counselling is recognising the uniqueness of the individual.” This observation relates to the need for personalisation in mental health technologies that emerged from the survey. P3 suggested that if chatbots provide generic responses, then such responses might be useful as part of a single interaction, but that after a while the responses will be perceived as non-genuine. According to P3, chatbots should be able to adapt to and “know the person,” and a way to achieve this would be to learn from each conversation and make connections to meaningful information from past conversations.
Overreliance on Chatbot. A potential problem with chatbots is how they may be perceived by young adult clients. In particular, P3 warned that they might be perceived as “having all the answers” and be assigned the role of “problem-solvers” in the relationship. The counsellor drew a parallel with effective counselling, which operates on the premise that:
The client has the answers. They [the clients] just don't know it because they are confused, anxious, worried, depressed, or stressed, so they don't know the way forward for themselves so they are asking “what should I do?.” Then [as counsellors] we are looking at “what have you done that was helpful?”[…]. A lot of views about counselling is that someone's got a problem and someone else is going to tell them what to do. (P3)
Similarly, both P2 and P3 pointed out that there is a substantial danger that clients would over-rely on the interactions with the chatbot and be less inclined to seek the mental health support they need. This concern about individuals overusing healthcare chatbots for self-diagnosis and treatment was also expressed by general practitioners in the survey carried out by Palanica et al. [2019]. The “omnipresence” of chatbots may cause clients to develop a dependency on the technology and avoid interactions with professionals [Tielman et al. 2017], leading to the recommendation that chatbots be integrated with existing processes so that clients may be promptly referred to professional services when needed. Indeed, this is an area of concern for mental health technology in general, as recent reviews have found that none of the publicly available apps follow best practice and correct procedures in cases of mental health emergencies (such as suicide ideation, overdoses, and self-harm) [Torous et al. 2018]. Chatbot design that is informed by research and careful consideration of human factors can help mitigate the risks of over-reliance on the agent [Sutherland et al. 2016] and overestimation of its capabilities [Knijnenburg and Willemsen 2016].
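To make the integration point concrete, the sketch below illustrates one way such a hand-off rule could be encoded, routing any message that suggests acute risk away from automated self-help and towards professional support. This is a deliberately crude, hypothetical example: simple keyword matching would not be adequate in a deployed system, and the phrases and referral wording are placeholders rather than anything drawn from the prototype.

```python
# Deliberately crude illustration of a hand-off rule (an assumption, not part
# of the prototype): messages suggesting a possible emergency bypass self-help
# content and trigger a referral to professional support. Keyword matching
# would not be adequate in practice; the phrases and wording are placeholders.

CRISIS_PHRASES = ("suicide", "kill myself", "self-harm", "overdose")

def requires_escalation(message):
    """Flag messages that should be handed over to human services."""
    text = message.lower()
    return any(phrase in text for phrase in CRISIS_PHRASES)

def respond(message):
    if requires_escalation(message):
        # Do not attempt automated self-help; refer to professional support.
        return ("It sounds like you may need urgent support. "
                "I can connect you with a counsellor right now.")
    return "Tell me a bit more about how you are feeling."

print(respond("I've been feeling low and thinking about self-harm"))
```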
Data Privacy and Trust. P3 raised questions about how client data generated in the chatbot app would be handled in terms of privacy and confidentiality and shared with other parties, like professional services. Data privacy and confidentiality are assured in interactions with mental health professionals, whereas chatbot users share large amounts of personal data with the companies that provide the chatbots without any legal framework to protect them [Miner et al. 2017; Vaidyam et al. 2019].
Finally, P2 suggested that, before the technology was adopted, or recommended, by professional services, there should be substantial scientific data regarding the effectiveness of chatbots. This resonates with the conclusion of the state-of-the-art review about the lack of evidence regarding the effectiveness of chatbots.

5 Discussion

5.1 Key Outcomes and Links to Existing Research

Five common key outcomes emerge from the three research activities (the survey with young adults, the literature review, and the interviews with counsellors); these are listed in Table 5. In this section, these key outcomes are discussed in relation to previous work in the field.
Table 5.
Key Outcome | Survey | Review | Interview
Preliminary support for the acceptability of mental health chatbots | ✓ | ✓ | ✓
Need for stakeholder involvement in requirements specification and design of mental health chatbots | ✓ | ✓ | ✓
Need for robust, evidence-based evaluation of mental health chatbots | | ✓ | ✓
Mental health chatbots/technology can provide immediate access to support in an interactive, empathetic, personalised, and usable way | ✓ | | ✓
Chatbots should complement, integrate with, and facilitate access to professional services | ✓ | | ✓
Table 5. Aggregation of Key Outcomes
The first research objective was to understand the perspective and psychosocial context of young adults [Yardley et al. 2015]. This exploratory study adds to our understanding of the current state of young adults' mental health, their poor perceptions of mental health technology and its low levels of adoption, and, most importantly, what this group requires from such technologies. These findings motivate a technology-based solution that provides self-help, education, information, and support quickly, and that helps young adults connect with professional services in an interactive, personalised, and usable way. The value of the approach of involving end-users in gathering requirements has been illustrated by Goodwin et al. [2016] and Greer et al. [2019] and has been argued to be key to the success of mental health technology [Torous et al. 2018]. Finally, the results suggest that young adults are familiar with and have positive attitudes towards mental health chatbots, offering some preliminary support for the acceptability of this technology.
The second research objective was to identify and review the latest studies of conversational agents in mental health that reported user evaluations addressing acceptability aspects. Despite the heterogeneity of the evaluations, all of the studies indicated positive outcomes, providing initial support for the acceptability of this technology. In addition to describing the current state of knowledge, the review also points to priorities towards which research efforts should be directed. In particular, this review identifies the need for the operationalisation of acceptability and the application of standardised methods for user evaluations of mental health chatbots. Moreover, it is argued that, given that healthcare is a complex socio-technical system, appropriate user-centred design (UCD) methodologies should be employed that involve all stakeholders, from requirements gathering to final evaluation. The studies by Chen et al. [2020] and Easton et al. [2019] illustrate how UCD can be applied in the design and development of chatbots; in these studies, stakeholders participated in co-design activities, such as surveys, workshops, empathy probes, and Wizard-of-Oz experiments. Such methodologies are more likely to deliver a nuanced understanding of the role that chatbots could best serve.
The interviews with the counsellors provided support for the acceptability of a mental health chatbot for young adults and offered insights into its utility. Relevant to the questions posed in Hoermann et al. [2017], a chatbot was deemed best suited for mild to moderate mental health conditions. Relevant to the question regarding the role of chatbots, the counsellors saw chatbots as complementary to, and closely integrating with, existing services, and not as a substitute or as a standalone application. Similar to findings from previous research that surveyed non-specialist physicians, the chatbots were perceived by the counsellors as being more suitable within administrative roles, while more complex and “interpersonal” activities, such as treatment, should be carried out by human staff [Palanica et al. 2019]. Offering a more usable, interactive, and proactive platform would also encourage young adults to better understand and attend to their mental health and would facilitate their access to care. Interactivity and empathy were cited by the counsellors, as well as by research, as characteristics crucial for user engagement with mental health technologies [Morris & Aguilera 2012; Bae Brandtzæg et al. 2021].
The counsellors suggested that chatbots have the potential to improve access to mental health-related information, increase awareness, and reduce barriers because of the higher interactivity and usability afforded by the interface. This argument is corroborated by a study that compared a conversational agent-based search interface to a typical search engine interface for finding health-related information; the study found that the conversational agent was associated with better search results, higher user satisfaction, and a better user experience [Bickmore et al. 2016]. Another important finding of that study was that the benefit of using the conversational agent was more pronounced for the group with poor “health literacy.” Similarly, an ECA led to improvements in health literacy and helped reduce the stigmatisation associated with the mental health condition Anorexia Nervosa in the study by Sebastian and Richards [2017]. Taken together, these results suggest that chatbots can play an important role in improving awareness and diagnosis of mental health conditions, which remain poorly understood, especially within certain populations [Memon et al. 2016].
Along with the identified benefits, the counsellors flagged considerations necessary before the adoption of such technology for mental health. First, they felt that the acceptability of chatbots depends on the capabilities of the underlying technology, in terms of natural language understanding and adaptability to the individual. Second, the counsellors stated that young adults may over-rely on chatbots for their treatment and turn away from professional services, so they suggested that regular assessment of the client's progress and close integration with face-to-face support were required to minimise the possibility of overreliance on a single contact point. Moreover, chatbots should be designed so that their role and capabilities are delineated appropriately. Finally, the counsellors emphasised the need for regulation and transparency regarding how data is used, and for research to explore the effectiveness of chatbots. In summary, chatbots are viewed as capable of streamlining administrative tasks and of educating, motivating, and supporting people, but they cannot replace professional services. Legislation, evidence-based evaluation, and integration with existing structures are considered preconditions to their adoption.

5.2 Comparing the Perceptions of Counsellors and Young Adults

Drawing on the survey, the interviews, and related work, it is possible to identify the points at which the perspectives of young adults and mental health professionals intersect and diverge. The results of the survey and interviews point to a shared set of perceptions about the role and function of mental health technology/chatbots.
The role of mental health technology/chatbots is to act as an instant and always-available source of support, in the form of self-monitoring, self-care, and information. It was clear from the responses of the survey participants and counsellors that mental health technology/chatbots should not replace professional support, but should be integrated with counselling and professional services and facilitate access to them when necessary.
Features seen by both user groups as being useful and desirable include self-help techniques; access to information about mental health conditions; and enlisting professional support. Most importantly, personalisation seems to be the overarching principle for both groups, such that the responses of the chatbot must be tailored to the individual, drawing from user data (for example, user profile and location) and the history of the interactions with the user.
A point of departure in the perceptions of mental health professionals and young adults may lie in the issues of confidentiality and data privacy and protection. The counsellors in this study expressed serious concerns around these issues, in line with past research [Nadarzynski et al. 2019; Palanica et al. 2019]. However, a recent study, which captured the perceptions of young people about chatbots, revealed that data privacy and trust were less important to them; participants were not concerned about companies handling their personal data and conversations, stated that they could more easily trust, and confide in, a system rather than a human, and felt that chatbots offered them anonymity [Bae Brandtzæg et al. 2021].
Table 6 summarises the perceived functions, roles, and concerns of young adults and mental health professionals.
Table 6.
Group | Perceived Functions | Perceived Roles | Perceived Concerns
Young Adults | Personalised self-help techniques; information about mental health; access to professional support | Integrated with, and complementing, professional services; immediate and any-time support | —
Counsellors | (as for young adults) | (as for young adults) | Data privacy; trust
Table 6. Summary of the Perceptions Related to Chatbots of Young Adults and Counsellors in Relation to Functions, Roles, and Concerns

5.3 Limitations and Future Work

There are limitations associated with each of the reported research activities. The first issue relates to the convenience sample used in the survey; the survey was undertaken at a single UK university site, which threatens the validity of the research and raises questions about whether the results reflect the wider UK university population and the young adult population more generally. The first consideration is that over half (52%) of the young adult population in the UK is in higher education [Bolton 2021], suggesting that the findings could be extended to the wider young adult population. To investigate this further, socioeconomic, ethnicity, sexual orientation, and disability demographic information was collected for the study site university, the UK university sector, and the general young adult population, given that these factors have been associated with mental health difficulties in previous research. The study site university is largely in line with the sector and the young adult population in terms of socioeconomic deprivation level, disability, and sexual orientation. In terms of ethnicity, however, the study site university has a much more diverse student population than the sector and national average, which could be partially explained by the location of the university (London).
Still, the findings of this study are consistent with the NUS-USI survey and with previous research into the mental health of young adults of different nationalities, which gives some confidence that the results are valid and, at least to a certain extent, generalisable.
A second issue relates to the broad scope of the questionnaire. The survey aimed to explore a range of issues around mental health and mental health technology, and, as such, the questions were not designed to capture fine-grained information about these issues. For example, one of the questions asked respondents to rate the helpfulness of mental health support using a Likert scale. However, any type of mental health support encompasses a multitude of different elements, each of which could be helpful or unhelpful. Future work should employ a questionnaire instrument or data collection approach designed with a finer level of granularity to allow such issues to be more effectively explored.
A third issue relates to the review of the literature, which did not attempt to assess the quality of the user evaluations of the studies that it summarised, for example, in terms of methodology. A systematic literature review with explicit quality criteria around user evaluation could identify best practices and facilitate the development of standardised evaluation methods.
Finally, an important limitation of the study is the interview sample size. The sample was drawn from a small population of six counsellors at a single site. Because of the exploratory nature of this research, a sample of three was not deemed problematic. The small sample permitted intense scrutiny of the data, which, in turn, produced rich insights and clear concepts that align with previous findings. Most importantly, the conclusions, albeit tentative, serve to create the foundation for focused hypotheses and can instigate further, much-needed study of the acceptability of mental health chatbots. Still, generalisability cannot be assumed, and the sample is unlikely to represent the full spectrum of counsellors, such as those working at different sites in the UK and in different countries and societies, or of mental health professionals in other services. Hence, interviews with counsellors from different areas of the UK and worldwide would be valuable. Similarly, mental health experts and other professionals “in the field” should also be consulted in a future study.

6 Conclusions

Mental health issues appear to be rising in young adults, and there is evidence that this has accelerated during the COVID-19 pandemic [Mind Charity 2020]. Stigma, lack of awareness, and resource constraints are among the barriers to accessing help. Technology has the potential to transform mental healthcare and tackle these barriers. Usability, interactivity, and personalisation appear to be important characteristics for mental health technology. Putting together primary and secondary research, this article has argued that chatbots may have the potential to provide effective and acceptable mental health care to young adults. Providing such care at scale in a cost-effective way is likely to be increasingly important given the implications for mental health caused by the pandemic. Chatbots may be useful in enabling a growing number of young people to access timely mental health support when the demand for professional services exceeds capacity and the scope to expand services is limited by constrained healthcare budgets. For chatbots to achieve their identified potential, however, future research and development should set and follow standards of evaluating and reporting acceptability, which has to date been little explored. At the same time, the benefits of chatbots, and their endorsement by practitioners, depend on the scope and abilities of the technology and the logistics of its deployment. Evidence-based research appears critical for practitioners to trust and accept the technology. This study has suggested that only through the synergy and involvement of all stakeholders—practitioners, users, and developers, as well as policy makers and institutions—can these challenges be addressed in ways that will enable chatbots to offer suitable and acceptable solutions in mental healthcare.
Appendices

A Survey Study Questionnaire

Fig. A1.
Fig. A1. The questionnaire used in the survey study with young adults.

B Semi-structured Interview Questions

Fig. B1.
Fig. B1. Questions asked during the semi-structured interviews with counsellors.

C Personas and Scenarios

Fig. C1.
Fig. C1. Katie – Persona, Context and Scenario One.
Fig. C2.
Fig. C2. James – Persona, Context and Scenario Two.

Footnotes

1
“The Topol Review, Preparing the healthcare workforce to deliver the digital future: an independent report on behalf of the Secretary of State for Health and Social Care.”
2
Studies targeting autism spectrum disorders and substance abuse-related disorders were not included.
3
An analysis of such studies, involving ECAs that use clickable responses, can be found in the reviews by Provoost et al. [2017], Ma et al. [2019], and Vaidyam et al. [2019; 2020].
4
BBC, “Amazon Alexa-NHS partnership splits expert opinion,” available at https://www.bbc.co.uk/news/technology-48937663.
5
BBC, “Babylon claims its chatbot beats GPs at medical exam,” available at https://www.bbc.co.uk/news/technology-44635134.
6
https://www.motion.ai/. Motion.ai has been recently acquired by Hubspot, https://www.hubspot.com/products/crm/chatbot-builder.

D Cognitive Walkthrough Results

Table D1.
Persona One (Katie)

Step 1. Katie sends a message to the chatbot suggesting that she is feeling unhappy and wants to view the Structured mental health assessment for depression.
  Will the user know what to do? P3: Yes, the suggested responses are useful in explaining to the user what to do. P1 and P2: Yes, there are suggested responses provided to the user.
  Will the user see how to do it? P1 and P3: Yes, the chatbot provided the Patient Health Questionnaire in clear English. P2: Yes; however, “when we do it, we tend to do a free-flowing risk assessment.”
  Will the user understand the feedback they get? P1, P2, and P3: Yes, there is help text below.

Step 2. Katie now wants to view Targeted Psychoeducational content about depression.
  Will the user know what to do? P1, P2, and P3: Yes, there are suggested responses provided to the user.
  Will the user see how to do it? P1, P2, and P3: Yes, the chatbot provided the resources immediately in text and video form.
  Will the user understand the feedback they get? P1 and P2: Yes, the self-help information is interactive and easy to view. P3: Yes, the information is interactive.

Step 3. Katie now wants to view Meditation techniques.
  Will the user know what to do? P1, P2, and P3: Yes, there are suggested responses provided to the user.
  Will the user see how to do it? P1, P2, and P3: Yes, the chatbot provided the resources on the screen.
  Will the user understand the feedback they get? P1 and P2: Yes, the information is in audio form guiding the user. P3: Yes, the user is provided guidance in audio form.

Step 4. Katie wants further support and wants to enlist for support with a counsellor.
  Will the user know what to do? P1, P2, and P3: Yes, there are suggested responses provided to the user on how they can find our support.
  Will the user see how to do it? P1, P2, and P3: Yes, the chatbot provided an immediate response.
  Will the user understand the feedback they get? P1, P2, and P3: Yes, the chatbot guides the user.

Table D1. Cognitive walkthroughs, Persona One (Katie)
Table D2.
Persona Two (James)

Step 1. James sends a message to the chatbot suggesting that he is feeling anxious and wants to view the Structured mental health assessment for anxiety.
  Will the user know what to do? P1, P2, and P3: Yes, similar to before, there are suggested responses provided to the user.
  Will the user see how to do it? P1, P2, and P3: Yes, the chatbot provided the General Anxiety Disorder form in clear English.
  Will the user understand the feedback they get? P1, P2, and P3: Yes, there is help text below.

Step 2. James now wants to view Targeted Psychoeducational content about anxiety.
  Will the user know what to do? P1, P2, and P3: Yes, there are suggested responses provided to the user.
  Will the user see how to do it? P1, P2, and P3: Yes, the chatbot provided the resources immediately in text and video form.
  Will the user understand the feedback they get? P1, P2, and P3: Yes, the self-help information is interactive and easy to view.

Step 3. James now wants to view more mental health information about anxiety using the search function.
  Will the user know what to do? P1, P2, and P3: Yes, there are suggested responses provided to the user.
  Will the user see how to do it? P1, P2, and P3: Yes, the chatbot provided the resources in card form for the user to choose from.
  Will the user understand the feedback they get? P1, P2, and P3: Yes, there is an explanation below all the cards.

Step 4. James wants further support and to enlist for support with a counsellor.
  Will the user know what to do? P1, P2, and P3: Yes, there are suggested responses provided to the user on how they can find our support.
  Will the user see how to do it? P1, P2, and P3: Yes, the chatbot provided an immediate response.
  Will the user understand the feedback they get? P1, P2, and P3: Yes, the chatbot guides the user.

Table D2. Cognitive walkthroughs, Persona Two (James)

E Qualitative Analysis Procedure

The following five stages formed the qualitative analysis procedure applied to the interviews with the counsellors, following the “Framework Approach” described in Pope, Ziebland, and Mays [2000].
(1)
Familiarisation: A verbatim transcription of the audio recordings was made. The transcripts included only linguistic elements and excluded paralinguistic elements such as pauses and disfluencies. The two coders read through the transcripts to gain an initial understanding of the content. One of the coders was the individual who had conducted the interviews. Some key ideas were discussed at this point.
(2)
Coding: The questionnaire items that led the semi-structured interview had a clear thematic focus (e.g., “impact on resources,” “usability”); therefore, the initial coding was guided by the focus of each question and the higher-level groupings of questions (for example, “advantages” and “potential benefits” are semantically related as “positive perceptions”). The initial coding also drew on themes arising from relevant literature (e.g., Nadarzynski et al. 2019; Palanica et al. 2019). The coding involved parsing and colour-coding salient statements and assigning to each statement a code and/or a comment; the comments were either points of uncertainty/discussion or descriptors for a code.
(3)
Developing the framework: The two coders compared and discussed their codes and comments. Synonymous and semantically related codes were merged, and a nomenclature was agreed. A set of overarching categories was formed to group together the codes. Examples of categories included “educating and raising awareness,” “chatbot abilities,” “data privacy and trust,” and so on, which largely correspond to the subsections in Section 4.2 of the research article.
(4)
Applying the framework: The coders applied the set of codes and categories to the corpus. Statements were summarised and linked together.
(5)
Interpretation: The statement summaries, codes, and categories were reviewed in relation to existing research and the aims of the study, and a structure of reporting and presentation was agreed upon.

F Apps and Websites for Mental Health Support Cited by Survey Participants

 
App/Website | Frequency
7 Cups of Tea | 2
Big White Wall | 4
Code Blue | 1
Five Ways to Wellbeing | 1
Google | 1
Headspace | 6
Samaritans | 2
Optimism | 2
Positive affirmation | 1
RCPsych | 3
Lantern | 1
Self-help Anxiety Management | 1
Silvercloud | 2
Talkspace | 1
WebMD | 1
WhatsApp | 1
Total | 30

References

[1]
J. K. Anderson et al. 2017. A scoping literature review of service-level barriers for access and engagement with mental health services for children and young people. Child. Youth Serv. Rev. 77 (2017), 164–176. DOI:
[2]
N. Aoki. 2020. An experimental study of public trust in AI chatbots in the public sector. Govern. Inform. Quart. 37, 4 (2020), 101490. DOI:
[3]
J. Apolinário-Hagen, J. Kemper, and C. Stürmer. 2017. Public acceptability of e-mental health treatment services for psychological problems: a scoping review. JMIR Ment. Health 4, 2 (2017) e10. DOI:
[4]
Petter Bae Brandtzæg, Marita Skjuve, Kim Kristoffer Dysthe, and Asbjørn Følstad. 2021. When the social becomes non-human: young people's perception of social support in chatbots. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI'21). Association for Computing Machinery, New York, NY, USA, Article 257, 1–13.
[5]
D. Bakker et al. 2016. Mental health smartphone apps: Review and evidence-based recommendations for future developments. JMIR Ment. Health 3, 1 (2016) e7. DOI:
[6]
A. de Barcelos Silva et al. 2020. Intelligent personal assistants: A systematic literature review. Expert Systems with Applications. Elsevier Ltd. DOI:
[7]
T. W. Bickmore et al. 2010. Maintaining reality: Relational agents for antipsychotic medication adherence. Interact. Comput. 22, 4 (2010), 276–288. DOI:
[8]
T. W. Bickmore et al. 2016. Improving access to online health information with conversational agents: A randomized controlled experiment. J. Med. Internet Res. 18, 1 (2016) e1. DOI:
[9]
P. Bolton. 2021. Higher education student numbers. In House of Commons Library. Retrieved from https://commonslibrary.parliament.uk/research-briefings/cbp-7857/.
[10]
A. Bresó et al. 2016. Usability and acceptability assessment of an empathic virtual agent to prevent major depression. Exp. Syst. 33, 4 (2016), 297–312. DOI:
[11]
J. Brooke. 1986. SUS-A Quick and Dirty Usability Scale. Retrieved from https://cui.unige.ch/isi/icle-wiki/_media/ipm:test-suschapt.pdf.
[12]
C. Burton et al. 2016. Pilot randomised controlled trial of help4mood, an embodied virtual agent-based system to support treatment of depression. J. Telemed. Telecare. 22, 6 (2016), 348–355. DOI:
[13]
K. Cavanagh and A. Millings. 2013. (Inter)personal computing: The role of the therapeutic relationship in e-mental health. J. Contemp. Psychothe. Springer US 43, 4 (2013), 197–206. DOI:
[14]
Z. Chen, Y. Lu, M. P. Nieminen, and A. Lucero. 2020. Creating a chatbot for and with migrants: Chatbot personality drives co-design activities. In Proceedings of the ACM Designing Interactive Systems Conference.
[15]
E. K. Czyz et al. 2013. Self-reported barriers to professional help seeking among college students at elevated risk for suicide. J. Amer. Coll. Health : J. ACH. NIH Public Access 61, 7 (2013), 398–406. DOI:
[16]
S. D'Alfonso et al. 2017. Artificial intelligence-assisted online social therapy for youth mental health. Front. Psychol. 8, 796. DOI:
[17]
B. G. Druss and L. Dimitropoulos. 2013. Advancing the adoption, integration and testing of technological advancements within existing care systems. Gen. Hospit. Psychiat. 35, 4 (2013), 345–348. DOI:
[18]
K. Easton et al. 2019. A virtual agent to support individuals living with physical and mental comorbidities: Co-design and acceptability testing. J. Med. Internet Res. 21, 5 (2019), e12996. DOI:
[19]
Education Policy Institute. 2018. Prevalence of Mental Health Issues within the Student-aged Population - Education Policy Institute. Retrieved from https://epi.org.uk/publications-and-research/prevalence-of-mental-health-issues-within-the-student-aged-population/#_ftn2.
[20]
D. Eisenberg, E. Golberstein, and S. E. Gollust. 2007. Help-seeking and access to mental health care in a university student population. Med. Care 45, 7 (2007), 594–601. DOI:
[21]
R. Elliott et al. 2011. Empathy. Psychotherapy 48, 1 (2011), 43–49. DOI:
[22]
R. Farzanfar and D. Finkelstein. 2012. Evaluation of a workplace technology for mental health assessment: A meaning-making process. Comput. Hum. Behav. 28, 1 (2012), 160–165. DOI:
[23]
K. K. Fitzpatrick, A. Darcy, and M. Vierhile. 2017. Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): A randomized controlled trial. JMIR Ment. Health 4, 2 (2017), e19. DOI:
[24]
T. Foley and J. Woollard. 2019. The Digital Future of Mental Healthcare and Its Workforce. Retrieved from https://topol.hee.nhs.uk/wp-content/uploads/HEE-Topol-Review-Mental-health-paper.pdf.
[25]
R. Fulmer et al. 2018. Using psychological artificial intelligence (Tess) to relieve symptoms of depression and anxiety: Randomized controlled trial. JMIR Ment. Health 5, 4 (2018), e64. DOI:
[26]
H. Gaffney et al. 2014. Manage your life online (MYLO): A pilot trial of a conversational computer-based intervention for problem solving in a student sample. Behavioural and Cognitive Psychotherapy. Cambridge University Press 42, 06 (2014), 731–746. DOI:
[27]
H. Gaffney, W. Mansell, and S. Tai. 2020. Agents of change: Understanding the therapeutic processes associated with the helpfulness of therapy for mental health problems with relational agent MYLO. Digital Health. SAGE Publications Inc. DOI:
[28]
A. Garavand et al. 2016. Factors influencing the adoption of health information technologies: A systematic review. Electron. Physician, 8, 8 (2016) 2713–2718. DOI:
[29]
P. M. Gardiner et al. 2017. Engaging women with an embodied conversational agent to deliver mindfulness and lifestyle recommendations: A feasibility randomized control trial. Patient Educ. Counsel. 100, 9 (2017), 1720–1729. DOI:
[30]
J. Goodwin et al. 2016. Development of a mental health smartphone app: Perspectives of mental health service users. J. Ment. Health 25, 5 (2016), 434–440. DOI:
[31]
P. Gorczynski et al. 2017. Examining mental health literacy, help seeking behaviours, and mental health outcomes in UK university students. J. Ment. Health Train. Educ. Pract. 12, 2 (2017), 111–120. DOI:
[32]
T. Greenhalgh et al. 2017. Beyond adoption: A new framework for theorizing and evaluating nonadoption, abandonment, and challenges to the scale-up, spread, and sustainability of health and care technologies. J. Med. Internet Res. 19, 11 (2017), e367. DOI:
[33]
S. Greer et al. 2019. Use of the chatbot “vivibot” to deliver positive psychology skills and promote well-being among young people after cancer treatment: Randomized controlled feasibility trial. JMIR mHealth uHealth 7, 10 (2019), e15018. DOI:
[34]
A. Gulliver, K. M. Griffiths, and H. Christensen. 2010. Perceived barriers and facilitators to mental health help-seeking in young people: A systematic review. BMC Psychiat. 10, 1 (2010), 113. DOI:
[35]
M. Hirano et al. 2017. Designing behavioral self-regulation application for preventive personal mental healthcare. Health Psychol. Open 4, 1 (2017). DOI:
[36]
A. Ho, J. Hancock, and A. S. Miner. 2018. Psychological, relational, and emotional effects of self-disclosure after conversations with a chatbot. J. Commun. 68, 4 (2018), 712–733. DOI:
[37]
S. Hoermann et al. 2017. Application of synchronous text-based dialogue systems in mental health interventions: Systematic review. J. Med. Internet Res. 19, 8 (2017), e267. DOI:
[38]
House of Commons Committee of Public Accounts. 2019. Mental Health Services for Children and Young People: Seventy-Second Report of Session 2017–19. Report, together with formal minutes relating to the report. Retrieved from www.parliament.uk.
[39]
B. Inkster, S. Sarda, and V. Subramanian. 2018. An empathy-driven, conversational artificial intelligence agent (WYSA) for digital mental well-being: Real-world data evaluation mixed-methods study. JMIR Mhealth Uhealth 6, 11 (2018), e12106. DOI:
[40]
S. M. Jungmann et al. 2019. Accuracy of a chatbot (Ada) in the diagnosis of mental disorders: comparative case study with lay and expert users. JMIR Format. Res. 3, 4 (2019). DOI:
[41]
R. C. Kessler et al. 2005. Lifetime prevalence and age-of-onset distributions of DSM-IV disorders in the national comorbidity survey replication. Archives Gen. Psychiat. 62, 6 (2005), 593. DOI:
[42]
H.-C. Kim. 2015. Acceptability engineering: The study of user acceptance of innovative technologies. J. Appl. Res. Technol. 13, 2 (2015), 230–237. DOI:
[43]
L. C. Klopfenstein et al. 2017. The rise of bots. In Proceedings of the Conference on Designing Interactive Systems. ACM Press, 555–565. DOI:
[44]
B. P. Knijnenburg and M. C. Willemsen. 2016. Inferring capabilities of intelligent agents from their external traits. ACM Trans. Interact. Intell. Syst. 6 (2016). DOI:
[45]
L. Laranjo et al. 2018. Conversational agents in healthcare: a systematic review. J. Amer. Med. Inform. Assoc. 25, 9 (2018), 1248–1258. DOI:
[46]
M. E. Levin et al. 2016. Web-based self-help for preventing mental health problems in universities: Comparing acceptance and commitment training to mental health education. J. Clin. Psychol. 72, 3 (2016), 207–225. DOI:
[47]
G. M. Lucas et al. 2014. It's only a computer: Virtual humans increase willingness to disclose. Comput. Hum. Behav. 37 (2014), 94–100. DOI:
[48]
G. M. Lucas et al. 2017. Reporting mental health symptoms: Breaking down barriers to care with virtual human interviewers. Front. Robot. AI. 4, 51 (2017). DOI:
[49]
J. H. L. Lui, D. K. Marcus, and C. T. Barry. 2017. Evidence-based apps? A review of mental health mobile applications in a psychotherapy context. Profess. Psychol.: Res. Pract. 48, 3 (2017), 199–210. DOI:
[50]
D. D. Luxton et al. 2011. MHealth for mental health: Integrating smartphone technology in behavioral healthcare. Profess. Psychol.: Res. Pract 42, 6 (2011), 505–512. DOI:
[51]
K. H. Ly, A.-M. Ly, and G. Andersson. 2017. A fully automated conversational agent for promoting mental well-being: A pilot RCT using mixed methods. Internet Interven. 10 (2017), 39–46. DOI:
[52]
T. Ma, H. Sharifi, and D. Chattopadhyay. 2019. Virtual humans in health-related interventions: A meta-analysis. In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems (CHI EA'19). Association for Computing Machinery, New York, NY, USA, Paper LBW1717, 1–6.
[53]
A. Macaskill. 2013. The mental health of university students in the United Kingdom. Brit. J. Guid. Counsel. 41, 4 (2013), 426–441. DOI:
[54]
J. Martínez-Miranda et al. 2019. Assessment of users’ acceptability of a mobile-based embodied conversational agent for the prevention and detection of suicidal behaviour. J. Med. Syst. 43, 8 (2019), 1–18. DOI:
[55]
Michael McTear, Zoraida Callejas, and David Griol. 2016. The Conversational Interface: Talking to Smart Devices (1st ed.). Springer Publishing Company, Incorporated.
[56]
A. Memon et al. 2016. Perceived barriers to accessing mental health services among black and minority ethnic (BME) communities: A qualitative study in southeast England. BMJ Open. 6, 11 (2016) e012337. DOI:
[57]
Mind Charity. 2020. The Mental Health Emergency (2020).
[58]
A. S. Miner et al. 2016. Smartphone-based conversational agents and responses to questions about mental health, interpersonal violence, and physical health. JAMA Intern. Med. 176, 5 (2016), 619. DOI:
[59]
A. S. Miner, A. Milstein, and J. T. Hancock. 2017. Talking to machines about personal mental health problems. JAMA 318, 13 (2017), 1217. DOI:
[60]
J. L. Z. Montenegro, C. A. da Costa, and R. da Rosa Righi. 2019. Survey of conversational agents in health. Exp. Syst. Applic. 129, 56–67. DOI:
[61]
M. E. Morris and A. Aguilera. 2012. Mobile, social, and wearable computing and the evolution of psychological practice. Professional Psychology, Research and Practice 43, 6 (2012), 622–626. DOI:
[62]
R. R. Morris et al. 2018. Towards an artificially empathic conversational agent for mental health applications: System design and user perceptions. J. Med. Internet Res. 20, 6 (2018), e10148. DOI:
[63]
P. Musiat, P. Goldstone, and N. Tarrier. 2014. Understanding the acceptability of e-mental health-attitudes and expectations towards computerised self-help treatments for mental health problems. BMC Psychiatry 14 (2014), 109. DOI:
[64]
T. Nadarzynski et al. 2019. Acceptability of artificial intelligence (AI)-led chatbot services in healthcare: A mixed-methods study. Dig. Health 5 (2019). DOI:
[65]
National Institute for Health and Care Excellence. 2017. Medical technologies evaluation programme process guide—Guidance and guidelines. Retrieved from https://www.nice.org.uk/process/pmg34/chapter/who-is-involved-in-the-medical-technologies-evaluation-programme.
[67]
J. B. Nienhuis et al. 2018. Therapeutic alliance, empathy, and genuineness in individual adult psychotherapy: A meta-analytic review. Psychother. Res. 28, 4 (2018), 593–605. DOI:
[68]
NUS-USI. 2017. NUS-USI Student Wellbeing Research Report 2017. NUS-USI, Belfast. Retrieved from https://nusni.unioncloud.org/resources/nus-usi-student-wellbeing-research-report-2017-cc59.
[70]
A. Paiva et al. 2017. Empathy in virtual agents and robots: A survey. ACM Trans. Interact. Intell. Syst 7, 3 (2017). DOI:
[71]
A. Palanica et al. 2019. Physicians’ perceptions of chatbots in health care: Cross-sectional web-based survey. J. Med. Internet Res. 21, 4 (2019), e12887. DOI:
[72]
S. Park et al. 2019. Designing a chatbot for a brief motivational interview on stress management: Qualitative case study. J. Med. Internet Res. 21, 4 (2019), e12231. DOI:
[73]
P. Philip et al. 2017. Virtual human as a new diagnostic tool, a proof of concept study in the field of major depressive disorders. Sci. Rep. 7, 1 (2017), 42656. DOI:
[74]
P. Philip et al. 2020. Trust and acceptance of a virtual psychiatric interview between embodied conversational agents and outpatients. Npj Dig. Med. 3, 1 (2020). DOI:
[75]
M. D. Pickard, C. A. Roster, and Y. Chen. 2016. Revealing sensitive information in personal interviews: Is self-disclosure easier with humans or avatars and under what conditions? Comput. Hum. Behav. 65, 23–30. DOI:
[76]
C. Pope, S. Ziebland, and N. Mays. 2000. Qualitative research in health care. Analysing qualitative data. BMJ 320, 7227 (2000), 114–116. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/10625273.
[77]
J. Preece, Y. Rogers, and H. Sharp. 2015. Interaction Design: Beyond Human-computer Interaction. 4th ed. John Wiley & Sons, Chichester. Retrieved from https://www.wiley.com/en-gb/Interaction+Design%3A+Beyond+Human+Computer+Interaction%2C+4th+Edition-p-9781119020752.
[78]
S. Provoost et al. 2017. Embodied conversational agents in clinical psychology: A scoping review. J. Med. Internet Res. 19, 5 (2017), e151. DOI:
[79]
B. Rahimi et al. 2018. A systematic review of the technology acceptance model in health informatics. Appl. Clin. Inform. 9, 3 (2018), 604–634. DOI:
[80]
E. M. Randall and B. M. Bewick. 2016. Exploration of counsellors’ perceptions of the redesigned service pathways: A qualitative study of a UK university student counselling service. Brit. J. Guid. Counsel. 44, 1 (2016), 86–98. DOI:
[81]
D. J. Rickwood and V. A. Braithwaite. 1994. Social-psychological factors affecting help-seeking for emotional problems. Soc. Sci. Med. 39, 4 (1994), 563–572. DOI:
[82]
Y. Sakurai et al. 2019. VICA, a visual counseling agent for emotional distress. J. Amb. Intell. Humaniz. Comput. 1–13. DOI:
[83]
M. R. Scholten, S. M. Kelders, and J. E. Van Gemert-Pijnen. 2017. Self-guided web-based interventions: Scoping review on user needs and the potential of embodied conversational agents to address them. J. Med. Internet Res. 19, 11 (2017), e383. DOI:
[84]
J. Sebastian and D. Richards. 2017. Changing stigmatizing attitudes to mental health via education and contact with embodied conversational agents. Comput. Hum. Behav. 73, 2047 (2017), 479–488. DOI:
[85]
M. Sekhon, M. Cartwright, and J. J. Francis. 2017. Acceptability of healthcare interventions: An overview of reviews and development of a theoretical framework. BMC Health Serv. Res. 17, 1 (2017), 88. DOI:
[86]
M. Sekhon, M. Cartwright, and J. J. Francis. 2018. Acceptability of health care interventions: A theoretical framework and proposed research agenda. Brit. J. Health Psychol. 23, 3 (2018), 519–531. DOI:
[87]
H. Shah et al. 2016. Can machines talk? Comparison of Eliza with modern dialogue systems. Comput. Hum. Behav. 58(C), 278–295. DOI:
[88]
T. Shinozaki, Y. Yamamoto, and S. Tsuruta. 2015. Context-based counselor agent for software development ecosystem. Computing 97, 1 (2015), 3–28. DOI:
[89]
S. Suganuma, D. Sakamoto, and H. Shimoyama. 2018. An embodied conversational agent for unguided internet-based cognitive behavior therapy in preventative mental health: Feasibility and acceptability pilot trial’. JMIR Ment. Health 5, 3 (2018), e10454. DOI:
[90]
S. C. Sutherland et al. 2016. Effects of the advisor and environment on requesting and complying with automated advice. ACM Trans. Interact. Intell. Syst. 6, 4 (2016), 27. DOI:
[91]
J. D. Tariman et al. 2011. Validation and testing of the acceptability e-scale for web-based patient-reported outcomes in cancer care. Appl. Nurs. Res. 24, 1 (2011), 53–58. DOI:
[92]
Myrthe L. Tielman et al. 2017. A therapy system for post-traumatic stress disorder using a virtual agent and virtual storytelling to reconstruct traumatic memories. J. Med. Syst. 41, 8 (2017), 125. DOI:
[93]
Myrthe L. Tielman et al. 2017b. How should a virtual agent present psychoeducation? Influence of verbal and textual presentation on adherence. Technol. Health Care: Offic. J. Eur. Societ. Eng. Med. 25, 6 (2017), 1081–1096. DOI:
[94]
K. I. Toivonen, K. Zernicke, and L. E. Carlson. 2017. Web-based mindfulness interventions for people with physical health conditions: Systematic review. J. Med. Internet Res. 19, 8 (2017), e303. DOI:
[95]
The Topol review. Preparing the healthcare workforce to deliver the digital future. An independent report on behalf of the Secretary of State for Health and Social Care. 2019. Health Education England. Retrieved from https://topol.hee.nhs.uk/the-topol-review/.
[96]
J. Torous et al. 2018. Clinical review of user engagement with mental health smartphone apps: Evidence, theory and improvements. Evid.-based Ment. Health 21, 3 (2018), 116–119. DOI:
[97]
C. L. Turvey and L. J. Roberts. 2015. Recent developments in the use of online resources and mobile technologies to support mental health care. Int. Rev. Psychiat. 27, 6 (2015), 547–557. DOI:
[98]
A. N. Vaidyam et al. 2019. Chatbots and conversational agents in mental health: A review of the psychiatric landscape. Canad. J. Psychiat. DOI:
[99]
A. N. Vaidyam, D. Linggonegoro, and J. Torous. 2020. Changes to the psychiatric chatbot landscape: A systematic review of conversational agents in serious mental illness: Changements du paysage psychiatrique des chatbots: une revue systématique des agents conversationnels dans la maladie mentale sérieuse. Canad. J. Psychiat. DOI:
[100]
Viswanath Venkatesh et al. 2003. User acceptance of information technology: Toward a unified view. MIS Quart. 27, 3 (2003), 425. DOI:
[101]
V. A. Wade, J. A. Eliott, and J. E. Hiller. 2014. Clinician acceptance is the key factor for sustainable telehealth services. Qualitat. Health Res. 24, 5 (2014), 682–694. DOI:
[102]
World Health Organization. (2013). Mental Health Action Plan 2013–2020. Retrieved from https://apps.who.int/iris/bitstream/handle/10665/8-9966/9789241506021_eng.pdf?sequence=1.
[103]
World Health Organization .(2018). Mental Health Atlas 2017. Retrieved from https://apps.who.int/iris/bitstream/handle/10665/272735/9789241514019-eng.pdf.
[104]
L. Yardley et al. 2015. The person-based approach to intervention development: Application to digital health-related behavior change interventions. J. Med. Internet Res. 17, 1 (2015), e30. DOI:
[105]
K. Yokotani, G. Takagi, and K. Wakashima. 2018. Advantages of virtual agents over clinical psychologists during comprehensive mental health interviews using a mixed methods design. Comput. Hum. Behav. 85 (2018), 135–145. DOI:

Author Tags

  1. Chatbots and conversational agents
  2. artificial intelligence
  3. innovations in mental health systems
  4. user-centred design
