1 Introduction
Mental health conditions are a growing global public health concern and are among the leading causes of disability [World Health Organization 2018]. Young people are particularly vulnerable: suicide is the second most common cause of death globally, and the most common in the UK, among this age group [World Health Organization 2013; Office for National Statistics 2018], and the majority of mental health problems are established by the age of 25 [Kessler et al. 2005]. Moreover, suicide rates and the number of young adults experiencing mental health issues have risen in the past decade [Randall & Bewick 2016; Office for National Statistics 2018]. The prevalence of mental health conditions in young adults, and its upward trend, has been attributed to increasing financial and life pressures and uncertainty [Education Policy Institute 2018] and has been exacerbated by the COVID-19 pandemic [Mind Charity 2020]. Several effective mental health interventions exist, and national health services, further and higher education institutions, and large employers often provide access to counselling, diagnosis, and treatment. However, the majority of young adults with mental health conditions do not seek help [Macaskill 2013; Gorczynski et al. 2017], and several barriers have been found to impede their access to mental health care, including lack of awareness, stigma, and limited availability [Eisenberg, Golberstein and Gollust 2007; Gulliver, Griffiths and Christensen 2010; Czyz et al. 2013; Levin et al. 2016; D'Alfonso et al. 2017; House of Commons Committee of Public Accounts 2019; Jungmann et al. 2019].
Health organisations, researchers, and professionals have recognised the potential of technology to support and enhance mental health care [Foley & Woollard 2019], particularly for young people [D'Alfonso et al. 2017]. Mental health technologies are designed either as standalone interventions or as tools that complement the services or treatment provided by professionals. Such technologies include online resources, programmes and communities, and mobile phone applications (apps). Of these, websites provide an effective and inexpensive way to deliver information and advice [Levin et al. 2016; Toivonen et al. 2017]. There are also web-based services that offer access to peer support through online communities, resources for self-diagnosis and management, online courses, and therapy sessions with counsellors, which can take place via text messaging, audio, or video. Mental health apps are rapidly expanding in number, functions offered, and popularity. Their popularity is largely driven by the prominence of smartphones in daily life: these apps are deployed on a platform that is inherently more personalised, more multimedia-driven, and, most significantly, remains with the individual at all times. Such characteristics are argued to facilitate engagement, motivation, and adherence [Lui et al. 2017].
In recent years, there has also been renewed interest in conversational agents, for example, chatbots and digital/virtual assistants. Conversational agents are technologies that enable user interaction by means of natural language; mainstream examples include Apple's Siri, Facebook's M, Google Assistant, and Amazon's Alexa. Chatbots, in particular, commonly support text-based conversation or clickable responses and are designed to resemble instant messaging applications. Chatbots may be deployed on familiar platforms, such as Facebook and Skype, that people use for social communication with friends and family, and whose interface is well understood and particularly popular with young adults [Klopfenstein et al. 2017]. Driven by advances in the underlying technologies, conversational agents hold the promise of enabling natural, “human-like” interactions with the user [McTear et al. 2016] and have been successful in the domains of education and e-commerce. The question that naturally arises is whether conversational agents also have a place in mental health management. There is evidence that they may: a 2019 independent report on behalf of the UK Government, part of “The Topol Review”, singles out chatbots as a key technology poised to transform mental healthcare in the near future and envisions them as automated or semi-automated therapeutic and diagnostic tools [Foley and Woollard 2019].
However, numerous issues around the interaction between individuals with mental health difficulties and conversational agents in such contexts remain unknown and require exploration [de Barcelos Silva et al. 2020]; this is because the experience of users, and developers alike, with conversational agents is mostly derived from “task-oriented” interaction domains (e.g., booking a flight, ordering food, playing music, and controlling the heating), which are, across several dimensions, not comparable to the domain of mental health [Morris et al. 2018].
There is a growing number of mental health-oriented chatbots targeting a variety of mental health difficulties, with functions ranging from providing education and self-help techniques to offering diagnosis and counselling [Vaidyam et al. 2019]. Recent, preliminary research shows positive outcomes, particularly in terms of efficacy. For example, the Woebot chatbot reduced symptoms of stress and anxiety within two weeks [Fitzpatrick et al. 2017], while frequent users of the Wysa chatbot also reported lower levels of depression [Inkster et al. 2018]. Woebot and Wysa are examples of chatbots that use approved techniques, such as cognitive behavioural therapy (CBT), and mental health professionals are often part of their development or advisory teams. However, a 2018 systematic review of conversational agents in healthcare found that evaluations of user experience and perceptions, including acceptability, were scarce [Laranjo et al. 2018].
The use of chatbots in mental health is an emerging field of research, and they are an innovative and, arguably, disruptive technology. Unlike websites and mobile phone apps, conversational agents are a much less familiar technology to users, so their acceptability can, at present, only be speculated upon. Moreover, unlike conversational agents in other domains, in the domain of health more serious concerns become pertinent; these relate to the performance of the chatbot, the safety of the user, and data security/privacy, all of which are anticipated to affect acceptability [Palanica et al. 2019; Nadarzynski et al. 2019]. The acceptability of AI-based technology, such as chatbots, can also be undermined by a lack of public trust and by arguments that AI poses a threat to human employment [Aoki 2020]. It has, in fact, been argued that acceptability is rarely considered when designing innovative technologies [Kim 2015]. Yet, in the domain of healthcare interventions, acceptability is a necessary precondition for the effectiveness of an intervention [Sekhon et al. 2017] and, for this reason, it has been increasingly emphasised in guidelines from health organisations (for example, the Medical Research Council in the UK). Acceptability considerations should encompass all stakeholders and user groups; the Topol report advocates the involvement of mental health patients and staff in the design process of chatbot applications to ensure that the technology is usable, accessible, and acceptable to them. This approach mitigates the risk of the technology creating new barriers to care for patients and added burdens for staff [Foley and Woollard 2019; Topol Review 2019].
This article therefore aims to explore whether conversational agents, and, in particular, chatbots, present an acceptable solution to support young adults with mental health conditions. The article adopts the following definition of acceptability, developed by Sekhon, Cartwright and Francis [Sekhon et al. 2017, 2018]: “the extent to which people delivering or receiving a healthcare intervention consider it to be appropriate, based on anticipated or experienced cognitive and emotional responses to the intervention.” In particular, the study focuses on young adults and counsellors in a university context, and on their responses to a chatbot as a mental health intervention.
The study achieves its aim through three exploratory research activities (activities 1 and 3 were approved through the university ethics process):
(1) A survey study with young adults, to enable a better understanding of the “users” by exploring issues around their mental health and their experience with, and perceptions of, mental health technology, including chatbots.
(2) A literature review, to synthesise current empirical evidence relating to the acceptability of mental health chatbots.
(3) Interviews with counsellors whose work is largely with young adults, based on their use of a chatbot prototype and user-centred design (UCD) methods, to produce insights into the acceptability of chatbots from the perspective of mental health professionals.
5 Discussion
5.1 Key Outcomes and Links to Existing Research
A set of five common key outcomes can be seen to emerge from the three research activities (survey with young people; literature review; and interviews with counsellors), which are listed in Table 5. In this section, these key outcomes are discussed in relation to previous work in the field.
The first research objective was to understand the perspective and psychosocial context of young adults [Yardley et al. 2015]. This exploratory study adds to our understanding of the current state of young adults’ mental health, their poor perceptions of mental health technology and its low levels of adoption, and, most importantly, what this group requires from such technologies. These findings motivate the need for a technology-based solution that provides self-help, education, information, and support quickly, and that helps young adults to connect with professional services in an interactive, personalised, and usable way. The value of involving end-users in gathering requirements has been illustrated by Goodwin et al. [2016] and Greer et al. [2019] and has been argued to be key to the success of mental health technology [Torous et al. 2018]. Finally, the results suggest that young adults are familiar with, and have positive attitudes towards, mental health chatbots, offering some preliminary support for the acceptability of this technology.
The second research objective was to identify and review the latest studies of conversational agents in mental health that reported user evaluations addressing aspects of acceptability. Despite the heterogeneity of the evaluations, all of the studies indicated positive outcomes, providing initial support for the acceptability of this technology. In addition to describing the current state of knowledge, the review also points to priorities towards which research efforts should be directed. In particular, it identifies the need for the operationalisation of acceptability and the application of standardised methods for user evaluations of mental health chatbots. Moreover, it is argued that, given that healthcare is a complex socio-technical system, appropriate UCD methodologies should be employed that involve all stakeholders, from requirements gathering to final evaluation. The studies by Chen et al. [2020] and Easton et al. [2019] illustrate how UCD can be applied in the design and development of chatbots; in these studies, stakeholders participated in co-design activities such as surveys, workshops, empathy probes, and Wizard-of-Oz experiments. Such methodologies are more likely to deliver a nuanced understanding of the role that chatbots could best serve.
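To make the Wizard-of-Oz element of such co-design activities concrete, the sketch below outlines how a session of this kind could be run: the participant types messages believing they are chatting with a bot, while a human “wizard” composes each reply. This is a minimal, hypothetical Python sketch written for illustration; the function names and prompts are assumptions and do not reproduce the set-ups used by Chen et al. [2020] or Easton et al. [2019].

```python
# Minimal Wizard-of-Oz sketch (illustrative only): the participant believes they are
# chatting with a bot, while a human "wizard" types each reply. In a real study the
# wizard would sit at a separate console; here both roles share one terminal for brevity.

import datetime

def log_turn(log, speaker, text):
    """Record each turn with a timestamp for later qualitative analysis."""
    log.append({"time": datetime.datetime.now().isoformat(),
                "speaker": speaker, "text": text})

def wizard_of_oz_session():
    transcript = []
    opening = "Hi, I'm here to listen. How are you feeling today?"
    print(f"Chatbot: {opening}")
    log_turn(transcript, "bot", opening)
    while True:
        participant = input("You: ")
        if participant.lower() in {"quit", "exit"}:
            break
        log_turn(transcript, "participant", participant)
        # The wizard reads the participant's message and composes the reply
        # that will be presented as if it came from the chatbot.
        reply = input("[wizard only] Type the bot's reply: ")
        log_turn(transcript, "bot", reply)
        print(f"Chatbot: {reply}")
    return transcript

if __name__ == "__main__":
    session_log = wizard_of_oz_session()
```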
The interviews with the counsellors provided support for the acceptability of a mental health chatbot for young adults and offered insights into its utility. Relevant to the questions posed in Hoermann et al. [2017], a chatbot was deemed best suited to mild to moderate mental health conditions. Regarding the role of chatbots, the counsellors saw them as complementary to, and closely integrated with, existing services, and not as a substitute or a standalone application. In line with previous research that surveyed non-specialist physicians, chatbots were perceived by the counsellors as more suitable for administrative roles, while more complex and “interpersonal” activities, such as treatment, should be carried out by human staff [Palanica et al. 2019]. Offering a more usable, interactive, and proactive platform would also encourage young adults to better understand and attend to their mental health and would facilitate their access to care. Interactivity and empathy were cited by the counsellors, as well as in the literature, as characteristics crucial for user engagement with mental health technologies [Morris & Aguilera 2012; Bae Brandtzæg et al. 2021].
The counsellors suggested that chatbots have the potential to improve access to mental health-related information, increase awareness, and reduce barriers, owing to the higher interactivity and usability afforded by the interface. This argument is corroborated by a study that compared a conversational agent-based search interface to a typical search engine interface for finding health-related information; the study found that the conversational agent was associated with better search results and higher user satisfaction and experience [Bickmore et al. 2016]. Another important finding of that study was that the benefit of using the conversational agent was more pronounced for the group with poor “health literacy.” Similarly, an embodied conversational agent (ECA) led to improvements in health literacy and helped reduce the stigmatisation associated with the mental health condition of anorexia nervosa in Sebastian & Richards [2017]. Taken together, these results suggest that chatbots can play an important role in improving awareness and diagnosis of mental health conditions, which remain poorly understood, especially within certain populations [Memon et al. 2016].
Along with the identified benefits, the counsellors flagged considerations that need to be addressed before such technology is adopted for mental health. First, they felt that the acceptability of chatbots depends on the capabilities of the underlying technology, in terms of natural language understanding and adaptability to the individual. Second, the counsellors stated that young adults may over-rely on chatbots for their treatment and turn away from professional services; they therefore suggested that regular assessment of the client's progress and close integration with face-to-face support are required to minimise the possibility of overreliance on a single contact point. Moreover, chatbots should be designed so that their role and capabilities are clearly delineated. Finally, the counsellors emphasised the need for regulation and for transparency regarding how data is used, as well as for research to establish the effectiveness of chatbots. In summary, chatbots are viewed as capable of streamlining administrative tasks and of educating, motivating, and supporting people, but they cannot replace professional services. Legislation, evidence-based evaluation, and integration with existing structures are considered preconditions for their adoption.
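To make the second and third of these considerations concrete, the following sketch outlines one possible way a chatbot could delineate its remit and hand over to face-to-face support. It is a minimal, hypothetical Python example; the keyword list, threshold, and messages are placeholder assumptions, not a clinically validated triage method and not the design of any chatbot discussed in this article.

```python
# Illustrative sketch only: one way a chatbot could delineate its role and hand over to
# human support, as the counsellors suggested. Keyword lists, thresholds, and wording are
# hypothetical placeholders, not a clinically validated triage procedure.

ESCALATION_KEYWORDS = {"suicide", "self-harm", "hurt myself"}   # placeholder terms
SESSIONS_BEFORE_REVIEW = 5                                       # placeholder threshold

def respond(message: str, session_count: int) -> str:
    text = message.lower()
    # Safety first: anything suggesting acute risk is routed to human services.
    if any(keyword in text for keyword in ESCALATION_KEYWORDS):
        return ("I'm not able to help with this on my own. "
                "I can connect you with a counsellor now, or you can call "
                "your local crisis line.")
    # Periodic progress review keeps the chatbot integrated with face-to-face care.
    if session_count >= SESSIONS_BEFORE_REVIEW:
        return ("We've talked a few times now. Would you like me to book a "
                "check-in with a counsellor to review how things are going?")
    # Otherwise, stay within the chatbot's stated remit: self-help and information.
    return ("I can share a short breathing exercise or some information about "
            "managing stress. Which would you prefer?")

# Example usage
print(respond("I've been feeling stressed about exams", session_count=2))
```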
5.2 Comparing the Perceptions of Counsellors and Young Adults
Drawing on the survey, the interviews, and related work, it is possible to identify the points at which the perspectives of young adults and mental health professionals intersect and diverge. The results of the survey and interviews point to a shared set of perceptions about the role and function of mental health technology/chatbots.
The role of mental health technology/chatbots is to act as an instant and always-available source of support, in the form of self-monitoring, self-care, and information. It was clear from the responses of the survey participants and the counsellors that mental health technology/chatbots should not replace professional support; rather, it should be integrated with counselling and professional services and facilitate access to them when necessary.
Features seen by both user groups as useful and desirable include self-help techniques, access to information about mental health conditions, and enlisting professional support. Most importantly, personalisation appears to be the overarching principle for both groups: the responses of the chatbot must be tailored to the individual, drawing on user data (for example, user profile and location) and the history of the interactions with the user.
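As an illustration of this personalisation principle, the sketch below tailors a chatbot's reply using a simple user profile and the history of previous interactions. It is a hypothetical Python example; the data fields, templates, and helper names are assumptions introduced here for illustration and do not correspond to any of the chatbots evaluated in this study.

```python
# A minimal sketch of the personalisation principle described above: replies are tailored
# using a user profile and the history of previous interactions. Data fields and message
# templates are hypothetical, for illustration only.

from dataclasses import dataclass, field
from typing import List

@dataclass
class UserProfile:
    name: str
    campus: str                                   # e.g., used to suggest nearby services
    preferred_techniques: List[str] = field(default_factory=list)

@dataclass
class Interaction:
    topic: str
    helpful: bool

def personalised_reply(profile: UserProfile, history: List[Interaction]) -> str:
    # Prefer techniques the user has previously found helpful.
    helpful_topics = [i.topic for i in history if i.helpful]
    if helpful_topics:
        technique = helpful_topics[-1]
        return (f"Hi {profile.name}, last time the {technique} exercise seemed to help. "
                f"Would you like to try it again?")
    # Otherwise, fall back to stated preferences or local signposting.
    if profile.preferred_techniques:
        return (f"Hi {profile.name}, shall we try a {profile.preferred_techniques[0]} "
                f"exercise today?")
    return (f"Hi {profile.name}, the counselling service on the {profile.campus} campus "
            f"runs drop-in sessions if you'd like to talk to someone.")

# Example usage
profile = UserProfile(name="Sam", campus="Main", preferred_techniques=["breathing"])
history = [Interaction(topic="grounding", helpful=True)]
print(personalised_reply(profile, history))
```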
A point of departure between the perceptions of mental health professionals and young adults may lie in the issues of confidentiality and of data privacy and protection. The counsellors in this study expressed serious concerns around these issues, in line with past research [Nadarzynski et al. 2019; Palanica et al. 2019]. However, a recent study that captured the perceptions of young people about chatbots revealed that data privacy and trust were less important to them; participants were not concerned about companies handling their personal data and conversations, stated that they could more easily trust, and confide in, a system than a human, and felt that chatbots offered them anonymity [Bae Brandtzæg et al. 2021].
Table 6 summarises the perceived functions, roles, and concerns of young adults and mental health professionals.
5.3 Limitations and Future Work
There are limitations associated with each of the reported research activities. The first relates to the convenience sample used in the survey; the survey was undertaken at a single UK university site, which threatens the validity of the research and raises questions about whether the results reflect the UK university population and the wider young adult population. One consideration is that over half (52%) of the young adult population in the UK is in higher education [Bolton 2021], suggesting that the findings could extend to the wider young adult population. To investigate this further, socioeconomic, ethnicity, sexual orientation, and disability demographic information was collected for the study site university, the UK university sector, and the general young adult population, given that these factors have been associated with mental health difficulties in previous research. The study site university is largely in line with the sector and the young adult population in terms of socioeconomic deprivation level, disability, and sexual orientation. In terms of ethnicity, however, the study site university has a much more diverse student population than the sector and national averages, which could be partially explained by the location of the university (London).
Still, the findings of this study are consistent with the NUS-USI survey and with previous research on the mental health of young adults of different nationalities, which gives some confidence that the results are valid and generalisable, at least to a certain extent.
A second issue relates to the broad scope of the questionnaire. The survey aimed to explore a range of issues around mental health and mental health technology, and, as such, the questions were not designed to capture fine-grained information about these issues. For example, one of the questions asked respondents to rate the helpfulness of mental health support using a Likert scale. However, any type of mental health support encompasses a multitude of different elements, each of which could be helpful or unhelpful. Future work should employ a questionnaire instrument or data collection approach designed with a finer level of granularity to allow such issues to be more effectively explored.
A third issue relates to the review of the literature, which did not attempt to assess the quality of the user evaluations of the studies that it summarised, for example, in terms of methodology. A systematic literature review with explicit quality criteria around user evaluation could identify best practices and facilitate the development of standardised evaluation methods.
Finally, an important limitation of the study is the interview sample size. The sample of three counsellors was drawn from a small population of six counsellors at a single site. Because of the exploratory nature of this research, this sample size was not deemed problematic. The small sample permitted intense scrutiny of the data, which, in turn, produced rich insights and clear concepts that align with previous findings. Most importantly, the conclusions, albeit tentative, create a foundation for focused hypotheses and can instigate further, much-needed study of the acceptability of mental health chatbots. Still, generalisability cannot be assumed, and the sample is unlikely to represent the full spectrum of counsellors, such as those working at different sites in the UK and in different countries and societies, or mental health professionals in other services. Hence, interviews with counsellors from different areas of the UK and worldwide would be valuable. Similarly, mental health experts and other professionals “in the field” should also be consulted in future studies.