


Cogn Behav Pract. Author manuscript; available in PMC 2018 Nov 1.
Published in final edited form as:
PMCID: PMC5648068
NIHMSID: NIHMS764095
PMID: 29056845

Usability of a Smartphone Application to Support the Prevention and Early Intervention of Anxiety in Youth

Abstract

Anxiety disorders are among the most common psychiatric problems in youth, fail to spontaneously remit, and place some youth at risk for additional behavioral and emotional difficulties. Efforts to target anxiety have resulted in evidence-based interventions but the resulting prevention effects are relatively small, often weakening over time. Mobile health (mHealth) tools could be of use to strengthen the effects of anxiety prevention efforts. Although a large number of mHealth apps have been developed, few have been evaluated in terms of usability prior to clinical effectiveness testing. Because usability is one of the main barriers to mHealth usage and adoption, the objective of this research was to evaluate the usability of a smartphone application (app) corresponding to an indicated prevention and early intervention targeting youth anxiety. To accomplish this, 132 children (M age = 9.65; 63% girls) and 45 service providers (M age = 29.13, 87% female) rated our app along five established dimensions of usability (ease of use, ease of learning, quality of support information, satisfaction, and stigma) using a standardized group-based testing protocol. Findings showed that the app was highly and positively rated by both youth and providers, with some variations (lower ratings when errors occurred). Path analyses findings also showed that system understanding was significantly related to greater system satisfaction, but that such relation occurred through the quality of support information offered by the app.

Introduction

Anxiety disorders are among the most prevalent psychiatric problems in children, with rates ranging from 5% to 10% and as high as 25% in adolescents (Angold et al., 1999; Kessler et al., 2005; Turner, Beidel, & Costello, 1987). Moreover, anxiety disorders cause significant impairment, fail to spontaneously remit, and are prospectively linked to clinical depression and problematic substance use for some youth (Aschenbrand, Kendall, Webb, Safford, & Flannery-Schroeder, 2003; Beidel & Turner, 1988; Beidel et al., 2007; Cummings, Caporino, & Kendall, 2013). As a result, considerable strides have been made to develop strategies for the prevention of anxiety disorders (Anticich, Barrett, Silverman, Lacherez, & Gillies, 2013; Lowry-Webster, Barrett, & Dadds, 2001; Pina, Zerr, Villalta, & Gonzales, 2012). Despite progress, effect sizes for anxiety prevention interventions are relatively small to moderate, often attenuating over time (Fisak, Richard, & Mann, 2011; Teubert & Pinquart, 2011). We believe, however, that prevention effects could be dramatically improved by increasing the dosage of intervention components theorized to disrupt pathways associated with child anxiety disorder development (e.g., reducing avoidant coping, Essau, Conradt, Sasagawa, & Ollendick, 2012; reducing negative self-talk, Kendall & Treadwell, 2007; Treadwell & Kendall, 1996). In fact, increasing the dosage of intervention components could be achieved via mobile health (mHealth) tools because these can offer: (a) on-demand access to review strategies, (b) notifications designed to promote practice, (c) gamification to increase engagement and appropriate use of strategies for managing anticipated anxiety-provoking situations, (d) personalized and tailored intervention schedules, and (e) data-driven corrective feedback. Despite these advantages, the large majority of mHealth tools (for anxiety or otherwise) have not been studied (Curioso & Mechael, 2010; Nielsen et al., 2012). 
Thus, the objective of this research was to conduct an initial evaluation of a smartphone application (app) corresponding to an indicated prevention and early intervention targeting anxiety in youth by focusing on its usability.

The REACH mHealth Application

REACH for Success (hereafter referred to as REACH) is an indicated prevention and early intervention targeting anxiety in youth. REACH is an exposure-based cognitive-behavioral protocol delivered in 6 sessions, each 20-30 minutes in length, and administered in a group format. REACH uses the core exposure-based cognitive and behavioral procedures common to the protocols typically evaluated via randomized controlled trials (RCTs) (Pina et al., 2012; Silverman, Kurtines, Jaccard, & Pina, 2009; Silverman & Kurtines, 1996). This first generation of the REACH app was designed to provide support for out-of-session practice of intervention skills rather than act as a stand-alone platform, as some have suggested that implementation of child anxiety interventions probably requires interventionist involvement (e.g., relevant to training in cognitive restructuring) (Pramana, Parmanto, Kendall, & Silk, 2014). Our efforts in developing the REACH app were guided by a User and Subject Matter Expert Centered Design (Galer, Harker, & Ziegler, 1992) that utilized personas, iterative prototyping, and expert feedback from an advisory board comprised of practicing social workers, school psychologists, and counselors (see Patwardhan et al., 2015, for more details). The Android app was self-contained; it did not rely on communication services to offload data storage or real-time processing. Instead, the focus was on leveraging the device as a dosage vehicle for intervention and data collection. In terms of technology features, we included speech capture, thematic and age-appropriate media, gaming (e.g., progressive reward incentives), notifications presented to the target user on fixed (daily, time-based) and adaptive (based on user interactions) schedules, password-based authentication for adults (e.g., interventionists, parents, teachers), an on-device database to store user responses and actions (e.g., to estimate alarm fatigue, motivation, and clinical content such as ratings of distress associated with an anxiety-provoking situation), and a data export feature (CSV files).
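The on-device storage and export features described above can be sketched as follows. This is only an illustrative sketch: the actual REACH app is an Android application, and the table name, column names, and sample entry below are hypothetical rather than taken from the REACH codebase.

```python
import csv
import sqlite3

# Hypothetical schema mirroring the on-device store described above:
# each user response/action is logged with a timestamp for later export.
conn = sqlite3.connect(":memory:")  # an on-device database file in a real app
conn.execute(
    """CREATE TABLE responses (
           user_id TEXT, activity TEXT, response TEXT, created_at TEXT)"""
)
conn.execute(
    "INSERT INTO responses VALUES (?, ?, ?, ?)",
    ("child01", "daily_diary", "Felt nervous before the spelling test",
     "2016-04-12T15:30"),
)

def export_csv(conn, path):
    """Data export feature: dump all stored responses to a CSV file."""
    rows = conn.execute("SELECT * FROM responses").fetchall()
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["user_id", "activity", "response", "created_at"])
        writer.writerows(rows)
    return len(rows)

export_csv(conn, "reach_export.csv")
```

Keeping the store local and exporting flat CSV files, as the paper describes, avoids any dependence on network services while still letting researchers retrieve usage and clinical data from the device.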

Interaction Design and Information Modeling

Turning to user interaction design and content, and as shown in Figure 1, when a user selects the REACH app from the home screen, the landing page shows five activities (Relaxation, Daily Diary, S.T.O.P., S.T.I.C.s, and Worryheads). In the design, Relaxation is delivered via audio (e.g., breathing, muscle relaxation) (see Figure 1a), while Daily Diary and S.T.O.P. (Silverman & Kurtines, 1996; Silverman & Pina, 2008) are fillable forms that use speech capture, keyboard, or both, with each response stored in a SQLite database on the device (see Figures 1b and 1c). S.T.I.C. (Show That I Can) (Kendall & Barmish, 2007) scenarios present a list of events or situations that are typically anxiety-provoking to youth (e.g., read aloud in front of the class, ask the teacher a question or for help) based on the Anxiety Disorders Interview Schedule for Children (Silverman & Albano, 1996) (see Figure 1d), with a password-based unlock feature for adults who provide electronic “stamps of approval” when S.T.I.C.s are successfully completed by users. Worryheads is an activity with preselected ambiguous situations and possible negative thoughts (“S” and “T”) based on the Children's Negative Cognitive Errors Questionnaire (Leitenberg, Yost, & Carroll-Wilson, 1986), in response to which the user is asked to select an appropriate alternative thought from a prepopulated menu (see Figure 1e).

Figure 1. REACH app Content and Activities

A gender-neutral, animated character in the form of a blob guides the five activities, delivers notifications, and praises the user (see Figure 1f). In addition, the user can tap directly on the blob and be taken to a table-oriented layout of progressive and leveled “tricks” the blob can perform (see Figure 1g), but only when the user completes an intervention activity (e.g., Relaxation). The design of the blob incorporated the “Proteus effect,” which posits that animated representations that reward the user for positive behavior provide increased motivation to perform activities that promote the desired behavior change (Yee & Bailenson, 2007). Overdue activities are highlighted by a soft gold pulsing glow on the landing page to provide a visual cue for the user (see Figure 1h). Further, and as shown in Figure 2, the app includes a specific multi-tap sequence combined with a password that unlocks configuration settings controlling the export of data, establishing a start date (see Figure 2a), enabling/disabling activities (see Figure 2b), modifying the planned dosage (e.g., number of times relaxation should be practiced) for a given week (see Figure 2c), assigning notification times (see Figure 2d), and scheduling trick release (see Figure 2e).
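The hidden admin unlock described above (a multi-tap sequence followed by a password) can be sketched as below. The tap count, timing window, and hashed password check are illustrative assumptions; the paper does not specify these parameters.

```python
import hashlib
import time

REQUIRED_TAPS = 5          # assumed tap count (not specified in the paper)
TAP_WINDOW_SECONDS = 3.0   # assumed window within which taps must arrive
ADMIN_HASH = hashlib.sha256(b"example-password").hexdigest()  # hypothetical

class AdminGate:
    """Gate the configuration screen behind a tap sequence plus password."""

    def __init__(self):
        self.taps = []

    def register_tap(self, now=None):
        """Record a tap; return True once the multi-tap sequence completes."""
        now = time.monotonic() if now is None else now
        # Keep only taps inside the rolling time window.
        self.taps = [t for t in self.taps if now - t <= TAP_WINDOW_SECONDS]
        self.taps.append(now)
        return len(self.taps) >= REQUIRED_TAPS  # time to prompt for password

    def unlock(self, password):
        """Check the adult's password against the stored hash."""
        return hashlib.sha256(password.encode()).hexdigest() == ADMIN_HASH
```

Combining an obscure gesture with a password keeps configuration settings (dosage, notification times, data export) out of reach of child users without adding visible admin UI to the landing page.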

Figure 2. REACH app Admin Options

REACH Usability Evaluation

The International Organization for Standardization (ISO) 9241-210 standard (ISO, 2009) and the ISO and International Electrotechnical Commission (IEC) 9126 standard (ISO/IEC, 2001) guided the initial user experience design of the REACH app. Based on these standards, we operationalized usability evaluation as the degree to which a user of the app can achieve the goals of the REACH protocol with effectiveness, efficiency, and satisfaction. According to the ISO/IEC standards and conceptual models of mHealth development, usability is a characteristic of quality of use and has several measurable sub-dimensions (Brown, Yen, Rojas, & Schnall, 2013; Nielsen, 1994; Matthews, Doherty, Coyle, & Sharry, 2008). The sub-dimensions include ease of use, ease of learning, quality of support information, satisfaction, and social acceptability and these are the dimensions we examined via quantitative analytics. With this approach, we wanted to answer two pragmatic questions: Is the REACH app usable? Which aspects of the youth user experience could be targeted to improve the REACH app?

Conducting usability evaluation in the early phases of technology design and development is important for several reasons. First, poor usability is one of the main barriers to adoption and usage, especially in the case of mobile apps for youth users (Chiu & Eysenbach, 2010; Sheehan, Lee, Rodriguez, Tiase, & Schnall, 2012). Second, poor usability typically reflects systems that are difficult to learn, poorly designed, and complicated, to the extent that these systems can lead to reduced engagement and usage because critical content may not be presented effectively (Jaspers, 2009; Maguire, 2001). Third, usability evaluation can inform the need for additional input from users and/or experts (e.g., prevention specialists, health care professionals), the nature of iterations that might be considered, the necessity for user training and support, and the extent to which greater in-depth testing is required prior to examination in larger scale RCTs (Jaspers, 2009; Jacobs & Graham, 2015; Zapata, Fernandez-Aleman, Idri, & Toval, 2015). Collectively, this evaluation of usability for the REACH app was viewed as a necessary initial step to ensure its functionality is appropriately designed, acceptable, and usable with the target population prior to evaluating clinical effectiveness (Brown et al., 2013; O’Malley et al., 2014; Wolf et al., 2013).

Methods

Participant recruitment and procedure

With Institutional Review Board approval, a total of 390 parents (primary caregivers, legal guardians) and 74 providers received a letter explaining the nature of this research and the two-week timeline for participation. Of the parents contacted, 34% provided consent for their child, and every child with parent consent provided assent (n = 132); approximately 61% of providers consented to participate in the usability trial. These rates of consent reflect that the entire study (i.e., send recruitment letter, receive consent/assent, conduct usability evaluation) was scheduled to start and end over the course of two weeks, prior to summer vacation. Youth with consent/assent were escorted by a school liaison to a classroom where usability evaluation procedures were implemented by three trained research assistants; providers assembled at a classroom or office for the study. Usability evaluation activities with both youth and providers were conducted in a group format.

Participants were given an envelope containing a smartphone device preloaded with the REACH app and a questionnaire. Instructions and usability items were read aloud across nine administrations of the procedures (6 with youth; 3 with providers). Participants were directed to: (1) listen to the Relaxation mp3; (2) play the Worryheads game, (3) respond to part 1 of the survey, (4) write a Diary or S.T.O.P. entry, (5) respond to part 2 of the survey, (6) interact with the blob, and (7) respond to part 3 of the survey. Procedures 1, 2, 4, and 6 lasted 2 minutes each while responding to survey items was not timed; each implementation of the testing procedures lasted 20 to 30 minutes. A total of 29 users encountered one or more difficulties related to software, hardware, and/or user knowledge during the evaluation procedures. When difficulties occurred, users were assisted by a trained research assistant who resolved the issue. Every such instance was documented by a research assistant, including participant ID and nature of the issue, and was considered in the analyses.

Usability Measures

The Usefulness, Satisfaction, and Ease of Use Questionnaire (USE; Lund, 2001) and the Reactions to Program Scale (RPS; stigma subscale; Rapee et al., 2006) were slightly modified and combined into one measure to assess the five dimensions of usability outlined by the ISO/IEC and typical standards noted in the literature (Brown et al., 2013; Nielsen, 1994; Matthews et al., 2008): ease of use, quality of support information, ease of learning, satisfaction, and social acceptability. The latter was measured as stigma via RPS items, given the mental health content of the REACH app. Table 1 shows the items used in this research to ascertain usability in language youth could understand and specifically for the technology we developed. Respondents used the 10-point rating scale (1 = “not at all” to 10 = “very much”) from the RPS to rate each item. Consistent with past research, alpha reliabilities were good to excellent in the present sample, and responses to items were summed to yield the following indices: system ease of use (11 items; α = 0.90), quality of support information (3 items; α = 0.78), system ease of learning (4 items; α = 0.91), system satisfaction (4 items; α = 0.88), and stigma (4 items; α = 0.83) scale scores. The overall usability score (22 items; α = 0.93) was calculated by subtracting the stigma score from the sum of the system ease of use, quality of support information, system ease of learning, and system satisfaction scores.
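To make the scoring concrete, the sketch below computes the five subscale scores and the overall usability score. One assumption is worth flagging: because the reported subscale scores fall on a 0-10 metric despite 10-point items, each subscale is computed here as the mean of its items (a rescaled sum) rather than a raw sum; item groupings follow Table 1.

```python
# Item numbers follow Table 1. Subscale scores are item means on the
# 1-10 metric (an assumption; the text says "summed", but the reported
# 0-10 subscale ranges match item means).
SUBSCALES = {
    "ease_of_use": range(1, 12),        # items 1-11
    "support_info": range(12, 15),      # items 12-14
    "ease_of_learning": range(15, 19),  # items 15-18
    "satisfaction": range(19, 23),      # items 19-22
    "stigma": range(23, 27),            # items 23-26
}

def score(responses):
    """responses: dict mapping item number -> rating (1-10)."""
    scores = {
        name: sum(responses[i] for i in items) / len(items)
        for name, items in SUBSCALES.items()
    }
    # Overall usability: the four positive subscales minus stigma,
    # approximating the 0-40 range reported in the paper.
    scores["overall"] = (
        scores["ease_of_use"] + scores["support_info"]
        + scores["ease_of_learning"] + scores["satisfaction"]
        - scores["stigma"]
    )
    return scores
```

Subtracting stigma means that a respondent who finds the app easy, clear, and satisfying but embarrassing to carry would still receive a reduced overall usability score.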

Table 1

Items Used to Assess the Five Usability Dimensions

System Ease of Use
    1. It is easy to use.
    2. It is simple to use.
    3. It is easy to understand.
    4. In a few steps it does what I want.
    5. It lets me do several things.
    6. Using it requires no effort.
    7. I can use it without written instructions.
    8. I don't notice any problems as I use it.
    9. People who use it once or many times would like it.
    10. Mistakes can be fixed quickly and easily.
    11. I can use it well every time.
Quality of Support Information
    12. The instructions and messages are easy to understand.
    13. The messages to fix problems are clear.
    14. The instructions and messages are clear.
System Ease of Learning
    15. I quickly became good at it.
    16. I easily remember how to use it.
    17. It is easy to learn to use it.
    18. I learned to use it quickly.
System Satisfaction
    19. I am happy with this app.
    20. I would tell a friend about this app.
    21. This app is fun to use.
    22. This app works the way I would want it to work.
Stigma
    23. Would you be embarrassed to have this app?
    24. Would you get teased or picked on by other kids for having this app?
    25. Would you get any criticism or hassles at home for having this app?
    26. Would you get any criticism or hassles at school for having this app?

Note: Items 1-22 adapted from the Usefulness, Satisfaction, and Ease of Use Questionnaire (USE; Lund, 2001); Items 23-26 adapted from the stigma subscale of the Reactions to Program Scale (RPS; stigma subscale; Rapee et al., 2006); A 10-point rating scale (i.e., 1 = “not at all” to 5 = “somewhat” to 10 = “very much”) is used to respond to each item.

Results

Participant characteristics

A total of 177 users (132 youth and 45 providers) from public schools participated in the present study. Youth ages ranged from 8 to 12 years old (M = 9.65, SD = 0.82), 63% were female and 29% were Hispanic/Latino (32% White; 10% African American/Black; 5% Asian/Pacific Islander; 24% Native American or mixed ethnicity/race). The median household income for the youth participants’ families was $46,460. Turning to providers, 26 were Bachelor's level behavior interventionists, 13 served youth as school psychologists or school social workers, and 6 were Master's or PhD level clinicians working in community mental health clinics or the local children's hospital. For providers, ages ranged from 22 to 49 years old (M = 29.13, SD = 6.32), 87% were female, and 80% were Non-Hispanic/Latino. Lastly, providers reported working with youth between 2 and 22 years (M = 8.30, SD = 5.83).

Preliminary Analyses

Preliminary analyses were conducted to identify outliers that might be distorting trends in the data, evaluate missing data, and test data distributions. No meaningful outliers were found and thus all cases were retained. Less than 1% of data were missing, and missingness was not correlated with any sociodemographic characteristics or focal variables. Therefore, missingness was assumed to have occurred at random (missing at random; Enders, 2011). Four of the focal variables exceeded conventional cutoffs of |2| for skewness and/or |7| for kurtosis (West, Finch, & Curran, 1995): system ease of use (−2.31 skewness; 6.87 kurtosis), quality of support information (−2.08 skewness; 5.03 kurtosis), system ease of learning (−2.82 skewness; 9.12 kurtosis), and system satisfaction (−2.21 skewness; 5.20 kurtosis). To address these departures from normality, bootstrapping methods were used for all preliminary analyses and primary tests of significance in SPSS version 22 (i.e., ANOVA tests, independent t-tests) and in Mplus version 7.1 (Muthén & Muthén, 2010) (i.e., path model analysis). Table 2 presents means and standard deviations for the focal variables, as well as correlations controlling for event errors/assistance during the testing protocol. As shown in the table, the overall usability score was good (M = 33.35; possible range is 0 to 40) and mean estimates for the five dimensions of usability were excellent, with stigma being low (M = 2.41; possible range is 0 to 10). Correlations among the usability dimensions were in the expected directions, with most coefficients being statistically significant and stigma negatively correlated with system satisfaction and system ease of use.
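The distribution checks described above can be sketched with standard moment-based estimators of sample skewness and excess kurtosis, compared against the |2| and |7| cutoffs of West, Finch, and Curran (1995). The exact estimators SPSS and Mplus apply (e.g., bias-corrected variants) may differ slightly; the ratings below are simulated, not the study data.

```python
def moments(xs):
    """Return (skewness, excess kurtosis) via simple moment estimators."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    skew = m3 / m2 ** 1.5
    excess_kurtosis = m4 / m2 ** 2 - 3  # normal distribution -> 0
    return skew, excess_kurtosis

def flags_nonnormal(xs, skew_cut=2.0, kurt_cut=7.0):
    """True if the sample breaches the West, Finch, & Curran cutoffs."""
    skew, kurt = moments(xs)
    return abs(skew) > skew_cut or abs(kurt) > kurt_cut

# Heavily left-skewed ratings: most users near the 10-point ceiling,
# a few very low scores (the pattern seen for the usability subscales).
ratings = [10] * 90 + [9] * 20 + [3] * 5
```

A ceiling-heavy sample like `ratings` breaches both cutoffs, which is exactly the situation that motivated the bootstrapped significance tests in the study.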

Table 2

Means, Standard Deviations, and Correlations for the Five Usability Dimensions

                                         Mean    SD     1      2      3      4      5
Overall Usability                        35.76   4.68
    1. System Ease of Use                 9.04   1.58   --    .67**  .80**  .40**  −.15
    2. Quality of Support Information     8.93   1.49          --    .69**  .34**  −.06
    3. System Ease of Learning            9.04   1.58                 --    .30**  −.09
    4. System Satisfaction                9.09   1.41                        --    −.29**
    5. Stigma                             2.41   2.05                               --

Note: N= 177; Overall Usability ranges from 0 to 40; System Ease of Use, Quality of Support Information, System Ease of Learning, System Satisfaction, and Stigma range from 0 to 10; Correlations between dimensions of usability control for event errors/assistance during usability protocol

*p<.05

**p<.01

Does the REACH app targeting anxiety yield adequate usability ratings?

Relevant to the first research question, focusing on youth participants, the app was highly and positively rated on overall usability (M = 33.30 out of 40, SD = 5.88) and each usability dimension (possible range is 0 to 10): system ease of use (M = 8.57, SD = 1.53), quality of support information (M = 8.99, SD = 1.52), system ease of learning (M = 8.96, SD = 1.72), and system satisfaction (M = 9.18, SD = 1.47). In addition, stigma was low (M = 2.39 out of 10, SD = 2.15), suggesting adequate social acceptability. Next, analyses of variance (ANOVAs) were conducted to estimate the influence of sociodemographic characteristics on each of the usability dimensions. Results showed no influence of grade (3rd vs. 4th vs. 5th), sex (boys vs. girls), or ethnicity/race (Hispanic/Latino vs. non-Hispanic/Latino) on any of the usability ratings for youth. There were also no significant two- or three-way interactions among grade, sex, and ethnicity/race (e.g., sex × ethnicity).

Pragmatically, overall usability scores were transformed into a traditional “grade” scale and showed that the REACH app earned an “A+” grade from 7% of youth, “A” from 27%, “A-” from 14%, “B+” from 8%, “B” from 5%, and failing grades of “C-” or less from 17% (or 23 youth). Focusing on youths who rated the app with “C-” or less, 10 youth encountered one or more software, hardware, and/or user knowledge errors during the testing protocol. Of those, 3 youth encountered software errors, 3 hardware errors, and 4 user knowledge errors. Software errors included the app suddenly quitting in the middle of use (2 youth) and extraneous notifications or pop-ups interfering with using the app (1 youth). Hardware errors included the Android smartphone restarting in the middle of use (2 youth) and the headphone jack of the smartphone not working properly (1 youth). User knowledge errors included difficulty finding the correct buttons or activities within the app (3 youth), no knowledge of the Android operating system (4 youth), and being unable to turn on or unlock the Android smartphone device (2 youth). Table 3 presents results from independent t-tests showing that youth who encountered a software, hardware, or user knowledge error during the testing protocol rated usability significantly lower than youth who did not encounter any errors (no significant differences by type of error encountered were found).

Table 3

Results of t-test for Outcome Measures by Having Errors during Testing Protocol for Youth

                                      Experienced Errors (Yes)    No Errors
                                      M      SD     n      M      SD     n      t-value   d.f.   95% CI
Overall Usability                     33.03  7.97   23     36.53  3.92   109     3.14*    130    [1.29, 5.71]
    System Ease of Use                 8.15  2.11   23      8.87  1.17   109     2.29*    130    [0.10, 1.34]
    Quality of Support Information     8.35  2.19   23      9.15  1.29   109     2.34*    130    [0.12, 1.47]
    System Ease of Learning            8.17  2.41   23      9.13  1.48   109     2.49*    130    [0.20, 1.72]
    System Satisfaction                8.36  2.23   23      9.39  1.17   109     3.18*    130    [0.39, 1.66]
    Stigma                             3.42  2.79   23      2.15  1.93   109    −2.64*    130    [−2.23, −0.32]

Note: Software errors = app suddenly quit, extraneous notifications or pop ups interfering with using the app; Hardware errors = device suddenly restarted/turned off, headphones didn't work; User knowledge errors = difficulty pressing or finding correct app buttons; did not understand how to use the app; user couldn't turn on device.

*p < .05

**p < .01

Focusing on providers, the REACH app was highly and positively rated in terms of overall usability (M = 32.54 out of 40, SD = 3.87) and along each usability dimension (possible range is 0 to 10): system ease of use (M = 9.12, SD = 1.07), quality of support information (M = 8.74, SD = 1.37), system ease of learning (M = 9.27, SD = 0.99), and system satisfaction (M = 8.83, SD = 1.19). In addition, concern about stigma for youth when using the app was low (M = 2.48 out of 10, SD = 1.75), suggesting high social acceptability. Errors were encountered by six providers (2 software, 2 hardware, 2 user knowledge), but results from independent t-tests showed that providers who encountered a software, hardware, or user knowledge error during the testing protocol did not rate usability significantly lower than providers who encountered no errors.
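As an illustration of the bootstrapped group comparisons reported above, the sketch below computes a percentile bootstrap confidence interval for a difference in group means. The group sizes mirror the youth analysis (23 with errors vs. 109 without), but the scores are simulated for illustration, not the study data.

```python
import random

random.seed(0)

def bootstrap_mean_diff_ci(a, b, n_boot=5000, alpha=0.05):
    """Percentile bootstrap CI for mean(b) - mean(a)."""
    diffs = []
    for _ in range(n_boot):
        # Resample each group with replacement, keeping group sizes fixed.
        ra = [random.choice(a) for _ in a]
        rb = [random.choice(b) for _ in b]
        diffs.append(sum(rb) / len(rb) - sum(ra) / len(ra))
    diffs.sort()
    lo = diffs[int(alpha / 2 * n_boot)]
    hi = diffs[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Simulated overall-usability scores (not the study data).
errors = [random.gauss(33.0, 8.0) for _ in range(23)]      # experienced errors
no_errors = [random.gauss(36.5, 3.9) for _ in range(109)]  # no errors
lo, hi = bootstrap_mean_diff_ci(errors, no_errors)
# If the interval excludes 0, the group difference is significant at alpha.
```

Because the rating distributions were strongly skewed, a resampling interval like this is more defensible than one that leans on normal-theory standard errors.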

Which aspects of the youth user experience could be targeted to improve the REACH app?

Relevant to the second research question, an exploratory path model was tested in Mplus (version 7.1) to examine relations between system ease of use, system ease of learning, quality of support information, and event errors on the satisfaction variable, controlling for perceived level of stigma (see Figure 3) for youth users. Full information maximum likelihood (FIML; Enders & Bandalos, 2001) was used to calculate path coefficients and handle missing data. In addition, given the moderate to high correlation between scores for system ease of learning and system ease of use (r = 0.66), a latent construct of system understanding was created (see Figure 3). For these analyses, path model fit was evaluated against the following established criteria for good and acceptable fit (Hu & Bentler, 1999): a non-significant chi-square test of exact fit, root-mean-square error of approximation (RMSEA) less than 0.05 (0.08 for acceptable), comparative fit index (CFI) greater than 0.95 (0.90 for acceptable), and standardized root-mean-square residual (SRMR) less than 0.05 (0.08 for acceptable). Based on our data, the proposed model showed acceptable approximate fit: χ²(7) = 10.40, p = 0.17 (ns); RMSEA = 0.06, 95% C.I. = 0.00 to 0.13; CFI = 0.99; SRMR = 0.06. Moreover, and as shown in Figure 3, we evaluated a model to estimate direct effects of the system understanding latent construct, quality of support information, number of event errors, and stigma on the system satisfaction variable. We used the products of coefficients estimator and bias-corrected bootstrap sampling distributions provided by RMediation (Tofighi & MacKinnon, 2011) to estimate the significance of the indirect effects of quality of support information, as well as of number of event errors, on the system understanding and system satisfaction relations. 
In terms of findings relevant to the tested variable relations, system ease of use (e.g., using it requires no effort) and system ease of learning (e.g., I easily remember how to use it) loaded positively on the system understanding latent factor (standardized factor loadings were 0.89 and 0.91). Stigma (e.g., teased or picked on by other kids for having this app) was negatively and significantly related to system satisfaction (e.g., fun to use). The path from event errors to quality of support information was trivial (and non-significant). In terms of the indirect effects, results showed that system understanding had a significant indirect effect on satisfaction via quality of support information (e.g., the instructions and messages are easy to understand) (indirect effect = 0.37; 95% C.I. = 0.14 to 0.60), in that for every one standard deviation increase in system understanding, system satisfaction increased by 0.37 standard deviation units via quality of support information. System understanding did not have a significant indirect effect on satisfaction via errors (indirect effect = 0.12; 95% C.I. = −0.03 to 0.23).
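The products-of-coefficients logic behind the indirect effect above can be sketched with a simple single-mediator model (X → M → Y, here standing in for system understanding → quality of support information → satisfaction) and a percentile bootstrap. The study itself estimated a latent-variable path model in Mplus with RMediation and bias-corrected intervals; this sketch, with simulated data, only illustrates the a×b idea.

```python
import random

random.seed(1)

def cov(u, v):
    """Population covariance of two equal-length lists."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / n

def indirect_effect(x, m, y):
    """a*b: a = slope of M on X; b = slope of M in Y ~ X + M (OLS)."""
    a = cov(x, m) / cov(x, x)
    b = (cov(y, m) * cov(x, x) - cov(y, x) * cov(x, m)) / (
        cov(x, x) * cov(m, m) - cov(x, m) ** 2)
    return a * b

# Simulated standardized data with a true mediated path (a=0.6, b=0.5),
# using the study's sample size of 177.
n = 177
x = [random.gauss(0, 1) for _ in range(n)]
m = [0.6 * xi + random.gauss(0, 0.8) for xi in x]
y = [0.5 * mi + random.gauss(0, 0.8) for mi in m]

ab = indirect_effect(x, m, y)

# Percentile bootstrap over cases for a 95% CI on a*b.
boots = []
for _ in range(2000):
    idx = [random.randrange(n) for _ in range(n)]
    boots.append(indirect_effect([x[i] for i in idx],
                                 [m[i] for i in idx],
                                 [y[i] for i in idx]))
boots.sort()
ci = (boots[49], boots[1949])  # 2.5th and 97.5th percentiles of 2000 draws
```

If the bootstrap interval for a×b excludes zero, the mediated path is taken to be significant, which is the same decision rule applied to the 0.37 (C.I. 0.14 to 0.60) effect reported above.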

Figure 3. Hypothesized Model of Usability and Satisfaction

Discussion

Principal Findings

Despite the increasing proliferation of mHealth technology, research evaluating the usability of these technologies is severely lacking (Curioso & Mechael, 2010; Nielsen et al., 2012). For example, in a review of the available smartphone apps for youth anxiety on the Google Play and Apple App Stores, we identified 55 apps but found no corresponding usability research in the literature. With regard to apps not found on Google Play or the Apple App Store, our search of the literature showed four studies reporting on usability for apps targeting child behavior problems (i.e., Pramana et al., 2014; O’Malley et al., 2014; Dixon, Dehlinger, & Dixon, 2013; Tang, Jheng, Chien, Lin, & Chen, 2013), with only one focused on problematic child anxiety. More specifically, Pramana et al. (2014) described the SmartCAT app for the treatment of anxiety in diagnosed youth and used a single item to estimate usability of the app. The present study is therefore the first of its kind to report findings from an in-depth evaluation of usability corresponding to an empirically informed child anxiety prevention and early intervention smartphone app. For this reason, and in light of our findings, the present study is important as it may set the stage for future research, given that poor usability has been identified as one of the biggest barriers to mHealth impact (Matthews et al., 2008; Sheehan et al., 2012).

Relevant to the primary objectives of the present study, results showed that each dimension of usability measured for the REACH anxiety prevention and early intervention app was highly and positively rated by providers and most youth. In addition, stigma associated with using the app was rated low. The REACH app was found to be relatively easy to use and easy to learn; messages deployed by the technology were rated as helpful and clear; and the app yielded high satisfaction and social acceptability. These findings are encouraging and generally similar to those reported in the handful of studies that have reported usability tests of mHealth tools for youth (O’Malley et al., 2014; Dixon et al., 2013; Tang et al., 2013). Focusing on knowledge gained for improving the REACH app, it is important to note that 17% (23 of 132) of the youth sampled showed low enthusiasm about the app, and this may have occurred for several reasons. First, the software, hardware, and user knowledge errors that youth encountered during the evaluation protocol were significantly related to lower satisfaction and thus need to be addressed. Second, lower enthusiasm could be related to the fact that, anecdotally, some youth were expecting a game app for a smartphone rather than a psychoeducational app. Third, lower satisfaction could have been related to the usability evaluation procedures implemented for this research, as some youth probably would have preferred to engage in “unrestricted play” with the app. If the last two points are true, then it would be important to clearly explain to youth the nature and use of the REACH app prior to providing them with the technology. Fourth, there is a possibility that lower satisfaction for some youth could have been related to the design itself, as other approaches might be preferable. For example, some youth might prefer collaborative learning (e.g., peer-to-peer interactions), more human support (e.g., direct and immediate responses from an adult mental health provider), and/or simply more complex graphics and gamification features (e.g., enhanced user-to-blob interaction and progressive reward incentives). These are possibilities that would need to be explored in our future research efforts.

A core finding from the present study is plausibly of greater interest to other investigators working in mHealth. That is, our results suggest that future efforts toward improving satisfaction with technology probably need to carefully consider the dynamic relations between system understanding and support information. This is the case because path analyses of youth-reported data indicated that greater system understanding (i.e., system ease of use, system ease of learning) was significantly related to greater system satisfaction, but that this relation occurred via the quality of support information offered by the app (e.g., the instructions are easy to understand; messages are helpful in fixing mistakes). Although ratings of the quality of support information and system understanding for the REACH app were high, moving forward it would be important to continue evaluating the messages and instructions offered by the app as a means of further optimizing and improving its overall usability and satisfaction. While no direct test of these relations has been conducted to date, these findings appear consistent with conceptual models of mHealth technologies suggesting that information need, learnability (e.g., ease of learning), and efficiency of smartphone apps (e.g., ease of use) are critical to improving user satisfaction, usability, and adherence during efficacy or effectiveness stages of testing (Brown et al., 2013; Matthews et al., 2008; Harrison et al., 2013). Also, findings from the present study showed that some youth and providers experienced roadblocks when trying to use the technology (e.g., did not know how to navigate the app menus). Therefore, brief training in using both the device of choice and the app itself could help decrease the frequency of operational errors and their impact on usability.
For example, in an evaluation of a smartphone app for adolescent depression, youth were provided with a training session outlining the functions of the app prior to the start of the intervention (Mohr et al., 2013). This is consistent with human-computer interaction “best practices” suggesting that short training sessions with users could be highly beneficial in minimizing barriers to usage (Matthews et al., 2008).
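The mediated relation described above can be written in standard path-analytic form. In the notation below, \(X\) denotes system understanding, \(M\) the quality of support information, and \(Y\) system satisfaction; the symbols are a generic illustration of a single-mediator model, not the exact model specification estimated in this study:

```latex
\begin{align}
M &= i_M + aX + \varepsilon_M \\
Y &= i_Y + c'X + bM + \varepsilon_Y
\end{align}
```

Under this parameterization, the mediated (indirect) effect of system understanding on satisfaction is the product \(ab\), with \(c'\) capturing any remaining direct effect; confidence intervals for \(ab\) can be obtained via the distribution-of-the-product method (e.g., RMediation; Tofighi & MacKinnon, 2011).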

Limitations and Future Directions

Contributions notwithstanding, some limitations need to be considered. First, our findings are limited in that usability was assessed via self-reported quantitative ratings during a brief, standardized, and structured protocol. While this methodology is consistent with past research (e.g., Jaspers, 2009), user interaction data might provide valuable in-depth information that could be applied to future iterations. For example, completion time (amount of time to start and end an activity), transition time (time to transition from one activity to another), and click-tracing sequence (how the user navigated from one activity to another) data might help identify inefficiencies in how the REACH app content is organized (e.g., button locations not intuitively placed for users) and also help determine whether user interactions followed the sequences planned as part of the design (Farley, 2013). Second, while youth and provider ratings of overall usability, satisfaction, and acceptability were high, natural patterns of engagement with the REACH app over time remain unknown. This is important to assess in future examinations because high rates of attrition among mHealth tools are a significant methodological challenge in efficacy and effectiveness evaluations of these technologies (Baggett et al., 2010). Finally, this research is limited because, while the REACH app is focused on the prevention and early intervention of child anxiety, no data about the participating children's anxiety levels were gathered. In this regard, usability ratings from anxious youth could vary from those reported in this research, although there is no reason to believe that anxious youth respond to technology differently from their non-anxious counterparts.
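The interaction metrics proposed above (completion time, transition time, click-tracing sequence) could be derived from simple timestamped event logs. The sketch below is illustrative only: the event schema, activity names, and function names are assumptions for this example, not the REACH app's actual instrumentation.

```python
from datetime import datetime

# Hypothetical usability-session event log: (timestamp, activity, action).
# The schema and activity names are assumed for illustration.
events = [
    ("2017-01-01 10:00:00", "psychoeducation", "start"),
    ("2017-01-01 10:03:30", "psychoeducation", "end"),
    ("2017-01-01 10:04:10", "relaxation", "start"),
    ("2017-01-01 10:08:00", "relaxation", "end"),
]

def parse(ts):
    return datetime.strptime(ts, "%Y-%m-%d %H:%M:%S")

def completion_times(log):
    """Seconds from 'start' to 'end' within each activity."""
    starts, out = {}, {}
    for ts, activity, action in log:
        if action == "start":
            starts[activity] = parse(ts)
        elif action == "end":
            out[activity] = (parse(ts) - starts[activity]).total_seconds()
    return out

def transition_times(log):
    """Seconds between the end of one activity and the start of the next."""
    gaps, last_end = [], None
    for ts, activity, action in log:
        if action == "start" and last_end is not None:
            gaps.append((parse(ts) - last_end).total_seconds())
        elif action == "end":
            last_end = parse(ts)
    return gaps

def click_trace(log):
    """Ordered sequence of activities the user visited."""
    return [activity for _, activity, action in log if action == "start"]

print(completion_times(events))  # {'psychoeducation': 210.0, 'relaxation': 230.0}
print(transition_times(events))  # [40.0]
print(click_trace(events))       # ['psychoeducation', 'relaxation']
```

Unusually long completion or transition times, or click traces that deviate from the designed activity sequence, would flag the kinds of navigational inefficiencies discussed above.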

Conclusions

The present study is the first to report findings from an in-depth evaluation of usability relevant to an empirically-informed smartphone app designed to support the prevention and early intervention of youth anxiety. Findings from this research provided strong initial support for the usability of the REACH app and emphasize the need for conducting this type of testing early in the development process of mHealth tools, to inform necessary iterations prior to applying the technology for intervention purposes (e.g., efficacy, effectiveness). This research identified areas for improvement (e.g., stabilizing app functions, moving button locations) and offered knowledge about the extent to which users need to be trained and supported (O’Malley et al., 2014; Mohr et al., 2013). Because smartphone apps for mobile health have great potential to improve public health initiatives by lessening healthcare costs, reducing restrictions in the provision of care (e.g., time, geographical location), overcoming barriers associated with other types of technologies (e.g., web-based tools), and even effecting change in economically disparate populations (Baggett et al., 2010; Mulvaney, Anders, Smith, Pittel, & Johnson, 2012), new and better tools are likely to emerge both in the research arena and in the marketplace. It might therefore be viewed as “best practice” to integrate usability testing into the design and development process of mHealth tools to ensure that technologies are usable in ways that enable the sustainability and large-scale diffusion of evidence-based interventions.

Acknowledgements:

We gratefully acknowledge Derek Hamel, Mandar Patwardhan, Lindsay Holly, Henry Wynne, Julia Parker, Amanda Chiapa, and Bobbi Bromich for their valuable contributions to the development of the app and this research.

Funding/Support: This work was supported in part by grant number K01MH086687 awarded to A. Pina as well as a prevention and implementation science fellowship awarded to R. Stoll, T32 DA039772 01 from the National Institute on Drug Abuse. The content is solely the responsibility of the authors and does not represent the official views of the funding agency.

References

  • Angold A, Costello EJ, Erkanli A. Comorbidity. Journal of Child Psychology and Psychiatry and Allied Disciplines. 1999;40(1):57–87.
  • Anticich SAJ, Barrett PM, Silverman W, Lacherez P, Gillies R. The prevention of childhood anxiety and promotion of resilience among preschool-aged children: A universal school based trial. Advances in School Mental Health Promotion. 2013;6(2):93–121.
  • Aschenbrand SG, Kendall PC, Webb A, Safford SM, Flannery-Schroeder E. Is childhood separation anxiety disorder a predictor of adult panic disorder and agoraphobia? A seven-year longitudinal study. Journal of the American Academy of Child & Adolescent Psychiatry. 2003;42(12):1478–1485.
  • Baggett KM, Davis B, Feil EG, Sheeber LL, Landry SH, Carta JJ, Leve C. Technologies for expanding the reach of evidence-based interventions: Preliminary results for promoting social-emotional development in early childhood. Topics in Early Childhood Special Education. 2010;29:226–238.
  • Beidel DC, Turner SM. Comorbidity of test anxiety and other anxiety disorders in children. Journal of Abnormal Child Psychology. 1988;16(3):275–287.
  • Beidel DC, Turner SM, Young BJ, Ammerman RT, Sallee FR, Crosby L. Psychopathology of adolescent social phobia. Journal of Psychopathology and Behavioral Assessment. 2007;29(1):47–54.
  • Brown W, Yen PY, Rojas M, Schnall R. Assessment of the Health IT Usability Evaluation Model (Health-ITUEM) for evaluating mobile health (mHealth) technology. Journal of Biomedical Informatics. 2013;46(6):1080–1087.
  • Chiu TM, Eysenbach G. Stages of use: Consideration, initiation, utilization, and outcomes of an internet-mediated intervention. BMC Medical Informatics and Decision Making. 2010;10(1):73.
  • Cummings CM, Caporino NE, Kendall PC. Comorbidity of anxiety and depression in children and adolescents: 20 years after. Psychological Bulletin. 2014;140(3):816.
  • Curioso WH, Mechael PN. Enhancing 'M-Health' with south-to-south collaborations. Health Affairs. 2010;29(2):264–267.
  • Dixon J, Dehlinger J, Dixon SD. Designing, implementing and testing a mobile application to assist with pediatric-to-adult health care transition. In: Human-Computer Interaction. Applications and Services. Springer; Berlin Heidelberg: 2013. pp. 66–75.
  • Enders CK. Missing not at random models for latent growth curve analyses. Psychological Methods. 2011;16(1):1–16.
  • Enders CK, Bandalos DL. The relative performance of full information maximum likelihood estimation for missing data in structural equation models. Structural Equation Modeling. 2001;8(3):430–457.
  • Essau CA, Conradt J, Sasagawa S, Ollendick TH. Prevention of anxiety symptoms in children: Results from a universal school-based trial. Behavior Therapy. 2012;43:450–464.
  • Farley H. Facilitating immersion in virtual worlds: An examination of the physical. Outlooks and Opportunities in Blended and Distance Learning. 2013;189
  • Fisak BJ, Richard D, Mann A. The prevention of child and adolescent anxiety: A meta-analytic review. Prevention Science. 2011;12(3):255–268.
  • Galer M, Harker S, Ziegler J, Galer M, editors. Methods and tools in user-centered design for information technology. Vol. 9. Elsevier; 2013.
  • Hu LT, Bentler PM. Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal. 1999;6(1):1–55.
  • ISO. ISO 9241-210. Ergonomics of human system interaction-Part 210: Human-centered design for interactive systems (formerly known as 13407). International Organization for Standardization (ISO); Switzerland: 2009.
  • International Organization for Standardization/International Electrotechnical Commission. ISO/IEC 9126-1 Standard, Software Engineering, Product Quality, Part 1: Quality Model. Author; Geneva: 2001.
  • Jacobs MA, Graham AL. Iterative development and evaluation methods of mHealth behavior change interventions. Current Opinion in Psychology. 2016;9:33–37.
  • Jaspers MWM. A comparison of usability methods for testing interactive health technologies: Methodological aspects and empirical evidence. International Journal of Medical Informatics. 2009;78(5):340–353.
  • Jones DJ, Forehand R, Cuellar J, Parent J, Honeycutt A, Khavjou O, Newey GA. Technology-enhanced program for child disruptive behavior disorders: Development and pilot randomized control trial. Journal of Clinical Child and Adolescent Psychology. 2014;43(1):88–101.
  • Kendall PC, Barmish AJ. Show-that-I-can (homework) in cognitive-behavioral therapy for anxious youth: Individualizing homework for Robert. Cognitive and Behavioral Practice. 2007;14(3):289–296.
  • Kendall PC, Treadwell KRH. The role of self-statements as a mediator in treatment for youth with anxiety disorders. Journal of Consulting and Clinical Psychology. 2007;75(3):380–389.
  • Kessler RC, Berglund P, Demler O, Jin R, Merikangas KR, Walters EE. Lifetime prevalence and age-of-onset distributions of DSM-IV disorders in the National Comorbidity Survey Replication. Archives of General Psychiatry. 2005;62(6):593–602.
  • Leitenberg H, Yost LW, Carroll-Wilson M. Negative cognitive errors in children: Questionnaire development, normative data, and comparisons between children with and without self-reported symptoms of depression, low self-esteem, and evaluation anxiety. Journal of Consulting and Clinical Psychology. 1986;54(4):528–536.
  • Lowry-Webster HM, Barrett PM, Dadds MR. A universal prevention trial of anxiety and depressive symptomatology in childhood: Preliminary data from an Australian study. Behaviour Change. 2001;18(1):36–50.
  • Lund A. Measuring usability with the USE questionnaire. Usability Interface. 2001;8(2):3–6.
  • Maguire M. Methods to support human-centred design. International Journal of Human-Computer Studies. 2001;55(4):587–634.
  • Matthews M, Doherty G, Coyle D, Sharry J. Designing mobile applications to support mental health interventions. In: Handbook of Research on User Interface Design and Evaluation for Mobile Technology. 2008. pp. 635–656.
  • Mohr DC, Burns MN, Schueller SM, Clarke G, Klinkman M. Behavioral intervention technologies: Evidence review and recommendations for future research in mental health. General Hospital Psychiatry. 2013;35(4):332–338.
  • Mulvaney SA, Anders S, Smith AK, Pittel EJ, Johnson KB. A pilot test of a tailored mobile and web-based diabetes messaging system for adolescents. Journal of Telemedicine and Telecare. 2012;18(2):115–118.
  • Muthén LK, Muthén BO. Mplus user's guide: The comprehensive modeling program for applied researchers. 5th ed. Muthén & Muthén; Los Angeles, CA: 2010.
  • Nielsen J. Usability inspection methods. In: Conference Companion on Human Factors in Computing Systems. ACM; Apr, 1994. pp. 413–414.
  • Nilsen W, Kumar S, Shar A, Varoquiers C, Wiley T, Riley WT, Atienza AA. Advancing the science of mHealth. Journal of Health Communication. 2012;17(sup1):5–10.
  • O'Malley G, Dowdall G, Burls A, Perry IJ, Curran N. Exploring the usability of a mobile app for adolescent obesity management. JMIR mHealth and uHealth. 2014;2(2)
  • Patwardhan M, Stoll R, Hamel DB, Amresh A, Gary KA, Pina A. Designing a mobile application to support the indicated prevention and early intervention of childhood anxiety. In: Proceedings of the Conference on Wireless Health. ACM; Oct, 2015. p. 8.
  • Pina AA, Zerr AA, Villalta IK, Gonzales NA. Indicated prevention and early intervention for childhood anxiety: A randomized trial with Caucasian and Hispanic/Latino youth. Journal of Consulting and Clinical Psychology. 2012;80(5):940–946.
  • Pramana G, Parmanto B, Kendall PC, Silk JS. The SmartCAT: An mHealth platform for ecological momentary intervention in child anxiety treatment. Telemedicine and e-Health. 2014;20(5):419–427.
  • Rapee RM, Wignall A, Sheffield J, Kowalenko N, Davis A, McLoone J, Spence SH. Adolescents' reactions to universal and indicated prevention programs for depression: Perceived stigma and consumer satisfaction. Prevention Science. 2006;7(2):167–177.
  • Sheehan B, Lee Y, Rodriguez M, Tiase V, Schnall R. A comparison of usability factors of four mobile devices for accessing healthcare information by adolescents. Applied Clinical Informatics. 2012;3(4):356–366.
  • Silverman WK, Albano AM. The anxiety disorders interview schedule for children (ADIS-C/P). Psychological Corporation; San Antonio, TX: 1996a.
  • Silverman WK, Kurtines WM. Anxiety and phobic disorders: A pragmatic approach. Springer Science & Business Media; 1996b.
  • Silverman WK, Kurtines WM. Transfer of control: A psychosocial intervention model for internalizing disorders in youth. In: Psychosocial Treatments for Child and Adolescent Disorders: Empirically Based Strategies for Clinical Practice. American Psychological Association; Washington, DC: 1996c. pp. 63–81.
  • Silverman WK, Kurtines WM, Jaccard J, Pina AA. Directionality of change in youth anxiety treatment involving parents: An initial examination. Journal of Consulting and Clinical Psychology. 2009;77(3):474–485.
  • Silverman WK, Pina AA. Psychosocial treatments for phobic and anxiety disorders in youth. In: Handbook of Evidence-Based Therapies for Children and Adolescents. Springer; US: 2008. pp. 65–82.
  • Tang HH, Jheng CM, Chien ME, Lin NM, Chen MY. iCAN: A tablet-based pedagogical system for improving the user experience of children with autism in the learning process. In: Orange Technologies (ICOT), 2013 International Conference on. IEEE; Mar, 2013. pp. 177–180.
  • Teubert D, Pinquart M. A meta-analytic review on the prevention of symptoms of anxiety in children and adolescents. Journal of Anxiety Disorders. 2011;25(8):1046–1059.
  • Tofighi D, MacKinnon DP. RMediation: An R package for mediation analysis confidence intervals. Behavior Research Methods. 2011;43(3):692–700.
  • Treadwell KRH, Kendall PC. Self-talk in youth with anxiety disorders: States of mind, content specificity, and treatment outcome. Journal of Consulting and Clinical Psychology. 1996;64(5):941–950.
  • Turner SM, Beidel DC, Costello A. Psychopathology in the offspring of anxiety disorders patients. Journal of Consulting and Clinical Psychology. 1987;55(2):229–235.
  • West SG, Finch JF, Curran PJ. Structural equation models with nonnormal variables. In: Structural Equation Modeling: Concepts, Issues, and Applications. 1995. pp. 56–75.
  • Wolf JA, Moreau JF, Akilov O, Patton T, English JC, Ho J, Ferris LK. Diagnostic inaccuracy of smartphone applications for melanoma detection. JAMA Dermatology. 2013;149(4):422–426.
  • Yee N, Bailenson J. The Proteus effect: The effect of transformed self-representation on behavior. Human Communication Research. 2007;33(3):271–290.
  • Zapata BC, Fernandez-Aleman JL, Idri A, Toval A. Empirical studies on usability of mHealth apps: A systematic literature review. Journal of Medical Systems. 2015;39(2)
