Analyzing Use Intentions for Health-Diagnostic Chatbots: An Extended Technology Acceptance Model Approach

Published: 08 December 2024. DOI: 10.1145/3698062.3698093

Abstract

This study investigates the factors influencing the intention to use AI-powered health diagnostic chatbots. A research model was proposed based on the Technology Acceptance Model, subjective norms, perceived trust, perceived risk, and self-efficacy. Using Partial Least Squares Structural Equation Modeling, the study assessed the model with 274 valid responses. The results revealed that perceived usefulness, subjective norms, perceived trust, and self-efficacy significantly influence the intention to use AI-powered health diagnostic chatbots. However, perceived ease of use and perceived risk did not impact the intention to use such chatbots. The study also discusses how these findings can assist developers in promoting the long-term adoption of AI-powered health diagnostic chatbots among potential users.

1 Introduction

mHealth, short for mobile health, refers to software applications that run on digital devices such as smartphones, tablets, and laptops and that support various aspects of healthcare, including public health and clinical practice [1-3]. The scope of services offered through mHealth has expanded to cover a wide range of functions, from facilitating user-to-health-system communication to enabling health surveillance, tracking, and monitoring, and providing access to essential health information.
However, despite its numerous benefits for clinicians and patients, the adoption of mHealth has raised concerns. Scholars like Vos and Parker [4] have emphasized the need for caution, highlighting the potential risks associated with the misuse or careless use of mHealth. Consequently, they stress the importance of implementing regulatory measures to protect users and establish a secure and risk-free environment within the mHealth landscape [4].
Recent studies, such as the one conducted by Bhatt et al. [5], indicate a significant surge in mHealth usage following the COVID-19 pandemic. Their research delves into AI-powered mHealth (AIM), aiming to gain a deeper understanding of AI applications in the healthcare sector, an area that has received limited attention from previous scholars. With advancements in AI technology, there has been a notable improvement in the accuracy and efficiency of insights and results, alongside an increased emphasis on security and user privacy [5]. Consequently, AI techniques are now widely employed for tasks such as disease detection, identification, prediction, and diagnosis [6, 7].
Furthermore, AI has been effectively integrated into the healthcare industry, assisting healthcare professionals in making informed decisions related to preventive healthcare [8]. Point-of-care support and diagnostic tools, as highlighted by Secinaro et al. [9], play a crucial role in obtaining initial health information from users or patients, serving as a quick reference for healthcare providers and enabling timely responses and treatments. Additionally, real-time tracking systems, as stated by Mohammed et al. [10], offer valuable tools for monitoring the health status of both clinicians and patients, helping prevent emergencies. These real-time tracking services are often integrated into emergency medical response systems. Previous studies provide substantial evidence that AI plays a significant role in the healthcare sector, effectively integrated into various aspects of healthcare [11-13].
Nevertheless, there is a noticeable research gap regarding the intention to adopt mobile AI-based health diagnostic applications as standalone healthcare delivery platforms, as emphasized by Thurner [14]. While previous studies have explored various aspects, including the relationships and influences of external stimuli, such as users’ perceptions of limitations in traditional healthcare and medical services and their views on technology-based services [14], further research is essential to gain a deeper understanding of the factors influencing the intention to adopt mobile AI-based health diagnostic applications.
Understanding the factors that influence the intention to use a health-diagnostic chatbot is essential for maximizing its potential. This study employs the Technology Acceptance Model (TAM), subjective norms, perceived trust, perceived risk, and self-efficacy as a research framework to investigate this intention. Thus, the research question is: How do the components of the Technology Acceptance Model (TAM), subjective norms, perceived trust, perceived risk, and self-efficacy collectively impact the intention to use a Health-Diagnostic Chatbot among Malaysian users?

2 Theoretical Framework

2.1 Technology Acceptance Model

The Technology Acceptance Model (TAM) is a theory derived from the Theory of Reasoned Action (TRA), which aims to predict and assess user acceptance of new technology [15]. According to the TAM model, two main factors shape users' intentions to adopt a new technology: ease of use and usefulness [15]. Perceived ease of use indicates that users are more likely to consider a technology if it demands minimal effort to learn or operate. On the other hand, perceived usefulness suggests that users are inclined towards a technology if it enhances their task performance [15, 16].
Perceived ease of use is a significant factor influencing users' willingness to adopt technology, particularly when they perceive it as effortless to achieve their goals [15, 16]. Research indicates that ease of use directly impacts users' willingness to adopt mHealth [17]. A study by Siebert et al. [18] found a significant relationship between ease of use and user intention to use mHealth. Alam et al. [19] emphasized the importance of ease of use in mobile health app development. A meta-analysis [20] revealed a positive relationship between perceived ease of use and both perceived usefulness and behavioral intention in adopting mHealth. A study from Korea found that perceived usefulness was significantly affected by perceived ease of use among adults using AI-powered medical chatbots [21]. The perceived usefulness of AI medical chatbots is likewise predicted by perceived ease of use [22, 23]. A study showed that perceived ease of use positively impacted users' attitude toward using AI medical chatbots among Portuguese users [24]. Patil and Kulkarni [25] highlighted that perceived ease of use directly influences trust in AI medical chatbots. Additionally, user satisfaction levels and continuance intention toward AI medical chatbots are significantly influenced by perceived ease of use [26]. Sitthipon et al. [27] also identified perceived ease of use as a key factor determining Thai users' intention to use healthcare chatbots and applications. Thus,
H1: There is a significant relationship between perceived ease of use and the intention to adopt an AI-based health diagnostic chatbot.
On the other hand, perceived usefulness is also a crucial factor for users when considering the adoption of a new technology, as outlined in the Technology Acceptance Model [15, 16]. Particularly during the COVID-19 pandemic, the perceived usefulness of mHealth apps positively influenced behavioral intentions [28]. Users' willingness to adopt a new technology often hinges on its ability to meet their needs. Binyamin and Zafar [20] found that users' intention to use mHealth is positively impacted by the technology's usefulness. A study by Palos-Sanchez et al. [29] demonstrated that perceived usefulness significantly influences the adoption of mHealth. It is worth noting that perceived usefulness serves as a precursor to user satisfaction with mHealth [30]. Pak and Kim [31] showed a positive association between perceived usefulness and user behavioral intention toward the use of mHealth. The usefulness of an mHealth application strongly determines its adoption by users [32]. In a study by Shahsavar and Choudhury [33], one of the factors influencing the intention to adopt AI-based mHealth is the usefulness of the application. Similarly, a study discovered that perceived usefulness significantly impacts user adoption intention of an AI-powered health chatbot in Taiwan [34]. Furthermore, researchers have found that the intention to reuse an AI-based chatbot is significantly predicted by perceived usefulness [24]. Hence,
H2: There is a significant relationship between perceived usefulness and the intention to adopt an AI-based health diagnostic chatbot.

2.2 Subjective Norm

Subjective norm refers to an individual's perception of whether the people significant to them believe they ought to or ought not to engage in a particular behaviour [35]. This perception becomes particularly significant when users lack sufficient knowledge or experience with the technology, leading them to rely on the opinions of others for guidance or recommendations [36]. Hsu et al. [37] discovered that subjective norm significantly and positively influences users' intention to consult chatbots for health-related matters. Oloveze et al. [35] identified subjective norm as the most crucial factor influencing users' intention to recommend mHealth to others. Nie et al. [38] found that subjective norm indirectly affects the continuous usage intention of mobile health services through perceived service quality and perceived information quality. Similarly, Chang et al. [39] stated that subjective norm is significantly and positively related to the intention to use AI-powered medical chatbots. Prior studies report that subjective norm significantly influences the intention to adopt mHealth services [41, 42]. Yee et al. [42] discovered a positive correlation between subjective norm and patients' intention to use mHealth applications. Therefore,
H3: There is a significant relationship between subjective norms and the intention to adopt an AI-based health diagnostic chatbot.

2.3 Perceived Trust

Perceived trust, a behavioral perception, has consistently shown a positive relationship with users' intention to adopt specific technologies [43]. In the context of robots, trust is defined as the user's willingness to accept the results and suggestions produced by the robot [44]. Recognized as a significant factor, perceived trust plays a crucial role in attracting new users and retaining existing ones [45]. Meta-analyses have affirmed a positive association between perceived trust and the behavioral intention to use mHealth [40]. Meng et al. [46] demonstrated that perceived trust positively influences the intention to use mHealth among elderly users. Studies [47, 48, 49] advised mHealth developers and implementers to prioritize factors that foster perceived trust to ensure continued usage of mHealth applications. Van Haasteren et al. [50] also suggest that application developers must understand the variables conducive to creating a trustworthy health application. Octavius and Antonio asserted that perceived trust in mHealth is the most critical determinant for user adoption [51]. Klaver et al. [52] suggested that perceived trust may significantly and positively affect the intention to use mHealth among older users in the Netherlands. Li et al. [53] discovered that perceived trust plays a significant mediating role in predicting user adoption intention of mHealth. Seitz et al. [54, 55] emphasized the importance of building user trust toward AI-based medical chatbots. Deng et al. [56] revealed that perceived trust strongly predicts patients' adoption intention toward mHealth. Stated formally,
H4: There is a significant relationship between perceived trust and the intention to adopt an AI-based health diagnostic chatbot.

2.4 Perceived Risk

Perceived risk emerges as a significant factor when examining artificial intelligence products such as AI-powered medical chatbots or mHealth [40]. The collection and use of private information have significantly raised user concerns about privacy invasion [39]. Perceived risk may arise when users' personal health information is not verified through the mobile health application [43]. Nadarzynski et al. [57] reported that users hesitate to incorporate AI-powered medical chatbots into their healthcare routines due to potential miscommunication or inaccurate information provided by the chatbots [58]. Lai et al. [59] underscored perceived risk as the most significant factor contributing to users' resistance to using AI medical chatbots. Cheng and Jiang [60] discovered that perceived risk negatively impacts user satisfaction with chatbot services. Rajak and Shaw [61] found that perceived risk has a negative impact on the adoption of mHealth. del Río-Lanza et al. [62] suggested that mHealth designers should consider perceived risk during the development phase to increase user preference for mHealth. Reviews have consistently identified perceived risk as a significant determinant influencing user behavioral intention toward mHealth and medical chatbots [40, 63]. Thus, the following hypothesis is proposed:
H5: There is a significant relationship between perceived risk and the intention to adopt an AI-based health diagnostic chatbot.

2.5 Self-efficacy

Self-efficacy refers to individuals' judgment of their own ability or capacity to operate an application to fulfill their needs [64]. Mensah et al. [65] observed that self-efficacy significantly moderates both performance expectancy and effort expectancy in the adoption of mHealth services. Balapour et al. [76] found that self-efficacy positively influences patients' perceived intentions to adopt mHealth apps provided by clinics or hospitals. Liu et al. [66] discovered that self-efficacy could facilitate users' intention to adopt mHealth, noting its significant positive effects on perceived ubiquity, effort expectancy, performance expectancy, and subjective norm. Ghahramani and Wang [67] found that self-efficacy has a significant effect on user intention to adopt mHealth applications. Vinnikova et al. [68] found that self-efficacy positively influences behavioral intention to use mHealth. They also highlighted that users with high self-efficacy demonstrate a higher level of self-regulation, leading to the execution of their set goals. Zhang et al. [69] reported a positive association between self-efficacy and perceived ease of use, while also noting that self-efficacy significantly moderates the impact of perceived usefulness on adoption intention in mHealth. Thus,
H6: There is a significant relationship between self-efficacy and the intention to adopt an AI-based health diagnostic chatbot.

3 Method

3.1 Data Collection and Sample

The current study employed a cross-sectional design using an online questionnaire for primary data collection [70]. All gathered response data can be analyzed quantitatively to produce inferential and descriptive statistics [70]. A Google Form was used to create the online survey. The preface section at the beginning of the questionnaire briefly explained the study and highlighted that all information would be kept confidential and used solely for this study. Informed consent confirming the participation agreement was obtained before respondents answered the questionnaire. A 40-second video explaining the idea of mobile AI-based health diagnostic applications was also included before the survey for clarification.
According to a recent Internet user survey reported by the Malaysian Communications and Multimedia Commission, Malaysians are active users of social media: as of 2022, 87.5% of Malaysian respondents were on Facebook, and over 90% of Malaysian Internet users preferred WhatsApp as their communication app [71]. Therefore, the survey link was posted on social media (Facebook and WhatsApp) to facilitate data collection. A non-probability snowball sampling technique encouraged respondents to disseminate the survey link among their social circles [72].
The minimum sample size required to achieve 80% statistical power was determined using the G*Power program. For the a priori power computation, an effect size (f²) of 0.15, a significance level of 0.05, and six predictors were applied, yielding a minimum required sample size of 98 respondents.
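For reproducibility, a minimal Python sketch of this a priori computation is given below. It assumes G*Power's "linear multiple regression: fixed model, R² deviation from zero" test, with power computed from the noncentral F distribution (noncentrality λ = f²·N):

```python
# Minimal sketch of the a priori power analysis reported above (an assumption
# about the G*Power test family used): f^2 = 0.15, alpha = 0.05, six predictors.
from scipy.stats import f as f_dist, ncf

def power_for_n(n, f2=0.15, predictors=6, alpha=0.05):
    df1, df2 = predictors, n - predictors - 1
    crit = f_dist.ppf(1 - alpha, df1, df2)      # critical F under H0
    return 1 - ncf.cdf(crit, df1, df2, f2 * n)  # power under noncentrality f^2 * n

n = 8  # smallest n with positive error degrees of freedom
while power_for_n(n) < 0.80:
    n += 1
print(n)  # -> 98, matching the minimum sample size reported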
This study used SmartPLS 4 to analyze the data with Partial Least Squares Structural Equation Modelling (PLS-SEM). PLS-SEM was chosen because the objective of the present research is prediction-based [73], specifically to determine which variables predict the intention to use mobile AI-based health diagnostic applications.
The measurement items in the present research were all adapted from past research and modified to fit the current research objective, as summarized in Table 1. All items were evaluated using a five-point Likert scale ("1 = Strongly disagree, and 5 = Strongly agree").
Table 1:
Intention to Use (IU), adapted from [74]
IU1 - I intend to use mobile AI-based health diagnostic applications in order to maintain my health.
IU2 - I believe I will continue using mobile AI-based health diagnostic applications in order to maintain my health.
IU3 - If any healthcare provider asks me to report personal health data using mobile AI-based health diagnostic applications, I will do so.
Perceived Ease of Use (PEOU), adapted from [73]
PEOU1 - Learning to use mobile AI-based health diagnostic applications is easy for me.
PEOU2 - The interface of mobile AI-based health diagnostic applications is clear and understandable.
PEOU3 - It is easy for me to become skillful at using mobile AI-based health diagnostic applications.
PEOU4 - I find mobile AI-based health diagnostic applications easy to use.
Perceived Risk (PR), adapted from [75]
PR1 - I believe the privacy of mobile AI-based health diagnostic application users is protected.
PR2 - I believe personal information stored in mobile AI-based health diagnostic application systems is safe.
PR3 - I believe mobile AI-based health diagnostic applications keep participants' information secure.
Perceived Trust (PT), adapted from [75]
PT1 - I know that mobile AI-based health diagnostic applications are trustworthy.
PT2 - I know that mobile AI-based health diagnostic applications are not opportunistic.
PT3 - I know that mobile AI-based health diagnostic applications keep their promises to their users.
PT4 - The content of mobile AI-based health diagnostic applications is reliable.
Perceived Usefulness (PU), adapted from [73]
PU1 - Using mobile AI-based health diagnostic applications improves my health performance.
PU2 - Using mobile AI-based health diagnostic applications enhances my effectiveness in getting healthier.
PU3 - Using mobile AI-based health diagnostic applications makes it easier to keep a healthy habit.
PU4 - I find mobile AI-based health diagnostic applications useful for me to keep a healthy lifestyle.
Self-Efficacy (SE), adapted from [75]
SE1 - It is convenient for me to use mobile AI-based health diagnostic applications.
SE2 - I have the capability to use mobile AI-based health diagnostic applications.
SE3 - I could obtain healthcare services using mobile AI-based health diagnostic applications if there was no one around to tell me what to do.
SE4 - I could complete a health service using mobile AI-based health diagnostic applications even if I had never used a system like it before.
Subjective Norm (SN), adapted from [75]
SN1 - People who are important to me think that I should use mobile AI-based health diagnostic applications.
SN2 - People who influence my behaviour think that I should use mobile AI-based health diagnostic applications.
SN3 - People whose opinions I value prefer that I use mobile AI-based health diagnostic applications.
Table 1: Measurement items and their sources

4 Results

4.1 Demographic analysis

Of the 300 responses collected, 26 records were excluded due to straight-lining responses [76]. The remaining 274 data sets included 212 female respondents (77.4%) and 62 male respondents (22.6%), with the majority being Chinese (n = 228, 83.2%), followed by Malay/Bumiputera (n = 25, 9.1%) and Indian (n = 21, 7.7%). Most participants were between the ages of 18 and 22 (n = 133, 48.5%), followed by those aged 23 to 27 (n = 96, 35%), 28 to 32 (n = 26, 9.5%), and 33 and above (n = 18, 6.9%). The majority of the 274 respondents held a Bachelor's degree (n = 209, 76.3%), followed by Diploma/Foundation/STPM (n = 28, 10.2%), Masters (n = 25, 9.1%), O-Level/SPM and below (n = 10, 3.6%), and Doctorate/PhD (n = 2, 0.7%).
Most respondents had heard about and were familiar with mobile AI-based health diagnostic applications. The largest share (n = 98, 35.8%) recognized the Ada health app, followed by the Diagnose AI app (n = 82, 29.9%), the WebMD healthcare app (n = 52, 19.0%), the Prognosis: Your Diagnosis app (n = 21, 7.7%), and the Epocrates Plus app (n = 16, 5.8%). Three respondents (1.1%) said they were unaware of any mobile AI-based health diagnostic apps. One respondent reported using the Apple Health mobile app, developed by Apple Inc., while another reported using the Google-operated Health Connect app. Most respondents (n = 213, 77.7%) reported using a mobile AI-based health diagnostic application for less than a year, followed by those reporting one to three years (n = 47, 17.2%) and those reporting more than three years (n = 14, 5.1%).

4.2 Single source bias

To test for full collinearity, all research constructs, including dependent and independent variables, were regressed onto a common variable to determine their variance inflation factors (VIFs). All VIF values were less than 3.3 [77], as indicated in Table 2. The results show that single-source bias, which can give rise to common method bias and common method variance, is not an issue in the current data set.
Table 2:
Construct   IU      PEOU    PR      PT      PU      SE      SN
VIF         2.704   2.348   2.190   3.279   2.455   2.797   1.992
Table 2: Full Collinearity
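As an illustration, a hedged Python sketch of this full-collinearity VIF check is shown below; it assumes `scores` is a pandas DataFrame of the seven latent-variable scores (IU, PEOU, PR, PT, PU, SE, SN) exported from SmartPLS, and the column names are illustrative:

```python
# Sketch of a full-collinearity VIF test: each construct's VIF against all others.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def full_collinearity_vifs(scores: pd.DataFrame) -> pd.Series:
    X = sm.add_constant(scores)  # add an intercept column
    return pd.Series({col: variance_inflation_factor(X.values, i)
                      for i, col in enumerate(X.columns) if col != "const"})

# vifs = full_collinearity_vifs(scores)
# (vifs < 3.3).all()  # the threshold applied in Table 2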

4.3 Normality Assumptions

The multivariate normality of the data was evaluated using the Web Power software [76], [78]. The assumption of multivariate normality was violated, as Mardia's multivariate kurtosis (β = 106.51, p < 0.01) and skewness (β = 15.96, p < 0.01) exceeded the respective thresholds of ±20 and ±3 [78]. Therefore, PLS, a non-parametric technique, was suitable, and the bootstrapping procedure was used to correct the standard errors [73].
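For reference, a minimal NumPy sketch of the classical Mardia coefficients is given below, assuming `X` is the (n x p) matrix of item responses; the Web Power tool used above additionally reports the associated significance tests:

```python
# Sketch of Mardia's multivariate skewness and kurtosis coefficients.
import numpy as np

def mardia(X: np.ndarray):
    n, _ = X.shape
    Xc = X - X.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(X, rowvar=False, bias=True))  # ML covariance
    D = Xc @ S_inv @ Xc.T            # Mahalanobis cross-products
    skew = (D ** 3).sum() / n**2     # multivariate skewness (b1,p)
    kurt = (np.diag(D) ** 2).mean()  # multivariate kurtosis (b2,p)
    return skew, kurt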

4.4 Measurement model assessment

4.4.1 Convergent validity

Convergent validity of the measurement model was evaluated using composite reliability (CR), average variance extracted (AVE), and outer loadings. CR values were assessed against the 0.70 to 0.95 range [73], and each construct's AVE reached the 0.50 minimum. As shown in Table 3, all outer loadings, CR values, and AVE values exceeded 0.50, 0.70, and 0.50 [73], respectively. Convergent validity and reliability were thus confirmed.
Table 3:
Construct / Item                 Outer Loading   CR      AVE     R²
Intention to Use (IU)                            0.848   0.651   0.630
IU1                              0.854
IU2                              0.858
IU3                              0.699
Perceived Ease of Use (PEOU)                     0.868   0.623
PEOU1                            0.819
PEOU2                            0.837
PEOU3                            0.733
PEOU4                            0.766
Perceived Risk (PR)                              0.894   0.737
PR1                              0.867
PR2                              0.875
PR3                              0.833
Perceived Trust (PT)                             0.866   0.618
PT1                              0.818
PT2                              0.733
PT3                              0.808
PT4                              0.783
Perceived Usefulness (PU)                        0.879   0.644
PU1                              0.818
PU2                              0.797
PU3                              0.786
PU4                              0.809
Self-Efficacy (SE)                               0.850   0.585
SE1                              0.770
SE2                              0.768
SE3                              0.753
SE4                              0.770
Subjective Norm (SN)                             0.913   0.778
SN1                              0.884
SN2                              0.889
SN3                              0.872
Table 3: Measurement model
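To make the computation behind Table 3 concrete, a short Python sketch follows showing how CR and AVE can be recomputed from a construct's standardized outer loadings, using the IU loadings from the table:

```python
# Recomputing CR and AVE from standardized outer loadings.
import numpy as np

def composite_reliability(loadings):
    lam = np.asarray(loadings)
    squared_sum = lam.sum() ** 2
    return squared_sum / (squared_sum + (1 - lam**2).sum())

def average_variance_extracted(loadings):
    lam = np.asarray(loadings)
    return (lam**2).mean()

iu = [0.854, 0.858, 0.699]  # Intention to Use loadings from Table 3
print(round(composite_reliability(iu), 3),       # -> 0.848
      round(average_variance_extracted(iu), 3))  # -> 0.651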

4.4.2 Discriminant validity

The Fornell and Larcker [75] criterion was used to assess discriminant validity. As shown in Table 4, the square root of each construct's AVE exceeded its correlations with the other research constructs. The findings support the discriminant validity of the research constructs [79].
Table 4:
                            1       2       3       4       5       6       7
1. Intention to Use         0.807
2. Perceived Ease of Use    0.565   0.790
3. Perceived Risk           0.551   0.613   0.858
4. Perceived Trust          0.698   0.603   0.685   0.786
5. Perceived Usefulness     0.692   0.586   0.487   0.669   0.803
6. Self-Efficacy            0.663   0.705   0.617   0.694   0.605   0.765
7. Subjective Norm          0.627   0.457   0.471   0.642   0.610   0.512   0.882
Table 4: Discriminant validity
Note: Diagonal values are the square roots of the AVEs of the individual constructs; off-diagonal values are the correlations between constructs
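A minimal sketch of the check underlying Table 4 is given below; it assumes `corr` is the plain construct correlation matrix (unit diagonal) and `aves` holds each construct's AVE, in the same order:

```python
# Fornell-Larcker check: sqrt(AVE) must exceed every inter-construct correlation.
import numpy as np

def fornell_larcker_ok(corr: np.ndarray, aves: np.ndarray) -> bool:
    off_diag = corr - np.eye(len(aves))  # zero out the unit diagonal
    return bool((np.sqrt(aves) > np.abs(off_diag).max(axis=1)).all())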

4.5 Structural model assessment

4.5.1 Hypothesis Testing

The results of the hypothesis testing are summarized in Table 5. Perceived ease of use (β = 0.007, p > 0.05) and perceived risk (β = 0.048, p > 0.05) did not have a significant relationship with the intention to use AI-based health diagnostic applications; thus, hypotheses H1 and H5 were not supported. Hypotheses H2, H3, H4, and H6 were supported, indicating that perceived usefulness (β = 0.284, p < 0.001), subjective norm (β = 0.188, p < 0.01), perceived trust (β = 0.193, p < 0.01), and self-efficacy (β = 0.227, p < 0.01) had positive and significant relationships with the intention to use AI-based health diagnostic applications.
Overall, the research constructs explain 63% of the variance in the intention to use AI-based health diagnostic applications. The R² value of 0.630 exceeds Cohen's [80] threshold of 0.26, indicating a substantial model.
Table 5 also shows that multicollinearity is not an issue among the independent variables, as all VIF values are below 3.3 [76]. Perceived usefulness, subjective norm, perceived trust, self-efficacy, and perceived risk each have a small effect (f² below 0.15) [80] in explaining the variance in the intention to use AI-based health diagnostic applications. Perceived ease of use had an effect size of 0.000, below the small-effect threshold of 0.02 [80], confirming its negligible influence on the intention to use AI-based health diagnostic applications.
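For clarity, Cohen's f² used above is the drop in R² when one predictor is removed, scaled by the unexplained variance of the full model; the excluded-model R² in the example below is a hypothetical value for illustration only:

```python
# Cohen's f^2 for a single predictor in the structural model.
def f_squared(r2_included: float, r2_excluded: float) -> float:
    return (r2_included - r2_excluded) / (1 - r2_included)

print(round(f_squared(0.630, 0.594), 3))  # -> 0.097, e.g. the f^2 reported for PU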

4.5.2 PLS-Predict

In the current study, PLSpredict was utilized to assess the predictive relevance [81] of the model for the intention to use AI-based health diagnostic applications. Predictive relevance is established when Q²predict is larger than zero [81]. As summarized in Table 6, the RMSE of the PLS model was lower than that of the linear regression (LM) benchmark for all items (negative PLS-LM differences), indicating the high predictive power of the current research model [81].
Table 5:
Hypothesis       Std. Beta (β)   Std. Error   t-value   BCI 5%    BCI 95%   p-value   Results         f²      Effect Size   VIF (≤ 3.3)
H1: PEOU → IU    0.007 ns        0.070        0.103     -0.105    0.120     0.459     Not Supported   0.000   None          2.348
H2: PU → IU      0.284 ***       0.080        3.532     0.150     0.411     < 0.001   Supported       0.097   Small         2.237
H3: SN → IU      0.188 **        0.079        2.387     0.068     0.327     0.009     Supported       0.050   Small         1.897
H4: PT → IU      0.193 **        0.080        2.418     0.062     0.324     0.008     Supported       0.032   Small         3.178
H5: PR → IU      0.048 ns        0.057        0.846     -0.049    0.137     0.199     Not Supported   0.003   Small         2.184
H6: SE → IU      0.227 **        0.078        2.907     0.096     0.352     0.002     Supported       0.052   Small         2.658
Table 5: Hypotheses Summary
Notes: PEOU = Perceived Ease of Use; IU = Intention to Use; PU = Perceived Usefulness; SN = Subjective Norm; PT = Perceived Trust; PR = Perceived Risk; SE = Self-Efficacy; BCI = bias-corrected confidence interval; ns = not significant; *** = p < 0.001; ** = p < 0.01
Table 6:
Construct                  Q²predict
Intention to Use (IU)      0.590

Item   PLS-RMSE   LM-RMSE   PLS-LM RMSE   Q²predict
IU1    0.627      0.660     -0.033        0.395
IU2    0.665      0.698     -0.034        0.433
IU3    0.690      0.703     -0.012        0.313
Table 6: PLS-Predict
Notes: RMSE = root mean squared error; PLS = partial least squares path model; LM = linear regression model; Q²predict = predictive relevance
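As a hedged sketch, the Q²predict statistic reported in Table 6 compares the model's out-of-sample squared error against a naive benchmark that predicts the training-sample mean of the indicator:

```python
# Q^2_predict: > 0 indicates predictive relevance over the naive mean benchmark.
import numpy as np

def q2_predict(y_true, y_pred, y_train_mean):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    sse = ((y_true - y_pred) ** 2).sum()        # model prediction error
    sso = ((y_true - y_train_mean) ** 2).sum()  # naive benchmark error
    return 1 - sse / sso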

5 Discussion

Contrary to expectations, perceived ease of use does not exhibit a significant positive relationship with the intention to adopt AI-powered medical chatbots. This result is in line with Kelly et al.'s work [82]. One plausible explanation for this discrepancy could be that users prioritize other aspects of AI-powered medical chatbots, such as functionality, user experience, and data analysis, over ease of use. They may also place greater emphasis on the accuracy of data and analysis rather than the ease of use of the application. Moreover, the perception of ease of use may vary among users, influenced by their individual experiences, backgrounds, and other factors. Therefore, it would be beneficial for researchers to delve deeper into how these individual differences impact the perceived ease of use and its influence on the intention to adopt AI-powered medical chatbots.
This study reveals that perceived usefulness exhibits the most substantial positive relationship with the intention to use AI-based health diagnostic chatbots. This result aligns with Shahsavar and Choudhury's [33] finding that perceived usefulness positively influences the adoption intention of AI-based health diagnostic chatbots. Huang et al. [34] further reinforce these findings, emphasizing the significant association between perceived usefulness and the intention to adopt AI mHealth solutions.
This study uncovers that subjective norm exerts a positive influence on the intention to adopt AI-powered health diagnostic chatbots. This finding aligns with the work of Chang et al. [39], which highlights the significant effect of subjective norm on the intention to adopt mHealth. Similar studies [41, 42] have yielded comparable results, strengthening the observed impact of subjective norm on the intention to adopt mHealth. Subjective norm may create a sense of pressure, leading users to feel compelled to adopt or use the AI-powered health diagnostic chatbot to avoid dissatisfying significant individuals in their lives.
This study revealed that perceived trust positively influenced the intention to adopt AI-based health diagnostic chatbots. This finding agrees with the findings of Octavius and Antonio [51] and Klaver et al. [52]. Users' trust in the reliability and accuracy of AI-based health diagnostic chatbots plays a crucial role: ensuring consistent delivery of accurate diagnoses and data is key to fostering trust among users. Moreover, trust can be bolstered by implementing robust security and privacy measures to safeguard users' sensitive data. Providing a consistently positive user experience, characterized by useful recommendations and a user-friendly interface, may also enhance trust development over time.
This study found that perceived risk does not influence the intention to adopt AI-based health diagnostic chatbots. This finding contradicts studies documenting that perceived risk influences adoption intention [40, 63]. One possible explanation is that trust in AI-based health diagnostic chatbots attenuates perceived risk, weakening its effect on adoption intention. Additionally, increased familiarity and exposure to AI-based technology in other domains might lower the perceived risks associated with adopting AI-based health diagnostic chatbots: individuals accustomed to using AI-driven services in other aspects of their lives may perceive these technologies as less risky within the healthcare context. Users may also weigh the potential benefits of AI-based health diagnostic chatbots, such as convenience, accessibility, and potentially improved health outcomes, more heavily than the perceived risks. This benefit-risk trade-off may lead individuals to focus on the advantages of adopting the technology rather than its potential risks.
This study noted that self-efficacy has a positive relationship with the intention to adopt AI-based health diagnostic chatbots. This finding is consistent with prior work on mHealth adoption [67, 68, 69]. Users with high self-efficacy tend to possess greater confidence in their ability to interact with and explore technology effectively and efficiently. This confidence can lead to a greater willingness to adopt and explore new technologies such as AI-based health diagnostic chatbots. Furthermore, users who have previously explored or adopted AI-powered tools in other domains may be more confident in adopting new AI-powered applications like AI-based health diagnostic chatbots.

6 Managerial implications

The findings of this study offer managerial strategies for developers of AI-powered health diagnostic chatbots. Given the significance of the technology's functionality aspect (perceived usefulness), developers should prioritize demonstrating the chatbot's utility by showcasing its ability to provide accurate diagnoses, recommendations, and improve healthcare outcomes. Customizing the features and functions of the chatbot to address specific healthcare needs is essential, which can be achieved through surveys and research to identify key areas for improvement. Additionally, providing comprehensive training and support to users can enhance their understanding of how to effectively utilize the capabilities of the AI-powered health diagnostic chatbot, thereby enhancing their perception of its usefulness.
AI-powered health diagnostic chatbot developers can capitalize on the effects of subjective norms by encouraging satisfied users to share their experiences and recommendations about the chatbot. This could involve initiating a referral program, establishing an online community, and organizing meetings to facilitate user discussions and knowledge exchange. Additionally, developers can leverage social influence by highlighting endorsements from healthcare professionals, influential figures, and others to reinforce the effect of subjective norms on the adoption of AI-based health diagnostic chatbots. Collaborating with these individuals to promote the benefits of the application and address any user concerns can further enhance acceptance and adoption rates.
Developers should prioritize considering perceived trust, as it significantly impacts the adoption of AI-powered health diagnostic chatbots. They must focus on building trust among users by implementing robust data security measures and transparently demonstrating how they safeguard users' sensitive data. Compliance with rules and regulations should be emphasized, along with mitigation strategies for potential risks such as data breaches and misuse. Maintaining open communication channels with users is crucial, as developers should proactively address user concerns and provide clear information. Additionally, developers can enhance perceived trust by providing evidence of the chatbot's accuracy through empirical evidence and certification, particularly for users who have already utilized the AI-powered health diagnostic chatbot.
Developers should update and simplify the design of the AI-powered health diagnostic chatbot to reduce user cognitive load and facilitate ease of use. Clear instructions, guided tutorials, and user-friendly interfaces can enhance users' confidence in their ability to navigate the application. Additionally, developers can enhance user self-efficacy by celebrating user successes and acknowledging their achievements. Highlighting instances of successful interactions with the chatbot can boost users' confidence in their ability to effectively utilize the technology and achieve positive outcomes.

Acknowledgments

The authors thank all the participants who generously dedicated their time and shared insights critical to this research.

References

[1]
J. G. Kahn, J. S. Yang, and J. S. Kahn, “‘Mobile’ Health Needs And Opportunities In Developing Countries,” Health Aff, vol. 29, no. 2, pp. 252–258, Feb. 2010.
[2]
O. Rivera-Romero, E. Gabarron, J. Ropero, and K. Denecke, “Designing personalised mHealth solutions: An overview,” J Biomed Inform, vol. 146, Oct. 2023.
[3]
S. P. Rowland, J. E. Fitzgerald, T. Holme, J. Powell, and A. McGregor, “What is the clinical value of mHealth for patients?,” NPJ Digit Med, vol. 3, no. 1, Dec. 2020.
[4]
Jeanine Vos and Chuck Parker, “Medical Device Regulation mHealth Policy and Position Ensuring continued patient safety whilst enabling medical device innovation in mobile health,” 2012.
[5]
P. Bhatt, J. Liu, Y. Gong, J. Wang, and Y. Guo, “Emerging Artificial Intelligence-Empowered mHealth: Scoping Review,” JMIR mHealth and uHealth, vol. 10, no. 6. JMIR Publications Inc., Jun. 01, 2022.
[6]
J. R. Zech, M. A. Badgeley, M. Liu, A. B. Costa, J. J. Titano, and E. K. Oermann, “Variable generalization performance of a deep learning model to detect pneumonia in chest radiographs: A cross-sectional study,” PLoS Med, vol. 15, no. 11, Nov. 2018.
[7]
C. Sabanayagam et al., “A deep learning algorithm to detect chronic kidney disease from retinal photographs in community-based populations,” Lancet Digit Health, vol. 2, no. 6, pp. e295–e302, Jun. 2020.
[8]
P. Mamoshina et al., “Converging blockchain and next-generation artificial intelligence technologies to decentralize and accelerate biomedical research and healthcare,” Oncotarget, 2018. [Online]. Available: www.impactjournals.com/oncotarget/
[9]
S. Secinaro, D. Calandra, A. Secinaro, V. Muthurangu, and P. Biancone, “The role of artificial intelligence in healthcare: a structured literature review,” BMC Med Inform Decis Mak, vol. 21, no. 1, Dec. 2021.
[10]
K. I. Mohammed et al., “Real-Time Remote-Health Monitoring Systems: a Review on Patients Prioritisation for Multiple-Chronic Diseases, Taxonomy Analysis, Concerns and Solution Procedure,” Journal of Medical Systems, vol. 43, no. 7. Springer New York LLC, Jul. 01, 2019.
[11]
C. Farrar and K. Worden, Structural health monitoring: A Machine Learning Perspective. Chichester, UK: John Wiley & Sons, 2012.
[12]
J. Wiens and E. S. Shenoy, “Machine Learning for Healthcare: On the Verge of a Major Shift in Healthcare Epidemiology,” Clinical Infectious Diseases, vol. 66, no. 1, pp. 149–153, Jan. 2018.
[13]
A. Esteva et al., “A guide to deep learning in healthcare,” Nature Medicine, vol. 25, no. 1. Nature Publishing Group, pp. 24–29, Jan. 01, 2019.
[14]
T. Thurner, “The influence factors of the patients’ usage intention of AI-based preliminary diagnosis tools: The case study of Ada.”
[15]
W. R. Malatji, R. van Eck, and T. Zuva, “Understanding the usage, modifications, limitations and criticisms of technology acceptance model (TAM),” Advances in Science, Technology and Engineering Systems, vol. 5, no. 6, pp. 113–117, 2020.
[16]
G. A. Putri, A. K. Widagdo, and D. Setiawan, “Analysis of financial technology acceptance of peer to peer lending (P2P lending) using extended technology acceptance model (TAM),” Journal of Open Innovation: Technology, Market, and Complexity, vol. 9, no. 1, Mar. 2023.
[17]
S. Majumdar and V. Pujari, “Exploring usage of mobile banking apps in the UAE: a categorical regression analysis,” Journal of Financial Services Marketing, vol. 27, no. 3, pp. 177–189, Sep. 2022.
[18]
J. N. Siebert et al., “A mobile device app to reduce prehospital medication errors and time to drug preparation and delivery by emergency medical services during simulated pediatric cardiopulmonary resuscitation: Study protocol of a multicenter, prospective, randomized controlled trial,” Trials, vol. 20, no. 1, Nov. 2019.
[19]
M. Z. Alam, M. R. Hoque, W. Hu, and Z. Barua, “Factors influencing the adoption of mHealth services in a developing country: A patient-centric study,” Int J Inf Manage, vol. 50, pp. 128–143, Feb. 2020.
[20]
S. S. Binyamin and B. A. Zafar, “Proposing a mobile apps acceptance model for users in the health area: A systematic literature review and meta-analysis,” Health Informatics J, vol. 27, no. 1, 2021.
[21]
A. J. Kim, J. Yang, Y. Jang, and J. S. Baek, “Acceptance of an informational antituberculosis chatbot among korean adults: Mixed methods research,” JMIR Mhealth Uhealth, vol. 9, no. 11, Nov. 2021.
[22]
I. Iancu and B. Iancu, “Interacting with chatbots later in life: A technology acceptance perspective in COVID-19 pandemic situation,” Front Psychol, vol. 13, Jan. 2023.
[23]
D. Y. Park and H. Kim, “Determinants of Intentions to Use Digital Mental Healthcare Content among University Students, Faculty, and Staff: Motivation, Perceived Usefulness, Perceived Ease of Use, and Parasocial Interaction with AI Chatbot,” Sustainability (Switzerland), vol. 15, no. 1, Jan. 2023.
[24]
F. A. Silva, A. S. Shojaei, and B. Barbosa, “Chatbot-Based Services: A Study on Customers’ Reuse Intention,” Journal of Theoretical and Applied Electronic Commerce Research, vol. 18, no. 1, pp. 457–474, Mar. 2023.
[25]
K. Patil and M. Kulkarni, “Can we trust Health and Wellness Chatbot going mobile? Empirical research using TAM and HBM,” in 2022 IEEE Region 10 Symposium, TENSYMP 2022, Institute of Electrical and Electronics Engineers Inc., 2022.
[26]
M. Ashfaq, J. Yun, S. Yu, and S. M. C. Loureiro, “I, Chatbot: Modeling the determinants of users’ satisfaction and continuance intention of AI-powered service agents,” Telematics and Informatics, vol. 54, Nov. 2020.
[27]
T. Sitthipon, S. Siripipatthanakul, B. Phayaprom, S. Siripipattanakul, and P. Limna, “Determinants of Customers’ Intention to Use Healthcare Chatbots and Apps in Bangkok, Thailand,” 2022. [Online]. Available: https://ssrn.com/abstract=4045661
[28]
M. M. D. Alam, M. Z. Alam, S. A. Rahman, and S. K. Taghizadeh, “Factors influencing mHealth adoption and its impact on mental well-being during COVID-19 pandemic: A SEM-ANN approach,” J Biomed Inform, vol. 116, Apr. 2021.
[29]
P. R. Palos-Sanchez, J. R. Saura, M. Á. R. Martin, and M. Aguayo-Camacho, “Toward a better understanding of the intention to use mhealth apps: Exploratory study,” JMIR mHealth and uHealth, vol. 9, no. 9. JMIR Publications Inc., Sep. 01, 2021.
[30]
H. H. Lu, W. S. Lin, C. Raphael, and M. J. Wen, “A study investigating user adoptive behavior and the continuance intention to use mobile health applications during the COVID-19 pandemic era: Evidence from the telemedicine applications utilized in Indonesia,” Asia Pacific Management Review, vol. 28, no. 1, pp. 52–59, Mar. 2023.
[31]
J. Pak and H.-S. Kim, “Exploring the role of user empowerment in shaping behavioral intention and actual use of mHealth: An empirical study of an extended Technology Acceptance Model,” 2023.
[32]
F. R. T. van Elburg, N. S. Klaver, A. P. Nieboer, and M. Askari, “Gender differences regarding intention to use mHealth applications in the Dutch elderly population: a cross-sectional study,” BMC Geriatr, vol. 22, no. 1, Dec. 2022.
[33]
Y. Shahsavar and A. Choudhury, “User Intentions to Use ChatGPT for Self-Diagnosis and Health-Related Purposes: Cross-sectional Survey Study,” JMIR Hum Factors, vol. 10, 2023.
[34]
C. Y. Huang, M. C. Yang, I. M. Chen, and W. C. Hsu, “Modeling Consumer Adoption Intention of an AI-Powered Health Chatbot in Taiwan: An Empirical Perspective,” International Journal of Performability Engineering, vol. 18, no. 5, pp. 338–349, May 2022.
[35]
A. O. Oloveze, P. A. Ugwu, V. C. Okeke, K. Chukwuoyims, and E. O. Ahaiwe, “Factors motivating end-users’ behavioural intention to recommend m-health innovation: multi-group analysis,” Health Economics and Management Review, vol. 3, no. 3, pp. 17–31, 2022.
[36]
W. Wang and K. Siau, “Living with Artificial Intelligence-Developing a Theory on Trust in Health Chatbots,” 2018.
[37]
Y.-P. Hsu, Y. Chih-Hsi, and W.-C. Hsu, “Factors Influencing Users’ Willingness to Consult Chatbots for Health Information,” 2019.
[38]
L. Nie, B. Oldenburg, Y. Cao, and W. Ren, “Continuous usage intention of mobile health services: model construction and validation,” BMC Health Serv Res, vol. 23, no. 1, Dec. 2023.
[39]
I. C. Chang, Y. S. Shih, and K. M. Kuo, “Why would you use medical chatbots? interview and survey,” Int J Med Inform, vol. 165, Sep. 2022.
[40]
Y. Zhao, Q. Ni, and R. Zhou, “What factors influence the mobile health service adoption? A meta-analysis and the moderating role of age,” International Journal of Information Management, vol. 43. Elsevier Ltd, pp. 342–350, Dec. 01, 2018.
[41]
Md. A. Kaium, Y. Bao, M. Z. Alam, N. Hasan, and Md. R. Hoque, “Understanding the insight of factors affecting mHealth adoption,” International Journal of Research in Business and Social Science (2147- 4478), vol. 8, no. 6, pp. 181–200, Oct. 2019.
[42]
T. S. Yee, L. C. Seong, and W. S. Chin, “Patient's Intention to Use Mobile Health App,” Journal of Management Research, vol. 11, no. 3, p. 18, May 2019.
[43]
R. Schnall, T. Higgins, W. Brown, A. Carballo-Dieguez, and S. Bakken, “Trust, Perceived Risk, Perceived Ease of Use and Perceived Usefulness as Factors Related to mHealth Technology Use,” 2015.
[44]
P. A. Hancock, D. R. Billings, and K. E. Schaefer, “Can you trust your robot?,” Ergonomics in Design, vol. 19, no. 3, pp. 24–29, Jul. 2011.
[45]
D. J. Kim, D. L. Ferrin, and H. R. Rao, “A trust-based consumer decision-making model in electronic commerce: The role of trust, perceived risk, and their antecedents,” Decis Support Syst, vol. 44, no. 2, pp. 544–564, Jan. 2008.
[46]
F. Meng, X. Guo, Z. Peng, K. H. Lai, and X. Zhao, “Investigating the adoption of mobile health services by elderly users: Trust transfer model and survey study,” JMIR Mhealth Uhealth, vol. 7, no. 1, Jan. 2019.
[47]
K. Sowon and W. Chigona, “Trust in mHealth: How do Maternal Health Clients Accept and Use mHealth Interventions?,” in ACM International Conference Proceeding Series, Association for Computing Machinery, Sep. 2020, pp. 189–197.
[48]
L. Alam and S. Mueller, “Examining the effect of explanation on satisfaction and trust in AI diagnostic systems,” BMC Med Inform Decis Mak, vol. 21, no. 1, Dec. 2021.
[49]
A. V. Prakash and S. Das, “Would you trust a bot for healthcare advice? An empirical investigation,” in PACIS 2020 Proceedings, 2020. [Online]. Available: https://aisel.aisnet.org/pacis2020/62
[50]
A. van Haasteren, F. Gille, M. Fadda, and E. Vayena, “Development of the mHealth App Trustworthiness checklist,” Digit Health, vol. 5, 2019.
[51]
G. S. Octavius and F. Antonio, “Antecedents of Intention to Adopt Mobile Health (mHealth) Application and Its Impact on Intention to Recommend: An Evidence from Indonesian Customers,” Int J Telemed Appl, vol. 2021, 2021.
[52]
N. S. Klaver, J. Van de Klundert, R. J. G. M. Van den Broek, and M. Askari, “Relationship between perceived risks of using mhealth applications and the intention to use them among older adults in the netherlands: Cross-sectional study,” JMIR Mhealth Uhealth, vol. 9, no. 8, Aug. 2021.
[53]
Y. Li, R. Liu, J. Wang, and T. Zhao, “How does mHealth service quality influences adoption?,” Industrial Management and Data Systems, vol. 122, no. 3, pp. 774–795, Mar. 2022.
[54]
L. Seitz, S. Bekmeier-Feuerhahn, and K. Gohil, “Can we trust a chatbot like a physician? A qualitative study on understanding the emergence of trust toward diagnostic chatbots,” International Journal of Human Computer Studies, vol. 165, Sep. 2022.
[55]
L. Seitz et al., “Towards a Model for Building Trust and Acceptance of Artificial Intelligence Aided Medical Assessment Systems,” 2020.
[56]
Z. Deng, Z. Hong, C. Ren, W. Zhang, and F. Xiang, “What predicts patients’ adoption intention toward mhealth services in China: Empirical study,” JMIR Mhealth Uhealth, vol. 6, no. 8, Aug. 2018.
[57]
T. Nadarzynski, O. Miles, A. Cowie, and D. Ridge, “Acceptability of artificial intelligence (AI)-led chatbot services in healthcare: A mixed-methods study,” Digit Health, vol. 5, Aug. 2019.
[58]
A. Palanica, P. Flaschner, A. Thommandram, M. Li, and Y. Fossat, “Physicians’ perceptions of chatbots in health care: Cross-sectional web-based survey,” J Med Internet Res, vol. 21, no. 4, Apr. 2019.
[59]
Y. Lai, E. Lioliou, and P. Panagiotopoulos, “Understanding Users’ switching Intention to AI-Powered Healthcare Chatbots.” [Online]. Available: https://aisel.aisnet.org/ecis2021_rp/51
[60]
Y. Cheng and H. Jiang, “How Do AI-driven Chatbots Impact User Experience? Examining Gratifications, Perceived Privacy Risk, Satisfaction, Loyalty, and Continued Use,” J Broadcast Electron Media, vol. 64, no. 4, pp. 592–614, 2020.
[61]
M. Rajak and K. Shaw, “An extension of technology acceptance model for mHealth user adoption,” Technol Soc, vol. 67, Nov. 2021.
[62]
A. B. del Río-Lanza, A. Suárez-Vázquez, L. Suárez-Álvarez, and V. Iglesias-Argüelles, “Mobile health (mhealth): facilitators and barriers of the intention of use in patients with chronic illnesses,” J Commun Healthc, vol. 13, no. 2, pp. 138–146, Apr. 2020.
[63]
S. Birkmeyer, B. W. Wirtz, and P. F. Langer, “Determinants of mHealth success: An empirical investigation of the user perspective,” Int J Inf Manage, vol. 59, Aug. 2021.
[64]
T. I. Vaughan-Johnston and J. A. Jacobson, “Theory Issues in Measurement Self-efficacy Theory,” 2020.
[65]
I. K. Mensah, G. Zeng, and D. S. Mwakapesa, “The behavioral intention to adopt mobile health services: The moderating impact of mobile self-efficacy.”
[66]
Y. Liu, X. Lu, G. Zhao, C. Li, and J. Shi, “Adoption of mobile health services using the unified theory of acceptance and use of technology model: Self-efficacy and privacy concerns,” Front Psychol, vol. 13, Aug. 2022.
[67]
F. Ghahramani and J. Wang, “Intention to adopt mhealth apps among informal caregivers: Cross-sectional study,” JMIR Mhealth Uhealth, vol. 9, no. 3, Mar. 2021.
[68]
A. Vinnikova, L. Lu, J. Wei, G. Fang, and J. Yan, “The use of smartphone fitness applications: The role of self-efficacy and self-regulation,” Int J Environ Res Public Health, vol. 17, no. 20, pp. 1–16, Oct. 2020.
[69]
X. Zhang, X. Han, Y. Dang, F. Meng, X. Guo, and J. Lin, “User acceptance of mobile health services from users’ perspectives: The role of self-efficacy and response-efficacy in technology acceptance,” Inform Health Soc Care, vol. 42, no. 2, pp. 194–206, Apr. 2017.
[70]
M. Saunders, P. Lewis, and A. Thornhill, Research Methods for Business Students, 5th ed. [Online]. Available: www.pearsoned.co.uk
[71]
MCMC, “Internet Users Survey 2022,” Malaysian Communications and Multimedia Commission. [Online]. Available: https://mcmc.gov.my/skmmgovmy/media/General/IUS-2022.pdf
[72]
M. Alsobhi, H. S. Sachdev, M. F. Chevidikunnan, R. Basuodan, K. U. Dhanesh Kumar, and F. Khan, “Facilitators and Barriers of Artificial Intelligence Applications in Rehabilitation: A Mixed-Method Approach,” Int J Environ Res Public Health, vol. 19, no. 23, Dec. 2022.
[73]
J. F. Hair, J. J. Risher, M. Sarstedt, and C. M. Ringle, “When to use and how to report the results of PLS-SEM,” European Business Review, vol. 31, no. 1. Emerald Group Publishing Ltd., pp. 2–24, Jan. 14, 2019.
[74]
M. K. Cain, Z. Zhang, and K. H. Yuan, “Univariate and multivariate skewness and kurtosis for measuring nonnormality: Prevalence, influence and estimation,” Behav Res Methods, vol. 49, no. 5, pp. 1716–1735, Oct. 2017.
[75]
C. Fornell and D. F. Larcker, “Evaluating Structural Equation Models with Unobservable Variables and Measurement Error,” 1981.
[76]
A. Balapour, I. Reychav, R. Sabherwal, and J. Azuri, “Mobile technology identity and self-efficacy: Implications for the adoption of clinically supported mobile health apps,” Int J Inf Manage, vol. 49, pp. 58–68, Dec. 2019.
[77]
M. Yan, R. Filieri, E. Raguseo, and M. Gorton, “Mobile apps for healthy living: Factors influencing continuance intention for health apps,” Technol Forecast Soc Change, vol. 166, May 2021.
[78]
M. Z. Alam, W. Hu, M. A. Kaium, M. R. Hoque, and M. M. D. Alam, “Understanding the determinants of mHealth apps adoption in Bangladesh: A SEM-Neural network approach,” Technol Soc, vol. 61, May 2020.
[79]
N. Kock, “Common method bias in PLS-SEM: A full collinearity assessment approach,” International Journal of e-Collaboration, vol. 11, no. 4, pp. 1–10, Oct. 2015.
[80]
J. Cohen, Statistical power analysis for the behavioral sciences. L. Erlbaum Associates, 1988.
[81]
G. Shmueli et al., “Predictive model assessment in PLS-SEM: guidelines for using PLSpredict,” Eur J Mark, vol. 53, no. 11, pp. 2322–2347, Sep. 2019.
[82]
S. Kelly, S. A. Kaye, and O. Oviedo-Trespalacios, “What factors contribute to the acceptance of artificial intelligence? A systematic review,” Telematics and Informatics, vol. 77, Feb. 2023.
