April 2024 | XIAO ZHAN, NOURA ABDI, WILLIAM SEYMOUR, JOSE SUCH
Healthcare Voice AI Assistants (HVAs) are increasingly used in healthcare, offering convenient access to services. Trust is crucial for their adoption, but the factors influencing trust in HVAs are under-researched. This study explores how functional, personal, and risk factors affect trust in and adoption of HVAs, using a survey of 300 users. A partial least squares structural model shows that trust in HVAs is substantially explained by functional factors (usefulness, content credibility, service quality), security and privacy risks, and users' personal stance on technology, and that trust in turn influences users' intention to use HVAs. The research highlights challenges unique to healthcare, such as data sensitivity and the need for accurate diagnoses, which distinguish HVAs from general-purpose voice assistants, and emphasizes the importance of addressing these factors to promote user trust and adoption. The findings contribute to understanding trust and adoption in the healthcare domain, with implications for the design and development of future HVAs.