• BMC Med Inform Decis Mak · Jul 2020

    Use of AI-based tools for healthcare purposes: a survey study from consumers' perspectives.

    • Pouyan Esmaeilzadeh.
    • Department of Information Systems and Business Analytics, College of Business, Florida International University, Miami, FL, 33199, USA. pesmaeil@fiu.edu.
    • BMC Med Inform Decis Mak. 2020 Jul 22; 20(1): 170.

    Background: Several studies highlight the effects of artificial intelligence (AI) systems on healthcare delivery. AI-based tools may improve prognosis, diagnostics, and care planning. It is believed that AI will be an integral part of healthcare services in the near future and will be incorporated into several aspects of clinical care. Thus, many technology companies and governmental projects have invested in producing AI-based clinical tools and medical applications. Patients can be one of the most important beneficiaries and users of AI-based applications, and their perceptions may affect the widespread use of AI-based tools. Patients need assurance that they will not be harmed by AI-based devices and will instead benefit from using AI technology for healthcare purposes. Although AI can enhance healthcare outcomes, possible concerns and risks should be addressed before it is integrated into routine clinical care.

    Methods: We developed a model based mainly on value perceptions, reflecting the specificity of the healthcare field. This study examines the perceived benefits and risks of AI medical devices with clinical decision support (CDS) features from consumers' perspectives. We used an online survey to collect data from 307 individuals in the United States.

    Results: The proposed model identifies the sources of motivation and pressure for patients in the development of AI-based devices. The results show that technological, ethical (trust factors), and regulatory concerns significantly contribute to the perceived risks of using AI applications in healthcare. Of the three categories, technological concerns (i.e., performance and communication features) are the most significant predictors of risk beliefs.

    Conclusions: This study sheds more light on the factors affecting perceived risks and proposes recommendations on how to practically reduce these concerns. The findings provide implications for research and practice in the area of AI-based CDS. Regulatory agencies, in cooperation with healthcare institutions, should establish normative standards and evaluation guidelines for the implementation and use of AI in healthcare. Regular audits and ongoing monitoring and reporting systems can be used to continuously evaluate the safety, quality, transparency, and ethical factors of AI-based services.

