Preparing healthcare organisations for using artificial intelligence effectively
Ian A. Scott A B *, Anton van der Vegt
Abstract
Healthcare organisations (HCOs) must prepare for large-scale implementation of artificial intelligence (AI)-enabled tools that can demonstrably achieve one or more aims of better care, improved efficiency, enhanced professional and patient experience, and greater equity. Failure to do so may disadvantage patients, staff, and the organisation itself. We outline key strategies Australian HCOs should enact to maximise successful AI implementations: (1) establish transparent and accountable governance structures tasked with ensuring responsible use of AI, including shifting organisational culture towards AI; (2) invest in delivering the human talent, technical infrastructure, and organisational change management that underpin a sustainable AI ecosystem; (3) gain staff and patient trust in using AI tools by virtue of their value to real-world care and minimal threats to patient safety and privacy, existence of reliable governance, provision of appropriate training and opportunity for user co-design, transparency in AI tool use and consent, and retention of user agency in responding to AI-generated advice; (4) establish risk assessment and mitigation processes that assign AI tools to unacceptable, high, medium, or low risk tiers based on task criticality and the rigour of performance evaluations, and monitor and respond to any adverse impacts on patient outcomes; and (5) determine when and how liability for patient harm associated with a specific AI tool rests with, or is shared between, staff, developers, and the deploying HCO itself. In realising the benefits of AI, HCOs must build the necessary AI infrastructure, literacy, and cultural adaptation through foresighted planning and procurement of resources.
Keywords: artificial intelligence, governance, healthcare organisation, investment, liability, preparedness, risk, trust.
Introduction
Artificial Intelligence (AI)-enabled tools, including generative AI, can potentially revolutionise health care.1 Despite few AI tool implementations in Australian healthcare settings to date,2 healthcare organisations (HCOs) must prepare for wider-scale adoption or risk disadvantaging their patients, staff, and the organisation itself. We outline key strategies HCOs should enact to maximise successful AI implementations.
Establish artificial intelligence governance structures
A multidisciplinary governance group capable of performing functions that underpin responsible use of AI must be established (Box 1), comprising clinicians, data scientists, information technology (IT) personnel, managers, ethicists, legal experts, and health consumers,3 with a chief health AI officer appointed as chair.4 The group must have appropriate skills for the tasks required (Box 2), deeply understand why AI implementations fail (Box 3), and proactively monitor performance and impacts of deployed AI tools over their life cycle rather than ‘set and forget.’5–7 A key task is achieving an organisational cultural shift towards seeing AI as a service delivery enhancer, job creator, and skill-set amplifier, rather than a threat that will displace clinical, managerial, or administrative staff.8 Without strong governance, HCOs face reputational loss, legal and ethical liabilities, and workforce disengagement.
Invest in a sustainable artificial intelligence ecosystem
Adopting AI at scale requires long-term investment in finance and human capital, technical infrastructure, and continuous organisational learning.9,10 Investment is needed to: access and ingest high-quality data for model training; acquire software; upgrade servers; use cloud computing; establish AI laboratories for tool development and testing; hire clinical and technical experts; contract with vendors; and deliver ongoing workforce training, change management, patient education, and community engagement.11,12
HCOs must strategically select and invest in those AI tools that meaningfully address known deficiencies in current service delivery,13 meet organisational evaluation criteria,14 and satisfy prevailing regulatory standards. Any AI strategy requires clear objectives that support existing HCO goals for achieving better care, improved efficiency, enhanced professional and patient experience, and greater equity.15 Initially selecting high-impact, low-risk, easier-to-implement use cases allows testing of governance and decisional processes while limiting risk, ensures vendor priorities align with HCO needs, and attracts end-user buy-in.16 Both taking on multiple use cases simultaneously and doing nothing in the face of seemingly insurmountable operational challenges should be avoided.17,18 Any investment strategy must remain adaptive to regulatory changes and advances in AI that increase safety and financial and environmental sustainability.19
Gain staff and patient trust in using artificial intelligence tools
Both HCO staff and patients are more likely to use AI tools if six requirements are met.20–24 First, the tool adds value in reliably improving service delivery in real-world settings.25 Second, tool use poses minimal risk to patient safety, privacy, user autonomy, personal liability, or organisational reputation. Third, users know that identifiable governance office-holders are held accountable for responsible use of AI.26 Fourth, HCOs provide both staff and patients, particularly the former, with protected, paid time to train and become proficient in AI,27 and, working with developers, co-produce user-centred, fit-for-purpose AI tools. Fifth, HCOs must be transparent in ensuring staff and patients are aware of when, how, and why AI tools are being used, obtain consent, and communicate who has access to any data collected through the tool.28 For high-stakes decision-making, explanations or rationales for AI outputs should be provided, wherever possible, in formats interpretable by both parties. Finally, staff and patients must retain agency in being able to override AI advice if perceived as wrong or inappropriate, and provide feedback to the governance group on such instances.29,30
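As a simple illustration of this final requirement, the sketch below shows one hypothetical way an HCO might capture clinician overrides of AI advice for review by the governance group. It is a minimal sketch only; the record fields, identifiers, and file format are assumptions for illustration rather than a prescribed schema.

```python
# Minimal sketch of an override-feedback record, assuming a hypothetical
# governance log kept by the deploying HCO; all field names are illustrative.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class OverrideEvent:
    tool_id: str            # identifier of the AI tool whose advice was overridden
    clinician_id: str       # de-identified user code, not a real staff identifier
    ai_recommendation: str  # summary of the advice produced by the tool
    action_taken: str       # what the clinician actually did instead
    reason: str             # free-text rationale, reviewed by the governance group
    timestamp: str = ""

    def __post_init__(self):
        if not self.timestamp:
            self.timestamp = datetime.now(timezone.utc).isoformat()

def log_override(event: OverrideEvent, path: str = "override_log.jsonl") -> None:
    """Append one override event to a local JSON-lines file for later audit."""
    with open(path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(asdict(event)) + "\n")

# Example usage (hypothetical values)
log_override(OverrideEvent(
    tool_id="sepsis-risk-v2",
    clinician_id="user-0421",
    ai_recommendation="Escalate to sepsis pathway",
    action_taken="Continued routine observation",
    reason="Elevated lactate explained by recent seizure, not infection",
))
```

Routinely reviewing such records would allow the governance group to distinguish appropriate clinical overrides from patterns suggesting tool miscalibration or inadequate user training.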
Establish robust risk management processes
HCOs must implement risk assessment and mitigation processes31 aligned with government frameworks and regulatory standards (Box 4). Risk-tiered frameworks may delineate unacceptable, high-, medium-, and low-risk tools, wherein totally autonomous tools in patient-sensitive domains may be deemed unacceptable, medium- to high-risk clinical decision support tools require high-quality evidence of benefit (ideally pragmatic clinical trials using real-time data),32 and low-risk administrative support tools require only pre-post observational studies. Medium- to high-risk tools will require robust data governance protocols, validation on local populations, disclosure of capabilities to users, regulatory approval, and human oversight standards.33 HCOs will need to regularly assess the impacts of AI tools on patient outcomes and system-wide quality and safety,34 define criteria for revising or decommissioning an AI tool,35 and designate personnel to switch off an AI tool when necessary and manage organisational functions that have become dependent on that tool.
Box 4. AI risk management frameworks
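To make the risk-tiering described above concrete, the following minimal sketch illustrates how task type and degree of autonomy might map to a risk tier, and how each tier might map to a minimum evidence standard. The category labels, default tiers, and evidence mappings are illustrative assumptions only and do not represent any specific regulatory framework.

```python
# Illustrative risk-tiering rule; cut-offs and labels are hypothetical and
# would be set and refined by the HCO's AI governance group.
from enum import Enum

class Tier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    MEDIUM = "medium"
    LOW = "low"

def classify_tool(task: str, fully_autonomous: bool) -> Tier:
    """task: 'clinical_decision_support' or 'administrative_support' (illustrative labels only)."""
    if fully_autonomous and task == "clinical_decision_support":
        return Tier.UNACCEPTABLE  # fully autonomous operation in a patient-sensitive domain
    if task == "clinical_decision_support":
        return Tier.HIGH          # conservative default; governance may re-grade to MEDIUM
    return Tier.LOW               # administrative support tool

# Minimum evaluation evidence expected per tier, paraphrasing the tiers described above
MINIMUM_EVIDENCE = {
    Tier.UNACCEPTABLE: "do not deploy",
    Tier.HIGH: "pragmatic clinical trial using real-time data",
    Tier.MEDIUM: "pragmatic clinical trial or equivalent prospective validation",
    Tier.LOW: "pre-post observational study",
}

tier = classify_tool("clinical_decision_support", fully_autonomous=False)
print(tier, "->", MINIMUM_EVIDENCE[tier])  # Tier.HIGH -> pragmatic clinical trial using real-time data
```

In practice, a governance group would extend such rules with further dimensions (e.g. reversibility of harm, population vulnerability) before adopting them.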
Determine liability for artificial intelligence-induced patient harm
HCOs must determine when liability for AI-related patient harm rests with, or is shared between, staff, developers, and the deploying HCO itself.36–40 Given the infancy of healthcare AI, case law in Australia and overseas that could inform guidance on liability is, to date, virtually absent (see Box 5 for an example). For now, prime responsibility will probably remain with staff (and their HCO through vicarious liability provisions), with claims arbitrated by both AI and domain experts. This applies particularly where AI developers/manufacturers rely on, and regulators accept, liability exclusion clauses designating their tools as only assistive and always requiring ‘human in the loop’ judgement.
Box 5. Liability considerations for AI tools
A challenging case
Patient A has been diagnosed with cancer B, for which a ‘black box’ AI tool recommends a treatment highly personalised to Patient A, given all the data it has relied on, while noting potential contraindications. Based on experience with similar cases, clinician X expects this treatment will likely confer benefit, but does not fully understand, and is unable to articulate in meaningful terms to the patient, how the AI tool arrived at its advice.
Does the clinician act on the advice of the AI tool and invoke therapeutic privilege, or admit the inability to adequately explain the decision and risk not receiving patient consent for what could well be life-preserving treatment?
Current legal approach to determining liability
At the outset, the clinician should ensure the AI tool has received regulatory approval based on its intended use which, if present, markedly offsets clinician liability. The tool manufacturer is obligated to provide information, comprehensible to the clinician as its primary user, that explains the correct use of the tool, how to interpret its outputs, and any context-related limits to its performance.
Proving liability for an act of negligence through use of an unapproved tool, or an approved tool outside of its intended use indications, requires three criteria to be met: (1) the clinician (or employing institution) owes the patient a duty of care; (2) that duty has been breached by failing to meet the requisite standard of care; and (3) the patient has suffered damage because of that breach of duty.A The clinician has a duty to warn a patient of a material risk inherent in the proposed treatment; a risk is material if, given the circumstances of the case, a reasonable person in the patient’s position, if warned of the risk, would likely attach significance to it, or if the clinician is or should reasonably be aware that the particular patient, if warned of the risk, would likely attach significance to it.B
However, liability law recognises therapeutic privilege, whereby a clinician does not breach their duty of care if it is established they acted in a way, at the time care was provided, that was widely accepted by a significant number of respected peer clinicians as being competent professional practice.C
Patient consent and AI transparency
In using AI to inform care, transparency in obtaining patient consent has to account for what explanations clinicians are reasonably able to provide, and patients can reasonably understand, about how ‘black box’ AI tools work and produce their results.36 Comprehensible explanations should be provided whenever possible, although the attainable level of understanding for both parties will vary from case to case depending on the clinical context, tool complexity, and participant capabilities.
AI developers and deploying HCOs must take steps to provide the assistance clinicians require, as the primary tool users, in each specific situation. While all patients should receive a broad overview, a hardline insistence on providing detailed ‘under-the-hood’ explanations of how an AI tool works could overwhelm patients with incomprehensible technical detail that may disadvantage their care by causing confusion and anxiety, and possibly impairing the judgement of their clinicians.37 Demanding full explainability is not feasible, may be misleading to both parties, and is not applied to other domains in medicine.38,39 For example, clinicians do not necessarily need to understand the specific mechanism of action of a particular drug in order to responsibly prescribe it to a patient; instead, they draw from evidence of efficacy and safety in clinical studies and an understanding of the patient populations and circumstances in which the drug was evaluated (e.g. stage in disease trajectory, comorbidities, potential side effects, etc.).39
Accordingly, what patients should be fully informed of, as far as possible, are the benefits, risks, and mitigations of specific tools in issuing advice, as determined by rigorous clinical validation studies of their accuracy and outcome impacts, and transparency around AI tool development and quality assurance processes.40 Moreover, the tool output, by itself, should never be the only factor influencing decision-making; the totality of data available to the clinician, contextualised to the patient’s specific situation (care goals, values and preferences, risk factors, demographics), must be considered.39,40
Clinicians using AI should consider what would be accepted as reasonable decisions under the law, including if AI was not used at all. Courts are unlikely to simply ask ‘was the AI tool approved for this type of use and, if so, did you follow its advice?’. A more nuanced approach to shared decision-making and deep engagement with patients, when done properly, underpins the social licence for AI use, informs standards around its operations, governance and ethics, and promotes trust, health literacy, and information sharing. |
Any AI tool directly influencing clinical care is subject to approval by the Therapeutic Goods Administration. In addition, poorly designed or malfunctioning tools may expose developers to the negligence provisions of Australian consumer law applicable to all consumer products.41 Appropriately trained staff who knowingly misuse tools, or who unjustifiably reject accurate AI outputs or accept inaccurate ones, will be liable; conversely, failing to provide the governance, infrastructure, or user training required for safe tool use renders the HCO liable. Users and their HCO will also assume greater liability if an approved and deployed tool degrades in performance over time due to data shifts and this degradation goes undetected because the tool is not monitored or audited.42
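As an illustration of the monitoring and auditing referred to above, the following minimal sketch shows one hypothetical periodic check comparing a deployed tool's recent accuracy with the accuracy recorded at deployment. The metric, threshold, and function names are assumptions chosen for brevity; a real audit would use the tool's validated performance measures, appropriate sample sizes, and subgroup analyses.

```python
# Minimal sketch of routine performance monitoring for a deployed tool, assuming
# the HCO periodically collects predictions alongside ground-truth outcomes; the
# baseline value and alert margin are hypothetical and would be set at deployment.
from statistics import mean

def check_for_degradation(recent_scores: list[float],
                          recent_labels: list[int],
                          baseline_accuracy: float,
                          alert_margin: float = 0.05) -> bool:
    """Return True if accuracy on the recent audit window has fallen more than
    `alert_margin` below the accuracy recorded at deployment."""
    predictions = [1 if s >= 0.5 else 0 for s in recent_scores]
    current_accuracy = mean(1 if p == y else 0 for p, y in zip(predictions, recent_labels))
    return current_accuracy < baseline_accuracy - alert_margin

# Example audit: flag the tool for governance review if it has drifted.
if check_for_degradation(recent_scores=[0.9, 0.2, 0.7, 0.4],
                         recent_labels=[1, 0, 0, 1],
                         baseline_accuracy=0.85):
    print("Alert: performance below deployment baseline; escalate to AI governance group.")
```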
In conclusion, realising the benefits of AI requires HCOs to undertake significant foundational work in establishing governance, infrastructure, training, and cultural adaptation. HCOs that approach AI with appropriate expectations, planning, and resources greatly improve their odds of successful implementation.
Data availability
Data sharing is not applicable as no new data were generated or analysed during this study.
Disclaimer
The views expressed in this publication are those of the author(s) and do not necessarily represent those of, and should not be attributed to, the publisher, the journal owner, or CSIRO.
Author contributions
IAS: conceptualised the article, reviewed relevant literature, and drafted the manuscript. AVV: provided comments on technical infrastructure, data governance, and regulatory approval. SC: provided comments on organisational governance and investment strategy. PN: provided comments on medicolegal liability. KP: provided comments on patient and consumer perspectives towards AI. All authors reviewed the final manuscript and approved its submission.
References
1 Haug CJ, Drazen JM. Artificial intelligence and machine learning in clinical medicine, 2023. N Engl J Med 2023; 388(13): 1201-1208.
2 Janssen AB, Kavisha S, Johnson A, Marinic A, Teede H, Shaw T. Implementation of artificial intelligence tools in Australian healthcare organisations: Environmental scan findings. Stud Health Technol Inform 2024; 310: 1136-1140.
3 Apfelbacher T, Koçman SE, Prokosch HU, Christoph J. A governance framework for the implementation and operation of AI tools in a university hospital. Stud Health Technol Inform 2024; 316: 776-780.
4 Beecy AN, Longhurst CA, Singh K, et al. The Chief Health AI Officer — An emerging role for an emerging technology. NEJM AI 2024; 1(7): AIp2400109.
5 Hassan M, Borycki EM, Kushniruk AW. Artificial intelligence governance framework for healthcare. Healthc Manage Forum 2025; 38(2): 125-130.
6 Scott IA, Abdel-Hafez A, Barras M, Canaris S. What is needed to mainstream artificial intelligence in health care. Aust Health Rev 2021; 45: 591-596.
7 Loufek B, Vidal D, McClintock DS, et al. Embedding internal accountability into health care institutions for safe, effective, and ethical implementation of artificial intelligence into medical practice: A Mayo Clinic case study. Mayo Clin Proc Digit Health 2024; 2(4): 574-583.
8 Alves M, Seringa J, Silvestre T, Magalhães T. Use of Artificial Intelligence tools in supporting decision-making in hospital management. BMC Health Serv Res 2024; 24(1): 1282.
9 Esmaeilzadeh P. Challenges and strategies for wide-scale artificial intelligence (AI) deployment in healthcare practices: a perspective for healthcare organizations. Artif Intell Med 2024; 151: 102861.
10 Roppelt JS, Kanbach DK, Kraus S. Artificial intelligence in healthcare institutions: a systematic literature review on influencing factors. Technol Soc 2024; 76: 102443.
11 Callahan A, Ashley E, Datta S, et al. The Stanford Medicine data science ecosystem for clinical and translational research. JAMIA Open 2023; 6: ooad054.
12 Corbin CK, Maclay R, Acharya A, et al. DEPLOYR: a technical framework for deploying custom real-time machine learning models into the electronic medical record. J Am Med Inform Assoc 2023; 30: 1532-1542.
13 Stretton B, Koovor JG, Hains L, et al. How will the artificial intelligence algorithm work within the constraints of this healthcare system? Intern Med J 2024; 54(1): 190-191.
14 Economou-Zavlanos NJ, Bessias S, Cary MP, Jr, et al. Translating ethical and quality principles for the effective, safe and fair development, deployment and use of artificial intelligence technologies in healthcare. J Am Med Inform Assoc 2024; 31(3): 705-713.
15 Nundy S, Cooper LA, Mate KS. The Quintuple Aim for health care improvement: A new imperative to advance health equity. JAMA 2022; 327(6): 521-522.
16 Davenport T, Bean R. Clinical AI gets the headlines, but administrative AI may be a better bet. MIT Sloan Manag Rev. 2022. Available at https://sloanreview.mit.edu/article/clinical-ai-gets-the-headlines-but-administrative-ai-may-be-a-better-bet/ [accessed 10 April].
17 Hassan M, Kushniruk A, Borycki E. Barriers to and facilitators of artificial intelligence adoption in health care: scoping review. JMIR Hum Factors 2024; 11: e48633.
18 Alami H, Lehoux P, Papoutsi C, et al. Understanding the integration of artificial intelligence in healthcare organisations and systems through the NASSS framework: a qualitative study in a leading Canadian academic centre. BMC Health Serv Res 2024; 24: 701.
19 Gonzalez A, Crowell T, Lin SY. AI Code of Conduct—Safety, inclusivity, and sustainability. JAMA Intern Med 2025; 185(1): 12-13.
20 Kinney M, Anastasiadou M, Naranjo-Zolotov M, Santos V. Expectation management in AI: A framework for understanding stakeholder trust and acceptance of artificial intelligence systems. Heliyon 2024; 10(7): e28562.
21 Lahusen C, Maggetti M, Slavkovik M. Trust, trustworthiness and AI governance. Sci Rep 2024; 14(1): 20752.
22 Isaacks DB, Borkowski AA. Implementing trustworthy AI in VA high reliability health care organizations. Fed Pract 2024; 41(2): 40-43.
23 Kim M, Sohn H, Choi S, Kim S. Requirements for trustworthy artificial intelligence and its tool in healthcare. Healthc Inform Res 2023; 29(4): 315-322.
24 Bergquist M, Rolandsson B, Gryska E, Laesser M, Hoefling N, Heckemann R, Schneiderman JF, Björkman-Burtscher IM. Trust and stakeholder perspectives on the implementation of AI tools in clinical radiology. Eur Radiol 2024; 34(1): 338-347.
25 Hennrich J, Ritz E, Hofmann P, Urbach N. Capturing artificial intelligence tools’ value proposition in healthcare: a qualitative research study. BMC Health Serv Res 2024; 24(1): 420.
26 Mahmood U, Shukla-Dave A, Chan HP, et al. Artificial intelligence in medicine: mitigating risks and maximizing benefits via quality assurance, quality control, and acceptance testing. BJR Artif Intell 2024; 1(1): ubae003.
27 Gazquez-Garcia J, Sánchez-Bocanegra CL, Sevillano JL. AI in the health sector: Systematic review of key skills for future health professionals. JMIR Med Educ 2025; 11: e58161.
28 Rose SL, Shapiro D. An ethically supported framework for determining patient notification and informed consent practices when using artificial intelligence in health care. Chest 2024; 166(3): 572-578.
29 Cavalcante Siebert L, Lupetti ML, Aizenberg E, et al. Meaningful human control: actionable properties for AI system development. AI Ethics 2023; 3: 241-255.
30 Khanijahani A, Iezadi S, Dudley S, et al. Organizational, professional, and patient characteristics associated with artificial intelligence adoption in healthcare: a systematic review. Health Policy Technol 2022; 11(1): 100602.
31 Ranjbar A, Mork EW, Ravn J, et al. Managing risk and quality of AI in healthcare: Are hospitals ready for implementation? Risk Manag Healthc Policy 2024; 17: 877-882.
32 Jin MF, Noseworthy PA, Yao X. Assessing artificial intelligence solution effectiveness: The role of pragmatic trials. Mayo Clin Proc Digit Health 2024; 2(4): 499-510.
33 Dixit A, Quaglietta J, Gaulton C. Preparing for the future: how organizations can prepare boards, leaders, and risk managers for artificial intelligence. Healthc Manage Forum 2021; 34: 346-352.
34 Ratwani RM, Bates DW, Classen DC. Patient safety and artificial intelligence in clinical care. JAMA Health Forum 2024; 5(2): e235514.
35 Ansari S, Baur B, Singh K, Admon AJ. Challenges in the postmarket surveillance of clinical prediction models. NEJM AI 2025; 2(5): AIp2401116.
36 Nolan P, Matulionyte R. Artificial Intelligence in medicine: Issues when determining negligence. J Law Med 2023; 30: 593-615.
37 Nolan P. Artificial intelligence in medicine – is too much transparency a good thing? Med Leg J 2023; 91(4): 193-197.
38 Mello MM, Guha N. Understanding liability risk from using health care artificial intelligence tools. N Engl J Med 2024; 390(3): 271-278.
39 McCradden MD, Stedman I. Explaining decisions without explainability? Artificial intelligence and medicolegal accountability. Future Healthc J 2024; 11: 100171.
40 Matulionyte R, Nolan P, Magrabi F, Beheshti A. Should AI-enabled medical devices be explainable? Int J Law Inform Technol 2022; 30(2): 151-180.
42 Liu X, Glocker B, McCradden MM, et al. The medical algorithmic audit. Lancet Digit Health 2022; 4(5): e384-e397.