Australian Health Review
Journal of the Australian Healthcare & Hospitals Association
RESEARCH ARTICLE (Open Access)

Preparing healthcare organisations for using artificial intelligence effectively

Ian A. Scott A B *, Anton van der Vegt C (https://orcid.org/0000-0001-5642-5188), Stephen Canaris A, Paul Nolan D and Keren Pointon B

A Digital Health and Informatics, Metro South Hospital and Health Service, Woolloongabba, Brisbane, Qld, Australia. Email: stephen.canaris@health.qld.gov.au

B Queensland Digital Health Centre (QDHeC), Centre for Health Services Research, The University of Queensland, Qld, Australia. Email: k.pointon@uq.edu.au

C Centre for Health Services Research, The University of Queensland, Herston, Brisbane, Qld, Australia. Email: a.vandervegt@uq.edu.au

D Australian Bar Association, Selborne Chambers, Sydney, NSW, Australia. Email: paulnolan@barristerchambers.com.au

* Correspondence to: ian.scott@health.qld.gov.au

Australian Health Review 49, AH25102 https://doi.org/10.1071/AH25102
Submitted: 17 May 2025  Accepted: 2 July 2025  Published: 28 July 2025

© 2025 The Author(s) (or their employer(s)). Published by CSIRO Publishing on behalf of AHHA. This is an open access article distributed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND)

Abstract

Healthcare organisations (HCOs) must prepare for large-scale implementation of artificial intelligence (AI)-enabled tools that can demonstrably achieve one or more aims of better care, improved efficiency, enhanced professional and patient experience, and greater equity. Failure to do so may disadvantage patients, staff, and the organisation itself. We outline key strategies Australian HCOs should enact in maximising successful AI implementations: (1) establish transparent and accountable governance structures tasked to ensure responsible use of AI, including shifting organisational culture towards AI; (2) invest in delivering the human talent, technical infrastructure, and organisational change management that underpin a sustainable AI ecosystem; (3) gain staff and patient trust in using AI tools by virtue of their value to real world care and minimal threats to patient safety and privacy, existence of reliable governance, provision of appropriate training and opportunity for user co-design, transparency in AI tool use and consent, and retention of user agency in responding to AI generated advice; (4) establish risk assessment and mitigation processes that delineate unacceptable, high, medium, and low risk AI tools, based on task criticality and rigour of performance evaluations, and monitor and respond to any adverse impacts on patient outcomes; and (5) determine when and how liability for patient harm associated with a specific AI tool rests with, or is shared between, staff, developers, and the deploying HCO itself. In realising the benefits of AI, HCOs must build the necessary AI infrastructure, literacy, and cultural adaptation with foresighted planning and procurement of resources.

Keywords: artificial intelligence, governance, healthcare organisation, investment, liability, preparedness, risk, trust.

Introduction

Artificial Intelligence (AI)-enabled tools, including generative AI, can potentially revolutionise health care.1 Despite few AI tool implementations in Australian healthcare settings to date,2 healthcare organisations (HCOs) must prepare for wider-scale adoption or risk disadvantaging their patients, staff, and the organisation itself. We outline key strategies HCOs should enact to maximise successful AI implementations.

Establish artificial intelligence governance structures

A multidisciplinary governance group capable of performing functions that underpin responsible use of AI must be established (Box 1), comprising clinicians, data scientists, information technology (IT) personnel, managers, ethicists, legal experts, and health consumers,3 and a chief health AI officer appointed as chair.4 The group must have appropriate skills for the tasks required (Box 2), deeply understand why AI implementations fail (Box 3), and proactively monitor performance and impacts of deployed AI tools over their life cycle rather than ‘set and forget’.5–7 A key task is shifting organisational culture to see AI as a service delivery enhancer, job creator, and skill-set amplifier rather than a threat to displace clinical, managerial, or administrative staff.8 Without strong governance, HCOs face reputational loss, legal and ethical liabilities, and workforce disengagement.

Box 1. Responsibilities of HCO AI governance groups
  • Oversee the development and endorsement of an AI strategic plan that is aligned with overall HCO strategic goals and objectives.

  • Draft and approve AI governance policies and procedures relating to:

    • Data governance that ensures appropriate use of data repositories (e.g. data collection, storage, access, security, sharing) and aims to improve data quality and interoperability.

    • Selection and prioritisation of AI tools for development and/or deployment based on HCO priorities and capability.

    • Formalised processes that identify and mitigate risk associated with AI tools including bias, privacy, cybersecurity, ethical and legal issues, and clinical safety concerns.

    • Roles, responsibilities, and accountabilities of each stakeholder group (clinical, informatics, vendor, procurement, financial, legal) in AI tool deployment and monitoring.

  • Undertake a multi-axis assessment of HCO culture, capability, and preparedness for using AI, including staff and patient surveys.

  • Optimise data systems and digital infrastructure necessary for AI tools to operate effectively and safely.

  • Approve and oversee deployment of AI tools that meet all legal, regulatory, and governance requirements, ensure all stakeholders likely to be influenced by the tool have had input into approval decisions, and ensure tools are financially and environmentally sustainable and fully aligned with HCO values and goals.

  • Maintain a registry of ‘in progress’ and ‘deployed’ AI tools and implement procedures that track deployment date, current version, responsible personnel, last review date, authorised users and purpose, source of data used to train and test the AI tool, and results of local validation studies and performance comparisons with current care (a minimal registry-entry schema is sketched after this box).

  • Maintain and regularly review real-time data about which staff are using AI tools, what type of tools are being used and how, and the clinical and non-clinical functions being served.

  • Train the workforce in how to use AI tools appropriately, and provide the necessary ‘at the elbow’ technical support.

  • Implement patient awareness and consenting procedures as necessary to preserve patient autonomy and choice in care decisions.

  • Implement safety monitoring and incident management protocols that quickly identify AI tool errors or mishaps and take corrective actions.

  • Oversee and approve contractual, procurement, and intellectual property arrangements with commercial or external vendors.

  • Provide leadership and advocacy in building a culture of HCO engagement in AI.
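The registry described in Box 1 (the ‘Maintain a registry’ item) amounts to a structured record kept for each AI tool. The sketch below (Python; a minimal illustration with hypothetical field names, not a mandated schema) shows how the listed attributes might be captured:

    # Illustrative only: one registry record per AI tool, mirroring the
    # attributes listed in Box 1 (field names are hypothetical).
    from dataclasses import dataclass
    from datetime import date
    from typing import List, Optional

    @dataclass
    class AIToolRegistryEntry:
        tool_name: str
        status: str                          # 'in progress' or 'deployed'
        deployment_date: Optional[date]
        current_version: str
        responsible_personnel: List[str]
        last_review_date: Optional[date]
        authorised_users: List[str]
        authorised_purpose: str
        training_and_test_data_source: str   # provenance of training and test data
        local_validation_results: str        # summary of local validation studies
        comparison_with_current_care: str    # performance relative to current care

However the registry is implemented, the essential point is that each tool has a single, regularly updated record that governance staff can query, audit, and reconcile against real-time usage data.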

Box 2. Essential skills for AI governance teams
  • Can explain how AI technologies work: Understand AI methods, use cases, and underlying data structures, and be able to translate technical jargon and the basic concepts of how AI technologies work for a broad, mostly non-technical stakeholder audience, including senior executives, while being able to go into more detail as required.

  • Can establish and manage an AI governance program: Gaining the confidence of senior executives in how AI projects will be responsibly governed and executed is key to securing their support for organisational change and procuring the required resources. While governance for AI must respect existing operational and compliance procedures relating to digital technologies, these procedures will need to be revised over time to better support AI implementation. Having senior executives championing AI minimises the risk of AI failures due to fragmented ownership and fuzzy accountability.

  • Understand the organisational structure and how work really gets done: Knowing where and by whom current clinical and non-clinical functions are performed within the organisation is critically important in pulling together the multidisciplinary teams best suited for specific AI projects and identifying who is accountable for which task in the pipeline of AI tool development and deployment. For example, a framework that robustly protects data privacy but fails to identify contractual risks around cost or intellectual property rights will lead to AI project failures.

  • Solution and outcome-focused: It is critical that AI governance is solution and outcome-focused, and supports the organisation in overcoming bureaucratic and procedural challenges posed by laws, regulations, and commercial contracts which would otherwise fragment and hinder AI initiatives. Performance measures, incentives, good messaging and communications, and staff training and support are all required to enable successful innovation.

Box 3. Factors that predispose to AI implementation failures
  • Unrealistic expectations: Vendor presentations and anecdotal ‘success’ stories may obscure the considerable data acquisition and preparation, infrastructure requirements, talent needs, and organisational change management that make success possible.

  • Failing to do the groundwork: HCOs should not attempt to implement advanced AI capabilities before establishing basic data infrastructure and mastering data warehousing, business intelligence, and traditional analytics.

  • Poor governance structures: HCOs must establish transparent and accountable AI governance structures that clearly define and communicate who owns specific AI projects and who makes decisions when trade-offs arise between speed, cost, and quality, in order to avoid AI projects drifting into ambiguity and eventual failure.

  • Poor data quality: AI tools, including large language models (LLMs), are essentially big data processing machines, so HCOs must ensure data sources can provide sufficient quantity and quality of data for AI tools to function effectively; otherwise they risk ‘garbage in, garbage out’.

  • Treating AI as a purely technical solution: Implementing any AI tool should never be seen as just a technical challenge but rather a socio-technical one requiring human adoption and integration whereby end-users, including patients, must be involved in designing and testing AI tools, and their legitimate concerns about how the tools affect their roles, workflows, and practice norms must be addressed.

  • Ambiguous value proposition: HCOs must ensure that any AI tool is directly linked to a genuine organisational problem, comes with specific, measurable service delivery outcomes, and avoids simply layering AI solutions on top of already dysfunctional or burdensome workflows.

  • Disconnect between AI developers and users: Having developers in different domains independently developing AI tools with no coordination between them, or no involvement of users, risks self-defeating competition for limited resources, irrelevant or duplicative efforts, incompatible systems, and eventually, project cancellations.

Invest in a sustainable artificial intelligence ecosystem

Adopting AI at scale requires long-term investment in financial and human capital, technical infrastructure, and continuous organisational learning.9,10 Investment is needed to: access and ingest high-quality data for model training; acquire software; upgrade servers; use cloud computing; establish AI laboratories for tool development and testing; hire clinical and technical experts; contract with vendors; and deliver ongoing workforce training, change management, patient education, and community engagement.11,12

HCOs must strategically select and invest in those AI tools that meaningfully address known deficiencies in current service delivery,13 meet organisational evaluation criteria,14 and satisfy prevailing regulatory standards. Any AI strategy requires clear objectives that support existing HCO goals for achieving better care, improved efficiency, enhanced professional and patient experience, and greater equity.15 Initially selecting high impact, low risk, easier to implement use cases allows testing of governance and decisional processes while limiting risk, ensures vendor priorities align with HCO needs, and attracts end-user buy-in.16 Taking on multiple use cases simultaneously or doing nothing because of seemingly insurmountable operational challenges should be avoided.17,18 Any investment strategy must remain adaptive to regulatory changes and advances in AI that increase safety and financial and environmental sustainability.19

Gain staff and patient trust in using artificial intelligence tools

Both HCO staff and patients will be more likely to use AI tools if six requirements are met.20–24 First, the tool adds value in reliably improving service delivery in real-world settings.25 Second, tool use poses minimal risk to patient safety, privacy, user autonomy, personal liability, or organisational reputation. Third, users know identifiable governance office-holders are being held accountable for responsible use of AI.26 Fourth, HCOs provide both staff and patients, particularly the former, with protected, paid time to train and become proficient in AI,27 and, working with developers, co-produce user-centred, fit-for-purpose AI tools. Fifth, HCOs must be transparent in ensuring staff and patients are aware of when, how, and why AI tools are being used, obtain consent, and communicate who has access to any data collected through the tool.28 For high-stakes decision-making, explanations or rationales for AI outputs should be provided, wherever possible, in formats interpretable by both parties. Finally, staff and patients must retain agency to override AI advice if it is perceived as wrong or inappropriate, and to provide feedback to the governance group on such instances.29,30

Establish robust risk management processes

HCOs must implement risk assessment and mitigation processes31 aligned with government frameworks and regulatory standards (Box 4). Risk-tiered frameworks may delineate unacceptable, high, medium, and low risk tools, wherein totally autonomous tools in patient-sensitive domains may be deemed unacceptable, medium to high risk clinical decision support tools require high quality evidence of benefit (ideally pragmatic clinical trials using real-time data),32 and low risk administrative support tools only require pre-post observational studies. Medium to high risk tools will require robust data governance protocols, validation on local populations, disclosure of capabilities to users, regulatory approval, and human oversight standards.33 HCOs will need to regularly assess the impacts of AI tools on patient outcomes and system-wide quality and safety,34 decide criteria for revising or decommissioning an AI tool,35 and designate personnel to switch off an AI tool when necessary and manage organisational functions that have become dependent on that tool.
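To make this tiering concrete, the sketch below (Python; the criteria and labels are illustrative assumptions only, and an operational version would be defined by the HCO governance group in line with regulatory requirements) shows how a tool might be assigned a tier from its autonomy, task criticality, and evidence base:

    # Illustrative only: assign a risk tier from task criticality and evidence
    # rigour, following the tiering described in the text above.
    def assign_risk_tier(fully_autonomous: bool,
                         patient_sensitive_domain: bool,
                         informs_clinical_decisions: bool,
                         has_pragmatic_trial_evidence: bool) -> str:
        # Fully autonomous tools in patient-sensitive domains may be deemed unacceptable.
        if fully_autonomous and patient_sensitive_domain:
            return "unacceptable"
        # Clinical decision support is medium to high risk; high-quality (ideally
        # pragmatic trial) evidence of benefit is expected before routine use.
        if informs_clinical_decisions:
            return "medium" if has_pragmatic_trial_evidence else "high"
        # Administrative support tools are low risk; pre-post observational studies suffice.
        return "low"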

Box 4. AI risk management frameworks
  • The Queensland Government adopted the FAIRA framework (Foundational AI Risk Assessment) in September 2024, updated in January 2025 (available at: Foundational artificial intelligence risk assessment guideline | For government | Queensland Government), as a means of ensuring transparency, accountability, and risk identification for AI systems within its jurisdiction.

  • The NSW Government has adopted a self-assessed AI Assessment Framework (AIAF) for its various agencies to ensure responsible design, development, deployment, procurement, and use of AI technologies (available at: NSW Artificial Intelligence Assessment Framework | Digital NSW). The framework is structured around the principles of community benefit, fairness, privacy/security, transparency, and accountability.

  • The South Australian Government, in August 2024, released a risk assessment and mitigation guideline specifically for generative AI and LLM tools. It covers data confidentiality, integrity and accuracy, legality and privacy, and ethics and fairness, together with a list of dos and don’ts in LLM tool use. It also mandates that no data be entered into consumer-facing, open-source tools (available at: Guideline-13.1-Use-of-Large-Language-Model-AI-Tools-Utilities.pdf).

  • The Commonwealth Scientific and Industrial Research Organisation (CSIRO), in early 2023, after mapping 16 existing AI risk assessment frameworks and identifying their deficiencies, derived a concrete and connected risk assessment (C2AIRA) framework (available at: https://lnkd.in/dWaTMQk7?trk=public_post_reshare-text). With the advent of generative AI, QB4AIRA was constructed: a novel bank of questions refined from those of five globally recognised AI risk frameworks, categorised according to Australia’s eight AI ethics principles, and comprising 293 prioritised questions covering a wide range of AI risk areas (available at: Question Bank for Safe and Responsible AI Risk Assessment – Software Systems).

  • The International Organization for Standardization (ISO) 42001 AI Risk Management standard (available at: https://www.iso.org/standard/81230.html) requires organisations to establish a process to assess and document potential consequences that may result from an AI system throughout its life cycle, specifically how it affects: the legal position or life opportunities of individuals; their physical or psychological well-being; and universal human rights. Organisations should document the following: intended use and reasonably foreseeable misuse of AI systems; positive and negative impacts on relevant individuals; predictable failures, potential impacts, and mitigation measures; demographic groups the system applies to; system complexity; the role of humans, including oversight capabilities; processes and tools for avoiding negative impacts; and AI system resources, including data, tools, computing, and human resources.
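As a minimal sketch of how the documentation required by ISO 42001 might be captured in practice (a hypothetical template expressed in Python; the keys simply mirror the items listed in the bullet above and are not an official ISO schema):

    # Illustrative only: a hypothetical AI system impact assessment record
    # mirroring the documentation items listed above.
    impact_assessment = {
        "intended_use": "",
        "reasonably_foreseeable_misuse": "",
        "positive_impacts_on_individuals": "",
        "negative_impacts_on_individuals": "",
        "predictable_failures_and_mitigations": "",
        "applicable_demographic_groups": "",
        "system_complexity": "",
        "human_oversight_roles_and_capabilities": "",
        "processes_for_avoiding_negative_impacts": "",
        "system_resources": {"data": "", "tools": "", "computing": "", "human": ""},
    }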

Determine liability for artificial intelligence-induced patient harm

HCOs must determine when liability for AI-related patient harm rests with, or is shared between, staff, developers, and the deploying HCO itself.36–40 To date, case law in Australia and overseas that informs guidance on liability is virtually absent given the infancy of healthcare AI (see Box 5 for an example). For now, prime responsibility will probably remain with staff (and their HCO through vicarious liability provisions), with claims arbitrated by both AI and domain experts. This applies particularly where AI developers/manufacturers rely on, and regulators accept, liability exclusion clauses designating their tools as only assistive and always requiring ‘human in the loop’ judgement.

Box 5. Liability considerations for AI tools
A challenging case
Patient A has been diagnosed with cancer B, for which a ‘black box’ AI tool recommends a treatment highly personalised for Patient A, given all the data it has relied on, while noting potential contraindications. Based on experience with similar cases, clinician X expects this treatment will likely confer benefit, but does not fully understand, and is unable to articulate in meaningful terms to the patient, how the AI tool arrived at its advice.
Does the clinician act on the advice of the AI tool and invoke therapeutic privilege, or admit the inability to adequately explain the decision and risk not receiving patient consent for what could well be life-preserving treatment?
Current legal approach to determining liability
At the outset, the clinician should ensure the AI tool has received regulatory approval based on its intended use which, if present, markedly offsets clinician liability. The tool manufacturer is obligated to provide information, comprehensible to the clinician as its primary user, that explains the correct use of the tool, how to interpret its outputs, and any context-related limits to its performance.
Proving liability for an act of negligence through use of an unapproved tool, or an approved tool outside of its intended use indications, requires three criteria to be met: the clinician (or employing institution) owes the patient a duty of care; that duty has been breached by failing to meet the requisite standard of care; and the patient has suffered damage because of that breach of duty.A The clinician has a duty to warn a patient of a material risk inherent in the proposed treatment; a risk is material if, given the circumstances of the case, a reasonable person in the patient’s position, if warned of the risk, would likely attach significance to it, or if the clinician is or should reasonably be aware that the particular patient, if warned of the risk, would likely attach significance to it.B
However, liability law recognises therapeutic privilege whereby a clinician does not breach their duty of care if it is established they acted in a way, at the time care was provided, that was widely accepted by a significant number of respected peer clinicians as being competent professional practice.C
Patient consent and AI transparency
In using AI to inform care, transparency in obtaining patient consent has to account for what explanations clinicians are reasonably able to provide, and patients can reasonably understand, about how ‘black box’ AI tools work and produce their results.36 Comprehensible explanations should be provided whenever possible, although the attainable level of understanding for both parties will vary from case to case depending on the clinical context, tool complexity, and participant capabilities.
AI developers and deploying HCOs must take steps to provide the assistance clinicians require, as the primary tool users, in each specific situation. While all patients should receive a broad overview, a hardline insistence on providing detailed ‘under-the-hood’ explanations of how an AI tool works could overwhelm patients with incomprehensible technical detail that may disadvantage their care by causing confusion and anxiety, and possibly impairing the judgement of their clinicians.37 Demanding full explainability is not feasible, may be misleading to both parties, and is not applied to other domains in medicine.38,39 For example, clinicians do not necessarily need to understand the specific mechanism of action of a particular drug in order to responsibly prescribe it to a patient – they draw instead from evidence of efficacy and safety in clinical studies and an understanding of patient populations and circumstances in which the drug was evaluated (e.g. stage in disease trajectory, comorbidities, potential side effects, etc).39
Accordingly, what patients should be fully informed of, as far as possible, are the benefits, risks, and mitigations of specific tools in issuing advice, as determined by rigorous clinical validation studies of their accuracy and outcome impacts, and transparency around AI tool development and quality assurance processes.40 Moreover, the tool output, by itself, should never be the only factor influencing decision-making; the totality of data available to the clinician, contextualised to the patient’s specific situation (care goals, values and preferences, risk factors, demographics), must be considered.39,40
Clinicians using AI should consider what would be accepted as reasonable decisions under the law, including if AI was not used at all. Courts are unlikely to simply ask ‘was the AI tool approved for this type of use and, if so, did you follow its advice?’. A more nuanced approach to shared decision-making and deep engagement with patients, when done properly, underpins the social licence for AI use, informs standards around its operations, governance and ethics, and promotes trust, health literacy, and information sharing.

Any AI tool directly influencing clinical care is subject to approval by the Therapeutic Goods Administration. In addition, poorly designed or malfunctioning tools may expose developers to negligence provisions of Australian consumer law applicable to all consumer products.41 Appropriately trained staff who knowingly misuse tools, or who unjustifiably reject accurate AI outputs or accept inaccurate ones, will be liable, whereas failing to provide the governance, infrastructure, or user training required for safe tool use renders the HCO liable. Users and their HCO will assume greater liability if an approved and deployed tool degrades in performance over time due to data shifts and this degradation goes undetected because the tool is not monitored or audited.42
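The monitoring and auditing referred to above can be made a routine, scheduled activity. The sketch below (Python with scikit-learn; the metric, comparison window, and threshold are illustrative assumptions rather than standards cited in this article) shows the kind of periodic performance audit that could surface degradation from data shift and flag a tool for governance review:

    # Illustrative only: compare a tool's recent discrimination against its
    # locally validated baseline and flag material degradation for review.
    from sklearn.metrics import roc_auc_score

    def audit_tool_performance(y_true_recent, y_score_recent,
                               baseline_auroc: float,
                               tolerance: float = 0.05) -> dict:
        recent_auroc = roc_auc_score(y_true_recent, y_score_recent)
        degraded = recent_auroc < (baseline_auroc - tolerance)
        return {
            "recent_auroc": round(recent_auroc, 3),
            "baseline_auroc": baseline_auroc,
            "flag_for_governance_review": degraded,  # escalate; consider pausing the tool
        }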

In conclusion, in realising the benefits of AI, HCOs must undertake significant foundational work in establishing governance, infrastructure, training, and cultural adaptation. HCOs that approach AI with appropriate expectations, planning, and resources greatly improve their odds of successful implementation.

Data availability

Data sharing is not applicable as no new data were generated or analysed during this study.

Disclaimer

The views expressed in this publication are those of the author(s) and do not necessarily represent those of, and should not be attributed to, the publisher, the journal owner or CSIRO.

Conflicts of interest

The authors declare no conflicts of interest.

Declaration of funding

This research did not receive any specific funding.

Author contributions

IAS: conceptualised the article, reviewed relevant literature, and drafted the manuscript. AVV: provided comments on technical infrastructure, data governance, and regulatory approval. SC: provided comments on organisational governance and investment strategy. PN: provided comments on medicolegal liability. KP: provided comments on patient and consumer perspectives towards AI. All authors reviewed the final manuscript and approved its submission.

References

1  Haug CJ, Drazen JM. Artificial intelligence and machine learning in clinical medicine, 2023. N Engl J Med 2023; 388(13): 1201-1208.
| Crossref | Google Scholar | PubMed |

2  Janssen AB, Kavisha S, Johnson A, Marinic A, Teede H, Shaw T. Implementation of artificial intelligence tools in Australian healthcare organisations: Environmental scan findings. Stud Health Technol Inform 2024; 310: 1136-1140.
| Crossref | Google Scholar | PubMed |

3  Apfelbacher T, Koçman SE, Prokosch HU, Christoph J. A governance framework for the implementation and operation of AI tools in a university hospital. Stud Health Technol Inform 2024; 316: 776-780.
| Crossref | Google Scholar | PubMed |

4  Beecy AN, Longhurst CA, Singh K, et al. The Chief Health AI Officer — An emerging role for an emerging technology. NEJM AI 2024; 1(7): AIp2400109.
| Crossref | Google Scholar |

5  Hassan M, Borycki EM, Kushniruk AW. Artificial intelligence governance framework for healthcare. Healthc Manage Forum 2025; 38(2): 125-130.
| Crossref | Google Scholar | PubMed |

6  Scott IA, Abdel-Hafez A, Barras M, Canaris S. What is needed to mainstream artificial intelligence in health care. Aust Health Rev 2021; 45: 591-596.
| Crossref | Google Scholar | PubMed |

7  Loufek B, Vidal D, McClintock DS, et al. Embedding internal accountability into health care institutions for safe, effective, and ethical implementation of artificial intelligence into medical practice: A Mayo Clinic case study. Mayo Clin Proc Digit Health 2024; 2(4): 574-583.
| Crossref | Google Scholar | PubMed |

8  Alves M, Seringa J, Silvestre T, Magalhães T. Use of Artificial Intelligence tools in supporting decision-making in hospital management. BMC Health Serv Res 2024; 24(1): 1282.
| Crossref | Google Scholar | PubMed |

9  Esmaeilzadeh P. Challenges and strategies for wide-scale artificial intelligence (AI) deployment in healthcare practices: a perspective for healthcare organizations. Artif Intell Med 2024; 151: 102861.
| Crossref | Google Scholar | PubMed |

10  Roppelt JS, Kanbach DK, Kraus S. Artificial intelligence in healthcare institutions: a systematic literature review on influencing factors. Technol Soc 2024; 76: 102443.
| Crossref | Google Scholar |

11  Callahan A, Ashley E, Datta S, et al. The Stanford Medicine data science ecosystem for clinical and translational research. JAMIA Open 2023; 6: ooad054.
| Crossref | Google Scholar | PubMed |

12  Corbin CK, Maclay R, Acharya A, et al. DEPLOYR: a technical framework for deploying custom real-time machine learning models into the electronic medical record. J Am Med Inform Assoc 2023; 30: 1532-1542.
| Crossref | Google Scholar | PubMed |

13  Stretton B, Koovor JG, Hains L, et al. How will the artificial intelligence algorithm work within the constraints of this healthcare system? Intern Med J 2024; 54(1): 190-191.
| Crossref | Google Scholar | PubMed |

14  Economou-Zavlanos NJ, Bessias S, Cary MP, Jr, et al. Translating ethical and quality principles for the effective, safe and fair development, deployment and use of artificial intelligence technologies in healthcare. J Am Med Inform Assoc 2024; 31(3): 705-713.
| Crossref | Google Scholar | PubMed |

15  Nundy S, Cooper LA, Mate KS. The Quintuple Aim for health care improvement: A new imperative to advance health equity. JAMA 2022; 327(6): 521-522.
| Crossref | Google Scholar | PubMed |

16  Davenport T, Bean R. Clinical AI gets the headlines, but administrative AI may be a better bet. MIT Sloan Manag Rev. 2022. Available at https://sloanreview.mit.edu/article/clinical-ai-gets-the-headlines-but-administrative-ai-may-be-a-better-bet/ [accessed 10 April].

17  Hassan M, Kushniruk A, Borycki E. Barriers to and facilitators of artificial intelligence adoption in health care: scoping review. JMIR Hum Factors 2024; 11: e48633.
| Crossref | Google Scholar | PubMed |

18  Alami H, Lehoux P, Papoutsi C, et al. Understanding the integration of artificial intelligence in healthcare organisations and systems through the NASSS framework: a qualitative study in a leading Canadian academic centre. BMC Health Serv Res 2024; 24: 701.
| Crossref | Google Scholar | PubMed |

19  Gonzalez A, Crowell T, Lin SY. AI Code of Conduct—Safety, inclusivity, and sustainability. JAMA Intern Med 2025; 185(1): 12-13.
| Crossref | Google Scholar | PubMed |

20  Kinney M, Anastasiadou M, Naranjo-Zolotov M, Santos V. Expectation management in AI: A framework for understanding stakeholder trust and acceptance of artificial intelligence systems. Heliyon 2024; 10(7): e28562.
| Crossref | Google Scholar | PubMed |

21  Lahusen C, Maggetti M, Slavkovik M. Trust, trustworthiness and AI governance. Sci Rep 2024; 14(1): 20752.
| Crossref | Google Scholar | PubMed |

22  Isaacks DB, Borkowski AA. Implementing trustworthy AI in VA high reliability health care organizations. Fed Pract 2024; 41(2): 40-43.
| Crossref | Google Scholar | PubMed |

23  Kim M, Sohn H, Choi S, Kim S. Requirements for trustworthy artificial intelligence and its tool in healthcare. Healthc Inform Res 2023; 29(4): 315-322.
| Crossref | Google Scholar | PubMed |

24  Bergquist M, Rolandsson B, Gryska E, Laesser M, Hoefling N, Heckemann R, Schneiderman JF, Björkman-Burtscher IM. Trust and stakeholder perspectives on the implementation of AI tools in clinical radiology. Eur Radiol 2024; 34(1): 338-347.
| Crossref | Google Scholar | PubMed |

25  Hennrich J, Ritz E, Hofmann P, Urbach N. Capturing artificial intelligence tools’ value proposition in healthcare: a qualitative research study. BMC Health Serv Res 2024; 24(1): 420.
| Crossref | Google Scholar | PubMed |

26  Mahmood U, Shukla-Dave A, Chan HP, et al. Artificial intelligence in medicine: mitigating risks and maximizing benefits via quality assurance, quality control, and acceptance testing. BJR Artif Intell 2024; 1(1): ubae003.
| Crossref | Google Scholar | PubMed |

27  Gazquez-Garcia J, Sánchez-Bocanegra CL, Sevillano JL. AI in the health sector: Systematic review of key skills for future health professionals. JMIR Med Educ 2025; 11: e58161.
| Crossref | Google Scholar | PubMed |

28  Rose SL, Shapiro D. An ethically supported framework for determining patient notification and informed consent practices when using artificial intelligence in health care. Chest 2024; 166(3): 572-578.
| Crossref | Google Scholar | PubMed |

29  Cavalcante Siebert L, Lupetti ML, Aizenberg E, et al. Meaningful human control: actionable properties for AI system development. AI Ethics 2023; 3: 241-255.
| Google Scholar |

30  Khanijahani A, Iezadi S, Dudley S, et al. Organizational, professional, and patient characteristics associated with artificial intelligence adoption in healthcare: a systematic review. Health Policy Technol 2022; 11(1): 100602.
| Google Scholar |

31  Ranjbar A, Mork EW, Ravn J, et al. Managing risk and quality of AI in healthcare: Are hospitals ready for implementation? Risk Manag Healthc Policy 2024; 17: 877-882.
| Crossref | Google Scholar | PubMed |

32  Jin MF, Noseworthy PA, Yao X. Assessing artificial intelligence solution effectiveness: The role of pragmatic trials. Mayo Clin Proc Digit Health 2024; 2(4): 499-510.
| Crossref | Google Scholar | PubMed |

33  Dixit A, Quaglietta J, Gaulton C. Preparing for the future: how organizations can prepare boards, leaders, and risk managers for artificial intelligence. Healthc Manage Forum 2021; 34: 346-352.
| Crossref | Google Scholar | PubMed |

34  Ratwani RM, Bates DW, Classen DC. Patient safety and artificial intelligence in clinical care. JAMA Health Forum 2024; 5(2): e235514.
| Crossref | Google Scholar | PubMed |

35  Ansari S, Baur B, Singh K, Admon AJ. Challenges in the postmarket surveillance of clinical prediction models. NEJM AI 2025; 2(5): AIp2401116.
| Crossref | Google Scholar |

36  Nolan P, Matulionyte R. Artificial Intelligence in medicine: Issues when determining negligence. J Law Med 2023; 30: 593-615.
| Google Scholar | PubMed |

37  Nolan P. Artificial intelligence in medicine – is too much transparency a good thing? Med Leg J 2023; 91(4): 193-197.
| Crossref | Google Scholar | PubMed |

38  Mello MM, Guha N. Understanding liability risk from using health care artificial intelligence tools. N Engl J Med 2024; 390(3): 271-278.
| Crossref | Google Scholar | PubMed |

39  McCradden MD, Stedman I. Explaining decisions without explainability? Artificial intelligence and medicolegal accountability. Future Healthc J 2024; 11: 100171.
| Crossref | Google Scholar | PubMed |

40  Matulionyte R, Nolan P, Magrabi F, Beheshti A. Should AI-enabled medical devices be explainable? Int J Law Inform Technol 2022; 30(2): 151-180.
| Google Scholar |

41  Competition and Consumer Act 2010 (Commonwealth), Schedule 2, Australian Consumer Law, s9, s138.

42  Liu X, Glocker B, McCradden MM, et al. The medical algorithmic audit. Lancet Digit Health 2022; 4(5): e384-e397.
| Crossref | Google Scholar | PubMed |

Footnotes

A Civil Liability Act 2003 (Qld), Sections 9 to 12.

B Civil Liability Act 2003 (Qld), Section 21; See also: Rogers v Whitaker (1992) 175 CLR 479.

C Civil Liability Act 2003 (Qld), Section 22: See also: Bolam v Friern Hospital Management Committee [1957] 1 WLR 582 (‘The Bolam Test’)