RESEARCH ARTICLE

Epidemiology of healthcare-associated infections: uses, pitfalls and the future

Mary-Louise McLaws

School of Public Health and Community Medicine
The University of New South Wales
Sydney, NSW 2052, Australia
Tel: +61 2 9385 2586
Fax: +61 2 93136185
Email: m.mclaws@unsw.edu.au

Microbiology Australia 35(1) 17-23 https://doi.org/10.1071/MA14006
Published: 10 February 2014

Key messages
  1. Surveillance is not designed to establish causality. It is an appropriate design to flag the magnitude of healthcare-associated infections (HAIs) when correct analysis is applied to the data.

  2. The distributions of rare events such as HAIs are often overdispersed and difficult to predict. Monthly and quarterly HAI counts should be presented in charts only, and annual HAIs should be reported as rates. Use funnel plots to compare annual performance between facilities of different sizes. Establish thresholds from Poisson or negative binomial regression models.

  3. An alternative to HAI surveillance is HAI-related process surveillance such as surveillance of the important elements of infection prevention bundles.

  4. There should be an Internet data repository for online data analysis with public access to HAI data.

  5. The public should have access to a single source for national and state reports.




This paper provides an overview of the history of epidemiological activities at state and national levels to monitor healthcare-associated infections (HAIs) in Australia; an examination of the pitfalls of surveillance as an epidemiological design for establishing causality of HAIs, and the attempts at correcting them; the ease of web access to information about statewide programs and reports; and a look into the future of HAI epidemiology.


History of the epidemiology of HAI in Australia

I never guess, it is a shocking habit, destructive to the logical faculty. Sherlock Holmes in The Sign of Four.

The epidemiology of causal organisms for HAI in Australia would have been unknown until Phyllis Rountree’s first analysis of the distribution of Staphylococcus aureus susceptibility patterns, reported in a joint publication in 19471. In 2000, laboratories commenced providing annual samples of Staphylococcus aureus isolates, from inpatients and outpatients, to the Australian Group on Antimicrobial Resistance (AGAR) to establish the proportion that were methicillin-sensitive Staphylococcus aureus (MSSA) and methicillin-resistant Staphylococcus aureus (MRSA) and their antibiograms2. The epidemiology of Clostridium difficile infection (CDI) in Australian hospital patients was first described in 19833, and surveillance and reporting of CDI is now mandatory for public hospitals.

The first national epidemiological study of the magnitude of all types of HAIs, in 28,643 patients from 269 public hospitals, was undertaken in 1984 using internationally standardised definitions4. Thereafter, and for more than a decade, our understanding of the epidemiology of HAIs in Australia remained a tightly guarded secret within individual hospitals. The New South Wales (NSW) Department of Health commissioned the first attempt at a statewide surveillance program using internationally standardised definitions in 1998, and pilot testing continued until 20005. Over the next few years individual statewide programs were established: Queensland (QLD)6 in 2000; Victoria (VIC)7 in 2002; and Western Australia (WA)8 with voluntary surveillance in 2005 and mandatory surveillance in 2007. The Northern Territory (NT) has been conducting healthcare-associated Staphylococcus aureus bloodstream infection (SABSI) and selective surgical site infection (SSI) surveillance, informally, for the past 20 years at the hospital level and, recently, data have been provided to NT Health (personal communication, Tain Gardiner, NT Health). Only major hospitals in South Australia (SA) have been reporting healthcare-associated bloodstream infections (BSI) and clinically significant MRSA isolates since 1997, and recently these activities have been expanded statewide with the addition of selected SSI and central line-associated bloodstream infection (CLABSI) surveillance9. Then, in 2009, national surveillance commenced with the establishment of the Australian Commission on Safety and Quality in Health Care (ACSQHC), which mandated that all public hospitals report the incidence of SABSI, and in 2011 the required data were extended to include CDI and CLABSI10.

The current state-based surveillance programs (Table 1) illustrate the commonalities across individually focused programs that address their specific patient risk groups. Definitions that use different times to onset for deep/organ-space SSI, and questions about auditors’ ability to apply the current CLABSI definition11 to achieve a standardised case definition, may compromise the validity of between-hospital comparisons nationally. The pitfall of the CLABSI definition is associated with one of the criteria for interpreting a positive blood culture as significant, namely: ‘a common skin contaminant is cultured from at least 1 blood sample, and the physician institutes appropriate antimicrobial therapy’; this places undue weight on the clinician’s diagnosis in the presence of an ambiguous blood culture result. I suspect that if reimbursement to public hospitals of the costs associated with CLABSIs is withdrawn, as it has been in the United States, and national antibiotic stewardship is implemented, there will be a reduction in the influence of this criterion. Standardisation of case definitions and case detection across Australia is not insurmountable and, even without improvements to HAI definitions, national surveillance could commence immediately if a moratorium on state comparisons was introduced during the initial phase of data sharing.


Table 1. Components of healthcare-associated infection surveillance in public healthcare facilities by state and territory.

However, more important hurdles for a meaningful national surveillance program include the failure to collect important extrinsic and intrinsic determinants that may act as confounders, the lack of representative patient sampling, and the need for improved analysis. Without these improvements, surveillance data should not be used to point to differences between hospitals, as there are inherent pitfalls to surveillance.


Pitfalls of HAI surveillance

Information is not knowledge. Albert Einstein, Physicist.

In the words of the ACSQHC, surveillance is undertaken with a certainty that, with mandatory and ‘authoritative’ analysis, one can make comparisons between hospitals of a similar level and provide ‘an evidence base’ with which to direct ‘public health action for better health outcomes’10. Yet epidemiologists understand that surveillance may not provide information that is sufficiently robust to enable sound comparisons between quarterly surveillance reports, even within a single hospital, or evidence of causation. Surveillance is a basic study design that can only flag the possibility of change; in fact, an apparent change may be due to random fluctuation rather than a true increase or decrease in the epidemiology of HAI. To understand the limitations of the results, we will examine the epidemiological pitfalls of surveillance.

Historically, epidemiology was developed for and by public health professionals to study the distribution and causes of common diseases in the population. Surveillance is a lesser epidemiological design for measuring changes in the trends of a disease in the total population or in sentinel groups12. Healthcare facilities may passively survey their patient populations for multiresistant organisms (MROs), such as MRSA or vancomycin-resistant enterococci (VRE), or key HAIs, such as SABSI, CDI and many others, via positive laboratory results reported by pathology. In the main, healthcare facilities undertake sentinel surveillance, where the same surgical procedures or at-risk groups are used to establish trends in rates of specific HAIs, such as SSI associated with total knee or hip replacement or coronary artery bypass graft (CABG) surgery. Surveillance for HAI can be undertaken periodically, but to improve the level of evidence about trends it is preferable to use continuous surveillance. A pitfall of sentinel surveillance in healthcare facilities is that different patients are followed over different time periods to establish changes in rates between reporting quarters, without collecting the data needed to establish whether a change in rates is associated with a change in the intrinsic (patient) or extrinsic (hospital) determinants (risk factors) of infection. This lack of concurrent measurement of intrinsic determinants is a distinguishing and limiting feature of the current approach to surveillance of HAI.

Nonetheless, the strength of the surveillance design is its simplicity and its use of: (i) a standardised method for data collection; (ii) reliable and standardised definitions (often using diagnostic test results that increase the validity of a case); and (iii) the collection of few variables (the variable of interest and one or two risk factors). The ranking of study designs for the ability to provide high-level evidence for causation has been painstakingly evaluated for the effect of common methodological shortfalls associated with each epidemiological study design13. Surveillance does not rank as a study design for testing associations between potential determinants of infections and the outcome of infection. Surveillance is an observational study design and, as such, should principally be used to measure changes in the magnitude of infection, not its causes; it can indicate potential trends in the transmission patterns of HAI when undertaken in similar patient groups who experience similar exposures to intrinsic and extrinsic determinants of infection over the surveillance periods being compared.


Study design for causality

When examining the complex association of causality for HAI, higher-level study designs (experimental designs, including randomised controlled trials and pseudo-randomised controlled trials) will produce estimates of association with a higher degree of certainty than estimates produced from lesser designs (observational designs, including non-randomised controlled trials, cohort, case-control and interrupted time series with a control group). The higher-level designs attempt to control a priori (in advance) for the effect of confounding (a distortion of the estimates of HAI by other causal or proxy causal factors) during the enrolment and randomisation of patients. The lesser designs, especially surveillance and time series, are mostly reliant on a posteriori testing for the presence of confounders and attempt to adjust for confounding during analysis. Consider a select patient group under routine surveillance, for example, patients undergoing a CABG procedure, of whom a proportion may have diabetes. There are three rules of confounding (Fig. 1), and confounding (from the effect of diabetes) will distort the risk estimates of HAI if: (i) patients with uncontrolled diabetes are unequally distributed across the extrinsic determinant groups (e.g. surgeons or surveillance periods); (ii) uncontrolled diabetes has a direct causal link with HAI or is a proxy risk factor for HAI; and (iii) uncontrolled diabetes is not a determinant of interest. The effect of uncontrolled diabetes, during CABG procedures, on the rate of HAI cannot be controlled by a posteriori stratification if uncontrolled diabetes status is not collected.
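
To make the distortion concrete, the sketch below uses invented counts to show how an unequal distribution of uncontrolled diabetes between two surveillance periods can confound a crude comparison of SSI rates; the numbers and variable names are illustrative only.

```python
# Invented counts illustrating the rules in Fig. 1: uncontrolled diabetes is
# unequally distributed across two surveillance periods and is itself a risk
# factor for SSI, so it confounds the crude between-period comparison.
periods = {
    # period: {stratum: (infections, procedures)}
    "Q1": {"diabetes": (6, 40), "no diabetes": (2, 160)},
    "Q2": {"diabetes": (12, 100), "no diabetes": (1, 100)},
}

for period, strata in periods.items():
    infections = sum(i for i, _ in strata.values())
    procedures = sum(n for _, n in strata.values())
    crude = infections / procedures
    stratified = {group: i / n for group, (i, n) in strata.items()}
    print(period, f"crude {crude:.1%}",
          {group: f"{rate:.1%}" for group, rate in stratified.items()})

# The crude rate rises from 4.0% (Q1) to 6.5% (Q2) even though the rate fell
# within both strata; the apparent increase reflects the larger proportion of
# diabetic patients in Q2, not a change in infection risk.
```

Stratifying (or modelling) by diabetes status recovers the true within-stratum trend, but only if the confounder was collected in the first place.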


Figure 1. The three rules for confounding for the outcome of interest.


How is confounding controlled and is this approach appropriate for Australia?

Not everything that can be counted counts, and not everything that counts can be counted. Albert Einstein, Physicist.

The potential for uncontrolled confounding is a fundamental design flaw of all observational designs including surveillance. A priori control of confounding occurs during randomisation and is superior to a posteriori attempts to control confounding during analysis. A posteriori control through analysis is undertaken for only those potential confounders collected along with the primary risk factors, for example, type of procedure. Stratifying HAI rates by National Nosocomial Infections Surveillance (NNIS) Risk Index score14 was an a posteriori attempt to adjust for the effect of American Society of Anesthesiologists (ASA) scorei, duration of procedure and degree of contamination.
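
For readers unfamiliar with the index, the following minimal sketch scores the NNIS risk index (0–3) as described by Culver et al.14: one point each for an ASA score of 3 or more, a contaminated or dirty wound, and a duration of procedure beyond the procedure-specific 75th percentile cut-point. The function and example values are illustrative, not part of any surveillance program’s code.

```python
def nnis_risk_index(asa_score: int, wound_class: int,
                    duration_min: float, cutpoint_min: float) -> int:
    """Return the NNIS risk index (0-3) for a single procedure.

    One point is added for each of:
      * ASA score of 3, 4 or 5
      * contaminated or dirty wound (wound class 3 or 4)
      * duration of procedure exceeding the procedure-specific
        75th percentile cut-point
    """
    return (int(asa_score >= 3)
            + int(wound_class >= 3)
            + int(duration_min > cutpoint_min))


# Illustrative CABG procedure: ASA 3, clean wound, 290 min against a 300 min cut-point.
print(nnis_risk_index(asa_score=3, wound_class=1,
                      duration_min=290, cutpoint_min=300))  # -> 1
```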

Controlling for the three NNIS risk index factors14 in SSI rates has had varying success in Australia5,15–17. Extended duration of procedure, a high ASA score and contamination of the surgical site must be common enough across surgical patients to enable successful discrimination of different HAI rates between the three NNIS risk levels. During the first pilot testing of standardised definitions in NSW in the late 1990s, the different NNIS risk indices lacked discrimination for many procedures and it was determined that the burden of collecting these data was not warranted5. The inability of duration of procedure beyond the 75th percentile and ASA score to discriminate risk was due to the homogeneity of these factors across many procedures5,15–17. Durations of procedure in NSW were longer than those established by NNIS for some procedures (e.g. CABG) and shorter for others (e.g. knee replacement). Colorectal surgery was the only procedure where the SSI rate was significantly influenced by the risk factors that contribute to the NNIS risk index5. The variable planned versus emergency/unplanned also failed to discriminate risk differences. The NNIS index did not discriminate risk well for commonly surveyed procedures collected for the QLD surveillance program, such as partial hip, revision total hip, total knee and revision total knee replacements, femoro-popliteal bypass graft and CABG17. An unknown proportion of HAI will always be due to unaccounted confounding. One reason for the variable success in controlling confounding is that Australian patients are served by small to medium-sized healthcare facilities; as such, the samples of surgical procedures are small, preventing successful control of confounders.

In 2009 the National Healthcare Safety Network (NHSN) surveillance system moved from stratifying HAI data by specific risk wards and by the NNIS index to providing standardised infection ratios (SIRs) for deep/organ-space SSI and CLABSI18. SSI SIRs are calculated for each procedure using multiple logistic regression models, comparing the observed number of HAIs with the number predicted across all facilities for the reporting period while adjusting for multiple key risk factors specific to the procedure. An SSI SIR >1 indicates that more infections were observed than expected. Control for confounding using multiple regression analysis was applied as far back as 1984, but that was a one-off project applied to a large dataset4. The SSI SIR approach is a further extension of the NNIS risk factors, specific to each procedure: age, gender, trauma, body mass index (BMI), anaesthesia, ASA score, duration of procedure, use of an endoscope, medical school affiliation, hospital bed size, wound class and emergency status. Procedures excluded from the SSI SIR are those where duration is <5 minutes or extensively prolonged (the 75th percentile in minutes plus five times the interquartile range in minutes), suggesting that either intrinsic or extrinsic risks are uncontrollable during the procedure and HAI may not be preventable. Collecting a similarly extensive number of potential confounders may not be warranted in Australia.
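
As a minimal sketch of the SIR arithmetic, the following assumes hypothetical model-predicted risks for 400 procedures; the NHSN models and their coefficients are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-procedure infection probabilities, standing in for the
# predictions of an NHSN-style logistic regression model (age, ASA score,
# duration, BMI, ...). The values are illustrative only.
predicted_risk = rng.uniform(0.005, 0.04, size=400)
observed_infections = 9

expected_infections = predicted_risk.sum()       # sum of model-predicted risks
sir = observed_infections / expected_infections  # >1: more HAIs observed than predicted

print(f"expected = {expected_infections:.1f}, SIR = {sir:.2f}")
```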


When is confounding not controlled?

Controlling by modelling, or stratifying infection rates by specific risk determinants for each procedure, will not provide rates that discriminate between different patient groups if the procedure is performed infrequently and the expected probability of infection is low. Adjusting for multiple confounders, as in the NNIS risk index or NHSN approach, is not appropriate for individual hospitals because a local dataset for a quarterly reporting period will never be sufficiently large to result in meaningful rates. At the local hospital level HAIs are count data and their frequency varies greatly within a single month (referred to as overdispersion). Individual healthcare facilities collecting data for infrequently performed surgical procedures associated with low SSI rates cannot reliably produce a rate; if the 95% confidence interval (CI) around the SSI rate includes estimates that span below and above the threshold, the rate should be considered unreliable. For example, against the SSI threshold of <1% for cardiovascular procedures, an SSI rate of 0.97% (95% CI 0.38–2.5%) established from 400 procedures over 3 months is unreliable: the margin of error reflected in the 95% CI spans below and above the <1% threshold. Consequently, rates for procedures with low SSI rates and small samples should not be estimated more frequently than annually at the individual hospital level.
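
The margin-of-error argument can be reproduced in a few lines; the counts below (4 SSIs among 411 procedures, giving roughly the quoted 0.97%) are assumed because the underlying numerator and denominator are not stated.

```python
from statsmodels.stats.proportion import proportion_confint

infections, procedures = 4, 411   # assumed counts giving a rate of ~0.97%
threshold = 0.01                  # the <1% SSI threshold for cardiovascular procedures

rate = infections / procedures
low, high = proportion_confint(infections, procedures, alpha=0.05, method="wilson")

print(f"SSI rate {rate:.2%} (95% CI {low:.2%}-{high:.2%})")
# The interval spans the 1% threshold, so this quarterly rate cannot be
# declared above or below the threshold with any confidence.
print("margin of error spans threshold:", low < threshold < high)
```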

The aim of surveillance data is primarily to indicate the possibility of a problem; however, the current method of aggregating all catheter-days to establish a single CLABSI rate has a major methodological pitfall19. The majority of CLABSIs develop in patients with dwell times >9 days, whereas the majority of patients are admitted to NSW intensive care units for <9 days19. This means that the CLABSI rate as currently estimated reflects neither the epidemiology in the majority of low-risk patients nor the magnitude of infection in the small proportion of high-risk patients whose dwell time is >9 days and who are the major contributors to the CLABSI rate. To develop appropriate infection prevention strategies for high-risk patients, rates for prolonged dwell times must be separated from the close-to-zero rate associated with shorter dwell times20.
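
A sketch of the dwell-time stratification argued for above, using invented line-level data; a real analysis would use patient- or line-level records with line-days as the denominator.

```python
import numpy as np
import pandas as pd

# Invented central lines: dwell time (days) and whether a CLABSI occurred.
lines = pd.DataFrame({
    "dwell_days": [2, 3, 5, 7, 8, 10, 12, 15, 21, 30],
    "clabsi":     [0, 0, 0, 0, 0,  0,  1,  0,  1,  1],
})
lines["stratum"] = np.where(lines["dwell_days"] > 9, "> 9 days", "<= 9 days")

summary = lines.groupby("stratum").agg(
    infections=("clabsi", "sum"),
    line_days=("dwell_days", "sum"),
)
summary["rate_per_1000_line_days"] = 1000 * summary["infections"] / summary["line_days"]
print(summary)
# The aggregate rate hides a near-zero rate in the short-dwell stratum and a
# much higher rate in the long-dwell stratum that should drive prevention priorities.
```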


Analyses for count data

The goal is to turn data into information, and information into insight. Carly Fiorina, Former CEO of Hewlett-Packard.

The problem of unreliable information on HAI obtained from small samples may be compounded by the application of parametric statistics to overdispersed data, which renders comparisons between a quarterly report and an annual rate within the facility useless. Small but real changes in the HAI rate over a quarter may be missed, or increases in HAI may be due to random fluctuation. When count data are overdispersed, as for catheter-associated urinary tract infection (CAUTI), CLABSI, ventilator-associated pneumonia (VAP) or SABSI, and comparisons are made using small samples against a national threshold, the appropriate approach may be zero-inflated models (for datasets with excessive zeros because most patients do not acquire a HAI), or negative binomial or Poisson regression models using patient- or device-days or the total number of procedures as the offset or denominator variable21. Monthly and quarterly counts of HAI should be plotted using methods that ‘smooth out’ excessive fluctuation, such as cumulative sum (CUSUM) control charts for SSI and Shewhart control charts or exponentially weighted moving average (EWMA) charts for bloodstream infections and MROs22. When comparing rates between healthcare facilities of different sizes, use analyses that are less reactive to random fluctuation, such as funnel plots or Bayesian analysis23.
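
A minimal sketch of the regression approach for overdispersed counts, using statsmodels with invented monthly CLABSI counts and line-days as the exposure (offset); the data and model choices are illustrative only.

```python
import numpy as np
import statsmodels.api as sm

# Invented monthly CLABSI counts and central-line days for one unit; the counts
# include zero months and are overdispersed, as discussed above.
clabsi    = np.array([0, 2, 0, 1, 0, 0, 3, 0, 1, 0, 0, 2])
line_days = np.array([310, 295, 330, 280, 300, 315, 290, 305, 320, 285, 300, 310])
month     = np.arange(1, 13)

X = sm.add_constant(month)

# Poisson regression of counts on month, with line-days as the exposure so the
# model describes a rate per line-day.
poisson_fit = sm.GLM(clabsi, X, family=sm.families.Poisson(), exposure=line_days).fit()
print(poisson_fit.pearson_chi2 / poisson_fit.df_resid)  # >> 1 suggests overdispersion

# When overdispersion is present, a negative binomial family is the safer choice
# (alpha is fixed at 1.0 here for illustration; it can be estimated from the data).
nb_fit = sm.GLM(clabsi, X, family=sm.families.NegativeBinomial(alpha=1.0),
                exposure=line_days).fit()
print(nb_fit.params)
```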

Reporting of infrequently performed procedures and overdispersed counts of any HAI should: (i) include 95% CIs to illustrate whether the margin of error includes estimates below and above the threshold; (ii) when it does, calculate only annual rates; (iii) attempt annual comparisons of HAI against other institutions using funnel plots (a sketch follows below); (iv) for quarterly analysis, plot the counts of SSI, bloodstream infections and MROs on charts with a pre-determined threshold established from a binomial or Poisson model, or a chosen hospital target, as the threshold; and (v) for longitudinal comparisons, use Poisson or negative binomial regression models.
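
The following is a sketch of a funnel plot for annual between-facility comparison, with Poisson control limits drawn around the pooled rate; the hospital counts and patient-days are invented.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

# Invented annual SABSI counts and patient-days for hospitals of varying size.
infections   = np.array([3, 1, 8, 0, 12, 5, 2, 20])
patient_days = np.array([9e3, 4e3, 2.5e4, 3e3, 4.5e4, 1.8e4, 7e3, 9e4])
rates = 1e4 * infections / patient_days          # rate per 10,000 patient-days

pooled = infections.sum() / patient_days.sum()   # pooled rate used as the target

# 95% and 99.8% Poisson control limits across a range of denominators (the funnel).
n = np.linspace(patient_days.min(), patient_days.max(), 200)
for tail, style in [(0.025, "--"), (0.001, ":")]:
    plt.plot(n, stats.poisson.ppf(tail, pooled * n) / n * 1e4, "k" + style)
    plt.plot(n, stats.poisson.ppf(1 - tail, pooled * n) / n * 1e4, "k" + style)

plt.scatter(patient_days, rates)
plt.axhline(pooled * 1e4, color="grey")
plt.xlabel("patient-days (annual)")
plt.ylabel("SABSI per 10,000 patient-days")
plt.title("Funnel plot: annual rates against Poisson control limits")
plt.show()
```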

Quarterly charts and annual rates with 95% CIs should only be used to indicate or flag to hospital boards the possibility of changes in incidence. Visual displays (charts) of rare events against a realistic threshold will provide better insight than an unreliable rate.


Process surveillance

For facilities where surgical procedures are performed infrequently, the device utilisation rate is low or collecting data on potential confounders is difficult, process surveillance is appropriate (personal experience as World Health Organization Advisor to China and Malaysia). Process surveillance – for example, surveillance of pre-surgical prophylaxis, urinary catheter use and the early removal of intravascular devices – provides information that allows for immediate correction, with an immediate and direct impact on HAI. Given the success of bundling infection control strategies24,25 for rare HAIs, process surveillance of the implementation of the individual elements of a bundle could be an alternative to measuring a rare HAI. Surveillance of process indicators also becomes more attractive than HAI surveillance as length of stay decreases, with a concomitant increase in HAIs developing post-discharge rather than during hospitalisation. Currently, hand hygiene compliance rates (www.Myhospital.gov.au) are viewed as having a direct causal link with the level of SABSI, but they should be thought of as a patient safety process indicator until the reliability of both SABSI and hand hygiene data improves enough to enable a strong causal link to be proven26.
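
A minimal sketch of process-surveillance reporting: compliance with each element of a hypothetical central-line insertion bundle, with Wilson confidence intervals around the proportions. The element names and counts are invented.

```python
from statsmodels.stats.proportion import proportion_confint

# Hypothetical monthly audit: (compliant, observed) for each bundle element.
audits = {
    "hand hygiene before insertion":  (46, 50),
    "maximal sterile barriers":       (44, 50),
    "chlorhexidine skin preparation": (50, 50),
    "daily review of line necessity": (38, 50),
}

for element, (compliant, observed) in audits.items():
    rate = compliant / observed
    low, high = proportion_confint(compliant, observed, method="wilson")
    print(f"{element:32s} {rate:6.1%}  (95% CI {low:.1%}-{high:.1%})")
```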


Access to HAI rates

Support for public access to publicly funded data collection is not disputed in Australia27. Yet, without knowing the exact web address, accessing the correct website for the state surveillance programs (Table 1) and their reports was difficult for all programs, with the exception of Victoria, which has a dedicated website and impressive, readily accessible reports. Queensland’s surveillance program was readily located, but its reports, with the exception of SABSI, were not. The current websites for surveillance programs and reports are:

Data should be readily accessible to researchers, academics and the public. The MyHospital website provides SABSI and hand hygiene compliance data for individual healthcare facilities. However, the data have been aggregated and are not available as numerator and denominator data for calculating a margin of error for each rate (i.e. to estimate reliability). Interested parties must access each hospital separately, and the website does not provide state or area health service level data.


An epidemiological wish list for the future

When we have all data online it will be great for humanity. It is a prerequisite to solving many problems that humankind faces. Robert Cailliau, Belgian informatics engineer and computer scientist and co-developer of the World Wide Web.

Electronic banking is here and has, in the main, been safe. Encrypted internet databases would bring the future of data sharing between hospitals closer. HAI data should be deposited monthly, via encrypted internet transfer, to a central database for standardised analysis and rapid feedback. Automated online real-time charts and intra- and inter-hospital comparisons, using analyses appropriate for overdispersed count data, would remove the need for each hospital to have its own epidemiologist or statistician.

Public hospital HAI data have been collected with public funding. The cost of data collection within private hospitals is passed on to the consumer. Therefore, ethics dictate that there must be a central repository providing quarterly and annual data (in the form of numerators and denominators) available at any time to the public who pay for the data collection. The future challenge for meaningful HAI epidemiology will be its adaptation to shorter lengths of stay, hospital in the home, over-representation of the elderly and the comorbidities associated with living longer. Geospatial mapping, used in the study of human movement, defence, environmental science, parasitology and many other applications, could one day assist infection prevention staff to visualise and locate hotspots of HAI. Future HAI surveillance will benefit from greater input from a variety of professions, bringing together new methods of data collection and analysis.



Acknowledgements

I am grateful to Lyn Gilbert and Jan Gralton (NSW), Michael Richards (VIC), Irene Wilkinson (SA), Alistair McGregor (TAS), John Marquess (QLD), Allison Peterson (WA) and Tain Gardiner (NT) for providing me with information on the surveillance activities in their State/Territory.


References

[1]  Rountree, P.M. and Thomas, E.F. (1949) Incidence of penicillin-resistant and streptomycin-resistant Staphylococci in a hospital. Lancet 254, 501–504.

[2]  Turnidge, J.D. et al. (1996) Evolution of resistance in Staphylococcus aureus in Australian teaching hospitals. Australian Group on Antimicrobial Resistance (AGAR). Med. J. Aust. 164, 68–71.

[3]  Riley, T.V. et al. (1983) Diarrhoea associated with Clostridium difficile in a hospital population. Med. J. Aust. 1, 166–169.

[4]  McLaws, M.L. et al. (1988) The prevalence of nosocomial and community acquired infections in Australian Hospitals. Med. J. Aust. 149, 582–590.

[5]  McLaws, M.L. and Taylor, P. (2003) The Hospital Infection Standardised Surveillance (HISS) programme: analysis of a two-year pilot. J. Hosp. Infect. 53, 259–267.

[6]  Queensland Health. Centre for Healthcare Related Infection Surveillance and Prevention and Tuberculosis Control. http://www.health.qld.gov.au/chrisp/ (accessed 12 November 2013).

[7]  Russo, P.L. et al. (2006) The establishment of a statewide surveillance program for hospital-acquired infections in large Victorian public hospitals: a report from the VICNISS Coordinating Centre. Infect. Control Hosp. Epidemiol. 34, 430–436.

[8]  Western Australia Department of Health. Public health. http://www.public.health.wa.gov.au/3/455/2/reports_healthcare_associated_infection_unit.pm (accessed 12 November 2013).

[9]  South Australia Health. http://www.health.sa.gov.au/INFECTIONCONTROL/Default.aspx?tabid=147 (accessed 12 November 2013).

[10]  Australian Commission on Safety and Quality in Health Care. http://www.safetyandquality.gov.au (accessed 12 November 2013).

[11]  McBryde, E.S. et al. (2009) Validation of Statewide Surveillance System Data on Central Line-Associated Bloodstream Infection in Intensive Care Units in Australia. Infect. Control Hosp. Epidemiol. 30, 1045–1049.

[12]  Rothman, K.J. et al. (2008) Modern Epidemiology (Third edn). Wolters Kluwer Lippincott Williams and Wilkins, Philadelphia.

[13]  NHMRC (2009) NHMRC additional levels of evidence and grades for recommendations for developers of guidelines. National Health and Medical Research Council, Canberra. http://www.nhmrc.gov.au/.../stage_2_consultation_levels_and_grades.pdf (accessed 12 November 2013).

[14]  Culver, D.H. et al. (1991) Surgical wound infection rates by wound class, operative procedure, and patient risk index. Am. J. Med. 91, S152–S157.

[15]  Friedman, N.D. et al. (2007) Performance of the national nosocomial infections surveillance risk index in predicting surgical site infection in Australia. Infect. Control Hosp. Epidemiol. 28, 55–59.

[16]  Clements, A.C.A. et al. (2007) Risk stratification for surgical site infections in Australia: evaluation of the NNIS risk index. J. Hosp. Infect. 66, 148–155.

[17]  Morton, A.P. et al. (2008) Surveillance of healthcare-acquired infections in Queensland, Australia: data and lessons from the first 5 years. Infect. Control Hosp. Epidemiol. 29, 695–701.

[18]  Centers for Disease Control and Prevention. NHSN newsletter – your guide to the standardized infection ratio. http://www.cdc.gov/nhsn/PDFs/Newsletters/NHSN_NL_OCT_2010SE_final.pdf (accessed 10 November 2013).

[19]  McLaws, M.L. and Burrell, A. (2012) Zero-risk for central line associated bloodstream infection: are we there yet? Crit. Care Med. 40, 388–393.

[20]  Worth, L.J. and McLaws, M.L. (2012) Is it possible to achieve a target of zero central line associated bloodstream infections? Curr. Opin. Infect. Dis. 25, 650–657.

[21]  Gelman, A. and Hill, J. (2007) Data Analysis Using Regression and Multilevel/Hierarchical Models. Cambridge University Press, Melbourne.

[22]  Morton, A.P. et al. (2001) The application of statistical process control charts to the detection and monitoring of hospital-acquired infections. J. Qual. Clin. Pract. 21, 112–117.

[23]  Morton, A.P. et al. (2008) Surveillance of healthcare-acquired infections in Queensland, Australia: data and lessons from the first 5 years. Infect. Control Hosp. Epidemiol. 29, 695–701.

[24]  Saint, S. et al. (2009) Translating health care–associated urinary tract infection prevention research into practice via the bladder bundle. Jt. Comm. J. Qual. Patient Saf. 35, 449–455.

[25]  Wip, C. and Napolitano, L. (2009) Bundles to prevent ventilator-associated pneumonia: how valuable are they? Curr. Opin. Infect. Dis. 22, 159–166.

[26]  Playford, G.E. et al. (2012) Problematic linkage of publicly disclosed hand hygiene compliance and health care-associated Staphylococcus aureus bacteraemia rates. Med. J. Aust. 197, 29–30.

[27]  Worth, L.J. et al. (2013) Public reporting of health care-associated infection data in Australia: time to refine. Med. J. Aust. 198, 252–253.


Biography

Mary-Louise McLaws is Professor of Epidemiology at The University of New South Wales, Australia. Her work in healthcare-associated infections (HAIs) began with the first national survey in 1984 and she was World Health Organization Advisor to China and Malaysia during their development of a national surveillance system. Mary-Louise reviewed the epidemiology of the severe acute respiratory syndrome (SARS) outbreak in Beijing and Hong Kong. On behalf of the Department of Health and Ageing she reviewed the Pandemic Influenza Infection Control guidelines for evidence-based practices. As Honorary Advisor to the Clinical Excellence Commission she collaborates on patient safety activities and is epidemiology advisor to the World Health Organization Clean Care is Safer Care challenge.



i American Society of Anesthesiologists (ASA) Score is a global score that assesses the physical status of patients before surgery.