Australian Health Review
Journal of the Australian Healthcare & Hospitals Association
RESEARCH ARTICLE (Open Access)

Emergency department waiting times: do the raw data tell the whole story?

Janette Green A B , James Dawber A , Malcolm Masso A and Kathy Eagar A

A Australian Health Services Research Institute, Sydney Business School, iC Enterprise 1, Innovation Campus, University of Wollongong, NSW 2522, Australia. Email: jdawber@uow.edu.au, mmasso@uow.edu.au, kathyeagar@optusnet.com.au

B Corresponding author. Email: janette@uow.edu.au

Australian Health Review 38(1) 65-69 https://doi.org/10.1071/AH13065
Submitted: 20 May 2013  Accepted: 15 November 2013   Published: 17 January 2014

Journal Compilation © AHHA 2014

Abstract

Objective To determine whether there are real differences in emergency department (ED) performance between Australian states and territories.

Methods Cross-sectional analysis of 2009−10 attendances at EDs contributing to the Australian non-admitted patient ED care database. The main outcome measure was difference in waiting time across triage categories.

Results There were more than 5.8 million ED attendances. Raw ED waiting times varied by a range of factors including jurisdiction, triage category, geographic location and hospital peer group. All variables were significant in a model designed to test the effect of jurisdiction on ED waiting times, including triage category, hospital peer group, patient socioeconomic status and patient remoteness. When the interaction between triage category and jurisdiction entered the model, it was found to have a significant effect on ED waiting times (P < 0.001) and triage was also significant (P < 0.001). Jurisdiction was no longer statistically significant (P = 0.248 using all triage categories and 0.063 using only Australian Triage Scale 2 and 3).

Conclusions Although the Council of Australian Governments has adopted raw measures for its key ED performance indicators, raw waiting time statistics are misleading. There are no consistent differences in ED waiting times between states and territories after other factors are accounted for.

What is known about the topic? The length of time patients wait to be treated after presenting at an ED is routinely used to measure ED performance. In national health agreements with the federal government, each state and territory in Australia is expected to meet waiting time performance targets for the five ED triage categories. The raw data indicate differences in performance between states and territories.

What does this paper add? Measuring ED performance using raw data gives misleading results. There are no consistent differences in ED waiting times between the states and territories after other factors are taken into account.

What are the implications for practitioners? Judgements regarding differences in performance across states and territories for triage waiting times need to take into account the mix of patients and the mix of hospitals.

Introduction

When patients present at an Australian emergency department (ED), they are assigned a triage category to indicate how urgently they should be seen. Scores on the Australian Triage Scale (ATS) range from 1 (immediately life-threatening) to 5 (less urgent).1

Although designed as a measure of clinical urgency, the ATS is now used in the reporting of ED performance. Under existing national health agreements, the jurisdictions (six states and two territories) are assessed and compared on several indicators including the percentage of patients who are treated within national benchmark waiting times for each triage category. As shown in Table 1, benchmarks have been set as the percentage of patients seen within prescribed times for each category.


Table 1.  Australasian College for Emergency Medicine triage performance standards2
ATS, Australian Triage Scale

This paper reports on a study that used the triage scale to investigate jurisdictional differences in ED waiting times.3 The context is the current health reform agenda. The key research question is whether there are real differences in ED performance between jurisdictions. Investigating the link between the triage scale and ED performance is of particular interest at present because the ATS is included in the ED casemix classification adopted for use in the new national Activity Based Funding model.4


Methods

Data on ED presentations at Australian public hospitals between 1 July 2009 and 30 June 2010 were analysed. These data were drawn from the national non-admitted patient ED care database, which is compiled from data supplied by the jurisdictions to the Australian Institute of Health and Welfare (AIHW). Only the Australian Capital Territory (ACT) and the Northern Territory report nationally on all ED attendances, with all of the states excluding some smaller EDs from the data collection. The AIHW estimates that the percentage of ED attendances reported ranges from 67% in South Australia to 100% in the two territories.

Variables of interest in this database included triage category, ED waiting time, ED departure status, date of birth and postcode of usual residence. This last variable was used to derive a measure of socioeconomic status and a remoteness category for each patient. Hospital peer group was added to the analysis database. Waiting time was defined as the time elapsed between triage and the commencement of assessment and treatment.

The data were inspected for missing values and other anomalies. All records with a waiting time of more than 8 h, and all attendances for which the triage category was missing, were removed from the dataset. It was assumed that a wait of more than 8 h between being triaged and being seen represented either a data error or a patient who did not require emergency care.
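This trimming step is simple to express in code. The following is a minimal sketch only, assuming the extract has been loaded into a pandas DataFrame; the column names waiting_time_mins and triage_category are hypothetical and are not drawn from the national database specification.

import pandas as pd

def trim_ed_records(df: pd.DataFrame) -> pd.DataFrame:
    """Drop attendances with waits over 8 h or with no triage category recorded."""
    over_8h = df["waiting_time_mins"] > 8 * 60        # more than 8 h between triage and treatment
    missing_triage = df["triage_category"].isna()     # triage category not recorded
    return df.loc[~(over_8h | missing_triage)].copy()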

A descriptive analysis of the refined dataset was undertaken to explore the relationships between ED waiting times and variables such as patient demographics, jurisdiction, triage category and hospital peer group. As several of these variables were expected to be correlated with each other, the descriptive analysis also explored interactions between them.

The primary intent of the analysis was to test for and understand any differences in ED waiting times between the jurisdictions. Such differences could arise because of differences between hospitals or differences between patients within hospitals. To investigate any systematic variation in the data, the analysis continued by fitting several multilevel models, with ED waiting time as the response variable. Variables found to be associated with ED waiting times in the exploratory analysis were included in the models as explanatory variables, and systematic model selection was performed. Collectively, these models adjust for the relevant variables and hence provide a more meaningful comparison of differences in waiting times between jurisdictions.
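By way of illustration only, a multilevel model of this general form could be specified as follows, with hospital as the grouping (random-intercept) level. The variable names and the use of statsmodels are assumptions for the sketch; the paper does not report the software or the exact model specification used.

import pandas as pd
import statsmodels.formula.api as smf

# Placeholder for the trimmed analysis dataset; column names are illustrative.
ed_data = pd.read_csv("ed_attendances.csv")

# Random intercept for each hospital; fixed effects for the explanatory variables
# identified in the exploratory analysis.
model = smf.mixedlm(
    "waiting_time_mins ~ C(triage_category) + C(jurisdiction)"
    " + C(peer_group) + C(seifa_quintile) + C(remoteness)",
    data=ed_data,
    groups=ed_data["hospital_id"],
)
result = model.fit()
print(result.summary())   # coefficient estimates and P-values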

Explanatory variables within the model were defined as statistically significant if the P-value was less than 0.05. However, it is important to emphasise that, with such a large sample size, statistical significance in some tests is easily achieved. This means that even very minor differences that would be considered clinically non-significant would very likely be statistically significant. For this reason, clinical and statistical significance were considered together.

All analyses were conducted on the edited dataset, first using the whole dataset, then for Triage Categories 2 and 3 separately and for Triage Categories 2 and 3 together. These latter analyses required further trimming of the data. Statistical outliers were judged to be waiting times longer than 120 min and 180 min for Triage Categories 2 and 3 respectively. Patients whose treatment could be delayed beyond these outlier thresholds were considered unlikely to have met the clinical criteria for classification into the relevant triage category; Triage Category 2 is for patients with imminently life-threatening conditions and Triage Category 3 is for patients with potentially life-threatening conditions.
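Continuing the hypothetical column names used in the earlier sketch, this additional trimming amounts to applying a per-category waiting-time cut-off:

import pandas as pd

# Outlier cut-offs (minutes) for the separate Triage Category 2 and 3 analyses.
OUTLIER_CUTOFF_MINS = {2: 120, 3: 180}

def trim_category_outliers(df: pd.DataFrame, category: int) -> pd.DataFrame:
    """Keep attendances in the given triage category whose wait is within the cut-off."""
    subset = df[df["triage_category"] == category]
    return subset[subset["waiting_time_mins"] <= OUTLIER_CUTOFF_MINS[category]].copy()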

The results were presented at a national stakeholder workshop involving participants drawn from jurisdictions and health interest groups, including academics and the Australasian College of Emergency Medicine. Their feedback provided additional insight for interpretation of the results.


Results

The initial trimming of the data removed 4165 records with waiting times in excess of 8 h and a further 3328 records with the triage category missing, representing 0.072% and 0.057% of total attendances respectively. This left more than 5.8 million ED attendances, with each jurisdiction contributing more than 100 000 attendances to the dataset. The overall triage profile of each jurisdiction is shown in Table 2, which gives both the number and percentage of attendances by triage category. The percentage in each category (i.e. the triage profile) varied considerably between some jurisdictions. These differences in triage profile are explored further below.


Table 2.  Profile of emergency department attendances by triage and jurisdiction, 2009−10

A little over 40% of attendances represented in the data had been allocated to Triage Categories 2 and 3. For the analyses that examined data on patients from these two triage categories separately, 1270 (0.24%) of Triage Category 2 and 43 022 (2.25%) of Triage Category 3 records were identified as outliers and removed.

As expected, ED waiting times varied by triage category, with patients in Triage Category 1 waiting the shortest time and patients in Triage Category 5 waiting the longest. These differences are illustrated by the waiting times for Triage Categories 2 and 3 shown in Table 3. Table 3 also shows that waiting times differed between jurisdictions. With all triage categories combined, mean waiting times varied from 38.5 min in New South Wales (NSW) to 65.7 min in the ACT. Different patterns emerged when Triage Categories 2 and 3 were examined separately, with the shortest waiting times being in the ACT (7.5 min) for Triage Category 2 and in Victoria (27.3 min) for Triage Category 3.


Table 3.  Mean waiting times (min) by jurisdiction

The mean waiting time also varied by geographic location and by hospital peer group (Fig. 1). Similar patterns were found when Triage Categories 2 and 3 were investigated separately. No clear effect of socioeconomic status or Indigenous status on waiting times was observed. There were small differences by age, with waiting times slightly shorter for children and decreasing slightly with increasing age among adults.


Fig. 1.  Mean waiting times by hospital peer group – all triage categories. A1, principal referral; A2, specialist women’s and children’s; B1, large major cities; B2, large regional and remote; C1, medium major cities and regional group 1; C2, medium major cities and regional group 2; D1, small regional acute; D2, small non-acute; D3, remote acute; G, unpeered and other acute.

The proportions of ED attendances by hospital peer group differed between the jurisdictions. For example, all ED attendances in the ACT were in Peer Group A hospitals. In contrast, only 50% of ED attendances in Western Australia were in Peer Group A hospitals.

Although there were observed differences in mean waiting times between jurisdictions, there were also jurisdictional differences in other factors. Statistical models were fitted to the data to help determine whether the differences in waiting times were associated with other factors varying between the jurisdictions, such as a greater proportion of attendances at Peer Group A hospitals, which also had longer waiting times.

All variables were significant in the model designed to test the effect of jurisdiction on ED waiting times. These variables included triage category, hospital peer group, patient socioeconomic status and patient remoteness. However, even after adjusting for all these other variables, there were still significant differences between the states and territories. This was the case both when all triage categories were included in the model and when the model used only Triage Categories 2 and 3.

The model was then expanded to take into account the simultaneous effect of pairs of explanatory variables by including interaction terms. When the interaction between triage category and jurisdiction entered the model, it was found to have a significant effect on ED waiting times (P < 0.001) and triage category was also significant (P < 0.001). Importantly, jurisdiction was no longer statistically significant (P = 0.248 using all triage categories and 0.063 using only ATS 2 and 3) (Table 4).
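As a sketch only, an interaction of this kind could be added to the earlier illustrative model as follows; the variable names remain hypothetical and this is not the study’s actual specification.

import pandas as pd
import statsmodels.formula.api as smf

ed_data = pd.read_csv("ed_attendances.csv")   # placeholder for the trimmed dataset

# '*' expands to the triage and jurisdiction main effects plus their interaction.
model = smf.mixedlm(
    "waiting_time_mins ~ C(triage_category) * C(jurisdiction)"
    " + C(peer_group) + C(seifa_quintile) + C(remoteness)",
    data=ed_data,
    groups=ed_data["hospital_id"],
)
result = model.fit()
print(result.summary())   # inspect the interaction terms and the jurisdiction main effects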


Table 4.  Statistical significance results using the full statistical model


Discussion

Several studies have found that comparisons of raw ED performance measures can be misleading. For example, based on hospital-level data reported on the MyHospitals website, performance differences between hospitals have been found to be related to patient urgency mix and hospital peer group.5

This study was designed to investigate jurisdictional differences in the length of time patients wait in an ED. Our key finding is that there are no consistent differences in ED waiting times between the jurisdictions after other factors (including the effect of hospital peer group) are taken into account.

Waiting times differ according to hospital type (hospital ‘peer groups’). Patients attending ‘Peer Group A’ hospitals wait significantly longer than patients attending other hospitals. The implication of this finding is that a jurisdiction may have a longer average reported waiting time because more of its patients are seen in Peer Group A hospitals, and these hospitals tend to have poorer performance across all jurisdictions.

It was equally necessary to adjust for differences in geographic location and recorded urgency of treatment. The difference in triage profiles (Table 2) is an important finding. Because the benchmark times for Triage Categories 1−3 are all 30 min or less (Table 1), the proportion of attendances that must be seen within 30 min to meet the national benchmarks equals the proportion triaged into these categories: fewer than 40% in NSW and Victoria compared with more than 50% in Queensland. In other words, all other things being equal, it would be easier for NSW and Victoria to achieve the national benchmark than for Queensland.

Many studies have been conducted to measure the consistency of triage using the ATS but it is difficult to compare results because of differences in methods and use of the ATS over time.6 A recent review of the ATS concluded that ‘the ATS per se is insufficient to ensure acceptable inter-rater reliability, particularly during busy periods in the ED, given the over-emphasis of the ATS on key outcomes’.7 The inter-rater reliability of the ATS for mental health patients in ED is particularly inadequate.8

The triage category assigned can, in part, depend on the person doing the triage. For example, the results of a review of the literature (involving eight studies) suggest that triage nurses who have received triage education make better triage decisions, whereas none of the studies found a significant relationship between triage decision-making and experience, whether measured as years working as an emergency nurse or years of triage experience.9

Our study results rely on the accuracy of the times recorded as these are used to calculate waiting time. An audit of triage practices undertaken in NSW in 2008 identified some differences in practices and protocols that would have an effect on the recording of times and therefore the subsequent calculation of waiting time.10 However, there is no reason to suspect any systematic jurisdictional errors arising because of these differences.

Several other factors influence waiting time statistics. For example, there is good evidence that factors within the ED affect patient throughput. Queues inevitably develop when demand exceeds capacity, as occurs because of the variability inherent in the demand for ED services.11 Although there is no single, internationally accepted definition of ED overcrowding,12 it is recognised as a cause of inefficiencies. Overcrowding in the ED can arise because of the number, urgency or complexity of patients arriving, because of factors within the ED or because of the inability to ‘move patients on’ elsewhere, usually due to an inability to admit patients to an inpatient bed. In particular, a systematic review of the literature identified the importance of output factors causing problems in EDs, with the authors concluding that ‘the body of literature demonstrates that ED crowding is a local manifestation of a systemic disease’.13

High bed occupancy, rather than the number of beds per se, appears to be the major driver of ‘access block’ problems in hospitals.14 The influence of hospital occupancy is well illustrated by a study at the Queen Elizabeth Hospital in Adelaide where a decrease in hospital occupancy during a period of industrial action was linked with a reduction in the time patients in Triage Categories 2–5 waited for treatment.15

Although any of these factors can influence ED waiting times, none of them is likely to explain the systematic differences in jurisdictional triage profiles. Nevertheless, when the results were discussed at the national stakeholder workshop, jurisdictional representatives and clinicians were not surprised to learn of these significant jurisdictional differences in triage profile.

There was a widespread perception that triage is assessed differently across (and sometimes within) jurisdictions. Importantly, differences in triage profiles were attributed to differences in triage processes rather than jurisdictional differences in the clinical profile of patients presenting for ED care.

Although there was no consensus about the reasons for these differences, it was noted that there had been no nationally consistent training in triage assignment since the ATS was introduced two decades ago, despite the release of the Emergency Triage Education Kit in 2009. The implication is that triage staff in different locations have unintentionally drifted away from the definitions of each triage category over time. A further issue is that, although triage was introduced for clinical purposes, it is now being used in other ways. Specifically, triage profile influences funding in some jurisdictions and ED performance is measured by triage category. There are thus varying incentives across the country to triage in different ways. This is consistent with Bevan and Hood’s study in the UK, which found that the use of targets to measure performance results in gaming.16

The implications of this are important. The ATS has several uses. Although it is primarily a tool to ensure patients are treated within an appropriate timeframe based on the urgency of their condition, it is also used as a funding mechanism and as an indicator of performance.17,18 Each of these secondary uses creates its own incentives.


Conclusion

Raw waiting time statistics can be misleading. Although one jurisdiction may appear to be the best performer when measured by raw waiting times, this is not the case when differences in the mix of patients and the mix of hospitals are taken into account.

In the context of the current health-reform agenda, further research is required to better understand the reasons for differences in triage practices. Subsequent to that, a national strategy is required to improve the consistency of triage assignment across the country. Until this occurs, we urge caution in interpreting raw triage waiting times as measures of performance and in using triage category as a basis for funding.


Competing interests

The authors have no conflicts of interest to report.



Acknowledgements

The authors wish to acknowledge the Council of Australian Governments Reform Council for funding the project reported in this paper.


References

[1]  Australasian College for Emergency Medicine. National triage scale. Emerg Med 1994; 6 145–6.

[2]  Australasian College for Emergency Medicine. Policy on the Australasian Triage Scale. Melbourne: Australasian College for Emergency Medicine; 2006.

[3]  Eagar K, Dawber J, Masso M, Bird S, Green J. Emergency department performance by states and territories. Wollongong: Centre for Health Service Development, University of Wollongong; 2011.

[4]  Independent Hospital Pricing Authority. ABF price model reference classifications for 2012–13. Canberra: Independent Hospital Pricing Authority; 2012. Available at http://www.ihpa.gov.au/internet/ihpa/publishing.nsf/Content/ABF-Price-Model-Reference-Classifications-for-2012-13 [verified 8 February 2013].

[5]  Greene J, Hall J. The comparability of emergency department waiting time performance data. Med J Aust 2012; 197 345–8.

[6]  Gerdtz MF, Collins M, Chu M, Grant A, Tchernomoroff R, Pollard C, et al. Optimizing triage consistency in Australian emergency departments: The Emergency Triage Education Kit. Emerg Med Australas 2008; 20 250–9.

[7]  Forero R, Nugus P. Literature review on the Australasian Triage Scale (ATS). Sydney: Australian Institute of Health Innovation, Australasian College for Emergency Medicine; 2012. Available at http://www.acem.org.au/media/media_releases/2012_-_ACEM_Triage_Literature_Review.pdf [verified 8 February 2013].

[8]  Creaton A, Liew D, Knott J, Wright M. Interrater reliability of the Australasian Triage Scale for mental health patients. Emerg Med Australas 2008; 20 468–74.

[9]  Considine J, Botti M, Thomas S. Do knowledge and experience have specific roles in triage decision-making? Acad Emerg Med 2007; 14 722–6.

[10]  Deloitte. NSW Department of Health triage benchmarking review. Sydney: Deloitte Touche Tohmatsu; 2008.

[11]  Higginson I, Whyatt J, Silvester K. Demand and capacity planning in the emergency department: how to do it. Emerg Med J 2011; 28 128–35.

[12]  Guo B, Harstall C. Health technology assessment report no. 38: strategies to reduce emergency department overcrowding. Edmonton, AB, Canada: Alberta Heritage Foundation for Medical Research; 2006.

[13]  Hoot NR, Aronsky D. Systematic review of emergency department crowding: causes, effects, and solutions. Ann Emerg Med 2008; 52 126–36.

[14]  Dwyer J, Jackson T. Literature review: integrated bed and patient management. Melbourne: School of Public Health, La Trobe University; 2001.

[15]  Dunn R. Reduced access block causes shorter emergency department waiting times: an historical control observational study. Emerg Med 2003; 15 232–8.

[16]  Bevan G, Hood C. Have targets improved performance in the English NHS? BMJ 2006; 332 419–22.

[17]  Yousif K, Bebbington J, Foley B. Impact on patients triage distribution utilizing the Australasian Triage Scale compared with its predecessor the National Triage Scale. Emerg Med Australas 2005; 17 429–33.

[18]  FitzGerald G, Jelinek GA, Scott D, Gerdtz MF. Emergency department triage revisited. Emerg Med J 2010; 27 86–92.