Health Promotion Journal of Australia
Journal of the Australian Health Promotion Association
EDITORIAL

Advancing evaluation practice in health promotion

Ben J. Smith A,E, Chris Rissel B, Trevor Shilton C and Adrian Bauman D

A School of Public Health and Preventive Medicine, Monash University, Level 6, 99 Commercial Road, Melbourne, Vic. 3004, Australia.

B NSW Office of Preventive Health, Don Everett Building, Level 1, Liverpool Hospital, Liverpool, NSW 2170, Australia.

C National Heart Foundation Western Australia, 334 Rokeby Road, Subiaco, WA 6008, Australia.

D Prevention Research Collaboration, School of Public Health, University of Sydney, Sydney, NSW 2006, Australia.

E Corresponding author. Email: ben.smith@monash.edu

Health Promotion Journal of Australia 27(3) 184-186 https://doi.org/10.1071/HEv27n3_ED2
Published: 6 December 2016

Program evaluation has long been recognised as a core competency for health promotion practitioners,1,2 reflecting the vital contribution that evaluation can make to the design, impact and sustainability of our policies and strategies. Over the past three decades, several popular textbooks3,4 and a range of guides and frameworks5–7 have been produced to facilitate appropriate and high-quality evaluation in this field. Given this history, which spans much of the contemporary health promotion movement, it is of interest to see the renewed and critical attention to evaluation practice, methods, use and capacity by health promotion agencies and practitioners. The questions of what is generating this interest, what has been revealed by recent reviews and analyses, and what further learning and resources may strengthen evaluation, warrant consideration in this special issue of the Health Promotion Journal of Australia on ‘Advancing evaluation practice’.

Some commentators have argued that there are few examples of comprehensive evaluation in health promotion in Australia and elsewhere, or of evaluation evidence being used to guide program development.8 This represents a lost opportunity for learning and program improvement. Others have argued that the cuts to funding of health promotion agencies and programs in Australia in recent years demonstrate the need for higher quality evidence of the public health contribution of health promotion policies and strategies.9 A further perspective has highlighted the complex policy and practice challenges that face health promotion agencies, and the need for closer attention to how evaluation capacity can be developed to provide the evidence needed to guide this work.10

Recent reviews of published and unpublished evaluation reports from Australian health promotion projects have shed light on the scope and methods that characterise evaluation practice. Hulme-Chambers et al.9 identified 157 articles in peer-reviewed journals from 1992–2011 that reported on health promotion evaluations. Impact evaluation was most often presented, with about half of the evaluations using one method only (usually surveys). Notably, there was little change in the purposes and designs of published evaluations over this 20-year period. Francis and Smith11 audited unpublished evaluation reports provided by 24 health promotion organisations in Melbourne between 2008 and 2011. They found that all reported process evaluation, most included impact evaluation, and formative evaluation was rare. There was limited detail given in the reports about impact evaluation methods, but where described, this most often entailed pre- and post-surveys in small samples, participant interviews or focus groups.

Studies that have explored factors affecting the quality and extent of evaluation by health promotion agencies have confirmed that time, resources, staff skills, manager priorities and the presence of an evaluation culture are commonly reported issues.12,13 More recently, interviews with senior policymakers and evaluators in Australia have highlighted the role played by political imperatives in determining whether evaluation is feasible, given the pressure to demonstrate action within short timeframes, or even desirable, because of the risk that it may reveal that an initiative has had limited success.10 Another study involving interviews with practitioners found that the narrow reporting requirements set by funding agencies, and changes to reporting priorities, were deterrents to comprehensive evaluation.11 This study also found that difficulties in defining and measuring impacts affected the ability of agencies to evaluate projects to the extent that they would like.

Against this background, this special issue has been prepared to disseminate recent insights that can assist the planning, design and implementation of evaluations of health promotion policies and strategies. The first section of the issue presents three papers that explore evaluation designs and frameworks, with two of these focusing on strategies delivered via online and mobile technologies. In their systematic review of the methods used to evaluate health promotion via social networking sites, Lim et al.14 report that evidence and practice insights can be gained through rigorous testing of these in real-life settings, using quasi-experimental or before–after designs together with comprehensive engagement metrics. White et al.15 describe four models that can be used to evaluate mHealth interventions (using mobile technologies), and present a case study of the plan developed to evaluate the Milk Man app, an innovative approach to engaging fathers in the promotion of breastfeeding. This illustrates how app design principles, technological performance principles, and behavioural and health outcomes can all be examined within the scope of an evaluation. In the third paper, Wolfenden et al.16 argue that the speed of evidence generation to inform policy and practice can be improved through greater understanding and use of effectiveness–implementation hybrid designs. Three types of hybrid design are presented, with examples of how they may be used in health promotion evaluation.

Addressing the challenges of conceptualising and reporting on the implementation and impacts of health promotion strategies, four papers in this issue present novel methods for data collection. In the first of these, Kostadinov et al.17 report the use of a perceived community leadership readiness tool to understand implementation quality and context across 20 communities in the South Australian Obesity Prevention and Lifestyle program. Reilly et al.18 examine the properties of four measures for assessing the characteristics of school canteen menus, and show that the quick menu audit tool is a valid instrument for evaluating school canteen policy compliance at a population level. Two papers describe how mobile technologies can be used in data collection: Heesch and Langdon19 show the potential and limitations of GPS data for measuring the effects of infrastructure developments on cycling; and Engelen et al.20 offer a Brief Report on ecological momentary assessment via smartphones as a method for assessing impacts in worksite health promotion strategies.

Complementing the methodological papers in this issue are four reports of evaluations of complex health promotion initiatives, involving multiple settings and partners, and action to bring about change across several levels (i.e. individuals, organisations, environments and/or policies). Kearney et al.21 apply systems theory to the evaluation of a whole school approach to violence prevention that examined student attitudes and skills, classroom practices and curricula, and school policies and culture. Genat et al.22 report the findings of the statewide Aboriginal-led Victorian Aboriginal Nutrition and Physical Activity Strategy, which set out to build the capacity of Aboriginal and non-Aboriginal professional, organisational and community participants to take action on these issues. Two further papers illustrate how evaluations provided insights to improve the implementation and impact of worksite health promotion programs: Khanal et al.23 report on a developmental evaluation of the Get Healthy at Work program in over 3000 worksites in New South Wales, while Grunseit et al.24 describe how a meso-level evaluation of the Health Workers Initiative in seven Australian jurisdictions revealed critical factors affecting the translation of a national initiative into state-specific programs.

The imperative for high-quality evaluation in health promotion, and the commonly reported barriers to achieving this, have stimulated consideration of systematic approaches to building evaluation capacity. The NSW Health Department has invested significant resources in this area, and the final Brief Report in this issue describes the NSW Health approach to building research and evaluation capacity in population health.25

The range and quality of contributions to this special issue demonstrate the continued learning taking place across the health promotion field, learning that can strengthen evaluation design and methods, and the capacity of practitioners and agencies to undertake evaluation in a systematic way. We hope that you find new insights within these pages that are of value to your area of work.



References

[1]  Shilton T, Howat P, James R, Burke L, Hutchins C, Woodman R (2008) Health promotion competencies for Australia 2001–5: trends and their implications. Promot Educ 15, 21–6.

[2]  Barry MM, Battel-Kirk B, Dempsey C (2012) The CompHP core competencies framework for health promotion in Europe. Health Educ Behav 39, 648–62.

[3]  Hawe P, Degeling D, Hall J, Brierley A. Evaluating health promotion: a health worker’s guide. Sydney: MacLennan & Petty; 1990.

[4]  Bauman A, Nutbeam D. Evaluation in a nutshell: a practical guide to the evaluation of health promotion programs. Sydney: McGraw Hill; 2013.

[5]  Glasgow RE, Vogt T, Boles S (1999) Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health 89, 1322–7.

[6]  Prevention and Population Health Branch, Victorian Government Department of Health. Evaluation framework for health promotion and disease prevention programs. Available from: https://www2.health.vic.gov.au/getfile/?sc_itemid=%7b33944722-B41A-4FE3-9EA4-BD360DFDD7FA%7d&title=Evaluation%20framework%20for%20health%20promotion%20and%20disease%20prevention%20programs [Verified 10 November 2016].

[7]  NSW Government Premier and Cabinet. NSW Government Program Evaluation Guidelines, January 2016. 2016. Available from: http://www.dpc.nsw.gov.au/__data/assets/pdf_file/0009/155844/NSW_Government_Program_Evaluation_Guidelines.pdf [Verified 10 November 2016].

[8]  Lobo R, Petrich M, Burns SK (2014) Supporting health promotion practitioners to undertake evaluation for program development. BMC Public Health 14, 1315.

[9]  Hulme-Chambers A, Murphy K, Kolbe A (2015) Designs and methods used in published Australian health promotion evaluations 1992–2011. Aust N Z J Public Health 39, 222–6.

[10]  Huckel Schneider C, Milat AJ, Moore G (2016) Barriers and facilitators to evaluation of health policies and programs: policymaker and researcher perspectives. Eval Program Plann 58, 208–15.

[11]  Francis LJ, Smith BJ (2015) Toward best practice in evaluation: a study of Australian health promotion agencies. Health Promot Pract 16, 715–23.

[12]  Jolley GM, Lawless AP, Baum FE, Hurley CJ, Fry D (2007) Building an evidence base for community health: a review of the quality of program evaluations. Aust Health Rev 31, 603–10.

[13]  Brug J, Tak NI, Te Velde SJ (2011) Evaluation of nationwide health promotion campaigns in The Netherlands: an exploration of practices, wishes and opportunities. Health Promot Int 26, 244–54.

[14]  Lim MSC, Wright CJC, Carrotte ER, Pedrana AE (2016) Reach, engagement, and effectiveness: a systematic review of evaluation methodologies used in health promotion via social networking sites. Health Promot J Austr 27, 187–97.

[15]  White BK, Burns SK, Giglia RC, Scott JA (2016) Designing evaluation plans for health promotion mHealth interventions: a case study of the Milk Man mobile app. Health Promot J Austr 27, 198–203.

[16]  Wolfenden L, Williams CM, Wiggers J, Nathan N, Yoong SL (2016) Improving the translation of health promotion interventions using effectiveness–implementation hybrid designs in program evaluations. Health Promot J Austr 27, 204–7.

[17]  Kostadinov I, Daniel M, Jones M, Cargo M (2016) Assessing change in perceived community leadership readiness in the Obesity Prevention and Lifestyle program. Health Promot J Austr 27, 208–14.

[18]  Reilly K, Nathan N, Wolfenden L, Wiggers J, Sutherland R, Wyse R, Yoong SL (2016) Validity of four measures in assessing school canteen menu compliance with state-based health canteen policy. Health Promot J Austr 27, 215–21.

[19]  Heesch KC, Langdon M (2016) The usefulness of GPS bicycle tracking data for evaluating the impact of infrastructure change on cycling behaviour. Health Promot J Austr 27, 222–9.

[20]  Engelen L, Chau JY, Burks-Young S, Bauman A (2016) Application of ecological momentary assessment in workplace health evaluation. Health Promot J Austr 27, 259–63.

[21]  Kearney S, Leung L, Joyce A, Ollis D, Green C (2016) Applying systems theory to the evaluation of a whole school approach to violence prevention. Health Promot J Austr 27, 230–5.

[22]  Genat B, Browne J, Thorpe S, MacDonald C (2016) Sectoral system capacity development in health promotion: evaluation of an Aboriginal nutrition program. Health Promot J Austr 27, 236–42.

[23]  Khanal S, Lloyd B, Rissel C, Portors C, Grunseit A, Indig D, Ibrahim I, McElduff S (2016) Evaluation of the implementation of Get Healthy at Work, a workplace health promotion program in New South Wales, Australia. Health Promot J Austr 27, 243–50.

[24]  Grunseit AC, Rowbotham S, Pescud M, Indig D, Wutzke S (2016) Beyond fun runs and fruit bowls: an evaluation of the meso-level processes that shaped the Australian Health Workers Initiative. Health Promot J Austr 27, 251–8.

[25]  Edwards B, Stickney B, Milat A, Campbell D, Thackway S (2016) Building research and evaluation capacity in population health: the NSW Health approach. Health Promot J Austr 27, 264–7.