DEBATE
Open Access
Supporting health promotion practitioners to undertake evaluation for program development
Roanna Lobo1*†, Mark Petrich2† and Sharyn K Burns1†
*Correspondence: roanna.lobo@curtin.edu.au. †Equal contributors. Full list of author information is available at the end of the article.
Abstract
Background: The vital role of evaluation as integral to program planning and program development is well supported
in the literature, yet we find little evidence of this in health promotion practice. Evaluation is often a requirement for
organisations supported by public funds, and is duly undertaken; however, the quality, comprehensiveness and use of
evaluation findings are lacking. Practitioner peer-reviewed publications presenting evaluation work are also limited. There
are few published examples where evaluation is conducted as part of a comprehensive program planning process or
where evaluation findings are used for program development in order to improve health promotion practice.
Discussion: For even the smallest of programs, there is a diverse array of evaluation that is possible before, during and
after program implementation. Some types of evaluation are less prevalent than others. Data that are easy to collect or
that are required for compliance purposes are common. Data related to how and why programs work, which could be used to refine and improve programs, are less commonly collected. This finding is evident despite numerous resources
and frameworks for practitioners on how to conduct effective evaluation and increasing pressure from funders to provide
evidence of program effectiveness. We identify several organisational, evaluation capacity and knowledge translation
factors which contribute to the limited collection of some types of data. In addition, we offer strategies for improving
health promotion program evaluation and we identify collaboration of a range of stakeholders as a critical enabler for
improved program evaluation.
Summary: Evaluation of health promotion programs does occur and resources for how to conduct evaluation are readily
available to practitioners. For the purposes of program development, multi-level strategies involving multiple stakeholders
are required to address the organisational, capacity and translational factors that affect practitioners’ ability to undertake
adequate evaluation.
Keywords: Evaluation, Health promotion, Program development
Background
“Evaluation’s most important purpose is not to prove
but to improve” ([1], p.4).
The vital role of evaluation as integral to program planning and program development is well supported in the
literature [2-5], yet we find little evidence of this in health
promotion practice. Practitioners often cite a lack of time
by staff already under significant work pressures as a reason why they do not conduct more evaluation [5]. Some
practitioners also admit to undervaluing evaluation in the
context of other competing priorities, notably service
delivery. In our experience as evaluators, supported by the
literature [6], we have found few examples where evaluation
is part of a comprehensive program planning process or
where program evaluation provides sufficient evidence to
inform program development to improve health promotion
practice.
There is a cyclical and ongoing relationship between
program planning and evaluation. Different types of evaluation (formative, process, impact, outcome) are required
at different stages in planning and implementing a program. Evaluation activities are much broader than measuring before and after changes as a result of intervention.
They include stakeholder consultation, needs assessment,
prototype development and testing, measuring outputs,
monitoring implementation activities, and assessing how a
program works, its strengths and limitations [7]. Following
implementation, periods of reflection inform future program planning and refinement (see Figure 1, based on a
four-stage project management life cycle [8]).
This cyclical approach to planning and evaluation is not
always evident in practice; many organisations appear to
take a linear approach, often undertaking planning and
evaluation activities independently, and commensurate with
the funding available. Others see evaluation as a measure of
compliance or proof that is more useful to those requesting
or funding the program evaluation than to those implementing the program.
In this discussion we will argue that, for even the smallest of programs, there is a diverse array of evaluation that
is possible before, during and after program implementation. We suggest that some types of evaluation are less
prevalent than others. We will examine three types of factors contributing to limited program evaluation and
propose strategies to support practitioners to undertake
evaluation that improves health promotion practice.
Discussion
Evaluation of health promotion programs does occur. However, the evaluation undertaken seems to be determined, at
least in part, by data that are easy to collect, such as program
costs, client satisfaction surveys, number of participants,
number of education sessions, and costs per media target
audience ratings points (TARPs). These data may be aggregated and reported on; however, such cases seem to be fairly
rare and restricted to evaluations that are undertaken as research or where compliance is required; for example, where
program funding may not be approved without a clear
evaluation plan.
There is little evidence of evaluation being truly used for
program development despite:
• Extensive literature on the health promotion planning and evaluation process [7,9-19];
• Evaluation identified as a core competency of health promotion practice [20-23]; and
• Planning and evaluation units as central components of public health and health promotion degrees [24,25].
“Effective evaluation is not an ‘event’ that occurs at the
end of a project, but is an ongoing process which helps
decision makers better understand the project; how it
is impacting participants, partner agencies and the
community; and how it is being influenced/impacted
by both internal and external factors” ([26], p.3).
Our experience as evaluators indicates that practitioners
frequently regard evaluation as an add-on process when a
program is finished, rather than an ongoing process. Evaluation measures which are focused on outputs do not usually
reflect the totality of a program’s effects. As such, practitioners may not perceive the data to be useful and therefore
may undertake data collection solely to be compliant with
program reporting requirements, or to justify continued
funding.
Evaluation that identifies what works well and why,
where program improvements are needed and whether a
program is making a difference to a health issue will provide more valuable data for service providers. It is acknowledged that these data are often harder to collect
than measures of program activity. However, these types
of evaluation are needed to avoid missed opportunities for
service improvements [6]. Important evaluation questions
include: Are resources being used well? Are services equitable and accessible? Are services acceptable to culturally
and linguistically diverse populations? Could modifications to the program result in greater health gains?
Figure 1 Planning and evaluation cycle (adapted from [8]). The figure depicts a four-stage cycle (Initiate, Plan, Implement, Finish) with ongoing program review and refinement; evaluation activities across the stages include formative evaluation, collection of baseline data, needs assessment, organisational fit, strategic alignment, and process, impact and outcome evaluation.
The answers to these types of questions can inform
more effective and sustainable health promotion programs, reduce the potential for iatrogenic effects (preventable harm), and justify the expenditure of public funds. This accountability is critical to being an ethical and responsible service provider.
Why does the planning and evaluation cycle break
down for some organisations?
We can identify a number of factors that contribute
to limited program evaluation, many of which are generally well known to health promotion practitioners
[5,6,18,19,27,28]. We have grouped these constraints on
program evaluation broadly as organisational, capacity,
and translational factors (see Figure 2).
Organisational factors
Funding for preventive health programs is highly competitive [15], with implied pressure to pack in a great deal of
intervention, often to the detriment of the evaluation and
the health programs. Economic evaluations are also desired
by funding agencies without clear appreciation of the technical constraints and costs involved in undertaking these
evaluations [16,25]. Since health promotion programs often
do not attract significant funding or resources [23,29], the
evaluation budget can be limited. In addition, funding can
operate on short-term cycles, often up to 12 months (see, for example, projects funded by the Western Australian Health Promotion Foundation (Healthway) [30]).
These project-based funding models can damage the
sustainability of good preventive health programs for a
number of reasons, including that funding may end before
the program is able to demonstrate evidence that it works.
Instead of desirable longer-term emphasis on health change
and program development, evaluation choices tend to be
responsive to these short-term drivers. Consequently, the
planning process for programs may be fast-tracked with the
focus weighted heavily towards intervention, and the evaluation component can move towards one of two extremes –
either research-oriented or managerial evaluations.
Research-oriented evaluation projects, experimental in
nature, include attempts to create a controlled research
study, which may not be sustainable outside the funding
period. There is good evidence of sound research-oriented evaluations [31,32]. However, the gold standard
of the randomised controlled trial (RCT) requiring a
comparison non-intervention control group is not always appropriate for a health promotion program that is
delivering services to everyone in the community [33].
Many health promotion programs are also not well
suited to quasi-experimental evaluation of health outcomes, given their naturalistic setting and proposed program benefits often well into the future. The rigorous
study design of research-oriented projects is important
but may not be appropriate for many community and
national health promotion interventions.
Managerial evaluations are minimalist types of evaluation that are designed to meet auditing requirements.
We suspect that understanding of program evaluation
has become rather superficial, based more on recipe-style classification or naming conventions (formative,
process, impact, outcome evaluation) and less on the
underlying purpose.
These categories may meet funder requirements as a recognisable model of types of evaluation. However, the purpose of the evaluation and its value are often lost. The capacity to inform both the current program and future iterations is also reduced.
Figure 2 Factors contributing to limited program evaluation.
• Organisational factors: evaluation is seen by practitioners as a compliance process, not for program development; service providers focus on operational measures; funding agencies have unrealistic expectations for economic evaluations or measuring impact.
• Capacity factors: lack of evaluation knowledge and skills; practitioners unsure how to design and implement evaluation or how to use evaluation findings; inability to develop and apply an evaluation framework that is coherent, logical and feasible.
• Translational factors: lack of opportunities or difficulties converting evaluation knowledge into practice; political influence; difficulties engaging target groups in programs perceived to be short-term as a result of funding.
Furthermore, the usefulness of
the evaluation data collected may be compromised if the
program planning process has not considered evaluation
in advance, in particular, objectives that are relevant to the
project and that can be evaluated.
Capacity factors
Evaluation of health promotion programs is particularly
important to prevent unintended effects. Yet the reality
for agencies undertaking health promotion work is that
evaluation may not be well understood by staff, even those
with a health promotion or public health background. For
example, health promotion practitioners intending to collect evaluation data that can be shared, disseminated or
reported more widely may need to partner with a relevant
organisation in order to access a Human Research Ethics
Committee (HREC) to seek ethics approval. Evaluation
can also be seen as difficult [28,34], and a task that cannot
be done without specialist software, skills or assistance.
As a result, there may be inadequate consideration of
evaluation or evaluation is not planned at all. Data are
not always suitable to address desired evaluation questions, or are not collected well. Consider the use of surveys to obtain evaluation data. People creating surveys
often have no training in good survey design. The resulting survey data are not always useful or very meaningful
and these negative experiences may fuel service providers’ doubts about the value of evaluation surveys, and
indeed evaluation in general.
A number of circumstances can contribute further to
limited evaluation. For example, when resources are scarce
or there is instability in the workforce, or when a program
receives recurrent funding with no requirement for evidence of program effectiveness, the perceived value of
rigorous evaluation is low. Furthermore, in any funding
round, the program may no longer be supported, making
any evaluative effort seem wasteful. Conversely, if the
program was successful in previous funding rounds there is
often little motivation to change other than a gradual creep
in activity number and/or scope. Consequently, there
appear to be few incentives for quality evaluation used to
inform program improvements, to modify, expand or contract, or to change direction in programming.
Translational factors
Political influence also plays a role. Being seen to be ‘doing
something quickly’ to address a public health issue is a
common reaction by politicians, leading to hurriedly
planned programs with few opportunities for evaluation.
The difficulties of working with hard to reach and unengaged target groups may be ignored in the urgency to act
quickly. As a result, getting buy-in from community
groups may not be actively sought, or may be tokenistic at
best, with the potential for exacerbating the social exclusion experienced by marginalised groups [35].
There is also some reluctance by practitioners to evaluate
due to the potential risk of unfavourable findings. Yet it is important to share both positive and negative outcomes so that others can learn from them and avoid similar mistakes. Confidence to publish lessons learned firstly requires developing the skills to disseminate knowledge through peer-reviewed journals and other forums. Secondly, practitioners
require reassurance from funding agencies that funds will
not automatically be withdrawn without opportunity for
practitioners to adjust programs to improve outcomes.
Information about how to conduct evaluations for different types of programs is plentiful and readily available
to practitioners in the public domain (see for example
[7,9,10,16]). Strategies for building evaluation capacity
and creating sustainable evaluation practice are also well
documented [36]. Yet barriers to translating this knowledge
into health promotion practice clearly remain [34,37].
What is needed to support practitioners to undertake
improved evaluation?
We propose that multi-level strategies are needed to address the organisational, capacity and translational factors
that contribute to the currently limited program evaluation focused on health promotion program development
[6,12,14]. We also suggest that supporting health promotion practitioners to conduct evaluations that are more
meaningful for program development is a shared responsibility. We identify strategies and roles for an array of actors including health promotion practitioners, educators,
policymakers and funders, organisational leadership and
researchers. Many strategies also require collaboration between different roles. Examples of the strategies and
shared responsibilities needed to improve health promotion program evaluation are shown in Figure 3 and are
discussed below.
Strategies to address organisational factors
We have questioned here the expectations of funding
agencies in relation to evaluation. We concur with Smith
[6] and question the validity of the outcomes some organisations may require, or expect, from their own programs.
Assisting organisations to develop achievable and relevant
goals and objectives, and processes for monitoring these,
should be a focus of capacity building initiatives.
Organisational leadership needs to place a high value on
evaluation as a necessary tool for continuous program development and improvement. There should be greater
focus on quantifying the extent of impact needed. Practitioners need to feel safe to ask: could we be doing better? Asking this question in the context of raising the standards and quality of programs to benefit the target groups [6] is recommended, rather than within a paradigm of individual or group performance management.
Figure 3 Improving health promotion program evaluation. Key: Researchers (R), Organisational leadership (OL), Health promotion practitioners (HP), Funders (F), Policymakers (P), Educators (E).
• Addressing organisational factors: support to develop and monitor achievable and relevant goals and objectives (R, OL, F); promote and support evaluation efforts (OL); support to publish and disseminate evaluation findings (R, E); identify features of organisations which use evaluation for program development (R).
• Increasing evaluation capacity: access to case studies which use both traditional and creative evaluation approaches (R); workforce development through training, skills building and mentoring (OL, E, R); evaluability assessment of programs (HP).
• Facilitating knowledge translation: flexible evaluation designs (F, P); participatory action research methods (R, HP, F); funding contracts which include an evaluation budget (F, P); all programs subject to monitoring and evaluation (P).
Establishing partnerships with researchers can add the
necessary rigour and credibility to practitioners’ evaluation efforts and can facilitate dissemination of results in
peer-reviewed journal articles and through conferences.
Sharing both the processes and results of program evaluations in this way is especially important for the wider
health practitioner community. Furthermore, identifying
organisations that use evaluation well for program development may assist in understanding the features of organisations and the strategies and practices needed to
overcome common barriers to evaluation.
Strategies to increase evaluation capacity
In some organisations, there is limited awareness of
what constitutes evaluation beyond a survey, or collecting operational data. We would argue that with some
modification, many existing program activities could be
used for evaluation purposes to ensure systematic and
rigorous collection of data. Examples include recording
journal entries of program observations, and audio or
video recording of data to better understand program
processes and participant involvement.
Specialist evaluation skills are not always required. Practitioners may wish to consider appreciative inquiry methods
[38] which focus on the strengths of a program and what is
working well rather than program deficits and problems.
Examples include the most significant change technique, a
highly participatory story-based method for evaluating
complex interventions [39], and the success case method for
evaluating investments in workforce training [40]. These
methods use storytelling and narratives and provide powerful participatory evaluation methods for integrating evaluation and program development. Arts-based qualitative
inquiry methods such as video diaries, graffiti/murals, theatre, photography, dance and music are increasingly being
explored for their use in health promotion given the relationship between arts engagement and health outcomes
[41]. Boydell and colleagues provide a useful scoping review
of arts-based health research [42].
Such arts-based evaluation strategies may be particularly
suited to programs that already include creative components. They may also be more culturally acceptable when
used as community engagement tools for groups where
English is not the native language or where literacy may
be low. In our experience, funding agencies are increasingly open to the validity of these data and their potential
for wider reach, particularly in vulnerable populations
[43,44]. The outputs of arts-based methods (for example,
photography exhibitions, theatre performances) are also
powerful channels for disseminating results and have the
potential to influence policy if accepted as rigorous forms
of evidence [42].
We encourage practitioners to begin a dialogue with funders to identify relevant methods of evaluation and types of
evidence that reflect what their programs are actually doing
and that provide meaningful data for both reporting and
program development purposes. Other authors have also
recognised the paucity of useful evaluations and have developed a framework for negotiating meaningful evaluation in
non-profit organisations [45].
Workforce development strategies, including mentoring, training, and skills building programs, can assist in
capacity building. There are several examples of centrally
coordinated capacity building projects in Australia which
aim to improve the quality of program planning and
evaluation in different sectors through partnerships and
collaborations between researchers, practitioners, funders
and policymakers (see for example SiREN, the Sexual
Health and Blood-borne Virus Applied Research and
Evaluation Network [46,47]; the Youth Educating Peers
project [48]; and the REACH partnership: Reinvigorating
Evidence for Action and Capacity in Community HIV
programs [49]). These initiatives seek to provide health
promotion program planning and evaluation education,
skills and resources, and assist practitioners to apply new
knowledge and skills. The Western Australian Centre for
Health Promotion Research (WACHPR) is also engaged
in several university-industry partnership models across a
range of sectors including injury prevention, infectious
diseases, and Aboriginal maternal and child health.
These models have established formal opportunities for
health promotion practitioners to work alongside health
promotion researchers, for example, co-locating researchers
and practitioners to work together over an extended period
of time. Immersion and sharing knowledge in this way seeks
to enhance evaluation capacity and evidence-based practice,
facilitate practitioner contributions to the scholarly literature, and improve the relevance of research for practice.
There has been some limited evaluation of these capacity
building initiatives and it is now timely to collect further evidence of their potential value to justify continued investment
in these types of workforce development strategies.
Also important is evaluability assessment, including establishing program readiness for evaluation [7] and the feasibility and value of any evaluation [22]. Though rare, agencies may in some cases over-evaluate without putting the results to good use. Organisations need to be able to assess when
to evaluate, consider why they are evaluating, and be mindful whether evaluation is needed at all. Clear evaluation
questions should always guide data collection. The use of
existing data collection tools which have been proven to be
reliable is advantageous for comparative purposes [7].
It is not always possible to collect baseline data. The absence of baseline data against which to compare results need not be a barrier to evaluation, as comparisons against local and national data may be possible. Post-test-only data, if constructed well, can
also provide some indication of effectiveness, for example,
collecting data on ratings of improved self-efficacy as a result of a project.
Practitioners should be encouraged that evaluation is a dynamic and evolving process. Evaluation strategies may need
to be adapted several times before valuable data are collected. This process of reflective practice may be formalised
in the approach of participatory action research [51] which
features cycles of planning, implementation and reflection
to develop solutions to practical problems. Through active
participation and collaboration with researchers, practitioners build evaluation skills and increased confidence that
evaluation is within their capability and worth doing to improve program outcomes.
Funding rules can help reinforce the importance of adequate evaluation as an essential component of program
planning. Some grants require 10-15% of the program
budget to be spent on evaluation, including translation of
evaluation findings. Healthway, for example, encourages
grant applicants to seek evaluation planning support from
university-based evaluation consultants prior to submitting a project funding application.
The Ottawa Charter for Health Promotion guides health
promotion practice to consider multi-level strategies at individual, community and system levels to address complex
issues and achieve sustainable change [52]. The role of
political influence on the strategies that are implemented
and the opportunities for evaluation cannot, however, be
ignored. A balance must be achieved between responding
quickly to a health issue attracting public concern and
undertaking sufficient planning to ensure the response is
appropriate and sustainable. In a consensus study with the
aim of defining sustainable community-based health promotion practice, practitioners highlighted four key features: effective relationships and partnerships; building
community capacity; evidence-based decision-making and
practice; and a supportive context for practice [29]. Reactive responses may be justified if implemented as part of a
more comprehensive, evidence-based and theory-based
health promotion strategy. A significant evaluation component to assess the responses can enable action to extend, modify or withdraw the strategies as appropriate.
Strategies to facilitate knowledge translation
Evaluation has to be timely and meaningful, not simply
confirming what practitioners know; otherwise there is
limited perceived value in conducting evaluation. Practical evaluation methods that can be integrated into daily
activities work well and may be more readily sustained by practitioners [50].
Summary
In this article, we have argued that along the spectrum of
evaluation, evaluation activities focused on output measures
are more frequently undertaken than evaluation activities
that are used to inform program development. We have
outlined organisational, capacity, and translational factors
that currently limit the extent of program evaluation undertaken by health promotion practitioners.
We propose that multiple strategies are needed to address
the evaluation challenges faced by health promotion practitioners and that there is a shared responsibility of a range of
stakeholders for building evaluation capacity in the health
promotion workforce (Figure 3). We conclude that adequate
evaluation resources are available to practitioners; what is
lacking is support to apply this evaluation knowledge to
practice.
The types of support needed can be identified at an individual, organisational, and policy level:
• Individual practitioners require support to develop evaluation knowledge and skills through training, mentoring and workforce capacity development initiatives. They also need organisational leadership that endorses evaluation activities as valuable (and essential) for program development and not conducted simply to meet operational auditing requirements.
• Organisations need to create a learning environment that encourages and rewards reflective practice and evaluation for continuous program improvement. In addition, a significant evaluation component is required to ensure that reactive responses to major public health issues can be monitored, evaluated and modified as required.
• At a policy level, all programs should be monitored, evaluated and refined or discontinued if they do not contribute to intended outcomes. Seeking evidence that is appropriate to program type and stage of development is important; funders need to give consideration to a range of evaluation data. Adequate budget also needs to be available for and invested in program evaluation.
Evaluation of health promotion programs does occur
and resources for how to conduct evaluations are readily available to practitioners. Multi-level strategies involving a range of stakeholders are also required to
address the organisational, capacity and translational
factors that influence practitioners’ ability to undertake
adequate evaluation. Establishing strong partnerships between key stakeholders who build evaluation capacity, fund or conduct evaluation activities will be critical to supporting health promotion practitioners to undertake evaluation for program development.
Abbreviations
HREC: Human Research Ethics Committee; RCT: Randomised Controlled Trial;
REACH: Reinvigorating Evidence for Action and Capacity in Community HIV
programs; SiREN: Sexual Health and Blood-borne Virus Applied Research and
Evaluation Network; TARPs: Target audience ratings points; WACHPR: Western
Australian Centre for Health Promotion Research.
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
RL drafted the manuscript based on discussions with MP and SB regarding
key messages, relevant examples, overall structure, clarification of key
audience and appropriate language, figures and references. All authors read
and approved the final manuscript.
Acknowledgements
The authors extend their thanks to Gemma Crawford and the reviewers for
their suggestions and feedback on the paper.
Author details
1WA Centre for Health Promotion Research, School of Public Health, Curtin University, GPO Box U1987, Bentley WA 6845, Western Australia. 2Department of Health Policy and Management, School of Public Health, Curtin University, GPO Box U1987, Bentley WA 6845, Western Australia.
Received: 13 June 2014 Accepted: 16 December 2014
Published: 23 December 2014
References
1. Stufflebeam D: The CIPP Model for Evaluation. In The International
Handbook of Educational Evaluation. Edited by Kellaghan T, Stufflebeam DL.
Dordrecht: Kluwer; 2003.
2. Dunt D: Levels of project evaluation and evaluation study designs. In
Population Health, Communities and Health Promotion. 1st edition. Edited by
Jirowong S, Liamputtong P. Sydney: Oxford University Press; 2009.
3. Green L, Kreuter M: Health Promotion Planning: An Educational and
Ecological Approach. 4th edition. New York: McGraw Hill; 2005.
4. Howat P, Brown G, Burns S, McManus A: Project planning using the PRECEDE-PROCEED model. In Population Health, Communities and Health Promotion. 1st edition. Edited by Jirowong S, Liamputtong P. Sydney: Oxford University Press; 2009.
5. O’Connor-Fleming M-L, Parker E, Higgins H, Gould T: A framework for evaluating
health promotion programs. Health Promot J Aust 2006, 17(1):61–66.
6. Smith B: Evaluation of health promotion programs: are we making
progress? Health Promot J Aust 2011, 22(3):165.
7. Hawe P, Degeling D, Hall J: Evaluating Health Promotion: A Health Workers
Guide. MacLennan and Petty: Sydney; 1990.
8. Project Management Institute (PMI) Inc: A Guide to the Project Management
Body of Knowledge (PMBOK Guide). 5th edition. Newtown Square, PA: Project
Management Institute Inc.; 2013.
9. Bauman A, Nutbeam D: Evaluation in a nutshell: a practical guide to the
evaluation of health promotion programs. N.S.W.: McGraw-Hill; 2013.
10. Egger G, Spark R, Donovan R: Health promotion strategies and methods. 2nd
edition. Sydney: McGraw-Hill; 2005.
11. Glasgow R, Vogt T, Boles S: Evaluating the public health impact of health
promotion interventions: the RE-AIM framework. Am J Public Health 1999,
89(9):1322–1327.
12. Nutbeam D: The challenge to provide ‘evidence’ in health promotion.
Health Promot Int 1999, 14(2):99–101.
13. Valente T: Evaluating health promotion programs. USA: Oxford University
Press; 2002.
14. Fagen M, Redman S, Stacks J, Barrett V, Thullen B, Altenor S, Neiger B:
Developmental evaluation: building innovations in complex
environments. Health Promot Practice 2011, 12(5):645–650.
15. Harris A, Mortimer D: Funding illness prevention and health promotion in
Australia: a way forward. Austr NZ Health Policy 2009, 6(1):25.
16. Hildebrand J, Lobo R, Hallett J, Brown G, Maycock B: My-Peer Toolkit [1.0]:
developing an online resource for planning and evaluating peer-based
youth programs. Youth Stud Aus 2012, 31(2):53–61.
17. Jolley G, Lawless A, Hurley C: Framework and tools for planning and
evaluating community participation, collaborative partnerships and
equity in health promotion. Health Promot J Aust 2008, 19(2):152–157.
18. Klinner C, Carter S, Rychetnik L, Li V, Daley M, Zask A, Lloyd B: Integrating
relationship- and research-based approaches in Australian health
promotion practice. Health Promot Int 2014. doi:10.1093/heapro/dau026.
19. Pettman T, Armstrong R, Doyle J, Burford B, Anderson L, Hillgrove T, Honey N, Waters E: Strengthening evaluation to capture the breadth of public health practice: ideal vs. real. J Public Health 2012, 34(1):151–155.
20. Barry M, Allegrante J, Lamarre M, Auld M, Taub A: The Galway Consensus
Conference: international collaboration on the development of core
competencies for health promotion and health education. Global Health
Promot 2009, 16(2):5–11.
21. Dempsey C, Battel-Kirk B, Barry M: The CompHP Core Competencies Framework
for Health Promotion handbook. Galway: National University of Ireland; 2011.
22. Thurston W, Potvin L: Evaluability assessment: a tool for incorporating
evaluation in social change programmes. Evaluation 2003, 9(4):453–469.
23. Wise M, Signal L: Health promotion development in Australia and New
Zealand. Health Promot Int 2000, 15(3):237–248.
24. Maycock B, Jackson R, Howat P, Burns S, Collins J: Orienting health
promotion course structure to maximise competency development. In
13th Annual Teaching and Learning Forum. Perth: Murdoch University; 2004.
25. Zechmeister I, Kilian R, McDaid D: Is it worth investing in mental health
promotion and prevention of mental illness? A systematic review of the
evidence from economic evaluations. BMC Public Health 2008, 8(1):20.
26. W.K. Kellogg Foundation: The W. K. Kellogg Foundation Evaluation Handbook.
Michigan: W.K. Kellogg Foundation; 2010.
27. Hanusaik N, O’Loughlin J, Kishchuk N, Paradis G, Cameron R: Organizational
capacity for chronic disease prevention: a survey of Canadian public
health organizations. Eur J Public Health 2010, 20(2):195–201.
28. Jolley G: Evaluating complex community-based health promotion:
addressing the challenges. Eval Program Plann 2014, 45:71–81.
29. Harris N, Sandor M: Defining sustainable practice in community-based
health promotion: A Delphi study of practitioner perspectives.
Health Promot J Aust 2013, 24:53–60.
30. Case studies for health promotion projects. Published by Healthway 2015,
[https://www.healthway.wa.gov.au/?s=case+studies]
31. Cross D, Waters S, Pearce N, Shaw T, Hall M, Erceg E, Burns S, Roberts C,
Hamilton G: The Friendly Schools Friendly Families programme: three-year bullying behaviour outcomes in primary school children. Int J Educ Res 2012, 53:394–406.
32. Maycock B, Binns C, Dhaliwal S, Tohotoa J, Hauck Y, Burns S, Howat P:
Education and support for fathers improves breastfeeding rates: a
randomized controlled trial. J Hum Lact 2013, 29(4):484–490.
33. Kemm J: The limitations of ‘evidence-based’ public health. J Eval Clin Pract
2006, 12(3):319–324.
34. Armstrong R, Waters E, Crockett B, Keleher H: The nature of evidence
resources and knowledge translation for health promotion practitioners.
Health Promot Int 2007, 22(3):254–260.
35. Green J, South J: Evaluation with hard-to-reach groups. In Evaluation: Key
concepts for public health practice. 1st edition. Berkshire, England: Open
University Press; 2006:113–128.
36. Preskill H, Boyle S: A multidisciplinary model of evaluation capacity
building. Am J Eval 2008, 29(4):443–459.
37. Lobo R, McManus A, Brown G, Hildebrand J, Maycock B: Evaluating peer-based youth programs: barriers and enablers. Eval J Australas 2010, 10(1):36–43.
38. Preskill H, Catsambas T: Reframing evaluation through appreciative inquiry.
Thousand Oaks, CA: Sage Publications; 2006.
39. Dart J, Davies R: A dialogical, story-based evaluation tool: the most significant
change technique. Am J Eval 2003, 24(2):137–155.
40. Brinkerhoff R: The success case method: a strategic evaluation approach
to increasing the value and effect of training. Adv Dev Hum Resour 2005,
7:86–101.
41. Davies C, Knuiman M, Wright P, Rosenberg M: The art of being healthy: a
qualitative study to develop a thematic framework for understanding
the relationship between health and the arts. BMJ Open 2014,
4:e004790. doi:10.1136/bmjopen-2014-004790.
42. Boydell K, Gladstone B, Volpe T, Alleman B, Stasiulis E: The production and dissemination of knowledge: a scoping review of arts-based health research. Forum: Qual Soc Res 2012, 13(1):Art. 32. http://nbn-resolving.de/urn:nbn:de:0114-fqs1201327.
43. Lobo R, Brown G, Maycock B, Burns S: Peer-based Outreach for Same Sex
Attracted Youth (POSSAY) project final report. Perth: Western Australian
Centre for Health Promotion Research, Curtin University; 2006.
44. Roberts M, Lobo R, Sorenson A: Evaluating the Sharing Stories youth drama
program. In Australasian Sexual Health Conference, 23-25 October 2013.
Darwin Australia: Australasian Sexual Health Alliance (ASHA); 2013.
45. Liket K, Rey-Garcia M, Maas K: Why aren’t evaluations working and what
to do about it: a framework for negotiating meaningful evaluation in
nonprofits. Am J Eval 2014, 35:171–188.
46. Lobo R, Doherty M, Crawford G, Hallett J, Comfort J, Tilley P: SiREN – building
health promotion capacity in Western Australian sexual health services. In
International Union for Health Promotion and Education, 21st World Conference
on Health Promotion, 25–29 August 2013. Pattaya, Thailand: International Union
for Health Promotion and Education (IUHPE); 2013.
47. SiREN – Sexual Health and Blood-borne Virus Applied Research and
Evaluation Network. Published by Curtin University 2015 [www.siren.org.au]
48. Walker R, Lobo R: Asking, listening and changing direction: guiding youth
sector capacity building for youth sexual health promotion. In Australasian
Sexual Health Conference, 15–17 October 2012. Melbourne: International Union
against Sexually Transmitted Infections (IUSTI); 2012.
49. Brown G, Johnston K: REACH partnership – final report. Melbourne: Australian
Research Centre in Sex, Health and Society, La Trobe University; 2013.
50. Lobo R: An evaluation framework for peer-based youth programs. PhD
thesis. Perth: Curtin University; 2012.
51. Kemmis S, McTaggart R: Participatory action research. In Handbook of
Qualitative Research. 2nd edition. Edited by Denzin N, Lincoln YS. Thousand
Oaks, CA: Sage; 2000:567–606.
52. The Ottawa Charter for Health Promotion: First International Conference
on Health Promotion, Ottawa, 21 November 1986. Published by the
World Health Organisation 2015, accessed 3 January 2015 [http://www.who.int/healthpromotion/conferences/previous/ottawa/en/]
doi:10.1186/1471-2458-14-1315
Cite this article as: Lobo et al.: Supporting health promotion practitioners to undertake evaluation for program development. BMC Public Health 2014, 14:1315.
