
Reflections on a Pragmatic Evaluation course: Another biscuit Mr Spock?

posted Jun 4, 2015, 10:46 AM by Ben Jane   [ updated Jun 4, 2015, 2:06 PM ]

June 2015 saw the ISBNPA international conference converge on Edinburgh, but this big event was preceded by the first of what is intended to be many courses in Pragmatic Evaluation. I was fortunate enough to attend this course, and this article contains my thoughts and reflections on the experience.

An overview of the course

Victor Matsudo has previously pointed out that, despite a growing body of evidence linking physical activity to health, levels of physical activity across the population have paradoxically decreased. For this reason, there have been various calls to action (Kohl et al., 2012), and organisations such as ISBNPA, ISPAH and HEPA are all working hard to improve advocacy and evaluation capacity, so that good practice in health promotion can be shared and population-level changes can be made. This two-and-a-half-day course was supported by these organisations and was put together with a great deal of thought and consideration by the triumvirate of Karen Milton, Paul Kelly and Justin Richards. While these three are more than capable of leading such a course, they were joined by Charlie Foster, Nick Cavill and one of the world's pre-eminent physical activity advocates, Adrian Bauman. It was this team that delivered most of the content, but we were also joined by other leaders in physical activity research: Nanette Mutrie MBE, Sonja Kahlmeier, David Ogilvie, Fiona Bull MBE and Jim Sallis, who were all generous enough to give us many insights into their work.

The two and a half days were structured around a range of topics, but in reflecting on the course I have resisted the temptation to provide a running commentary of sessions, in favour of picking out three main themes that seemed to form the backbone of all the contributors' work.

Collaborative planning (dunkability or crunchiness?)

It was clear from all perspectives that evaluation means different things to different people, a situation that was illustrated when we were given the intentionally simple brief of evaluating a packet of biscuits. In further discussions on various methods of evaluation, the traditional academic tensions between qualitative and quantitative research, and between various philosophical standpoints, were for once all on the same side, as traditional science was pitted against the realities and demands of service delivery and real-world projects.

An appeal was made that, rather than persisting with the false dichotomy of science versus practice, those involved should spend more time understanding each other's needs and then work together to find common themes that can be taken forward. Trying to understand multiple perspectives might allow those involved to produce realistic but meaningful evaluation objectives, and then, using a combination of knowledge, experience and innovation, to design well-constructed, pragmatically evaluated programmes (e.g. the DAVE trial; Solomon et al., 2012). While good principles of scientific evaluation are widely seen as the gold standard, there are examples of how control groups are not always the panacea we think they are (Waters et al., 2012), and because a large body of academic research addresses the initial problem itself, it fails to fully explore the methods by which we might solve it.

Think Logically (Live long and prosper)

The importance of a good logic model was stressed throughout. A logic model is a summary description of program inputs, the context of the program and the activities that make up the program (Bauman & Nutbeam, 2013). It contains a range of actions and information connected by a series of directional arrows, each built on the assumption that one action will lead to the next, and it is these arrows and assumptions that need to be tested in order to fully understand any programme. The CHAMPS III project (Stewart et al., 2006) was cited as containing a good example of a logic model, and there is more information on producing logic models in the Standard Evaluation Framework for Physical Activity (Cavill & Rutter, 2012). Such models can highlight the need to analyse a range of aspects within a programme; in this respect, Reger-Nash et al. (2006) was cited as a good example of a study that examined such a range of elements within the programme implementation process.

Producing research that changes practice

Adrian Bauman suggested that we are failing to answer the questions that policy makers want us to answer, a point reinforced by David Ogilvie, who recommended that “a good evaluation should be of use to someone that is making a policy”. Both were seeking solutions to a problem highlighted by Brownson et al. (2006), who suggested that researchers and policy makers often live in parallel universes.

On a similar note, Nanette Mutrie made a three-part plea: that practitioners and researchers work together to support a culture of evaluation; that more evidence be published so that it might be included in academic reviews; and that we continue to be advocates for physical activity, lobbying politicians and other decision makers for meaningful change.

Bauman and Nutbeam (2013) have identified this area of work as replication, dissemination and institutionalisation, and the work of Milat, Bauman and colleagues (see the references below) examines it in more depth.

Final Thoughts

On more than one occasion, Prof. Bauman used the phrase “epistemology not epidemiology”, which I felt captured the importance of making process evaluation central to our analysis of programmes. Rather than being the afterthought it can often seem, it is where the answers to many questions will be found. The course was built around the evaluation stages of Bauman and Nutbeam's (2013) Rocket Model, which highlights the importance of all stages of the evaluation process and can be seen in the figure below.

I would enthusiastically recommend that this course be repeated in a similar format in the future, and I would like to thank Karen, Justin and PK for all their hard work, as well as everyone who contributed to the course. The final comment on the experience, though, should go to my fellow students, who brought such a wide range of knowledge, experience and enthusiasm from many corners of the world; I hope that between us we will foster a new network of practice and support in the future.

I have added a comments box below this article and I am more than willing to engage with anyone on the content. This blog article is intended to be my own interpretation of what I experienced over the last few days, and so I apologise to any of the contributors if I have misunderstood or misrepresented them in any way.

BJ, 4th June, 2015

Bauman & Nutbeam's (2013) Rocket Model, which outlines the stages of research and evaluation.

References and Related Reading

Bauman, A., & Nutbeam, D. (2013). Evaluation in a nutshell: A practical guide to the evaluation of health promotion programs. McGraw Hill.

Brownson, R. C., Royer, C., Ewing, R., & McBride, T. D. (2006). Researchers and policymakers: travelers in parallel universes. American Journal of Preventive Medicine, 30(2), 164-172.

Bull, F. C., & Bauman, A. E. (2011). Physical inactivity: the “Cinderella” risk factor for noncommunicable disease prevention. Journal of Health Communication, 16(sup2), 13-26.

Cavill, N., & Rutter, H. (2012). Standard evaluation framework for physical activity interventions. National Obesity Observatory.

Kohl, H. W., Craig, C. L., Lambert, E. V., Inoue, S., Alkandari, J. R., Leetongin, G., ... & Lancet Physical Activity Series Working Group. (2012). The pandemic of physical inactivity: global action for public health. The Lancet, 380(9838), 294-305.

Milat, A. J., Bauman, A. E., Redman, S., & Curac, N. (2011). Public health research outputs from efficacy to dissemination: a bibliometric analysis. BMC Public Health, 11(1), 934.

Milat, A. J., King, L., Bauman, A. E., & Redman, S. (2012). The concept of scalability: increasing the scale and potential adoption of health promotion interventions into policy and practice. Health Promotion International, dar097.

Milat, A. J., King, L., Bauman, A., & Redman, S. (2011). Scaling up health promotion interventions: an emerging concept in implementation science. Health Promotion Journal of Australia, 22, 238.

Reger-Nash, B., Bauman, A., Cooper, L., Chey, T., & Simon, K. J. (2006). Evaluating communitywide walking interventions. Evaluation and Program Planning, 29(3), 251-259.

Rychetnik, L., Bauman, A., Laws, R., King, L., Rissel, C., Nutbeam, D., ... & Caterson, I. (2012). Translating research for evidence-based public health: key concepts and future directions. Journal of Epidemiology and Community Health, jech-2011.

Solomon, E., Rees, T., Ukoumunne, O. C., & Hillsdon, M. (2012). The Devon Active Villages Evaluation (DAVE) trial: study protocol of a stepped wedge cluster randomised trial of a community-level physical activity intervention in rural southwest England. BMC Public Health, 12(1), 581.

Stewart, A. L., Gillis, D., Grossman, M., Castrillo, M., McLellan, B., Sperber, N., & Pruitt, L. (2006). Diffusing a research-based physical activity promotion program for seniors into diverse communities: CHAMPS III. Preventing Chronic Disease, 3(2).

Waters, L., Reeves, M., Fjeldsoe, B., & Eakin, E. (2012). Control group improvements in physical activity intervention trials and possible explanatory factors: a systematic review. Journal of Physical Activity and Health.
