Domhnall MacAuley is a CMAJ Associate Editor and a professor of primary care in Northern Ireland, UK.
We are publishing the wrong research and funding too many of the wrong studies. This was the central message of Adrian Bauman’s keynote address – “What gets published in physical activity research and why it seldom has an influence on policy” – at the Health Enhancing Physical Activity (HEPA) conference.
The talk might have been about physical activity research, but the message has resonance across medicine. If we really want to change medicine, we need to understand how researchers produce evidence and how policy makers interpret, or misinterpret, what is published. There is a significant mismatch between researchers’ objectives and policy makers’ needs. And, rarely heard in a medical context, Adrian was quite sympathetic to policy makers: we need to understand their different priorities, the range of demands and expectations they face, and the multi-sectoral environment in which they work.
Where are researchers going wrong? If I might paraphrase Adrian, they are, simply, playing the game. Researchers are responding to university demands – publication leads to promotion, and this guides researcher behaviour. Changing the academic culture is a fundamental challenge for the next decade. But researcher-driven publications are not what policy makers need. True research translation runs from original research through to scaling up interventions, but this is where the link is broken. Researchers simply continue publishing research in a narrow area – because it’s what they have to do – but there is little overall impact. Looking at the Global Atlas of Physical Activity, Adrian focused on his own country, Australia, as an example of this dissonance: Australia has the second highest output of research related to physical activity, but a low prevalence of regular activity in the general population, and no policy plan.
There is too much research providing information that is useless, misleading, irrelevant, and not needed, Adrian said. He went on to contrast the audience members’ views on which initiatives are “most effective at increasing population physical activity” with examples from five topical research areas.
Take the stairs. There is evidence that putting up a sign encouraging people to take the stairs rather than the elevator can increase the number of people taking the stairs. But, looking at research outputs, there is a disproportionate number of studies replicating these outcomes. He asked at what point there was sufficient evidence, and why researchers continued to publish on this issue. Traditional meta-analysis can tell us that an intervention is effective, but sequential meta-analysis can tell us when the evidence is enough. It was enough in 2005, so why did researchers continue to do more studies?
The untold story of sitting. Physical activity research publication has increased overall, but studies of sitting have increased disproportionately, to the extent that sitting has become the new target of physical activity interventions. It’s a new story, and it spread across the media like wildfire, with headlines suggesting that “sitting is the new smoking” and other greatly exaggerated claims. Some researchers, Adrian suggested, have made a career out of research on sitting and might be resistant to being challenged. Looking at the evidence, however, the effects of prolonged sitting are greatly attenuated by physical activity, so we should invest more in activity research than in sitting research. The policy message is that if you are very inactive at work, you should aim to be more active outside work. Yet the studies continue, with ongoing publicity about standing meetings, standing desks, and so on. And what about the effect on workplace productivity, absence, and sickness… and do you get tired of standing?
The built environment and its effect on human activity. Adrian is, he conceded, a believer in the critical importance of the built environment, but this belief is not based on evidence. The evidence is weak and cross-sectional, and we need more longitudinal studies. If we simply look at cross-sectional correlates, we won’t learn anything about the long term.
Primary care and physical activity. Studies on referral schemes, nurse counselling, brief interventions, and other variations on primary care interventions abound. Physical activity promotion through primary care has become a mantra, but the reality is that most systematic reviews show no effect; yet the beliefs persist. And when you look more closely at studies where the effect size in the intervention group seems quite good, it is often not dissimilar to that in the controls. Moreover, the people recruited to these studies were well-motivated volunteers, which bears little relationship to real primary care.
Systematic reviews and more systematic reviews. Research output has increased, but the rate of publication of systematic reviews has increased disproportionately. Indeed, some groups specialize in producing systematic reviews. This gets closer to policy makers’ needs, as they require evidence to support interventions. Yet many systematic reviews don’t, or cannot, answer their own research question, and the language of uncertainty is unhelpful for policy makers, who need systematic reviews that lead to a definitive conclusion.
Adrian’s observations could be applied to many areas of medicine. We need no more cross-sectional correlate studies in every subgroup; we need intervention and longitudinal studies, in sample frames that better reflect general populations. We need fewer but more conclusive and useful systematic reviews, and sequential meta-analysis to tell us when to stop undertaking (and funding) research in a particular field. And don’t blame the policy makers: they can only work with the evidence that researchers present, so make it relevant. Researchers and policy makers need to bridge the gap that divides their parallel universes.