Domhnall MacAuley is a CMAJ Associate Editor and a professor of primary care in Northern Ireland, UK, currently at the Society for Academic Primary Care annual conference.
Measuring research impact fascinates academics and editors alike. We are all searching for the optimum metric to reflect research quality. Most of us are familiar with impact factors, citations and other individual measures such as the h-index. But what if your academic esteem and departmental funding depended on external assessment? In a mini symposium at the Society for Academic Primary Care meeting in Oxford, entitled “How can research impact be measured?”, Professor Paul Little told us about the Research Excellence Framework (REF) in the UK, which is set up to measure the value of research in a national and international context on behalf of the Higher Education Funding Council.
The REF has gone through a number of iterations, but the key factor on this occasion was that, while output remains the dominant influence, “impact” was the new driver. Five criteria had to be addressed before the impact of work would be considered, and these tended to favour departments with historical investment in research. What was particularly interesting was exploring how they considered impact: it must be beneficial, and this included public policy. Indicators of impact might include inclusion in clinical guidelines, influence on the profession, public engagement and so on. Once research units met the threshold to be considered, the panel was then asked to look at reach and significance. Essentially, research should make a difference to patients and to health care.
Reflecting on the principles behind the REF, Paul felt these were reasonable. But he made an interesting observation: Primary Care did very well and could justifiably have scored even better, as the UK is very successful in the context of international research. Professor Amanda Howe (University of East Anglia and vice president of WONCA) asked how young researchers in the early stages of their careers could prepare for future REF assessments.
Other measures can reflect a wider effect on the research audience and how studies attract public interest. Jean Liu, Product Development Manager at Altmetric, described how these measures complement traditional bibliometrics and represent a non-traditional form of impact. Altmetric is a software product that collects data on the online attention paid to research articles. While not yet included in formal measures of esteem, it is interesting that the Wellcome Trust is taking an interest and that a number of academic institutions have purchased access to the data.
Altmetric collects all the information on a details page linked to the research article, which indicates the different sources: news stories, Twitter, Wikipedia and others, and these are combined into a unified score. This is shown in a coloured “donut” in which each source is represented by a different colour, so the colours convey the diversity of sources; the legend then shows how each matches up. Altmetric simply shows the volume of interest and its sources; it does not represent the quality of the research or of individual researchers, and it certainly doesn’t tell the whole story. The multiple online sources include news outlets, blogs, social media, Wikipedia, reference managers, peer review sites and the grey literature. The Altmetric score is simply an estimate of attention and doesn’t indicate whether the reactions are positive or negative: there is no sentiment analysis. There have been some criticisms that it is “trivialising science” and, without doubt, there are limitations: the system cannot separate good science from bad, and other factors may skew the level of online interest, such as the name of an author or a word in the title that raises the profile of the paper. I asked Jean about the importance of altmetrics for researchers and you can hear her reply in the second video interview.
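To make the idea of a “unified score” concrete, the aggregation can be imagined as a weighted sum over per-source mention counts. The sketch below is purely illustrative: the weights and source names are assumptions for demonstration, not Altmetric’s actual (more nuanced) scoring algorithm.

```python
# Illustrative sketch only. These per-source weights are HYPOTHETICAL;
# Altmetric's real scoring is more involved than a flat weighted sum.
HYPOTHETICAL_WEIGHTS = {
    "news": 8.0,
    "blog": 5.0,
    "twitter": 1.0,
    "wikipedia": 3.0,
}

def attention_score(mentions: dict) -> float:
    """Combine per-source mention counts into one attention number.

    Note what a count-based score cannot capture: it has no sentiment
    analysis (positive and negative reactions count equally) and says
    nothing about the quality of the underlying research.
    """
    return sum(
        HYPOTHETICAL_WEIGHTS.get(source, 1.0) * count
        for source, count in mentions.items()
    )

print(attention_score({"news": 2, "twitter": 30, "wikipedia": 1}))  # 49.0
```

The example makes the article’s caveat visible in miniature: thirty tweets contribute more than a single news story here, yet the number reveals nothing about whether the attention was praise or criticism.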
The day ended on a reflective note as we remembered the work of Helen Lester in a memorial lecture and were challenged to think creatively about our own work. Helen’s particular interest was in mental health, and she believed strongly that we needed to do better. She pointed out the considerable chronic disease morbidity among those with mental health issues and how we needed to “Not just screen but to intervene.” Patients with mental health issues were dying twenty years too early…from cardiovascular disease.