Alberto emphasized the need for a structured, multi-stakeholder approach to impact assessment. That, he said, requires a multi-faceted definition of “impact” and the ability to create a compelling narrative that combines qualitative and quantitative elements and can be tailored to the specific audience, be it government, the private sector or local communities:
While some aspects of economic and societal impact, especially local ones such as new jobs created, are not necessarily directly related to scientific impact, other forms of economic and societal impact, with a broader reach and longer timeframes to develop, are related to scientific impact. For instance, scientific publications are routinely cited in patents to uphold their claims; likewise, many policy documents on major issues such as climate change and global health draw on scientific research to support political decisions. These are all pathways to impact that can be recognized by starting with an assessment of the scientific impact. The latter can be evaluated by tracking publications that report the results of scientific research involving research infrastructures.
The need for impact assessment is vital, Ondřej agreed, emphasizing that without it, funding would be hard to raise and sustain:
Research infrastructures require a substantial investment to establish and cover their operating costs. So it does not come as a surprise that funders need to make some tough choices, and their expectation is that the research infrastructures they fund will deliver value and perform well, especially with regard to providing the services the wider scientific community needs to advance science. Beyond that, funders also look at wider impacts on society and the economy. Nowadays, public budgets are under unprecedented pressure and must be carefully prioritized. The scientific, economic and societal impact of research infrastructures is more important than ever and needs to be proven. Depending on the country or funder, the relative importance of these impacts will differ.
Necessary as it may be, evaluating the impact of research infrastructure has its challenges. The typical metrics for measuring scientific impact are user publications, patents and other bibliometric indicators, which can usually be tracked with the help of databases such as Scopus. However, Ondřej pointed out that this does not offer the complete picture:
What is still missing are reliable data linking publications to individual facilities and research infrastructures. This is due to the culture of acknowledging facilities, which is not yet at the desirable level among authors and facilities. Secondly, the acknowledgement of facilities is not yet widely supported by journals. At the end of the day, the scientific impact evaluation can only be performed manually using full-text search of articles, which is a frustrating process.
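In practice, that manual evaluation often amounts to a brute-force keyword search over article full texts. A minimal Python sketch of the idea follows; the facility names, patterns and file layout are illustrative assumptions, not any facility's actual workflow:

```python
import re
from pathlib import Path

# Illustrative facility names; a real search would also need to cover
# abbreviations, translations and spelling variants.
FACILITY_PATTERNS = [
    re.compile(r"\bCEITEC\b", re.IGNORECASE),
    re.compile(r"\bcryo-?electron\s+microscopy\s+core\s+facility\b", re.IGNORECASE),
]

def find_facility_mentions(article_dir: str) -> dict:
    """Scan plain-text article files and record which patterns match each one."""
    hits = {}
    for path in Path(article_dir).glob("*.txt"):
        text = path.read_text(encoding="utf-8", errors="ignore")
        matched = [p.pattern for p in FACILITY_PATTERNS if p.search(text)]
        if matched:
            hits[path.name] = matched
    return hits

if __name__ == "__main__":
    for article, patterns in find_facility_mentions("articles").items():
        print(article, "->", patterns)
```

Even this toy version makes the frustration visible: every facility needs its own hand-curated patterns, and the full text of every candidate article must be available to search.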
Alberto agreed, commenting:
The need for data exposes the main challenge, which is data collection, as Ondřej indicated. If we limit the scope to the evaluation of scientific impact, data collection translates into the ability to collect the scientific publications that describe research activities in which the research infrastructure (RI) in question was used. As simple and obvious as this sounds, it is extremely difficult to track those papers in an automated manner: with the exception of very large RIs such as the Large Hadron Collider, traditional methods based on article metadata, such as authors and their affiliations or funding acknowledgements, don't work.
There are two key reasons for that, he suggested:
Firstly, lab staff are usually not included in the author list; likewise, the facilities where the experiments took place may or may not be mentioned in the acknowledgements section of the article. Furthermore, if a funder wants to evaluate which instruments have been used, those mentions usually occur in the body of the article, in a section called ‘Materials and Methods’ or something similar. That makes it extremely difficult to collect those papers using databases such as Scopus or PubMed. The remaining option is to reach out to researchers and lab managers to try to collect publications, which generally leads to a significant underrepresentation of the impact a piece of infrastructure may have had.
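To make this concrete: because instrument mentions sit in the body of the paper rather than in its metadata, an automated pipeline first has to isolate the ‘Materials and Methods’ section from the full text. A rough sketch, under the assumption that section headings sit on their own lines (real journal layouts vary widely):

```python
import re

# Headings that often introduce or follow the methods section; both lists
# are illustrative assumptions, since actual journal layouts vary widely.
SECTION_START = re.compile(
    r"^(materials\s+and\s+methods|methods|experimental\s+section)\s*$",
    re.IGNORECASE | re.MULTILINE,
)
SECTION_END = re.compile(
    r"^(results|discussion|conclusions?)\s*$",
    re.IGNORECASE | re.MULTILINE,
)

def extract_methods_section(full_text: str):
    """Return the text between a methods-style heading and the next major heading."""
    start = SECTION_START.search(full_text)
    if start is None:
        return None  # no recognizable methods heading found
    end = SECTION_END.search(full_text, start.end())
    stop = end.start() if end else len(full_text)
    return full_text[start.end():stop].strip()

sample = "Introduction\n...\nMaterials and Methods\nImaging was performed on ...\nResults\n..."
print(extract_methods_section(sample))  # -> "Imaging was performed on ..."
```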
How can we support impact assessment?
So what can be done to address these issues? Ondřej shared his thoughts:
I mentioned two issues. The culture of acknowledging facilities can be improved by ongoing communication with users; it is the task of facilities and their staff to remind users to do this. As facilities contribute to the experiments, I consider this an ethical obligation, too. In fact, getting access to facilities has a financial value, which is often overlooked but should be clearly stated in the publication in the same way as funding and grant acknowledgements. Facilities should also establish policies on when and how, given their contribution, the facility should be acknowledged or facility staff should be included among the co-authors of the paper. This prevents future conflicts.
The second issue, Ondřej said, is closely linked to publishers and journals. He suggested that publishers include facility acknowledgements in their publication processes by integrating them into their submission checklists and relevant acknowledgement sections. “So far, many journals and publishers do not consider this to be important. There is much work ahead of us to change this,” he said. “At the end of the day, having proper acknowledgements in publications can also improve research quality and reproducibility. Many facilities are actively engaged in helping users with their data: co-designing experiments, acquiring raw data, analyzing and interpreting data from initial treatment to the creation of figures, archiving data, and sharing raw and processed data. Having facilities provide a quality label for reliable data management is important.”
Alberto, meanwhile, pointed to pilots that were already underway as a possible solution:
Over the past year, we have worked together with academic institutions on pilot projects where we are trying to answer the question: ‘What is the scientific impact of my institution’s research infrastructure?’ A good example is the partnership with Dutch institutions, as part of a broader collaboration around open science. Our goal is to provide the quantitative evidence that research leaders at institutions can use to inform their qualitative assessment and provide data for evidence-based decision making. We are not developing a new assessment methodology; for that, we rely on industry best practices and Elsevier’s Research Intelligence portfolio. We focus instead on automating the task of collecting publications so they can be fed into our systems for analysis.
The team has developed a sophisticated Natural Language Processing algorithm trained to identify those mentions in the “materials and methods” or equivalent section of the scientific literature and link them to a taxonomy of research infrastructure. Using this approach, the team has been able to collect up to four times more links between equipment and publications compared to traditional methods.
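Elsevier has not published the internals of that algorithm, but the core idea of linking free-text mentions to a taxonomy can be illustrated with a simple dictionary-matching baseline. The taxonomy IDs, labels and synonyms below are invented for illustration:

```python
# Minimal dictionary-matching baseline for linking equipment mentions in a
# methods section to a taxonomy of research infrastructure. The IDs, labels
# and synonyms are invented for illustration; a production system would use
# a trained NLP model rather than exact string matching.
TAXONOMY = {
    "RI:0001": {"label": "cryo-electron microscope",
                "synonyms": ["cryo-em", "cryo-electron microscopy"]},
    "RI:0002": {"label": "nuclear magnetic resonance spectrometer",
                "synonyms": ["nmr spectrometer"]},
}

def link_mentions(methods_text: str) -> set:
    """Return taxonomy IDs whose label or any synonym occurs in the text."""
    text = methods_text.lower()
    linked = set()
    for ri_id, entry in TAXONOMY.items():
        if any(term in text for term in [entry["label"], *entry["synonyms"]]):
            linked.add(ri_id)
    return linked

print(link_mentions("Grids were imaged on a Titan Krios cryo-EM at 300 kV."))
# -> {'RI:0001'}
```

A trained model improves on this kind of baseline by handling spelling variants, abbreviations and paraphrases that exact matching misses, which is plausibly where gains over traditional metadata-based collection come from.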
Both Alberto and Ondřej predicted that interest in research infrastructure would continue to grow, and with it the need for measurement as a way of demonstrating impact. As Ondřej noted, it’s a cause the research community would do well to rally behind:
This is as yet an unexplored topic of interest to a wide community of facilities, irrespective of their scientific domain or size: not only the big ones such as ESFRI projects but also the smaller and mid-sized facilities that are typically hosted by universities and research institutes. So I hope we will gather a critical mass to move this topic further, work together with journals and publishers, and possibly establish well-accepted standards.