Back in December I said that I would try to provide regular updates here on the UKRN Open Research Programme. Apologies for not having done this as much as I would have liked; it has been a busy time. Several more UKRN institutions have expressed interest in being part of the programme, even without external funding, suggesting that we are building something worth joining. This value has also been recognised by the Scottish Funding Council, which has now allocated funds to the three Scottish institutions in the programme.
As noted in the December update, the programme is divided into three main strands: training, sharing (including reward and recognition practices), and evaluation. Here's an update on each of those.
Training
Building on year one work (reported in the first UKRN working paper), the recent priority has been to set up the behind-the-scenes machinery for a programme of train-the-trainer activities. This includes a first version of the topic schema, a badging system for trainers (based on Open Badge Factory), and evaluation procedures. While we continue to work on those, we have also started the training itself, because the best way to learn is to do. The first two pieces of training will happen before the end of July: one from Project TIER (on embedding open research into undergraduate practice) and one from the Center for Open Science (on open science, data management and research collaboration). There are active discussions with many other training providers, including the Software Sustainability Institute (based across the UK), VIRT2UE (based in the Netherlands) and Griffith University (based in Australia), demonstrating the truly international dimension of this programme and our commitment to bringing the best open research training from around the world to researchers in the UK.
Sharing
One aspect of this work is to help institutions share and learn from each other in how they support open research through policies, guidance, training, tools, etc. As a first step, we have set up pages for each of the programme partner institutions, enabling them to share their approaches and enabling others to learn from them. The next step is to use these pages as a prompt to investigate how people in institutions actually share, adapt, adopt and learn new practices – in short, how UKRN institutions collaborate to promote open research, and what tools we can provide to help them.
A second aspect of this work is specifically focused on reforming researcher recognition and reward so that openness and transparency are more clearly factored in. The "OR4" project is leading on this, starting with a survey of where institutions currently stand, and a call for case-study institutions to implement, review and help refine the management tools, materials, training and guidance that will be produced later in 2023-24.
Evaluation
A major aspect of the evaluation work has been the situation assessment – a survey of researchers to assess which open research practices are relevant and important to them, which are used, and the levels and adequacy of support for them. This multi-institution survey has just closed; we hope to have some early findings within the next couple of months, and to report fully in a November webinar and a UKRN working paper.
A second aspect has been to work with each of the teams doing the work described above, to help them design the best possible evaluation of their interventions. We have begun thinking about each intervention as essentially embodying a hypothesis, and therefore being in some ways an experiment. Clearly, it would be possible to push this too far – the training events and institutional case studies of reforming reward and recognition are not designed with statistical power, control groups, etc. in mind. However, we are finding it a helpful lens for evaluation design.
A third aspect of the evaluation work, which will fold into the second in due course, is to work with a group of UKRN institutions on pilots for open research indicators. This starts with a call for sector priorities, to make sure we are targeting what's important rather than what's easy or usual to measure. We then plan to work with solution providers, from large commercial entities to agile start-ups to smaller academic teams, to develop candidate indicators that are valid, reliable and ethical, but that are also difficult to misuse (for example, to create rankings).
Reflection
I really feel the programme is speeding up and beginning to deliver now, after a slow-ish start. A year into my time at UKRN, I remain so excited to be leading this programme and privileged to be working with the UKRN communities.