Foto: Fabian Mardi (Unsplash)

How should we measure knowledge exchange activity?

The Covid-19 pandemic has halted the implementation of the Knowledge Exchange Framework (KEF) in the UK, a new attempt to measure and publish KE activity. Does this pause give us an opportunity to rethink how and why we are doing this? Policymakers often want to develop a set of simple metrics, as in the KEF or the U-Multirank approach to rankings, but this is fraught with difficulties and may be counter-productive. The territorial nature of knowledge exchange adds a further complication: the activity consists of interactions between the university and other bodies, and hence is not purely under the control of the university, but depends in part on the performance of the partners and the overall absorptive capacity of the region.

Four key questions can be asked of such measures:

  1. How can knowledge exchange activities be made visible and hence amenable to measurement?

Knowledge exchange is often a hidden activity within universities, as only part of the whole activity can easily be captured through contractual arrangements, yet these tend to be what we can see and count. There is a need to identify a deeper engagement of academics in knowledge exchange, which may be informal, tacit, and creative, and may take place embedded in other academic activities.

  2. How do we determine quality or excellence in knowledge exchange?

Whilst there may be a fragile consensus over research or teaching quality, the variety of forms of KE and its dependence on external demand and the capabilities of partners mean it is particularly difficult to define quality. Benefit to the university, such as levels of income, is perhaps not the most sensible basis for assessing quality, but benefits to the community are highly heterogeneous and difficult to measure.

  3. What can be measured and what can’t be measured?

The temptation to measure what can be easily counted is difficult to resist. We have relatively robust measures for some forms of business engagement, but engagement with the public or community sector is hard to measure. The KEF tends to use university income, ignoring pro bono activities or impact. This is akin to the drunk looking under the streetlamp for his keys because that is where the light is, even if he dropped them somewhere else.

  4. What is the purpose of measurement and how can it be used to encourage greater knowledge exchange activity?

Is the purpose to drive funding, in which case the choice of inappropriate measures will drive particular behaviours and fail to reward some of the things we want universities to do? Would a ranking based on a basket of such indicators also just drive universities to game the ranking system? How do we encourage universities to do more and better KE with a wider set of beneficiaries, to take a more strategic approach to developing their local region as stewards of place, and to act as effective civic universities?

We need to encourage universities to better identify their specific KE priorities (in partnership with various stakeholders) and ensure that they improve their performance across all of those priorities. To do this requires a broad range of metrics and indicators covering the full range of activities. Then rather than comparing like institutions across all measures, we need to assess performance relative to opportunity or context and focus on the improvement of institutional practices. A place-based approach could consider the mix of activities undertaken by universities in a particular city or region and whether the combination of universities delivers good outcomes across the whole range of possible forms of engagement.

The aim of policymakers should be to encourage universities to do more KE with all kinds of partners, to work more collaboratively with each other and other stakeholders in delivering KE, and to use whatever methods are most effective. More work is needed in this area to expand the forms of KE we can assess and to develop suitable processes for internal review, peer review, and, perhaps most importantly, external assessment by stakeholders.


David Charles
Northumbria University