8. Indicators

Can We Evaluate Openness?

Openness requires transparency. This principle applies both internally and externally. The members of an open knowledge institution need to know the status of their organization and its relationships with other institutions, groups, and individuals. They also need to be able to assess their own progress toward the goals that the institution has established through an open process of consultation and deliberation. This creates an internal imperative of accountability of the organization vis-à-vis its members. At the same time, as a public interest institution, an OKI is externally accountable to its relevant communities and to society. Both forms of accountability require organizational procedures and protocols for assessing the status of the open knowledge institution by means of indicators.

Establishing such protocols, though, always involves a trade-off between the accuracy and quantifiability of particular indicators and their effects on perceptions and resulting behavior. As has been well theorized and empirically demonstrated (Holmström 2017), closed quantitative indicator systems necessarily produce two effects that are especially detrimental in the context of openness: activities that the indicators do not measure are relatively neglected, and agents try to game the system. When both effects come together, serious organizational pathologies occur.

As a consequence, an effective system of indicators for OKIs must combine internal progress evaluation with qualitative indicators for external communication and reporting. Such a system remains open in the sense that it leaves room for contextualized adaptation to the environment and nature of each open knowledge institution. Furthermore, it is imperative that the indicators be matched to the practices and qualities that the institution hopes to incentivize, and remain in keeping with an ethos of openness.

Challenges in Evaluation

Within the broader scope of open knowledge along with the institutions that support and sustain it, a wide range of qualities and practices can contribute to a shift for a university toward becoming an OKI. Many of these activities and qualities already exist in some institutions and places, although with inconsistent implementation and without coherence across activities. In the specific case of universities and colleges, these activities and qualities are often not directly supported by the organization as a whole, or as a strategic priority. Rather, they are undertaken by individuals within the institution, often without recognition and in addition to their other (metricized) responsibilities.

The practices and qualities necessary to support universities in a transition toward becoming open knowledge institutions are progressive and forward looking. They involve a spectrum of activities that includes engaging with new communities and mediating new forms of conversation in order to engage new audiences and participants. These forward-facing practices are frequently at odds with the dominant, if unstated, expectations about what universities and colleges do, and who they are for.

Existing forms of evaluation generally reinforce dominant perspectives and power structures, including the geographic dominance of the Global North in traditional metrics. The conservative orientation of existing evaluation systems in universities today is further reinforced by the growth of external threats. Funding is tightening, and knowledge itself is increasingly politicized and contested. National and governmental goals can seem aligned with these agendas. This creates disincentives for experimentation and innovation in relation to collaboration beyond the institution, and can restrict new approaches to scholarly communication. Risk management favors conservatism. At the same time, open knowledge agendas offer a route to increasing the diversity of university funding and support sources as well as engaging with broader publics, including policy and opinion makers, and becoming part of a more collectively determined and knowledge-guided future.

Existing rankings and their relation to quality signaling are, of course, seen as crucial for universities and their administrators. Universities direct their knowledge and research output toward the defined set of activities and dissemination formats that feature in high-profile rankings. They do this in the hope of signaling status and prestige—and in so doing, ensuring their appeal to students and research funders. The exclusive use of specific data providers in some ranking systems can drive university policy to demand publication in specific—invariably traditional, Western, and science, technology, engineering, and mathematics–focused venues. One example is the use of Scopus data by the Times Higher Education World University Rankings. This narrows incentives for publication in formats and venues that might be more accessible to wider publics—for example, in scholar-led open-access journals, popular media, policy papers, or reports to the government. It also reinforces existing regional power hierarchies between the North and South as well as disciplinary divisions and practices. This in turn increases the ability of disciplines to enforce boundaries by determining what can and cannot be published within influential journals.

Efforts to prevent unfair comparisons when measuring the “reach and impact” of individual scholars and their work are problematic too. One illustration is the normalization of citation scores with reference to an author’s home discipline. Such strategies do not necessarily increase the fairness of the evaluation system. Rather, they can themselves reinforce assumptions and biases, particularly for those conducting research across disciplinary boundaries. Even those within traditionally defined disciplines can be disadvantaged if they work in ways that do not conform to disciplinary norms. Work on issues considered by prestigious institutions to be local concerns, including, for instance, neglected tropical diseases, is often discounted. Activities involving mediation and communication are also frequently neglected, including the creative and performing arts along with many forms of research-led teaching and community engagement. This system of evaluation can push work directed at community building, including activities to support diversity, into the background or sometimes even underground.

Rankings create additional issues for universities with medium and lower world rankings that seek to distinguish themselves not by being the same as traditionally highly rated institutions but rather by being different. Creators of current ranking algorithms and reports are unlikely to either recognize or validate new measures that showcase differences to the advantage of these universities. The desire of such institutions to demonstrate their difference is thus countered by their simultaneous need to continue to place themselves as well as possible within existing ranking systems. Once again, this disadvantages many universities outside the traditional centers of academic power and prestige.

The homogenizing effect of rankings, and their perverse impacts on university strategies and decision making, pose a serious challenge to any effort to refine or redefine the role of a university or universities—including providing incentives for universities to change in ways that are congruent with the principles and protocols of OKIs.

Issues for Framework and Indicator Design

As suggested above, the aspects of an OKI that we have identified currently tend to be disregarded as valuable or measurable criteria within existing rankings and university evaluations. To some extent it might be argued that publication numbers or citations function as proxies for the mediation of knowledge. These numbers, however, have become so associated with concepts of “excellence” in university settings that they are now regarded as defining it. Their value as measures of knowledge mediation is questionable on two grounds: first, the limitations of citations in themselves as measures of any single quality, and second, the severely limited range of the users of research that citations report on. Other proxies that might have value for evaluating progress toward being an OKI have been investigated (e.g., via the EU Open Science Platform), but these are frequently limited in scope when compared to the ambition of the OKI agenda.

To avoid replicating past mistakes, an open knowledge framework must adhere to several principles:

  • Adaptability and like-to-like comparison: Since the aim of a scoring system is to unify and compare across various entities, its framework (and underlying proxies) needs to be flexible enough to adapt to the different geographic settings of the target group. This is a particularly difficult challenge, as institutions have vastly different management models, financial structures, student intakes, and so on. In situations where homogeneity is neither desired nor possible, classification and normalization systems should exist to allow like-to-like comparison (a minimal normalization sketch follows this list).

  • Generalizability: The test of a good indicator is the degree to which it serves as an adequate proxy for the underlying concept. For global indicators, it is important that the indicators represent the whole of the theoretical population in order to make inferences. For example, existing university rankings often fail to fully represent disciplines such as the humanities and social sciences, and research from the Global South is less likely to feature than that from the North. Global indicators must have global reach.

  • Standardization: Several indicators/proxies can be imagined that demonstrate varying dimensions of openness. An open framework should be careful in standardization not to give undue priority to certain dimensions over others. Furthermore, it should avoid combining indicators where the underlying concept is not the same.

  • Orthogonality: Information provided by different indicators is highly likely to overlap. Existing ranking systems frequently aggregate across such indicators without addressing this problem, and hence create bias toward some criteria while discounting performance in other areas. A framework for open indicators should include a well-defined process for indicator selection, using appropriate statistical procedures to ensure that the data underlying the indicators are as orthogonal as possible (a correlation check is sketched after this list).

  • Qualitative versus quantitative data: Qualitative data should not be neglected in favor of quantitative metrics. Although this complicates other aspects of the framework (e.g., standardization and generalization), a framework for open knowledge indicators must triangulate several sources of data to represent the complex and dynamic system of knowledge production.

  • Thoughtful design of scoring systems: A further issue is how scores are assigned. Most current scoring systems use a bottom-up approach, in which the baseline score for an indicator is zero and points are awarded for activities signaling the desired outcome. Khaki Sedigh (2017) proposed the opposite approach, in which each indicator starts with a score of one hundred and points are deducted for the absence of desired activities (both approaches are sketched after this list).
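
To make the adaptability point concrete, one simple option is to normalize each raw indicator within a peer group of comparable institutions rather than across the whole population. The sketch below is a minimal illustration under our own assumptions, not a prescribed method; the grouping variable, example data, and function name are hypothetical.

```python
from statistics import mean, stdev
from collections import defaultdict

def normalize_within_groups(records, group_key, value_key):
    """Convert raw indicator values to z-scores within each peer group,
    so institutions are compared like-to-like rather than globally."""
    groups = defaultdict(list)
    for rec in records:
        groups[rec[group_key]].append(rec[value_key])

    # Mean and standard deviation per peer group (fall back to 1.0 for single-member groups).
    stats = {g: (mean(vals), stdev(vals) if len(vals) > 1 else 1.0)
             for g, vals in groups.items()}

    normalized = []
    for rec in records:
        mu, sigma = stats[rec[group_key]]
        z = (rec[value_key] - mu) / sigma if sigma else 0.0
        normalized.append({**rec, f"{value_key}_z": z})
    return normalized

# Hypothetical records: open-access share compared within institution type.
records = [
    {"institution": "A", "type": "research-intensive", "oa_share": 0.62},
    {"institution": "B", "type": "research-intensive", "oa_share": 0.48},
    {"institution": "C", "type": "teaching-focused", "oa_share": 0.35},
    {"institution": "D", "type": "teaching-focused", "oa_share": 0.51},
]
for rec in normalize_within_groups(records, "type", "oa_share"):
    print(rec["institution"], round(rec["oa_share_z"], 2))
```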
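
For the orthogonality point, a straightforward diagnostic is to compute pairwise correlations among candidate indicators and flag pairs that carry largely redundant information before aggregation. This is a sketch only, using hypothetical indicator columns; a real selection process might instead use principal component analysis or a similar technique.

```python
import numpy as np

def redundant_pairs(matrix, names, threshold=0.8):
    """Return indicator pairs whose absolute Pearson correlation exceeds the
    threshold, i.e., candidates for merging or dropping before aggregation."""
    corr = np.corrcoef(matrix, rowvar=False)  # columns are indicators
    pairs = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if abs(corr[i, j]) > threshold:
                pairs.append((names[i], names[j], round(float(corr[i, j]), 2)))
    return pairs

# Hypothetical data: rows are institutions, columns are candidate indicators.
names = ["oa_share", "repository_deposits", "public_events", "student_diversity"]
data = np.array([
    [0.62, 1200, 40, 0.31],
    [0.48,  900, 55, 0.42],
    [0.35,  610, 22, 0.28],
    [0.51,  980, 61, 0.45],
    [0.70, 1500, 35, 0.30],
])
print(redundant_pairs(data, names))
```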
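
The two scoring philosophies in the final point can also be illustrated side by side. In the additive (bottom-up) approach, points are awarded for each observed signal; in the deductive approach attributed to Khaki Sedigh (2017), an indicator starts at one hundred and loses points for each missing signal. The signal names and weights below are hypothetical.

```python
# Hypothetical signals for a single indicator, e.g., "open-access policy".
signals = {
    "policy_published": True,
    "funding_allocated": True,
    "repository_supported": False,
    "outcomes_reported": False,
}
weights = {
    "policy_published": 25,
    "funding_allocated": 25,
    "repository_supported": 25,
    "outcomes_reported": 25,
}

def score_bottom_up(signals, weights):
    """Start at zero and add points for each signal that is present."""
    return sum(w for s, w in weights.items() if signals.get(s))

def score_deductive(signals, weights, baseline=100):
    """Start at the baseline and deduct points for each missing signal."""
    return baseline - sum(w for s, w in weights.items() if not signals.get(s))

print(score_bottom_up(signals, weights))   # 50
print(score_deductive(signals, weights))   # 50
```

With weights that sum to the baseline the two approaches produce the same number; the difference is in framing, since the deductive approach makes absences visible rather than treating them as neutral.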

How Do Universities Change? Toward a Framework

In order to take practical steps toward transforming universities into OKIs, we must acknowledge the ways in which current rankings drive organizational change. This information needs to be used to anticipate how openness can be introduced into existing rankings. Comparison across institutions may be crucial to developing interest in and momentum for system-wide changes. Any framework must balance these needs, providing for the positive opportunities that arise from competition and aspirational comparisons, while allowing an institution to follow its own path and local needs toward its future.

There are three strands of evidence that we might use to evaluate a university. The first is evidence that a university is developing and implementing policy that speaks to this overall shift, whether in response to external pressure from government and funders, to community or public demands, or to internal drivers. Policy, strategy, and other public position statements are clearly not a direct sign of change occurring, but they are a signal of intent as well as a proxy for organizational and institutional support of change.

The second strand of evidence emerges in the university’s actions. Is the institution putting in place platforms and systems that support mediation, engagement, diversity, and network building? Is an institutional repository provided and appropriately resourced? Is there visible support for data management and sharing? Is there support and expertise for crafting communications that speak with, and effectively listen to, the appropriate communities?

The final strand is evidence of outcomes and change. What evidence can we draw on as indicators or proxies of actual change? In some areas, such as assessing the degree of open access to formal traditional publications, this is becoming easier. As shown earlier, there has been a significant increase in open-access output from many institutions over the past decade.
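
As a concrete illustration of how such an outcome indicator might be assembled, the sketch below estimates an institution’s open-access share from a list of DOIs using the Unpaywall API. The DOIs and contact email are placeholders, the script ignores rate limiting and error handling, and Unpaywall is only one of several possible data sources.

```python
import requests

def oa_share(dois, email):
    """Estimate the fraction of a DOI list that Unpaywall records as open access."""
    oa_count, checked = 0, 0
    for doi in dois:
        url = f"https://api.unpaywall.org/v2/{doi}"
        resp = requests.get(url, params={"email": email}, timeout=10)
        if resp.status_code != 200:
            continue  # skip DOIs Unpaywall does not know about
        checked += 1
        if resp.json().get("is_oa"):
            oa_count += 1
    return oa_count / checked if checked else 0.0

# Placeholder values: substitute a real contact address and the institution's DOIs.
dois = ["10.1371/journal.pone.0000000", "10.1016/j.example.2020.01.001"]
print(round(oa_share(dois, "contact@example.org"), 2))
```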

We can also see these three types of evidence as stages of development within a simple theory of change. In the first phase, characterized by policy development and deficit models, deficiencies are identified in specific areas of the university and addressed by statements of intent or goals. In the second phase, action is taken to address those deficiencies. Resource limitations will generally mean those actions are limited in scope, but well-designed actions and systems of resourcing will seek to maximize their positive benefits. In the third stage, outcomes result from those actions; ideally these are followed up through evaluation and reflection, identifying new issues, unforeseen consequences, or limitations in the impact of the actions taken.

A framework for open indicators should therefore include these three stages of development: policy and narrative signaling intent, action and investment that signals a prioritization of change, and measurable outcomes that result from these efforts.

Change is first propelled by an aspiration, often reflected in narrative or strategic documents or policy direction. This is generally driven by deficit models, in which a problem is identified to be fixed. The second stage is action, which requires an investment of resources: time, money, or both. Choices here will be driven by investment models and the identification of priorities. The final phase should deliver outcomes, and in an open knowledge system these will be the subject of reflection and evaluation. Capacity models are appropriate here, addressing the new capabilities and qualities that the university has built.

Figure 8.1

A simple model of institutional change in universities.

Signals of Openness

A strong framework for open knowledge indicators requires the incorporation of many categories of information. Paramount to this is expanding the value proposition for research: not only incentivizing publication in open-access venues, but also investing in and rewarding work that is translational and focused on broader impacts beyond the research community. The way in which the campus engages with the outside community will be a major dimension of openness, one that can be measured through active engagement (e.g., partnerships) as well as more passive engagement (e.g., social media).

Investment in infrastructure is a key element in a strategy for openness. The creation of repositories is historically an element of openness, although a global connection of OKIs will give rise to fundamentally more advanced and expanded networked opportunities for making research available to other scholars and the public. The physical campus is also an element that can be investigated, looking at physical accessibility and spaces for open engagement.

Universities are centers of learning. Openness will therefore also be evaluated in terms of the composition of the student body as well as engagement in open educational activities (e.g., participation in online courses). At the institutional level, the university will be rewarded for the adoption of policies for openness, not unlike those established for journals, such as the Transparency and Openness Promotion Guidelines by the Center for Open Science (2015). These standards should seek to be comprehensive, not only incentivizing openness in one dimension of the university, but cutting across all university activities. Table 8.1 provides an illustration of potential data sources. It is not meant to be comprehensive, but instead provides examples of ways in which open knowledge indicators might be constructed.

Table 8.1

Examples of Potential Indicators and Data Sources

| Indicators | Data sources | Examples of cross-institution sources |
| --- | --- | --- |
| Publications/data | | |
| Open-access publications | Publication indexes and access data | Web of Science, Scopus, PubMed, Dimensions, Microsoft Academic, Unpaywall, DOAJ, BASE |
| Open-access repositories | Repository directories | Directory of Open Access Repositories (DOAR) |
| Open data | University repositories, community repositories/indexes | DataCite, Figshare, Zenodo (but poor affiliation information) |
| Open data repositories | Registries | Registry of Research Data Repositories (re3data.org) |
| Open campus | | |
| Open campus (physical/online accessibility, open events, massive open online courses [MOOCs], etc.) | Social media, library access and borrowing policies, MOOCs | MOOC Directory (http://www.moocs.co/), MOOC List (https://www.mooc-list.com/) |
| Open education resources (OER) | Registries, world map | OER Policy Registry (https://oerworldmap.org/oerpolicies), OER Commons (https://www.oercommons.org/) |
| Collaboration (academe, government, industry, etc.) | Higher education departments, government reports | HERDC data (Australia) |
| Diversity | | |
| Participation in education (student diversity) | Government statistical data, university websites, reporting frameworks | Athena SWAN status, HESA Statistics (UK), Department of Education, Skills and Employment (DESE), Higher Education Statistics (Australia) |
| Participation as researchers and other staff (staff diversity) | Government statistical data, university websites, reporting frameworks | Gender Pay Gap reports (UK), Athena SWAN status (UK, Australia), Workplace Gender Equality Agency (Australia), DESE Statistics (Australia), ETER, EUROSTAT (Europe), IPEDS (United States) |
| Output diversity | Authorship diversity, output type diversity, disciplinary diversity | CWTS Leiden Ranking, Crossref (output types) |
| Community engagement | | |
| Participation by/in communities | University websites, event databases | Eventbrite |
| Investment in and priority given to research, translation, and communication | National and international networks | Research Impact, Development Research Uptake in Sub-Saharan Africa (DRUSSA), Learning Resource, Knowledge Translation Network Africa |
| Wider research impact | Impact reporting, altmetrics | REF United Kingdom Case Studies, altmetric.com, Crossref Event Data, PlumX |
| Standards | | |
| University protocols for openness (e.g., open-access agreements or memoranda) | International agreements, university policies, individual statements, manifestos | Statements (e.g., Berlin Declaration, San Francisco Declaration on Research Assessment [DORA]) |
| Investment in and adoption of open standards and protocols | University statements, public budget documents | |
| Strategic planning toward openness and integration into all university operations | University budget / annual reports, funding for open access | |

A Proposal for an Evaluation Framework

Table 8.1 provides a large selection of potential signals and indicators of openness within universities. While there are a number of ways these could be characterized, we have found it most effective to use the three platforms or activities that we identified in chapters 3–4, and then explored in more detail in chapters 5–7. These categories—diversity, communication, and coordination—provide a convenient means of capturing all the signals and indicators in table 8.1.

They are also useful in that they correspond to existing external policy stimuli that universities are facing: the significant diversity and inclusion deficit that is a characteristic of most universities, and societal expectations to address those issues; the demands for open access, data, and methodology sharing usually seen under the banner of “open science”; and the broader demands for public engagement and inclusion in knowledge- and decision-making processes within society.

By combining the simple theory of change articulated earlier in this chapter with these three categories, we develop a framework that combines our categories of action with the processes by which change is implemented and evaluated. Each stage of development is characterized by specific types of instruments or actions, and this helps us to organize the relevant indicators. We do not at this stage seek to refine these into any quantitative system of evaluation but instead present the framework as a way of identifying areas for evaluation as well as identifying gaps in our current information landscape.

One particular objection to this framework might be that action frequently precedes policy and organizational statements. Individuals will often be acting, sometimes without organizational sanction, to pursue an open agenda. Such activities are not organizational, however, precisely because they are not incorporated into the organizational narrative. The principle of subsidiarity supports the development of these local initiatives in the sense that it seeks to create an environment in which they are not prevented, but until they are adopted by the organization, they do not signal organizational activity. They are not yet institutionalized.

Institutionalizing Open Knowledge

Table 8.2 poses a challenge as we move from left to right: it becomes increasingly difficult to know in which row a particular signal belongs. This problem actually points us toward a more rigorous theory of change, one that maps well onto the models we develop in chapter 3. When we start on the journey of change, we naturally engage with deficit models. What are we not doing? What do we need to change or do better? Policy efforts respond by targeting specific areas, ideally with as much focus as possible. Open-access policies never seem to address issues of diversity and inclusion, and diversity policies rarely, if ever, mention open access. But we cannot achieve the aspirations of open access in delivering more usable research outputs unless we address how our communications are currently affected by the lack of diversity in the academy. If we aim to communicate more effectively with diverse communities, then we need to include the experience of those communities within the process of building that knowledge and planning its communication.

Table 8.2

A Framework for Organizing the Evaluation of Universities as OKIs

| | Aspiration (Policy/narrative) | Action (Investment) | Outcomes (Evaluation) |
| --- | --- | --- | --- |
| Diversity | Diversity and inclusion policy; policy on communications and evaluation (output diversity); public engagement/comms policy | Engagement with diversity programs; staffing and support; training programs; interdisciplinary programs | Staff/student diversity; underrepresented minority retention statistics; diversity of revenue sources; output (format) diversity |
| Coordination | Library access policies; campus planning and public access; policy support for coordination and/or community building | Investment in public transport integration/civil provisioning; support for public events | Attendance at public events; collaboration measurements; public engagement/citizen science measures |
| Communication | Open-access policy; data management policy; public engagement/comms policy; communication in core documents | Open-access funding; data management and repository support; support for wider communications | Percent of open-access outputs; data shared and archived; public access of outputs; public engagement |

The consideration of how these different areas relate to each other comes into focus when we move to the next phase of our theory of change. As soon as a university invests resources, whether time or money, there are choices to be made about where those resources are deployed. Do you invest in paying article processing charges to deliver open access, or is that money better spent on childcare provision? While this may appear a contrived example, such choices are often made implicitly and without consideration of an overall strategy. Katie Wilson and colleagues (2019) point out that strong performance on open access to formal research outputs is not strongly correlated with the degree of public access universities provide to the physical resources their libraries hold, illustrating a potential gap in the strategic thinking around information access.

How do universities need to change so as to find synergies between these investments? How can supporting the public to physically enter a library enhance their access to the digital open-access outputs of the university? How might investment in childcare also provide a connection to user communities for relevant research? How does the provision of support for open access and childcare build and strengthen those connections?

Policy and aspiration usually address single areas for change and not the combination. When investments are made, priorities need to be set. The systems for decision making contribute to the process of cultural and institutional change. A fully functioning OKI will have cultural and institutional forms that work to hold the three areas in tension, providing an optimal (but probably not the globally optimal) outcome for the organization in its current environment.

This leads us to the final phase: an imagined organization in which culture and institutional forms hold all these issues in tension. There is no single correct solution, but rather behaviors and practices that help to optimize the overall position as a whole. Just as in chapter 3 we talk about a shift in that optimum as a result of societal and technological change, here we see how culture and institutions (in the political economy sense) need to be built and sustained so as to hold these conflicting requirements in tension.

Figure 8.2

The journey toward an OKI.

The key questions we need to ask, therefore, are these: How are policy, investment, internal evaluation, and environmental change contributing to this institution and culture building? How is this complex of competing factors being harnessed toward those goals? What works? What does not? This places evaluation firmly within the process of change, but also illustrates how on its own—as with policy making and investment—it is not enough.

A Forward-Looking and Open Framework

Most existing evaluation frameworks for institutions today look backward, based on the reporting of a limited set of outputs. Here we want to assess the orientation of a university to an unknown future, where new communities are engaged in ways that are difficult to predict. The explicit challenge to doing so is that any fixed framework for evaluation will be inherently conservative.

There is much interest in predictive analytics of academic work, but little evidence that these do more than reinforce existing power structures, rents, and inequities. Systems that predict future performance from currently available data, reflecting present preoccupations and past trajectories of success, do little to help us challenge and diversify existing systems or their closed nature.

If we only examine the instances of success found in our own local traditions, we can easily miss developments in other spaces and systems. The examples of Action Dialogues in South Africa have already been discussed. Another illustration is SciELO, the open-access publishing platform in Latin America, which remains relatively unknown outside South America and southern Africa. More than that, many of the activities and practices that align with OKIs may be deliberately hidden within the institution, operating under the radar to avoid scrutiny based on more traditional objectives.

We will need to identify institutions that bring those activities to the forefront, both locally and more widely. We will need a model that helps us identify the signals that an institution is supporting these activities. This framework is only a start along that journey. It needs to be open in itself, and any implementation that is applied to evaluation needs to allow for new signals or more relevant information to be added where appropriate.

If a path toward becoming an OKI can only be discovered step by step, then no single framework can provide simple evaluative answers. Equally, there is potential for a diversity of paths taken by a diversity of universities in a diversity of contexts. This does not mean that evaluation is impossible, nor that progress on a selected set of indicators is not valuable. It simply means that the selection of indicators will depend on context as well as the goals and values of each institution.

A framework can supply a means of aiding the process of indicator selection. It can even help in comparing and contrasting the progress of different institutions. In this sense our work aligns with best-practice efforts such as Research Quality Plus, developed by the Canadian International Development Research Center (2018; Lebel and McLean 2018), the HuMetricsHSS project (n.d.), and the European Expert Group on Indicators for Open Science (Schomberg et al. 2020). All of these focus on the role of frameworks in guiding the selection of indicators for specific evaluations.

OKIs evolve over time, and their evaluation approaches must evolve with them. A commitment to openness, a state of poise between chaos and control, requires constant calibration and reevaluation—not least in the processes of calibration and evaluation themselves. It is not possible to achieve openness by measuring it in a closed way.
