7.5.3 Suggestions for research assessment and evaluation of interdisciplinarity in the context of the SSH
To tackle the difficulties which arise in an interdisciplinary research assessment context, seven evaluation principles are proposed by Klein (2008) in her review on the subject. These principles distil Klein's many years of experience with inter- and transdisciplinary research and policy-making, as well as experience drawn from the research management and policy systems currently in place. Here we recapitulate these seven principles and briefly discuss each one in the context of IDR in the SSH. While every project proposal or research outcome is unique, these seven generic principles can serve an important function when designing evaluation or research assessment procedures for the social sciences.
- Variability of goals. To begin with, not all disciplines in the SSH pursue the same goals. It follows that individual researchers from these different disciplines will behave differently. Whereas scholars active in more traditional disciplines might have the ambition to create new knowledge about a topic central to their disciplines, researchers from sub-disciplines like feminist studies or area studies might have the ambition to empower certain groups of people. The same holds true for interdisciplinary research projects. For some, “the production of new and broad knowledge of a particular phenomenon” is important, while for others “the development of technical equipment or products” is the main goal (Klein, 2008).
- Variability of criteria and indicators. The previous principle “drives the variability of criteria and indicators” (Klein, 2008). More traditional indicators, such as the number of publications or citations, are not equally applicable to all disciplines. When it comes to communicating research, some social science disciplines or specialties value journal publications more highly, while others value books more as outputs. The same goes for interdisciplinary research. While some projects might be concerned with societal change, others will be directed towards the development of new scientific methods or techniques to approach a research problem. It goes without saying that these sociostructural differences, as well as the differences in perceived goals, should be taken seriously by panel members when assessing project proposals and their submitters. Societal impact, for example, should not be assessed with bibliometric indicators only.
- Leveraging integration. Knowledge integration is considered to be central to interdisciplinarity. It is therefore crucial to take into account the degree to which initiatives are taken to accomplish or ‘leverage’ this goal. Klein cites the organization of structural support to allow for integration, such as opportunities for communication (meetings among researchers), the development of a common vocabulary, etc. A set of guiding questions has been developed by Klein to take stock of this aspect.
- Interactions of social and cognitive factors in collaboration. Interdisciplinary research, like all research, is a social process. Leveraging ‘intellectual integration’ (the previous principle) is a social endeavour and, according to Klein and others, communication and negotiation form the core of this endeavour.
- Management, leadership, and coaching. This principle underscores the third principle, the importance of “how well the organizational structure fosters communication”. Leadership is an important aspect in this regard, and should thus be taken into consideration when an interdisciplinary research project entails complex collaborations among researchers from different (and disparate) disciplines.
- Iteration and transparency in a comprehensive system. According to Klein, a strictly linear evaluation model is not appropriate for the assessment of interdisciplinary research. IDR in many cases develops in different phases and iterates over these phases. In an early phase, principles 4 and 5 might be very important and thus deserve more attention when intermittent assessments are carried out. In a later stage, when an IDR project comes to an end, indicators for research output or impact might become more important. Transparency ensures that evaluators and those being evaluated are aware of which criteria are used at which stage. Ideally, Klein suggests, both evaluators and those who are evaluated get involved in defining appropriate indicators for their goals.
- Effectiveness and impact. The principle of effectiveness and impact returns to the first two principles. The impact of IDR is often “diffused, delayed in time, and dispersed across different areas of study and patterns of citation practice” (Boix Mansilla, 2006). Thus, the assessment of IDR requires thorough consideration and ideally takes into account potential but unpredictable long-term impacts.
Most of these principles require an active conversation among those who conduct IDR and those who evaluate it. Appropriate evaluation, Klein states, is not given but made: “It evolves through a dialogue of conventional and expanded indicators of quality”. As we discussed earlier, this is because ‘peers’ in the traditional sense are largely lacking in the case of interdisciplinarity. As such, “there is no consensus on the legitimate sources and types of control over [IDR]” (Huutoniemi & Rafols, 2016). A co-creation model of evaluation procedures guided by these principles might lead to more appropriate research assessment practices for IDR.
We have pointed out that we should first approach the scientific system in terms of dynamics of change. With regard to quantitative approaches discussed above, a first step consists of adequately mapping the scientific system. While citation-based approaches are immensely useful, they can be problematic if applied unthinkingly to the SSH, because the most commonly used data sources have serious coverage problems for (parts of) the SSH.
Science maps open up the possibility of studying changes in the disciplinary system as a whole and will allow us to come up with more adequate and dynamic approaches to IDR. The increase in data availability (e.g., more textual data) will allow researchers to take into account not only journal articles but also other text-based research outputs when drawing these maps. Sidestepping the need for predefined science classifications, a bottom-up text-based approach which makes use of document similarity methods and clustering, for instance, could yield important insights into the SSH landscape. In an evaluation context, these methods allow research administrators or policy advisors to locate research or researchers on the boundaries of established fields and disciplines – the cognitive areas where knowledge integration takes place.
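To make the bottom-up text-based approach more concrete, the following is a minimal, self-contained sketch of one possible pipeline: TF-IDF weighting of document texts, cosine similarity between the resulting vectors, and single-link grouping of documents whose similarity exceeds a threshold. This toy illustration is our own simplification, not the methodology of any particular science-mapping study; real science maps work with large corpora and more sophisticated tokenization, similarity measures, and clustering algorithms.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Compute a TF-IDF weight vector (term -> weight) for each document."""
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter()                      # document frequency per term
    for tokens in tokenized:
        df.update(set(tokens))
    n = len(docs)
    vectors = []
    for tokens in tokenized:
        tf = Counter(tokens)
        vectors.append({t: (tf[t] / len(tokens)) * math.log(n / df[t])
                        for t in tf})
    return vectors

def cosine(a, b):
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster(docs, threshold=0.2):
    """Group documents by single-link clustering: any pair with
    similarity >= threshold ends up in the same group (union-find)."""
    vectors = tfidf_vectors(docs)
    parent = list(range(len(docs)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    for i in range(len(docs)):
        for j in range(i + 1, len(docs)):
            if cosine(vectors[i], vectors[j]) >= threshold:
                parent[find(i)] = find(j)

    groups = {}
    for i in range(len(docs)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```

On a toy corpus of four short abstracts (two on survey-based social media research, two on archival medieval history), `cluster` recovers the two thematic groups without any predefined classification; documents near the boundary of two groups, whose similarities to both fall close to the threshold, are exactly the boundary cases the paragraph above is interested in.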