7.6.5 Analyzing and evaluating artistic research
Unlike in academic research, the registration and evaluation of the outcomes of artistic research are not harmonized across the Flemish higher education space. Even though every association requires its doctoral candidates in the arts to register and store their publication output in institutional repositories for inclusion in the BOF-key, there are considerable variations in the instruments used to monitor those arts and design research outcomes that diverge from traditional academic formats (e.g. peer-reviewed journal articles; conference papers). Because information about the outcomes of artistic research is stored in differently organized repositories and its registration does not currently follow uniform entry protocols across institutions, few conclusive observations can be made about the material outcomes of fifteen years of Flemish research in the arts. Given the variation in the status of the written dissertation as a prerequisite for obtaining a doctoral title across the Flemish university associations, even the PhD thesis does not constitute an unambiguous object through which to systematically address the results of artistic research beyond analyses of broad metadata categories.
As a result, the funding scheme for artistic research currently does not include a variable ex-post component intended to allocate resources based on performance indicators. Insofar as variable funding allocation is practiced in contexts other than Flanders, it is mostly operationalized through qualitative assessment instruments such as the UK’s REF [Research Excellence Framework], Australia’s ERA [Excellence in Research for Australia], or the Swedish Research Council’s (SRC’s) periodic evaluation of support for artistic research by expert committees. Even though there is little consensus on how excellence and quality can be adequately assessed where artistic research is concerned, involved parties in most higher education contexts agree that it makes little sense to simply integrate arts and design research outcomes into existing bibliometric monitoring tools and assessment instruments. Poland stands out as a solitary counterexample, however: its CESU [Comprehensive Evaluation of Scientific Units] includes quantitative indications of artistic researchers’ performance using weighted outcome measures, implemented in the same PRFS [Performance-Based Research Funding System] by which ‘academic’ outcomes are assessed. But the fact that even time- and labour-intensive outcomes of artistic research (e.g. a symphony; a screenplay) are valued below scholarly publications in scientific journals with average impact factors demonstrates that it is difficult, and perhaps undesirable, to liken arts and design research outcomes to scholarly realisations in an integrated assessment tool.
Yet even though the application of quantitative and metric logics to artistic research is rarely advocated, the field could certainly benefit from systematic registration. Currently, the lack of tools to structurally document and disclose the outcomes of artistic research, combined with the ephemeral nature of at least some of its possible realisations (e.g. performances; temporary exhibitions), obscures the results of research in the arts and leaves the accomplishments of individual researchers, and of the institutions in which they are embedded, difficult to discern at best. Future steps in the institutionalization of artistic research in Flanders must therefore, on the one hand, recognize the inappropriateness of metric approaches to the assessment of artistic research, while on the other hand emphasizing the need to disclose artistic research outcomes, and the investigative trajectories in which they are situated, to the benefit of the field’s future.