Fig. 2: Four types of research in the humanities. Commonalities across the disciplines. Source: Ochsner et al. (2013), p. 86

Source publication
Chapter
The assessment of research performance in the humanities is linked to the question of what humanities scholars perceive as ‘good research’. Even though scholars themselves evaluate research on a daily basis, e.g. while reading other scholars’ research, not much is known about the quality concepts scholars rely on in their judgment of research. This...

Contexts in source publication

Context 1
... Hence, interdisciplinarity, cooperation and public orientation are not indicators of quality but of the 'modern' conception of research. It is notable that there is no clear preference for either conception of research (the 'traditional' conception received slightly more positive ratings). Hence, we can find four types of humanities research (see Fig. 2): (1) positively connoted 'traditional' research, which describes the individual scholar working within one discipline, who as a lateral thinker can trigger new ideas; (2) positively connoted 'modern' research characterized by internationality, interdisciplinarity and societal orientation; (3) negatively connoted 'traditional' ...
Context 2
... interdisciplinarity and societal orientation; (3) negatively connoted 'traditional' research that, due to strong introversion, can be described as monotheistic, too narrow and uncritical; and finally (4) negatively connoted 'modern' research that is characterized by pragmatism, career aspirations, economization and pre-structuring (see Fig. ...

Citations

... "social vulnerability"metrics are suggested and used, findings are generalised and "best" practices are translated into "universal" guiding principles. In other words, methodological expectations of "quality research" have their origin in the natural sciences that seldom recognise that conception of knowledge creation relies on the coexistence of competing ideas and the expansion of knowledge (Ochsner et al., 2016). ...
Article
Purpose In this position piece, the authors will reflect on some of their recent experiences with the peer-review process in disaster studies and show how debate can so easily be stifled. The authors write it as a plea for healthy academic argumentative discussion and intellectual dialogue that would help all of us to refine our ideas, respect others’ ideas and learn from each other. Design/methodology/approach The authors reflect on their own experiences. All the examples here are based on the anonymous (double-blinded) peer reviews that the authors have received in the past two years in response to papers submitted to disaster-related journals. Findings The authors show that the grounds for rejection often have nothing to do with the rigour of the research but are instead based on someone's philosophy, beliefs, values or opinions that differ from those of the authors, and which undermine the peer-review process. Research limitations/implications There is so much potential in amicable and productive disagreements, which means that we can talk together – and through this, we can learn. Yet, debate in its purest academic sense is a rare beast in disaster scholarship – largely because opposing views do not get published. Originality/value The authors call for ideological judgement and self-interest to be put aside when peers' work is reviewed – and for intellectual critique to be used in a productive way that would enhance rather than stifle scholarship.
... Until recently, research outputs at the university, faculty, and individual levels have been assessed primarily according to peer-reviewed publications and their citation analysis to illustrate quantifiable, material, and tangible results (Toledo 2018; Robinson-Garcia, van Leeuwen, and Rafols 2018). But this approach has never favored the humanities, for which journal impact factors and overall citation indices tend to be lower, with studies often focused on more localized contextual issues or published through detailed archives or manuscripts, the visible outcomes of which may be long-term (Hammarfelt and Haddow 2018; Ochsner, Hug, and Daniel 2016). ...
... Humanities scholars have continually struggled to accept the approaches used to evaluate research impact according to quantitative metrics, especially those based on traditional citation analysis (Hammarfelt and Haddow 2018; Ochsner, Hug, and Daniel 2016). The humanities are diverse, and research can be presented through a multitude of communication channels – including books, book chapters, monographs, art, music, digital visualizations, among others – as opposed to the more classical peer-reviewed journal articles of the "hard" sciences, and consequently have received limited recognition and coverage on academic databases (Lemke et al. 2019; Toledo 2018). ...
Article
During the twenty-first century, for the first time, the volume of digital data has surpassed the amount of analog data. As academic practices increasingly become digital, opportunities arise to reshape the future of scholarly communication through more accessible, interactive, open, and transparent methods that engage a far broader and more diverse public. Yet despite these advances, the research performance of universities and public research institutes remains largely evaluated through publication and citation analysis rather than by public engagement and societal impact. This article reviews how changes to bibliometric evaluations toward greater use of altmetrics, including social media mentions, could enhance uptake of open scholarship in the humanities. In addition, the article highlights current challenges faced by the open scholarship movement, given the complexity of the humanities in terms of its sources and outputs that include monographs, book chapters, and journals in languages other than English; the use of popular media not considered as scholarly papers; the lack of time and energy to develop digital skills among research staff; problems of authority and trust regarding the scholarly or non-academic nature of social media platforms; the prestige of large academic publishing houses; and limited awareness of and familiarity with advanced digital applications. While peer review will continue to be a primary method for evaluating research in the humanities, a combination of altmetrics and other assessment of research impact through different data sources may provide a way forward to ensure the increased use, sustainability, and effectiveness of open scholarship in the humanities.
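The combined use of citation counts and altmetric mentions that this article advocates can be pictured with a brief sketch. This is not the article's method, only a minimal illustration: the titles, counts, and equal weighting below are invented assumptions.

```python
# Illustrative sketch: merging citation counts and social-media mentions
# into one standardized composite score. All titles, counts, and the
# 50/50 weighting are hypothetical assumptions, not data from the article.
from statistics import mean, pstdev

outputs = [
    {"title": "Monograph A", "citations": 4, "mentions": 120},
    {"title": "Journal article B", "citations": 35, "mentions": 8},
    {"title": "Book chapter C", "citations": 2, "mentions": 45},
]

def zscores(values):
    """Standardize raw counts so differently scaled sources are comparable."""
    mu, sigma = mean(values), pstdev(values)
    return [(v - mu) / sigma if sigma else 0.0 for v in values]

cit = zscores([o["citations"] for o in outputs])
alt = zscores([o["mentions"] for o in outputs])

for o, c, a in zip(outputs, cit, alt):
    o["composite"] = 0.5 * c + 0.5 * a  # equal weights: an arbitrary choice

for o in sorted(outputs, key=lambda x: x["composite"], reverse=True):
    print(f"{o['title']}: composite = {o['composite']:.2f}")
```

Under such a scheme, a monograph with few citations but many public mentions can rank alongside a highly cited journal article, which is the kind of visibility the article argues the humanities currently lack.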
... Up to now, while there are several indicators and sets of criteria, no multidisciplinary instrument for measuring research performance on a personal level exists (Ochsner, Hug, and Daniel 2016b). In order to derive an adequate set of criteria for the inter-individual and multidisciplinary measurement of research performance of scientists, we will consider different quantitative and qualitative indicators that are typically used, and discuss their advantages and disadvantages. ...
... The results of several studies conducted by Ochsner, Hug, and Daniel (2012, 2014, 2016b) also emphasized that an evaluation of research quality based on quantitative indicators is limited in the social sciences and humanities. The authors argue, however, that an evaluation of research quality by means of qualitative criteria is promising within those disciplines, 'if a broad range of quality criteria are applied' (Ochsner, Hug, and Daniel 2016b: 64). ...
... Throughout a series of studies, the following criteria were agreed on by experts in German literature studies, English literature studies, as well as art history: (1) scholarly exchange, (2) innovation/originality, (3) rigour, (4) fostering cultural memory, (5) impact on research community, (6) connection to other research, (7) openness, (8) erudition, (9) enthusiasm, (10) vision, and (11) connection between research and teaching. The authors also developed a first version of a questionnaire to assess these aspects of research quality within the humanities (Ochsner, Hug, and Daniel 2016b). Since such qualitative indicators are considered to be particularly important in the humanities but also appear to be central to research quality in natural sciences, it consequently seems suitable to include them to assess research quality across different disciplines. ...
Preprint
Research is often specialised and varies in its nature between disciplines, making it difficult to assess and compare the performance of individual researchers. Specific qualitative and quantitative indicators are usually complex and do not work equally well for different research fields. Therefore, the aim of the present study was to develop an economical questionnaire that is valid across disciplines. We constructed a Short Multidisciplinary Research Performance Questionnaire (SMRPQ), with which researchers can briefly report 11 quantitative and qualitative performance aspects from four areas (research quality, facilitation, transfer/exchange, and reputation) in relation to their peer reference groups (fellow researchers with the same status and discipline). To validate this questionnaire, 557 German researchers from Physics, History, and Psychology fields (53% male, 34% post-docs, 19% full professors) completed it, and for the purpose of convergent and discriminant validation additionally made assessments regarding specific quantitative and qualitative indicators of research performance as well as affective, cognitive, and behavioural aspects of their research activities (perceptions of positive affect, help-seeking, procrastination). The results attested reliable measurement, endorsed the postulated structure of the newly developed instrument, and confirmed its invariance across the three disciplines. The SMRPQ and the validation measure were strongly positively correlated, and both demonstrated similar associations with affect, cognition, and behaviour at work. Therefore, it can be considered a valid and economical approach for assessing research performance of individual researchers across different disciplines, especially within nomothetic research (e.g. regarding personal antecedents of successful research).
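The convergent validation reported in this abstract (a strong positive correlation between SMRPQ scores and a separate validation measure) boils down to a correlation analysis. A minimal sketch follows; the simulated responses and noise levels are assumptions, not the study's data — only the sample size is taken from the abstract.

```python
# Sketch of the convergent-validation step: correlating questionnaire
# scores with an external validation measure. The data are simulated;
# only the sample size (n = 557) comes from the abstract.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
n = 557

latent = rng.normal(size=n)                          # unobserved 'true' performance
smrpq = latent + rng.normal(scale=0.5, size=n)       # SMRPQ self-report score
validation = latent + rng.normal(scale=0.5, size=n)  # external indicator set

r, p = pearsonr(smrpq, validation)
print(f"convergent validity: r = {r:.2f} (p = {p:.2g})")
```

A high r here would be read as convergent validity; discriminant validity would additionally require weak correlations with conceptually unrelated measures.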
... This has been due in part to the varied and multifaceted nature of research outputs – books, manuscripts, poetry, creative writing, maps, photographs, art, to news, entertainment, and many other kinds of texts (including in languages other than English) – which often makes their presentation in accessible open formats more costly and complex (Gross and Ryan, 2015; Montgomery et al., 2018; Narayan et al., 2018). Moreover, the academic reward system has never favoured the humanities, where overall citation indices tend to be lower, with studies often focused on more localized contextual issues, or on detailed archives or manuscripts where visible outcomes may be long term (Ochsner et al., 2016; Hammarfelt and Haddow, 2018). Whereas the fields of physics and mathematics have had their own subject-specific open access repository, arXiv, and the biomedical sciences have been supported through the PubMed Central digital archiving repository, allowing readers free access to either pre-print or post-print versions, it is clear that the humanities have not yet created a publication 'culture' focused on the use of open digital repositories (Gross and Ryan, 2015). ...
Article
Open scholarship encompasses open access, open data, open source software, open educational resources, and all other forms of openness in the scholarly and research environment, using digital or computational techniques, or both. It can change how knowledge is created, preserved, and shared, and can better connect academics with communities they serve. Yet, the movement toward open scholarship has encountered significant challenges. This article begins by examining the history of open scholarship in Australia. It then reviews the literature to examine key barriers hampering uptake of open scholarship, with emphasis on the humanities. This involves a review of global, institutional, systemic, and financial obstacles, followed by a synthesis of how these barriers are influenced at diverse stakeholder levels: policymakers and peak bodies, publishers, senior university administrators, researchers, librarians, and platform providers. The review illustrates how universities are increasingly hard-pressed to sustain access to publicly funded research as journal, monograph, and open scholarship costs continue to rise. Those in academia voice concerns about the lack of appropriate open scholarship infrastructure and recognition for the adoption of open practices. Limited access to credible research has led, in some cases, to public misunderstanding about legitimacy in online sources. This article, therefore, represents an urgent call for more empirical research around ‘missed opportunities’ to promote open scholarship. Only by better understanding barriers and needs across the university landscape can we address current challenges to open scholarship so research can be presented in usable and understandable ways, with data made more freely available for reuse by the broader public.
... Evaluation of the social sciences and humanities should rely on different sources and criteria of research quality (Giménez-Toledo et al. 2017; Ochsner et al. 2016). At the same time, one might consider the possibility of changing the set of obligatory databases given in the first category of the assessment criteria. ...
Article
This study investigates what names of metrics or databases researchers use to present their research portfolio and how their use is influenced by the field. I have analysed data comprising 3,695 self-presentation documents (82,710 pages) from various academic promotion procedures in Poland. My study aims to determine the differences in the use of scientometric indicators across all fields of science. I have used 21 codes (names of metrics and databases) for coding all documents, analysed the patterns of scientometric indicator use, and found that there is a significant relation between publication patterns and patterns of scientometric indicator use. My analyses reveal that researchers in ‘Hard Sciences’ (except for mathematics) very often use metrics to describe their output, researchers in ‘Soft Sciences’ (except for economics) only occasionally use metrics, and scholars from ‘Arts’ hardly ever use metrics. My most noteworthy finding highlights that patterns of scientometric indicator use are related to the publication patterns in the given field. I conclude with several recommendations for various research policies and show what metrics could be used and expected in promotion procedures in various fields.
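The study's central claim — that metric use depends on field — is, in essence, an association in a contingency table. A hedged sketch of such a test follows; the counts are invented for illustration and are not the study's coded data.

```python
# Sketch: testing whether use of scientometric indicators depends on
# field via a chi-squared test of independence. Counts are invented;
# the actual study coded 3,695 documents with 21 codes.
import numpy as np
from scipy.stats import chi2_contingency

# rows: field groups; columns: documents that do vs. do not cite metrics
table = np.array([
    [420,  80],   # 'Hard Sciences': metrics used very often
    [150, 350],   # 'Soft Sciences': metrics used occasionally
    [ 20, 480],   # 'Arts': metrics hardly ever used
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.3g}")
```

A small p-value with counts patterned like these would mirror the reported relation between publication patterns and indicator use.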
... Therefore, in the humanities, the lack of citation data remains a known problem, lamented several times over [Heinzkill, 1980; Linmans, 2009; Sula and Miller, 2014]. For these and other reasons, the use of citations as a means to evaluate research in the humanities has also been questioned [Thelwall and Delgado, 2015; Ochsner et al., 2016a], with alternatives being proposed [Hammarfelt, 2014; Hug et al., 2014; Marchi and Lorenzetti, 2015; Ochsner et al., 2016b; Diaz-Faes and Bordons, 2017; Thelwall, 2017]. In any event, it appears clear that the availability of citation data would not completely solve the issue of research evaluation in the humanities [Hammarfelt, 2017]. ...
Thesis
A tradition of scholarship discusses the characteristics of different areas of knowledge, in particular after modern academia compartmentalized them into disciplines. The academic approach is often put to question: are there two or more cultures? Is an ever-increasing specialization the only way to cope with information abundance, or are holistic approaches helpful too? What is happening with the digital turn? If these questions are well studied for the sciences, our understanding of how the humanities might differ in their own respect is far less advanced. In particular, modern academia might foster specific patterns of specialization in the humanities. Eventually, the recent rise in the application of digital methods to research, known as the digital humanities, might be introducing structural adaptations through the development of shared research technologies and the advent of organizational practices such as the laboratory. It therefore seems timely and urgent to map the intellectual organization of the humanities. This investigation depends on a few traits, such as the level of codification, the degree of agreement among scholars, and the level of coordination of their efforts. These characteristics can be studied by measuring their influence on the outcomes of scientific communication. In particular, this thesis focuses on history as a discipline, using bibliometric methods. In order to explore history in its complexity, an approach to creating collaborative citation indexes in the humanities is proposed, resulting in a new dataset comprising monographs, journal articles and citations to primary sources. Historians’ publications were found to organize thematically and chronologically, sharing a limited set of core sources across small communities. Core sources act in two ways with respect to the intellectual organization: locally, by adding connectivity within communities, or globally, as weak ties across communities. Over recent decades, fragmentation is on the rise in the intellectual networks of historians, and a comparison across a variety of specialisms from the human, natural and mathematical sciences revealed the fragility of such networks across the axes of citation and textual similarities. Humanists organize into more, smaller and more scattered topical communities than scientists. A characterization of history is eventually proposed. Historians produce new historiographical knowledge with a focus on evidence or interpretation. The former aims at providing the community with an agreed-upon factual resource. Interpretive work is instead mainly focused on creating novel perspectives. A second axis refers to two modes of exploration of new ideas: in-breadth, where novelty relates to adding new, previously unknown pieces to the mosaic, or in-depth, where novelty happens by improving on previous results. While all combinations are possible, historians tend to focus on in-breadth interpretations, with the immediate consequence that growth accentuates intellectual fragmentation in the absence of further consolidating factors such as theory or technologies. Research on evidence might have a different impact by potentially scaling up in the digital space, and in so doing influence the modes of interpretation in turn. This process is not dissimilar to the gradual rise in importance of research technologies and collaborative competition in the mathematical and natural sciences. This is perhaps the promise of the digital humanities.
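The network findings summarized here (small topical communities, with core sources acting locally as connectors or globally as weak ties) rest on standard community detection over citation data. A toy sketch is given below; the graph, node names, and algorithm choice are illustrative assumptions, not the thesis's actual pipeline.

```python
# Toy sketch: community detection on a small citation/co-citation graph,
# the kind of analysis behind the fragmentation findings. The graph and
# labels are invented; modularity-based detection is one possible choice.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.Graph()
# two tightly knit topical clusters of publications...
G.add_edges_from([("p1", "p2"), ("p2", "p3"), ("p1", "p3"),
                  ("p4", "p5"), ("p5", "p6"), ("p4", "p6")])
# ...bridged only through a shared core source (a weak tie across them)
G.add_edges_from([("core_source", "p1"), ("core_source", "p4")])

communities = greedy_modularity_communities(G)
print(len(communities), "communities:", [sorted(c) for c in communities])
```

On a real dataset, tracking the number and size of such communities over time is one way to quantify the rising fragmentation the thesis describes.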
... Given the results regarding originality, however, it is likely that such differences do exist. The project "Developing and Testing Quality Criteria for Research in the Humanities" (Ochsner et al., 2016) applied a strict bottom-up approach and developed a framework for the exploration and development of quality criteria for SSH research (Hug and Ochsner, 2014) that consists of four pillars: adopting an inside-out approach (adequate representation of the scholarly community, including young scholars, in the development process; discipline-specific criteria), applying a sound measurement approach (linking indicators to quality criteria derived from the scholars' notions of quality), making the notions of quality explicit (applying methods that can elicit criteria from the scholars' tacit knowing of research quality to draw a comprehensive picture of what research quality is in a given discipline; making transparent which quality aspects are measured or included in the assessment and which are not), and striving for consensus (methods and especially criteria to be applied in research assessment have to be accepted by the community). This framework was applied to three humanities disciplines known to be difficult to assess with scientometric methods: German literature studies, English literature studies and art history. ...
... The project lasted from 2008 to 2012 and was followed by a second project during the period 2013 to 2016. In these two projects, several bottom-up initiatives were funded that researched such diverse topics as, amongst others (for a complete overview of the projects, see Loprieno et al., 2016), profiling in communication sciences (Probst et al., 2011), cooperation of research teams with university partners as well as external stakeholders (Perret et al., 2011), notions of quality of literature studies and art history scholars (Ochsner et al., 2016), evaluation procedures and quality conceptions in law studies (Lienhard et al., 2016), and academic reputation and networks in economics (Hoffmann et al., 2015). At the same time, the Swiss Academy of Humanities and Social Sciences (SAGW) started a bottom-up initiative on reflections on research assessment in SSH disciplines. ...
Article
Research assessment in the social sciences and humanities (SSH) is delicate. Assessment procedures meet strong criticisms from SSH scholars and bibliometric research shows that the methods that are usually applied are ill-adapted to SSH research. While until recently research on assessment in the SSH disciplines focused on the deficiencies of the current assessment methods, we present some European initiatives that take a bottom-up approach. They focus on research practices in SSH and reflect on how to assess SSH research with its own approaches instead of applying and adjusting the methods developed for and in the natural and life sciences. This is an important development because we can learn from previous evaluation exercises that whenever scholars felt that assessment procedures were imposed in a top-down manner without proper adjustments to SSH research, it resulted in boycotts or resistance. Applying adequate evaluation methods not only helps foster a better valorization of SSH research within the research community, among policymakers and colleagues from the natural sciences, but it will also help society to better understand SSH’s contributions to solving major societal challenges. Therefore, taking the time to encourage bottom-up evaluation initiatives should result in being able to better confront the main challenges facing modern society. This article is published as part of a collection on the future of research assessment.
Article
In this article we attempt to reconstruct the tacit and implicit notions of quality in the humanities. This reconstruction is based on a series of semi-structured qualitative interviews with 33 humanities scholars. Applying Max Weber’s theory of authority, we argue that the quality notions have two different sources: external and internal. External sources correspond to Weber’s types of authority: traditional authorities (academic tradition, professors, PhD advisors), rational-legal authorities (research administrators, policy makers) and charismatic authorities (‘the great minds’, ‘the founding fathers’ in a given academic field). Internal sources providing the quality notion do not fit into the Weberian classification. These sources are based on the personal experience of a humanities researcher’s evaluation practices, which cannot be reduced to either type of authority above. Combining the interview data and Max Weber’s theory of authority, we try to demonstrate the existence of four different and sometimes incompatible notions of quality in the humanities: administrative; individual; semi-administrative, semi-individual; and moderate individual. These notions are interpreted as ideal types, which serve as regulative ideas rather than objective representations of research evaluation reality. The manuscript is important to Research Evaluation for the following reasons: first, it reconstructs the different types of notions of quality, which are crucial for better understanding peer review and other qualitative research evaluation practices; second, it provides a better understanding of individual evaluators’ premises; third, it provides an opportunity to glimpse beyond dominant administrative quality notions and criteria, as usually the perspectives of humanities researchers are neglected.
Article
The present article seeks to discern the criteria for the quality of research formulated and accepted within the scholarly community of the humanities. We argue that scholars implicitly use these criteria in opposing administrative evaluation. The analysis of these criteria revealed that they might be summarized in three broad categories: the novelty (originality, innovativeness) of the research; the excellence of the researcher (the ability to conduct and describe the research); and the impact (academic as well as social-political). We argue that relevant criteria for the administrative evaluation of research in the humanities should draw on these perspectives.