
Governing Science

How Science Policy Shapes Research Content

Published online by Cambridge University Press:  07 April 2016

Jochen Gläser
Affiliation: Technische Universität Berlin
Grit Laudel
Affiliation: Technische Universität Berlin

Abstract

This review explores contributions by science policy studies and the sociology of science to our understanding of the impact of governance on research content. Contributions are subsumed under two perspectives: an "impact of" perspective that searches for effects of specific governance arrangements, and an "impact on" perspective that asks what factors contribute to the construction of research content and includes governance among them. Our review shows that little is known so far about the impact of governance on knowledge content. A research agenda does not necessarily need to include additional empirical phenomena, but it must address the macro-micro-macro link inherent in the question in its full complexity and systematically exploit comparative approaches in order to establish causality. This requires interdisciplinary collaboration between science policy studies, the sociology of science, and bibliometrics, all of which can contribute to the necessary analytical toolbox.


Copyright © A.E.S. 2016

1. Two fields, too little interaction

With this review, we want to explore contributions by science policy studies and the sociology of science to our understanding of the impact of governance on research content. This is not a straightforward task because the two fields are usually considered as separate and moving apart rather than converging. Recent reviews of science policy studies, or science policy and innovation studies, see them as detached from the sociology of science [Martin 2012; Martin et al. 2012; Trousset 2014]. Jasanoff's [2010] perspective is an exception because she sees both fields as belonging to the large interdisciplinary enterprise of science, technology and society (STS) studies, albeit without specifying any interactions between the fields that would justify this assessment. Bibliometric studies found relatively few mutual citations between the fields, and agree that the sociology of science, science policy studies and bibliometrics have moved apart since the 1970s [Van den Besselaar 2000; 2001; Bhupatiraju et al. 2012]. This separation is confirmed by the limited impact of a first attempt to establish a political sociology of science [Blume 1974], of an attempt to link science policy decisions on research funding to the interests of science studies [Cozzens 1986], and of an attempt to re-open a dialogue between bibliometrics and the sociology of science [Leydesdorff 1989]. A more recent attempt to revive the idea of a political sociology of science [Frickel and Moore 2005b] appears to be more successful, possibly owing to the intense interest in politics developed by "STS as movement" [Rip 1999; for an illustration see Woodhouse et al. 2002]. However, the initiators of this revival also observe that the "new political sociology of science" constitutes a separate body of scholarship [Frickel and Moore 2005b: 7].

There are scientific reasons for this separation between sociological and policy studies of science. The sociology of science went through several "turns", all of which share a micro-sociological focus that became increasingly difficult to integrate with the macro-level concerns of science policy studies and bibliometrics. As the two fields grew apart and evolved, their communications turned inwards. The sociology of science is part of what is now called STS and appears to include the sociology of scientific knowledge and a "new political sociology of science." Science policy studies are considered part of science and innovation policy studies or science and technology policy studies. Although the two fields share many empirical objects and some empirical methods, they differ in their research interests, approaches and framing of results. While we feel unable to resolve this terminological diversity, we can start from the observation that there is a stream of research focusing on the effects of policy processes and a stream of research focusing on the construction of scientific knowledge, both of which are sufficiently well integrated internally to be considered as fields.

One reason why the two fields should talk to each other is the existence of the common research question that motivates our review: how does governance change research content? This question addresses a causal chain (more precisely, a web of causal links) that connects the domains of both fields. It is sufficiently complex and challenging to require a combination of expertise and efforts from both fields. Science policy studies must answer it because most of the policy processes they study aim at changing the direction or quality of research. Conversely, sociological studies of the links between conditions and outcomes of the social construction of scientific knowledge miss an important factor shaping knowledge if the governance of science is excluded from scrutiny.

The need to include factors from the domain of the other field has occasionally been admitted. Mayntz and Schimank [1998: 753] argued that in order to understand the mechanisms that channel external expectations towards science, the "performance level of the science system" needs to be included in the analysis of policy processes. More recently, Miller and Neff [2013: 299] confirmed that this remains a lacuna by noting that "[p]robably in part because of the inherent messiness of the inner-workings of scientific communities and their settings, most S&T policy scholars have focused primarily on evaluating the inputs and outputs of science."

For the sociology of science, Cozzens noted as early as 1986:

The primary reason for undertaking policy relevant research in science studies is that it focuses our attention in a powerful way on an important institutional context of contemporary science which is relatively neglected in our work. The problem is not that we ignore science policy entirely, but rather that we do not take it systematically into account. Sociologists sometimes address their research to policy issues, but they have seldom taken the role of government agencies in scientific development as problematic in itself [Cozzens 1986: 9-10].

About ten years later, other STS scholars noticed a neglect by the sociology of science of macro-structures and dominant institutions [Knorr-Cetina 1995b: 160-163; Kleinman 1998: 285-291; Nowotny 2007: 485]. This is not to say that STS scholars do not engage with science policy. They do so frequently and with a variety of interests. However, "politicizing science" (see Brown 2015 for an overview of this research area) is different from systematically investigating the impact of science policy on the construction of scientific knowledge. The latter appears to occur only in the few studies focusing on such influences (e.g. Kleinman 1998) or "if outside events intrude on the micro level" and "an STS person is around to notice" [Nowotny 2007: 485; referring to Webster 2007: 463-466]. The "new political sociology of science" illustrates rather than alleviates the problem that most studies of science appear to be interested in, or able to investigate, either governance or research content but not the link between the two [Frickel and Moore 2005a].

Although both fields consider themselves thriving, we would argue that the separation of foci on the production of research content and on the politically shaped conditions of this production unnecessarily hinders the exploration of an important research topic. The aim of this review is therefore to discuss opportunities for integrating science policy studies and sociology of science perspectives on the impact of governance on the content of research.

Setting out to explore contributions to a specific research question makes this review rather old-fashioned. We do not apply semi-automatic (keyword- or journal-based) download and mapping procedures that have gained ground in reviewing [Martin 2012; Perkmann et al. 2013; Trousset 2014]. Instead, we used our own knowledge of the literature and conducted a problem-driven search that relied on snowballing from references. In addition, we scanned the volumes of some major journals (Minerva, Science and Public Policy, Research Policy, Social Studies of Science, Science, Technology and Human Values) from 2010 to 2015.

This approach is risky because the STS and science policy literature is voluminous and scattered, and we apologise in advance for having overlooked important contributions. We are confident, however, that we have not missed whole research agendas, and we prefer to risk incompleteness and one-sidedness in some judgements while providing a perspective that can be challenged, corrected and built on by others. In particular, we believe that this treatment of the literature leads to a clearer picture of research desiderata.

Our focus on empirical studies of linkages between governance and research content reveals that there is no clear separation of empirical domains or methods. While most applications of quantitative methods and most macro-level studies can be linked to science policy research, and STS studies mostly use qualitative methods at the micro-level, there are significant overlaps. However, two different perspectives on the causal link between governance and research can be distinguished. An "impact of" perspective asks about the effects of specific governance arrangements and employs a wide range of methods to trace "pathways of impact" in order to identify changes in researchers' behaviour or changes in research content caused by these governance arrangements. In contrast, the "impact on" perspective focuses on the construction of scientific knowledge and asks how scientific knowledge is shaped and what factors contribute to this shaping. Our organisation of the review follows this distinction, bearing in mind that, like all categorisations, this one is imperfect, not least because it creates an apparent imbalance between science policy and sociological studies by including many sociological contributions in section 2 and reserving section 3 for studies that consider heterogeneous influences on research content. In the concluding section, we outline a research agenda.

2. What are the effects of governance on research content?

In this section we look at studies that take (changes in) the governance of science as their point of departure, and analyse the impact of these changes on the production of scientific knowledge. We focus on four major changes in the governance of science that have occurred in most OECD countries over the last decades, namely:

1. the transition from recurrent funding of research to a split funding mode that combines recurrent funding with temporary project-based support;

2. the incorporation of public policy goals in science policies;

3. the transformation of the governance of higher education systems that is commonly referred to as a transition to "new public management"; and

4. the encouragement of knowledge transfer activities and of collaboration between publicly funded research and commercial enterprises [Whitley 2010].

Science policy studies have responded to these changes by analysing the impact of new funding arrangements, including the new role of funding councils (2.1); the promotion of emerging fields and of contributions to the solution of societal problems by state-funded research programmes (2.2); the consequences of performance-based block funding of universities and public research institutes (2.3); the impact of higher education reforms (2.4); and changing patterns of academy-industry collaboration (2.5).

The allocation of publications to these thematic areas required three difficult decisions. First, these major trends are by no means the only changes in the governance of science. There are many more, some of which can be considered potentially widespread and influential. We selected research linked to these trends because we believe it to be representative of the way in which the impact of governance on research content is studied, and because the trends encompass the major channel through which authority over research content is exercised, namely funding. Second, we had to allocate each study to one of the themes although the themes obviously overlap. For example, state policy goals are realised through the governance of research councils, the regulation of university-industry relationships, or higher education reforms. We solved this problem by assigning studies to one theme and cross-referencing them when necessary. Third, limiting the analysis of governance to dominant trends might make us overlook emerging issues, which is why we devote a section to recent developments whose scope and effects are currently difficult to assess (2.6). After presenting the major ideas of research on the effects of governance, we identify the common problems and methodological challenges of these approaches (2.7).

2.1. Transition to split funding modes and the rise of competitive project funding

The transition from exclusive block funding of research organisations to a split funding mode that combines block funding with project grants is one of the most important changes in the governance of research. Although the initiation and speed of this institutionalisation varied greatly between countries, its impact on research in many fields has been severe. In fields that depend on external project funding, researchers compete not only for recognition of their research but also for the opportunity to continue research at all. In the governance system, research funding agencies have emerged as powerful new actors that affect the direction of research.

These changes have enjoyed considerable attention in science studies. In particular, funding agencies have been studied in some depth because of their unique role and growing influence in the science system. They were initially conceptualised as intermediary organisations that mediate between science policy, scientific communities, and the researchers who apply for and receive grants [Braun 1993]. Later, Guston [2001] introduced the concept of "boundary organizations" in order to emphasize their active role in mediating between policy and science (see also Kearnes and Wienroth 2011). Principal-agent theory has been suggested as a theory that can explain structures of and processes in and around research councils [Braun 1993; Rip 1994; Guston 1996; Braun 1998; Van der Meulen 1998; Braun and Guston 2003; Caswill 2003], but has been criticised for failing to do justice to the complex embeddedness of research councils [Morris 2003; Shove 2003].

The analyses of research councils as intermediary organisations include only a few general comments on their influence on the content of research. Braun [1998] makes a convincing theoretical argument that it is possible to influence the cognitive development of science through research council funding (see also Rip 1994). How this influence occurs, and how specific changes in research content are achieved, has not yet been explored in any depth. Instead, research has focused on the process of selecting proposals for funding. Changes in research content have been treated indirectly as outcomes of these selection processes, if at all.

The consequences of peer-review decision-making for the content of research have been a concern of science studies ever since the first study of peer review at the NSF, which found that the likelihood of having a grant approved did not differ from random selection [Cole et al. 1981; for a recent review see Van Arensbergen et al. 2014]. Later discussions held that peer review selects "excellent mediocrity", i.e. good but not excellent research, mainstream research, and low-risk proposals [Chubin and Hackett 1990; Travis and Collins 1991: 336; Horrobin 1996; Berezin 1998].

The increasing relative scarcity of research funding has triggered an "excellence turn" in science policy, namely attempts to fund research with the potential to transform or significantly advance its field. These attempts manifest themselves in new funding programmes whose implementation challenges peer review to identify exceptional research (variously termed "excellent", "breakthrough", or "performative"). Contrary to the above-mentioned findings and to beliefs that peer review is incapable of promoting exceptional research, appropriately "conditioned" peer review appears to be able to identify it. Studies of decision-making procedures in peer review have identified the conditions and practices supporting the identification of exceptional research [Dirk 1999; Guetzkow and Lamont 2004; Heinze 2008; Luukkonen 2012]. Funding programmes that utilise peer review for the selection of "excellent" research proposals have been shown to be able to identify research leading to scientific innovation, or research that was categorised as "excellent" by independent experts [Lewison 1999; Lal et al. 2011; Wagner and Alexander 2013; Laudel and Gläser 2014].

Researchers have also been interested in how peer review processes respond to the thematic heterogeneity of grant proposals. Within reviewers' fields of expertise, Travis and Collins [1991] observed "cognitive cronyism", i.e. a bias of assessors towards their own scientific perspectives. Lamont [2009] and Huutoniemi [2012] identified several strategies that members of interdisciplinary panels use in order to achieve consensus. In interdisciplinary social sciences and humanities panels, "deferring to expertise", i.e. the delegation of decisions to panellists who belong to the fields addressed by a proposal, appears to be common [Lamont et al. 2006]. Langfeldt [2001] observed discipline-specific interpretations of general guidelines for evaluations.

A separate, more recent strand of bibliometric research has turned to the question of whether peer review does indeed select the best applicants [Bornmann et al. 2008; Van den Besselaar and Leydesdorff 2009; Campbell et al. 2010]. Unfortunately, the methodology of these studies seems questionable for three reasons. First, they contrast peer review decisions on grants with bibliometric measurements (publications and citations) of applicants' pre-grant or post-grant performance, which provides very little information unless one is prepared to accept bibliometric indicators as the more valid performance measure. Second, successful applicants are often compared to unsuccessful applicants, i.e. a control group that is inherently problematic [Neufeld and Hornbostel 2012]. Third, reducing the independent variable in studies of success to the award of one particular grant does not do justice to the complexity of funding situations. It is not entirely clear what this kind of study contributes to our knowledge about the impact of funding councils on research.
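The first objection can be made concrete with a minimal sketch of the comparison logic on which such studies rest. The following Python fragment is our illustration, not the procedure of any cited study; all applicant scores and citation counts are hypothetical:

```python
# Illustrative sketch: comparing a review panel's ranking of applicants with a
# bibliometric ranking of the same applicants. All numbers are hypothetical.

def ranks(values):
    """Return 1-based ranks, highest value first (ties ignored for brevity)."""
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=True)
    result = [0] * len(values)
    for r, i in enumerate(order, start=1):
        result[i] = r
    return result

def spearman(x, y):
    """Spearman rank correlation for untied data."""
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks(x), ranks(y)))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

panel_scores = [9.1, 8.4, 7.9, 7.5, 6.8, 6.1, 5.0, 4.2]  # hypothetical panel grades
citations = [120, 40, 95, 15, 60, 80, 10, 25]            # hypothetical citation counts

print(f"rank correlation: {spearman(panel_scores, citations):.2f}")  # ~0.60
```

A modest correlation such as this is uninterpretable on its own: it may indict the panel, the indicator, or neither, which is precisely why such comparisons say little unless bibliometric indicators are accepted as the more valid measure.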

In contrast to this emphasis on selection processes, only very few studies have actually investigated changes in the content of research due to (the anticipation of) peer review decisions. The only major exception is the link between competitive grant funding and research performance, which has been investigated primarily in the context of evaluations of funding programmes. Studies of grant-funded individuals and research centres provide mixed evidence [Gaughan and Bozeman 2002; Jacob and Lefgren 2011; Neufeld and von Ins 2011; Neufeld and Hornbostel 2012; Neufeld et al. 2013; Bloch et al. 2014; Langfeldt et al. 2015]. It turns out that researchers receiving fellowships and "centres of excellence" do not necessarily produce better research than comparable researchers or centres without such funding. The only interesting finding in this context is that the award of prestigious grants is associated not with increased research performance but with increased career success [Böhmer and von Ins 2009; Bloch et al. 2014].

An important observation that challenges the whole idea that competition for funding leads to better research was contributed by Butler and Biglia [2001] in a study that, unfortunately, is difficult to find. Butler and Biglia found that while Australian research funded by grants from the National Health & Medical Research Council did indeed have a higher citation impact than unfunded research, research that did not have to rely on grants at all because it was funded by block grants had the highest impact [ibid.: 13-14]. In a similar vein, Auranen and Nieminen [2010: 831] found no clear causal link between the degree of competition in a science system and its publication performance and efficiency.

The few studies that attempt to identify thematic changes caused by research council funding are based on interviews and mostly rely on behavioural changes reported by interviewees. These studies appear to confirm an earlier observation that the "fundability" of a proposal is firmly integrated in considerations of its "do-ability" [Fujimura 1987]. Researchers respond to priorities set by funding councils and to their perception of peer review as biased towards mainstream and low-risk research [Morris 2000; Gläser et al. 2010; Leišytė et al. 2010]. Although window dressing plays an important role in attempts to secure funding, the thematic priorities and decision-making practices of funding councils do have an impact. Both the selection processes and their anticipation by researchers lead the latter to orient their research more towards mainstream, low-risk, and applied topics. In addition, grant funding appears to hinder changes of research topics because selection processes favour proposals that are linked to applicants' prior expertise. Researchers must "bootleg" money for the start of new research under the cover of existing grants [Hackett 1987: 143; Morris 2003: 364-365; Laudel 2006: 496; Gläser et al. 2016].

Thus, while grant funding increases the flexibility of a science system and supports the concentration of resources on the best performers, little is known about its effects on the content of research. Researchers' adherence to topics that can be expected to be funded may reduce the diversity of fields at the macro-level, may force fields into specific directions, and may hinder rapid innovation. The "excellence turn" in science policy can be interpreted as a response to this concern. However, too little is yet known about the "pathways to impact", the actual adaptation of research by researchers and research groups, and the aggregate effects of such adaptation. For example, we know very little about those who lose in competitions for grant funding. What happens if researchers who need grants to conduct their research are unable to win any? There are some hints at a continuation of low-level research of reduced validity and reliability [Laudel 2006; Gläser et al. 2010] but, by and large, we simply do not know.

2.2. Incorporation of public policy goals in science policies and the rise of targeted funding

Since the middle of the previous century, science policy has increasingly incorporated public policy goals in an attempt to increase the contribution of science to the solution of societal problems [Remington 1988; Behrens and Gray 2001: 179-180; Lepori et al. 2007; Hessels et al. 2011; Berman 2012; 2014]. [Footnote 1] Roughly at the same time, science outgrew the opportunities to fund its growth, and science policy began to face decisions about what science to fund [Cozzens 1986]. As a consequence, expectations that science provide specific knowledge content have been significantly enhanced and have been inscribed in evaluation and funding procedures. Funding councils have incorporated such priorities into their procedures for allocating project finance in various ways (see e.g. Kearnes and Wienroth 2011 on the responses of the UK's Engineering and Physical Sciences Research Council to state expectations). In addition, many states have become more directly involved in supporting research in particular areas for policy purposes. This is why the effects of this trend on the content of research are partly included in our considerations of the impact of project-based funding (2.1), higher education reforms (2.4) and academy-industry links (2.5). In this section, we consider the impact of state investments in "emerging fields".

The trends described above created the need for science policy to fund research selectively. Scientists responded to this new funding strategy by casting their fields and topics as particularly promising. The intersection of these trends has been described as the emergence of "strategic research" and "promissory science". The former concept refers to "basic research carried out with the expectation that it will produce a broad base of knowledge likely to form the background to the solution of recognised current or future practical problems" [Rip and Voß 2013: 41; see also Van Lente and Rip 1998; Rip 2002]. The latter concept was used by Hedgecoe [2003] to refer to the promises researchers make in order to attract support for what they rhetorically construct as a promising field. This echoes the observation of a "promise-requirement cycle", according to which promises made by scientists in order to secure funding become institutionalised as expectations on which fields have to deliver [Rip 1997: 628-632]. Rip also pointed out that attempts to raise state interest and to employ science policy in the funding of research have furthered the use of "umbrella terms" (terms of unclear scope) because such terms are useful in gaining support for scientific enterprises that cannot yet be precisely described. Umbrella terms also serve to suggest integrated efforts where they do not exist. In her study of materials science, nanotechnology and synthetic biology, Bensaude-Vincent [2016: 54] confirms that researchers in these interdisciplinary "fields" "still remain strongly grounded in their referent disciplines" (for nanotechnology, see also Marcovich and Shinn 2014).

This basic constellation has been studied for many emerging "fields", "technologies" or topics, among which nanotechnology is probably the most studied and synthetic biology probably the most recent example [Molyneux-Hodgson and Meyer 2009; Bensaude-Vincent 2016]. Science policy studies of emerging fields usually focus on the emergence of political priorities and funding instruments through interactions between scientists and political actors (e.g. Eisler 2013) or on the role of community-building funding instruments (e.g. Molyneux-Hodgson and Meyer 2009). The impact of the governance of emerging fields on the content of research has not yet enjoyed much attention.

While it is clear from studies of emerging fields that state interest contributes to the formation of fields and to their rapid growth, and that scientists attempt to exploit these new opportunities, we do not know how, and with what results, massive state investment changes the content of research. There can be little doubt that, in addition to the window dressing triggered by targeted funding, such funding also increases research on the intended topics. Far less is known about the details of thematic changes. The few studies addressing responses to state funding hint at a complex picture that involves many intervening variables. For example, Leydesdorff and Gauthier [1996] compared responses by Dutch and Canadian scientists to "national innovation-oriented research programs". They concluded that "Canadian researchers seem to have used the priority programs as an alternative source of funding, while their Dutch colleagues were able to use these programs to help their specialties grow above the national average, and in accordance with selected priorities" [ibid.: 448]. The authors ascribe these differences to the higher degree of integration of the Dutch research system and to "organizational slack (e.g. traditional or lump-sum financing)", factors that made it possible for Dutch researchers to follow policy signals more closely with additional research, while their Canadian colleagues were forced to use the additional money to maintain their existing lines of research [ibid.].

The inevitable indirect effects of political priorities in research funding also remain to be explored, in particular the knowledge that is not produced because of the foci created by politics (see also 3.3). Almost 20 years after Leydesdorff and Gauthier, a study by Laudel and Weyer [2014] showed that the high integration of the Dutch science system remains but that the system now operates without slack. Under these conditions, state priorities can create a quasi-market failure that makes non-priority fields disappear. At the micro-level, Smith [2010] describes health inequality researchers' perception that the pressure to produce "policy relevant research" limits their autonomy and creativity.

A particularly interesting situation emerges when the state simultaneously perceives the necessity to promote a field and to regulate it, as has been the case for research using human embryonic stem cells. Taken together, two studies by Furman et al. [2012] and Brunet and Dubois [2012] provide an opportunity to analyse the impact of state regulation on the content of a rapidly growing field. Furman et al. use bibliometrics to study the response of US science to the 2001 decision by the Bush administration to enable federal funding for research with existing human embryonic stem cell lines, to prohibit federal funding for the development of and research on new cell lines, and to place no restrictions on the use of non-federal funds for research with human embryonic stem cells. The authors observe a decline in research with human embryonic stem cells from 2001 to 2003. The subsequent recovery between 2004 and 2007 was due to research at elite universities (which had easy access to non-federal funding) and to international collaboration. The results suggest that while the governance intervention led to a change in research content for researchers at some universities, these effects were soon compensated for at the national level by the opportunities for other researchers to circumvent the governance instrument.

These observations are confirmed by a study of French regulation, which was based on the bioethics law and thus applied to all research using human embryonic stem cells. Researchers had to apply for permission, which was granted if the research was "likely to enable significant therapeutic advances" and "cannot be pursued using alternative methods of comparable effectiveness in the present state of scientific knowledge" (Art. L.2151-5 of the law, quoted from Brunet and Dubois 2012: 263). Brunet and Dubois compare French research with human embryonic stem cells to its counterpart in the UK, and ascribe both the reduced scale and the fragmentation of French research to the regulation. They attribute the reduced scale to the necessity of applying for permission, which made researchers avoid the field. The fragmentation (many groups not collaborating with each other) was caused by the demand for proof of therapeutic advances, which oriented the research towards specific diseases and applied questions rather than fundamental research. In contrast to their colleagues in the US, French researchers could not circumvent this regulation.

Although attempts by the state to create and direct research capacities in support of public policy goals have increased significantly, the impact of these attempts on the dynamics of knowledge production has received comparatively little attention. The state influences research largely by implementing public policy goals through existing governance instruments rather than by creating new ones. The impact of these policies on the growth and direction of research appears to be significant but, again, systematic knowledge linking specific governance practices, the circumstances under which they are applied, and their specific effects is missing.

2.3. Evaluations and the rise of performance-based funding

Since the late 1970s, many countries have altered the block funding of their higher education institutions by replacing or supplementing input-based funding for research with performance-based funding. The logic behind these changes includes incentives (to reward better performance), redistribution of resources (to increase efficiency), and improved management (to provide the information necessary for change). Performance is measured either by peer review (which is often informed by quantitative data) or by quantitative indicators. The information about research performance is then used as input for funding formulae that allocate resources in a zero-sum game (see Whitley and Gläser 2007; Auranen and Nieminen 2010; Hicks 2012 for overviews). [Footnote 2]
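The zero-sum character of these funding formulae is easy to see in a minimal sketch (in Python; the institutions, scores and budget are hypothetical, and real formulae typically combine several weighted indicators):

```python
# Minimal sketch of performance-based block funding as a zero-sum game:
# a fixed pot is distributed in proportion to performance scores.

BUDGET = 100_000_000  # fixed pot; the size is purely illustrative

scores = {  # hypothetical performance scores (e.g. publication points)
    "University A": 420.0,
    "University B": 310.0,
    "University C": 150.0,
}

total = sum(scores.values())
for university, score in scores.items():
    allocation = BUDGET * score / total
    print(f"{university}: {allocation:,.0f}")
```

Because the pot is fixed, a university that merely maintains its performance loses money whenever others improve theirs, which makes such schemes competitive even without any explicit competition.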

The effects of performance-based research funding schemes have attracted a great deal of attention from both stakeholders and researchers. Political discussions and research were centred on the same questions: Do the new funding schemes contribute to improving research performance? And do they have unintended negative effects? These questions are of particular interest to our review because they have been addressed by quantitative and qualitative investigations in higher education research, the sociology of science and science policy studies.

Most studies focus on system-level effects. Studies of the Spanish, English, Australian, Norwegian and Danish performance-based funding systems asked whether the introduction of new funding schemes changed research performance, and with what effects. The analyses applied a quasi-experimental logic that interpreted the introduction of performance-based funding as a treatment, and causally ascribed changes occurring after the treatment to these schemes. Research performance was operationalised in terms of publication behaviour (publications in "good" journals, however determined) and the citation impact of publications.

The major challenge faced by this approach is the control of all possible confounding variables. This challenge is most clearly illustrated by the discussion of the Spanish funding scheme, which provides salary increases for academics who publish a certain number of articles in international peer-reviewed journals. Jiménez-Contreras et al. [2003] claimed that the introduction of this scheme in 1989 caused a disproportionate increase in the number of Spanish articles in such journals because the increase occurred after the scheme was introduced. The authors analysed possible confounding variables (investment in R&D, numbers of researchers) and concluded that the limited growth in these variables could not explain the increase in publication performance. They also dismissed several other factors such as international mobility, international collaboration, and participation in European funding programmes, as well as the cumulative effect of policies.

The critique of this analysis by Osuna et al. [2011] is based on a review of five threats to the internal validity of time-series analyses [ibid.: 580-581]. They found the factors dismissed by Jiménez-Contreras et al. to be possible causes once time lags between funding and publication were taken into account, and demonstrated that a control group of researchers who were not subject to the performance-based funding scheme showed a similar increase in research performance. Most importantly, their discussion illustrates the problem of confounding variables: there were too many such variables for a causal ascription of the effects to be possible.
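The control-group point can be illustrated with a minimal sketch (hypothetical publication counts, not Osuna et al.'s data or estimation procedure): a pre/post increase in the treated group supports a causal reading only to the extent that it exceeds the increase in a comparable untreated group.

```python
# Illustrative difference-in-differences check with hypothetical counts:
# publications before and after the introduction of a funding scheme.

treated_pre, treated_post = 100, 180  # researchers covered by the scheme
control_pre, control_post = 50, 88    # comparable researchers not covered

treated_growth = treated_post / treated_pre - 1  # 80%
control_growth = control_post / control_pre - 1  # 76%

print(f"treated: {treated_growth:.0%}, control: {control_growth:.0%}")
print(f"excess growth attributable to the scheme (at most): "
      f"{treated_growth - control_growth:.0%}")
```

A near-zero excess, as in this constructed example, would point to common causes such as funding growth, staff growth or database coverage rather than to the scheme itself.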

The situation is even more complicated when performance-based funding schemes for universities are studied. The complex causal chains linking these schemes to national-level changes in research performance include the strategic responses of universities to the funding schemes, the resulting changes in the situation of individual researchers, the strategic responses by researchers to their changed situation, and the aggregation dynamics of individual changes in research and publication behaviour. Each of these steps is co-shaped by a host of influences on the behaviour of individual and collective actors. [Footnote 3] Was the increase in low-impact publications in Australia caused by a funding formula that rewards the number of publications [Butler 2003], by the preceding higher education reforms that confronted a large number of academics with increased expectations to conduct research and publish (a change described by Meek 1991), or by an increasing competition for project grants that emphasises track records, i.e. prior publications (a development described by Gläser and Laudel 2007; Gläser et al. 2010)? Is it possible to ascribe the increase in "good" Danish publications to the performance-based funding scheme [Ingwersen and Larsen 2014]? The authors excluded changes in research funding, the increase in academic staff, and the internal dynamics of the Web of Science database as explanations because the growth in productivity and impact exceeded these other trends. They did not consider the combined effect of these trends, or the increasing concentration of research funding, whether as an effect of competitive grant funding or of the new programme for funding "Centres of Excellence", which was launched in the same year as the performance-based funding scheme [Langfeldt et al. 2015]. They describe but do not explain the observation that at least one university does not conform to the pattern.

Aagaard and colleagues conducted the most complex study of this type to date. They analysed the introduction of the Norwegian publication indicator [Schneider et al. 2014; Aagaard 2015; Aagaard et al. 2015]. The authors combined bibliometric studies, surveys, and interviews in order to assess the impact of the indicator's introduction on research. "Research" was operationalised as publication behaviour. In addition, managers and academics at universities were asked to report changes in their organisations that they ascribed to the indicator.
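For readers unfamiliar with the indicator, the following sketch shows how an indicator of the Norwegian type converts publication behaviour into "publication points". The weights follow commonly cited values for the Norwegian model and should be treated as an assumption; author fractionalisation is simplified here (the real indicator fractionalises by institution and has been revised over time).

```python
# Simplified sketch of a publication indicator of the Norwegian type.
# Weights are commonly cited values (an assumption, see lead-in);
# points are fractionalised per author for simplicity.

WEIGHTS = {  # (publication type, channel level) -> points
    ("article", 1): 1.0, ("article", 2): 3.0,
    ("chapter", 1): 0.7, ("chapter", 2): 1.0,
    ("monograph", 1): 5.0, ("monograph", 2): 8.0,
}

def publication_points(publications):
    """Sum fractionalised points over (type, level, n_authors) tuples."""
    return sum(WEIGHTS[(kind, level)] / n_authors
               for kind, level, n_authors in publications)

# Hypothetical annual output of one department:
output = [("article", 1, 4), ("article", 2, 2), ("monograph", 1, 1)]
print(f"publication points: {publication_points(output):.2f}")  # 6.75
```

Because money attaches to points rather than to papers, a scheme of this type rewards moving publications to level-2 channels and, under per-author fractionalisation, penalises large co-author sets; both are routes by which it can alter publication behaviour.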

As in the other studies, causal attribution appears to be difficult. The authors observe an 82% increase in "publication points" between the indicator's introduction in 2004 and 2012 [Aagaard et al. 2015: 109], and argue that this increase cannot be fully explained by other factors. In particular, they report that the number of researchers with a publication at the four main universities increased by 116% while the number of R&D personnel rose by only 5% [ibid.: 110]. At the same time, they report several observations that cast doubt on the causal ascription, including:

1. the major increase in WoS publications (rather than "publication points") occurring prior to the introduction of the indicator [Schneider et al. 2014: 547];

2. substantial increases in R&D funding (37%) and R&D personnel (21%) [Aagaard et al. 2015: 110];

3. "strong increases in publication activity among more recently established universities and among university colleges, many of which have previously had a much weaker focus on research" [ibid.]; and

4. changes in the publication indicator itself [ibid.].

Aagaard [2015] reports an observation that supports the assumption of a causal impact on publication behaviour, namely that the indicator is used in decisions about recruitment, promotion, and salary increases. Unfortunately, no timeline can be established for the introduction of such measures, which makes it impossible to link this "trickling down" of incentives [ibid.] to the changes in publication behaviour over time.

Not only are studies of country-level effects limited to changes in publication practices and citation impact (rather than research practices), they also appear to be liable to a "post hoc ergo propter hoc" fallacy, i.e. the conclusion that A must be the cause of B because B follows A. [Footnote 4] However, attempts to meet the complexity of causation head-on by conducting qualitative studies face interesting problems of their own. These studies can investigate the actual translation of academics' changing governance environments into a changing production of knowledge. However, the observation of effects remains at the micro-level. In most cases, the causal chain under study ends with self-reported or self-announced behavioural change [Stöckelová 2012; Linkova 2014; Aagaard 2015]. Lucas [2006] did not ask about changes in the content of knowledge but focused on strategies of coping with the British Research Assessment Exercise at different levels of the university. Attempts to comparatively analyse researchers' field-specific responses to performance-based funding schemes [Gläser and Laudel 2007; Gläser et al. 2010; Leišytė et al. 2010] found, not surprisingly, that for researchers in the sciences external funding was a far stronger environmental factor than university funding and the accompanying incentives. For example, Australian universities translated the performance-based funding schemes applied to them into similar schemes applied internally. However, little money arrived at the researcher level, simply because Australian universities were under-funded [Gläser et al. 2010]. Researchers in less resource-intensive fields are less dependent on grant funding agencies and university management but may be susceptible to hierarchical pressure (see also Hammarfelt and de Rijcke 2015).

Although they are able to explore researchers' responses to their resource and governance environments, micro-level studies share some limitations. Interview-based studies can explore changes in interviewees' problem choices as described in interviews. The exploration of research content in interviews is possible and can be triangulated with individual-level bibliometrics [Gläser and Laudel 2007: 134-135; 2015a]. Nevertheless, two challenging questions remain. To what extent are reported changes in problem choices due to changes in governance? And do these changes add up to macro-level shifts in knowledge production? Answering the first question would require comparative ethnographies, whose resource demands exceed current funding patterns for social science research. The second question could be answered if detailed micro-level studies of changing research content could be linked to field-level changes by advanced bibliometric methods. Neither type of method has yet been applied in this way.

Taken together, studies of the impact of performance-based funding schemes on the content of research express a general dilemma of research on the impact of governance changes on research content. Studies of country-level effects, which are important in the research context of science policy studies, are forced to limit their measurement of research practices to indicators of publication behaviour and citation impact, and must black-box the complex causal web that mediates the relationship between knowledge production, the publication of new findings and their reception. Micro-level studies of changes in research practices and their causes can draw a more complex picture but are still largely unable to link their findings back to aggregate dynamics at the level of countries or scientific communities.

2.4. Higher education reforms and the rise of “new public management”

Over the past three decades, many OECD countries have introduced substantial reforms of the governance of universities [Paradeise et al. 2009]. So far, these changes have predominantly been studied by higher education researchers. They usually involve the development of new relationships with the state and other extramural agencies, as well as shifts in the internal management of academic institutions, which are often summarised under the generic title of "New Public Management" (NPM) [Schimank 2005; De Boer et al. 2007; Ferlie et al. 2009]. Against the common background of a growing discrepancy between performance expectations and the public funding of academic research and teaching, states have increasingly sought to reconstruct universities as quasi-independent collective agents on which they can rely to realise national education and research goals.

These changes have resulted in major reconstructions of authority relations in higher education. The increase in university autonomy from the state appears to be very limited at best, in spite of the NPM rhetoric of shifting responsibilities from the state to universities [Westerheijden et al. 2009; Capano 2011]. At the same time, the internal redistribution of authority from academics and their collegial decision-making fora to a managerial hierarchy of university senior management, deans, and heads of departments has progressed in many countries [Schimank 2005; Paradeise et al. 2009; Musselin 2013; 2014]. Evaluations of research performance have become one of the most important tools for state governance of universities and for university management [Whitley and Gläser 2007].

Although changes in university management significantly alter authority relations in universities, surprisingly little research has been conducted on the impact of these changes on research content. A theoretical argument by Whitley and Gläser [2014] points to principal limitations of universities' abilities to "manage" research, and thus to limitations of higher education reforms. However, there is little empirical research to support or refute this argument. Higher education research has described academics' resistance to NPM [Bauer and Henkel 1998; Anderson 2008; Moscati 2008] and the impact of NPM on academic identities [Henkel 2000; 2005] but has done little to explore how NPM affects the conduct and content of research. The studies that do investigate the impact of NPM on research content focus on just one governance instrument (performance-based funding) and treat changed authority relations as the context of this relationship (see 2.3, and e.g. Morris 2002).

Three exceptions should be mentioned. In their analysis of an evaluation exercise as a tool for profile-building at German universities, Meier and Schimank [2010] ascribe the possibility of profile-building to changed authority relations, to which the evaluations themselves contributed. The extent to which profile-building changes research content could not be assessed at the time of the investigation. Laudel and Weyer [2014] link changed authority relations at universities and the tight integration of the Dutch science system to the possible disappearance of research specialties, to which the profile-building activities of universities also contribute. Louvel's [2010] analysis of shifting authority relations in French laboratories describes significant changes in the conduct of research but does not address research content.

The few studies of the consequences of NPM for research content again demonstrate the difficulty of causally ascribing changes in scientific knowledge to a particular governance process. The impact of NPM on research content can only be assessed in the context of all other influences. So far, it seems to be rather weak in most countries because the opportunities for management to interfere with the recruitment of academics or to make them redundant are limited. In addition, the opportunities for management to direct researchers' choices of problems and approaches are limited by the specific nature of scientific work [Musselin 2007; Whitley 2008; Whitley and Gläser 2014]. This is why "governance by funding" is likely to remain a stronger influence than hierarchical steering.

2.5. Academy-industry relationships and the rise of privatised knowledge

At least four interrelated processes have contributed to changes in the relationships between publicly funded research and industry. First, the state has significantly increased its demand for contributions by science to technological progress and economic growth, particularly through innovation [Behrens and Gray 2001: 179-180; Coriat and Orsi 2002: 1493-1495; Kearnes and Wienroth 2011; Berman 2012; 2014]. Second, the growth of scientific knowledge has increased the number of epistemic links between science and industry, and thus the number of science-based industries [Mahdi and Pavitt 1997; Koumpis and Pavitt 1999]. This increases industry's demand for public research support as well as opportunities for academic researchers to access relevant information, equipment and materials [Meyer-Krahmer and Schmoch 1998; D'Este and Perkmann 2011]. Third, the increased science base of many industries and its extension towards fundamental scientific research have increased opportunities for making profits by investing in research and either selling its results or betting on capital gains [Pavitt 2001; Coriat and Orsi 2002: 1500-1501]. Finally, the growing scarcity of public research funding has increased the willingness of researchers to seek funding from industry [Meyer-Krahmer and Schmoch 1998; Jankowski 1999; Laudel 2006; Lam 2010; D'Este and Perkmann 2011].

These trends have increased the volume and variety of links between university research and commercial interests, and with them the latter’s influence on research content. We exclude from our consideration the major concern of science and innovation policy studies, namely the question of whether increased academy-industry links contribute to innovation and economic success. Instead, we focus on the consequences for the production of scientific knowledge by researchers and their scientific communities that result from new incentives for researchers to engage in commercialization activities, and from state support for arrangements that improve the opportunities of commercial actors to influence the content of publicly funded research.[5]

The impact of industry interests on the content of research can be considered at different levels and for different forms of academy-industry links. These levels include the content that is directly linked to industry interests (the content of collaborative or sponsored projects), the more general research agendas of researchers who maintain relationships with industry, and the knowledge production of scientific communities whose members engage in the commercialization of research. At each of these levels, the different forms of academy-industry relations can exercise specific influences on research (Table 1). Investigations of the impact of university-industry links on the conduct and content of research have included these forms in various combinations. A frequently used independent variable is “receiving support from industry” [Blumenthal et al. 1996; Behrens and Gray 2001; Evans 2010a; 2010b]. Murray and Stern [2007] and Campbell et al. [2000] investigated the impact of patenting research findings. Evans [2010a; 2010b] also included collaboration with industry as indicated by co-authorships.

Table 1 Findings on the impact of four forms of university-industry links on the content of research at three levels of aggregation

Changes in the content of individual projects linked to industry have received little attention so far, probably for the reasons that explain the general avoidance of corporate research as an object of study [Penders et al. 2009]. The only aspect of research content that has been explored is performance (see e.g. the review by Baldini 2008: 295-302). The studies reported by Baldini ask how research performance (measured by publications and citations) is associated with innovation performance (measured by patents or start-ups), albeit without establishing causality.

There is evidence concerning impacts of industry links on graduate student projects but, again, the evidence is mixed [Baldini 2008]. Some studies found no negative effects (e.g. Behrens and Gray 2001), while other studies reported by Baldini [2008: 302-304] point to enforced secrecy and reduced learning experiences. The only exception to the general neglect of project content is research on the manipulation of outcomes in the sponsor’s favour. The suspicion that research projects funded by commercial enterprises are biased has initiated discussions and meta-studies in biomedical research. In his review of these studies, Krimsky [2013] concludes that “[i]ndustry-sponsored trials are more likely than trials sponsored by non-profit organisations, including government agencies, to yield results that are consistent with the sponsor’s commercial interest” [Krimsky 2013: 582]. He also finds a small number of pharmaco-economic studies to show such an effect [ibid.]. Investigations of industry-sponsored research on specific commodities such as tobacco unambiguously demonstrate that this research is biased towards funders’ interests. Krimsky also points out that a variety of reasons may cause “funding effects”, and argues for ethnographies to determine the actual mechanisms at work between funding and bias. A study by Sismondo [2009] points to one of these mechanisms. Commercial services have emerged that design, schedule, and launch publications about drugs as part of pharmaceutical companies’ marketing strategies. Researchers from universities are asked to author or co-author such publications. Nothing is known yet about the impact of such “ghost-written” publications on a community’s knowledge production.

Beyond the immediate impact on the content of collaborative or sponsored research, the more general and potentially more sustainable impact of academy-industry links on academic researchers’ research agendas is of particular interest. Qualitative and quantitative studies have found sponsored research to influence research agendas by moving them towards more applied research or towards research with more potential for collaboration [Blumenthal et al. 1996; Lam 2010]. The reverse also occurs: some researchers tend to “shy away” from problems associated with commercialising results [Owen-Smith and Powell 2001]. Furthermore, the influence on research agendas works both ways: Cohen et al. [2002] showed publicly funded research to have a large impact not only on project completion but also on the research agendas of industrial R&D across much of the manufacturing sector.

Academy-industry relationships influence not only the agendas of researchers who have links to industry or commercialise their research findings themselves. Several studies have found that research agendas are influenced by the perception that too much knowledge of a field is controlled by intellectual property rights. Some researchers tend to avoid such fields [Eisenberg 2001: 225, 233; Walsh et al. 2007; Murray et al. 2009; Evans 2010a: 761]. The consequences of this intellectual migration away from intellectual property rights for the directions in which fields evolve have yet to be investigated.

Research on academic spin-offs has found little change in university research agendas due to the double affiliation of academics or collaboration with spin-offs. In their analysis of French spin-offs from cnrs laboratories, Shinn and Lamy [2006] distinguished three types of scientist-entrepreneurs but did not associate impact on university research agendas with any type. Zomer et al. [2010: 347] did not find Dutch research-based spin-off companies to alter the research agendas of university-based researchers directly but did not exclude the possibility of a “soft impact” through the academics’ changed awareness of practical problems.[7] In contrast, Cooper [2009] found that the potential to create for-profit start-ups from research findings did influence problem choices. Unfortunately, he did not provide specific information about consequences for research content.

The impact of academy-industry relationships on the community level of knowledge production has been ascribed to aggregated changes in the conduct rather than the content of research at the individual level. Concerning conduct, Blumenthal et al. [1996] found industry support to be associated with increased secrecy and delayed publication. Campbell et al. [2000] found that commercial activities were associated with data withholding.

The most complex study so far of the relationships between industry collaboration, sharing and the diffusion of knowledge in scientific communities was conducted by Evans [2010a; 2010b]. Combining qualitative and quantitative methods in an investigation of the Arabidopsis community, he observed that industry collaboration (both at organisational and individual levels) decreases the likelihood of materials sharing but increases the sharing of pre-publication manuscripts. Evans also found industry collaboration to slow or reduce the diffusion of knowledge but not to limit its thematic breadth (which he measured by calculating the annual proportion of citing articles that did not share coded scientific terms with the focal article). These results expand prior observations by Murray and Stern [2007], who found declining citation rates of research findings after patenting and thus a reduced diffusion of results. Evans’s findings are indirectly confirmed by a study of research using genetically engineered mice, the use of which was initially controlled aggressively by patent holders [Murray et al. 2009]. Murray et al. found a significant increase in the level and diversity of follow-on research after limitations on the use of these mice were reduced by nih agreements with patent holders. Further confirmation can be derived from observations of researchers’ “avoidance” of topics they associate with restrictions due to intellectual property rights [Eisenberg 2001: 225, 233; Owen-Smith and Powell 2001; Walsh et al. 2007; Evans 2010a: 761].[8] Other scientists ignore or actively resist restrictions created by intellectual property rights [Murray 2010].
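Evans’s breadth measure is simple enough to illustrate. The following sketch is our own illustration, not Evans’s code: it assumes only that each article can be represented as a set of coded scientific terms, and computes, for a focal article, the annual proportion of citing articles that share none of these terms. All names and data are hypothetical.

```python
# Illustrative sketch (our own, with invented data) of a thematic-breadth proxy
# in the spirit of Evans [2010a; 2010b]: the annual proportion of citing articles
# that share no coded scientific terms with the focal article.
from collections import defaultdict

def thematic_breadth_by_year(focal_terms, citing_articles):
    """focal_terms: set of coded terms of the focal article.
    citing_articles: iterable of (year, terms) pairs, one per citing article.
    Returns {year: share of citing articles with no term in common}."""
    totals = defaultdict(int)
    disjoint = defaultdict(int)
    for year, terms in citing_articles:
        totals[year] += 1
        if not (set(terms) & focal_terms):  # no shared coded terms
            disjoint[year] += 1
    return {year: disjoint[year] / totals[year] for year in totals}

# Toy data: one 2001 citing article shares "auxin", the other shares nothing.
focal = {"arabidopsis", "auxin", "signalling"}
citing = [(2001, {"auxin", "root"}), (2001, {"yeast", "kinase"}), (2002, {"signalling"})]
print(thematic_breadth_by_year(focal, citing))  # {2001: 0.5, 2002: 0.0}
```

A rising share over the years would indicate that citing work increasingly departs from the focal article’s vocabulary, which is how such a proxy can speak to the thematic breadth of diffusion.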

Taken together, the studies investigating the impacts of academy-industry links on the content of research have established that such an impact exists, and that it varies between fields, but usually end there. This limitation is caused by two methodological problems. The first problem is the studies’ self-restriction to academy-industry links. Similar to the studies of performance-based funding schemes, studies of academy-industry links do not take into account other influences on project content or research agendas. They observe that, in most cases, collaboration with or funding from industry constitutes only a proportion of a researcher’s agenda, but do not investigate how this proportion interacts with other influences. They also observe variations between fields but do not investigate the properties of fields that are responsible for the variation, let alone the ways in which these properties affect the building of research agendas in conjunction with academy-industry links. Variations in the sharing of information about research are, however, linked to the epistemic practices of fields.[9] These field-specific practices of information sharing constitute the background against which influences of academy-industry relationships on information and materials sharing must be assessed.

The second, connected, problem is the choice of empirical methods for investigating the impact on research content. Investigations are dominated by surveys and bibliometric studies, both of which are limited when it comes to exploring research content. The very few interview-based studies restrict themselves to asking about attitudes and behavioural change without exploring the consequences for research content. Ethnographies and in-depth explorations of research content in interviews are missing from the toolbox of studies investigating academy-industry links, which is why findings about research content are limited to the basic-applied dimension and to inconsistent results concerning research quality.

The most promising results have been contributed by studies of knowledge diffusion. The creative bibliometric approaches by Evans [2010a; 2010b] and Murray and colleagues [Murray and Stern 2007; Murray et al. 2009] enable the observation of diffusion processes at the level of scientific communities. This level is crucial for many questions about research content but rarely addressed due to methodological problems. Extending the methodological ideas of diffusion studies to other problems of changing research content at the community level is certainly worthwhile.

2.6. Emerging trends in the governance of science

In addition to the main trends discussed in the previous sections, we would like to highlight two emerging trends in the governance of science that have attracted the attention of science studies, namely the increasing activity of civil society actors in the governance of research and “governance by knowledge.” The dynamics of these trends are difficult to assess as yet. Both are linked to the emergence of new interests, concerning either contributions by science to the solution of societal problems or problems expected to arise from science and science-based interventions. We limit the following discussion to those studies that include changes in research content, thereby excluding a large body of literature on public engagement with science and participatory science.

An increasing number of empirical studies have been devoted to attempts by new actors to influence the production of scientific knowledge. By “new actors” we mean collective actors that represent specific interests concerning science which they believe to be insufficiently included in traditional governance arrangements. Examples include patient organisations, foundations concerned with public health or cures for diseases, and environmental movements.[10] We will consider them as social movements, although their interests are often represented by organisations that are not co-extensive with the movements themselves. These movements often identify knowledge that has not been produced (see also 3.3).

The new actors mainly use existing instruments in their attempts to shape research content. They lobby for a change of state priorities, mobilize resources for dedicated research organisations or contract research, or influence the agenda setting of research councils [Epstein 1996; Moore 2005: 305-308; Morello-Frosch et al. 2005: 263-265; Brown et al. 2006; Frickel et al. 2010]. Thus, their success––the actual change of research content they achieve––depends on the conditions already discussed in the previous sections, in particular the strong position of researchers in the ultimate problem choices. For example, Pittens et al. [2014] showed that although the involvement of patient groups in the setting of research agendas and the latter’s translation into research programmes strongly influenced these programmes, the patient groups’ priorities disappeared in the subsequent implementation of the programmes because the groups were excluded from that phase.

Social movements also use less common channels such as directly influencing scientists, e.g. by building and maintaining networks of scientists working on topics they are interested in, or by befriending scientists [Panofsky 2011]. Of these alternative means, we would like to briefly discuss the shaping of research content by providing knowledge. We consider this mode of influence particularly interesting because it utilises the way in which scientific communities intentionally and unintentionally shape the research of their members. Researchers select problems and approaches based on the state of the art, i.e. by taking in and evaluating the knowledge produced by their communities. Providing additional data may change the input to these selection processes and thus the content of knowledge produced by researchers. Although the contribution of data to research by non-scientists has a long tradition in a variety of fields (see e.g. Moore 2005), it rarely occurs with the aim of influencing the content of research. The shaping of research agendas through additional research information has been observed in the case of patients and patient organisations [Brown et al. 2006: 525; Panofsky 2011: 38; Polich 2011; Pols 2013; Rabeharisoa et al. 2014]. While these studies confirm the effectiveness of this “governance by knowledge”, the actual change of research content brought about by it has not yet been sufficiently studied.

The same applies to similar attempts by industrial enterprises to “govern by knowledge.” Sismondo’s [2009] account of “ghost writing”––the preparation of research articles about the efficacy of drugs as part of marketing strategies––demonstrates that firms deliberately introduce knowledge into the scientific discourse. Although medical practitioners may well be their major target group, these articles are likely to have an impact on medical research as well. Unfortunately, this impact has not yet been studied. Firms also do the opposite by excluding knowledge from use. Again, the impact of this strategy on research agendas remains under-studied.

2.7. Strengths and weaknesses of effect-searching studies

The empirical studies discussed in the preceding five sections apply five distinct strategies to approach research content. A first strategy involves measuring macro-level effects such as the speed or directions of knowledge diffusion. This strategy exploits citation links between publications, and links citation dynamics to other properties of publications or their authors. A second, frequently applied, strategy uses publication and citation indicators for measuring one particular aspect of research content, namely performance as expressed in the volume or quality of research. This strategy, which attempts to draw conclusions about macro-level variables (the performance of the science system) from aggregate effects, seems particularly vulnerable in the light of the many problems associated with publication-based and citation-based indicators. Evaluative bibliometrics has been conducting an intensive discussion of these indicators’ validity and statistical reliability, their field-specific behaviour, and the necessary normalisation. The methodological advances achieved in the context of bibliometric evaluations do not always seem to be matched by the application of indicators in research projects on the governance of science.
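To make the normalisation problem concrete: one widespread building block of evaluative bibliometrics divides a paper’s citation count by the mean citation count of papers from the same field and publication year. The following sketch is our own minimal illustration of this idea, with invented data; it is not taken from any of the studies discussed, and production indicators add further corrections (e.g. for document types) and must cope with highly skewed citation distributions.

```python
# Minimal illustration (our own, with invented data) of field- and year-normalised
# citation scores: each paper's citations divided by the mean citations of papers
# from the same field and year. In real analyses the baselines would come from an
# entire bibliometric database, not from the sample itself.
from collections import defaultdict
from statistics import mean

def normalised_citation_scores(papers):
    """papers: list of dicts with keys 'field', 'year', 'citations'."""
    groups = defaultdict(list)
    for p in papers:
        groups[(p["field"], p["year"])].append(p["citations"])
    baselines = {key: mean(vals) for key, vals in groups.items()}
    return [p["citations"] / baselines[(p["field"], p["year"])] for p in papers]

papers = [
    {"field": "physics", "year": 2005, "citations": 10},
    {"field": "physics", "year": 2005, "citations": 30},
    {"field": "history", "year": 2005, "citations": 2},
]
print(normalised_citation_scores(papers))  # [0.5, 1.5, 1.0]
```

The toy output illustrates why normalisation matters for cross-field comparisons: the history paper with two citations scores as “average” in its own field, while the physics paper with ten citations scores below its field’s average.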

A third strategy of approaching research content utilises publication indicators as proxies of publication practices. This approach has the advantage of matching the conclusions that are drawn to what is actually measured by the indicator. At the same time, it operates one step removed from research content. Publication practices can be assumed to be linked to research practices and research content. However, the nature of this link is unknown, and is likely to vary across fields of research, career stages, and situations of researchers. Consequently, no conclusions about research content can be drawn from analyses of publication behaviour.

A similar argument must be applied to the fourth strategy, which attempts to measure research practices by surveys. Respondents are asked how their research practices (especially their choices of research problems) have changed.[11] This strategy suffers from a similar problem to that of exploring publication practices. Although practices of problem choice are more tightly coupled with research content than publication practices, the link between a general description of changing problem choice given by respondents to surveys, and changes in the content of knowledge produced by them, is far from clear. This is why statements like “research practices of these scholars were investigated using both publication statistics and responses to questionnaires” [Hammarfelt and de Rijcke 2015: 64] simply do not ring true––whatever data were collected by these methods are unlikely to represent actual research practices.

Finally, a fifth strategy––using semi-structured interviews and ethnographies––enables conclusions to be drawn about research content because it can be used to explore with scientists their production of scientific knowledge in great detail.[12] This last strategy complements analyses of knowledge diffusion at the macro-level by exploring the reception and use of knowledge in the production of new knowledge and establishing the complex influences shaping knowledge production, which in turn can be traced back to governance. However, these studies leave us with micro-level effects which, albeit established with some validity, cannot yet be linked to the macro-level dynamics of knowledge. We still lack a “research content description language” in which (changes in) knowledge content can be described in a generalised way. Our vocabulary––“mainstream”, “low-risk”, “applied”, “interdisciplinary”––is too coarse to appropriately grasp effects like the disappearance or emergence of topics, methods, or kinds of data in particular research communities.

We are left with a dilemma, then. With few exceptions, studies of macro-level changes cannot convincingly establish causality due to the impossibility of measuring research content, the reduction of complexity enforced by quantitative methods, and the necessity of black-boxing processes of causation. Studies of micro-level responses, on the other hand, can address research content and establish causality in micro-situations without being able (yet) to link these changes to aggregation processes and macro-level dynamics.

3. How is research content shaped?

In this section, we discuss studies that start from the construction of scientific knowledge and ask how these construction processes are shaped. Most contributions to this question stem from the core of the sociology of science, the sociology of scientific knowledge or studies of science, technology and society (sts)––we again skip terminological developments for the sake of content. These studies unwittingly contributed two fundamental observations to our knowledge about the shaping of research content by governance. First, laboratory studies again and again observed the high degree of autonomy researchers have in their choices of research problems and methods. This does not mean that these choices are made without any outside influence––the material environment, colleagues and collaborators, the literature, career considerations and many other factors influence the decisions made by researchers. However, all these influences are processed by researchers, who shape what they consider as “do-able problems” [Fujimura 1987] accordingly. All researchers are thus “obligatory points of passage” for influences on their research content.

Second, these studies demonstrated that researchers’ opportunities to change the directions of their research are limited in most cases. Researchers formulate research problems and personal research programmes using their current scientific knowledge and considering their experimental systems [Knorr-Cetina 1981, 1995a; Pinch 1986; Latour and Woolgar 1986 [1979]; Rheinberger 1994, 1996]. Their research proposals and publications are assessed in the light of their prior work (e.g. Myers 1990; Laudel 2006). Although this orienting influence of material and intellectual resources accumulated in prior research does not make radical change impossible, it makes it more difficult and thus more unlikely.

These two observations explain why researchers often deviate from expectations communicated by the governance of research. Even researchers who are willing and able to follow these signals still need to translate them into their research contexts. This translation is shaped by the knowledge and expectations of the researchers’ scientific communities, opportunities provided by their local work environment, and access to collaboration. Researchers must balance all these factors in order to construct a “do-able” problem [Fujimura 1987] whose solution is relevant to the particular scientific community they target. It is not surprising that many of these problems do not fully coincide with the intentions communicated by governance.

The ways in which individual researchers and research groups process conditions of research shaped by governance are thus central to understanding causal influences on research content. This processing is an inseparable part of the construction of scientific knowledge and thus belongs to the domain of constructivist science studies. We first discuss the few exceptions, i.e. studies investigating the impact of complex conditions on research from an organisational or governance perspective (3.1). The second section reviews contributions to our problem by constructivist science studies (3.2). We then introduce an important contribution by the sociology of science to our topic, namely the recent interest of science studies in the absence of knowledge, which in some instances can be traced back to the impact of governance (3.3). As a conclusion to this section, we outline missed opportunities and necessary contributions of knowledge-focused studies (3.4).

3.1. Conditions for excellent research

The then-famous internationally comparative study by Pelz and Andrews [1966] was probably the first to ask how organisations shape the conduct of research. Studies with a focus on organisational environments were conducted until the 1970s but were then marginalised by the constructivist turn. They were put back on the agenda by Law [1994] and Vaughan [1999], neither of whom established a clear link between organisational phenomena and the content of research. In particular, although Vaughan convincingly demonstrates that organisations “can complicate and manipulate the entire knowledge-production process” [Vaughan 1999: 931], her investigation of a single case of technological development cannot establish necessary or sufficient conditions for specific influences of an organisation on knowledge production, or explain how this impact would vary between technologies. The impact of organisations on the content of research is likely to vary with organisational structures and field-specific epistemic practices. These causal links still need to be established.

More recently, researchers have attempted to identify favourable conditions for “breakthrough” or exceptionally creative research. Hollingsworth and Hollingsworth [Hollingsworth 2008; Hollingsworth and Hollingsworth 2011] identified exceptional research in the life sciences and looked for common conditions under which this research took place. They concluded that “major discoveries tended to occur more frequently in organisational contexts that were relatively small and had high degrees of autonomy, flexibility, and the capacity to adapt rapidly to the fast pace of change in the global environment of science” [Hollingsworth 2008: 321]. Intraorganisational conditions include moderately high scientific diversity (across the organisation and internalised in the scientists it recruits) as well as high intensities of communication and social integration [Hollingsworth and Hollingsworth 2011: 17-40]. With a similar objective, Heinze et al. used a survey in which experts from human genetics and nanoscience/nanotechnology nominated more than 400 highly creative research accomplishments [Heinze et al. 2007] and conducted 20 case studies of research groups in which such accomplishments occurred [Heinze et al. 2009]. They found “that creative accomplishments are associated with small group size, organisational contexts with sufficient access to a complementary variety of technical skills, stable research sponsorship, timely access to extra-mural skills and resources, and facilitating leadership” [Heinze et al. 2009: 610].

The findings of both studies are very general, mainly because the researchers looked for commonalities across all their cases in order to identify the most important conditions. The downside of this analytical strategy is that it does not significantly contribute to explanations because it cannot reveal what mechanisms produce specific outcomes under specific circumstances. This kind of explanation requires comparative case studies that systematically vary important conditions and outcomes, as has been attempted by Laudel and Gläser [2014]. The same idea shaped an internationally comparative study of the impact of changing authority relations on conditions for scientific innovations. The project started from the observation that the overall changes in the governance of research (see section 2 above) have altered authority relations concerning research content, and asked how these changes have modified the conditions for scientific innovations. The comparison of the development of innovations in physics (Bose-Einstein condensation, Laudel et al. 2014b), biology (evolutionary developmental biology, Laudel et al. 2014a), education research (international large-scale student assessments, Gläser et al. 2014) and linguistics (computerised corpus linguistics, Engwall et al. 2014) in four countries demonstrated the varied impact of changing authority relations [Whitley 2014]. In particular, it turned out that, while changes in authority relations might have led to more flexibility at the level of national science systems, many rigidities remained in place and new ones (such as the short-termism of funding) have emerged. The opportunities to develop an innovation depend on the innovation’s epistemic characteristics, which translate into innovation-specific requirements for careers, resource allocation schemes, and evaluation practices [Whitley 2014].

3.2. The analysis of governance in constructivist studies of scientific research

The foundational constructivist studies saw practices of governance affecting the production of scientific knowledge but did not consider them because they did not seem to make a difference for the questions ethnographic observers wanted to answer at that time. Knorr-Cetina [1981: 68-93; 1982] observed the impact of governance––the constant need for scientists to adjust their research agendas to the interests of other actors such as funding agencies––and concluded that instead of scientific communities, “transepistemic arenas” constitute the relevant context for researchers. Latour and Woolgar [1986 [1979]: 187-233] derived from their ethnography the model of a “cycle of credibility” that links the production of knowledge claims to the conditions under which the resources for this production are obtained. Fujimura [1987] showed that researchers construct “do-able” research problems, and that the “do-ability” of research problems depends not only on epistemic factors but also on conditions constructed by governance.

In spite of these observations, there are no systematic accounts of how (by what means and with what effects) the governance arrangements in which researchers are embedded, and to which they adapt, modify the content of the knowledge produced by those researchers. The sociology of scientific knowledge was (and still is) interested in different questions. The questions asked address a deeper level of researchers’ engagement with material objects, knowledge, or with each other. Governance, and its function of maintaining research processes, constitutes a background that is implicitly disregarded. There are degrees of disregard, however, which create a spectrum.

We describe this spectrum of considering governance with examples from more recent studies of research content. At one end, we find studies that look so deeply into the construction of scientific knowledge that they do not even see governance. For example, Hoeppe [2014] conducts an ethnomethodological study to answer the question “How, then, do researchers achieve agreement on what constitutes a successful combination of data?” [ibid.: 245]. He describes the institutional context and the governance of data production and sharing [ibid.: 245-246] and proceeds to study practices of “working data together”. Organisational conditions and governance completely disappear from the analysis, and the practical problems faced by researchers are the only focus.

A next step from this pole towards increasing consideration of governance is the ethnography by Owen-Smith [2001], who studies the management of laboratory work and explores in detail the connection between social differentiation and practices of scientific discussions in the lab. Although the social differentiation is greatly influenced by funding (some researchers have their own funding, others depend on the director’s grants), the conditions for acquiring these grants and their impact on the directions of research in the laboratory are not discussed. Conditions shaped by governance are present as a background throughout the study but are not treated as relevant to the question asked.

Hackett [2005] also discusses the functioning of research groups. He describes the role of a research group’s “identity”, the way control is exercised in the lab, approaches to risk, competition, and funding. The outside world plays a far more important role in his account than in Owen-Smith’s because he considers the embeddedness of research groups in the competitive world of bioscience as an important condition. However, the conditions shaped by governance (competition, conditions under which grants are awarded, careers) are not considered with regard to their influence on research content. They form a background that obviously shapes behaviour but whose impact on research content remains obscure.

Kleinman [1998] explicitly argued that laboratory studies had neglected the embeddedness of laboratories in larger structures, and set out to explore the impact of such structures:

I entered the laboratory knowing that this lab had relations with university administration, with for-profit companies with which the laboratory collaborates, and with commercial suppliers of research materials. I knew, furthermore, that matters of intellectual property were of concern to the lab leader and lab members. I did not know what these factors meant for lab practices or how they affected laboratory life. Finding this out was the aim of my project [Kleinman 1998: 291].

Kleinman showed that agrichemical companies defined the research agenda of the laboratory, and identified the processes through which this occurred. The laboratory worked in an applied agrichemical field, and the disease control agent it worked on was compared to commercial fungicides with regard to their cost-benefit ratios. He also described how the lab’s dependence on a commercially produced research material, Taq polymerase, created problems for research, and how difficult it would have been to circumvent buying the polymerase due to patent protection. Turning to the university administration and its role in patenting and licencing issues, he finally demonstrated the potential of this embeddedness of the lab to hinder research collaborations.

In his laboratory study, Kleinman identified three “pathways of impact” that make governance issues “reach through” to research content. He was not able to establish the extent to which they actually change research content because his ethnography was limited to showing that the research he observed proceeded as it did under the specific conditions created by governance. He could not consider the impact of alternative governance arrangements.

These laboratory studies mark a spectrum of rising awareness of, and interest in, issues of governance. The collection of laboratory studies we perused is unevenly distributed across this spectrum. Most of the studies were at the “Hoeppe pole” of the spectrum, while only a few were at the “Kleinman pole” (Tousignant 2013, on the cessation of research due to a lack of resources in a Senegalese lab, coming closest). We hasten to add that we do not judge the value of any of these studies. Our spectrum expresses the degree to which the different kinds of research questions asked by laboratory studies are positioned to yield findings on the ways in which governance influences research content. We do not imply any normative statement about “right” and “wrong” or “important” and “unimportant” research questions.

3.3. Undone science

Another stream of research relevant to our question about the impact of governance on research content concerns knowledge that has not been produced. As the small but very rapidly growing body of literature on this topic indicates, this question is as complex as its dominant counterpart––the production of knowledge––and fraught with additional ontological, conceptual and methodological problems [Frickel 2014a; 2014b]. The topic is also inseparable from the impact of governance on research content because the notion of governance having an impact implies that different governance would produce different knowledge. Any governance that contributes to the existence of specific knowledge also contributes to the non-existence of other knowledge. This is why studies addressing our question explicitly or implicitly struggle with the problem of counterfactuals. What would have happened if the governance were different?

There are many forms of absent knowledge and many factors contributing to absences [Gross 2007]. Theoretical and methodological traditions, the presence or absence of opportunities to produce specific knowledge, and interests in the presence of some or the absence of other specific knowledge can all contribute to “undone science” [Frickel et al. 2010]. In the context of our research question, we are interested in science that is undone because of governance. In these cases, the lack of specific knowledge is often pointed out by interested parties (Hess 2007: 22, see also 2.6). These cases are very useful for the study of undone science because specific absences of knowledge are not easily identified beyond the trivial case of researchers framing their contributions by describing gaps in their community’s knowledge. Furthermore, we can limit our discussion to cases in which the absence of scientific knowledge is due to specific research practices that are shaped or maintained by governance.

Of the four case studies presented by Frickel et al. [2010], the “chlorine sunset controversy” is of particular interest to our question. The controversy addressed a regulatory paradigm that influenced the construction and articulation of research priorities. The regulatory paradigm required that each chlorinated chemical be individually tested for harmfulness. Opponents pointed out that the entire class of chlorinated chemicals was likely to be dangerous and demanded research that systematically addressed this problem. They lost, not least because the chemical industry had a strong interest in maintaining the traditional paradigm.

These observations resonate with Kleinman and Suryanarayanan’s [2013] study of “Colony Collapse Disorder”––the sudden loss of a honey bee colony’s adult population. The authors demonstrate that the dominant “toxicological epistemic form” underlying the regulatory paradigm for insecticides exclusively focused on lethal doses of individual insecticides for individual honey bees and thus “ignored––meaning that it failed to study, indeed could not study or would not consider seriously––possible evidence of the effects of low or ‘sublethal’ levels of insecticides” [ibid.: 497-498]. Information provided by commercial beekeepers requires a different epistemic form for approaching the problem but is largely ignored. As in the case of chlorinated chemicals, a dominant epistemic form that is incapable of producing specific knowledge is maintained by governance arrangements for regulating a specific industry.

These two examples illustrate both the potential and the current limitations of studies of undone science. Such studies provide the epistemological opportunity to enhance our understanding of the ways in which governance shapes research content by investigating what happens in the “shadow” of governance (which does not imply that all this is necessarily unintentional). However, realising this potential requires overcoming the exclusive focus on consequences of undone science outside science and including the impact of undone science on the knowledge production of scientific communities.

3.4. Strengths and weaknesses of influence-searching studies

This brief review of some recent research on the construction of scientific knowledge indicates a great but largely unused potential. The ways in which conditions of research are translated into changed knowledge have been studied in great detail by many scholars. Many of these studies also include a comparative assessment of influences in complex situations. They explore the content of knowledge production, often benefitting from a long and deep immersion in the field under study. sts has the research tradition and expertise that make it possible to trace the impact of governance to its ends––the content of knowledge being produced by the governed research.[13]

From the point of view of our research question, it is a pity that sts has interests other than exploring the causal link between governance and research content. One possible reason for this reluctance may be the fear of a loss of “empirical resolution”: “If we choose a unit of analysis larger than the actual site of action, we remain removed from the indeterminacy which marks the situation” [Knorr-Cetina 1981: 43]. Another reason is likely to be the dominant descriptive orientation of the sts mainstream [Frickel 2014a: 89], which is at odds with the inherently causal interest underlying this review, an interest we appear to share with the new political sociology of science [ibid.; Frickel and Moore 2005b: 8-9].

Addressing causality in the impact of governance on research content would also raise two additional questions. First, such research would need to be comparative because assessing variations of governance and changes in knowledge content is necessary for causal ascription. It is very likely that governance has not surfaced more strongly in most ethnographies because they are single-case studies. Since no variation of governance can be observed in single-case studies, governance is treated as a negligible background condition. The necessity of systematically varying conditions created by governance in studies of knowledge construction points to a problem of research capacity. The depth necessary for exploring research content can only be reached for one case in projects conducted by just one researcher. Second, a comparative approach would necessarily include the comparison of research practices and research content across fields. This poses interesting challenges to sts [Laudel and Gläser 2014; Gläser and Laudel 2015b] but, again, sts is the field best equipped to meet them.

4. Conclusions

The question on which we focused this review––“How does governance shape research content?”––is only one of many questions investigated by science policy studies and the sociology of science, and certainly not the most popular among these. We would nevertheless insist that this question is theoretically relevant to both fields and of considerable practical importance. It is theoretically interesting to science policy studies because it demands a more systematic exploration of one of its most important dependent variables. Our discussion of the state of the art suggests that this exploration is likely to require a revision of both methodologies and theories addressing the impact of specific practices and instruments of governance on research content.

Our research question is also theoretically important to the sociology of science because, a few interesting attempts to develop a political sociology of science notwithstanding, the field still appears to be bifurcated into the investigation of the political effects of science, on the one hand, and laboratory studies that largely ignore political influences on science, on the other. We note a particular absence of knowledge that needs to be overcome by a specific research agenda.

The practical importance is illustrated by the studies that do contribute to our research question. All political actors with an interest in the directions, conduct or performance of science should be interested in the effects of governance on research content. We expect they would like to know whether governance instruments achieve their stated effects, what other effects they have, and how governance could be modified to serve their interests.

As a conclusion to this review, we would like to outline a research agenda by asking three questions. The first question addresses the state of our knowledge. What do we already know about influences of governance on research content? The answer to this question is somewhat disappointing. We have too few pieces of the puzzle to see even the outline of the picture. We know that the most important channel through which authority over research content is exercised––the allocation of resources––is subject to a struggle between an increasing number of actors interested in research content, and is utilised in an increasing number of governance instruments. We know that the use of this channel is effective––funders of research can achieve changes in research content––and that its efficiency is reduced by window dressing. Competition for funding appears to increase performance by way of redistributing resources to the best performers, however identified, and may reduce the diversity of approaches and thus change the content of knowledge production. Resource-intensive scientific innovations appear to remain the privilege of a small scientific elite. State regulation and privatisation of scientific knowledge may affect directions of research by triggering an avoidance of certain topics, and may also affect the diffusion of knowledge in scientific communities.

With few exceptions, these effects have been established on the micro-level, and aggregate effects on the levels of national and international scientific communities are generally under-researched. Findings on macro-level effects are also limited in scope because they were established for individual countries or fields, and the grounds on which they could be generalised remain unclear.

A crucial methodological problem of studies on the effects of specific governance instruments is produced by the fact that these instruments always operate in complex situations, in which they overlap with numerous other governance arrangements and non-governance factors. This is why causal attribution of macro-level changes in research to particular governance instruments is so difficult. Research at the micro-level can address the complexity of the situation but is not yet able to provide conclusions about macro-level effects. It is also liable to idiosyncrasy because it cannot describe research content in a framework that supports generalisation.

The limited state of our knowledge, and the reasons for it that we identified in this review, lead us to the second question. How can research on the impact of governance on research content be developed? We do not see the necessity of adding to the list of topics raised in the literature. Instead, we would like to emphasise two strategic tasks. Our analysis suggests that research on the impact of governance on research content must find ways to address the macro-micro-macro links and the causal processes producing them. The macro-micro link must be addressed in its full complexity. At the macro level, this requires taking into account the embeddedness of specific governance instruments in systems of governance consisting of a multitude of instruments and processes. At the micro level, governance must be studied as embedded in complex social situations to which researchers respond. Studying consequences for the content of knowledge at the macro level requires considering both aggregate and synthetic effects of micro-level changes in the production of scientific knowledge.

Furthermore, identifying impact means establishing causality. This requires comparative approaches for both governance and fields. Studying the effects of one particular governance instrument in one country cannot tell us much about the ways in which this governance instrument changes knowledge content. We would need to know what happens when this governance instrument is embedded in a different system of governance, and what happens when it is absent. The same strategy must be applied to fields. The epistemic practices of researchers that governance is supposed to change vary enormously. Again, establishing causality requires assessing the impact of governance instruments on specific research practices, and analysing field-specific effects. Both comparative research across fields and the aggregation of changes in knowledge content depend on comparable descriptions of the relevant epistemic properties of fields and knowledge. So far, we have very few tools for this, e.g. Whitley’s [2000 [1984]] comparative framework for fields based on task uncertainty and mutual dependence. Our own comparative research has led to lists of interesting variables but not yet to a framework [Gläser et al. 2010; Laudel and Gläser 2014; Gläser and Laudel 2015b]. The comparative description of field-specific conditions for, and outcomes of, knowledge construction processes remains a major task, and not only investigations of the impact of governance on research content depend on it.

These strategic tasks lead us to a third question. How can such a research agenda be realised? As an answer to this question we would like to highlight the need for interdisciplinary collaboration and the necessity of questioning the way in which we conduct research. Our review should have made clear why we think that science policy studies and the sociology of science need to collaborate. Science policy studies contribute their experience with the investigation of macro-structures and of the exercise of authority over research content in these structures. They are best equipped to investigate the first links of the causal chain, which translate macro-level changes in governance into changes in the situation of researchers.

The sociology of science can contribute its rich experience in engaging with research content and its construction at the micro-level. Researchers and research groups constitute “obligatory points of passage” for governance because, in order to change research content, governance must affect the choice of problems or approaches. This is the domain of sts. It has the analytical toolbox for studying the enactment of governance in research situations and the mutual shaping of governance and research. sts also contributes a rich knowledge of the specificity of knowledge production processes, and can help understand how governance instruments that are supposed to operate across all sciences, social sciences and humanities have field-specific effects.

A third field we believe to be necessary in this collaboration is bibliometrics. In this review, we discussed several contributions that demonstrate the potential of bibliometrics for contributing to the analysis of macro-micro links (the contribution by existing scientific knowledge to shaping the situation of researchers), micro-level dynamics (research trajectories of individuals and groups), and micro-macro links (the aggregation of individual processes to macro-level knowledge dynamics). In particular, we believe that bibliometrics is the best, if not the only, tool for analysing the micro-macro link between individual knowledge production and the knowledge dynamics in scientific communities. Integrating bibliometric methods in the research agenda outlined above can potentially solve several of its crucial methodological problems.

Although it seems unlikely that our review could incite a “gold rush” towards a new collaborative research enterprise between science policy studies and the sociology of science, we believe we have demonstrated that there is a research interest linking the two fields, a state of the art on which we could build, and a set of difficult and therefore exciting problems to be solved. We hope the knowledge provided and the absences pointed out in this review will contribute to creating a research agenda.

Acknowledgements

The authors would like to thank Michel Dubois, Richard Whitley and an anonymous reviewer for their helpful comments and suggestions.

Footnotes

1 The incorporation of public policy goals in science policy is of course not completely new, as military research most strikingly illustrates. We follow Whitley’s [2010] argument that scope, modes and effects of this incorporation have changed significantly since the 1970s.

2 For a recent review of studies of evaluation exercises and performance metrics, see de Rijcke et al. [2015].

3 This is why questionnaires are useless if designed as opinion polls, as studies on the effects of the British Research Assessment Exercise have amply demonstrated [Gläser et al. 2002]. Surveys that ask respondents what actually happened in their organisations in response to policy measures (e.g. Aagaard 2015) appear to be much more fruitful. Appreciating the difficulties of establishing causal macro-micro-macro links, Rafols et al. [2012] wisely published their proposal of a mechanism by which the use of unsuitable evaluation indicators (journal rankings) can reduce the diversity of research under the title “How journal rankings can suppress interdisciplinary research” (emphasis added) rather than suggesting that it does, for which there is no evidence.

4 The study by Franzoni et al. [2011], which attempts to establish a causal link between incentive systems and submissions to the journal Science, is particularly problematic because it operates with very limited information about these incentive systems. We can immediately point out two grave errors in their categorization of national incentive systems. The Australian “Research Quality Framework” was never introduced (see e.g. Donovan 2008: 58, or simply the Wikipedia entry for “Research Quality Framework”) and thus could not have an effect on submissions to the journal Science. The laws on performance-based salaries in Germany apply only to professors appointed after the introduction of the laws, and thus could scarcely have an effect on submissions to Science in the years up to 2009 (see e.g. Lange 2007: 163, 169).

5 A related but different topic concerns behavioral differences between researchers in universities and in industry (see Kleinman and Vallas 2001 for a general argument about converging conditions for research in academia and industry). The question of whether, why, and how researchers in industry share information, and how their information-sharing behavior differs from that of their colleagues in academia, has been a long-standing issue in science and innovation policy research [Von Hippel 1987; Kreiner and Schultz 1993; Haeussler 2011]. Unfortunately, industrial research has not yet been subjected to the same scrutiny as publicly funded research (see Penders et al. 2009 for an interesting account of the reasons for this asymmetry).

6 Studies of biological research by Morris [2000] and of government-sponsored research in public health by Smith [2014] suggest that window dressing and negotiations with the funder of research often lead to a compromise between the interests of funder and researcher.

7 The same appears to apply to technological platforms. Merz and Biniok [2010] analysed technological platforms in micro- and nanotechnology in Switzerland. They found that the platforms “provide new means to increase contact and interrelation between academic science and industry” [ibid.: 120] but observed little research collaboration emerging from these contacts. The joint use of these platforms by academic science and industry does not seem to affect the former’s research agendas.

8 According to Owen-Smith and Powell [2001: 106-107], some researchers respond to the threat of use restrictions by patenting their results in order to protect their own freedom of use (see also Packer and Webster 1996: 444; Rappert and Webster 1997: 122). This use of patents resembles the use of “copyleft” in the communal production of open source software as described, e.g., by Lerner and Tirole [2000].

9 For example, Velden [2013] found that researchers in synthetic chemistry had experienced being scooped and therefore decided strategically about information sharing, whereas the experimental physicists she observed were much more open and experienced less scooping. She attributed the difference to synthetic chemistry’s individualised research practices, reliance on individual skills and short duration of research processes, which she contrasted with experimental physics’ collaborative, long-term research processes based on the construction of dedicated equipment.

10 This type of actor is of course not completely new. The involvement of social movements in the governance of science has been traced back to at least the late 1960s. However, the number and range of such movements appear to have risen significantly in the last three decades (see, e.g., Moore 2005: 301-304 for the US).

11 Unfortunately, some surveys ask respondents directly how their practices have changed in response to a specific governance instrument. This passes the study’s question on to respondents and collects only respondents’ subjective theories about its impact.

12 In order to realise this potential, interview-based studies must of course avoid the trap of simply passing the research question on to informants (see above, note 11).

13 The potential of ethnographic studies for the exploration of governance issues is illustrated by a recent study that challenges the idea of unintended consequences of quantitative performance evaluations. Rushforth and de Rijcke [2015] describe the role of the journal impact factor in the everyday conduct of research in two Dutch research groups. This study is of particular interest because it shows how this indicator is firmly embedded in the frames and interactions of research groups and serves as a “judgement device” in decisions on publication behaviour. Although the study does not explore changes in research content, it introduces an important new perspective on the relationship between endogenous and exogenous evaluations in science, and clearly demonstrates that the impact of this relationship on research content could be identified.

References

Aagaard, Kaare, 2015. “How Incentives Trickle Down: Local Use of a National Bibliometric Indicator System”, Science and Public Policy, 42: 725-737.
Aagaard, Kaare, Bloch, Carter and Schneider, Jesper W., 2015. “Impacts of Performance-Based Research Funding Systems: The Case of the Norwegian Publication Indicator”, Research Evaluation, 24: 106-117.
Anderson, Gina, 2008. “Mapping Academic Resistance in the Managerial University”, Organization, 15 (2): 251-270.
Auranen, Otto and Nieminen, Mika, 2010. “University Research Funding and Publication Performance––An International Comparison”, Research Policy, 39 (6): 822-834.
Baldini, Nicola, 2008. “Negative Effects of University Patenting: Myths and Grounded Evidence”, Scientometrics, 75 (2): 289-311.
Bauer, Marianne and Henkel, Mary, 1998. “Academic Responses to Quality Reforms in Higher Education”, in Henkel, M. and Little, B., eds., Changing Relationships Between Higher Education and the State (London, Jessica Kingsley: 236-262).
Behrens, Teresa R. and Gray, Denis O., 2001. “Unintended Consequences of Cooperative Research: Impact of Industry Sponsorship on Climate for Academic Freedom and Other Graduate Student Outcome”, Research Policy, 30 (2): 179-199.
Bensaude-Vincent, Bernadette, 2016. “Building Multidisciplinary Research Fields: The Cases of Materials Science, Nanotechnology and Synthetic Biology”, in Merz, M. and Sormani, Ph., eds., The Local Configuration of New Research Fields (Dordrecht, Springer International Publishing: 45-60).
Berezin, Alexander, 1998. “The Perils of Centralized Research Funding Systems”, Knowledge, Technology & Policy, 11 (3): 5-26.
Berman, Elizabeth Popp, 2012. Creating the Market University: How Academic Science Became an Economic Engine (Princeton, Princeton University Press).
Berman, Elizabeth Popp, 2014. “Not Just Neoliberalism: Economization in US Science and Technology Policy”, Science, Technology & Human Values, 39 (3): 397-431.
Bhupatiraju, Samyukta, Nomaler, Önder, Triulzi, Giorgio and Verspagen, Bart, 2012. “Knowledge Flows––Analyzing the Core Literature of Innovation, Entrepreneurship and Science and Technology Studies”, Research Policy, 41 (7): 1205-1218.
Bloch, Carter, Graversen, Ebbe Krogh and Pedersen, Heidi Skovgaard, 2014. “Competitive Research Grants and Their Impact on Career Performance”, Minerva, 52 (1): 77-96.
Blume, Stuart S., 1974. Toward a Political Sociology of Science (New York, The Free Press).
Blumenthal, David, Campbell, Eric G., Causino, Nancyanne and Seashore Louis, Karen, 1996. “Participation of Life-Science Faculty in Research Relationships with Industry”, The New England Journal of Medicine, 335 (23): 1734-1739.
Böhmer, Susan and von Ins, Markus, 2009. “Different––not Just by Label: Research-Oriented Academic Careers in Germany”, Research Evaluation, 18 (3): 177-184.
Bornmann, Lutz, Wallon, Gerlind and Ledin, Anna, 2008. “Does the Committee Peer Review Select the Best Applicants for Funding? An Investigation of the Selection Process for Two European Molecular Biology Organization Programmes”, PLoS ONE, 3 (10): e3480.
Braun, Dietmar, 1993. “Who Governs Intermediary Agencies? Principal-Agent Relations in Research Policy-Making”, Journal of Public Policy, 13 (2): 135-162.
Braun, Dietmar, 1998. “The Role of Funding Agencies in the Cognitive Development of Science”, Research Policy, 27 (8): 807-821.
Braun, Dietmar and Guston, David H., 2003. “Principal-Agent Theory and Research Policy: An Introduction”, Science and Public Policy, 30 (5): 302-308.
Brown, Mark B., 2015. “Politicizing Science: Conceptions of Politics in Science and Technology Studies”, Social Studies of Science, 45 (1): 3-30.
Brown, Phil, McCormick, Sabrina, Mayer, Brian, Zavestoski, Stephen, Morello-Frosch, Rachel, Gasior Altman, Rebecca and Senier, Laura, 2006. “‘A Lab of Our Own’: Environmental Causation of Breast Cancer and Challenges to the Dominant Epidemiological Paradigm”, Science, Technology & Human Values, 31 (5): 499-536.
Brunet, Philippe and Dubois, Michel, 2012. “Stem Cells and Technoscience: Sociology of the Emergence and Regulation of a Field of Biomedical Research in France”, Revue Française de Sociologie, 53 (3): 251-286.
Butler, Linda, 2003. “Explaining Australia’s Increased Share of ISI Publications––The Effects of a Funding Formula Based on Publication Counts”, Research Policy, 32: 143-155.
Butler, Linda and Biglia, Bev, 2001. Analysing the Journal Output of NHMRC Research Grants Schemes (Canberra, National Health & Medical Research Council).
Campbell, David, Picard-Aitken, Michelle, Côté, Grégoire, Caruso, Julie, Valentim, Rodolfo, Edmonds, Stuart, Williams, Gregory T., Macaluso, Benoît, Robitaille, Jean-Pierre, Bastien, Nicolas, Laframboise, Marie-Claude, Lebeau, Louis-Michel, Mirabel, Philippe, Larivière, Vincent and Archambault, Éric, 2010. “Bibliometrics as a Performance Measurement Tool for Research Evaluation: The Case of Research Funded by the National Cancer Institute of Canada”, American Journal of Evaluation, 31 (1): 66-83.
Campbell, Eric G., Weissman, Joel S., Causino, Nancyanne and Blumenthal, David, 2000. “Data Withholding in Academic Medicine: Characteristics of Faculty Denied Access to Research Results and Biomaterials”, Research Policy, 29: 303-312.
Capano, Giliberto, 2011. “Government Continues to Do Its Job. A Comparative Study of Governance Shifts in the Higher Education Sector”, Public Administration, 89 (4): 1622-1642.
Caswill, Chris, 2003. “Principals, Agents and Contracts”, Science and Public Policy, 30 (5): 337-346.
Chubin, Daryl E. and Hackett, Edward J., 1990. Peerless Science: Peer Review and U.S. Science Policy (Albany, State University of New York Press).
Cohen, Wesley M., Nelson, Richard R. and Walsh, John P., 2002. “Links and Impacts: The Influence of Public Research on Industrial R&D”, Management Science, 48 (1): 1-23.
Cole, Stephen, Cole, Jonathan R. and Simon, Gary A., 1981. “Chance and Consensus in Peer Review”, Science, 214 (4523): 881-886.
Cooper, Mark H., 2009. “Commercialization of the University and Problem Choice by Academic Biological Scientists”, Science, Technology & Human Values, 34 (5): 629-653.
Coriat, Benjamin and Orsi, Fabienne, 2002. “Establishing a New Intellectual Property Rights Regime in the United States: Origins, Content and Problems”, Research Policy, 31 (8-9): 1491-1507.
Cozzens, Susan E., 1986. “Theme Section ‘Funding and Knowledge Growth’: Editor’s Introduction”, Social Studies of Science, 19: 9-21.
D’Este, Pablo and Perkmann, Markus, 2011. “Why Do Academics Engage with Industry? The Entrepreneurial University and Individual Motivations”, The Journal of Technology Transfer, 36 (3): 316-339.
De Boer, Harry, Enders, Jürgen and Leišytė, Liudvika, 2007. “Public Sector Reform in Dutch Higher Education: The Organizational Transformation of the University”, Public Administration, 85 (1): 27-46.
De Rijcke, Sarah, Wouters, Paul F., Rushforth, Alex D., Franssen, Thomas P. and Hammarfelt, Björn, 2015. “Evaluation Practices and Effects of Indicator Use––A Literature Review”, Research Evaluation.
Dirk, Lynn, 1999. “A Measure of Originality: The Elements of Science”, Social Studies of Science, 29 (5): 765-776.
Donovan, Claire, 2008. “The Australian Research Quality Framework: A Live Experiment in Capturing the Social, Economic, Environmental, and Cultural Returns of Publicly Funded Research”, New Directions for Evaluation, 118: 47-60.
Eisenberg, Rebecca S., 2001. “Bargaining Over the Transfer of Proprietary Research Tools: Is this Market Failing or Emerging?”, in Cooper Dreyfuss, R., Leenheer Zimmermann, D. and First, H., eds., Expanding the Boundaries of Intellectual Property (Oxford, Oxford University Press: 223-249).
Eisler, Matthew N., 2013. “‘The Ennobling Unity of Science and Technology’: Materials Sciences and Engineering, the Department of Energy, and the Nanotechnology Enigma”, Minerva, 51 (2): 225-251.
Engwall, Lars, Aljets, Enno, Hedmo, Tina and Ramuz, Raphaël, 2014. “Computer Corpus Linguistics: An Innovation in the Humanities”, in Whitley, R. and Gläser, J., eds., Organizational Transformation and Scientific Change: The Impact of Institutional Restructuring on Universities and Intellectual Innovation (Bingley, Emerald Group Publishing Limited: 331-365).
Epstein, Steven, 1996. Impure Science: AIDS, Activism, and the Politics of Knowledge (Berkeley, University of California Press).
Evans, James A., 2010a. “Industry Collaboration, Scientific Sharing, and the Dissemination of Knowledge”, Social Studies of Science, 40 (5): 757-791.
Evans, James A., 2010b. “Industry Induces Academic Science to Know Less about More”, American Journal of Sociology, 116 (2): 389-452.
Ferlie, Ewan, Musselin, Christine and Andresani, Gianluca, 2009. “The ‘Steering’ of Higher Education Systems: A Public Management Perspective”, in Paradeise, C., Reale, E., Bleiklie, I. and Ferlie, E., eds., University Governance––Western European Comparative Perspectives (Dordrecht, Springer Science and Business Media: 1-20).
Franzoni, Chiara, Scellato, Giuseppe and Stephan, Paula, 2011. “Changing Incentives to Publish”, Science, 333 (6043): 702-703.
Frickel, Scott, 2014a. “Absences: Methodological Note about Nothing, in Particular”, Social Epistemology, 28 (1): 86-95.
Frickel, Scott, 2014b. “Not Here and Everywhere: The Non-Production of Scientific Knowledge”, in Kleinman, D. L. and Moore, K., eds., Routledge Handbook of Science, Technology, and Society (New York, Routledge: 263-276).
Frickel, Scott, Gibbon, Sahra, Howard, Jeff, Kempner, Joanna, Ottinger, Gwen and Hess, David J., 2010. “Undone Science: Charting Social Movement and Civil Society Challenges to Research Agenda Setting”, Science, Technology & Human Values, 35 (4): 444-473.
Frickel, Scott and Moore, Kelly, eds., 2005a. The New Political Sociology of Science: Institutions, Networks, and Power (Madison, University of Wisconsin Press).
Frickel, Scott and Moore, Kelly, 2005b. “Prospects and Challenges for a New Political Sociology of Science”, in Frickel, S. and Moore, K., eds., The New Political Sociology of Science: Institutions, Networks, and Power (Madison, University of Wisconsin Press: 3-31).
Fujimura, Joan, 1987. “Constructing ‘Do-able’ Problems in Cancer Research: Articulating Alignment”, Social Studies of Science, 17: 257-293.
Furman, Jeffrey L., Murray, Fiona and Stern, Scott, 2012. “Growing Stem Cells: The Impact of Federal Funding Policy on the US Scientific Frontier”, Journal of Policy Analysis and Management, 31 (3): 661-705.
Gaughan, Monica and Bozeman, Barry, 2002. “Using Curriculum Vitae to Compare Some Impacts of NSF Research Grants with Research Center Funding”, Research Evaluation, 11 (1): 17-26.
Gläser, Jochen, Aljets, Enno, Gorga, Adriana, Hedmo, Tina, Håkansson, Elias and Laudel, Grit, 2014. “Path Dependence and Policy Steering in the Social Sciences: The Varied Impact of International Large Scale Student Assessment on the Educational Sciences in Four European Countries”, in Whitley, R. and Gläser, J., eds., Organizational Transformation and Scientific Change: The Impact of Institutional Restructuring on Universities and Intellectual Innovation (Bingley, Emerald Group Publishing Limited: 267-295).
Gläser, Jochen, Lange, Stefan, Laudel, Grit and Schimank, Uwe, 2010. “The Limits of Universality: How Field-Specific Epistemic Conditions Affect Authority Relations and their Consequences”, in Whitley, R., Gläser, J. and Engwall, L., eds., Reconfiguring Knowledge Production: Changing Authority Relationships in the Sciences and Their Consequences for Intellectual Innovation (Oxford, Oxford University Press: 291-324).
Gläser, Jochen and Laudel, Grit, 2007. “Evaluation without Evaluators: The Impact of Funding Formulae on Australian University Research”, in Whitley, R. and Gläser, J., eds., The Changing Governance of the Sciences: The Advent of Research Evaluation Systems (Dordrecht, Springer: 127-151).
Gläser, Jochen and Laudel, Grit, 2015a. “A Bibliometric Reconstruction of Research Trails for Qualitative Investigations of Scientific Innovations”, Historical Social Research––Historische Sozialforschung, 40 (3): 299-330.
Gläser, Jochen and Laudel, Grit, 2015b. “Cold Atom Gases, Hedgehogs, and Snakes: The Methodological Challenges of Comparing Scientific Things”, Nature and Culture, 10 (3): 303-332.
Gläser, Jochen, Laudel, Grit, Hinze, Sybille and Butler, Linda, 2002. Impact of Evaluation-Based Funding on the Production of Scientific Knowledge: What to Worry About, and How to Find Out (Expertise für das BMBF), http://www.sciencepolicystudies.de/dok/expertise-glae-lau-hin-but.pdf.
Gläser, Jochen, Laudel, Grit and Lettkemann, Eric, 2016. “Hidden in Plain Sight: The Impact of Generic Governance on the Emergence of Research Fields”, in Merz, M. and Sormani, Ph., eds., The Local Configuration of New Research Fields (Dordrecht, Springer International Publishing: 25-43).
Gross, Matthias, 2007. “The Unknown in Process: Dynamic Connections of Ignorance, Non-Knowledge and Related Concepts”, Current Sociology, 55 (5): 742-759.
Guetzkow, Joshua and Lamont, Michèle, 2004. “What is Originality in the Humanities and the Social Sciences?”, American Sociological Review, 69 (2).
Guston, David H., 1996. “Principal-Agent Theory and the Structure of Science Policy”, Science and Public Policy, 23 (4): 229-240.
Guston, David H., 2001. “Boundary Organizations in Environmental Policy and Science: An Introduction”, Science, Technology & Human Values, 26 (4): 399-408.
Hackett, Edward J., 1987. “Funding and Academic Research in the Life Sciences: Results of an Exploratory Study”, Science & Technology Studies, 5 (3/4): 134-147.
Hackett, Edward J., 2005. “Essential Tensions: Identity, Control, and Risk in Research”, Social Studies of Science, 35 (5): 787-826.
Haeussler, Carolin, 2011. “Information-Sharing in Academia and the Industry: A Comparative Study”, Research Policy, 40 (1): 105-122.
Hammarfelt, Björn and De Rijcke, Sarah, 2015. “Accountability in Context: Effects of Research Evaluation Systems on Publication Practices, Disciplinary Norms, and Individual Working Routines in the Faculty of Arts at Uppsala University”, Research Evaluation, 24: 63-77.
Hedgecoe, Adam M., 2003. “Terminology and the Construction of Scientific Disciplines: The Case of Pharmacogenomics”, Science, Technology & Human Values, 28 (4): 513-537.
Heinze, Thomas, 2008. “How to Sponsor Ground-Breaking Research: A Comparison of Funding Schemes”, Science and Public Policy, 35 (5): 802-818.
Heinze, Thomas, Shapira, Philip, Rogers, Juan D. and Senker, Jacqueline M., 2009. “Organizational and Institutional Influences on Creativity in Scientific Research”, Research Policy, 38: 610-623.
Heinze, Thomas, Shapira, Philip, Senker, Jacqueline and Kuhlmann, Stefan, 2007. “Identifying Creative Research Accomplishments: Methodology and Results for Nanotechnology and Human Genetics”, Scientometrics, 70 (1): 125-152.
Henkel, Mary, 2000. Academic Identities and Policy Change in Higher Education (London, Jessica Kingsley).
Henkel, Mary, 2005. “Academic Identity and Autonomy in a Changing Policy Environment”, Higher Education, 49: 155-176.
Hess, David J., 2007. Alternative Pathways in Science and Industry: Activism, Innovation, and the Environment in an Era of Globalization (Cambridge, MIT Press).
Hessels, Laurens K., Grin, John and Smits, Ruud E. H. M., 2011. “The Effects of a Changing Institutional Environment on Academic Research Practices: Three Cases from Agricultural Science”, Science and Public Policy, 38 (7): 555-568.
Hicks, Diana, 2012. “Performance-Based University Research Funding Systems”, Research Policy, 41: 251-261.
Hoeppe, Götz, 2014. “Working Data Together: The Accountability and Reflexivity of Digital Astronomical Practice”, Social Studies of Science, 44 (2): 243-270.
Hollingsworth, J. Rogers, 2008. “Scientific Discoveries: An Institutionalist and Path-Dependent Perspective”, in Hannaway, C., ed., Biomedicine in the Twentieth Century: Practices, Policies, and Politics (Bethesda, National Institutes of Health: 317-353).
Hollingsworth, J. Rogers and Hollingsworth, Ellen Jane, 2011. Major Discoveries, Creativity, and the Dynamics of Science (Vienna, edition echoraum).
Horrobin, David F., 1996. “Peer Review of Grant Applications: A Harbinger for Mediocrity in Clinical Research?”, Lancet, 348 (9037): 1293-1295.
Huutoniemi, Katri, 2012. “Communicating and Compromising on Disciplinary Expertise in the Peer Review of Research Proposals”, Social Studies of Science, 42 (6): 897-921.
Ingwersen, Peter and Larsen, Birger, 2014. “Influence of a Performance Indicator on Danish Research Production and Citation Impact, 2000-12”, Scientometrics, 101: 1325-1344.
Jacob, Brian and Lefgren, Lars, 2011. “The Impact of NIH Postdoctoral Training Grants on Scientific Productivity”, Research Policy, 40 (6): 864-874.
Jankowski, John E., 1999. “Trends in Academic Research Spending, Alliances, and Commercialization”, The Journal of Technology Transfer, 24 (1): 55-68.
Jasanoff, Sheila, 2010. “A Field of its Own: The Emergence of Science and Technology Studies”, in Frodeman, R., Thompson Klein, J., Mitcham, C. and Holbrook, B., eds., The Oxford Handbook of Interdisciplinarity (Oxford, Oxford University Press: 191-205).
Jiménez-Contreras, Evaristo, De Moya Anegón, Félix and Delgado López-Cózar, Emilio, 2003. “The Evolution of Research Activity in Spain––The Impact of the National Commission for the Evaluation of Research Activity (CNEAI)”, Research Policy, 32 (1): 123-142.
Kearnes, Matthew and Wienroth, Matthias, 2011. “Tools of the Trade: UK Research Intermediaries and the Politics of Impacts”, Minerva, 49 (2): 153-174.
Kleinman, Daniel Lee, 1998. “Untangling Context: Understanding a University Laboratory in the Commercial World”, Science, Technology, & Human Values, 23 (3): 285-314.
Kleinman, Daniel Lee and Suryanarayanan, Sainath, 2013. “Dying Bees and the Social Production of Ignorance”, Science, Technology & Human Values, 38 (4): 492-517.
Kleinman, Daniel Lee and Vallas, Steven P., 2001. “Science, Capitalism, and the Rise of the ‘Knowledge Worker’: The Changing Structure of Knowledge Production in the United States”, Theory and Society, 30 (4): 451-492.
Knorr-Cetina, Karin, 1981. The Manufacture of Knowledge: An Essay on the Constructivist and Contextual Nature of Science (Oxford, Pergamon Press).
Knorr-Cetina, Karin, 1982. “Scientific Communities or Transepistemic Arenas of Research? A Critique of Quasi-Economic Models of Science”, Social Studies of Science, 12: 101-130.
Knorr-Cetina, Karin, 1995a. “How Superorganisms Change: Consensus Formation and the Social Ontology of High-Energy Physics Experiments”, Social Studies of Science, 25 (1): 119-147.
Knorr-Cetina, Karin, 1995b. “Laboratory Studies: The Cultural Approach to the Study of Science”, in Jasanoff, S., Markle, G. E., Petersen, J. C. and Pinch, T., eds., Handbook of Science and Technology Studies (London, SAGE: 140-166).
Koumpis, Konstantinos and Pavitt, Keith, 1999. “Corporate Activities in Speech Recognition and Natural Language: Another ‘New Science’-Based Technology”, International Journal of Innovation Management, 3 (3): 335-366.
Kreiner, Kristian and Schultz, Majken, 1993. “Informal Collaboration in Research-and-Development: The Formation of Networks across Organizations”, Organization Studies, 14 (2): 189-209.
Krimsky, Sheldon, 2013. “Do Financial Conflicts of Interest Bias Research? An Inquiry into the ‘Funding Effect’ Hypothesis”, Science, Technology & Human Values, 38 (4): 566-587.
Lal, Bhavya, Hughes, Mary Elizabeth, Shipp, Stephanie, Lee, Elizabeth C., Richards, Amy Marshall and Zhu, Adrienne, 2011. Outcome Evaluation of the National Institutes of Health (NIH) Director’s Pioneer Award (NDPA), FY 2004–2005 (Washington, IDA Science and Technology Policy Institute).
Lam, Alice, 2010. “From ‘Ivory Tower Traditionalists’ to ‘Entrepreneurial Scientists’?”, Social Studies of Science, 40 (2): 307-340.
Lamont, Michèle, 2009. How Professors Think: Inside the Curious World of Academic Judgment (Cambridge, Harvard University Press).
Lamont, Michèle, Mallard, Grégoire and Guetzkow, Joshua, 2006. “Beyond Blind Faith: Overcoming the Obstacles to Interdisciplinary Evaluation”, Research Evaluation, 15 (1): 43-57.
Lange, Stefan, 2007. “The Basic State of Research in Germany: Conditions of Knowledge Production Pre-Evaluation”, in Whitley, R. and Gläser, J., eds., The Changing Governance of the Sciences: The Advent of Research Evaluation Systems (Dordrecht, Springer: 153-170).
Langfeldt, Liv, 2001. “The Decision-Making Constraints and Processes of Grant Peer Review, and their Effects on the Review Outcome”, Social Studies of Science, 31 (6): 820-841.
Langfeldt, Liv, Benner, Mats, Sivertsen, Gunnar, Kristiansen, Ernst H., Aksnes, Dag W., Brorstad Borlaug, Siri, Foss Hansen, Hanne, Kallerud, Egil and Pelkonen, Antti, 2015. “Excellence and Growth Dynamics: A Comparative Study of the Matthew Effect”, Science and Public Policy, 42 (5): 661-675.
Latour, Bruno and Woolgar, Steve, 1986 [1979]. Laboratory Life: The Construction of Scientific Facts (Princeton, Princeton University Press).
Laudel, Grit, 2006. “The Art of Getting Funded: How Scientists Adapt to their Funding Conditions”, Science and Public Policy, 33 (7): 489-504.
Laudel, Grit, Benninghoff, Martin, Lettkemann, Eric and Håkansson, Elias, 2014. “Highly Adaptable but not Invulnerable: Necessary and Facilitating Conditions for Research in Evolutionary Developmental Biology”, in Whitley, R. and Gläser, J., eds., Organizational Transformation and Scientific Change: The Impact of Institutional Restructuring on Universities and Intellectual Innovation (Bingley, Emerald Group Publishing Limited: 235-265).
Laudel, Grit and Gläser, Jochen, 2014. “Beyond Breakthrough Research: Epistemic Properties of Research and their Consequences for Research Funding”, Research Policy, 43 (7): 1204-1216.
Laudel, Grit, Lettkemann, Eric, Ramuz, Raphaël, Wedlin, Linda and Woolley, Richard, 2014. “Cold Atoms––Hot Research: High Risks, High Rewards in Five Different Authority Structures”, in Whitley, R. and Gläser, J., eds., Organizational Transformation and Scientific Change: The Impact of Institutional Restructuring on Universities and Intellectual Innovation (Bingley, Emerald Group Publishing Limited: 203-234).
Laudel, Grit and Weyer, Elke, 2014. “Where have all the Scientists Gone? Building Research Profiles at Dutch Universities and its Consequences for Research”, in Whitley, R. and Gläser, J., eds., Organizational Transformation and Scientific Change: The Impact of Institutional Restructuring on Universities and Intellectual Innovation (Bingley, Emerald Group Publishing Limited: 111-140).
Law, John, 1994. Organizing Modernity (Cambridge, Blackwell Publishers).
Leišytė, Liudvika, Enders, Jürgen and De Boer, Harry, 2010. “Mediating Problem Choice: Academic Researchers’ Responses to Changes in their Institutional Environment”, in Whitley, R., Gläser, J. and Engwall, L., eds., Reconfiguring Knowledge Production: Changing Authority Relationships in the Sciences and their Consequences for Intellectual Innovation (Oxford, Oxford University Press: 266-290).
Lepori, Benedetto, van den Besselaar, Peter, Dinges, Michael, Potí, Bianca, Reale, Emanuela, Slipersæter, Stig, Thèves, Jean and van der Meulen, Barend, 2007. “Comparing the Evolution of National Research Policies: What Patterns of Change?”, Science and Public Policy, 34 (6): 372-388.
Lerner, Josh and Tirole, Jean, 2000. The Simple Economics of Open Source (Cambridge, National Bureau of Economic Research).
Lewison, Grant, 1999. “The Definition and Calibration of Biomedical Subfields”, Scientometrics, 46 (3): 529-537.
Leydesdorff, Loet, 1989. “The Relations between Qualitative Theory and Scientometric Methods in Science and Technology Studies: Introduction to the Topical Issue”, Scientometrics, 15 (5-6): 333-347.
Leydesdorff, Loet and Gauthier, Élaine, 1996. “The Evaluation of National Performance in Selected Priority Areas Using Scientometric Methods”, Research Policy, 25 (3): 431-450.
Linkova, Marcela, 2014. “Unable to Resist: Researchers’ Responses to Research Assessment in the Czech Republic”, Human Affairs, 24 (1): 78-88.
Louvel, Séverine, 2010. “Changing Authority Relations within French Academic Research Units since the 1960s: From Patronage to Partnership”, in Whitley, R., Gläser, J. and Engwall, L., eds., Reconfiguring Knowledge Production (Oxford, Oxford University Press: 184-210).
Lucas, Lisa, 2006. The Research Game in Academic Life (Maidenhead, SRHE/Open University Press).
Luukkonen, Terttu, 2012. “Conservatism and Risk-Taking in Peer Review: Emerging ERC Practices”, Research Evaluation, 21 (1): 48-60.
Mahdi, Surya and Pavitt, Keith, 1997. “Key National Factors in the Emergence of Computational Chemistry Firms”, International Journal of Innovation Management, 1 (4): 355-386.
Marcovich, Anne and Shinn, Terry, 2014. Toward a New Dimension: Exploring the Nanoscale (Oxford, Oxford University Press).
Martin, Ben R., 2012. “The Evolution of Science Policy and Innovation Studies”, Research Policy, 41 (7): 1219-1239.
Martin, Ben R., Nightingale, Paul and Yegros-Yegros, Alfredo, 2012. “Science and Technology Studies: Exploring the Knowledge Base”, Research Policy, 41 (7): 1182-1204.
Mayntz, Renate and Schimank, Uwe, 1998. “Linking Theory and Practice: Introduction”, Research Policy, 27 (8): 747-755.
Meek, V. Lynn, 1991. “The Transformation of Australian Higher Education from Binary to Unitary System”, Higher Education, 21 (4): 461-494.
Meier, Frank and Schimank, Uwe, 2010. “Mission Now Possible: Profile Building and Leadership in German Universities”, in Whitley, R., Gläser, J. and Engwall, L., eds., Reconfiguring Knowledge Production: Changing Authority Relationships in the Sciences and their Consequences for Intellectual Innovation (Oxford, Oxford University Press: 211-236).
Merz, Martina and Biniok, Peter, 2010. “How Technological Platforms Reconfigure Science-Industry Relations: The Case of Micro- and Nanotechnology”, Minerva, 48 (2): 105-124.
Meyer-Krahmer, Frieder and Schmoch, Ulrich, 1998. “Science-Based Technologies: University-Industry Interactions in Four Fields”, Research Policy, 27 (8): 835-851.
Miller, Thaddeus R. and Neff, Mark W., 2013. “De-Facto Science Policy in the Making: How Scientists Shape Science Policy and Why it Matters (or, Why STS and STP Scholars Should Socialize)”, Minerva, 51 (3): 295-315.
Molyneux-Hodgson, Susan and Meyer, Morgan, 2009. “Tales of Emergence––Synthetic Biology as a Scientific Community in the Making”, BioSocieties, 4 (2-3): 129-145.
Moore, Kelly, 2005. “Powered by the People: Scientific Authority in Participatory Science”, in Frickel, S. and Moore, K., eds., The New Political Sociology of Science: Institutions, Networks, and Power (Madison, University of Wisconsin Press: 299-323).
Morello-Frosch, Rachel, Zavestoski, Stephen, Brown, Phil, Gasior Altman, Rebecca, McCormick, Sabrina and Mayer, Brian, 2005. “Embodied Health Movements: Responses to a ‘Scientized’ World”, in Frickel, S. and Moore, K., eds., The New Political Sociology of Science: Institutions, Networks, and Power (Madison, University of Wisconsin Press: 244-271).
Morris, Norma, 2000. “Science Policy in Action: Policy and the Researcher”, Minerva, 38 (4): 425-451.
Morris, Norma, 2002. “The Developing Role of Departments”, Research Policy, 31 (5): 817-833.
Morris, Norma, 2003. “Academic Researchers as ‘Agents’ of Science Policy”, Science and Public Policy, 30 (5): 359-370.
Moscati, Roberto, 2008. “Transforming a Centralised System of Higher Education: Reform and Academic Resistance in Italy”, in Amaral, A., Bleiklie, I. and Musselin, C., eds., From Governance to Identity: A Festschrift for Mary Henkel (Dordrecht, Springer: 131-137).
Murray, Fiona, 2010. “The Oncomouse that Roared: Hybrid Exchange Strategies as a Source of Distinction at the Boundary of Overlapping Institutions”, American Journal of Sociology, 116 (2): 341-388.
Murray, Fiona, Aghion, Philippe, Dewatripont, Mathias, Kolev, Julian and Stern, Scott, 2009. Of Mice and Academics: Examining the Effect of Openness in Innovation (Cambridge, National Bureau of Economic Research).
Murray, Fiona and Stern, Scott, 2007. “Do Formal Intellectual Property Rights Hinder the Free Flow of Scientific Knowledge? An Empirical Test of the Anti-Commons Hypothesis”, Journal of Economic Behavior & Organization, 63 (4): 648-687.
Musselin, Christine, 2006. “Are Universities Specific Organisations?”, in Krücken, G., Castor, C., Kosmützky, A. and Torka, M., eds., Towards a Multiversity? Universities between Global Trends and National Traditions (Bielefeld, Transcript Verlag: 63-84).
Musselin, Christine, 2013. “How Peer Review Empowers the Academic Profession and University Managers: Changes in Relationships between the State, Universities and the Professoriate”, Research Policy, 42 (5): 1165-1173.
Musselin, Christine, 2014. “Empowerment of French Universities by Funding and Evaluation Agencies”, in Whitley, R. and Gläser, J., eds., Organizational Transformation and Scientific Change: The Impact of Institutional Restructuring on Universities and Intellectual Innovation (Bingley, Emerald Group Publishing Limited: 51-76).
Myers, Greg, 1990. Writing Biology: Texts and the Social Construction of Scientific Knowledge (Madison, The University of Wisconsin Press).
Neufeld, Jörg and Hornbostel, Stefan, 2012. “Funding Programmes for Young Scientists––Do the ‘Best’ Apply?”, Research Evaluation, 21 (3): 1-10.
Neufeld, Jörg, Huber, Nathalie and Wegner, Antje, 2013. “Peer Review-Based Selection Decisions in Individual Research Funding, Applicants’ Publication Strategies and Performance: The Case of the ERC Starting Grants”, Research Evaluation, 22 (4): 237-247.
Neufeld, Jörg and von Ins, Markus, 2011. “Informed Peer Review and Uninformed Bibliometrics?”, Research Evaluation, 20 (1): 31-46.
Nowotny, Helga, 2007. “How Many Policy Rooms are There? Evidence-Based and Other Kinds of Science Policies”, Science, Technology & Human Values, 32 (4): 479-490.
Osuna, Carmen, Cruz-Castro, Laura and Sanz-Menéndez, Luis, 2011. “Overturning some Assumptions about the Effects of Evaluation Systems on Publication Performance”, Scientometrics, 86: 575-592.
Owen-Smith, Jason, 2001. “Managing Laboratory Work through Skepticism: Processes of Evaluation and Control”, American Sociological Review, 66 (3): 427-452.
Owen-Smith, Jason and Powell, Walter W., 2001. “To Patent or Not: Faculty Decisions and Institutional Success at Technology Transfer”, The Journal of Technology Transfer, 26 (1-2): 99-114.
Packer, Kathryn and Webster, Andrew, 1996. “Patenting Culture in Science: Reinventing the Scientific Wheel of Credibility”, Science, Technology, & Human Values, 21 (4): 427-453.
Panofsky, Aaron, 2011. “Generating Sociability to Drive Science: Patient Advocacy Organizations and Genetics Research”, Social Studies of Science, 41 (1): 31-57.
Paradeise, Catherine, Reale, Emanuela and Goastellec, Gaële, 2009. “A Comparative Approach to Higher Education Reforms in Western Europe”, in Paradeise, C., Reale, E., Bleiklie, I. and Ferlie, E., eds., University Governance––Western European Comparative Perspectives (Dordrecht, Springer Science and Business Media: 197-245).
Pavitt, Keith, 2001. “Public Policies to Support Basic Research: What Can the Rest of the World Learn from US Theory and Practice? (And What They Should Not Learn)”, Industrial and Corporate Change, 10 (3): 761-779.
Pelz, Donald C. and Andrews, Frank M., 1966. Scientists in Organizations: Productive Climates for Research and Development (New York, Wiley).
Penders, Bart, Verbakel, John M. A. and Nelis, Annemiek, 2009. “The Social Study of Corporate Science: A Research Manifesto”, Bulletin of Science, Technology & Society, 29 (6): 439-446.
Perkmann, Markus, Tartari, Valentina, McKelvey, Maureen, Autio, Erkko, Broström, Anders, D’Este, Pablo, Fini, Riccardo, Geuna, Aldo, Grimaldi, Rosa, Hughes, Alan, Krabel, Stefan, Kitson, Michael, Llerena, Patrick, Lissoni, Francesco, Salter, Ammon and Sobrero, Maurizio, 2013. “Academic Engagement and Commercialisation: A Review of the Literature on University-Industry Relations”, Research Policy, 42 (2): 423-442.
Pinch, Trevor, 1986. Confronting Nature: The Sociology of Solar Neutrino Detection (Dordrecht, Reidel).
Pittens, Carina A. C. M., Elberse, Janneke E., Visse, Merel, Abma, Tineke A. and Broerse, Jacqueline E. W., 2014. “Research Agendas Involving Patients: Factors that Facilitate or Impede Translation of Patients’ Perspectives in Programming and Implementation”, Science and Public Policy.
Polich, Ginger R., 2011. “Rare Disease Patient Groups as Clinical Researchers”, Drug Discovery Today, 17 (3/4): 167-172.
Pols, Jeannette, 2013. “Knowing Patients: Turning Patient Knowledge into Science”, Science, Technology & Human Values. doi: 10.1177/0162243913504306.
Rabeharisoa, Vololona, Moreira, Tiago and Akrich, Madeleine, 2014. “Evidence-Based Activism: Patients’, Users’ and Activists’ Groups in Knowledge Society”, BioSocieties, 9 (2): 111-128.
Rafols, Ismael, Leydesdorff, Loet, O’Hare, Alice, Nightingale, Paul and Stirling, Andy, 2012. “How Journal Rankings can Suppress Interdisciplinary Research: A Comparison between Innovation Studies and Business & Management”, Research Policy, 41 (7): 1262-1282.
Rappert, Brian and Webster, Andrew, 1997. “Regimes of Ordering: The Commercialization of Intellectual Property in Industrial-Academic Collaborations”, Technology Analysis & Strategic Management, 9 (2): 115-130.
Remington, John A., 1988. “Beyond Big Science in America: The Binding of Inquiry”, Social Studies of Science, 18 (1): 45-72.
Rheinberger, Hans-Jörg, 1994. “Experimental Systems: Historiality, Narration, and Deconstruction”, Science in Context, 7 (1): 65-81.
Rheinberger, Hans-Jörg, 1996. “Comparing Experimental Systems: Protein Synthesis in Microbes and in Animal Tissue at Cambridge (Ernest F. Gale) and at the Massachusetts General Hospital (Paul J. Zamecnik), 1945-1960”, Journal of the History of Biology, 29: 387-416.
Rip, Arie, 1994. “The Republic of Science in the 1990s”, Higher Education, 28: 3-32.
Rip, Arie, 1997. “A Cognitive Approach to Relevance of Science”, Social Science Information, 36 (4): 615-640.
Rip, Arie, 1999. “STS in Europe”, Science Technology & Society, 4 (1): 73-80.
Rip, Arie, 2002. “Regional Innovation Systems and the Advent of Strategic Science”, The Journal of Technology Transfer, 27 (1): 123-131.
Rip, Arie and Voß, Jan-Peter, 2013. “Umbrella Terms as a Conduit in the Governance of Emerging Science and Technology”, Science, Technology & Innovation Studies, 9 (2): 39-60.
Rushforth, Alexander and De Rijcke, Sarah, 2015. “Accounting for Impact? The Journal Impact Factor and the Making of Biomedical Research in the Netherlands”, Minerva, 53 (2): 117-139.
Schimank, Uwe, 2005. “‘New Public Management’ and the Academic Profession: Reflections on the German Situation”, Minerva, 43: 361-376.
Schneider, Jesper W., Aagaard, Kaare and Bloch, Carter W., 2014. “What Happens when Funding is Linked to (Differentiated) Publication Counts? New Insights from an Evaluation of the Norwegian Publication Indicator”, in Noyons, E., ed., Context Counts: Pathways to Master Big and Little Data. Proceedings of the Science and Technology Indicators Conference 2014 Leiden (Leiden, Universiteit Leiden: 543-550).
Shinn, Terry and Lamy, Erwan, 2006. “Paths of Commercial Knowledge: Forms and Consequences of University-Enterprise Synergy in Scientist-Sponsored Firms”, Research Policy, 35 (10): 1465-1476.
Shove, Elizabeth, 2003. “Principals, Agents and Research Programmes”, Science and Public Policy, 30 (5): 371-381.
Sismondo, Sergio, 2009. “Ghosts in the Machine: Publication Planning in the Medical Sciences”, Social Studies of Science, 39 (2): 171-198.
Smith, Katherine, 2010. “Research, Policy and Funding––Academic Treadmills and the Squeeze on Intellectual Spaces”, The British Journal of Sociology, 61 (1): 176-195.
Smith, Katherine E., 2014. “The Politics of Ideas: The Complex Interplay of Health Inequalities Research and Policy”, Science and Public Policy, 41 (5): 561-574.
Stöckelová, Tereza, 2012. “Immutable Mobiles Derailed: STS, Geopolitics, and Research Assessment”, Science, Technology & Human Values, 37 (2): 286-311.
Tousignant, Noemi, 2013. “Broken Tempos: Of Means and Memory in a Senegalese University Laboratory”, Social Studies of Science. doi: 10.1177/0306312713482187.
Travis, G. D. L. and Collins, H. M., 1991. “New Light on Old Boys: Cognitive and Institutional Particularism in the Peer Review System”, Science, Technology, & Human Values, 16 (3): 322-341.
Trousset, Sarah, 2014. “Current Trends in Science and Technology Policy Research: An Examination of Published Works from 2010-2012”, Policy Studies Journal, 42: S87-S117.
Van Arensbergen, Pleun, van der Weijden, Inge and van den Besselaar, Peter, 2014. “The Selection of Talent as a Group Process: A Literature Review on the Social Dynamics of Decision Making in Grant Panels”, Research Evaluation, 23 (4): 298-311.
Van den Besselaar, Peter, 2000. “Communication between Science and Technology Studies Journals: A Case Study in Differentiation and Integration in Scientific Fields”, Scientometrics, 47 (2): 169-193.
Van den Besselaar, Peter, 2001. “The Cognitive and Social Structure of STS”, Scientometrics, 51 (2): 441-460.
Van den Besselaar, Peter and Leydesdorff, Loet, 2009. “Past Performance, Peer Review and Project Selection: A Case Study in the Social and Behavioral Sciences”, Research Evaluation, 18 (4): 273-288.
Van der Meulen, Barend, 1998. “Science Policies as Principal-Agent Games: Institutionalization and Path Dependency in the Relation between Government and Science”, Research Policy, 27: 397-414.
Van Lente, Harro and Rip, Arie, 1998. “The Rise of Membrane Technology: From Rhetorics to Social Reality”, Social Studies of Science, 28 (2): 221-254.
Vaughan, Diane, 1999. “The Role of the Organization in the Production of Techno-Scientific Knowledge”, Social Studies of Science, 29 (6): 913-943.
Velden, Theresa, 2013. “Explaining Field Differences in Openness and Sharing in Scientific Communities”, Proceedings of the 2013 Conference on Computer Supported Cooperative Work (San Antonio, Texas, ACM: 445-458).
Von Hippel, Eric, 1987. “Cooperation between Rivals: Informal Know-How Trading”, Research Policy, 16 (6): 291-302.
Wagner, Caroline S. and Alexander, Jeffrey, 2013. “Evaluating Transformative Research Programmes: A Case Study of the NSF Small Grants for Exploratory Research Programme”, Research Evaluation, 22: 187-197.
Walsh, John P., Cohen, Wesley M. and Cho, Charlene, 2007. “Where Excludability Matters: Material Versus Intellectual Property in Academic Biomedical Research”, Research Policy, 36 (8): 1184-1203.
Webster, Andrew, 2007. “Crossing Boundaries: Social Science in the Policy Room”, Science, Technology & Human Values, 32 (4): 458-478.
Westerheijden, Don F., De Boer, Harry and Enders, Jürgen, 2009. “Netherlands: An ‘Echternach’ Procession in Different Directions: Oscillating Steps towards Reform”, in Paradeise, C., Reale, E., Bleiklie, I. and Ferlie, E., eds., University Governance: Western European Comparative Perspectives (Dordrecht, Springer: 103-125).
Whitley, Richard, 2000 [1984]. The Intellectual and Social Organization of the Sciences (Oxford, Clarendon Press).
Whitley, Richard, 2008. “Universities as Strategic Actors: Limitations and Variations”, in Engwall, L. and Weaire, D., eds., The University in the Market (Stockholm, Wenner-Gren Foundation: 23-37).
Whitley, Richard, 2010. “Reconfiguring the Public Sciences: The Impact of Governance Changes on Authority and Innovation in Public Science Systems”, in Whitley, R., Gläser, J. and Engwall, L., eds., Reconfiguring Knowledge Production (Oxford, Oxford University Press: 3-47).
Whitley, Richard, 2014. “How do Institutional Changes Affect Scientific Innovations? The Effects of Shifts in Authority Relationships, Protected Space, and Flexibility”, in Whitley, R. and Gläser, J., eds., Organizational Transformation and Scientific Change: The Impact of Institutional Restructuring on Universities and Intellectual Innovation (Bingley, Emerald Group Publishing Limited: 367-406).
Whitley, Richard and Gläser, Jochen, eds., 2007. The Changing Governance of the Sciences: The Advent of Research Evaluation Systems (Dordrecht, Springer).
Whitley, Richard and Gläser, Jochen, 2014. “The Impact of Institutional Reforms on the Nature of Universities as Organisations”, in Whitley, R. and Gläser, J., eds., Organizational Transformation and Scientific Change: The Impact of Institutional Restructuring on Universities and Intellectual Innovation (Bingley, Emerald Group Publishing Limited: 19-49).
Woodhouse, Edward, Hess, David, Breyman, Steve and Martin, Brian, 2002. “Science Studies and Activism: Possibilities and Problems for Reconstructivist Agendas”, Social Studies of Science, 32 (2): 297-319.
Zomer, Arend H., Jongbloed, Ben W. A. and Enders, Jürgen, 2010. “Do Spin-Offs Make the Academics’ Heads Spin?”, Minerva, 48 (3): 331-353.
Table 1. Findings on the impact of four forms of university-industry links on the contents of research at three levels of aggregation.