
Whose open science are we talking about? From open science in psychology to open science in applied linguistics

Published online by Cambridge University Press:  27 July 2023

Meng Liu
Affiliation: Beijing Foreign Studies University, Beijing, China


Type: Christopher Brumfit Essay Prize 2022 Winner
Copyright © The Author(s), 2023. Published by Cambridge University Press

Comments from the reviewing panel:

“An extremely timely, thoughtful, and well-informed discussion in applied linguistics and beyond concerning open science. The essay seeks to engage with a broad audience and to dispel some of the persisting myths around open science and would be read fruitfully by many in the field”.

1. Introduction

The momentum for open science is on the rise in applied linguistics. In 2022 alone, there were already several exciting developments: Open Science in Applied Linguistics (Plonsky, in press), the first edited volume on this topic, was being prepared for publication, with several postprints shared online; Open Applied Linguistics (http://openappliedlinguistics.org/), a newly established research network affiliated with the International Association of Applied Linguistics (AILA), held a two-day symposium to promote open science in applied linguistics (Liu et al., 2023); following the symposium, AILA officially endorsed the open science guidelines and example practices statement by Open Applied Linguistics (Liu & Chong, 2022), formally recognising the value of open science to our field; the Postprint Pledge, an initiative calling for applied linguists to share accepted manuscripts online, was launched (Al-Hoorie & Hiver, 2023); and two calls for papers were circulated: one by Studies in Second Language Acquisition on replication in second language research, and one by Language Testing on open science practices in language testing and assessment. As we entered 2023, the momentum continued to grow: in April, Language Learning published a conceptual review (Marsden & Morgan-Short, 2023), along with open peer commentaries, examining whether and why open research practices are the future for the study of language learning; and the 56th annual conference of the British Association for Applied Linguistics (BAAL) is set for August, with a theme centred on “opening up applied linguistics”.

These developments show that open science is becoming increasingly salient in our field. Broadly speaking, open science refers to various movements and practices aimed at making research and related scholarly activities, such as teaching, training, and collaboration, more open, inclusive, and transparent for the benefit of the academic community and beyond (UNESCO, 2021). Nonetheless, misconceptions about open science still abound. For instance, some believe open science is a synonym for open access, while others see open science as exclusive to psychology and irrelevant to applied linguistics. The former is relatively easy to debunk – by the end of this essay, the reader should already see open science as a much broader notion than open access. The latter, however, is more challenging to debunk, because open science in applied linguistics is indeed closely intertwined with open science in psychology. In fact, at the moment, most contributions to open science in our field come from psychology-adjacent researchers – mostly quantitative or experimental researchers who have been exposed to open science in psychology in one way or another, myself included. This phenomenon itself is perhaps not unexpected. After all, to speak of open science, one can hardly get around the contributions of psychology. Over the past decade, psychologists have spearheaded many revolutionary initiatives for open science that are not only transforming psychology but also extending their influence to neighbouring fields. Anyone who has been exposed to the discourse of open science in recent years will have heard terms such as “preregistration” or “registered reports”, most of which originated in psychology as a response to the “reproducibility crisis” that reached its peak in the early 2010s (Parsons et al., 2022).

To be accompanied by psychology, a neighbouring field that is making huge strides toward open science, is both a blessing and a “curse”. It is a blessing in the sense that we have a rich source of inspiration to draw from to promote changes towards openness without having to “reinvent the wheel”. It can also be a “curse”, however, in the sense that the discourse, tools, and resources were created by psychologists and for psychology, and may or may not reflect the priorities and needs of applied linguists. I have enclosed the term “curse” in quotation marks to clarify that I am not implying a fatalistic doom associated with the impact of psychology. Rather, I use the term to highlight the complexity of open science, and I call for critical reflection on not only the benefits but also the potential limitations of open science in psychology, with the ultimate goal of developing an open science by applied linguists and for applied linguistics.

2. Psychology's crisis of confidence

2.1 The reproducibility crisis

To better appreciate psychology's dedication to the open science movement, it is important to go back in history to examine its point of origin – what is commonly referred to as the “reproducibility crisis”¹ (also known as the “replicability crisis” or “replication crisis”) in psychology (Parsons et al., 2022).

Psychology as a field began to seriously reckon with the reproducibility of its research around 2010. At that time, a series of unfortunate and scandalous events generated severe doubt about the reliability of published findings (Świątkowski & Dompnier, 2017): in 2011, a top-tier social psychology journal published a dubious paper on “precognition”, which claimed evidentiary support for seeing into the future. Worse yet, the same journal later refused to publish replication studies that showed evidence to the contrary. Social priming, a hot topic in social psychology concerning many novel and surprising effects (e.g., being primed with “old” stereotypes causes people to walk more slowly), was also confronted with failures to replicate, culminating in an open letter from Nobel Laureate Daniel Kahneman calling for greater efforts at replication. There was also the scandal of Diederik Stapel, a prominent professor of social psychology at a prestigious university, who was found to have fabricated data for at least 30 publications.

Although these events primarily involved social psychology, the crisis is not exclusive to that subfield. In fact, a large-scale study (Open Science Collaboration, 2015) examined the replicability of psychological research in general and found that over 60% of the results failed to replicate.

2.2 Questionable research practices and false positives

The rather grim picture of unreliable findings across the board deviated dramatically from the common vision of science as self-correcting and cumulative. Psychologists started to reconsider the soundness of conventional research practices and scientific standards. Scientific integrity was placed under the spotlight, with attention directed not only to extreme forms of scientific misconduct (e.g., fraud) but also to more subtle and, some may consider, greyer areas of misconduct – questionable research practices (QRPs). While QRPs may be committed in good faith or appear innocuous, their consequences can be quite costly: the literature is flooded with false positive findings that mislead future directions and waste limited resources (Simmons et al., 2011). P-hacking and HARKing (hypothesising after the results are known; Kerr, 1998) are perhaps the two most typical QRPs. P-hacking (Parsons et al., 2022) refers to exploiting practices that inflate the probability of obtaining a statistically significant result. For example, one may run multiple analyses with different combinations of variables and report only the statistically significant results. HARKing is defined by Kerr (1998) as presenting a post hoc hypothesis (i.e., one generated from the results of analysis) as if it had been formulated a priori, which creates the illusion that the research was confirmatory when in fact it was exploratory.
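How much undisclosed analytic flexibility inflates false positives is easy to demonstrate with a small simulation (an illustrative sketch of my own, not drawn from any of the studies cited here): if a researcher measures five outcome variables under a true null effect and reports a “finding” whenever any one of them reaches p < .05, the effective false-positive rate climbs from the nominal 5% to roughly 1 − 0.95⁵ ≈ 23%.

```python
import math
import random

random.seed(42)

def p_value(group_a, group_b):
    """Two-sample z-test p-value (normal approximation, adequate for n >= 30)."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a = sum(group_a) / n_a
    mean_b = sum(group_b) / n_b
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (n_b - 1)
    z = (mean_a - mean_b) / math.sqrt(var_a / n_a + var_b / n_b)
    # two-tailed p from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def one_study(n=30, n_outcomes=5):
    """Simulate a study with NO true effect, measuring several outcomes.

    A p-hacking researcher reports a 'significant finding' if ANY outcome
    yields p < .05, instead of committing to one outcome in advance."""
    p_values = []
    for _ in range(n_outcomes):
        control = [random.gauss(0, 1) for _ in range(n)]
        treatment = [random.gauss(0, 1) for _ in range(n)]  # same population: null is true
        p_values.append(p_value(control, treatment))
    return min(p_values) < 0.05

n_studies = 2000
false_positives = sum(one_study() for _ in range(n_studies))
rate = false_positives / n_studies
print(f"Nominal alpha: 0.05; observed false-positive rate: {rate:.2f}")
# With 5 independent outcomes, theory predicts roughly 1 - 0.95**5, about 0.23
```

The simulation is deliberately minimal; with additional flexible choices (optional stopping, covariate selection, outlier exclusion), Simmons et al. (2011) show the rate climbs far higher still.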

3. The blessings of the credibility revolution

The crisis of confidence may be seen as a blessing in disguise for psychologists, with profound positive effects on the field in the long term. It highlighted the essential need to enhance rigour and transparency within the scientific process. Consequently, psychologists have actively led the charge in ensuring that research is conducted, documented, and shared in a manner that upholds scientific integrity. Below, I review three of the most prominent initiatives and changes spurred by the reproducibility crisis, all falling under what has been collectively referred to as the credibility revolution (Vazire, 2018). Although psychologists may not have created all of these practices, the credibility revolution has certainly played a key role in elevating them to new norms and standards. Moreover, it is also a blessing for neighbouring fields, including our own, as many of these initiatives are already benefiting applied linguistics.

3.1 Preregistration

Preregistration is the practice of specifying a research plan prior to data collection and analysis (e.g., Nosek et al., 2018). It is created as a time-stamped record in an independent registry such as the Open Science Framework (OSF) to declare commitment to the specified analytic steps, free from the influence of research outcomes. Preregistration is intended to reduce the potential for researchers to engage in questionable research practices such as p-hacking and HARKing. The necessity and benefit of minimising bias in hypothesis-driven research are certainly not limited to psychology, and arguments in favour of this initiative have been made in our field too (Huensch, in press; Mertzen et al., 2021).

A special case of preregistration is the registered report, a type of scholarly publication in which study plans are reviewed and approved before the research is conducted (Chambers & Tzavella, 2022). In line with preregistration's purpose of reducing bias, registered reports go a step further to remove the incentive for positive or novel findings: the publishability of an article is assessed solely on the basis of its research question, theory, and design. First launched in Cortex in 2012, registered reports have been adopted by over 300 journals across a diverse range of fields, including applied linguistics: Language Learning (Marsden et al., 2018), Bilingualism: Language and Cognition, and Second Language Research have taken the lead in embracing registered reports as an article type.

3.2 Replication

Replication used to be, and sometimes still is, mischaracterised as the “boring, rote, clean-up work of science” (Nosek & Errington, 2020, p. 7). The credibility revolution has certainly been a major driving force in promoting replication to the top of psychology's research agenda (Makel et al., 2012). It is worth noting that our field appears to have developed an interest in replication in parallel with psychology. Back in the early 2010s, there was already a monograph dedicated to replication research in applied linguistics (Porte, 2012), and calls for a more central role for replication on our research agenda were issued even earlier (Polio & Gass, 1997; see Porte & Richards, 2012, for a discussion from the perspectives of both quantitative and qualitative research). Over the past decade, our field has witnessed positive developments regarding replication: more journals now accept replication as an article type (e.g., Language Teaching, Studies in Second Language Acquisition, Applied Psycholinguistics); training workshops and books (e.g., Porte & McManus, 2019) help reduce the methodological barriers to replication; awards have been created to further incentivise replication efforts (e.g., the IRIS replication award; Marsden et al., 2016); and a generally positive attitude towards replication has been observed among applied linguists (McManus, 2022). While the uptake of replication at present may still fall short of expectations, the rising momentum of open science is likely to boost further engagement.

3.3 Open data and materials

Sharing data and materials is an important step forward in promoting transparency and reproducibility, as it allows other researchers to evaluate the data and materials used to draw the conclusions in a research article. The openness of materials is also a minimum requirement for direct or close replication. To incentivise these open practices, journals now offer open science badges, including in our own field (e.g., Language Awareness, Modern Language Journal). Over the years, digital infrastructures have been built to facilitate open sharing; for instance, the OSF, one of the most influential and comprehensive platforms in support of open science, was founded in 2013. It is worth noting that our field invested in open data and materials very early on, with IRIS (Instruments and Data for Research in Language Studies; Marsden et al., 2016) established in 2011.

4. The “curse” of a biased discourse

Indeed, amazing progress has taken place as a result of the credibility revolution, which has transformed the way research is conducted. While the blessings of the credibility revolution inspire hope and confidence, the current discourse on open science in psychology also comes with its “curse”. As is evident from the major developments outlined above, reproducibility occupies the central position: initiatives and practices such as preregistration, data sharing, and replication all revolve around reproducibility (and replicability) as the central concern. Of course, the fact that psychologists have prioritised reproducibility over the past decade should not come as a surprise; after all, it was the “reproducibility crisis” that started the movement in the first place. Nonetheless, I argue that this preoccupation with reproducibility as a central value comes at a price – the limitation of open science to a narrow set of methodologies and researchers. It is a curse both for psychologists and for us, as it contributes to the misconception of open science as exclusive to some kinds of research and irrelevant to others.

4.1 The science in psychology's open science

In the context of open science in psychology, reproducibility is typically presented as a defining feature of science. Readers who have flipped through papers on this topic will have encountered a recurring theme, often expressed as: reproducibility is the cornerstone of science. To name one example, a high-impact publication in Science by a large team of psychological researchers begins with the declaration that “reproducibility is a defining feature of science” (Open Science Collaboration, 2015, p. 943). Statements in a similar vein are routinely presented in the literature, to the extent that rarely would a psychologist raise an eyebrow on seeing them. Granted, the prevalence of this characterisation of science – as embodied in reproducibility – is largely in line with the dominance of the hypothetico-deductive model of the scientific method in psychology (e.g., Munafò et al., 2017) and a reflection of psychology's positivist roots and the dominance of experimental and quantitative researchers in the field (Bennett, 2021; Pownall et al., 2023).

While it is perfectly understandable why the discourse has developed in this particular manner, tensions remain between reproducibility on the one hand and some, if not all, qualitative research paradigms on the other. In contrast to quantitative or positivist science, where a central concern is minimising bias, qualitative researchers who subscribe to constructivist, interpretive, or critical paradigms, for example, embrace researcher subjectivity and the contextual embeddedness of research outcomes not as a nuisance to be removed but as a resource to be reflected on and learned from (e.g., Braun & Clarke, 2019; Gough & Madill, 2012). In this sense, reproducibility is neither a meaningful concept nor a viable criterion of quality, especially for researchers working with highly idiosyncratic and situated findings (Leonelli, 2018). Placing non-universal or highly paradigm-exclusive values at the very centre of open science will inevitably limit its potential and discourage researchers who do not share those values from joining the conversation – a situation that is neither desirable nor necessary. In fact, in a recent survey of applied linguists’ perceptions of open science (Liu & De Cat, in press), the concern of a “quantitative bias” (p. 19) in open science has already been voiced by some qualitative researchers.

5. Looking to the future

While the “curse” of the biased discourse may be an open issue that psychologists need to grapple with in the years to come, we as applied linguists whose discourse on open science is still emerging have the opportunity to be proactive. It is essential that we further expand our understanding of open science by incorporating a broader range of perspectives that include, but are not limited to, psychology.

5.1 A new era for open science: UNESCO's Recommendation

The publication of the UNESCO Recommendation on Open Science in 2021 signifies a major milestone. Generated through an inclusive, transparent, and multistakeholder consultative process, the Recommendation is an important step towards establishing global frameworks and standards for open science.

The UNESCO Recommendation stands out from previous efforts in multiple ways and thereby symbolises the beginning of a new era for open science. Foremost, the Recommendation's global impact is unparalleled: it is expected to influence the national laws and practices of the United Nations’ 193 member states, which further consolidates open science's status as the new norm for research. Furthermore, the Recommendation merits recognition for its comprehensive definition of open science, which explicitly frames open science as an inclusive concept relevant to all disciplines – “including basic and applied sciences, natural and social sciences and the humanities” (UNESCO, 2021, p. 7). In line with this inclusive definition, the Recommendation takes a holistic approach to open science by outlining an extensive blueprint that encompasses key pillars, values, and principles (Table 1).

Table 1. Open science: Key pillars, values and principles (extracted from UNESCO, 2021, pp. 11–19)

The multifaceted nature of open science, as well as its potential to revolutionise scientific research practices, is evident. From the four key pillars we can already see that the scope of open science extends well beyond the credibility revolution, whose initiatives concentrated primarily on the first two pillars – open scientific knowledge and open science infrastructures. The third pillar, open engagement of societal actors, through approaches such as participatory science, reminds us of the need to actively involve a diverse range of contributors beyond traditional academia.

The fourth pillar, open dialogue with other knowledge systems, is particularly worth highlighting in the context of this essay. The recognition of dialogue with indigenous peoples, marginalised scholars, and local communities as a key pillar attests, among other things, to the crucial importance of epistemic diversity for open science. Relatedly, “flexibility” is also underscored as a core principle (Table 1), which explicitly cautions against “a one-size-fits-all way of practicing open science” (UNESCO, 2021, p. 19). The Recommendation's broad envisioning holds significant implications, as it rejects a simplistic and monolithic understanding of open science in favour of a more inclusive and nuanced one that values the needs and contributions of diverse research paradigms.

5.2 Harnessing the UNESCO Recommendation and charting our own course

Standing at the beginning of a new era for open science, we have the opportunity to harness the UNESCO Recommendation and chart our own course. For starters, we could initiate dialogues that embody the Recommendation's inclusive vision of open science, calling for open communication between researchers from different paradigms and encouraging those who have not yet contributed to the open science discourse to share their perspectives. Importantly, we must acknowledge the complexity of bridging the epistemological differences across paradigms, as these differences may defy any easy consensus. Still, by fostering open dialogues that represent the diverse epistemological positions within our field, we can work towards an understanding of open science that is well-contextualised in the unique characteristics and priorities of applied linguistics. As an interdisciplinary field underpinned by a variety of paradigms and methodologies, we are well-positioned to contribute to an inclusive understanding of open science, which will also strengthen and enrich our contribution to the broader open science movement.

The Recommendation can also serve as a critical tool for us to reflect on the distribution of our efforts. For instance, we can see from Table 1 that efforts in our field, as in psychology, have mainly focused on the first set of principles – transparency, scrutiny, critique, and reproducibility. Without a doubt, continued investment in these areas is warranted, especially considering that many open issues in these areas have not yet found sustainable solutions (see Marsden & Morgan-Short, 2023, for a comprehensive review and suggestions). At the same time, the Recommendation serves as a reminder to address areas that are equally important but have not yet been sufficiently engaged by our community; collectively, we could devise plans to foster a more balanced development in the long term.

While the UNESCO Recommendation offers promising opportunities, it is equally important to recognise the challenges ahead. Undoubtedly, the Recommendation opens the door to a more inclusive open science, but the practical complexities arising from the ever-evolving landscape of science, as well as the fast-changing and unpredictable world we live in today, pose significant challenges that cannot be underestimated. As with any reform, however well-intended, there is always the risk of unintended consequences, and we must remain vigilant that our efforts to promote and implement open science do not inadvertently perpetuate or even exacerbate existing biases and inequities (Dutta et al., 2021). It remains to be seen how effectively the principles of the Recommendation can and will be implemented to redress the existing biases in open science.

Last but not least, it is important that we stay open to evolving our views on open science as new developments continue to shape the research landscape, which will likely require us to chart beyond the UNESCO Recommendation. A notable recent example is the rapid advancement of artificial intelligence (e.g., large language models), which presents new opportunities and challenges for research conduct and dissemination and prompts us to reimagine open science's scope and practice. As we collectively navigate the complexities of open science in applied linguistics, fully embracing an ethos of openness and actively engaging with new possibilities will be crucial in bringing us closer to a more open and inclusive future.

6. Conclusion

Psychologists have spearheaded many revolutionary initiatives for open science that are not only transforming psychology but also influencing neighbouring fields such as applied linguistics. Beyond doubt, psychology's credibility revolution is a blessing that has benefited our field in more than one way, but it is also important to reflect critically on the potential bias in the discourse of open science in psychology and the attendant risk of excluding and marginalising alternative paradigms. As UNESCO ushers in a new era of open science, it is time for us, a field that takes pride in its rich epistemic diversity, to chart our own course for open science – by applied linguists, for applied linguistics.

Meng Liu is Assistant Professor at the School of English and International Studies, Beijing Foreign Studies University and Associate at Cambridge Digital Humanities, University of Cambridge. Meng earned her Ph.D. from the University of Cambridge, where her research focused on the motivation of Chinese students to learn multiple foreign languages. Meng is interested in the psychology of language learning and research methods. She was a Methods Fellow at Cambridge Digital Humanities (2021–2022) and a Data Champion at the University of Cambridge (2021–2022). She co-founded and co-convenes Open Applied Linguistics (http://openappliedlinguistics.org/), an AILA Research Network dedicated to the promotion of open scholarship in applied linguistics. Her work was recognised with an Honourable Mention for the 2022 Faculty of Education Doctoral Student Excellence Award at the University of Cambridge. Meng has published in journals such as Language Teaching, System, Journal of Multilingual and Multicultural Development, among other venues.

Footnotes

1 While reproducibility and replicability are often used interchangeably, they have different emphases: reproducibility refers to verifying study results with the same data and methods whereas replicability refers to obtaining consistent results in a new study using the same or similar methods and different data.

References

Al-Hoorie, A. H., & Hiver, P. (2023). The postprint pledge – toward a culture of researcher-driven initiatives: A commentary on “(why) are open research practices the future for the study of language learning?”. Language Learning, 1–4. doi:10.1111/lang.12577
Bennett, E. A. (2021). Open science from a qualitative, feminist perspective: Epistemological dogmas and a call for critical examination. Psychology of Women Quarterly, 45(4), 448–456. doi:10.1177/03616843211036460
Braun, V., & Clarke, V. (2019). Reflecting on reflexive thematic analysis. Qualitative Research in Sport, Exercise and Health, 11(4), 589–597. doi:10.1080/2159676X.2019.1628806
Chambers, C. D., & Tzavella, L. (2022). The past, present and future of registered reports. Nature Human Behaviour, 6(1), 29–42. doi:10.1038/s41562-021-01193-7
Dutta, M., Ramasubramanian, S., Barrett, M., Elers, C., Sarwatay, D., Raghunath, P., Kaur, S., Dutta, D., Jayan, P., Rahman, M., Tallam, E., Roy, S., Falnikar, A., Johnson, G. M., Mandal, I., Dutta, U., Basnyat, I., Soriano, C., Pavarala, V., … Zapata, D. (2021). Decolonizing open science: Southern interventions. Journal of Communication, 71(5), 803–826. doi:10.1093/joc/jqab027
Gough, B., & Madill, A. (2012). Subjectivity in psychological science: From problem to prospect. Psychological Methods, 17(3), 374–384. doi:10.1037/a0029313
Huensch, A. (in press). Open science and preregistration. In Plonsky, L. (Ed.), Open science in applied linguistics (pp. 1–22). John Benjamins.
Kerr, N. L. (1998). HARKing: Hypothesizing after the results are known. Personality and Social Psychology Review, 2(3), 196–217. doi:10.1207/s15327957pspr0203_4
Leonelli, S. (2018). Rethinking reproducibility as a criterion for research quality. In Fiorito, L., Scheall, S., & Suprinyak, C. E. (Eds.), Research in the history of economic thought and methodology (Vol. 36, pp. 129–146). Emerald Publishing Limited. doi:10.1108/S0743-41542018000036B009
Liu, M., & Chong, S. W. (2022). Open Applied Linguistics Research Network Statement. Retrieved from https://openappliedlinguistics.org/ren-statement-on-os-in-al
Liu, M., Chong, S. W., Marsden, E., McManus, K., Morgan-Short, K., Al-Hoorie, A. H., Plonsky, L., Bolibaugh, C., Hiver, P., Winke, P., Huensch, A., & Hui, B. (2023). Open scholarship in applied linguistics: What, why, and how. Language Teaching, 56(3), 432–437. doi:10.1017/S0261444822000349
Liu, M., & De Cat, C. (in press). Open science in applied linguistics: A preliminary survey. In Plonsky, L. (Ed.), Open science in applied linguistics. John Benjamins.
Makel, M. C., Plucker, J. A., & Hegarty, B. (2012). Replications in psychology research: How often do they really occur? Perspectives on Psychological Science, 7(6), 537–542. doi:10.1177/1745691612460688
Marsden, E., Mackey, A., & Plonsky, L. (2016). The IRIS repository of instruments for research into second languages: Advancing methodology and practice. In Mackey, A., & Marsden, E. (Eds.), Advancing methodology and practice: The IRIS repository of instruments for research into second languages (pp. 1–21). Routledge.
Marsden, E., & Morgan-Short, K. (2023). (Why) are open research practices the future for the study of language learning? Language Learning, 1–44. doi:10.1111/lang.12568
Marsden, E., Morgan-Short, K., Trofimovich, P., & Ellis, N. C. (2018). Introducing registered reports at Language Learning: Promoting transparency, replication, and a synthetic ethic in the language sciences. Language Learning, 68(2), 309–320. doi:10.1111/lang.12284
McManus, K. (2022). Are replication studies infrequent because of negative attitudes? Insights from a survey of attitudes and practices in second language research. Studies in Second Language Acquisition, 44(5), 1410–1423. doi:10.1017/S0272263121000838
Mertzen, D., Lago, S., & Vasishth, S. (2021). The benefits of preregistration for hypothesis-driven bilingualism research. Bilingualism: Language and Cognition, 24(5), 807–812. doi:10.1017/S1366728921000031
Munafò, M. R., Nosek, B. A., Bishop, D. V. M., Button, K. S., Chambers, C. D., Percie du Sert, N., Simonsohn, U., Wagenmakers, E.-J., Ware, J. J., & Ioannidis, J. P. A. (2017). A manifesto for reproducible science. Nature Human Behaviour, 1(1), 1–9. doi:10.1038/s41562-016-0021
Nosek, B. A., Ebersole, C. R., DeHaven, A. C., & Mellor, D. T. (2018). The preregistration revolution. Proceedings of the National Academy of Sciences, 115(11), 2600–2606. doi:10.1073/pnas.1708274114
Nosek, B. A., & Errington, T. M. (2020). What is replication? PLOS Biology, 18(3), 1–8. doi:10.1371/journal.pbio.3000691
Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), 943–952. doi:10.1126/science.aac4716
Parsons, S., Azevedo, F., Elsherif, M. M., Guay, S., Shahim, O. N., Govaart, G. H., Norris, E., O'Mahony, A., Parker, A. J., & Todorovic, A. (2022). A community-sourced glossary of open scholarship terms. Nature Human Behaviour, 6(3), 312–318. doi:10.1038/s41562-021-01269-4
Plonsky, L. (Ed.) (in press). Open science in applied linguistics. John Benjamins.
Polio, C., & Gass, S. (1997). Replication and reporting: A commentary. Studies in Second Language Acquisition, 19(4), 499–508. doi:10.1017/S027226319700404X
Porte, G. (Ed.) (2012). Replication research in applied linguistics. Cambridge University Press.
Porte, G., & McManus, K. (2019). Doing replication research in applied linguistics. Routledge.
Porte, G., & Richards, K. (2012). Focus article: Replication in second language writing research. Journal of Second Language Writing, 21(3), 284–293. doi:10.1016/j.jslw.2012.05.002
Pownall, M., Talbot, C. V., Kilby, L., & Branney, P. (2023). Opportunities, challenges and tensions: Open science through a lens of qualitative social psychology. British Journal of Social Psychology, 1–21. doi:10.1111/bjso.12628
Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11), 1359–1366. doi:10.1177/0956797611417632
Świątkowski, W., & Dompnier, B. (2017). Replicability crisis in social psychology: Looking at the past to find new pathways for the future. International Review of Social Psychology, 30(1), 1–11. doi:10.5334/irsp.66
UNESCO. (2021). UNESCO Recommendation on open science. UNESCO. Retrieved from https://unesdoc.unesco.org/ark:/48223/pf0000379949.locale=en
Vazire, S. (2018). Implications of the credibility revolution for productivity, creativity, and progress. Perspectives on Psychological Science, 13(4), 411–417. doi:10.1177/1745691617751884