Over the last decades, archaeology has undergone a transformative shift in the wake of the digital turn, one that has shaped the ways in which it is researched and published. A key concept, openness, has emerged from this shift. This article explores digital approaches to data management conducted within the framework of the PERAIA project, which provides a comprehensive open database and a web application that integrate data on archaeological heritage spanning from late prehistory to antiquity, covering the Aegean area (Crete) and northeastern Libya / northwestern Egypt (Marmarica). We used a methodology that integrates legacy data with historical aerial and satellite imagery to identify archaeological features in the landscape, enriching them with associated environmental and historical (meta)data. Our open data practices reflect a commitment to open science, in which digital technology and the LOUD+FAIR principles have been at the core of the project to achieve data openness, fair access to information, and greater potential for data reuse.
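To make the idea of a LOUD+FAIR-style record concrete, the minimal sketch below serializes one hypothetical feature record as JSON-LD against the schema.org vocabulary. Every identifier, field value, and coordinate here is an illustrative assumption, not the PERAIA project's actual schema; it only shows the general shape such a record might take.

```python
import json

# Minimal sketch of a LOUD/FAIR-style metadata record for a single
# archaeological feature. All identifiers and values are hypothetical
# illustrations, not the PERAIA project's actual schema; only the
# schema.org context is a real, published vocabulary.
record = {
    "@context": "https://schema.org",
    "@type": "Place",
    "@id": "https://example.org/peraia/features/0001",  # hypothetical persistent ID
    "name": "Terraced field system (hypothetical example)",
    "description": "Feature identified on historical aerial imagery.",
    "geo": {
        "@type": "GeoCoordinates",
        "latitude": 35.2401,   # illustrative coordinates only
        "longitude": 24.8093,
    },
    "temporalCoverage": "-3000/0300",  # ISO 8601 interval, illustrative
    "license": "https://creativecommons.org/licenses/by/4.0/",
}

print(json.dumps(record, indent=2))
```

Publishing records in a shape like this, with stable identifiers, explicit licenses, and a shared vocabulary, is what makes them findable and reusable by tools beyond the project's own web application.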
Open data promises various benefits, including stimulating innovation, improving transparency and public decision-making, and enhancing the reproducibility of scientific research. Nevertheless, numerous studies have highlighted myriad challenges related to preparing, disseminating, processing, and reusing open data, with newer studies revealing issues similar to those identified a decade prior. Several researchers have proposed the open data ecosystem (ODE) as a lens for studying these issues and devising interventions to address them. Since actors in the ecosystem are individually and collectively affected by its sustainability, all have a role in tackling the challenges in the ODE. This paper asks what open data intermediaries may contribute to addressing these challenges. Open data intermediaries are third-party actors providing specialized resources and capabilities to (i) enhance the supply, flow, and/or use of open data and/or (ii) strengthen the relationships among various open data stakeholders. They are critical in ensuring the flow of resources within the ODE. Through semi-structured interviews and a validation exercise in the European Union context, this study explores the potential contribution of open data intermediaries and the specific ODE challenges they may address. The study identified 20 potential contributions addressing 27 challenges. These findings pave the way for further inquiry into the internal incentives (viable business models) and external incentives (policies and regulations) that could direct the contributions of open data intermediaries toward addressing challenges in the ODE.
One of the goals of open science is to promote the transparency and accessibility of research. Sharing data and materials used in network research is critical to these goals. In this paper, we present recommendations for whether, what, when, and where network data and materials should be shared. We recommend that network data and materials should be shared, but access to or use of shared data and materials may be restricted if necessary to avoid harm or comply with regulations. Researchers should share the network data and materials necessary to reproduce reported results via a publicly accessible repository when an associated manuscript is published. To ensure the adoption of these recommendations, network journals should require sharing, and network associations and academic institutions should reward sharing.
A number of data governance policies have recently been introduced or revised by the Indian Government with the stated goal of unlocking the developmental and economic potential of data. The policies seek to implement standardized frameworks for public data management and establish platforms for data exchange. However, India has a longstanding history of record-keeping and information transparency practices, which are crucial in the context of data management. These connections have not been explicitly addressed in recent policies such as the Draft National Data Governance Framework, 2022. To understand whether records management has a role to play in modern public data governance, we analyze the key new data governance framework and the associated Indian Urban Data Exchange platform as a case study. The study examines the exchange, where public records serve as a potential source of data, and evaluates the coverage and the actors involved in the creation of this data to understand the impact of records management on government departments' ability to publish datasets. We conclude that while India recognizes the importance of data as a public good, it needs to integrate digital records management practices more effectively into its policies to ensure accurate, up-to-date, and accessible data for public benefit.
This paper discusses the challenges and opportunities in accessing data to improve workplace relations law enforcement, with reference to minimum employment standards such as wages and working hours regulation. Our paper highlights some innovative examples of government and trade union efforts to collect and use data to improve the detection of noncompliance. These examples reveal the potential of data science as a compliance tool but also suggest the importance of realizing a data ecosystem that is capable of being utilized by machine learning applications. The effectiveness of using data and data science tools to improve workplace law enforcement is impacted by the ability of regulatory actors to access useful data they do not collect or hold themselves. Under “open data” principles, government data is increasingly made available to the public so that it can be combined with nongovernment data to generate value. Through mapping and analysis of the Australian workplace relations data ecosystem, we show that data availability relevant to workplace law compliance falls well short of open data principles. However, we argue that with the right protocols in place, improved data collection and sharing will assist regulatory actors in the effective enforcement of workplace laws.
This commentary explores the potential of private companies to advance scientific progress and solve social challenges by opening and sharing their data. Open data can accelerate scientific discoveries, foster collaboration, and promote long-term business success. However, concerns regarding data privacy and security can hinder data sharing. Companies can mitigate these challenges by, among other measures, developing data governance mechanisms, collaborating with stakeholders, communicating the benefits, and creating incentives for data sharing. Ultimately, open data has immense potential to drive positive social impact and business value, and companies can explore solutions suited to their circumstances and tailor them to their needs.
One of the drivers for pushing for open data as a form of corruption control stems from the belief that making government operations more transparent would make it possible to hold public officials accountable for how public resources are spent. These large datasets would then be open to the public for scrutiny and analysis, resulting in lower levels of corruption. Though data quality has been studied extensively and many advancements have been made, that work has not been widely applied to open data, with some aspects of data quality receiving more attention than others. One key aspect, accuracy, seems to have been overlooked. This gap motivated our inquiry: how is accurate open data produced, and how might breakdowns in this process introduce opportunities for corruption? We study a government agency within the Brazilian Federal Government to understand in what ways accuracy is compromised. Adopting a distributed cognition (DCog) theoretical framework, we found that the production of open data is not a neutral activity; instead, it is a distributed process performed by individuals and artifacts. This distributed cognitive process creates opportunities for data to be concealed and misrepresented. Two models mapping data production were generated, which together provide insight into how cognitive processes are distributed; how data flow, are transformed, stored, and processed; and where opportunities arise for data inaccuracies and misrepresentations to occur. The results have the potential to aid policymakers in improving data accuracy.
Sector-specific regulations apply in several network industries. The telecom sector and infrastructures such as utilities have been regulated on the notion that they are natural monopolies whose market power must be kept in check by regulation. However, at the beginning of the internet era, large tech escaped such regulation.
The past decade has seen the rise of “data portals” as online devices for making data public. They have been accorded a prominent status in political speeches, policy documents, and official communications as sites of innovation, transparency, accountability, and participation. Drawing on research on data portals around the world, data portal software, and associated infrastructures, this paper explores three approaches for studying the social life of data portals as technopolitical devices: (a) interface analysis, (b) software analysis, and (c) metadata analysis. These three approaches contribute to the study of the social lives of data portals as dynamic, heterogeneous, and contested sites of public sector datafication. They are intended to contribute to critically assessing how participation around public sector datafication is invited and organized with portals, as well as to rethinking and recomposing them.
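As a concrete illustration of the third approach, metadata analysis, the sketch below queries the public API of a CKAN-backed portal and tallies dataset licenses. The portal URL is an assumption (CKAN's public demo instance); the `package_search` action and the `license_id` field are part of CKAN's documented Action API, but any particular portal may run different software entirely.

```python
import requests

# Sketch of the "metadata analysis" approach: pull dataset metadata from a
# CKAN-backed open data portal via its public API and summarize licenses.
# demo.ckan.org is an assumed example portal; any CKAN instance exposing
# /api/3/action/package_search should behave the same way.
PORTAL = "https://demo.ckan.org"

resp = requests.get(
    f"{PORTAL}/api/3/action/package_search",
    params={"rows": 100},  # first 100 datasets; page with `start` for more
    timeout=30,
)
resp.raise_for_status()
datasets = resp.json()["result"]["results"]

# Count how datasets are licensed -- one simple lens on portal metadata.
licences = {}
for ds in datasets:
    key = ds.get("license_id") or "unspecified"
    licences[key] = licences.get(key, 0) + 1

for licence, n in sorted(licences.items(), key=lambda kv: -kv[1]):
    print(f"{licence}: {n}")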
The gender gap in political knowledge is a well-established finding in Political Science. One explanation for gender differences in political knowledge is the activation of negative stereotypes about women. As part of the Systematizing Confidence in Open Research and Evidence (SCORE) program, we conducted a two-stage preregistered and high-powered direct replication of Study 2 of Ihme and Tausendpfund (2018). While we successfully replicated the gender gap in political knowledge – such that male participants performed better than female participants – both the first (N = 671) and second stage (N = 831) of the replication of the stereotype activation effect were unsuccessful. Taken together (pooled N = 1,502), results indicate evidence of absence of the effect of stereotype activation on gender differences in political knowledge. We discuss potential explanations for these findings and put forward evidence that the gender gap in political knowledge might be an artifact of how knowledge is measured.
To further explore the issues discussed in previous chapters, this chapter uses the city of Bloomington, Indiana, and its open data portal as a case study. As open data portals are considered an instantiation of digital commons, it is assumed that their design and governance would support cooperation, community participation, and at least some forms of communal ownership, co-creation, and use. To test these assumptions, the GKC framework and its concepts and guiding questions are applied to this specific case to understand the actions around the portal, their patterns, and their outcomes.
When people have invested resources into an endeavor, they typically persist in it, even when it becomes obvious that it will fail. Here we show that this bias extends to people's moral decision-making. Across two preregistered experiments (N = 1,592), we show that people are more willing to proceed with a futile, immoral action when costs have been sunk (Experiments 1A and 1B). Moreover, we show that sunk costs distort people's perception of morality by increasing how acceptable they find actions that have received past investment (Experiment 2). We find these results in contexts where continuing would lead to no obvious benefit and only further harm. We also find initial evidence that the bias has a larger impact on judgment in immoral than in non-moral contexts. Our findings illustrate a novel way that the past can affect moral judgment. Implications for rational moral judgment and models of moral cognition are discussed.
The transition to open data practices is conceptually straightforward yet surprisingly challenging to implement, largely due to cultural and policy issues. A general data sharing framework is presented along with two case studies that highlight these challenges and offer practical solutions that can be adjusted depending on the type of data collected, the country in which the study is initiated, and the prevailing research culture. Embracing the constraints imposed by data privacy considerations, especially for biomedical data, must be emphasized for data outside of the United States until data privacy laws are established at the federal and/or state level.
This paper identifies the potential benefits of data sharing and open science, supported by artificial intelligence tools and services, and examines the challenges of making data open: findable, accessible, interoperable, and reusable (FAIR).
The extent to which findings in bilingualism research are contingent on specific analytic choices, experimental designs, or operationalisations is currently unknown. Poor availability of data, analysis code, and materials has hindered the development of cumulative lines of research. In this review, we survey current practices and advocate a credibility revolution in bilingualism research through the adoption of minimum standards of transparency. Full disclosure of data and code is necessary not only to assess the reproducibility of original findings, but also to test the robustness of those findings to different analytic specifications. Similarly, full provision of experimental materials and protocols underpins assessment of both the replicability of original findings and their generalisability to different contexts and samples. We illustrate the review with examples where good practice has advanced the agenda in bilingualism research and highlight resources to help researchers get started.
This paper introduces a set of principles that articulate a shared vision for increasing access to data in the engineering and related sectors. The principles are intended to help guide progress toward a data ecosystem that provides sustainable access to data, in ways that will help a variety of stakeholders maximize its value while mitigating potential harms. In addition to being a manifesto for change, the principles can also be viewed as a means for understanding the alignment, overlaps, and gaps between a range of existing research programs, policy initiatives, and related work on data governance and sharing. After providing background on the growing data economy and relevant recent policy initiatives in the United Kingdom and European Union, we introduce the nine key principles of the manifesto. For each principle, we provide additional rationale and links to related work. We invite feedback on the manifesto and endorsements from a range of stakeholders.
Communication takes many forms. This chapter offers guidance for presenting work in a poster or talk, as well as for writing a research article for publication.
This research contributes to the expanding literature on the determinants of government transparency. It uncovers the dynamics of transparency in the Italian case, which shows an interesting reform trajectory: until the late 1980s no transparency provisions existed; since then, provisions have dramatically increased under the impulse of changing patterns of political competition. The analysis of the Italian case highlights that electoral uncertainty for incumbents is a double-edged sword for institutional reform: on the one hand, it incentivizes the adoption of ever-growing transparency provisions; on the other, it jeopardizes the implementation capacity of public agencies by leading to severe administrative burdens.
Although nowadays most courts publish decisions on the internet, substantial differences exist between European countries regarding such publication. These differences pertain not only to the extent to which judgments are published and anonymised, but also to their metadata, searchability, and reusability. This article, written by Marc van Opijnen, Ginevra Peruginelli, Eleni Kefali and Monica Palmirani, contains a synthesis of a comprehensive comparative study on the publication of court decisions in all Member States of the European Union. Specific attention is paid to the legal and policy frameworks governing case law publication, actual practices, data protection issues, Open Data policies, and the state of play regarding the implementation of the European Case Law Identifier.
This paper proposes a simple method for categorizing fields at a regional level with respect to intra-field variation. It aims to identify the fields where the potential benefits of applying precision agriculture practices are highest from an economic and environmental perspective. The categorization is based on vegetation indices derived from Sentinel-2 satellite imagery. A case study on 7,678 winter wheat fields is presented, which employs open data and open source software to analyze the satellite imagery. Furthermore, the method can be automated to deliver updated categorizations with every update of the satellite imagery, thereby coupling the geospatial data analysis to direct improvements for farmers, contractors, and consultants.
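A minimal sketch of the underlying computation, assuming NDVI as the vegetation index: Sentinel-2 bands B04 (red) and B08 (near-infrared) yield NDVI = (NIR - Red) / (NIR + Red), and fields can then be ranked by their intra-field variation. The coefficient-of-variation measure and the 0.15 threshold below are illustrative assumptions, not the paper's exact categorization rule.

```python
import numpy as np

# Sketch of the categorization idea: compute NDVI per field from Sentinel-2
# red (B04) and near-infrared (B08) reflectance, then flag fields with high
# intra-field variation. The coefficient-of-variation measure and the 0.15
# threshold are illustrative assumptions, not the paper's exact method.

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / np.clip(nir + red, 1e-6, None)

def high_variation(red: np.ndarray, nir: np.ndarray,
                   cv_threshold: float = 0.15) -> bool:
    """Flag a field whose intra-field NDVI variation suggests the largest
    potential payoff from precision agriculture practices."""
    v = ndvi(red, nir)
    cv = v.std() / max(abs(v.mean()), 1e-6)  # coefficient of variation
    return cv > cv_threshold

# Toy example: two simulated 20x20 pixel fields (reflectance in [0, 1]).
rng = np.random.default_rng(0)
uniform_field = (rng.normal(0.08, 0.005, (20, 20)),
                 rng.normal(0.45, 0.01, (20, 20)))
patchy_field = (rng.normal(0.10, 0.04, (20, 20)).clip(0.01),
                rng.normal(0.40, 0.12, (20, 20)).clip(0.01))

print(high_variation(*uniform_field))  # False: little within-field variation
print(high_variation(*patchy_field))   # True: strong within-field variation
```

Because the inputs are just per-field pixel arrays, the same routine can run unattended whenever a new Sentinel-2 scene arrives, which is what makes the automated re-categorization described above plausible.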