Assisted and automated driving functions will rely on machine learning algorithms, given their ability to cope with real-world variations, e.g. vehicles of different shapes, positions, colors, and so forth. Supervised learning needs annotated datasets, and several automotive datasets are available. However, these datasets are tremendous in volume, and labeling accuracy and quality can vary across different datasets and within dataset frames. Accurate and appropriate ground truth is especially important for automotive applications, as “incomplete” or “incorrect” learning can negatively impact vehicle safety when these neural networks are deployed. This work investigates the ground truth quality of widely adopted automotive datasets, including a detailed analysis of KITTI MoSeg. Based on the errors identified and classified in the annotations of different automotive datasets, this article provides three collections of criteria for producing improved annotations. These criteria are enforceable and applicable to a wide variety of datasets. The three annotation sets are created to (i) remove dubious cases; (ii) annotate to the best of the human visual system; and (iii) remove clearly erroneous bounding boxes (BBs). KITTI MoSeg has been reannotated three times according to the specified criteria, and three state-of-the-art deep neural network object detectors are used to evaluate them. The results clearly show that network performance is affected by ground truth variations, and that removing clear errors is beneficial for predicting real-world objects only for some networks. The relabeled datasets still present some cases with “arbitrary”/“controversial” annotations, and therefore this work concludes with guidelines related to dataset annotation, metadata/sublabels, and specific automotive use cases.
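As a rough illustration of how detections can be scored against different ground-truth sets, the sketch below matches predicted bounding boxes to one annotation set using greedy IoU matching; re-running it with each re-annotated set shows how the same predictions score differently. The box format, function names, and 0.5 threshold are assumptions for illustration, not details taken from the article.

```python
# Minimal sketch: scoring one image's detections against an annotation set.
# Box format (x1, y1, x2, y2) and the 0.5 IoU threshold are illustrative
# assumptions, not taken from the paper.

def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def match_detections(predictions, ground_truth, iou_threshold=0.5):
    """Greedy matching of predicted boxes to ground-truth boxes.

    Returns (true_positives, false_positives, false_negatives); evaluating
    the same predictions against each re-annotated ground-truth set exposes
    how metric values depend on the labeling criteria.
    """
    unmatched_gt = list(ground_truth)
    tp, fp = 0, 0
    for pred in predictions:
        best = max(unmatched_gt, key=lambda gt: iou(pred, gt), default=None)
        if best is not None and iou(pred, best) >= iou_threshold:
            tp += 1
            unmatched_gt.remove(best)
        else:
            fp += 1
    return tp, fp, len(unmatched_gt)
```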
Predictive maintenance attempts to prevent unscheduled downtime by scheduling maintenance before expected failures and/or breakdowns while maximally optimizing uptime. However, this is a non-trivial problem, which requires sufficient data analytics knowledge and labeled data, either to design supervised fault detection models or to evaluate the performance of unsupervised models. While today most companies collect data by adding sensors to their machinery, the majority of this data is unfortunately not labeled. Moreover, labeling requires expert knowledge and is very cumbersome. To solve this mismatch, we present an architecture that guides experts, requiring them to label only a very small subset of the data compared to today’s standard labeling campaigns used when designing predictive maintenance solutions. We use auto-encoders to highlight potential anomalies and clustering approaches to group these anomalies into (potential) failure types. The accompanying dashboard then presents the anomalies to domain experts for labeling. In this way, we enable domain experts to enrich routinely collected machine data with business intelligence via a user-friendly hybrid model, combining auto-encoder models with labeling steps and supervised models. Ultimately, the labeled failure data allows for creating better failure prediction models, which in turn enables more effective predictive maintenance. More specifically, our architecture eliminates cumbersome labeling tasks, allowing companies to make maximum use of their data and expert knowledge to ultimately increase their profit. Using our methodology, we achieve a labeling gain of up to 90% compared to standard labeling tasks.
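A minimal sketch of the general idea, assuming generic tabular sensor data: train a small auto-encoder, flag samples with high reconstruction error as potential anomalies, and cluster them into candidate failure types for expert review. The model size, 99th-percentile threshold, and cluster count are illustrative assumptions, not the architecture described in the paper.

```python
# Illustrative sketch (not the paper's implementation): flag anomalies by
# auto-encoder reconstruction error, then cluster them for expert labeling.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))            # placeholder for sensor readings

X_scaled = StandardScaler().fit_transform(X)

# A small MLP trained to reproduce its own input acts as an auto-encoder.
autoencoder = MLPRegressor(hidden_layer_sizes=(4,), max_iter=2000, random_state=0)
autoencoder.fit(X_scaled, X_scaled)

# Samples the model reconstructs poorly are treated as potential anomalies.
errors = np.mean((autoencoder.predict(X_scaled) - X_scaled) ** 2, axis=1)
threshold = np.percentile(errors, 99)     # illustrative cut-off
anomalies = X_scaled[errors > threshold]

# Group the anomalies into candidate failure types for the labeling dashboard.
if len(anomalies) >= 2:
    clusters = KMeans(n_clusters=min(3, len(anomalies)), n_init=10,
                      random_state=0).fit_predict(anomalies)
    print(dict(zip(*np.unique(clusters, return_counts=True))))
```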
Work by Chomsky et al. (2019) and Epstein et al. (2018) develops a third-factor principle of computational efficiency called “Determinacy”, which rules out “ambiguous” syntactic rule-applications by requiring one-to-one correspondences between the input or output of a rule and a single term in the domain of that rule. This article first adopts the concept of “Input Determinacy” articulated by Goto and Ishii (2019, 2020), who apply Determinacy specifically to the input of operations like Merge, and then proposes to extend Determinacy to the labeling procedure developed by Chomsky (2013, 2015). In particular, Input Determinacy can explain restrictions on labeling in contexts where multiple potential labels are available (labeling ambiguity), and it can also provide an explanation for Chomsky's (2013, 2015) proposal that syntactic movement of an item (“Internal Merge”) renders that item invisible to the labeling procedure.
Discussions of terrorism assume actual or threatened violence, but the term is regularly used to delegitimize rivals' nonviolent actions. Yet do ordinary citizens accept descriptions of nonviolence as terrorism? Using a preregistered survey-experiment in Israel, a salient conflictual context with diverse repertoires of contention, we find that audiences rate adversary nonviolence close to terrorism, consider it illegitimate, and justify its forceful repression. These perceptions vary by the action's threatened harm, its salience, and respondents' ideology. Explicitly labeling nonviolence as terrorism, moreover, particularly sways middle-of-the-road centrists. These relationships replicate in a lower-salience conflict, albeit with milder absolute judgments, indicating generalizability. Hence, popular perceptions of terrorism are more fluid and manipulable than assumed, potentially undermining the positive effects associated with nonviolent campaigns.
In January 2020, the United States adopted a federal bioengineered labeling standard for food products that contain genetically modified material, set to go into effect in January 2022. This bioengineered label indicates which products contain detectable levels of genetic material that has been modified through lab techniques in ways that cannot be achieved in nature. An already existing alternative to the bioengineered label is the Non-GMO Project verified label, which has been on the market since 2007 and indicates products free of material genetically modified through lab techniques. As consumers are now confronted with multiple labels pertaining to information related to genetic engineering, it is important to understand how people interpret these labels, as this can lead to a greater understanding of how they inform consumer choice. We conducted a survey with 153 biology and environmental studies undergraduate students at Binghamton University in Binghamton, New York, asking questions about participants' views on genetically modified organisms (GMOs) and related terminology, the corresponding food labels, and how these labels influence their purchasing decisions. Results demonstrated a lack of awareness of the bioengineered label compared to the Non-GMO Project verified label. Additionally, individuals associated ‘bioengineered’ and ‘genetically modified’ with differing themes, where ‘bioengineered’ was more often associated with a scientific theme and ‘genetically modified’ was more often associated with an agricultural theme. There was also a discrepancy between how individuals said these labels influenced their purchases and how the labels actually influenced purchasing decisions when participating in choice experiments. While the majority of participants reported that neither the Non-GMO Project verified label nor the bioengineered label influenced their purchasing decisions, in choice experiments the majority of respondents chose products with the Non-GMO Project verified label. This study can give insight into overall perceptions of different terminologies associated with genetic engineering, in addition to how these labels are interpreted by consumers and how they could affect purchasing decisions with the implementation of the new bioengineered label.
Preparing for scale-up in commercial manufacturing is far from the thoughts of companies involved in product development, but this chapter shows when to start planning and how to plan a practical budget for this activity. For companies with their first product in commercial development, the build vs buy decision is never an easy one, and the examples and key points for consideration simplify that process. The biggest challenge in scaling up is the gap in culture between R&D production for experimental testing in preclinical stages and the control- and quality-oriented culture of the manufacturing location. The case studies and content in the chapter specifically highlight how to achieve a successful technology transfer into commercial GMP manufacturing. The chapter also gives practical guidelines on what it takes to put GMP and quality systems in place.
This article uses meta-regression analysis to examine variation in willingness to pay (WTP) for farm-raised seafood and aquaculture products. We measure the WTP premiums that consumers have for common product attributes and examine how WTP varies systematically across study design elements, populations of interest, and sample characteristics. Based on metadata from 45 studies, the meta-regression analysis indicates that WTP estimates differ significantly with the availability of attributes such as domestic and environmental certification, but also with sample income and gender representation.
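To make the meta-regression setup concrete, the sketch below runs a generic inverse-variance-weighted regression of WTP estimates on study and sample characteristics using synthetic placeholder data; the covariates, weights, and numbers are illustrative assumptions, not the authors' metadata or specification.

```python
# Illustrative sketch only: a weighted meta-regression of WTP premiums on
# study design and sample characteristics, using synthetic placeholder data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 45                                   # one row per primary-study estimate
X = np.column_stack([
    rng.integers(0, 2, n),               # domestic-origin attribute present
    rng.integers(0, 2, n),               # environmental certification present
    rng.normal(50, 10, n),               # mean sample income (thousands)
    rng.uniform(0.3, 0.7, n),            # share of female respondents
])
wtp = 1.0 + 0.5 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(0, 0.3, n)
precision = rng.uniform(1, 5, n)         # inverse-variance weights

# Weighted least squares down-weights imprecise primary estimates.
model = sm.WLS(wtp, sm.add_constant(X), weights=precision).fit()
print(model.summary())
```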
The First Amendment to the US Constitution protects commercial speech from government interference. Commercial speech has been defined by the US Supreme Court as speech that proposes a commercial transaction, such as marketing and labeling. Companies that produce products associated with public health harms, such as alcohol, tobacco, and food, thus have a constitutional right to market these products to consumers. This article will examine the evolution of US law related to the protection of commercial speech, often at the expense of public health. It will then identify outstanding questions related to the commercial speech doctrine and the few remaining avenues available in the United States to regulate commercial speech, including the use of government speech and addressing deceptive and misleading commercial speech.
Partial equilibrium models have been used extensively by policy makers to prospectively determine the consequences of government programs that affect consumer incomes or the prices consumers pay. However, these models have not previously been used to analyze government programs that inform consumers. In this paper, we develop a model that policy makers can use to quantitatively predict how consumers will respond to risk communications that contain new health information. The model combines Bayesian learning with the utility-maximization of consumer choice. We discuss how this model can be used to evaluate information policies; we then test the model by simulating the impacts of the North Dakota Folic Acid Educational Campaign as a validation exercise.
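A toy sketch of the mechanics only, not the authors' model: beliefs about a health risk are updated with Bayes' rule (here a beta-binomial form) and the updated risk then enters a simple expected-utility comparison between two products. All functional forms and numbers are assumptions for illustration.

```python
# Illustrative sketch: Bayesian belief updating about a health risk combined
# with utility-maximizing product choice. Functional forms and numbers are
# assumptions, not the paper's specification.

def update_belief(alpha, beta, adverse_events, exposures):
    """Beta-binomial update of the perceived probability of harm."""
    return alpha + adverse_events, beta + (exposures - adverse_events)

def expected_utility(price, perceived_risk, harm_cost, taste_value):
    """A simple linear utility: taste value minus price minus expected harm."""
    return taste_value - price - perceived_risk * harm_cost

# Prior belief about the risk before the information campaign (mean = 0.05).
alpha, beta = 1.0, 19.0

# New health information, e.g. 3 adverse outcomes reported in 200 cases.
alpha, beta = update_belief(alpha, beta, adverse_events=3, exposures=200)
posterior_risk = alpha / (alpha + beta)

# Choice between a cheaper risky product and a pricier safe alternative.
u_risky = expected_utility(price=2.0, perceived_risk=posterior_risk,
                           harm_cost=50.0, taste_value=10.0)
u_safe = expected_utility(price=4.0, perceived_risk=0.0,
                          harm_cost=50.0, taste_value=10.0)
print(posterior_risk, u_risky, u_safe)
```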
Part II looks at the position of fixers within the larger field of journalism. The newsmaking process can be understood as a series of mediations between successive contributors along a chain that stretches from local sources all the way to foreign audiences. “Fixers,” “translators,” “producers,” and others engage in similar journalistic activities along that chain, but news contributors nonetheless draw – and police – important distinctions among these various labels. To rise in status above “translators” and perhaps be recognized as “producers,” fixers try to present themselves as objective professionals and avoid the appearance of local allegiances. Yet local connections are, paradoxically, also their greatest asset for serving client reporters’ needs. Through accounts of reporting on events from the 2014 Soma mine disaster to the Syrian and Afghan refugee crises in Turkey, these chapters illustrate fixers’ ambiguous place in journalism’s hierarchical division and their efforts to claim high-status roles and labels.
In this pioneering study, a world-renowned generative syntactician explores the impact of phenomena known as 'third factors' on syntactic change. Generative syntax has in recent times incorporated third factors – factors not specific to the language faculty – into its framework, including minimal search, labelling, determinacy and economy. Van Gelderen's study applies these principles to language change, arguing that change is a cyclical process, and that third factor principles must combine with linguistic information to fully account for the cyclical development of 'optimal' language structures. Third Factor Principles also account for language variation around that-trace phenomena, CP-deletion, and the presence of expletives and Verb-second. By linking insights from recent theoretical advances in generative syntax to phenomena from language variation and change, this book provides a unique perspective, making it essential reading for academic researchers and students in syntactic theory and historical linguistics.
Chapter 2 examines linguistic changes that can be accounted for by solving labeling paradoxes. In Chomsky (2013, 2015), merging a head to a phrase no longer automatically results in the projection of that head into a label, and labeling paradoxes arise when two items merge that are (too) symmetric. These paradoxes can be resolved in several ways, namely by having one of the XPs move or by feature-sharing. The resolution discussed in this chapter involves the change from phrase to head, a possibility not discussed by Chomsky. The changes discussed involve pronouns reanalyzing as functional categories, i.e. as T or v, and demonstratives reanalyzing as articles and complementizers. In these changes, a third-factor resolution to the labeling problem can be observed: a change from feature-sharing and Agree to Minimal Search. The changes also show other factors involved, e.g. the difference between <Q,Q> and <phi,phi> sharing. The wh-elements whether and how are specifiers and show no reanalysis to heads, which indicates that their feature-sharing is stable.
Chapter 1 provides some background on the shift in emphasis from Universal Grammar (UG) to third factors and gives a description of selected third factors, e.g. the Inclusiveness Condition and the Extension Condition. The main emphasis is on the Labeling Algorithm and the Principle of Determinacy. Generative models focus on the faculty of language as represented in the mind/brain. UG is the “system of principles, conditions, and rules” that all languages share through biological necessity. However, although UG received a lot of attention, recently principles “grounded in physical law” and the general “capacity to acquire knowledge” have been emphasized more. This chapter also introduces the two main causes of language change that are responsible for the linguistic cycle: those driven by economy and those driven by innovation.
Chapter 5 examines the tension between determinacy and labeling. Due to determinacy, if there is a TP, Verb-second (V2), i.e. V to C, is not possible but TP expletives are. Conversely, if there is no TP, V2 is possible but TP expletives aren’t. I will argue that older stages of English lack a TP and that this enables both V2 and movement of the subject from the specifier of the v*P to the specifier of the CP. It also makes the grammatical subject position and the expletive optional. Later stages of English introduce a TP, which enables expletives in the TP but bars V2. The loss of V2 and the introduction of expletives have not been linked before, and this offers a new perspective both on the data in English and in V2 languages and on the tension between the two third-factor principles.
The chapter examines the influential perspective of symbolic interactionism with regard to its defining assumptions, its historical emergence, and its present status, both in the United States and internationally. The discussion covers debates among interactionists regarding theory and methodology, and it also considers intellectual movements strongly influenced by interactionism, especially identity theory, labeling theory, dramaturgy, and constructionism.
Lawrence T. Nichols is a former professor of sociology, recently retired from West Virginia University. He continues to do research and to publish on sociological theory, the construction of social problems, and the history and sociology of social science. Dr. Nichols also edits The American Sociologist, a quarterly journal with an international readership.
Successful investment strategies are specific implementations of general theories. An investment strategy that lacks a theoretical justification is likely to be false. Hence, an asset manager should concentrate her efforts on developing a theory rather than on backtesting potential trading rules. The purpose of this Element is to introduce machine learning (ML) tools that can help asset managers discover economic and financial theories. ML is not a black box, and it does not necessarily overfit. ML tools complement rather than replace the classical statistical methods. Some of ML's strengths include (1) a focus on out-of-sample predictability over variance adjudication; (2) the use of computational methods to avoid relying on (potentially unrealistic) assumptions; (3) the ability to “learn” complex specifications, including nonlinear, hierarchical, and noncontinuous interaction effects in a high-dimensional space; and (4) the ability to disentangle the variable search from the specification search, robust to multicollinearity and other substitution effects.
Expands the discussion of coordinate structures started in the previous chapter to another kind of syntactic ambiguity involving a prepositional phrase in the title of the Princeton University introductory linguistics course: Introduction to Language and Linguistics. On one interpretation, the left conjunct is only Language, while on the other it is Introduction to Language. Each interpretation corresponds to a unique hierarchical structure. To determine why one interpretation is more appropriate than the other, it is necessary to consider the meaning of the words language and linguistics, including how they relate. This leads to a basic discussion of what a language is and what language is from the perspective of modern linguistics. This chapter wraps up the analysis of coordinate structures with a discussion of the use and misuse of coordinate structures in writing. It demonstrates how coordinate structures can be a source of ambiguity, redundancy, and vagueness—all hallmarks of poor writing.
Understanding biofilm interactions with the surrounding substratum and pollutants/particles can benefit from the application of existing microscopy tools. Using the example of biofilm interactions with zero-valent iron nanoparticles (nZVI), this study aims to apply various approaches in biofilm preparation and labeling for fluorescence or electron microscopy and energy dispersive X-ray spectrometry (EDS) microanalysis for accurate observations. According to the targeted microscopy method, biofilms were sampled as flocs or attached biofilm, submitted to labeling using 4′,6-diamidino-2-phenylindole and lectins PNA and ConA coupled to a fluorescent dye or gold nanoparticles, and prepared for observation (fixation, cross-section, freezing, ultramicrotomy). Fluorescence microscopy revealed that nZVI were embedded in the biofilm structure as aggregates, but the resolution was insufficient to observe individual nZVI. Cryo-scanning electron microscopy (SEM) observations showed nZVI aggregates close to bacteria, but it was not possible to confirm direct interactions between nZVI and cell membranes. Scanning transmission electron microscopy in the SEM (STEM-in-SEM) showed that nZVI aggregates could enter the biofilm to a depth of 7–11 µm. Bacteria were surrounded by a ring of extracellular polymeric substances (EPS), preventing direct nZVI/membrane interactions. STEM/EDS mapping revealed a co-localization of nZVI aggregates with lectins, suggesting a potential role of EPS in embedding nZVI. Thus, combining divergent microscopy approaches is a good way to better understand and characterize biofilm/metal interactions.
This paper investigates pork supply chains’ neglect of developing innovations and mechanisms for delivering superior eating quality to consumers. We explore reasons behind pork supply chains’ predominant focus on mass production combined with traceability and food safety, while little attention has been given to potentially lucrative niche markets focused on intrinsic quality cues. Using the established analytical frameworks of hedonic pricing and transaction cost economics, we discuss alternative strategies for the segregation and promotion of pork differentiated by intrinsic sensory quality. Growing empirical evidence in the literature underpins the importance of the eating experience in delivering utility to consumers and in stabilizing declining demand trends in major markets. Building on the current consumer behavior literature and organizational developments in meat supply chains in Europe and Australia, we critically discuss opportunities to overcome this supposedly suboptimal situation.
We administered an online choice experiment to a sample of U.S. raw-oyster consumers to identify factors influencing preferences for Gulf of Mexico oysters, determine the extent of preference heterogeneity, and estimate marginal willingness to pay for specific varieties and other key attributes. Results indicate significant preference heterogeneity among select varieties, with non-Gulf respondents estimated to require a price discount on Gulf oyster varieties on the order of $3–$6/half dozen. Gulf respondents were found to be less sensitive to oyster variety and were estimated to be willing to pay a price premium only for select Gulf varieties, on the order of $0–$3/half dozen.