Methods for analyzing and visualizing literary data receive substantially more attention in digital literary studies than the digital archives with which literary data are predominantly constructed. When discussed, digital archives are often perceived as entirely different from nondigital ones, and as passive – that is, as novel and enabling (or disabling) settings or backgrounds for research rather than active shapers of literary knowledge. This understanding produces abstract critiques of digital archives and risks conflating events and trends in the histories of literary data with events and trends in literary history. By contrast, an emerging group of media-specific approaches adapts traditional philological and media archaeological methods to explore the complex and interdependent relationship between literary knowledges, technologies, and infrastructures.
Beginning substantive engagement with Sun Tzu with a focus on calculation serves a positive purpose. It is a way of emphasizing to contemporary audiences that there is more to Sun Tzu than being tricky or unorthodox – the strands of his way of war that readers, at least Western ones, widely note and often lionize. In present usage, the umbrella term “calculation” serves as a flexible rubric covering intendedly rational judgments of more than one kind, some intuitive, others more formally structured.
The Computational Theory of Mind says that the mind is a computing system. It has a long history going back to the idea that thought is a kind of computation. Its modern incarnation relies on analogies with contemporary computing technology and the use of computational models. It comes in many versions, some more plausible than others. This Element defends the theory primarily on the grounds of its contribution to solving the mind-body problem, its ability to explain mental phenomena, and the success of computational modelling and artificial intelligence. To be turned into an adequate theory, it needs to be made compatible with the tractability of cognition, the situatedness and dynamical aspects of the mind, the way the brain works, intentionality, and consciousness.
This article addresses contemporary art as a means to investigate how, and to what extent, financial logic impacts upon the socio-cultural sphere. Its contribution is twofold: on the one hand, the article shows that contemporary art's valuation practices increasingly reflect the logic of capitalization; on the other hand, it assesses the emancipatory potential of blockchain technology for the cultural sphere. In relation to the latter I argue that, in spite of the technological novelty of blockchain-based art projects, these nonetheless fail to challenge a received logic of finance. This exposes the limitations of technological determinism as a means of countering financial power in the socio-cultural sphere, and points to new problems for art's valuation methods in relation to the liquid logic of algorithmic finance.
This article presents a speculative philosophical account of money as a computational machine. It does so by leveraging a computational and machinic framework, drawing primarily from the work of Philip Mirowski and Jean Cartelier. The argument is focused on a specific level of abstraction, i.e., the monetary operations involved in the creation and transfer of units of account, asking whether it is possible to view these operations as computations that mediate economic relations. As the primary function of such a machine would be one of social coordination, the article also highlights the political consequences of its implementation across society.
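As a concrete gloss on this level of abstraction, one might picture a ledger in which the creation and transfer of units of account are the two state-transition operations; the sketch below is a minimal illustration of that idea, not the article's own model, and the account names and amounts are invented.

```python
# Toy gloss of "monetary operations as computations": the ledger's state
# is a mapping of accounts to units of account, and creation and transfer
# are the two state-transition operations the article abstracts over.
# Illustrative sketch only; names and amounts are invented.

def create(ledger, account, amount):
    """Issue new units of account to an account (e.g., bank credit)."""
    new = dict(ledger)
    new[account] = new.get(account, 0) + amount
    return new

def transfer(ledger, payer, payee, amount):
    """Move existing units between accounts; fails if payer lacks funds."""
    if ledger.get(payer, 0) < amount:
        raise ValueError("insufficient units of account")
    new = dict(ledger)
    new[payer] -= amount
    new[payee] = new.get(payee, 0) + amount
    return new

ledger = create({}, "bank", 100)
ledger = transfer(ledger, "bank", "firm", 40)
print(ledger)  # {'bank': 60, 'firm': 40}
```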
The casino provided a unique location to probe the logic of chance for those seeking to understand fortune and misfortune, causation and correlation. Chance helped generate predictability. When we shift to consider the picture of luck that emerges, we see that it was exhibited in various systems designed to generate wins at the gambling table, lured to a person through any number of bizarre superstitions, and made the object of social scientific inquiry. Luck was something that people could generate, manufacture, cultivate, or capture. This element of agency speaks to a vision of the world that promoted the basic idea of human agency while also acknowledging its limits. Gambling systems and superstitions, especially when they did not rest on the foundation of the “maturity of chances,” were at their heart modern attempts to bend luck to one’s side.
The nineteenth-century Australian novel has predominantly been understood in terms of the dominance of Britain, both as the place where most books were published and as the source of literary traditions. But this account presumes and maintains the status of the book as the primary vehicle for transmission of literature, whereas the vast majority of Australian novels were serialised (either before or after book publication) and a great many were only ever published in serial form. A history of the early Australian novel that recognises the vital role of serialisation, as distinct from but also in relation to book publication, brings to light new trends in authorship, publication, circulation and reception. This history also uncovers new Australian novelists as well as previously unrecognised features of their fiction. In particular, a number of literary historians argue that early Australian novelists replicated the legal lie of terra nullius in excluding Aboriginal characters from their fiction. Considering fiction serialised in Australian newspapers indicates that these characters were actually widely depicted and suggests the need for a new account of the relationship between nineteenth-century Australian novels and colonisation.
This chapter provides an introduction and an overview of the computational cognitive sciences. Computational cognitive sciences explore the essence of cognition and various cognitive functionalities by developing mechanistic, process-based understanding, specified in the form of computational models. These models impute computational processes onto cognitive functions and thereby produce runnable programs. Detailed simulations and other operations can then be conducted. Understanding the human mind strictly from observations of, and experiments with, human behavior is ultimately untenable. Computational modeling is therefore both useful and necessary. Computational cognitive models are theoretically important because they represent detailed cognitive theories in a unique, indispensable way. Computational cognitive modeling has thus far deepened our understanding of the processes and mechanisms of the mind in a variety of ways.
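One way to see what “runnable programs” means in practice is a minimal simulation such as the sketch below, which implements a drift-diffusion model, a standard computational cognitive model of two-choice decisions; the choice of model and all parameter values are illustrative assumptions, not drawn from this chapter.

```python
import numpy as np

def drift_diffusion_trial(drift=0.3, noise=1.0, threshold=1.0,
                          dt=0.001, rng=None):
    """Simulate one two-choice decision as noisy evidence accumulation.

    Returns (choice, reaction_time): choice is +1 or -1 depending on
    which boundary the accumulated evidence crosses first.
    """
    rng = rng if rng is not None else np.random.default_rng()
    evidence, t = 0.0, 0.0
    while abs(evidence) < threshold:
        evidence += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if evidence > 0 else -1), t

# Simulate a block of trials and summarize behavior, as a modeler would
# when comparing the model's predictions against human data.
rng = np.random.default_rng(0)
trials = [drift_diffusion_trial(rng=rng) for _ in range(500)]
accuracy = np.mean([choice == 1 for choice, _ in trials])
mean_rt = np.mean([rt for _, rt in trials])
print(f"accuracy={accuracy:.3f}, mean RT={mean_rt:.3f}s")
```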
What counts as a philosophical issue in computational cognitive science? This chapter briefly reviews possible answers before focusing on a specific subset of philosophical issues. These surround challenges that have been raised by philosophers regarding the scope of computational models of cognition. The arguments suggest that there are aspects of human cognition that may, for various reasons, resist explanation or description in terms of computation. The primary targets of these “no go” arguments have been semantic content, phenomenal consciousness, and central reasoning. This chapter reviews the arguments and considers possible replies. It concludes by highlighting the differences between the arguments, their limitations, and how they might contribute to the wider project of estimating the value of ongoing research programs in computational cognitive science.
Belief is often formalized using the tools of probability theory. However, probability theory tends to focus on simple examples – like coin flips or basic parametric distributions – and these capture little of actual human thinking. I highlight some basic examples of the complexity and richness of human mental representations and review work that attempts to marry plausible types of representations with probabilistic models of belief, one of the most exciting current directions in psychology and machine learning.
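As a toy illustration of marrying richer representations with probabilistic belief, the sketch below performs Bayesian updating over a space of structured hypotheses (rules about numbers) rather than a single coin-bias parameter; the hypothesis space, the uniform prior, and the size-principle likelihood are assumptions made for the example.

```python
# Toy Bayesian concept learning: belief distributed over structured
# hypotheses (rules) instead of one coin-flip parameter. The hypothesis
# space and prior here are invented for illustration.

hypotheses = {
    "even":            [n for n in range(1, 101) if n % 2 == 0],
    "odd":             [n for n in range(1, 101) if n % 2 == 1],
    "powers_of_2":     [2 ** k for k in range(1, 7)],
    "multiples_of_10": list(range(10, 101, 10)),
}
prior = {h: 1 / len(hypotheses) for h in hypotheses}

def posterior(data, hypotheses, prior):
    """P(h | data) with a size-principle likelihood: each observation is
    drawn uniformly from the hypothesis's extension (0 if outside it)."""
    scores = {}
    for h, extension in hypotheses.items():
        if all(x in extension for x in data):
            scores[h] = prior[h] * (1 / len(extension)) ** len(data)
        else:
            scores[h] = 0.0
    z = sum(scores.values())
    return {h: s / z for h, s in scores.items()}

print(posterior([2, 8, 64], hypotheses, prior))  # favors "powers_of_2"
```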
This lecture was given by Per Martin-Löf at Leiden University on August 25, 2001, at the invitation of Göran Sundholm, to address the topic mentioned in the title and to reflect on Dummett’s earlier effort of almost a decade before (published in this journal). The lecture was part of a three-day conference on Gottlob Frege. Sundholm arranged for the lecture to be recorded and commissioned Bjørn Jespersen to make a transcript. The information in footnote 1, which Sundholm provided, has been independently confirmed by Thomas Ricketts in an email to the author. The present version has been edited by Ansten Klev. Following the displayed text (Int-id) there is a lacuna in the original transcript, corresponding to a pause in the recording when the tape was changed. The continuous text of the present version is the result of a few additions to the original transcript suggested by Klev and agreed to by the author.
Until one is committed, there is hesitancy, the chance to draw back. Concerning all acts of initiative (and creation), there is one elementary truth, the ignorance of which kills countless ideas and splendid plans: that the moment one definitely commits oneself, then Providence moves too. All sorts of things occur to help one that would never otherwise have occurred. A whole stream of events issues from the decision, raising in one’s favor all manner of unforeseen incidents and meetings and material assistance, which no man could have dreamed would have come his way. I have learned a deep respect for one of Goethe’s couplets: “Whatever you can do, or dream you can do, begin it. Boldness has genius, power, and magic in it.”
Twenty-first-century paradigms of global modernism implicitly endorse “babelization” (the inscrutable styles of literary texts, the addition of lesser-taught languages to the field) as a corrective to linguistic imperialism and the reduction of language to a communicative medium. Yet this stance does not fully account for the distinction between natural and artificial languages. “Debabelization,” as the linguist C. K. Ogden put it in 1931, motivated rich debates about the nature of language and whether technological intervention could make particular languages more efficient agents of cultural exchange. Designers of Esperanto, Ido, and Basic English each promised that their artificial language would bridge the gap between speakers of different national tongues. This essay shows how the competitive and techno-utopian discourse around auxiliary language movements intersects with the history and aesthetics of modernist literature. While linguists strove to regulate the vagaries of natural languages, modernist writers (for example, Aimé Césaire, G. V. Desani, James Joyce, Ezra Pound, H. G. Wells) used debabelization as a trope for exploring the limits of scientific objectivity and internationalist sentiment.
We describe a method to estimate background noise in atom probe tomography (APT) mass spectra and to use this information to enhance both background correction and quantification. Our approach is mathematically general for any detector exhibiting Poisson noise with a fixed data-acquisition time window, at voltages that vary through the experiment. We show that it accurately estimates the background observed in real experiments. The method requires, as a minimum, the z-coordinate and mass-to-charge-state data as input and can be applied retrospectively. Further improvements are obtained with additional information such as acquisition voltage. Using this method allows for improved estimation of variance in the background and more robust quantification, with quantified count limits at parts-per-million concentrations. To demonstrate applications, we show a simple peak-detection implementation, which quantitatively suppresses false positives arising from random noise sources. We additionally quantify the detectability of ¹²¹Sb in a standardized, doped Si microtip as (1.5 × 10⁻⁵, 3.8 × 10⁻⁵) atomic fraction (α = 0.95). This technique is applicable to all modes of APT data acquisition and is highly general in nature, ultimately allowing for improvements in analyzing low ionic count species in datasets.
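The core statistical move, comparing counts in a peak window against a Poisson background expectation, can be sketched compactly. The toy below assumes a flat background estimated from a peak-free region of synthetic data; the window limits, the peak location, and the flat-background simplification are illustrative assumptions, not the paper's voltage-aware estimator.

```python
import numpy as np
from scipy import stats

def background_rate(mass_to_charge, window=(95.0, 105.0)):
    """Estimate a flat background rate (counts per Da) from a spectrum
    region assumed to contain no peaks."""
    lo, hi = window
    n = np.count_nonzero((mass_to_charge >= lo) & (mass_to_charge < hi))
    return n / (hi - lo)

def peak_excess(mass_to_charge, rate, peak=(120.8, 121.2)):
    """Compare counts in a peak window against Poisson background.

    Returns (observed counts, expected background counts, p-value),
    where the p-value is P(N >= observed | background only), i.e. the
    Poisson survival function evaluated at observed - 1.
    """
    lo, hi = peak
    n_obs = np.count_nonzero((mass_to_charge >= lo) & (mass_to_charge < hi))
    expected = rate * (hi - lo)
    p_value = stats.poisson.sf(n_obs - 1, expected)
    return n_obs, expected, p_value

# Synthetic data: flat background plus a small peak near 121 Da.
rng = np.random.default_rng(0)
mtc = np.concatenate([rng.uniform(90, 130, 4000),    # background ions
                      rng.normal(121.0, 0.05, 60)])  # peak ions
rate = background_rate(mtc)
n, b, p = peak_excess(mtc, rate)
print(f"observed={n}, background~{b:.1f}, p={p:.2e}")
```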
Social kinds are heterogeneous. As a consequence of this diversity, some authors have sought to identify and analyse different kinds of social kinds. One distinct kind of social kinds, however, has not yet received sufficient attention. I propose that there exists a class of social-computation-supporting kinds, or SCS-kinds for short. These SCS-kinds are united by the function of enabling computations implemented by social groups. Examples of such SCS-kinds are the reimbursement form, the US dollar bill, and the chair of the board. I will analyse SCS-kinds, contrast my analysis with theories of institutional kinds, and discuss the benefits of investigating SCS-kinds.
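One way to picture “enabling computations implemented by social groups” is the toy sketch below, in which a reimbursement form is the data structure that a group's approval workflow operates on; the form's fields, the approval rule, and the spending limit are all invented for illustration.

```python
# Toy gloss of an SCS-kind: a reimbursement form as the data structure
# on which a social group's computation (an approval workflow) runs.
# Fields, rule, and limit are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class ReimbursementForm:
    claimant: str
    amount: float
    approvals: list = field(default_factory=list)

def approve(form, approver, limit=500.0):
    """One step of the distributed computation: an approver checks the
    form against a rule and records the result on the form itself."""
    if form.amount <= limit:
        form.approvals.append(approver)
    return form

form = approve(ReimbursementForm("Ada", 120.0), approver="department head")
print("reimbursable:", bool(form.approvals))
```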
I argue that acceptance of realist intentional explanations of cognitive behaviour inescapably leads to a commitment to the language of thought (LOT) and that this is, therefore, a widely held commitment of philosophers of mind. In the course of the discussion, I offer a succinct and precise statement of the hypothesis and analyze a representative series of examples of pro-LOT argumentation. After examining two cases of resistance to this line of reasoning, I show, by way of conclusion, that the commitment to LOT is an empirically substantial one in spite of the flexibility and incomplete character of the hypothesis.
Minimizing the costs that others impose upon oneself and upon those in whom one has a fitness stake, such as kin and allies, is a key adaptive problem for many organisms. Our ancestors regularly faced such adaptive problems (including homicide, bodily harm, theft, mate poaching, cuckoldry, reputational damage, sexual aggression, and the infliction of these costs on one's offspring, mates, coalition partners, or friends). One solution to this problem is to impose retaliatory costs on an aggressor so that the aggressor and other observers will lower their estimates of the net benefits to be gained from exploiting the retaliator in the future. We posit that humans have an evolved cognitive system that implements this strategy – deterrence – which we conceptualize as a revenge system. The revenge system produces a second adaptive problem: losing downstream gains from the individual on whom retaliatory costs have been imposed. We posit, consequently, a subsidiary computational system designed to restore particular relationships after cost-imposing interactions by inhibiting revenge and motivating behaviors that signal benevolence toward the harmdoer. The operation of these systems depends on estimating the risk of future exploitation by the harmdoer and the expected future value of the relationship with the harmdoer. We review empirical evidence regarding the operation of these systems, discuss the causes of cultural and individual differences in their outputs, and sketch their computational architecture.
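Although the abstract states no equations, the trade-off it describes can be caricatured as a decision rule over the two estimated quantities it names; in the sketch below, the functional form, the scaling factor, and the cost parameter are invented for illustration.

```python
# Toy decision sketch of the proposed revenge/forgiveness trade-off:
# retaliate when the deterrence benefit exceeds the retaliation cost
# plus the expected future value of the relationship. The functional
# form and constants are invented for illustration only.

def respond(exploitation_risk, relationship_value, retaliation_cost=1.0):
    """Return 'retaliate' or 'reconcile' from two estimated quantities:
    the risk of future exploitation by the harmdoer (0..1) and the
    expected future value of the relationship (arbitrary units)."""
    deterrence_benefit = exploitation_risk * 10.0  # assumed scaling
    if deterrence_benefit - retaliation_cost > relationship_value:
        return "retaliate"
    return "reconcile"

print(respond(exploitation_risk=0.9, relationship_value=2.0))  # retaliate
print(respond(exploitation_risk=0.1, relationship_value=5.0))  # reconcile
```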
Different explanations of color vision favor different philosophical positions: Computational vision is more compatible with objectivism (the color is in the object), psychophysics and neurophysiology with subjectivism (the color is in the head). Comparative research suggests that an explanation of color must be both experientialist (unlike objectivism) and ecological (unlike subjectivism). Computational vision's emphasis on optimally “recovering” prespecified features of the environment (i.e., distal properties, independent of the sensory-motor capacities of the animal) is unsatisfactory. Conceiving of visual perception instead as the visual guidance of activity in an environment that is determined largely by that very activity suggests new directions for research.