A proof of Morley's conjecture
Published online by Cambridge University Press: 12 March 2014
Extract
In the 1960s, it was conjectured that a complete first order theory in a countable language would have a nondecreasing spectrum on uncountable cardinals. This conjecture became known as Morley's conjecture. Shelah has proved it in [10]. The intent of this paper is to give a different proof, one which resembles a more naive way of approaching the theorem.
Let I(T, λ) be the number of nonisomorphic models of T of cardinality λ. We prove:
Theorem 0.1. If T is a complete countable first order theory, then for ℵ0 < κ < λ, I(T, κ) ≤ I(T, λ).
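For concreteness, the spectrum function can be spelled out as follows; the set-builder rendering is a standard formalization of the definition above, not notation from the paper itself:

```latex
% I(T, lambda): the number of isomorphism classes of
% models of T of cardinality lambda.
\[
  I(T,\lambda) \;=\; \bigl|\,\{\, M \mid M \models T,\ |M| = \lambda \,\}\big/{\cong}\,\bigr|
\]
```

Theorem 0.1 then asserts that I(T, ·) is nondecreasing on the uncountable cardinals.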
In some sense, one can view Shelah's work on the classification of first order theories as an attack on Morley's conjecture. Over the years, he has shown that certain assumptions on a first order theory would lead to its having maximal spectrum in powers larger than the cardinality of its language (see §6 for precise references). At some point it must have seemed that Morley's conjecture would be a corollary to an exact calculation of all possible spectra. In the end, this did not occur and, in fact, the exact spectrum functions are still not known (see [10]). Let us consider a naive approach to the proof.
If we have two nonisomorphic models of the same cardinality, and their cardinality is “large enough”, then there should be some reason, irrespective of their cardinalities, which causes this nonisomorphism. If we could isolate this property and extend these models to a larger cardinality while preserving it, then the larger models would also be nonisomorphic. The notion of extendibility introduced in §2 is such a property, and it allows a version of this naive proof to work (see the schematic below). Let us preview the sections.
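Schematically, the naive strategy amounts to constructing an extension operation that is injective on isomorphism types. In the sketch below, E (an extension operation taking models of size κ to models of size λ) and P (an isomorphism invariant preserved by E) are illustrative placeholders, not the paper's notation; the actual invariant is the extendibility machinery of §2:

```latex
% Suppose E maps models of T of size kappa to models of size lambda,
% and P is an isomorphism invariant with P(E(M)) = P(M).
% If P separates M and N, it separates their extensions:
\[
  P(M) \neq P(N)
  \;\Longrightarrow\;
  P\bigl(E(M)\bigr) = P(M) \neq P(N) = P\bigl(E(N)\bigr)
  \;\Longrightarrow\;
  E(M) \not\cong E(N)
\]
```

Thus E would be injective on isomorphism classes, and counting classes on each side gives I(T, κ) ≤ I(T, λ). The difficulty, of course, is finding a property that both witnesses nonisomorphism and survives extension; that is the role extendibility plays in §2.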
- Copyright © Association for Symbolic Logic 1989