The review article by Sabbagh & Gelman (S & G) on The emergence of
language (EL) mentions several criticisms of strong emergentism, the view
that language emerges through an interaction between domain-general
learning mechanisms and the environment, without crediting the organism
with innate knowledge of domain-specific rules, a view that successful
connectionist modelling is taken to support. One frequently made
criticism of this view, and of the support that connectionist modelling
putatively lends it, is noted by S & G: arguably, connectionist
simulations succeed only because the input to the network in effect
contains a representation of the very knowledge the net is supposed to
acquire. I think it is
worth adding to this another criticism that to my mind is a fundamental one,
but which has not featured so strongly in critiques of connectionism. A
primary goal of modern linguistics has been to account not merely for what
patterns we do see in human languages, but for those that we do not. The
concept of Universal Grammar is precisely a set of limitations on what
constitutes a possible human language. The kind of example used in teaching
Linguistics 101 is the fact that patterns of grammaticality are structurally,
not linearly, determined: in English we form a yes–no question by inverting
the subject NP and auxiliary verb, not by inverting the first and second words
of the equivalent declarative sentence, or the first and fifth words, or any
number of conceivable non-structural operations. Could a connectionist
mechanism learn such non-structural operations? Perhaps I have asked the
wrong people, but when I have queried researchers doing connectionist
modelling, the answer appears to be ‘yes’. If so, then connectionist
mechanisms as currently developed do not constitute an explanatory
model of human language abilities: they are too powerful. A mechanism
that can learn the unattested non-structural patterns as readily as the
attested structural ones cannot explain why only the latter occur in
human languages.
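The Linguistics 101 example lends itself to a concrete sketch. The toy Python below is my own illustration, not anything drawn from the connectionist literature; the auxiliary list and the pre-parsed subject NP are drastically simplified assumptions. It contrasts a linear rule (front the first auxiliary in the word string) with the structure-dependent rule (invert the subject NP, as a whole constituent, with the main-clause auxiliary), showing that the two diverge exactly when an auxiliary sits inside the subject NP:

```python
# Toy contrast between a linear and a structure-dependent rule for forming
# English yes-no questions. The auxiliary list and example sentences are
# illustrative assumptions only.

AUXILIARIES = {"is", "are", "was", "were", "can", "will", "must"}

def front_first_aux(sentence):
    """Linear (non-structural) rule: move the first auxiliary in the
    word string to the front, ignoring constituent structure."""
    words = sentence.rstrip(".").lower().split()
    i = next(j for j, w in enumerate(words) if w in AUXILIARIES)
    fronted = [words[i]] + words[:i] + words[i + 1:]
    return (" ".join(fronted) + "?").capitalize()

def front_main_aux(subject_np, aux, predicate):
    """Structure-dependent rule: invert the subject NP (treated as a
    single constituent, however many words it contains) with the
    main-clause auxiliary."""
    return f"{aux} {subject_np} {predicate}?".capitalize()

# On a simple sentence the two rules coincide:
print(front_first_aux("The man is happy."))
# Is the man happy?

# But when an auxiliary sits inside the subject NP, the linear rule
# produces a string no English speaker would form:
print(front_first_aux("The man who is tall is happy."))
# Is the man who tall is happy?

print(front_main_aux("the man who is tall", "is", "happy"))
# Is the man who is tall happy?
```

A learner equipped only with the linear rule would happily produce the ungrammatical string; the empirical point is that children never do.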