An attempt at summarizing Chomsky's lecture in Ann Arbor on July 11, 2013, for the benefit of those who were unable to attend, and those who did not take notes as copious as mine. All material in "quotation marks" is an actual quote from Dr. Chomsky; all material in 'single quotes' is not. Material in [brackets] is my commentary.
It's unpopular opinions time, ladies and gentlemen.
The one undisputed fact in this lecture is that the second half of the 20th century saw an explosion of inquiry into language, and a vast proliferation of investigable questions. Whether the pattern this explosion followed was a good thing is disputed. More on that later.
We can trace much of this explosion to new options for considering and investigating the question: what is language?
[Here Chomsky describes for the first time what he calls the "Basic Property" of language, the full description of which I was never able to properly get down, but which goes something like the following: an unbounded array of hierarchically interpreted something something conceptual and mental system. If I get better data (like someone uploads a video or something) I'll update this to reflect that. Key words: unbounded, hierarchical, interpreted, conceptual/mental.]
The classic formulation, from Aristotle, is that 'Language is sound with meaning.' This formulation is misleading.
Chomsky uses the term 'i-language': language is internal, individual, and intensional [which refers to a 'computational procedure'?], and claims that language has a strong generative capacity. He disdains the concepts of 'e-language' and weak generative capacity, possibly claiming they are incoherent.
Language is a property of an individual, arising from an individual brain - this is the biolinguistic perspective, which he claims should be a truism.
Investigations must be based on a concrete idea of what language is. We have only recently been able to articulate the Basic Property; previously, descriptions were fuzzy and vague, along the lines of Aristotle's classical formulation, or the Boasian idea that languages can differ arbitrarily, and therefore one may approach any given language with nothing more for one's linguistic theory than a collection of analytic procedures and principles.
However, generative grammar and the biolinguistic approach open up new avenues of inquiry: we can learn much by comparing many languages, and within a single language we still have new things to investigate if we assume that there is an underlying structure, pattern, or thing inherent to language. [I think this is what he meant, anyway.] Language relies on shared biology: this is where UG (Universal Grammar) becomes important.
However, alas, fuzzy answers to the question remain current. From a recent publication, names skipped to protect the guilty: language is "the full suite of abilities to map sound to meaning." This is "too empty to ground." [One might say that this 'fuzzy' explanation (more of a description) does not have any predictive value; it does not make any scientific assertions. Therefore it isn't useful or meaningful.]
He refers to Ian Tattersall's work on human evolution, citing the claim that there was an extremely abrupt transition, 50-100 thousand years ago, to modern humanity. The hypothesis, then, is that the appearance of language was the source of this abrupt transition. [This is foreshadowing the claim that language is primarily internal, and peripherally external/expressed.]
He then discussed the approach to language of some great scientists of the past, who noted that language is the primary separator between human and animal, and/or noted the infinitude of language. In particular, Darwin and Galileo both recognized the unboundedness of language, a recognition that was evidently quite rare. He emphasized Descartes' notion that the use of language has a creative character - another way of expressing its lack of bounds. Language is an infinite use of finite means. The task of the linguist, then, would be to discuss the mechanisms of this use, how they arise in the mind, and the "great principles underlying all languages". [This may have been further reference to Descartes.]
Therefore, he says, we must investigate the interfaces involved in language production and the generative procedures, with an emphasis on "free expressions", the heart of creative infinitude.
There are puzzling phenomena, which are not obvious until you have articulated the Basic Property. He says, "convention and analogy … gets you nowhere." The key to scientific investigation is noticing puzzlement, accepting it, and acting upon it. In that way we learn, as we already have to some extent with linguistics, that "our beliefs are all senseless and our intuitions are mostly wrong."
He used an example: take the sentence "Eagles that fly swim." Add a word to the beginning, say, "instinctively": "Instinctively, eagles that fly swim," or "can": "Can eagles that fly swim?" Both words added to the beginning link to 'swim', not to 'fly'. It's a remote, structural connection - that is, nonlinear. Linear distance is simpler than structural distance, but we don't use it. How do we do this? Why do we do this? How do children instinctively know the answer, even if they haven't heard it before? [Heard the sentences, or heard the structure? I'm not sure]
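[To make the structural-vs-linear contrast concrete, here is a toy sketch of my own, in Python - my illustration, nothing Chomsky presented. It represents 'eagles that fly swim' as a nested constituent structure and asks which verb each metric picks as 'closest' to the fronted word:]

    # "eagles that fly swim" as a nested structure: [[eagles [that fly]] swim]
    tree = (("eagles", ("that", "fly")), "swim")

    def depth(node, target, d=0):
        """Structural distance: how deeply embedded a word is in the tree."""
        if node == target:
            return d
        if isinstance(node, tuple):
            for child in node:
                found = depth(child, target, d + 1)
                if found is not None:
                    return found
        return None

    words = ["eagles", "that", "fly", "swim"]  # linear order after the fronted word

    # Linearly, 'fly' is the closer verb to the fronted "instinctively"/"can"...
    print(min(("fly", "swim"), key=words.index))               # fly
    # ...but structurally, 'swim' is the less deeply embedded one,
    # and that is the verb every speaker links the fronted word to.
    print(min(("fly", "swim"), key=lambda w: depth(tree, w)))  # swim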
Hypothesis: the linear order is simply unavailable to be used for interpretation. Perhaps UG restricts the search [for interpretations]. [Is there another plausible explanation?]
This puzzle, and this answer, are resisted and dismissed - a resistance driven by ideology, and an indication of the immaturity of the field.
Language tends toward minimal computation - which invariably means minimal structural distance, again, not linear distance (although the latter is computationally simpler).
There is some supporting evidence from neuroscience. There was an experiment in which participants were [exposed to? taught?] invented languages, some of which conformed to UG principles like structural distance and some of which conformed to non-UG principles like linear distance. In the UG case, we see normal brain activity for language processing. In the non-UG case, we see diffuse brain activity, indicating that the input is being processed as a nonlinguistic puzzle rather than as linguistic data. [He mentioned a name in connection with this study but I didn't write it down.] Also relevant: some research done by Neil Smith.
Attempts to show that the principles of language are learnable by statistical analysis have universally failed, and are "efforts beside the point in the first place," since the important question is why we use structural distance instead of linear distance. He cites the linguists' "unwillingness to be puzzled."
The sensorimotor system requires linearity, suggesting that we pass language through a linearizing filter which is older than our linguistic capacity, and therefore not specific to language - everything we perform bodily must be linear in nature.
A broad theory: linearity is completely unavailable linguistically.
A new formulation of the classic hypothesis: language is meaning that sometimes occurs with sound. That is, the externalization is secondary; most use of language occurs as internal dialog. There is a lack of research on internal dialog, which is, again, driven by ideology, even though a lot of interesting work could be done on it. We see the "instant" formation of full expression in conscious thought - most of the work goes on below the conscious level.
Language is essentially an instrument of thought, and externalization is peripheral: this is an unpopular opinion. One inevitable implication is that most inquiry into language origin is on the wrong track entirely, being an inquiry into the history of communication.
We seek the simplest theory, the one with the fewest arbitrary stipulations. So let us look at computation. The simplest computational procedure is one where you have an x and a y, and you combine them to make a z. Let's call this 'merge'.
In order to have the simplest procedure, neither x nor y can be modified, and they must also be unordered. So we get a procedure analogous to set formation. Note that the brain does not necessarily include sets; it merely has properties that can be characterized in these terms.
There are two sorts of merge we can do: either x and y are completely separate (this is called 'external merge'), or one is part of the other (this is called 'internal merge'). An example of internal merge: y = 'John read which book', and x = 'which book', which is already part of y; merging them gives 'which book John read which book'. This gives us, eventually, "Which book did John read?"
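[A minimal sketch of merge as unordered set formation, in the same toy Python as before - my notation, not his:]

    def merge(x, y):
        """The simplest computational procedure: form {x, y},
        modifying neither part and imposing no order."""
        return frozenset([x, y])

    # External merge: x and y are completely separate objects.
    vp = merge("read", "which book")        # {read, which book}

    # Internal merge: one argument is already part of the other, so merging
    # re-uses a copy - the displacement behind "Which book did John read?"
    clause = merge("John", vp)              # {John, {read, which book}}
    question = merge("which book", clause)  # {which book, {John, {read, which book}}}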
This is the only possible binary merge, and it would take an arbitrary stipulation to bar either internal or external merge. It's assumed that displacement is an imperfection, but no - it is expected, and its absence would instead be an imperfection.
Chomsky argues that internal merge is actually simpler than external merge; internal merge yields structures appropriate for [our methods of] semantic interpretation! But they are bad for sensorimotor production - so one copy is (almost) universally dropped, and it is always the structurally lower copy (pointing again to our 'minimal computation' principle). Even the exceptions [which he didn't talk about] confirm the principle.
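[Continuing the sketch: merge's output has no linear order, so externalization has to impose one; below I stipulate the order by hand, since the point is only the copy deletion - the structurally highest copy of 'which book' is pronounced, and the lower one goes silent:]

    # The question structure from above, hand-ordered for externalization.
    structure = ("which book", ("John", ("read", "which book")))

    def pronounce(node, seen=None):
        """Walk top-down, left-to-right, pronouncing each distinct item
        only at its first (structurally highest) occurrence."""
        if seen is None:
            seen = set()
        if isinstance(node, tuple):
            out = []
            for part in node:
                out += pronounce(part, seen)
            return out
        if node in seen:
            return []  # lower copy: deleted at externalization
        seen.add(node)
        return [node]

    print(pronounce(structure))  # ['which book', 'John', 'read']

[Modulo do-support, that's "Which book did John read ___?", with the gap sitting where the lower copy was deleted.]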
Articulated gaps, although they are computationally useful for the speaker, are hard to parse - language is here disregarding complexity of use.
If you want to replace internal merge, you have a dual burden of proof: 1) Justify barring internal merge and 2) justify your new mechanism.
It is "inconceivable that data processing yields these outcomes."
The common claim that there are no genuine linguistic universals refers to descriptive generalizations, which are actually "quite likely to have exceptions." When astronomers found perturbations in the orbit of Uranus, they assumed the theory of physics and their data were correct and did further investigation, which led them to find Neptune. On the other hand, if linguists had found these perturbations, they would have said, "okay, let's throw out physics." Common approaches to linguistic generalizations [or something? Sketchy notes here] are based on "pre-scientific belief" and something about "perversions" [presumably of science].
[He said something here about 'islands' as another illustration of computational efficiency over communicative efficiency - 'The mechanics said they were working on the cars' lets you ask "How many cars?" and "How many mechanics?" but something. I kind of lost the thread of this example, even while he was saying it.]
Language is an instrument of thought. The emergence of language was sudden and recent and there has been no detectable evolutionary change since [I'm not sure what he means specifically - humans themselves have evolved in detectable ways, but possibly not with regards to language processing; we don't know]. The appearance of language, however you slice it, occurs in a very narrow time window.
The study of the origins of language is a "huge literature based on absolutely nothing", a "fantasy", and "an interesting pathology in the field." A hypothesis about the origin of language consistent with Chomsky's ideas: some human had a mutation permitting unbounded merge, the only thing that separates our language from animals' modes of communication [that's how that's supposed to go, right?].
This question, "what is language?," matters quite a lot and has many ramifications.
OKAY QUESTION TIME
[I'm just going to put up some interesting quotes and concepts from the question section, as my notes on it are a bit scattered.]
"Linguistics is part of psychology." "Language is an organ of the body." "An awful lot of what goes on in the field is just seriously misguided" [the whole lecture in one sentence, basically].
"Ungrammaticality is kind of a funny notion" [language is fundamentally individual]. There's no clear split between grammatical and un-, so it's very hard to talk about. "Every metaphor is an ungrammatical expression." Finnegan's Wake is word salad, but it's meaningful word salad. The essence of poetry is that the reader is forced to "contrive a world of interpretation."
How do we justify pursuing something so comparatively esoteric, when there's so much suffering in the world? Well, it's helping us explore the question 'what is the nature of human beings,' but ultimately that's a question you have to answer for yourself.
Embedded sentences are "mostly misunderstood." Embedding mostly doesn't appear in speech, due to the limits of short-term memory - but even long-term memory is not on the right scale for containing language; there are more grammatical 8-word sentences than there are particles in the universe.
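[For a sense of the scale, a back-of-envelope of my own, with assumed numbers:]

    # Candidate 8-word strings over an assumed 50,000-word vocabulary.
    # Even if only a tiny fraction are grammatical, the total is far
    # beyond anything a memory store could hold.
    vocab = 50_000
    print(f"{vocab ** 8:.1e}")  # ~3.9e37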
Question: Not all allowable grammars are implemented; might not that data say something? Answer: All the value is in theory; you're welcome to chase data all day long, but it's a waste of time and energy. We need understanding, not prediction. [I think that here he is contradicting himself, because earlier he said that exceptions to generalizations are instructive, and even gave a specific example.] Data analysis is not fruitful elsewhere. [But we would never have found Neptune without working closely with the data on Uranus' orbit - the predictive value of the theory plus its distance from the data was how we got that.] Everything we do is communicative; you can study communication [but it's not linguistics?]. He then disclaimed, "I'm kinda exaggerating."
Question: You usually say that competence is more important than performance, but you seemed to be talking about performance a lot here. Answer: "If I did, I wasn't aware of it." [audience laughter] What you know vs what you do - we can study both, but ultimately it's more interesting to study what you know, because there are too many constraining factors on what you do (e.g., mostly nobody can do three-digit arithmetic in their head, because of the limits of short-term memory).
We hear and process linearly - but we don't really know how we process. There's a striking incompatibility between computational efficiency and communicative efficiency, and language is badly designed for communication. A statement that "drives people crazy": "language is beautiful but unusable."
[Then a, like, 12-year-old asked a question and it was adorable:] When we read, we, like, have this vicarious (visual?) experience - why does everyone experience the same text differently? Short version of answer: We impose rich knowledge and structure on the 'hints' provided in the media we consume. Poetry makes us impose quite a lot. Bonus quote: on theory of mind, "no-one knows what that means."
AND SCENE