Dictionaries don’t define words, and words don’t really make up language

There are many academic topics about which most people outside a related field know little or nothing. There are others, such as astronomy, quantum physics, climate science, brain sciences, etc., that are interesting enough to be discussed (mostly inaccurately) in blogs, popular science magazines, newspapers, YouTube, TV, and so on. Then there are fields like linguistics. Most linguists (and often those who aren’t linguists but study language as neuroscientists or cognitive psychologists) have had the experience of telling others what their field is, only to be met with an initially positive, receptive audience that expects to hear one thing and, upon hearing something quite different, behaves quite differently (sometimes even a bit hostile, at least in a few accounts I’ve heard). This reaction happens because brain research or quantum physics require at least some watching or reading to form even an inaccurate view, and diligent study to get anything more, but everybody speaks, and most people can read too. Most also either speak more than one language or have at least taken some foreign language classes.

It turns out that knowing more than one language often translates into knowing less about language. Worse still, widespread literacy (especially for languages with a long tradition of dictionary use) also creates rather fundamental misconceptions. So fundamental, in fact, that it took linguists a long time to realize how much even they were misled about the nature of language thanks to dictionaries and a long tradition of grammar schools and grammarians (yes, you can blame all of your problems on anybody who tried to teach you “proper” grammar, especially those who responded to “Can I go to the bathroom?” with “I don’t know, can you?” or introduced you to participles).

But let’s start with something simpler than whether or not Georgian has gerundives. Let’s start with words and simple structures, because this is really about language, words, and meaning. It would seem natural to suppose that a great part of language consists of words and a kind of mental “dictionary” of their meanings. It was so natural that linguists like Chomsky tried to understand language this way, thinking that meanings could be relegated to this mental “list” of dictionary-like meanings (called the “lexicon”) and the rest consisted of purely formal (syntactic) rules for manipulating the words of any given language to produce grammatically “correct” sentences. So, for example, linguists (even before Chomsky) produced tree diagrams in which sentences were broken down into smaller and smaller chunks within larger chunks. The main chunks were noun phrases (NPs), verb phrases (VPs), and prepositional phrases (PPs), familiar to those who were tortured…uh, I mean taught, traditional grammar in school.

For most of the time my grandfather was alive, I knew him as a professor of classics and linguistics at Cornell and then professor emeritus. The books of his I received during his life (only a few) and those after his death were mostly language textbooks or grammars of, e.g., Old Norse, Welsh, Cornish, Sanskrit, Old Church Slavonic, Greek Romany (he wrote that one, actually), etc. The only actual linguistics textbook I have of his—the first I read and what made me decide (wrongly) that linguistics wasn’t for me—was An Introduction to the Principles of Transformational Syntax by Akmajian & Heny (MIT Press, 1975). It was filled with tree diagrams in which a sentence S would be broken into two sub-branches, NP & VP, which were in turn broken into further sub-branches until at the bottom were the words of some sentence. The point was to describe the ways in which certain rules could transform or generate these grammatical structures (generate/manipulate the branches) using combinatorial methods (mathematical/formal rules that could be programmed into a computer). Once these rules were discovered, all one would need to generate or parse grammatically correct sentences would be these rules and access to an external dictionary like those thought to be in everybody’s head.
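
To make the generative picture concrete, here is a minimal sketch in Python (a toy grammar of my own invention, not anything from Akmajian & Heny) of the idea that a handful of purely formal rewrite rules plus a word list could, in principle, generate grammatical sentences:

```python
import random

# A toy set of phrase-structure rules: each symbol rewrites to one of
# several sequences of symbols; terminal categories draw words from a
# small stand-in "lexicon". (All of this is made up for illustration.)
RULES = {
    "S":  [["NP", "VP"]],
    "NP": [["Det", "N"], ["Det", "Adj", "N"]],
    "VP": [["V", "NP"], ["V", "PP"]],
    "PP": [["P", "NP"]],
}

LEXICON = {
    "Det": ["the", "a"],
    "Adj": ["old", "grammatical"],
    "N":   ["linguist", "dictionary", "sentence"],
    "V":   ["parsed", "wrote"],
    "P":   ["about", "with"],
}

def generate(symbol="S"):
    """Recursively expand a symbol into a flat list of words."""
    if symbol in LEXICON:                      # terminal: pick a word
        return [random.choice(LEXICON[symbol])]
    expansion = random.choice(RULES[symbol])   # nonterminal: pick a rule
    words = []
    for child in expansion:
        words.extend(generate(child))
    return words

print(" ".join(generate()))  # e.g. "the linguist parsed a dictionary"
```

Nothing in the rules knows what “dictionary” means; meaning is supposed to live entirely in the lexicon entries. That division of labor is exactly what the rest of this post argues against.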

This failed. Time was when basically all natural language processing (NLP), and AI/machine learning more generally, was based on generative linguistics (Chomskyan linguistics like that described in the aforementioned textbook), which was also the foundation for cognitive scientists’ understanding of language. Nowadays, NLP and related areas in machine learning/AI use advanced statistical methods and specialized databases like FrameNet rather than generative linguistics, and lots of different linguists with varying approaches and theories came to subscribe to an umbrella category of linguistic theories called cognitive linguistics (NOT generative linguistics). Meanwhile, even generative linguists increasingly had to admit that the “lexicon” wasn’t just a list of words, if it existed at all, and studies of languages with radically different structures from those long known to scholars (Greek, Latin, Hebrew, German, French, English, etc.) posed significant problems even when it came to identifying whether languages had things like nouns and adjectives, and if so, what empirical means there might be for determining such parts of speech.

Other than the problems posed by languages in which, e.g., it seems as if everything is a verb (there are examples more extreme than Navajo, but I haven’t studied them, and given the structure of Navajo I would be scared to), the big issue, and the one which ultimately became central to the model of grammar cognitive linguists employ, was that no matter how many rules linguists added for a given language, most actual speech consisted of exceptions to these rules. A landmark study was Nunberg, Sag, & Wasow’s paper “Idioms.” They challenged the view of “many linguists…[who] have been implicitly content to accept a working understanding of idioms that conforms more or less well to the folk category”, which was essentially content to regard idioms as idiomatic: a small part of any given language that could be ignored or that at worst required entries in the “lexicon” larger than one word. In the paper, the authors showed not only that idioms are not so idiomatic as thought, but also that they possess rules & structures internal to them. The authors categorized idioms by these internal rules and structures, which we won’t cover, but it is important to talk a bit about their nature, as idioms were the basis for constructions, and constructions are the basic unit of language (which we’ll get to shortly).

The easiest type of idiom to understand follows the “folk category” understanding, such as “birds of a feather flock together.” Despite its length, this idiom is basically like a single word: even trying to change the tense yields something ungrammatical (*“birds of a feather flocked together”). Other idioms can be “decomposed” into meaningful parts which can be analyzed individually, but only in the context of the whole idiom. Consider “spill the beans”. Clearly “Jack spilled the beans on the whole affair” is different from the (non-idiomatic) “Jack spilled the beans on the floor”. The idiom means “divulge the information”, but we can split it into “spill = divulge” and “the beans = the information”. This is not true of an idiom like “kick the bucket”, which is not as fixed as the “birds of a feather” idiom (we can say “kicked the bucket”, for example) but can’t be decomposed into meaningful components. However, neither “spill the beans” nor “kick the bucket” is syntactically idiomatic, in that both have a regular VP structure; the main problem with these “grammatical” idioms is that we can’t simply relegate them to the “lexicon”, because “kick” in “kick the bucket” has no decomposable meaning, and while “spill” in “spill the beans” does, it only does within this idiom.

Other idioms are even worse. The division of idioms into “grammatical” and “extragrammatical” comes from the even more groundbreaking work on idioms by Fillmore, Kay, & O’Connor in a paper that basically founded Construction Grammar (and therefore construction grammars). Extragrammatical idioms don’t even follow predictable syntactic structures; they include, e.g., “by and large”, “all of a sudden”, “believe you me”, “easy does it”, “be that as it may”, “first off”, “so far so good”, “make certain”, “no can do”, etc.

The last dimension/category we’ll cover (and we’ve already introduced a lot of the notions in construction grammar) is schematicity. This is a fancy way of referring to the ways in which some idioms are actually more like grammatical rules. Syntactic structures like PPs or NPs are highly schematic (they were treated as meaningless structures that applied to basically all possible grammatical sentences). That’s what makes them grammar as opposed to part of the “lexicon”. But there are idiomatic constructions like “the X-er, the Y-er” that are almost as purely syntactic: the higher they climb, the harder they fall; the more you practice, the better you’ll be; the more you act like that, the less likely I am to give you what you want; etc. The “the” in front of the X-er & Y-er structures is actually distinct from the definite article “the”; it comes from the Old English instrumental demonstrative. Also note that this idiom is so “syntactic” that we have to use variables to describe it, just the way we describe syntactic structures. That’s how schematic it is.
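
To make the notion of schematicity a bit more tangible, here is a rough sketch (my own toy illustration, not anything from the construction grammar literature) of what it looks like to treat “the X-er, the Y-er” as a pattern with open slots:

```python
import re

# A crude, hypothetical pattern for the "the X-er, the Y-er" construction:
# two "the ..." clauses, each built around a comparative ("-er" word or
# "more"/"less" + word), joined by a comma. Real instances are far more
# varied than this regex captures; it only illustrates the open slots.
COMPARATIVE = r"(?:\w+er|more\s+\w+|less\s+\w+|better|worse)"
PATTERN = re.compile(
    rf"\bthe\s+{COMPARATIVE}\b.*?,\s*the\s+{COMPARATIVE}\b",
    re.IGNORECASE,
)

examples = [
    "The higher they climb, the harder they fall.",
    "The more you practice, the better you'll be.",
    "Birds of a feather flock together.",   # not an instance
]

for sentence in examples:
    verdict = "matches" if PATTERN.search(sentence) else "does not match"
    print(f"{verdict}: {sentence}")
```

The regex is obviously a caricature, but that’s the point: almost nothing in the construction is fixed except “the” and the comparative slots, which is what “schematic” means here.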

Now we can easily describe construction grammar in its barebones form. There are actually many such grammars, from the original Construction Grammar to Radical Construction Grammar or Cognitive Grammar and even Word Grammar, but they all share a fundamental property that separates them from the models of grammar that came before: the lexicon and grammar are not distinct components but lie along a continuum, and therefore constructions, not lexemes (which is pretentious-speak for “words”), are the basic units of language. Some constructions correspond, or can correspond, to traditional parts of speech like nouns or to words. But the realization that grammatical structures, not just words, are meaningful and that this lexico-grammatical continuum exists showed us that even when we can say that a word is a construction and part of another construction like a noun phrase, it’s still true that meaning comes not from some idealized mental dictionary but is internal to the constructions in which the words appear. It turns out that about half of language consists of “prefabricated constructions” in which structure and/or meaning are internal to units larger than words. Put differently, about half the time we use words the meaning can’t be understood as additive (i.e., as the sum of the parts of the phrase/sentence). Moreover, even if we idealize words as having independent meaning, this meaning isn’t like a dictionary entry but like an encyclopedia entry.
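
One way to picture this continuum (a hypothetical sketch of my own, not a formalism from any particular construction grammar) is as a single inventory of form-meaning pairings, differing only in how much of the form is fixed and how much is left as open slots:

```python
# Each entry pairs a form (fixed words and/or open slots, written here as
# UPPERCASE placeholders) with a rough meaning. The only difference between
# the "word-like" and "rule-like" ends of the list is how schematic the form is.
constructions = [
    {"form": "dog",                "meaning": "canine (plus encyclopedic knowledge)"},
    {"form": "by and large",       "meaning": "in general (fully fixed, extragrammatical)"},
    {"form": "spill the beans",    "meaning": "divulge the information (partly decomposable)"},
    {"form": "the X-er, the Y-er", "meaning": "Y varies together with X (mostly schematic)"},
    {"form": "NP VP",              "meaning": "someone/something does/is something (fully schematic)"},
]

for c in constructions:
    print(f"{c['form']:>20} -> {c['meaning']}")
```

On this picture, “NP VP” differs from “by and large” in degree, not in kind: both are pairings of a form with a meaning.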

There is one last nail in the coffin of the traditional understanding of language (at least that I’ll cover). It is related to (and involves) the encyclopedic nature of lexical meaning. Simply put, meanings are flexible. Period. Not just because they might occur in some idiom or because they might act like the modal verb “might”, but also because of things like novel usages in which phenomena like metonymy come into play. One very broad category of ways in which meanings are extended regularly and “on the fly” so to speak (I deliberately used two prefabs/idioms there) is via metaphor. My favorite example comes from a linguist who overheard part of a conversation in a pub. A member of a group of friends had left for a while, and upon returning discovered that a female member of the group had left. After asking his friends where she was, he received the answer “she left about two beers ago.” Now, normally when we wish to indicate units of time, we don’t use beers. But here the ability to comprehend novel metaphorical extensions allowed the hearer (and the linguist) to understand that “two beers” referred to the (approximate) amount of time it takes to drink two beers.

So, to wrap up, I’ll summarize the key points. Language isn’t a bunch of grammatical rules we apply to atomic elements that linguists call lexemes and most people call words. It’s vastly more complex, dynamic, convoluted, and most of all inherently and thoroughly meaningful. Not only do words lack any “dictionary”-like meaning (or, more generally, meaning apart from the constructions in which they appear), but the “structures” in language also convey meaning, as do various linguistic (and/or cognitive) mechanisms like metaphor. Hence debates over what a word means that rely on dictionaries aren’t just subject to the quality of the dictionary, but are fundamentally problematic. Words don’t have dictionary-like meanings, and debates over what atheism means or what hypotheses are or any number of topics discussed here and elsewhere that are based on disagreements over what certain terms mean can’t be resolved by quoting dictionaries. Sometimes the terms may be technical enough that there exists among specialists an agreed-upon definition. Sometimes other facets of language and linguistic use can help resolve disputes which are based on lexical semantics. Sometimes logic helps. But quoting your average dictionary is only one step up from simply declaring your personal definition to be the definition, and there is never a THE definition of any word (words are inherently polysemous).
