In my work, I study language comprehension. Most people intuitively assume that cracking language comprehension comes down to figuring out how word meanings are retrieved and how they are combined to form sentences. But do words actually have meanings?

Intuitively, the function of words is that they mean something. “Dog” has a meaning, “gog” doesn’t. Psycholinguists often invoke the construct of the mental lexicon, a dictionary-like mental structure that stores word meanings. As words are processed, they are transformed from messy vibrations in the air into neural signals in the brain that ultimately activate the corresponding concepts. During sentence comprehension, multiple concepts combine according to a set of rules. Steven Pinker dedicated a whole book to the idea that language processing essentially comes down to two ingredients: words and rules. Believe it or not, the book’s title is “Words and Rules: The Ingredients of Language”. In this scenario, our task seems clearly laid out: describe how words activate their corresponding concepts, describe the rules that govern how they combine, and the problem is solved. If only.

The stumbling block in this scenario is the assumption that each word has a corresponding entry in the mental dictionary that simply needs to be looked up. A large body of experimental data shows that word meanings are extremely fluid and depend so much on contextual factors that it becomes hard to separate word meanings from their contexts. Here are some examples: A central conceptual feature is whether something is alive and can perform actions, have feelings, and so on. Because of this, sentences like “The peanut fell in love” generally trigger a rapid error signal in the brain that can be measured at the scalp with electrodes. However, if the sentence is preceded by a short story about a peanut that has never known love but then meets a very cute peanut, this error signal (in response to the same sentence) disappears.

In my own work, I found that when people processed words like “elephant”, their brains’ activation patterns carried information about typical size (an elephant is large) when their task was to decide whether the object would fit in a shoebox. But when their task was to decide whether it was an animal or not, the brain’s responses changed drastically and no longer contained information about typical size. These examples show that the meanings of words are fluid and that our brain adapts very flexibly to the current context in which words occur.

Reading this, you might be wondering how far this phenomenon reaches. After all, at least some ‘core’ features of a concept should always be activated, shouldn’t they? In the famous Stroop task, participants are asked to name the font colors of color words. The typical finding is that participants respond very quickly when the font color and the color word match (the word BLUE printed in blue -> “blue”) and much more slowly when they mismatch (BLUE printed in green -> “green”). The delay caused by mismatching color words is larger than the one caused by neutral non-color words, and the interference grows as words become more color-like (for example, “lake” is associated with blue and will make responding “green” a little harder). This indicates that when people read a color word, such as “blue”, the corresponding color concept (representing the blueness of blue) becomes active.

However, even this effect is not automatic: if participants see many instances where the font color and the color word do not match (BLUE printed in green -> “green”), such mismatches no longer cause a delay compared to trials where the font color and color word match (BLUE printed in blue -> “blue”). This suggests that participants can quickly learn to block out the color words’ meanings. That means that, counter-intuitively, even the blueness of blue is not automatically activated whenever one encounters the word “blue”.

If context can change how we process words so much that the contrast between living and non-living things, the typical size of things, and even the blueness of blue are not automatically activated, maybe we should come to terms with the idea that words don’t have meanings separate from the contexts in which they occur. This has two consequences: 1) It shows that our brain’s capacity to understand what people are saying is even more impressive than we thought. Rather than triggering fixed concepts, words are just one of many available cues to meaning. The brain flexibly integrates these cues to construct situation-specific interpretations of what people are trying to communicate. 2) It changes the entire enterprise of explaining language comprehension. Instead of describing a fixed set of word meanings and rules, we are working with moving targets and have to explain how unique, contextual word meanings arise on a moment-to-moment basis.

A final note of nuance: Even though I have made the case that words don’t have fixed meanings, it is also true that across many contexts a word will gravitate towards relatively similar meanings. The point is that contexts can, and often do, override this general tendency. It is only by taking this seriously that we will be able to figure out how humans understand language.