Letters are more easily recognised when embedded in a word. We’ve all experienced this effect, for instance when navigating in bad weather: it’s easier to read a word or name (like a road sign) than a random string (like a licence plate). But why?

Historically, there have been two explanations for this phenomenon, motivated by two very different perspectives on the brain.
First, there is the bottom-up perspective, which holds that information processing in the brain works purely ‘from the bottom up’: first – at the ‘bottom’ – the brain processes simple forms (like lines), then shapes (like letters), all the way ‘up’ to complex objects (like words). Critically, information flows only in this ‘upward’ direction. In bottom-up models, therefore, the effects of complex contexts (like words) can only arise at the end of the processing chain – after the letters have already been perceived. In other words, these models claim that words don’t help you see letters more clearly; rather, words only help you guess what you’re seeing:

Bottom-up models propose that word context only informs your decisions, not your perception

Alternatively, top-down models are motivated by the idea that information in the brain can flow in both directions, and propose that knowledge of words can enhance perception “from the top down”. Under this account, word contexts don’t just help you guess the letters you’re seeing – they can also make you see them better:

According to top-down models, knowledge of the context (here, the word “TIME”) can help us to perceive the letter’s individual lines (here, the letter “E”)

Perhaps surprisingly, the top-down model is currently the most influential one – it’s what you’ll find in the textbooks. This is largely because of an elegant series of behavioural experiments that supported it.

And yet, the debate was never quite settled – in part because there was no evidence that word contexts help you see the letters better, rather than just guess them better. This is what we set out to test in our study.
We reasoned that, if words can help participants actually see letters better, then we might be able to measure this perceptual enhancement already in early visual cortex (the part of the brain responsible for perceiving simple lines and edges).

On the other hand, if words do not change how participants see the letters, we should not observe such changes in early visual areas.

To test this idea, we compared participants’ brain activity while they viewed streams of words (that is, letters in context) or nonwords (random strings). In each stream, the middle letter was fixed (a U or an N) while the outer letters varied, forming either a word or a nonword (nonsense) context. We added noise to make the letters more difficult to see, so that context would be extra helpful.

This is an illustration of what the participants would see in the scanner:

To probe the amount of sensory information present in early visual cortex, we trained a computer model to identify the middle letter based on participants’ brain responses to the stimuli:

We first trained the computer model on the brain activity of participants viewing isolated letters (U or N). We then used that model to predict the middle letter of the words and nonwords, based purely on the sensory information present in early visual cortex.
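For the technically curious, here is a minimal sketch of this ‘cross-decoding’ logic in Python, using scikit-learn and made-up data. The array sizes, variable names and choice of classifier are illustrative assumptions – the analysis in the paper differs in its details:

```python
# A minimal sketch of the cross-decoding idea, on made-up data.
# All names, array sizes and the choice of classifier are illustrative
# assumptions, not the exact pipeline used in the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 200

# Hypothetical voxel patterns from early visual cortex (trials x voxels),
# with the middle letter ('U' or 'N') as the label for each trial.
labels = np.repeat(["U", "N"], n_trials // 2)
X_letters = rng.normal(size=(n_trials, n_voxels))    # letters shown in isolation
X_words = rng.normal(size=(n_trials, n_voxels))      # letters embedded in words
X_nonwords = rng.normal(size=(n_trials, n_voxels))   # letters embedded in nonwords

# 1. Train the classifier on responses to isolated letters only.
clf = LogisticRegression(max_iter=1000).fit(X_letters, labels)

# 2. Ask how well the middle letter can be read out in each context.
acc_words = clf.score(X_words, labels)
acc_nonwords = clf.score(X_nonwords, labels)

# Top-down models predict acc_words > acc_nonwords;
# bottom-up models predict no difference.
print(f"decoding accuracy: words {acc_words:.2f}, nonwords {acc_nonwords:.2f}")
```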

Now, if participants can see the letter better in a word context than in a nonsense context – as top-down models predict – then we should be able to read out which letter a participant is seeing from their early visual brain activity more easily in the word context than in the nonsense context.

Strikingly, this is exactly what we found: letters are more easily read out from early visual cortex when they are embedded in a word than when they are embedded in a nonword.

Quality of letter information in words and nonwords, as predicted by models (A) and observed in the brains of participants (B).
As can be seen in panel A, the top-down model predicts that the quality (or ‘fidelity’) of letter information is enhanced by word contexts, while the bottom-up model predicts that the information should not differ between words and nonwords. As can be seen in panel B, the observed results in early visual brain areas match the top-down predictions: letter information is more reliable (i.e. easier to ‘read out’) in words compared to nonwords, and this was found using two independent methods (‘classification’ and ‘pattern correlation’). Grey dots with connecting lines represent observations (B) and predictions (A) for single participants. The stars indicate that the difference is ‘statistically significant’ – in other words, that it is unlikely to be due to random chance: ** means less than a 1% chance, and *** less than a 0.1% chance, of the difference arising randomly. This means we can be quite confident that the difference we observed is reliable and not a statistical fluke caused by random noise.
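As a rough illustration of what such a ‘pattern correlation’ fidelity measure could look like, one can correlate a trial’s brain-activity pattern with the average pattern for the matching letter and subtract its correlation with the other letter’s pattern. The sketch below uses made-up data, and the function name and exact definition are assumptions rather than the precise metric used in the paper:

```python
# A generic sketch of a correlation-based fidelity measure, on made-up data.
# The function name and exact definition are illustrative assumptions.
import numpy as np

def pattern_fidelity(trial_pattern, template_same, template_other):
    """Correlation with the matching letter's average pattern, minus the
    correlation with the other letter's pattern: higher values mean more
    reliable letter information in this trial's brain response."""
    r_same = np.corrcoef(trial_pattern, template_same)[0, 1]
    r_other = np.corrcoef(trial_pattern, template_other)[0, 1]
    return r_same - r_other

# Hypothetical example: templates are average responses to isolated 'U' and 'N'.
rng = np.random.default_rng(1)
template_U = rng.normal(size=200)
template_N = rng.normal(size=200)
noisy_U_trial = template_U + rng.normal(size=200)   # a noisy response to a 'U'
print(pattern_fidelity(noisy_U_trial, template_U, template_N))
```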

Finally, we found that activity in key areas in the brain’s reading network correlated with the enhancement effect in visual cortex.

This suggests that our word knowledge from the reading network is enhancing our perception of simple shapes and letters ‘from the top-down’.

Altogether, these results support top-down models of reading (and of perception more generally), and suggest that you can better identify letters in words because you might, quite literally, see them better.

This post is based on an illustrated Twitter thread, which can be found here.

The full story can be found in the published paper: DOI:10.1038/s41467-019-13996-4