Letters are more easily recognised when embedded in a word. We’ve all experienced this effect, for instance when navigating in bad weather: it’s easier to read a word or name (like a road sign) than a random string (like a licence plate). But why?
Historically, there have been two explanations for this phenomenon, motivated by two very different perspectives on the brain.
First, there is the bottom-up perspective, which holds that information processing in the brain works purely ‘from the bottom up’: first, at the ‘bottom’, the brain processes simple features (like lines), then shapes (like letters), all the way ‘up’ to complex objects (like words). Critically, information flows only in this ‘upward’ direction. In bottom-up models, therefore, the effects of complex contexts (like words) can only arise at the end of the processing chain – after the letters have been perceived. In other words, these models claim that words don’t help you see letters more clearly; rather, words only help you guess what you’re seeing:
Top-down models, by contrast, are motivated by the idea that information in the brain can flow in both directions, and propose that knowledge of words can enhance perception ‘from the top down’. Under this account, word contexts don’t just help you guess the letters you’re seeing – they can also make you see them better:
Perhaps surprisingly, the top-down model is currently the more influential one – it’s the one you’ll find in the textbooks, thanks to an elegant series of behavioural experiments supporting it.
And yet, the debate was never quite settled, in part because there was no direct evidence that word contexts help you see letters better, rather than just guess them better. This is what we set out to test in our study.
We reasoned that, if words help participants actually see letters better, then we might be able to measure this perceptual enhancement already in early visual cortex – the part of the brain responsible for perceiving simple features like lines and edges.
On the other hand, if words do not change how participants see the letters, we should not observe such changes in early visual areas.
To test this idea, we compared participants’ brain activity while they viewed streams of words (that is, letters in context) or nonwords (random strings). In each stream, the middle letter was fixed (a U or an N) while the outer letters varied, forming either a word or a nonword (nonsense) context. We added visual noise to make the letters harder to see, so that context would be especially helpful.
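To give a feel for what degrading a stimulus with noise involves, here is a minimal illustrative sketch (not the paper’s actual stimulus code; the random array simply stands in for a rendered letter image):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy grayscale 'letter image' with pixel intensities in [0, 1]
# (a stand-in for an actual rendered letter).
letter = rng.random((64, 64))

def add_noise(image, noise_level=0.5, rng=rng):
    """Blend an image with Gaussian pixel noise, clipped back to [0, 1]."""
    noisy = image + noise_level * rng.normal(size=image.shape)
    return np.clip(noisy, 0.0, 1.0)

degraded = add_noise(letter, noise_level=0.5)
```

Raising `noise_level` makes the letter progressively harder to identify, which is exactly the situation in which a helpful context should matter most.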
This is an illustration of what the participants would see in the scanner:
To probe the amount of sensory information present in early visual cortex, we trained a computer model to identify the middle letter from participants’ brain responses to the stimuli:
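The post doesn’t spell out the decoding pipeline, but the general idea – a cross-validated linear classifier trained on voxel response patterns – can be sketched as follows. Everything here is an illustrative assumption (simulated data, scikit-learn logistic regression), not the paper’s actual analysis:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_trials, n_voxels = 200, 50
# Simulated early-visual-cortex responses: label 0 = 'U', 1 = 'N'.
labels = rng.integers(0, 2, size=n_trials)
pattern = rng.normal(size=n_voxels)                 # letter-specific voxel pattern
X = np.outer(labels, pattern)                       # signal present on 'N' trials
X = X + rng.normal(size=(n_trials, n_voxels))       # plus trial-by-trial noise

# Cross-validated decoding accuracy: how well the letter identity
# can be 'read out' from the (simulated) brain activity.
clf = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(clf, X, labels, cv=5).mean()
print(f"cross-validated decoding accuracy: {accuracy:.2f}")
```

In the study, the interesting quantity is not accuracy per se but how it differs between word and nonword contexts, with the same classifier applied to each condition.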
Now, if participants can see the letter better in a word context than in a nonsense context – as top-down models predict – then we should be able to read out from the brain activity which letter a participant is seeing more accurately in the word context than in the nonsense context.
Strikingly, this is exactly what we find: letters are more easily read out from early visual cortex when they are embedded in a word than when they are embedded in a nonword.
Finally, we found that activity in key areas in the brain’s reading network correlated with the enhancement effect in visual cortex.
This suggests that our word knowledge from the reading network is enhancing our perception of simple shapes and letters ‘from the top-down’.
Altogether, these results support top-down models of reading (and perception more generally), and suggest that perhaps you can better identify letters in words because you might, quite literally, see them better.
This post is based on an illustrated Twitter thread, which can be found here.
The full story can be found in the published paper: DOI:10.1038/s41467-019-13996-4