
Letters are more easily recognised when embedded in a word. We’ve all experienced this effect, for instance when navigating in bad weather: it’s easier to read a word or name (like a road sign) than a random string (like a licence plate).
But why? Historically, there have been two accounts for this effect.
Bottom-up models claim that the word advantage is purely post-perceptual: word knowledge only helps you guess the correct letters. Under this account, you see the letters just as well (or as poorly) either way; words merely help you infer what you're seeing.
Top-down models, by contrast, propose that linguistic knowledge can feed back to enhance perception itself. Under this account, word contexts don't just help you guess the letters you're seeing; they can actually make you see them better.
Perhaps surprisingly, the top-down model is currently the more influential of the two: it's what you'll find in the textbooks, thanks to an elegant series of behavioural experiments supporting it.
And yet the debate was never quite settled, in part because there was no neural evidence supporting the top-down account.
This is remarkable, because the top-down model makes a clear neural prediction: if the behavioural advantage reflects a perceptual enhancement of letter stimuli, then it should be accompanied by enhanced sensory information as early as visual cortex.
In this study, we set out to test exactly that.
We presented participants with streams of five-letter words or nonwords embedded in Gaussian noise. In each stream, the middle letter was fixed (a U or an N) while the outer letters varied, forming either word or nonword contexts.
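To make the design concrete, here is a minimal sketch of how a letter string might be embedded in Gaussian pixel noise. The rendering details, noise level, and example strings are hypothetical illustrations, not the actual stimulus code:

```python
import numpy as np
from PIL import Image, ImageDraw, ImageFont

def make_noisy_stimulus(text, size=(256, 96), noise_sd=0.4, seed=None):
    """Render a letter string on a grey background and add Gaussian pixel noise.

    All parameters here (image size, noise level, font) are hypothetical.
    """
    rng = np.random.default_rng(seed)
    img = Image.new("L", size, color=128)               # mid-grey canvas
    draw = ImageDraw.Draw(img)
    draw.text((size[0] // 4, size[1] // 3), text,
              fill=255, font=ImageFont.load_default())
    signal = np.asarray(img, dtype=float) / 255.0       # scale to [0, 1]
    noisy = signal + rng.normal(0.0, noise_sd, signal.shape)
    return np.clip(noisy, 0.0, 1.0)

# Same middle letter (U), once in a word context, once in a nonword context
word_stim = make_noisy_stimulus("HOUSE", seed=1)
nonword_stim = make_noisy_stimulus("SOUHE", seed=1)
```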
Our idea was that if word contexts enhance the perception of letter stimuli, then we should observe an enhancement of sensory information in visual cortex.
To probe sensory information, we tried to classify the middle letter (U or N) from brain activity patterns in V1-V2. As a sanity check, we also probed representations with a simpler technique based on comparing correlations between activity patterns.
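A standard way to run such analyses is a cross-validated linear classifier over voxel patterns, plus a correlation of each trial's pattern with the letter-specific mean patterns. Below is a minimal sketch of both, assuming hypothetical input files with one activity pattern per trial; this illustrates the general technique, not the exact pipeline from the paper:

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

# Hypothetical inputs: X holds one V1-V2 voxel pattern per trial,
# y codes the middle letter of that trial (0 = U, 1 = N)
X = np.load("v1v2_patterns.npy")       # shape: (n_trials, n_voxels)
y = np.load("middle_letter.npy")       # shape: (n_trials,)

# Cross-validated decoding of the middle letter from visual cortex
accuracy = cross_val_score(LinearSVC(), X, y, cv=5).mean()
print(f"decoding accuracy: {accuracy:.2f}")

# Simpler correlation-based probe: a trial's pattern should correlate more
# with the mean pattern of its own letter than with the other letter's mean
# (in practice the means would be computed on held-out trials)
mean_u = X[y == 0].mean(axis=0)
mean_n = X[y == 1].mean(axis=0)
own = [np.corrcoef(x, mean_u if lab == 0 else mean_n)[0, 1] for x, lab in zip(X, y)]
other = [np.corrcoef(x, mean_n if lab == 0 else mean_u)[0, 1] for x, lab in zip(X, y)]
print(f"own-minus-other correlation: {np.mean(own) - np.mean(other):.3f}")
```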
Strikingly, we indeed found the enhancement predicted by the top-down model: letters are more easily decoded from early visual cortex when embedded in a word.
Finally, we asked what the neural source of this enhancement might be. Early visual cortex itself knows nothing about words or letters, so the effect has to come 'from the top down', from some higher-order brain area. But where?
We expected that activity in a source area would covary with the amount of letter information in visual cortex: letter information should be higher when the area is more active, and vice versa. Importantly, this relationship should hold within conditions, not just between them.
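A minimal sketch of such a test, assuming hypothetical per-trial measures (trial-wise decoding evidence in V1-V2 and mean activity in a candidate source region), might look like this:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-trial inputs: decoding evidence for the middle letter in
# V1-V2 (e.g. signed distance to the classifier boundary), mean activity in
# a candidate source region (e.g. VWFA), and the condition of each trial
evidence = np.load("v1v2_letter_evidence.npy")
source_activity = np.load("vwfa_activity.npy")
condition = np.load("condition.npy")             # 0 = nonword, 1 = word

# Within-condition test: does source activity track letter information in
# visual cortex even with the condition (word vs. nonword) held fixed?
for code, label in [(1, "word"), (0, "nonword")]:
    mask = condition == code
    r, p = pearsonr(source_activity[mask], evidence[mask])
    print(f"{label}: r = {r:.2f}, p = {p:.3f}")
```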
When we tested for this hallmark, we found exactly that pattern in VWFA, pMTG and IFG, all key areas of the reading network. This suggests that these areas might be the source of the enhancement.
Altogether, these results provide neural evidence that context effects in letter perception can be (at least to some extent) top-down in nature. In other words, they suggest that readers may identify letters in words more easily because they quite literally see them better.
This post is a shortened version of an illustrated Twitter thread, which can be found here.
The full story can be found in the published paper: DOI:10.1038/s41467-019-13996-4