Did you know that if you are a fan of Star Wars, there might be neurons in your brain that respond only to characters from the movies? Interestingly, some types of artificial brains that scientists generate with computers (called artificial neural networks) also spontaneously develop such neurons. What does it (and what does it not) tell us about the real brain? And what does it have to do with language?

Human language is enabled by our ability to create concepts—to capture snapshots of meaning that help us structure the world around us. Only once we isolate a concept can we give it a name. For example, at first you may be surprised to see one driver get so angry at another that he starts to drive in a reckless and aggressive way. Once you realize that this is actually a common behaviour, you may start to think of it as a “thing”—this is the concept. Then you can give it a name, such as “road rage”. The label helps preserve the concept and later enrich it with new experiences. From now on you can refer to it when talking to your friends, without needing to invoke all the examples required to understand it. The new concept will also be a prop in your own thinking: you don’t need to busy your mind imagining instances of road rage to ask yourself why people sometimes get so mad in the car. You don’t even need to be a driver to learn the name, and all the information associated with it, from others! Without concepts, we, homo (humans), would not be sapiens (thinking). Scientists believe that the ability to create new names is the key ingredient that sets our language abilities apart from those of other species (for example, apes can be taught to use sign language, but unlike children, they never spontaneously create new signs and are limited to the repertoire of signs they have been taught by humans). But quite possibly, it is not only the ability to give new names, but the ability to conceptualize (to isolate concepts), that fundamentally defines human language abilities.

Nim Chimpsky (November 19, 1973 – March 10, 2000), a chimpanzee who was the subject of an extended study of animal language acquisition at Columbia University. Photograph by Herbert Terrace.

Because concepts are so crucial, it was exciting news when scientists discovered (back in the 2000s) neurons in the human brain that were activated only by specific people, places, and objects. These were dubbed “Jennifer Aniston” neurons, after a specific neuron that showed a curious affinity for the American actress and fired only when presented with her picture. Importantly, such concept neurons (as they are called now) do not respond only to pictures. Another neuron fired when presented with the American actress Halle Berry: it reacted to photos of her, including ones in which she was dressed as Catwoman (but not to a different actress who also played Catwoman), as well as to pencil sketches of her, to caricatures, and even to the letter string HALLE BERRY. Such sensitivity straddling different modalities, such as text and pictures, shows that these neurons are truly abstract – they react to the concept no matter how it is presented. They ignore all details (e.g., the costume worn by the actress) and focus on the part that is invariant, the core concept of the person. Another interesting property of these neurons is that they fire very late, about 300 ms after a picture is presented, which is ages on the neural time scale. This suggests that visual information must undergo quite elaborate processing in the brain before it can be distilled to the bare concept.

More recently it was found that an artificial neural network architecture—called CLIP—also seems to produce concept neurons. Given a large database of pictures (e.g., a cat on a sofa) and short descriptions (e.g., “kitty is sleepy OMG SO SWEET!”), these networks were trained to match the two. The network thus had to learn both about language and about recognizing pictures. At each iteration of learning, the network was modified a tiny bit to learn from its mistakes and produce better responses next time. When a team of computer scientists set out to test the artificial network after it was fully trained, it turned out that one of the highest layers of the network consists of neurons that resemble the concept neurons in the human brain. For example, one neuron seemed specialized in lemons: it reacted to pictures of lemons, to the word LEMON, and, more weakly, to perceptual properties of lemons, such as the colour yellow. Another was a Spider-Man neuron. Much like the “Jennifer Aniston” neuron, it reacted to depictions of Spider-Man both from the comic books and from the movies, to photos of spiders, and to the text SPIDER.
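The matching idea behind this training can be sketched in a few lines of plain Python. This is only a toy illustration, not CLIP itself: the embeddings are made up by hand, and a simple dot product stands in for the learned similarity between CLIP’s large image and text encoders.

```python
import math

# Toy embeddings: each image and each caption is mapped to a small vector.
# In the real network, these vectors come from large vision and text encoders.
image_embeddings = {
    "photo_of_cat":   [0.9, 0.1],
    "photo_of_lemon": [0.1, 0.9],
}
text_embeddings = {
    "kitty is sleepy": [0.8, 0.2],
    "fresh lemons":    [0.2, 0.8],
}

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def match_probabilities(image, texts):
    """For one image, score every caption and normalize scores to probabilities."""
    scores = [dot(image_embeddings[image], text_embeddings[t]) for t in texts]
    return dict(zip(texts, softmax(scores)))

texts = list(text_embeddings)
probs = match_probabilities("photo_of_cat", texts)

# Training nudges both encoders so that the correct caption gets a high
# probability, i.e., it minimizes the loss -log p(correct caption).
loss = -math.log(probs["kitty is sleepy"])
```

With the hand-made vectors above, the cat photo already matches its caption better than the lemon caption, so the loss is small; during real training, millions of such image–caption pairs gradually shape the embeddings themselves.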

Types of pictures that highly activate the Spider-Man neuron.

Artificial neural networks have a nice property: we can study them in their entirety, neuron after neuron, quite unlike real brains, where testing neurons requires implanting depth electrodes. How can we study neurons in artificial networks? We can, for example, test which pictures or texts activate a given neuron the most. We can also ask the network to “hallucinate”, i.e., to generate texts or pictures that maximize the activation of a given neuron. When a team of computer scientists set out to study all these concept neurons in more detail, a strange and beautiful world of conceptual representations emerged (see also Further Reading at the end).
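The first of these probes can be sketched very simply: present many stimuli, record one neuron’s activation for each, and rank the stimuli. The “neuron” below is a hand-made stand-in for illustration (a fake “lemon neuron”), not an actual unit from CLIP; in a real network its value would be read out during a forward pass.

```python
# A stand-in "neuron": responds strongly to lemon-related inputs and
# weakly to merely yellow things, mimicking the lemon neuron described above.
def lemon_neuron(stimulus):
    activation = 0.0
    if "lemon" in stimulus:
        activation += 1.0   # the core concept
    if "yellow" in stimulus:
        activation += 0.3   # a weaker, perceptual association
    return activation

stimuli = [
    "photo of a lemon",
    "the word lemon",
    "a yellow raincoat",
    "photo of a dog",
]

# The probe: rank all stimuli by how strongly they activate the neuron.
ranked = sorted(stimuli, key=lemon_neuron, reverse=True)
```

The “hallucination” probe works in the opposite direction: instead of ranking a fixed set of inputs, it searches over possible inputs (by gradient ascent) for one that drives the neuron’s activation as high as possible.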

Just like the human brain, the “artificial brain” also had concept neurons for famous people, such as Donald Trump. This neuron was also sensitive to MAGA caps, to “The Wall” slogans, and to people associated with the ex-president, such as Mike Pence or Steve Bannon. The same neuron was strongly deactivated when the network was exposed to a picture of Martin Luther King Jr., to symbols of the LGBT movement, or, curiously, to images and texts associated with the computer game Fortnite (artificial networks sometimes pick up associations that are not obvious, or are plainly accidental and make no sense).

Visualizations of CLIP neurons dreaming of “Donald Trump”. 

There were also numerous neurons associated with specific geographical regions, for example places in Europe or Africa. These neurons were sensitive to pictures of maps with the region’s location marked on them, as well as to the landmarks, architecture, food, logos, flags, faces, clothing styles, names, language, and alphabet typical of the given region — or even TV shows popular there. Some were also associated with other stereotypical traits of the region; e.g., a “California” neuron also reacted to the word “entrepreneur”.

The network’s hallucinations of preferred images, and the photos that most strongly activate a geographical neuron with a preference for France-, Netherlands-, and Switzerland-related pictures and texts.

Other neurons were more abstract. For instance, the artificial neural network was able to isolate core concepts associated with emotions or psychological traits: shock, crying, happiness, sleepiness, or being evil. These neurons reacted to facial expressions (sometimes even across species), body language, drawings, and related texts (including slang such as OMG! or WTF for shock).

Neurons dreaming about facial expressions of shock, crying, and happiness.

Interestingly, much as is hypothesized in human psychology, basic emotions and states (which also happen to be recognized universally across cultures) combined into representations of more complex emotional states via simple arithmetic relationships. For example, the concept “intimate” was encoded by activations of the “soft smile” and “heart” neurons, and inhibited by activation of the “sick” neuron. Another example of such “conceptual algebra” is the concept of “confidence”, which was represented strongly by an “overweight” neuron and more weakly by the “soft smile” and “happy” neurons, while it was inhibited by a “sleeping” neuron. Beyond the emotion neurons, “piggy bank” was represented mainly as a combination of “finance” and “dolls & toys”. This shows that concepts that were perhaps observed less frequently in the model’s experience were represented by combinations of several neurons (something that neuroscientists call sparse representations). This also mimics concept neurons in the human brain, which sometimes react to a larger set of related concepts, such as a neuron firing to snakes and spiders but not other animals, or a neuron firing to Yoda and Luke Skywalker, both characters from Star Wars.
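This “conceptual algebra” can be pictured as simple arithmetic over sparse vectors of neuron activations. The sketch below uses the concepts named above, but the numerical weights are purely illustrative, not the values measured in CLIP.

```python
# Each concept is a sparse vector over named neurons: positive weights mean
# the neuron excites the concept, negative weights mean it inhibits it.
# The weights here are made up for illustration.
intimate   = {"soft_smile": 0.6, "heart": 0.5, "sick": -0.4}
confidence = {"overweight": 0.7, "soft_smile": 0.2, "happy": 0.2, "sleeping": -0.5}
piggy_bank = {"finance": 0.8, "dolls_and_toys": 0.6}

def overlap(concept_a, concept_b):
    """Dot product over the neurons the two concepts share:
    a crude measure of how related the concepts are."""
    shared = set(concept_a) & set(concept_b)
    return sum(concept_a[n] * concept_b[n] for n in shared)
```

In this picture, “intimate” and “confidence” overlap slightly (both recruit the “soft smile” neuron), while “piggy bank” is built from entirely disjoint neurons and has zero overlap with the emotion concepts. That is the sparse-representation idea in miniature: each concept activates only a handful of neurons, and relatedness falls out of which neurons are shared.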

So, does this mean that the researchers built a neural network that functions just like the human brain, as the similarity between concept neurons in the human brain and their counterparts in the artificial network may suggest? Definitely not! Neural networks are constructed on different principles. Nor did the network create all the concepts itself, as humans do: as part of its training, it received language input in which each word already reflected a concept that had been isolated by humans in the past and now functions in language as part of our cultural legacy.

On the other hand, the neural network did its share of abstraction: it was able to extract relationships between words (to recognize semantically related concepts), to recognize objects in images (despite differences in visual detail), and to relate the two, which allowed it, for example, to recognize that pictures of Donald Trump and of a “MAGA” hat are related. This does seem to demonstrate that abstraction (across many modalities) can be obtained “for free”, simply by crossing and mixing the different modalities and presenting them to learning machinery with sufficient capacity (such as this neural network, or the hippocampus in the human brain). Moreover, a major part of the artificial network’s machinery was located in specific “modules”—for image recognition and for language understanding—with relatively few artificial neurons spent on integrating the modalities. This is another analogy with the biological brain, which appears to consist of modules that interact primarily via top-down connections, with only a handful of regions integrating this information. In this way, artificial neural networks may indirectly teach us something about the brain, by showing what kind of information and structure is necessary to obtain concept-like representations.

Further reading: