How do we localize sounds?

A new study resolves a longstanding controversy over how the brain determines the source of a sound.

Graphic of a recorded sound wave. Image credit: Antje Ihlefeld (CC BY 4.0)

Being able to localize sounds helps us make sense of the world around us. The brain works out sound direction by comparing the times at which a sound reaches the left ear and the right ear. This cue is known as the interaural time difference, or ITD for short. But exactly how the brain decodes this information is still unknown.
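To put a rough number on this cue: for a distant source, the ITD is approximately the separation between the ears divided by the speed of sound, scaled by the sine of the source angle. The short Python sketch below illustrates this back-of-the-envelope relationship; the ear separation, speed of sound, and function name are illustrative assumptions, not values or code from the study.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s, approximate speed of sound in air
EAR_SEPARATION = 0.22    # m, rough distance between a listener's ears

def interaural_time_difference(azimuth_deg: float) -> float:
    """Far-field approximation: ITD ~ (separation / speed) * sin(azimuth).

    azimuth_deg: source direction, 0 = straight ahead, 90 = fully to one side.
    Returns the ITD in seconds (sound arrives this much earlier at the near ear).
    """
    return (EAR_SEPARATION / SPEED_OF_SOUND) * math.sin(math.radians(azimuth_deg))

print(f"{interaural_time_difference(90) * 1e6:.0f} microseconds")  # roughly 640, source at the side
print(f"{interaural_time_difference(0) * 1e6:.0f} microseconds")   # 0, source straight ahead
```

Even for a source directly to one side, the ITD is well under a millisecond, which is why decoding it is such a demanding computation for the brain.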

The brain contains nerve cells that each show maximum activity in response to one particular ITD. One idea is that these nerve cells are arranged in the brain like a map from left to right, and that the brain then uses this map to estimate sound direction. This is known as the Jeffress model, after the scientist who first proposed it. There is some evidence that birds and alligators actually use a system like this to localize sounds, but no such map of nerve cells has yet been identified in mammals. An alternative possibility is that the brain compares activity across groups of ITD-sensitive nerve cells. One of the oldest and simplest readouts of this kind compares the overall nerve activity in the left and right hemispheres of the brain. This readout is known as the hemispheric difference model.
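To make the contrast concrete, here is a minimal toy sketch, not taken from the study, of the two readouts; the tuning curves, number of cells, and gains are invented for illustration. A Jeffress-style readout reports the preferred ITD of the most active cell in a map of narrowly tuned cells, whereas the hemispheric-difference readout compares the summed activity of two broadly tuned populations, one driven most strongly by leftward ITDs and one by rightward ITDs.

```python
import numpy as np

PREFERRED_ITDS = np.linspace(-0.6, 0.6, 49)        # ms; hypothetical map of preferred ITDs

def jeffress_estimate(true_itd_ms, gain=1.0):
    """Labelled-line readout: the estimate is the preferred ITD of the most active cell."""
    rates = gain * np.exp(-((PREFERRED_ITDS - true_itd_ms) ** 2) / (2 * 0.1 ** 2))
    return PREFERRED_ITDS[np.argmax(rates)]

def hemispheric_difference(true_itd_ms, gain=1.0):
    """Opponent readout: difference between two broadly tuned population responses."""
    right_channel = gain / (1.0 + np.exp(-5.0 * true_itd_ms))   # grows for rightward ITDs
    left_channel = gain / (1.0 + np.exp(+5.0 * true_itd_ms))    # grows for leftward ITDs
    return right_channel - left_channel                          # sign and size code direction

print(jeffress_estimate(0.3))        # ~0.3: the peak of the map sits at the true ITD
print(hemispheric_difference(0.3))   # ~0.64: a positive difference meaning "to the right"
```

Whether the brain implements anything like either toy readout is exactly the controversy the study addresses; the sketch only shows the structure of the two hypotheses.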

By analyzing data from published studies, Ihlefeld, Alamatsaz, and Shapley discovered that these two models make opposing predictions about the effects of volume. The Jeffress model predicts that the volume of a sound will not affect a person’s ability to localize it. By contrast, the hemispheric difference model predicts that very soft sounds will lead to systematic errors: for the same ITD, softer sounds should be perceived as coming from closer to the front than louder sounds. To investigate this further, Ihlefeld, Alamatsaz, and Shapley asked healthy volunteers to localize sounds of different volumes. The volunteers tended to mislocalize quieter sounds, perceiving them as closer to the body’s midline than they actually were. This is inconsistent with the predictions of the Jeffress model but matches the prediction of the hemispheric difference model.
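The opposing predictions can be seen in the same kind of toy model. The sketch below illustrates the logic only, with made-up numbers, and assumes that lowering the volume simply scales all firing rates down by a common gain and that the hemispheric readout uses the raw, un-normalized difference: the peak of a Jeffress-style map then stays put, while the hemispheric difference shrinks toward zero, which corresponds to a perceived location nearer the midline.

```python
import numpy as np

preferred_itds = np.linspace(-0.6, 0.6, 49)        # ms; hypothetical map of preferred ITDs
TRUE_ITD = 0.3                                     # ms; the sound's actual ITD

for gain in (1.0, 0.2):                            # arbitrary gains standing in for loud vs. soft
    # Jeffress-style map: scaling every rate by the same factor moves no peaks,
    # so the estimated ITD is the same at both volumes.
    rates = gain * np.exp(-((preferred_itds - TRUE_ITD) ** 2) / (2 * 0.1 ** 2))
    peak = preferred_itds[np.argmax(rates)]

    # Un-normalized hemispheric difference: lower rates give a smaller difference,
    # which reads out as a position closer to the midline.
    difference = gain / (1 + np.exp(-5 * TRUE_ITD)) - gain / (1 + np.exp(5 * TRUE_ITD))

    print(f"gain={gain:.1f}  Jeffress peak={peak:+.2f} ms  hemispheric difference={difference:+.3f}")
```

In this toy picture, only the un-normalized difference readout reproduces the midline bias that the volunteers showed for quiet sounds.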

These new findings also reveal key parallels to processing in the visual system. Visual areas of the brain estimate how far away an object is by comparing the input that reaches the two eyes. But these estimates are also systematically less accurate for low-contrast stimuli than for high-contrast ones, just as sound localization is less accurate for softer sounds than for louder ones. The idea that the brain uses the same basic strategy to localize both sights and sounds generates a number of predictions for future studies to test.