How Does it Sound on Headphones? Personalised Headphone Listening • Part Two
In a room, sound is all around us. From just two months old, we automatically recognise the direction of a sound, turning our eyes towards its source, and half a year later we begin using movement to change perspective as an integral part of hearing and seeing when interpreting the world. Active ‘reaching out’ through movement is a central element of most human sensing, and the ground rules for it are laid before the age of two.
Early in life we also start to use our individual and unique outer ears, together with movement, to differentiate, for instance, between direct sound and reflections in a room. Like the imprint of a mother tongue, early sensory experience becomes a reference that we can never fully escape, since it is rooted in our anatomy and the conditions of childhood.
The purpose of monitoring is to evaluate audio neutrally and to ensure good translation to other reproduction conditions. On-ear and in-ear headphones have not been well suited to this purpose because they exclude the influential external ear from our auditory system, breaking the link to the natural listening we have acquired over a lifetime. Traditional headphones cannot externalise sound, and they also remove another essential facilitator of critical listening: active sensing.
The outer ear, together with movement, provides us with the wonderful feature of ‘spherical hearing’, which we use regardless of whether a production is stereo or immersive. In stereo, we receive direct sound arriving within the 60-degree loudspeaker angle in a distinct way, just as we receive cross-talk and all room reflections. Each of us has substantial and characteristic reception resonances that depend on the angle (azimuth and elevation) from which sound arrives. These personal physical properties are not compensated for by traditional on-ear or in-ear headphones, and for practical reasons they are not included in any measures to standardise headphone listening (such as efforts to prevent hearing damage from over-exposure).
In monitoring, however, such uncertainty is a problem. With traditional headphones, important sources such as the human voice or tonal instruments are difficult to equalise, pan and level-adjust, because midrange frequencies translate unpredictably from one listener to another. Many professionals have learned this the hard way and therefore use headphones only when necessary, or for other tasks such as checking for artifacts. How the headphones sit in or on the listener’s ears is another systemic issue with headphone monitoring, adding frequency response and level changes (sometimes of 10 dB or more) between seatings. In headphone standardisation, averaging over at least five seatings is therefore advised.
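The seating-averaging idea above can be sketched numerically. This is a minimal illustration with made-up figures, not measured data: five hypothetical magnitude responses from re-seating the same headphone, averaged in the power domain (averaging raw dB values would bias the result) and the per-frequency spread reported alongside.

```python
import numpy as np

# Hypothetical magnitude responses (dB) from five re-seatings of the same
# headphone on the same listener, sampled at four frequencies. The values
# are illustrative only; real seatings can differ by 10 dB or more.
seatings_db = np.array([
    [0.0,  1.5, -3.0,  2.0],
    [0.2, -0.5, -6.5,  5.0],
    [-0.1, 2.0,  1.5, -4.0],
    [0.1,  0.8, -2.0,  0.5],
    [0.0, -1.0,  4.0, -2.5],
])

# Average the power response (10^(dB/10)) and convert back to dB,
# rather than averaging dB values directly.
avg_power = np.mean(10 ** (seatings_db / 10), axis=0)
avg_db = 10 * np.log10(avg_power)

# Worst-case seating-to-seating spread at each frequency.
spread_db = seatings_db.max(axis=0) - seatings_db.min(axis=0)
print("averaged response (dB):", np.round(avg_db, 2))
print("seating spread (dB):   ", spread_db)
```

The spread column shows why a single seating is untrustworthy: in this toy data one frequency varies by over 10 dB between seatings, on the order of what the text describes.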
Finally, our sensation of very-low-frequency (VLF) sound needs mentioning in this context. At low frequencies we cross over from aural sensation dominating to haptic sensation dominating (i.e. sound that we tend to experience through ‘touch’). Haptic reception is driven by so-called Pacinian corpuscles in the abdomen and in the skin, and it even allows accurate (outdoor) detection of the direction of VLF sound. The cross-over frequency between aural and haptic domination is individual, but it lies below 50 Hz.
Traditional headphones, like standards, build on general assumptions about listeners’ ears, such as how the 60-degree stereo angle is received on average. Manufacturers subscribe to different headphone target curves, typically anchored to the real world via free-field or diffuse-field responses, and thereby end up with different secret-sauce recipes for “their” frequency response. Every such response, however, is a one-size-fits-all compromise that is incompatible with professional listening.
Genelec's new Aural ID technology marks a step away from random, generalised headphone listening towards systematic personalisation, and should not be confused with processing plugins that merely modify the “secret sauce” frequency response based on headphone type.
By describing how sound arriving from every angle is modified on its way to your ears, and how this affects you personally, Aural ID enables a rendering plugin to offer natural (or supernatural) hearing on headphones. It supports open standards such as the Spatially Oriented Format for Acoustics (SOFA), and also provides a path to immersive headphone monitoring.
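The core operation behind this kind of rendering is convolving each loudspeaker channel with a pair of head-related impulse responses (HRIRs) for that channel's direction. A minimal sketch, using a toy HRIR pair rather than real personalised data (which a renderer would load from a SOFA file), shows how one convolution per ear turns a mono channel into a binaural signal:

```python
import numpy as np

fs = 48000  # assumed sample rate

# Toy HRIR pair for a source towards the left: the right ear receives the
# sound 30 samples (~0.6 ms) later and 6 dB quieter than the left ear.
# In a personalised renderer these would be the listener's own responses.
hrir_left = np.zeros(64)
hrir_left[0] = 1.0
hrir_right = np.zeros(64)
hrir_right[30] = 0.5

# One convolution per ear binauralises the channel; a click input makes
# the imposed interaural time difference easy to see.
click = np.zeros(256)
click[0] = 1.0
ear_l = np.convolve(click, hrir_left)
ear_r = np.convolve(click, hrir_right)

delay = int(np.argmax(np.abs(ear_r)) - np.argmax(np.abs(ear_l)))
print("interaural delay:", delay, "samples")  # 30 samples ≈ 0.6 ms at 48 kHz
```

With measured personal HRIRs in place of the toy pair, the same two convolutions per channel reproduce the listener's own angle-dependent reception resonances described earlier.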
So in summary, in addition to good transducer and mechanical design, professional use of headphones for actual monitoring needs to include at least these elements:
- compensation for headphone-type pre-emphasis,
- addition of personalisation for direct sound per channel,
- addition of virtual room reflections for each channel,
- all of the above performed dynamically with movement and low latency,
- consideration for haptic sensation,
- attention to headphone seating-variation.
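The first three list items can be sketched as a signal chain for one ear of one channel. Everything here is a hypothetical placeholder (filter shapes, the reflection model, the function name); movement tracking, haptics and seating compensation are deliberately not shown:

```python
import numpy as np

def render_channel(x, hp_inverse_eq, hrir, reflections, out_len):
    """Sketch of one ear of one channel: headphone compensation, then
    personal direct-sound filtering, then summed virtual room reflections.
    All filters are illustrative placeholders, not any vendor's actual
    processing."""
    y = np.convolve(x, hp_inverse_eq)           # undo headphone pre-emphasis
    out = np.zeros(out_len)
    direct = np.convolve(y, hrir)               # personalised direct sound
    out[:len(direct)] += direct
    for delay, gain, refl_hrir in reflections:  # virtual room reflections,
        tap = gain * np.convolve(y, refl_hrir)  # each with its own HRIR
        out[delay:delay + len(tap)] += tap
    return out

# Toy usage: identity filters and a single reflection arriving
# 100 samples later at half amplitude.
impulse = np.zeros(1)
impulse[0] = 1.0
out = render_channel(impulse, np.ones(1), np.ones(1),
                     [(100, 0.5, np.ones(1))], out_len=200)
print(out[0], out[100])  # 1.0 0.5
```

The remaining list items are what make this hard in practice: the filters must be updated with head movement at low latency, and the whole chain must stay stable against seating variation.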
Only then can headphones really be used to find out ‘how it sounds on headphones’ in general.
Read Personalised Headphone Listening • Part One by Aki Mäkivirta.