I’ve heard two technology demos so far at High End 2016, and my reaction to them couldn’t have been more different. One sold me in the first few seconds; it was the kind of presentation that keeps my enthusiasm for audio going after all these decades. The other disappointed me; it left me questioning the viability of the technology and wondering how it could be sold to anyone beyond the most credulous audiophiles.

The great demo

The demo of BACCH-SP audio processing technology was one of the most convincing and dramatic I’ve heard in my 26 years of attending audio shows. BACCH 3D, the technology used in the BACCH-SP processor, is the brainchild of Princeton University professor Edgar Choueiri. It’s a method of creating enhanced stereo imaging through crosstalk cancellation.

Part of the problem of creating realistic stereo sound is that your left ear hears what’s coming from the right speaker, and vice versa. Crosstalk cancellation puts an inverted, filtered cancelling version of the left-channel signal into the right-channel speaker, and vice versa -- much the same effect you’d get by standing mattresses on their ends in a line running from the midpoint between the speakers to your face. It’s been done on a fairly crude level for decades, perhaps most notably by Carver and Polk, who built it into their receivers and speakers, respectively. Choueiri’s technique is far more precise because it tailors the cancellation filters to the specific characteristics of your head, ears, and speakers, the goal being to cancel crosstalk without introducing audible colorations.
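The basic idea behind those crude early cancellers can be sketched in a few lines. The toy recursive canceller below (in the spirit of the classic Atal-Schroeder scheme, not Choueiri's far more sophisticated filters) subtracts a delayed, attenuated copy of each output channel from the opposite channel, anticipating the acoustic crosstalk path. The gain `g` and delay `d` are arbitrary illustrative values; a real system would derive filters from measurements of the listener's head, ears, and speakers.

```python
import numpy as np

def naive_crosstalk_canceller(left, right, g=0.85, d=3):
    """Toy recursive crosstalk canceller (Atal-Schroeder style sketch).

    Each output channel subtracts a delayed, attenuated copy of the
    *other* output channel, so that the crosstalk arriving at the
    "wrong" ear is (roughly) cancelled acoustically. The gain g and
    delay d (in samples) are illustrative placeholders, not measured
    head-related parameters.
    """
    n = len(left)
    out_l = np.zeros(n)
    out_r = np.zeros(n)
    for i in range(n):
        # delayed sample of the opposite output (zero before the delay elapses)
        xl = out_r[i - d] if i >= d else 0.0
        xr = out_l[i - d] if i >= d else 0.0
        out_l[i] = left[i] - g * xl
        out_r[i] = right[i] - g * xr
    return out_l, out_r
```

Feeding an impulse into the left channel shows the mechanism: the right output carries an inverted, attenuated, delayed copy to cancel the left speaker's leakage to the right ear, and the recursion then adds ever-smaller corrections for the cancellation signal's own crosstalk.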

Brent Butterworth at BACCH-SP

To calibrate the system, Choueiri had me place two tiny microphones in my ears. They’re sort of like backward earphones, with the transducers on the outside rather than the inside. He ran a few test tones through the speakers so BACCH-SP could measure the specific acoustical effects of my ears and head, then played me some binaural recordings done by Chesky Records.

The effect of this demo was immediately convincing. Even though the Marten Coltrane 3 speakers were placed at typical angles from my listening chair, I heard the clear, precise image of a trombone from directly to my left, at a 90-degree angle far beyond the left speaker. On a recording with what sounded like 30 or 40 people talking and making various vocal noises, I could easily pick out individual voices by ear, spread out in an arc extending 90 degrees in either direction.

Choueiri gave me an iPad running the BACCH 3D app, which allowed me to switch the processing on and off. When I switched it off, I got a normal stereo presentation, but my ears couldn’t detect any other change in the sound. There was none of the phasiness, comb filtering, or sweet-spottiness that plagues most of the other spatial-enhancement technologies I’ve tried. It was, without question, the most lifelike and compelling presentation of stereo sound I’ve ever heard.

Key to the success of the BACCH 3D processing was the inclusion of a head tracker of the type used for video games. The tracker incorporates a camera that detected my head movements, allowing the BACCH-SP processor to adjust the sound slightly so the presentation didn’t change when I moved my head -- just as the placement of a group of musicians doesn’t change when you move your head during a performance.


To show what the system could do with typical material, Choueiri played some non-binaural recordings, including, to my delight, “Hots on for Nowhere,” from the overlooked Led Zeppelin album Presence. The results here weren’t as impressive as with Chesky’s binaural recordings, but they were still worthwhile. I heard a bigger, more live-sounding spread of sound with more precise stereo imaging, and with none of the artifacts I described above.

The commercial prospects of BACCH 3D are uncertain. The system, including the BACCH-SP processor, the in-ear microphones, and the head tracker, costs $54,000, which will obviously prevent it from generating substantial sales. Choueiri hopes to license the technology into A/V receivers, which already have the necessary digital signal processing capability. A simple, very limited version of the processing has already been licensed into some Jawbone Bluetooth speakers. I heard it and wasn’t especially impressed, probably in large part because there’s not much you can do with a little Bluetooth speaker whose drivers are only a few inches apart.

The not-so-great demo

The demo that didn’t impress me was of MQA, the technology originally created by Meridian Audio and since spun off into a separate company for licensing. MQA, as originally described to me during the first demo of it I heard, folds the added high-frequency information of a high-resolution recording down into the bottommost bits of a 24/44.1 or 24/48 digital file, then resurrects this information on playback. During this demo, though, Meridian cofounder Bob Stuart described MQA as a technology to improve the timing resolution of a music recording; he claimed MQA delivers a result that is 15 to 30 times more accurate in the time domain than conventional digital audio.
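MQA's actual encoding is proprietary, so the sketch below illustrates only the generic notion of hiding extra data in the least-significant bits of a PCM sample, which is the "folding" idea as it was originally described to me. The function names and the 8-bit payload size are my own hypothetical choices; this shows nothing about how MQA actually encodes or reconstructs high-frequency content.

```python
def fold(sample24: int, payload: int, k: int = 8) -> int:
    """Replace the bottom k bits of a 24-bit PCM sample with payload bits.

    To a conventional DAC, the payload reads as very low-level noise;
    an aware decoder can strip it back out. (Illustration only -- the
    real MQA scheme is proprietary and far more elaborate.)
    """
    mask = (1 << k) - 1
    return (sample24 & ~mask) | (payload & mask)

def unfold(sample24: int, k: int = 8) -> int:
    """Recover the payload hidden in the bottom k bits of a sample."""
    return sample24 & ((1 << k) - 1)
```

Round-tripping a sample through `fold` and `unfold` recovers the hidden bits while leaving the top 16 bits of the audio sample untouched.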

MQA is now being licensed into high-end audio products, including the Brinkmann Nyquist DAC that was being demoed. Stuart also announced that Warner Music Group has agreed to license MQA for use in music releases.

Brent Butterworth at MQA

So how does MQA perform? I have no idea. Despite the fact that this was the third MQA demo I’ve heard, I have yet to hear a direct A/B comparison of an MQA file with standard 16/44.1 digital audio. As in the past, the demo consisted of nice-sounding recordings played back on a very expensive audio system. As in the past, everything sounded good. But how much MQA contributed to the quality of this demo -- or for all I know, detracted from it -- was impossible to determine.

Imagine someone came up with a new gasoline additive that they claimed would increase a car’s performance. Imagine they poured the additive into the gas tank of a Ferrari Enzo sports car, then asked you to drive the car to gauge the effect of the additive. The performance would, of course, be incredible. But you would have no way of knowing how much the additive contributed to (or detracted from) the performance. Without being able to test the same car filled with untreated gasoline, would you buy the additive? That’s basically what we’re being asked to do in the case of MQA.

In my opinion, if you want to sell people on new audio technologies, the best way is to demonstrate their advantage. At the MQA demo, we were expected simply to believe there’s an advantage. I think the people who spend the time and money to attend audio shows deserve better.

Brent Butterworth
Contributor, SoundStage!