Codec listening test

Scientific study designed to compare two or more lossy audio codecs


A codec listening test is a scientific study designed to compare two or more lossy audio codecs, usually with respect to perceived fidelity or compression efficiency.

Most tests take the form of a double-blind comparison. Commonly used methods are known as "ABX" or "ABC/HR" or "MUSHRA". There are various software packages available for individuals to perform this type of testing themselves with minimal assistance.

Testing methods

ABX test

Main article: ABX test

In an ABX test, the listener must identify an unknown sample X as being either A or B, with A (usually the original) and B (usually the encoded version) available for reference. The outcome of a test must be statistically significant. This setup ensures that the listener is not biased by their expectations and that the outcome is unlikely to be the result of chance. If sample X cannot be identified reliably, with a low p-value, in a predetermined number of trials, then the null hypothesis cannot be rejected, and no perceptible difference between samples A and B has been demonstrated. This usually indicates that the encoded version is transparent to the listener.
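The significance calculation behind an ABX session reduces to a one-sided binomial test: under the null hypothesis the listener is guessing, so each trial succeeds with probability 0.5. A minimal sketch in Python (the trial counts and the 0.05 threshold are illustrative, not taken from any test cited here):

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided binomial p-value: the probability of getting at least
    `correct` answers right in `trials` ABX trials by pure guessing (p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Example: 12 correct out of 16 trials.
p = abx_p_value(12, 16)   # ~0.038, below the common 0.05 threshold
print(f"p = {p:.4f}")
```

With 12 of 16 correct, the guessing hypothesis can be rejected at the 5% level; a lower score would leave the difference unproven rather than proven absent, which is why a failed ABX run is evidence of transparency, not proof of it.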

ABC/HR test

In an ABC/HR test, C is the original, which is always available for reference. A and B are the original and the encoded version in randomized order; the unlabeled copy of the original is the hidden reference (the "HR" in ABC/HR). The listener must first distinguish the encoded version from the original before assigning it a score as a subjective judgment of its quality. Different encoded versions can then be compared against each other using these scores.
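The blinding step described above can be sketched as follows. This is a hypothetical trial setup, assuming the usual convention that C holds the labeled reference while the hidden reference and the coded version are shuffled into A and B:

```python
import random

def make_abchr_trial(rng: random.Random) -> dict:
    """Assign stimuli to slots for one ABC/HR trial.

    C always holds the labeled reference; the original (hidden reference)
    and the coded version are shuffled into A and B so the listener
    cannot know which is which.
    """
    slots = ["original", "coded"]
    rng.shuffle(slots)
    return {"A": slots[0], "B": slots[1], "C": "original"}

def score_trial(trial: dict, picked_as_coded: str, grade: float) -> dict:
    """Record whether the listener correctly identified the coded version,
    and the impairment grade they assigned to it."""
    return {"identified": trial[picked_as_coded] == "coded", "grade": grade}

trial = make_abchr_trial(random.Random(42))
```

Only trials where the coded version was correctly identified carry quality information; grades from misidentified trials are typically treated as evidence of transparency rather than averaged in as scores.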

MUSHRA

Main article: MUSHRA

In MUSHRA (MUltiple Stimuli with Hidden Reference and Anchor), the listener is presented with the reference (labeled as such), a number of test samples, a hidden copy of the reference, and one or more anchors. The anchors pin the rating scale closer to an absolute scale, ensuring that minor artifacts are not rated as having very bad quality.
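MUSHRA grades are typically aggregated per stimulus across listeners on a 0-100 scale, and a basic sanity check is that the hidden reference scores near the top while the low anchor (a 3.5 kHz low-pass version of the reference in the ITU-R BS.1534 recommendation) scores near the bottom. A sketch with made-up scores:

```python
from statistics import mean, stdev

# Hypothetical 0-100 scores from four listeners for each stimulus;
# real tests use more listeners and post-screen unreliable ones.
scores = {
    "hidden_reference": [100, 98, 100, 95],
    "codec_under_test": [82, 75, 88, 80],
    "low_anchor":       [20, 25, 15, 22],   # e.g. 3.5 kHz low-pass anchor
}

means = {name: mean(vals) for name, vals in scores.items()}
for name, m in sorted(means.items(), key=lambda kv: -kv[1]):
    print(f"{name:16s} mean={m:6.2f} sd={stdev(scores[name]):.2f}")
```

A listener who consistently grades the hidden reference well below 100 is usually excluded in post-screening, which is part of what the hidden reference and anchors are there to catch.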

Results

Many double-blind music listening tests have been carried out. The following table lists the results of several listening tests that have been published online. To obtain meaningful results, listening tests must compare codecs' performance at similar or identical bitrates, since the audio quality produced by any lossy encoder will be trivially improved by increasing the bitrate. If listeners cannot consistently distinguish a lossy encoder's output from the uncompressed original audio, then it may be concluded that the codec has achieved transparency.

Popular formats compared in these tests include MP3, AAC (and extensions), Vorbis, Musepack, and WMA. The RealAudio Gecko, ATRAC3, QDesign, and mp3PRO formats appear in some tests despite much lower adoption. Many encoder and decoder implementations (both proprietary and open source) exist for some formats, such as MP3, which is the oldest and best-known format still in widespread use today.

| Source | Dates | Formats | Bitrate (kbit/s) | Musical genres | Samples | Listeners | Best Result | Comments |
|---|---|---|---|---|---|---|---|---|
| ff123 | 2001 | multiple | ~128 | | 1 | 16 | Musepack and AAC | |
| ff123 | 2001 October - 2002 January | multiple | ~128 | Various | 3 | 25-28 | Musepack or Vorbis | |
| ff123 | 2002 July | multiple | ~64 | Various | 12 | 24-41 | mp3PRO | Both Vorbis variants were a close second. |
| Roberto Amorim | 2003 June | AAC | 128 CBR | Various | 10 | 11-18 | QuickTime | |
| Roberto Amorim | 2003 July | multiple | ~128 | Various | 12 | 14-24 | Musepack | AAC, WMA, and Vorbis tied for close second. |
| Roberto Amorim | 2003 September | multiple | ~64 | Various | 12 | 30-43 | Nero HE-AAC | This test showed that listeners preferred 128 kbit/s MP3 audio encoded by LAME to all the tested codecs at 64 kbit/s, with greater than 99% confidence. |
| Roberto Amorim | 2004 January | MP3 | ~128 | Various | 12 | 11-22 | LAME | The author noted that the results may have been affected by the use of an outdated version of the Xing encoder and non-optimal settings for iTunes. |
| Roberto Amorim | 2004 February | AAC | ~128 | Various | 12 | 19-29 | iTunes | The open-source FAAC codec improved greatly since the previous test. |
| Roberto Amorim | 2004 May | multiple | ~128 | Various | 18 | 12-27 | aoTuV (Vorbis) and Musepack | |
| Roberto Amorim | 2004 June | multiple | 32 CBR | Various | 18 | 47-77 | Nero HE-AAC | |
| HydrogenAudio user "guruboolez" | 2004 July | multiple | ~175 | Classical | 18 | 1 | Musepack | |
| HydrogenAudio user "guruboolez" | 2005 August | multiple | ~180 | Classical | 18 | 1 | aoTuV (Vorbis) | The author reflects on substantial improvements in Vorbis encoding since his previous test (above). |
| HydrogenAudio user "guruboolez" | 2005 August | multiple | ~96 | Classical, various | 150 classical, 35 various | 1 | aoTuV and AAC tied (classical), aoTuV (various) | The author selected each participating encoder by pitting multiple encoders against one another in an initial "Darwinian phase." For example, LAME was chosen as the representative MP3 encoder because it clearly outperformed four other MP3 encoders on a subset of the full sample corpus. |
| Sebastian Mares | 2005 December | multiple | ~140 (nominal 128) | Various | 18 | 18-30 | 4-way tie (all except Shine) | "I think this test shows that with the current encoders, the quality at 128 kbit/s is very good... It's time to move to bitrates like 96 kbit/s or even lower (64 kbit/s)." |
| Mp3-tech.org | 2006 March | AAC | 48 | Various | 18 | 10-20 | 5-way tie (all except anchors) | "... it seems that overall, plain HE-AAC might be better than HE-AAC v2 at this bitrate, but a lot more samples would be needed to be able to draw definitive conclusions regarding this." |
| Sebastian Mares | 2006 November | multiple | ~48 | Various | 20 | 22-34 | Nero HE-AAC | WMA Professional and aoTuV tied for second. |
| Sebastian Mares | 2007 July | multiple | ~64 | Various | 18 | 21-33 | Nero Digital and WMA Professional | |
| Sebastian Mares | 2008 October | MP3 | ~128 | Various | 14 | 26-39 | 5-way tie (all except L3enc) | "The quality at 128 kbps is very good and MP3 encoders improved a lot since the last test." Also notes that the Fraunhofer and Helix codecs are several times faster at encoding than LAME, although virtually identical in perceived audio quality. |
| HydrogenAudio user "IgorC" | 2011 March-April | multiple | ~64 | Various | 30 | 25-13 | CELT / Opus | In the results, CELT is referred to as Opus, its name when later standardized. |
| HydrogenAudio user "IgorC" | 2011 July-August | LC-AAC | ~96 | Various | 20 | 25 | Apple QuickTime | |
| HydrogenAudio user "Kamedo2" | 2013 May | MP3 | ~224 | Various | 25 | 1 | 4-way tie (all except the BladeEnc low anchor) | Most impairment grades rated between 4 (perceptible but not annoying) and 5 (imperceptible). Both speech samples transparent (p …) |
| HydrogenAudio user "Kamedo2" | 2014 July - September | multiple | ~96 | Various | 40 | 33 | Opus | In the results, Opus is the clear winner, Apple AAC is second, and Ogg Vorbis and higher-bitrate LAME MP3 are statistically tied in joint third place. FAAC, known in advance to be inferior, was used to discard bad results and as a quality-scale anchor. |
| Cunningham and McGregor | 2019 February | multiple | 192-1411 | Pop | 10 | 100 | 5-way tie (WAV, MP3, AAC, ACER HQ, ACER MQ) | Participants reported no perceived differences between the uncompressed, MP3, AAC, ACER high-quality, and ACER medium-quality audio in terms of noise and distortion, but the ACER low-quality format was perceived as being of lower quality. In terms of participants' perceptions of the stereo field, all formats under test performed equally well, with no statistically significant differences. |

References

  1. Cunningham, S.; McGregor, I. (2019). "Subjective Evaluation of Music Compressed with the ACER Codec Compared to AAC, MP3, and Uncompressed PCM". International Journal of Digital Multimedia Broadcasting.

This article was imported from Wikipedia and is available under the Creative Commons Attribution-ShareAlike 4.0 License. Content has been adapted to SurfDoc format. Original contributors can be found on the article history page.
