1/3 of People Can't Tell 48Kbps Audio From 160Kbps 567

An anonymous reader writes "Results of a blind listening test show that a third of people can't tell the difference between music encoded at 48Kbps and the same music encoded at 160Kbps. The test was conducted by CNet to find out whether streaming music service Spotify sounded better than new rival Sky Songs. Spotify uses 160Kbps OGG compression for its free service, whereas Sky Songs uses 48Kbps AAC+ compression. Over a third of participants thought the lower bit rate sounded better."
This discussion has been archived. No new comments can be posted.


  • by N3Roaster ( 888781 ) <nealw@ac m . org> on Monday October 19, 2009 @01:11PM (#29796187) Homepage Journal

    Are these the same people who prefer MP3 Sizzle [slashdot.org]?

  • by Rei ( 128717 ) on Monday October 19, 2009 @01:11PM (#29796191) Homepage

    I've run similar blind tests (although not as low as 46kbps) and reached the same conclusion. Most people vastly overestimate their ability to distinguish tracks encoded at different bitrates. And I've seen study after study that backs this up. This includes self-professed audiophiles, the original authors of particular tracks of music, and so forth.

  • In other news (Score:5, Informative)

    by Etrias ( 1121031 ) on Monday October 19, 2009 @01:13PM (#29796223)
    So, 1/3 of people, eh? Hardly a damning assessment when your sample size is 16 people. Besides, most people I know, including myself, have some sort of hearing damage from the past or don't really know what to listen for when presented with different types of sound.
  • by -kevin- ( 90281 ) on Monday October 19, 2009 @01:18PM (#29796313) Homepage

    Or maybe it seems just louder [wikipedia.org]. ;)

    fyi, dynamics compression is independent of data compression
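
    The distinction the parent draws can be made concrete with a toy example: dynamic range compression reshapes amplitudes (loudness), while data compression (MP3/AAC) reduces bits. Here is a minimal sketch of a hard-knee dynamic range compressor; the threshold and ratio values are illustrative, not taken from any real product:

```python
# Toy hard-knee dynamic range compressor: attenuates samples whose
# magnitude exceeds a threshold by a fixed ratio. This changes loudness
# dynamics, and has nothing to do with shrinking file size.

def compress_dynamics(samples, threshold=0.5, ratio=4.0):
    out = []
    for s in samples:
        mag = abs(s)
        if mag > threshold:
            # Only the excess above the threshold is divided by the ratio
            mag = threshold + (mag - threshold) / ratio
        out.append(mag if s >= 0 else -mag)
    return out

quiet = [0.1, -0.2, 0.3]   # below threshold: passes through unchanged
loud = [0.9, -1.0, 0.7]    # peaks get pulled down toward the threshold
print(compress_dynamics(quiet))
print(compress_dynamics(loud))
```

    Squashing the loud peaks this way lets the whole track be turned up, which is why heavily compressed masters can seem "better" at first listen.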

  • compared to what ? (Score:4, Informative)

    by Brigadier ( 12956 ) on Monday October 19, 2009 @01:20PM (#29796343)

    I say the only valid comparison is listening to the live music vs. the digital format. That way you're comparing against the original, not just saying which sounds better (which is subjective). I once worked with an audio system designer, and everything was tested using analogue formats with various types of music, preferably classical because of its range in sound.

  • by mapkinase ( 958129 ) on Monday October 19, 2009 @01:20PM (#29796347) Homepage Journal

    "Of the 16 people tested"

    Good-bye.

  • by Cowclops ( 630818 ) on Monday October 19, 2009 @01:26PM (#29796439)

    And I've been telling people for years that the "weakest link" concept in audio reproduction is an oversimplification and therefore wrong.

    There are orthogonal distortion components introduced by various devices. An MP3's digital distortion (sizzle sounds, to borrow from another article somebody linked to) would be IN ADDITION TO poor frequency response and mechanical distortion. It isn't "masked" by it. And it doesn't take significantly more bitrate to go from "crappy" to "great." 128kbps CBR MP3 is pretty crappy, but 160kbps VBR MP3 is indistinguishable from the source "even on great systems." I don't intend to argue what bitrate you consider "sufficient," just that "listen to a low bitrate because you have crappy speakers" implies that crappy speakers mask MP3 compression artifacts.

    If I were to go out on a limb, I'd say it's possible for crappy speakers to distort even more with overcompressed MP3s than good speakers do.

  • by Curmudgeonlyoldbloke ( 850482 ) on Monday October 19, 2009 @01:27PM (#29796467)

    CNET - Owned by Rupert Murdoch.
    Sky - Owned by Rupert Murdoch.

    CBS, actually:
    http://www.techcrunch.com/2008/05/15/why-cbs-bought-cnet-and-not-the-other-way-around/ [techcrunch.com]

    If it'd been a Myspace survey or something from the Times, the Courier-Mail or the WSJ, you'd have had a point.

  • by beelsebob ( 529313 ) on Monday October 19, 2009 @01:36PM (#29796629)

    Except that most of the compression gained from MP3 comes from removing frequencies we can't hear anyway; speakers with poor frequency response absolutely 100% do mask this.

  • by godrik ( 1287354 ) on Monday October 19, 2009 @01:37PM (#29796647)

    I think it really depends on your audio setup as well. I used to have crappy speakers and could not tell the difference between FLAC and low-bitrate MP3 (I think it was a fixed 128kbps).

    When I switched to better speakers, I could actually tell the difference. Despite that, I am sure I couldn't tell the difference between 192 VBR and FLAC.

    BTW, since hard drive space is cheap these days, I go with FLAC for everything.

  • by Anonymous Coward on Monday October 19, 2009 @01:46PM (#29796767)

    That study is not about lossy compression.

    It is about CD quality (16-bit 44.1kHz linear) versus higher sample rates/bit depths.
    It is also a study of using CD quality as a distribution medium, not a recording medium.
    There are benefits to higher sample rates and bit depths for tracking, and the study does not dispute that.

  • by iluvcapra ( 782887 ) on Monday October 19, 2009 @01:47PM (#29796791)

    Jesus I'd never seen those. Just for those of you at home, I'm a professional sound designer for films, and I use ethernet cables that I bought at Fry's for a couple bucks a piece.

    But seriously, how can you make a sweeping statement like "people can't tell 48k audio from 160k"?

    The issue isn't "can I tell 48kbps from 160kbps" -- the real question should be: can I tell the difference between 48kbps AAC and the original uncompressed recording? AAC can sound "better" or "good" under a lot of situations where it's significantly distorting the original program material. AAC was designed specifically to choose "good sounding" over "accurate" as the bit rates get lower and lower. Also, keep in mind that a side-effect of compressing an audio stream like this is that you'll strip away noise and unusual harmonics from the original, which might cause a lower-rate recording to "sound better," when in fact stuff that the producer actually has in his mix is being removed.

  • by sopssa ( 1498795 ) * <sopssa@email.com> on Monday October 19, 2009 @01:52PM (#29796899) Journal

    Over a third of participants thought the lower bit rate sounded better.

    Another thing is that the majority of people actually have quite crappy speakers, at least on computers. A lower bitrate sounds "better" on cheap speakers because it dumbs down the highest-frequency changes in the song.

  • by Beardo the Bearded ( 321478 ) on Monday October 19, 2009 @01:57PM (#29796981)

    Blaming people is a pretty good start.

    To be more specific, the physiology of humans. We can hear up to about 20kHz. Basic theory tells us that we require a sampling rate of twice the highest frequency, so that's 40kHz. You throw in 20% extra and you're sitting at 48kHz.

    Anything more is filler.

    Now, let me qualify that for a moment -- some codecs are terrible, and I can often hear the phasing on higher pitches, notably on cymbals. I have excellent hearing and more than 20 years of musical training in both brass and choir. (I have to listen to mp3 players on minimal volume.)

    However, when music is background music it doesn't really matter how perfect the sound is. If there's a hearing loss epidemic caused by the prevalence of, say, cheap mp3 players, then most people are probably halfway to tone deaf anyway.
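
    The sampling-rate arithmetic in the comment above can be sketched directly. Note that the 20% headroom figure is the commenter's rule of thumb for anti-aliasing filter roll-off, not a formal standard:

```python
# Nyquist criterion: the sample rate must exceed twice the highest
# frequency you want to capture. Human hearing tops out near 20 kHz.

def min_sample_rate(max_freq_hz, headroom=0.20):
    nyquist_rate = 2 * max_freq_hz        # bare minimum sample rate
    return nyquist_rate * (1 + headroom)  # margin for filter roll-off

print(min_sample_rate(20_000))  # the common 48 kHz studio rate
```

    As the reply below points out, though, this is about sample rate (kHz), not the 48Kbps bit rate in the article; the two numbers are unrelated.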

  • by StrategicIrony ( 1183007 ) on Monday October 19, 2009 @02:03PM (#29797089)

    It said 48Kbps, not kHz.

    Most lossy music formats totally submarine a lot of detail at 48Kbps and I would wager that almost everyone has the auditory acuity to recognize it. They simply don't have the mental acuity to care.

    I agree; so much Auto-Tuned (big air quotes) "music," and they hardly notice gross clipping and drastic tone flattening. :-)

  • by maharb ( 1534501 ) on Monday October 19, 2009 @02:04PM (#29797097)

    Listen to some Infected Mushroom at 320 kbps and tell me that again. You can hear so much more when you have the *combination* of good inputs and then high bit rates. If the input sucked to begin with the bit rate doesn't matter, which could be the case in many of these "is it better" studies. I can EASILY tell the difference between some random electronic music and the godliness of recording that is an Infected Mushroom album (even if you don't like the music). And no I am not an audiophile or anything; I have a $20 pair of headphones for my iPod and I can tell a huge difference with just that.

  • by qortra ( 591818 ) on Monday October 19, 2009 @02:13PM (#29797251)
    That would be true, except that even crappy computer speakers these days can produce high frequencies just fine. Consider the following speakers, which are among the least expensive on Newegg [newegg.com]. They have an advertised frequency response of 100Hz to 20,000Hz, plenty of range to reveal encoding flaws. Yes, the actual frequency response might not be as good as advertised, but if they're anywhere close, they will not have any trouble revealing encoding flaws.

    In my experience, medium-high frequency reproduction is probably the chief problem with poorly encoded music. From the article, "Some also noted that cymbals, hi hats and vocals in particular sounded better" (referring to the better encoded stream). Cymbals and hi-hats are dead on - they end up sounding like 60s sci-fi if encoded badly. Even the most modest of computer speakers and earbuds will reproduce a cymbal frequency range without breaking a sweat.

    The grandparent is dead on here - sound reproduction is not a chain, it's a relay race. Any particular member of that race can single-handedly improve or worsen the reproduction.
  • Re:In other news (Score:3, Informative)

    by c ( 8461 ) <beauregardcp@gmail.com> on Monday October 19, 2009 @02:37PM (#29797601)

    > I'm still trying to figure out how to divide a group of
    > 16 people into thirds without staining the carpet.

    They disqualified the audiophile in the group who said they all sounded like crap compared to his $167,578 home rig.

    c.

  • by thisnamestoolong ( 1584383 ) on Monday October 19, 2009 @02:52PM (#29797817)
    No. This would only be true if the speaker masked the exact same frequencies as the MP3. In this case, you are losing frequency content at the source (lossy MP3 file) and are AGAIN losing frequencies at the speakers. I have found, at least in my experience, that low bitrate stuff is even more unbearable on low end gear than on better systems.
  • by E IS mC(Square) ( 721736 ) on Monday October 19, 2009 @02:57PM (#29797917) Journal
    >> When I switched from ipod ear buds to Sennheiser cx300...

    That's a good start. Now go ahead and change your music player too, to something better. I know this post will be downmodded real fast, but if anybody is interested, do a sound comparison of ipod against, say, iriver, with any same earbuds/headphones and hear the difference yourself.
  • Apples to oranges (Score:5, Informative)

    by MoxFulder ( 159829 ) on Monday October 19, 2009 @03:15PM (#29798235) Homepage

    This test isn't a complete experimental fiasco (like some of the Microsoft-sponsored listening tests that deem WMA to sound as good at 64k as MP3 at 128k).

    But there are a couple of significant flaws with it, that make the results pretty useless:

    • They used the AB method, rather than the superior ABX method [wikipedia.org]. In the AB method, a participant hears the two versions of the song, without knowing which is which, and must then choose whether one is better or whether they are equal. In the ABX method, the participant hears two distinct versions, then a third which is identical to ONE OF the first two. They are asked to figure out which of the first two samples is the same as the third. If they perform no better than chance at this task, it's a good indication that the null hypothesis may be correct. Which is very important, since modern audio codecs have gotten so good that their quality is often indistinguishable in practice. It's disingenuous to argue about slight degrees of preference without an attempt to determine their statistical significance.
    • We don't know exactly which codecs were used!!! There are many implementations of AAC+ encoders [wikipedia.org], which may differ markedly in quality (though in 2006, a credible ABX test [mp3-tech.org] found that none was preferred over another to a statistically significant degree at a 95% confidence level). Likewise, there are multiple implementations of Ogg Vorbis encoders [wikipedia.org]. The aoTuV patches, in particular, are widely considered to considerably improve sound quality.
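
    The statistical point about ABX can be made concrete: under the null hypothesis a listener guesses each trial with probability 1/2, so the one-sided p-value for getting k of n trials right is a binomial tail sum. A minimal sketch (the trial counts are illustrative):

```python
from math import comb

# One-sided binomial test: the probability of getting at least k of n
# ABX trials correct by pure guessing (p = 0.5 per trial under the null).

def abx_p_value(k, n):
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

# 12 of 16 correct is unlikely under guessing (p below 0.05), so it is
# evidence the listener really hears a difference...
print(round(abx_p_value(12, 16), 4))
# ...whereas 9 of 16 is entirely consistent with guessing (p around 0.4).
print(round(abx_p_value(9, 16), 4))
```

    An AB preference vote, by contrast, never produces a "no better than chance" outcome you can test this way, which is why the CNet numbers are hard to interpret.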

    If you want to know about some methodologically-better comparisons of audio codec quality, please see the Codec listening test [wikipedia.org] page at Wikipedia. Full disclosure: I wrote most of this article, and have attempted to compile the results of all the carefully-conducted independent tests that I could find.

    Finally, none of this is to say that we should all demand 160kbps streaming audio if 48kbps can be made to sound just as good. It's just that this study doesn't establish that, not by a long shot. The headline is also wrong in claiming that 1/3 of the participants couldn't distinguish 48k from 160k audio: in fact, they preferred the 48k audio. And preferring one format is very different from claiming that it is high-fidelity: for example, audio with a compressed dynamic range is by definition degraded, and yet it persists in commercial rock recordings [wikipedia.org] because uniformly loud music grabs listeners' attention more easily.

  • Yeah, I just don't get why people don't use FLAC for their own CDs. Whenever I get a CD, I rip it immediately to FLAC, label it with the metadata and whatnot so I never have to do that again, put it in an artist/album directory, and then encode it however I want, currently 320kbps mp3.

    The FLACs then get burned to a DVD and deleted from my computer. (Obviously, one of those where you can write multiple sessions to.) Although, like you said, I'm at the point where I could, instead, keep the FLACs around and play them. But I don't think there's a meaningful difference between lossless and 320kbps mp3, and I'd still need mp3s for my iPhone anyway. Yes, I know there's a lossless format for that, but it's only 8 gigs of space.

    If I ever need to change formats, like I did from 128kbps mp3 to 320kbps mp3, or like I did for my OGG experiment (which I gave up on), I just pull out the FLAC DVD, drag and drop the entire DVD to foobar2000 or lamedropXPd or whatever, and tell it to convert the files and write them wherever they go. Hopefully they can keep their paths, but if not, or if you change the format of the filename(1), it's easy enough to automatically move them, because they already have metadata.

    Every few hours I swap DVDs (Well, okay, I only have three, but in principle it would work for a very large library.)

    Entire music library converted with almost no work at all. I can't imagine the people who have to track down every CD they own and run them back through the computer, and type or lookup the metadata again, one CD at a time.

    Hell, I can't even imagine having to track down the original CDs if I wanted to burn a copy. I suppose more people just burn them off the mp3s or whatever, though.

    1) Ah, the eternal question: Do you put artist and album in the filename, or just the path? The latter makes more sense, but only if you have no non-album songs. I seem to flip back and forth every year or so, but luckily I have programs that make mass renaming easy.

    Right now I'm experimenting with having an Albums directory, where files are Artist/Album/1-Songname.mp3, and then have an entirely different structure for non-albums in a different directory.
