
1/3 of People Can't Tell 48Kbps Audio From 160Kbps

An anonymous reader writes "Results of a blind listening test show that a third of people can't tell the difference between music encoded at 48Kbps and the same music encoded at 160Kbps. The test was conducted by CNet to find out whether streaming music service Spotify sounded better than new rival Sky Songs. Spotify uses 160Kbps OGG compression for its free service, whereas Sky Songs uses 48Kbps AAC+ compression. Over a third of participants thought the lower bit rate sounded better."
  • If they used the "mosquito" - then lots of people would just randomly pick something :-) Or just say things like "Hey! What's that ringing in my ear!"

    • by ByOhTek ( 1181381 ) on Monday October 19, 2009 @12:14PM (#29796251) Journal

      You blame the sound, I blame the people.

      I think they should see if there's a correlation between the preferred quality and how much auto-tuned "music" the people listen to.

      • by FunkyELF ( 609131 ) on Monday October 19, 2009 @01:39PM (#29797627)

        I paid to get my TV ISF calibrated. It looks amazing. But if you brought it inside a Best Buy and sat it next to their other TVs, your average Joe would think it looks like crap.
        The TV manufacturers crank up the blue to make things appear brighter. People's faces turn green, so they up the red. Then they over-sharpen, which introduces artifacts, and over-contrast, which creates banding.

        Encoding audio in a lossy format no doubt does the same kind of thing. They make sure the music still "pops", to the point where it's exaggerated, making the music "sound" better.

        The people who say that 48Kbps sounds better than 160Kbps would probably say the same thing compared to the original.

      • Re: (Score:3, Insightful)

        by yurtinus ( 1590157 )
        You blame the people, I blame the editors.

        There is no study here. From TFA (which itself is barely longer than TFS), sixteen people were asked to state which song clip they thought sounded better. I'm surprised the results were better than 50/50. Per TFA, all listeners could *tell* a difference; the report was on which one they *preferred*.

        Really, there's nothing to see here.
      • Re: (Score:3, Interesting)

        Comment removed based on user account deletion
    • by beelsebob ( 529313 ) on Monday October 19, 2009 @12:29PM (#29796489)

      No, just the headline is massively misleading.

      The article actually states that people (a) could hear the difference (b) thought the lower bit rate stuff sounded better.

      The key being that the two were encoded with two totally different codecs.

      • by icebike ( 68054 ) on Monday October 19, 2009 @12:49PM (#29796831)

        The key being that the two were encoded with two totally different codecs.

        Exactly. And as such they are not comparable.

        This certainly does not say a thing about the ability of people to distinguish between a good encoding and a bad one when the only information provided was the bit rate.

        At best TFA is a testament to AAC. Says nothing about human ability to distinguish.

        A phone call, typically limited to a frequency response of roughly 350Hz to 3,500Hz, could be encoded at 192kbps, but 16kbps would probably suffice.
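
        A quick sanity check on those numbers, as a sketch (using standard telephony figures, not anything from TFA):

        ```python
        # Uncompressed narrowband telephony: an 8 kHz sample rate comfortably
        # covers the ~350-3,500 Hz voice band (Nyquist), at 8 bits per
        # companded sample (G.711).
        SAMPLE_RATE = 8000
        BITS_PER_SAMPLE = 8

        pcm_kbps = SAMPLE_RATE * BITS_PER_SAMPLE / 1000
        print(pcm_kbps)  # 64.0 -- even *uncompressed* phone audio fits in 64kbps

        # Dedicated speech codecs (e.g. AMR at 12.2kbps) compress well below
        # that, which is why ~16kbps is a reasonable figure for a phone call.
        ```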

      • Re: (Score:3, Informative)

        by sopssa ( 1498795 ) *

        Over a third of participants thought the lower bit rate sounded better.

        Another thing is that the majority of people actually have quite crappy speakers, at least on their computers. A lower bitrate sounds "better" on cheap speakers because it smooths away the highest-frequency content in the song.

        • by Chapter80 ( 926879 ) on Monday October 19, 2009 @01:53PM (#29797831)

          Since the 16 subjects were asked "which sounds better" and were not given the alternative "there's no difference," it's entirely possible that 12 of the 16 thought there was no difference and randomly picked A or B, and that 6 of those picked A.

          So it's possible that only 25% could tell the difference and selected the higher bit rate.

          Great study. Very Scientific.
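
          That scenario is easy to sanity-check with a quick simulation (the split below is assumed for illustration, not taken from TFA): even if only 4 of the 16 listeners can really hear the difference, the headline result shows up more often than not.

          ```python
          import random

          TRIALS = 100_000
          hits = 0
          for _ in range(TRIALS):
              # 12 guessers flip a coin; the 4 real discriminators never pick 48Kbps
              prefers_48k = sum(random.random() < 0.5 for _ in range(12))
              if prefers_48k >= 6:  # "over a third" of 16 ~ 6 listeners
                  hits += 1

          print(f"P(>=6 prefer 48Kbps | only 4 can tell) ~ {hits / TRIALS:.2f}")  # ~0.61
          ```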

      • Re: (Score:3, Interesting)

        by poetmatt ( 793785 )

        I was going to say, I have no idea why they would compare entirely different codecs here. Not to mention that lots of people simply aren't audiophiles, or don't have extremely discerning ears for quality. Plenty of tests show that AAC vs. Vorbis is situational, and sometimes one works better or vice versa. [hydrogenaudio.org]

        As a musician, I've had lots of times where, irrespective of the quality at which I play, people think everything is amazing/fantastic.

  • by N3Roaster ( 888781 ) <nealw&acm,org> on Monday October 19, 2009 @12:11PM (#29796187) Homepage Journal

    Are these the same people who prefer MP3 Sizzle [slashdot.org]?

    • Re: (Score:2, Interesting)

      by vertinox ( 846076 )

      Actually... IMO some electronic music sounds better with lossy compression.

      As it sounds more crunchy or crisp.

      Or maybe it seems just louder [wikipedia.org]. ;)

      • by -kevin- ( 90281 ) on Monday October 19, 2009 @12:18PM (#29796313) Homepage

        Or maybe it seems just louder [wikipedia.org]. ;)

        FYI, dynamic range compression is independent of data compression.

      • Re: (Score:3, Informative)

        by maharb ( 1534501 )

        Listen to some Infected Mushroom at 320 kbps and tell me that again. You can hear so much more when you have the *combination* of good inputs and high bit rates. If the input sucked to begin with, the bit rate doesn't matter, which could be the case in many of these "is it better" studies. I can EASILY tell the difference between some random electronic music and the godliness of recording that is an Infected Mushroom album (even if you don't like the music). And no, I am not an audiophile or anything

      • Re: (Score:3, Funny)

        by Anonymous Coward

        "IMO some electronic music sounds better with lossy compression"

        IMHO, hip-hop and rap sound infinitely better with 100% lossy compression but that's just me :-)

    • by Interoperable ( 1651953 ) on Monday October 19, 2009 @12:55PM (#29796937)

      Not to mention video compression [xkcd.com]

    • Apples to oranges (Score:5, Informative)

      by MoxFulder ( 159829 ) on Monday October 19, 2009 @02:15PM (#29798235) Homepage

      This test isn't a complete experimental fiasco (like some of the Microsoft-sponsored listening tests that deemed 64k WMA to sound as good as 128k MP3).

      But there are a couple of significant flaws that make the results pretty useless:

      • They used the AB method, rather than the superior ABX method [wikipedia.org]. In the AB method, a participant hears the two versions of the song, without knowing which is which, and then must choose whether one is better, or whether they are equal. In the ABX method, the participant hears two distinct versions, then a third which is identical to ONE OF the first two. They are asked to figure out which of the first two samples is the same as the third. If they perform no better than chance at this task, it's a good indication that the null hypothesis may be correct. (A minimal scoring sketch follows this list.) Which is very important, since modern audio codecs have gotten so good that their quality is often indistinguishable in practice. It's disingenuous to argue about slight degrees of preference without an attempt to determine their statistical significance.
      • We don't know exactly which codecs were used!!! There are many implementations of AAC+ encoders [wikipedia.org], which may differ markedly in quality (though in 2006, a credible ABX test [mp3-tech.org] found that none was preferred over another to a statistically significant degree at 95% confidence). Likewise, there are multiple implementations of Ogg Vorbis encoders [wikipedia.org]. The aoTuV patches, in particular, are widely considered to considerably improve sound quality.
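
      For concreteness, here is a minimal sketch of how an ABX session is scored (illustrative only, not CNet's protocol): each trial hides A or B behind X, and the question is how likely the listener's tally of correct identifications would be under pure guessing.

      ```python
      from math import comb

      def abx_p_value(correct: int, trials: int) -> float:
          """One-sided exact binomial: P(>= `correct` right answers by coin-flipping)."""
          return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

      # 12 correct out of 16 would be hard to explain as guessing:
      print(abx_p_value(12, 16))  # ~0.038, under the usual 0.05 significance threshold
      # 9 of 16, by contrast, is entirely consistent with guessing (~0.40)
      ```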

      If you want to know about some methodologically-better comparisons of audio codec quality, please see the Codec listening test [wikipedia.org] page at Wikipedia. Full disclosure: I wrote most of this article, and have attempted to compile the results of all the carefully-conducted independent tests that I could find.

      Finally, none of this is to say that we should all demand 160kbps streaming audio if 48kbps can be made to sound just as good. It's just that this study doesn't establish that, not by a long shot. The headline is also wrong in claiming that 1/3 of the participants couldn't distinguish 48k from 160k audio: in fact, they preferred the 48k audio. And preferring one format is very different from claiming that it is high-fidelity: for example, audio with a compressed dynamic range is by definition degraded, and yet dynamic range compression persists in commercial rock recordings [wikipedia.org] because uniformly loud music grabs listeners' attention more easily.

  • by Rei ( 128717 ) on Monday October 19, 2009 @12:11PM (#29796191) Homepage

    I've run my own blind tests (although not as low as 48kbps) and reached the same conclusion. Most people vastly overestimate their ability to distinguish tracks encoded at different bitrates. And I've seen study after study that backs this up. This includes self-professed audiophiles, the original authors of particular tracks of music, and so forth.

    • by endikos ( 195750 ) * <bill@endikos.com> on Monday October 19, 2009 @12:15PM (#29796275)

      Here's one such study conducted by the Audio Engineering Society:

      http://www.aes.org/e-lib/browse.cfm?elib=14195 [aes.org]

      • by Anonymous Coward on Monday October 19, 2009 @01:33PM (#29797541)

        I was actually a participant in this study, and despite their best efforts, their test was seriously flawed. Too many heads of other participants blocking the treble; the speakers were too directional; the amp was not even grounded properly, and as a result there was a 60-cycle hum throughout. The only way I would trust a study like this is if it were done on people who actually have listening experience, and in a controlled environment (headphones, in a room as close to anechoic as possible).

        I can fairly easily tell the difference between V0 MP3 and the PCM original release, but I spend hours doing close listening on good equipment.

        At the same time, I think people spend way too much time worrying about this shit. If you enjoy your music, and you've heard the difference between the CD and the MP3 rip and still don't care enough to re-rip in a higher bitrate or a lossless format, then good for you. Storage space is getting so cheap now that the argument that lossless formats aren't worth the space they take up no longer holds water. I'm a FLAC convert for many reasons, but most of all, the peace of mind that I (a.) am not missing anything, and (b.) won't be screwed over if a newer format gains popularity.

        • Re: (Score:3, Informative)

          by DavidTC ( 10147 )

          Yeah, I just don't get why people don't use FLAC for their own CDs. Whenever I get a CD, I rip it immediately to FLAC, label it with the metadata and whatnot so I never have to do that again, put it in an artist/album directory, and then encode it however I want, currently 320kbps MP3.

          The FLACs then get burned to a DVD (obviously, one of those you can write multiple sessions to) and deleted from my computer. Although, like you said, I'm at the point where I could, instead, keep the FLACs around

        • Re: (Score:3, Insightful)

          by brunes69 ( 86786 )

          If the only way you can tell the difference is in a "controlled environment" by someone with "listening experience" (whatever that means), then the difference is completely irrelevant, both to the average person and to the marketplace.

    • by Rei ( 128717 ) on Monday October 19, 2009 @12:18PM (#29796319) Homepage

      To elaborate: in my testing, I took a couple of random tracks (two Coulton rock tracks and two classical Christmas tracks, all FLAC), encoded them at 96k, 128k, 160k, and 192k Ogg Vorbis, decoded each back into its own WAV file, then distributed the decoded WAV files plus a WAV generated straight from the FLAC (all with randomized filenames) to the people who wanted to take part in the test. There was a statistically significant (although not universal) recognition that the 96k was the worst. There was a correlation on the 128k track, but not a statistically significant one (I may want to do this again with a larger sample size). And the 160k, 192k, and original tracks were as good as random.
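
      For anyone who wants to reproduce this at home, a sketch of the same pipeline (assuming the flac and vorbis-tools command-line utilities are installed; filenames here are illustrative):

      ```python
      import shutil
      import subprocess
      import uuid

      SOURCE = "track.flac"
      BITRATES = [96, 128, 160, 192]

      # Decode the FLAC to a reference WAV.
      subprocess.run(["flac", "-d", SOURCE, "-o", "original.wav"], check=True)

      candidates = ["original.wav"]
      for kbps in BITRATES:
          ogg, wav = f"enc_{kbps}.ogg", f"enc_{kbps}.wav"
          subprocess.run(["oggenc", "-b", str(kbps), "original.wav", "-o", ogg], check=True)
          subprocess.run(["oggdec", ogg, "-o", wav], check=True)  # back to WAV
          candidates.append(wav)

      # Randomize filenames so nobody (including the tester) can peek at the bitrate.
      answer_key = {}
      for wav in candidates:
          blind = f"{uuid.uuid4().hex}.wav"
          shutil.copy(wav, blind)
          answer_key[blind] = wav  # keep privately, to score responses later
      ```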

      Most people hear 128k and think, "How can a person possibly not get *that*?" But that's really a stereotype from the olden days. There's a huge difference between a 128kbps fixed-bitrate mp3 and a 128kbps VBR ogg. VBR makes a *huge* difference.

      • by godrik ( 1287354 ) on Monday October 19, 2009 @12:37PM (#29796647)

        I think it really depends on your audio setup as well. I used to have crappy speakers and could not tell the difference between FLAC and low-bitrate MP3 (I think it was fixed 128kbps).

        When I switched to better speakers, I could actually tell the difference. Despite that, I am sure I couldn't tell the difference between 192 VBR and FLAC.

        BTW, since hard drive space is cheap these days, I go with FLAC for everything.

    • by TiggertheMad ( 556308 ) on Monday October 19, 2009 @12:19PM (#29796325) Journal
      That's strange: I find it trivial to identify differing qualities of compression when listening to my music files.

      You look down at the UI, and it tells you what the bitrate is.

      (Joking aside, I have advocated 128 kbps for years, not because of sound quality issues, but rather because most people own cheap computer speakers and/or headphones. You only get quality as good as the weakest link in the system.)
      • by fuzzyfuzzyfungus ( 1223518 ) on Monday October 19, 2009 @12:25PM (#29796425) Journal
        While, as you say, most people have crap speakers/headphones and anything above 128kbps is largely a waste, there is one major reason to do it anyway.

        If you ever upgrade your hardware, dealing with all your old low-quality tracks is a pain. You can re-rip, suffer through, or throw them all away and get new ones; but it is a hassle. With storage so cheap these days, you might just want to include a little extra quality, in case you upgrade later.
      • by Cowclops ( 630818 ) on Monday October 19, 2009 @12:26PM (#29796439)

        And I've been telling people for years that the "weakest link" concept in audio reproduction is an oversimplification and therefore wrong.

        There are orthogonal distortion components introduced by various devices. An MP3's digital distortion (sizzle sounds, to borrow from another article somebody linked to) would be IN ADDITION TO poor frequency response and mechanical distortion. It isn't "masked" by it. And it doesn't take significantly more bitrate to go from "crappy" to "great": 128kbps CBR MP3 is pretty crappy, but 160kbps VBR MP3 is indistinguishable from the source even on great systems. I don't intend to argue about what bitrate you consider "sufficient," just that "listen to a low bitrate because you have crappy speakers" implies that crappy speakers mask MP3 compression artifacts.

        If I were to go out on a limb, I'd say it's possible for crappy speakers to distort even more with overcompressed MP3s than good speakers do.

        • by beelsebob ( 529313 ) on Monday October 19, 2009 @12:36PM (#29796629)

          Except that most of the compression gained from MP3 comes from removing frequencies we can't hear anyway; speakers with poor frequency response absolutely 100% do mask this.

          • Re: (Score:3, Informative)

            by qortra ( 591818 )
            That would be true, except that even crappy computer speakers these days can produce high frequencies just fine. Consider the following speakers, among the least expensive on Newegg [newegg.com]: they have an advertised frequency response of 100Hz to 20,000Hz, plenty of range to reveal encoding flaws. Yes, the actual frequency response might not be as good as advertised, but if they're anywhere close, they will not have any trouble revealing encoding flaws.

            In my experience, medium-high frequency reproducti
          • Re: (Score:3, Informative)

            No. This would only be true if the speakers masked exactly the same frequencies as the MP3. In this case, you are losing frequency content at the source (lossy MP3 file) and are AGAIN losing frequencies at the speakers. I have found, at least in my experience, that low-bitrate stuff is even more unbearable on low-end gear than on better systems.
      • by diamondsw ( 685967 ) on Monday October 19, 2009 @12:40PM (#29796685)

        Whereas I advocate the opposite, as disk space is cheap, and you really don't want to go to the hassle of ripping all of those CD's again. But to each their own.

      • Re: (Score:3, Insightful)

        by Hatta ( 162192 ) *

        As long as they're not passing around the MP3s, or never want to upgrade their stereo system, 128 is fine. If you ever want to do either of those, 128kbps MP3s will not be good enough.

    • by FrankSchwab ( 675585 ) on Monday October 19, 2009 @01:05PM (#29797125) Journal
      We did the same, ohh, 7 or 8 years ago. Took four tracks (a solo piano, a new Rolling Stones piece, a classical piece, something else), encoded them to 128/192/256 kbps CBR using the Fraunhofer codec of the day, converted them back to WAV files and burned them to an AUDIO CD. Each piece was put on the CD 5 times: The first was the raw track. The following four tracks were the raw track (again) and the 128/192/256 bit versions, in random order.

      Everyone at work was invited to take the disk home, play it on their home stereo, and tell me what each track had been encoded as. This took "computer" items (sound cards, speakers, etc) out of the loop, and let them evaluate on the best system that they had. Being as this was an engineering company with a lot of high-ego types, there was some pretty impressive equipment out there.

      50% of the people who took the challenge were unable to tell the difference between the encoding methods - they simply said "I listened to all five versions of each song, and they sounded exactly the same to me". Most of the others tried to assign bit rates to the various versions, but their results were essentially random - none of them reliably detected even the 128 kbps version. One guy was fairly confident in his results, and reliably detected the 128 kbps version of each song, but didn't make a guess on the higher bit rates as he couldn't tell the difference between them. One guy spent the evening with his spectrum analyzer trying to cheat on the test, but gave up.

      That's when I stopped worrying about bit rates, especially when I spend most of my time these days listening to music in my car over the factory sound system.

      /frank
      • Re: (Score:3, Insightful)

        by JerryLove ( 1158461 )

        The human memory for details seems to be measured in seconds. One big issue in blind-testing speakers is to make sure that you can switch almost instantly between two volume-matched pairs.

        Playing to the end of a song then listening to it again is not going to yield the best objective results; although it does say something subjective pretty strongly.

    • Re: (Score:3, Insightful)

      I've run my own blind tests (although not as low as 48kbps) and reached the same conclusion. Most people vastly overestimate their ability to distinguish tracks encoded at different bitrates. And I've seen study after study that backs this up. This includes self-professed audiophiles, the original authors of particular tracks of music, and so forth.

      This is true. Mostly.

      On most material, you cannot hear the difference; however, every once in a while (though rarely), there will be a song which not even a 320 kbps mp3 can encode properly[1], and there'll be distortion on cymbals or applause or a snare drum or a weird synth. If you don't know how the original is supposed to sound, you won't notice anything strange, but if you do and if you can recognize that "mp3 sizzle", which is far easier for those of us who have been dealing with mp3s ever since the d

    • Re: (Score:3, Interesting)

      by Guspaz ( 556486 )

      It's not just bitrate, at this point. They're comparing 160kbit Vorbis to 48kbit AAC+.

      AAC+ utilizes two tricks to make low-bitrate audio sound better: parametric stereo, and spectral band replication.

      The first, parametric stereo, stores the audio as monaural with an extremely low-bitrate sideband (2-3kbit/s) to store stereo information.

      The second, spectral band replication, stores only the low and mid frequencies explicitly. The upper frequencies are then recreated from shaped noise, which works quite
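
      Very roughly, the two tricks look like this (a toy numpy illustration of the idea only; the real AAC+ math is far more sophisticated):

      ```python
      import numpy as np

      def parametric_stereo_encode(left, right, block=1024):
          """Downmix to mono plus a tiny stream of per-block level ratios."""
          mid = (left + right) / 2  # transmitted as ordinary mono audio
          ratios = [np.std(left[i:i + block]) / (np.std(right[i:i + block]) + 1e-9)
                    for i in range(0, len(left), block)]  # a few bytes/s, not a channel
          return mid, ratios

      def sbr_reconstruct_highband(lowband_spectrum, envelope):
          """Fake the missing top octaves by reshaping a copy of the low band."""
          return lowband_spectrum * envelope  # shaped copy stands in for the real highs

      rng = np.random.default_rng(0)
      L = rng.standard_normal(4096)
      R = 0.5 * L + 0.1 * rng.standard_normal(4096)
      mid, ratios = parametric_stereo_encode(L, R)
      print(f"{len(ratios)} stereo parameters for {len(L)} samples")
      ```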

  • by overshoot ( 39700 ) on Monday October 19, 2009 @12:12PM (#29796197)
    on how long they've been cranking their music up to 11.
  • In other news (Score:5, Informative)

    by Etrias ( 1121031 ) on Monday October 19, 2009 @12:13PM (#29796223)
    So, 1/3 of people, eh? Hardly a damning assessment when your sample size is 16 people. Besides, most people I know, including myself, have some sort of hearing damage from the past, or don't really know what to listen for when presented with different types of sound.
    • Re: (Score:3, Insightful)

      by BESTouff ( 531293 )
      Moreover, their math is off: if 1/3 of participants gave the wrong answer, it means roughly 2/3 of participants couldn't tell the difference and chose randomly.

      ... given a sufficient sample size, as you noted of course.

      • by BForrester ( 946915 ) on Monday October 19, 2009 @12:46PM (#29796769)

        You think that math is troubling? I'm still trying to figure out how to divide a group of 16 people into thirds without staining the carpet.

        • by vlm ( 69642 ) on Monday October 19, 2009 @01:17PM (#29797301)

          You think that math is troubling? I'm still trying to figure out how to divide a group of 16 people into thirds without staining the carpet.

          Considering it's a lossy compression test, 16/3 = 5 is close enough for most people not to notice.

        • Re: (Score:3, Informative)

          by c ( 8461 )

          > I'm still trying to figure out how to divide a group of
          > 16 people into thirds without staining the carpet.

          They disqualified the audiophile in the group who said they all sounded like crap compared to his $167,578 home rig.

          c.

  • bad comparison? (Score:5, Insightful)

    by MacColossus ( 932054 ) on Monday October 19, 2009 @12:14PM (#29796233) Journal
    I would be more impressed if the same encoding format had been used. Both samples should have been Ogg or AAC, not a mix. If you compare AAC at 48 and 160, are the results different? Same goes for Ogg at 48 and 160.
    • by loftwyr ( 36717 )
      Exactly. The summary makes it sound like they're comparing bitrates when it's codecs they're comparing. So, in effect, by mixing codecs and bitrates, the test proves exactly nothing.
    • by Malc ( 1751 )

      Isn't AAC @ 48kbps the same as OGG at 128kbps?

      Very dumb comparison.

  • by Chris Mattern ( 191822 ) on Monday October 19, 2009 @12:14PM (#29796237)

    People who can't tell the difference have a 50-50 chance of getting it right. Therefore we can deduce that over *two-thirds* of the population can't tell the difference, by adding in the inferred members who couldn't tell, but guessed right.
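
    Spelled out as arithmetic (illustrative only):

    ```python
    # Guessers split 50/50, so for every listener who guessed wrong there is,
    # on average, one who guessed right.
    wrong = 1 / 3                 # observed fraction preferring the 48Kbps clip
    lucky = wrong                 # matching fraction who guessed and landed on 160Kbps
    cant_tell = wrong + lucky
    print(f"inferred fraction who can't tell: {cant_tell:.0%}")  # 67%
    ```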

    • Re: (Score:3, Funny)

      by jhol13 ( 1087781 )

      NOOOO!!!!

      Did you have to do it? You just ruined over a hundred (or more[1]) years' worth of mathematics in statistics.

      Now every Gallup poll done so far must be discredited, every medical experiment redone, eve..

      My brain hurts, I cannot even think of the chil..consequences.

      [1] depends on what your stat...calendar looks like

  • Apples and Oranges (Score:5, Insightful)

    by Shag ( 3737 ) on Monday October 19, 2009 @12:14PM (#29796247) Journal

    Do it with 48kbps AAC vs. 160kbps AAC, or 48kbps OGG vs. 160kbps OGG, and you might have something meaningful.

    Or, 48kbps AAC vs. 48kbps OGG, and 160kbps AAC vs. 160kbps OGG, if you want a flamewar...

  • Even worse (Score:5, Funny)

    by Anonymous Coward on Monday October 19, 2009 @12:14PM (#29796253)

    In a deaf listening test, 100% couldn't tell the difference between a 160Kbps OGG file and a cannon. Though 3% noted the smell of gunpowder.

    • but my friend, who has been deaf since childhood, does listen to music, from the standpoint of how it feels. His tastes weren't very different from many others of the time period (album rock, which has distinctive beats, etc.).

      He really didn't crank the bass, and it was never so loud that people around him would stop and point, let alone gesture.

      I only found this out after asking why he played his music so much; my level of ignorance about deafness was high enough that I had to ask.

  • by Chairboy ( 88841 ) on Monday October 19, 2009 @12:15PM (#29796255) Homepage

    If the higher compression audio had simply used this $500 Denon ethernet cable, the results would have been different:

    http://www.usa.denon.com/ProductDetails/3429.asp [denon.com]

    But seriously, can you make a sweeping statement like "People can't tell 48k audio from 160k" if you're also switching compression technologies? OGG vs. AAC is a whole article on its own; you just muddy the waters by making this about the compression rate.

    This is just a new version of the old megahertz myth from the CPU wars. Two different 2GHz processors from different manufacturers are not equal; we all finally figured that out, for the most part, right? Now we've moved onward... to the Kbps myth?

    • by iluvcapra ( 782887 ) on Monday October 19, 2009 @12:47PM (#29796791)

      Jesus I'd never seen those. Just for those of you at home, I'm a professional sound designer for films, and I use ethernet cables that I bought at Fry's for a couple bucks a piece.

      But seriously, can you make a sweeping statement like "People can't tell 48k audio from 160k"

      The issue isn't "can I tell 48kbps from 160kbps" -- the real question should be: can I tell the difference between 48kbps AAC and the original uncompressed recording? AAC can sound "better" or "good" under a lot of situations where it's significantly distorting the original program material. AAC was designed specifically to choose "good sounding" over "accurate" as the bit rates get lower and lower. Also, keep in mind that a side-effect of compressing an audio stream like this is that you'll strip away noise and unusual harmonics from the original, which might cause a lower-rate recording to "sound better," when in fact stuff that the producer actually has in his mix is being removed.

    • Re: (Score:3, Insightful)

      by MBGMorden ( 803437 )

      Indeed, it had to have been the cable.

      There is some wisdom in their story, though, even if their data is a bit extreme. I've never encoded at 48Kbps, but I certainly have at 64Kbps (if you honestly want to know: when ripping "adult" movies from DVD and trying to keep the file a certain size, it makes more sense to give the video extra bitrate and the audio less . . .), and I could certainly tell the difference between 64 and 128Kbps (which is what I rip most regular videos' audio tracks at). Now I've alway

  • As long as the sound is clean and there are no pops, crackles, static, or hissing, I couldn't care less what it is encoded at. To my ear there really is no difference.

  • ...it turns out that at least 1/3 of all people are over the age of 25.

  • by irchs ( 752829 ) on Monday October 19, 2009 @12:16PM (#29796281) Homepage

    Yeah, but they weren't listening through Monster Cable, you can't tell the difference between anything without Monster equipment...

  • bad title (Score:2, Insightful)

    by mbuimbui ( 1130065 )

    >> 1/3 of People Can't Tell 48Kbps Audio From 160Kbps

    Correction: Over a third of participants thought the lower bit rate sounded better.

    Those are not the same thing. To find out how many people thought they sounded exactly the same, I would have to RTFA.

  • There are a lot of things to mention in this article. They used VERY high-end hardware that can interpolate the sound, minimizing the clipping that makes things sound metallic. They also didn't mention which songs were chosen. A lot of music is mastered to sound good on poor-quality speakers, so the 48Kbps encoding may not actually be the limiting factor.
    At least there's going to be a new reason to sell audio snake oil now.
  • Relevant ? (Score:2, Interesting)

    by Jerome H ( 990344 )

    From the article: "We dragged 16 people". I'm no stats engineer, but isn't that far too low?

  • They are using two completely different codecs. Try 48kbps mp3 vs 160kbps mp3 and see.

  • OGG isn't an audio codec.
    CNET isn't a tech news site.

  • Summary misleading (Score:5, Insightful)

    by spinkham ( 56603 ) on Monday October 19, 2009 @12:20PM (#29796341)

    The summary is quite misleading.
    It sounds like 100% of the participants could tell the difference between the two encodings; just 1/3 of the people thought the simpler, cleaner, highly compressed version sounded better, while 2/3 thought the high-bitrate version sounded better.

    When choosing compression, the better way to go is to shoot for transparency [wikipedia.org] versus the uncompressed source, not which audio sounds better to your ears.

    That's why ABX [wikipedia.org] is the industry standard for compression comparison, not a simple AB test as in this experiment.

  • compared to what ? (Score:4, Informative)

    by Brigadier ( 12956 ) on Monday October 19, 2009 @12:20PM (#29796343)

    I say the only valid comparison is listening to the live music vs. the digital format. This way you compare to the original, and you're not just saying which sounds better (which is subjective). I once worked with an audio system designer, and everything was tested against analogue formats with various types of music, preferably classical because of its range in sound.

  • by mapkinase ( 958129 ) on Monday October 19, 2009 @12:20PM (#29796347) Homepage Journal

    "Of the 16 people tested"

    Good-bye.

  • 2/3rds can (Score:3, Interesting)

    by Galestar ( 1473827 ) on Monday October 19, 2009 @12:20PM (#29796351) Homepage
    Title of article should be: 2/3 of people CAN tell the difference...
  • Most people only really have broad demands on how their music sounds. Give them fairly deep bass, no obvious crackle at the high end, and they'll pretty much be happy with anything in between. If they're used to a "lower-end" listening experience to begin with (cheap headphones, laptop speakers, low-end stereos), then they'll be even less picky overall.

    It also wouldn't surprise me if a fair number of the participants just picked one arbitrarily, just for the sake of giving an answer.

  • let's be clear (Score:5, Insightful)

    by Vorpix ( 60341 ) on Monday October 19, 2009 @12:22PM (#29796401)

    this summary is misleading. they were asked to choose which they thought sounded better. the listeners DID notice a difference between the two, and for some reason 1/3 of the participants liked the lower bitrate version better. perhaps it had less harsh high tones or something about it was more pleasurable to them... that doesn't mean that the higher bitrate didn't honestly sound more accurate to the source material. Perhaps uncompressed audio should have also been incorporated into the test. If they still chose the lower bitrate over uncompressed, then it's clear that some listeners prefer the song with the changes inherent to compression.

    this was a very unscientific study, with a very small sample size, and really shouldn't be front page on slashdot.

  • Comment removed based on user account deletion
  • One third of the US population can't tell "shit from Shinola".

  • Preferences (Score:5, Insightful)

    by gorfie ( 700458 ) on Monday October 19, 2009 @12:26PM (#29796435)
    I used to sell audio equipment as a teenager, and I recall that different people had different ideas about what constituted quality audio. Some people liked deep, muddy bass; other people liked loud midranges; etc. I think the study's conclusion is all wrong: it's not that people can't tell the difference, it's that people sometimes prefer the lower bitrate. Personally, I just want things to sound representative of the real-life equivalent. :)
  • by whyde ( 123448 ) on Monday October 19, 2009 @12:29PM (#29796501)

    Today's low-bitrate MP3/AAC will be tomorrow's vinyl.

    I firmly believe that you prefer what you're accustomed to hearing in the first place. Most kids today have grown up hearing nothing better than highly compressed FM or low-bitrate MP3 music. They don't know anything better, and given the option of hearing better audio, perhaps even uncompressed, with a much larger dynamic range and a lower noise floor, they'll still gravitate to what their ears and brain have been trained to appreciate.

    Tomorrow's world will have "128Kbps MP3 Aficionado" publications extolling the virtues, "warmth," and "naturalness" of the low-bitrate MP3. And audiophiles will pay top dollar for crippled hardware and overcompressed, undersampled music tracks.

  • "Spotify uses 160Kbps OGG compression for its free service, whereas Sky Songs uses 48Kbps AAC+ compression"

    HOW does the compression efficiency of the two compare, and what royalties or patent rights apply to either?
  • What's the big deal? Two thirds of the people can't even tell decent music from out of tune shit.
  • I stopped at (Score:4, Insightful)

    by obarthelemy ( 160321 ) on Monday October 19, 2009 @12:33PM (#29796571)

    "we tested with Billie Jean"

    I don't hate that song.. but as a testing ground for music hardware/software, it sucks. And you should always test with different types of music.

    Also: small sample size (16); only 1 song in 2 versions, presumably always in the same order; hardware that has nothing to do with what anybody actually uses (does that lessen or worsen compression artifacts?); no control group (wanna bet that with two identical versions, song A or song B consistently comes out on top? Coke and Pepsi worked that one out long ago). No indication of how responses were collected (group? interviewer? biased?).

    made me chuckle. amateurs.

  • by rgviza ( 1303161 ) on Monday October 19, 2009 @12:50PM (#29796851)

    I was all excited that my new car came with a satellite radio receiver, then bitterly disappointed with the sound quality, and didn't subscribe. I'm considering replacing it with an HD receiver, once I hear one and find out what "HD" really means. Satellite radio is utter crap for sound quality.

    "CD-quality" MP3 is 320kbps. I can understand not being able to tell 48kbps from 160kbps (especially when a different codec is used for each; the quality of the codec and its configuration are key). It's hard to tell the difference between crap and sh*t. The test is only meaningful as a bitrate test if the same codec and encoding settings are used. Otherwise it's apples to oranges. The bitrate isn't nearly as important as how the stream is encoded, unless both streams are done exactly the same way (except for bitrate).

    This test smacks of Apple fanboyism. Do a real bitrate test using the same codec and settings (apart from bitrate) and I guarantee you'll get better listener accuracy.

    Why on earth would you do a bitrate test with two different codecs unless the test was really marketing propaganda for one of the codecs? /filed in the Apple marketing bullshit drawer

    This is a codec test, not a bitrate test. As a "can a user tell the difference between these bitrates" test, the results are completely worthless. It's more like a "AAC rulez! look a 48k AAC stream sounds as good as a 160kbps Ogg stream!" /barf

  • Seems a poor test (Score:3, Insightful)

    by JerryLove ( 1158461 ) on Monday October 19, 2009 @12:53PM (#29796909)

    Based on the article, the testing seems to have very little in the way of meaningful results.

    A single instance of a single song, with two different encoders, given to listeners who hear "more bass" as a quality, where the results were nearly an even split (two people shy of 50/50).

    To gather meaningful data: songs must be switched quickly; you should go through a variety of material (it's worth noting that some compression schemes have more trouble with certain types of sounds than others); and (ideally) there should be a reference from which to work.

    The goal of compression, in theory at least, is to maintain meaningful fidelity. Yes, that means that "the part we notice most" is most important: but that's no excuse for calling "a pleasant error" better than "correct reproduction".

    Of course, I've never tested these encoders. It's possible that the lower bitrate encoder did a better job.

  • There's no news here. The HE AAC codec (called AAC+ in the Coding Technologies implementation, and now called Dolby Pulse after Dolby's acquisition) is a highly advanced spectral band replication codec, and can be pretty darn transparent down to around 48 Kbps. That there was about a 2:1 preference for the high bitrate Ogg in a highly nonscientific small sample size test like this is a yawner. http://en.wikipedia.org/wiki/HE_AAC [wikipedia.org]
  • Pop Music (Score:3, Interesting)

    by StormyMonday ( 163372 ) on Monday October 19, 2009 @03:31PM (#29799585) Homepage

    Pop music is engineered to be played on cheap equipment. After all, that's what most people have. Practically nobody has ever heard Michael Jackson without a ton of electronics between them. You want a real comparison, use classical or jazz, where folks know what a *real* live performance sounds like.

    It's also notable that the people who liked the lower bit rate recording said "more bass == better". "More bass" has been the "gold standard" in pop music for a good number of years -- the harder it punches you in the stomach, the "better" it is.
