Music News

Can We Really Tell Lossless From MP3? 849

EddieSpinola writes "Everyone knows that lossless codecs like FLAC produce better sounding music than lossy codecs like MP3. Well that's the theory anyway. The reality is that most of us can't tell the difference between MP3 and FLAC. In this quick and dirty test, a worrying preponderance of subjects rated the MP3 encodes higher than the FLAC files. Very interesting, if slightly disturbing reading!" Visiting with adblock and flashblock is highly recommended, lest you be blinded. The article is spread over 6 pages and there is no print version.
This discussion has been archived. No new comments can be posted.
  • Not Really (Score:5, Insightful)

    by Mikkeles ( 698461 ) on Tuesday November 17, 2009 @10:14PM (#30139020)

    and certainly not in a typical house room, car, bus, or bike.

    • by goombah99 ( 560566 ) on Tuesday November 17, 2009 @10:21PM (#30139094)

      I'm sure I can tell that 128kbps MP3 is not so good; it sounds a bit hot to my ears. Oddly, this happens especially when there is clipping in the music (see, for example, Green Day) or shrieking trebles ("Battle Without Mercy", Kill Bill soundtrack). At first this seemed counterintuitive to me, since you'd think added distortion would be most easily hidden in music that is already distorted, right? My rationalization is that whatever the MP3 psychoacoustic model is, it works best for music with harmonies and tonal trajectories in distinct registers (bass, tenor, treble), and not for music that has all sorts of aliased frequencies randomly jumping around in volume. I don't really know, but I can hear it. With normal music you may not hear the change in intonation because it simply sounds equally good even if it is altered.

      But by 192kbps MP3 I cannot tell the difference. 128kbps AAC seems to be about as good.

       

      • by tech10171968 ( 955149 ) on Tuesday November 17, 2009 @10:59PM (#30139430)
        This could also have something to do with the way a lot of albums are mixed these days. Unfortunately, it seems that many studios are compressing the hell out of the music; I guess it has more to do with music industry execs thinking that their acts need to be louder to keep from being drowned out on the radio by the competition (who are also compressing their music into oblivion). I'm no audiophile but I abhor the practice; it has the effect of making the music come out of the speakers like a 747 on full throttle.

        The bandwidth "ceiling" also has the deplorable effect of not giving the tracks room to "breathe"; certain otherwise audible higher frequencies can get "lost in the sauce" (listen to an older recording and you'll hear the difference). The result is often akin to the difference between quietly closing a door and slamming it.
        • by ProfessionalCookie ( 673314 ) on Wednesday November 18, 2009 @03:14AM (#30140948) Journal
          I think you'll find they're compressing the hell into music.
        • by Terrasque ( 796014 ) on Wednesday November 18, 2009 @06:13AM (#30141752) Homepage Journal

          I think you're spot on with that guess. For example, the Red Hot Chili Peppers' CD release of Stadium Arcadium [wikipedia.org] has been especially criticized for being too compressed [youtube.com] (a result of the loudness war [wikipedia.org]). Someone at the hometheaterforum.com forum created a comparison between the CD [hometheaterforum.com] and the LP [hometheaterforum.com] release of the album (the LP had a much better mastering), where you can clearly see the difference.

          Now, the norm for most music released today is to mangle it in that way, and the audience is used to hearing it that way too. So MP3 compression adding more artifacts and removing tones, mangling the music further, might sound "better" to a lot of the audience, because that's what they're trained to hear.

    • by Charles Dodgeson ( 248492 ) <jeffrey@goldmark.org> on Wednesday November 18, 2009 @12:01AM (#30139864) Homepage Journal

      and certainly not in a typical house room, car, bus, or bike.

      I had been buying things from iTunes (128kbps AAC) and noticed no problems in my car or with my cheap computer speakers (with various computer noises in the room). I had, however, burned a few disks from iTunes and played them on my low end component system. Again, all was reasonably well until I played classical music that way.

      When I first played downloaded classical music on that system I thought that something was broken. It was truly and horribly unlistenable. It took me a while to isolate the problem, but after other disks played fine and this disk played "fine" in my computer and car I finally figured out what the problem was.

      Between that time and the introduction of iTunes+ (256kbps AAC) I stopped getting compressed classical (and some jazz) tracks.

      What was so surprising about this experience is that (a) I hadn't set it up as a test of my hearing, but I noticed the difference entirely spontaneously. Indeed it hadn't even occurred to me that this might be an issue. And (b) I don't at all consider myself to be an audiophile. My hearing really isn't all that good.

      The lesson is that what matters is what you hear with your music in your listening environment. In my most common listening environments it's all good. And with most of my music it's all good. But with a small subset of my music, in one of my listening environments, bit rates can make the difference between unlistenable and perfectly enjoyable.

    • Re: (Score:3, Interesting)

      by carlmenezes ( 204187 )
      The brain bases the "quality" of music you listen to on the majority of music you listened to in your younger years. If that has been mp3, well then you would "prefer" an mp3 sound, weird as that may be. This is the same phenomenon that is responsible for people preferring vinyl over CD, for example. Try the same experiment on your kids and yes, they will prefer the mp3 version. If you were already listening to a lot of music when mp3s hit the mainstream, you'll probably find you prefer the lossless version.
  • by FlyingSquidStudios ( 1031284 ) on Tuesday November 17, 2009 @10:14PM (#30139024)
    If the mix doesn't sound good on almost any device, it wasn't mixed well. Audiophiles seem to think we don't take into account the fact that most people don't have high-end audio gear and lossless audio.
    • by Brett Buck ( 811747 ) on Tuesday November 17, 2009 @10:25PM (#30139122)

      Oh, absolutely. There is no doubt that the biggest problem, by far, is the upfront engineering, not the file format. I have plenty of DDD CDs and other items where the digitization of the data involves essentially no loss - but they are still terrible recordings that are painful to listen to. Only when everything else is darn near ideal does the compression method/bit rate even become detectable. And in the vast, vast majority of cases - and, as far as I know, in every case involving a portable device - the conditions are not ideal. A crappy 128K MP3 of a good performance with good engineering can be a joy.

          The results are not at all surprising to me. And of course the "audiophile" community is "stuck on stupid" in some cases. ANYONE who thinks information recorded in tiny wiggles in grooves and played through a bunch of springs (stylus, cartridge coils, tonearm, not to mention the non-trivial compliance of the record itself) and then amplified by two to three orders of magnitude is a more accurate representation than a full digital string (almost independent of bit rate) is deluding themselves.

              Brett

      • by Shadow of Eternity ( 795165 ) on Tuesday November 17, 2009 @10:41PM (#30139274)

        You forgot that they also use special tripolarity magnetic alignment cordage with tru-neg vacuum standoffs to perpendicularly align the electrons and thus properly reproduce the non-hertzian frequencies.

      • Re: (Score:3, Informative)

        by mcgrew ( 92797 ) *

        ANYONE who thinks information recorded in tiny wiggles in grooves and played through a bunch of springs (stylus, cartridge coils, tonearm, not to mention the non-trivial compliance of the record itself) and then amplified by two to three orders of magnitude is a more accurate representation than a full digital string (almost independent of bit rate) is deluding themselves.

        You have no grasp whatsoever of how a turntable works. The "tiny wiggles in grooves" are a very precise analog of the sound waveforms themselves.

    • by Scaba ( 183684 ) <joe@joefran c i a .com> on Tuesday November 17, 2009 @10:56PM (#30139410)

      Other things audiophiles don't take into account:

      1. they can't tell the difference between lossless and lossy at a reasonable compression, either
      2. bragging about buying $5000 speakers makes you look like someone used lossy compression on your brain
      3. the average listener can tell the difference between having a conversation with a real person about music versus listening to an insecure nerd trying to one-up everyone.
      • by BlueWaterBaboonFarm ( 1610709 ) on Tuesday November 17, 2009 @11:20PM (#30139584)
        This sort of comment is a bit frustrating to me. Some people can pick up even the most minute differences between lossy and lossless. They may be a dramatic minority, but all the same, they are entitled to spend three months' salary on their equipment to enjoy music as they see fit. Similarly, I can't enjoy a $1000 bottle of wine any more than a $100 bottle; but that's no reason to say that a $100 bottle of wine is just as good as the $1000 wine because the vast majority can't tell the difference. I recognize the validity of your point, in that most can't tell the difference but would like to pretend they can.
        • by amRadioHed ( 463061 ) on Wednesday November 18, 2009 @12:08AM (#30139916)

          Likewise, there's no reason to say that a $100 bottle of wine isn't better than a $1000 bottle just because someone is willing to pay more for it. Frankly, anything over $30 is a waste of money. All you're paying for is rarity, not quality.

          • by EQ ( 28372 ) on Wednesday November 18, 2009 @12:39AM (#30140150) Homepage Journal

            Likewise, there's no reason to say that a $100 bottle of wine isn't better than a $1000 bottle just because someone is willing to pay more for it. Frankly, anything over $30 is a waste of money. All you're paying for is rarity, not quality.

            I think you missed it, regarding wine -- you have it backwards. Quality is rarity. Poor-quality stuff is very common. Higher quality is usually a fortunate circumstance of a particular harvest of a particular grape in a particular area of a particular vineyard, combined with a good vintner's touch. So high quality is a rare thing. It's not the rarity itself that makes it pricey; it's the fact that high-quality wine is remarkably rare and therefore pricey.

            • Re: (Score:3, Informative)

              by hey! ( 33014 )

              No, that's not quite right with wines either.

              It used to be that anybody could buy a good wine -- if he was willing to shell out the money. Any fool could walk into a reputable wine store with $100 and walk out with a very good bottle of wine. The expert was somebody who could walk out with a good bottle of wine after spending $15.

              Things have changed. Vintners are very scientifically skilled at producing very consistent, reasonably good wine no matter what the year's growing conditions were like. As a resul

  • by syousef ( 465911 ) on Tuesday November 17, 2009 @10:15PM (#30139030) Journal

    128kbps is certainly not enjoyable for certain classical pieces. By the time you've hit 192, it's fine. At 320kbps I can't tell the difference. If that means I have "tin ears", I'm thankful for them. They save me thousands of dollars in high-end equipment, and they save me from using obscure, poorly supported lossless formats and then having to convert to MP3 half the time anyway.

    Apart from a new survey of an old topic is there anything new here?

  • Comment removed (Score:5, Interesting)

    by account_deleted ( 4530225 ) * on Tuesday November 17, 2009 @10:16PM (#30139042)
    Comment removed based on user account deletion
    • That's largely the point. The "good enough" mark depends on the complexity of the music more than just about anything else. Some very simple music might sound very good at only 128kbps, whereas more complex music might demand the full 192kbps.

      I thought the conclusion going back quite a while was that 192kbps was good enough for pretty much anybody, and that any more than that was really just for specialty use.
      • Re: (Score:3, Insightful)

        by cptdondo ( 59460 )

        I think a large part of it is also how the music is recorded. Older recordings were made at a much lower level, taking advantage of the full dynamic range of the medium. Newer recordings are all packed into the loudest little bit, so the dynamic range is compressed.

        Add to that the simple fact that most people today listen to music that's digitally encoded on tiny little earplugs.

        Now expose them to a full orchestra in a well-designed sound hall. They simply have no basis for hearing the range of

    • by Sycraft-fu ( 314770 ) on Tuesday November 17, 2009 @10:54PM (#30139390)

      One problem is the simple A/B test that asks which version people like better. That is fine if you are doing something like testing two compression formats to see which sound people prefer. It is not fine if the question is whether people can tell the difference between compressed and uncompressed music. For that you need an ABX test: X is a reference uncompressed sample, and A and B are randomized such that one is uncompressed and one is not. People are then asked to identify the one that is the same as X. A test like that lets you tell whether people can hear a difference, regardless of whether they like it or not (a minimal sketch of one such trial follows this comment).

      There is also another reason people might choose uncompressed music: any additional processing (like equalization) planned for later. Psychoacoustic compression schemes can have problems when processed further, because they rely on things like masking - because X is happening, we can't hear Y. Once the balance of the sound is altered, that isn't necessarily the case anymore.

      How important is that? Probably not very, in a lot of cases. However, how important is storage space? Last I checked, 1TB was under $100. Storage is cheap. There's not really a need to milk every last bit out of a file. FLAC'd discs are in the realm of 300MB for a full CD. Big deal. I've got space to spare, so why not go lossless?

      What it really comes down to is that "good enough" depends on the situation: on the music (some kinds cause more trouble for encoders), the listener, the environment, storage constraints and so on. I mean, 64k is good enough to recognize the music. A 64k AAC or WMA is fine - FM radio quality, maybe - and even a 64k MP3 is listenable. Is there distortion compared to what was on the CD? Sure, but maybe it is good enough in some situations (like, say, needing to transmit stereo audio on a single DS-0 channel).

      I really don't like these tests that try to give the one magic rate that is good enough for all situations, especially when they use bad testing methodology.

      Personally, I'm a fan of lossless compression because then there just aren't any additional errors. I've got the space, so why not eliminate potential problems?
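
      A minimal sketch of one such ABX trial, assuming hypothetical play_clip() and ask_listener() callbacks that stand in for whatever playback and answer-collection mechanism is available (they are not part of any real library):

        import random

        def abx_trial(original, compressed, play_clip, ask_listener):
            # X is always the uncompressed reference; A and B are randomized
            # so that one of them is the original and the other is the encode.
            if random.random() < 0.5:
                a, b = original, compressed
            else:
                a, b = compressed, original
            play_clip("A", a)
            play_clip("B", b)
            play_clip("X", original)
            answer = ask_listener("Which clip is identical to X: A or B?")
            return (answer == "A") == (a is original)

        # Over, say, 16 trials, pure guessing gets 12 or more right only about
        # 3.8% of the time, so a listener who scores that well very likely
        # hears a real difference.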

  • Quick and dirty tests are not good enough to test this.

    We need significant sample sizes, double-blind testing, and appropriately rigorous scientific methodology.

  • Comment removed based on user account deletion
    • Re: (Score:3, Insightful)

      by mcgrew ( 92797 ) *

      It's my understanding that when you rip CDs to WAV or FLAC, you don't have an option to normalize your audio like you do with MP3s

      There's no need to; they're bit-for-bit identical (well, WAV is; FLAC or SHN are after decompression). Your WAV or FLAC or SHN will sound exactly the same. Your MP3 or OGG won't.
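
      A quick way to check that bit-for-bit claim yourself - a sketch assuming the third-party soundfile package (a libsndfile wrapper that decodes both WAV and FLAC) is installed:

        import numpy as np
        import soundfile as sf  # third-party; decodes WAV and FLAC via libsndfile

        def same_samples(path_a, path_b):
            # True when both files decode to identical PCM sample data.
            data_a, rate_a = sf.read(path_a, dtype="int16")
            data_b, rate_b = sf.read(path_b, dtype="int16")
            return (rate_a == rate_b and data_a.shape == data_b.shape
                    and np.array_equal(data_a, data_b))

        # same_samples("track01.wav", "track01.flac") should be True when both
        # came from the same rip; a decoded MP3 of the track will not match.
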

  • Audiophiles have known for decades that most listeners cannot discern excellent from mediocre music. Most people think that if there is lots of bass and the music is loud without obvious distortion, their system is great.

    • Ugh (Score:5, Insightful)

      by TrekkieGod ( 627867 ) on Tuesday November 17, 2009 @10:57PM (#30139424) Homepage Journal

      Audiophiles have known for decades that most listeners cannot discern excellent from mediocre music. Most people think that if there is lots of bass and the music is loud without obvious distortion, their system is great.

      Most people have known for decades that audiophiles are full of crap. Every single time I've seen a double-blind test of whether they can hear what they claim they can hear, it turns out they can't. Hey, good for the people selling them $1,000 audio cables.

      That said, there's a good reason to go with FLAC. Want to re-encode a lower-quality version for your storage-space-limited device? You can do that without additional quality loss, just like re-ripping from the CD. Want to change your collection to Ogg because it sounds better at lower bitrates? Again, go ahead.

      Basically, it's nice having a hard drive copy that is lossless, because you can re-encode it into the lossy codec of your choice for whatever device you want without introducing further artifacts.

      • Re: (Score:3, Interesting)

        by Jared555 ( 874152 )

        Intelligent audiophiles don't fall for the $1000 cables, etc.

        When you want to listen to a lot of movies at Dolby reference levels without any noticeable distortion in a larger room, you are going to spend a lot more on speakers, because movies frequently output levels below 20Hz (even if you can't hear the sound itself, you can feel it, and you can definitely hear the port noise on the subwoofer).

        The problem is a lot of the people who think they are audio knowledge gods will buy the $1000 cables even if lab eq

  • by TheReaperD ( 937405 ) on Tuesday November 17, 2009 @10:18PM (#30139060)

    I have found that though I can tell the difference between a FLAC and a 128Kbps MP3, most of my friends can't. Most of them, if I play the same song back to back - one FLAC and one MP3 - will almost always pick the MP3. :( Thus far, apart from my own listening, the only reason I can justify ripping things to FLAC is that I can then convert the file to whatever lossy compression format is needed - MP3, AAC, Ogg, etc. - for portable music players (yes, people, the iPod is not the only music player) without the double compression loss.

    • > I have found that though I can tell the difference between a FLAC and 128Kbps
      > MP3, most of my friends can't. Most of them, if I play the same song back to
      > back, one FLAC and one MP3, they will almost always pick the MP3.

      Well, then, they obviously _can_ tell the difference.

  • You may not be able to tell the difference between MP3 and the original CD audio, but as soon as you subtract the right channel from the left channel, you sure can. Elements which would perfectly cancel from subtraction instead sound warbly.
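
    A rough sketch of that left-minus-right test, assuming the MP3 has already been decoded to a stereo WAV (for example with a player or ffmpeg) and that the third-party soundfile package is available:

      import soundfile as sf  # third-party; reads and writes WAV files

      def write_difference_signal(stereo_wav, out_wav="difference.wav"):
          data, rate = sf.read(stereo_wav)  # shape (samples, 2) for stereo
          diff = data[:, 0] - data[:, 1]    # left minus right
          sf.write(out_wav, diff, rate)
          return out_wav

      # Doing this for a CD rip and for the decoded MP3 of the same track, then
      # listening to the two difference files, makes the "warbly" residue easy
      # to hear.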

  • by Anonymous Coward on Tuesday November 17, 2009 @10:19PM (#30139074)

    I definitely can tell the difference between MP3 and Lossless.

    When you're an audiophile like myself who has invested in Monster (tm) branded cables, the actual bits are richer and reproduced more faithfully than with the gear the plebs use.

    Protip: Also use Denon Link Cables [audiojunkies.com] with the built-in packet directionality device. Your TCPs won't know which way they are going without it.

    • Re: (Score:3, Funny)

      by Tynin ( 634655 )
      A protip that doesn't suggest a wooden volume knob?! Any audiophile worth his Monster cables knows that micro vibrations created by the volume pots and knobs find their way into the delicate signal path and cause audio degradation. I imagine you knew this, but the fact you omitted it is a disservice to this great community!
    • Re: (Score:3, Funny)

      by jrumney ( 197329 )

      When you're an audiophile like myself who has invested in Monster (tm) branded cables, the actual bits are richer

      It's a basic law of commerce: the bits are richer because you are poorer.

  • It depends (Score:3, Interesting)

    by Midnight Thunder ( 17205 ) on Tuesday November 17, 2009 @10:19PM (#30139078) Homepage Journal

    It all really depends on the bit rate of the MP3, the type of music you are listening to, and the equipment you are using to listen to the music. It also depends on whether you know what you are listening for. For example, between 128kbps and 192kbps MP3 I find the former flatter than the latter.

  • by Snowblindeye ( 1085701 ) on Tuesday November 17, 2009 @10:21PM (#30139090)

    Most people greatly overestimate how well they can hear these differences, but they never actually try it in ABX testing. I tried it years ago, and I can't hear a difference between most codecs at reasonable bitrates and the unencoded originals.

    Here is an old classic from the Hydrogenaudio forums, from someone who bought expensive headphones and set up ABX testing. He was very shocked when he couldn't even tell the difference between FLAC and Vorbis at 64kb/s.

    ABX Just Destroyed My Ego, My perception of my bitrate needs was greatly inflated. [hydrogenaudio.org]

    • Re: (Score:3, Insightful)

      MP3 reacts poorly to cymbals at low bit rates. They get muddled by the codec and come out sounding horrid. Go listen to the Prelude in Bizet's Carmen opera.

      At 128Kbps MP3, it sounds horrid, even on mid-range hardware and headphones. Bump that up to 160Kbps and it's passable, or go up to 192/256/320. Whichever gooses your willies.
  • The conclusion of the article is significant: "The only person to get all four tracks right is someone who listens to their headphones at pitifully low volumes and hasn't attended any rock concerts. We can think of two explanations. One, the subject has particularly sensitive ears, so doesn't need to turn the volume up high. Two, the subject hasn't wrecked their hearing through years of listening to a walkman/MP3 player at high volumes and/or seeing Motorhead at the Hammersmith Odeon. Arguably, both apply."
  • by topham ( 32406 ) on Tuesday November 17, 2009 @10:24PM (#30139114) Homepage

    There is a reason for it, and it isn't what most people think.

    It's related to how the brain handles white balance when it comes to colours. Your brain compensates for missing or contradictory information. After a while you get used to it and don't notice it, and then when you are presented with something closer to "perfect" you may or may not recognize it as being all that different.

    Sat Radio has relatively poor quality, but after listening to it for an hour or two the artifacts get filtered out by my brain (all but the worst ones anyway) and I don't notice it; but expose somebody to it for the first time and they will cringe.

  • Hmmm... (Score:3, Insightful)

    by Knightman ( 142928 ) on Tuesday November 17, 2009 @10:25PM (#30139118)

    I kinda find it funny that you need to have adblock and flashblock to visit a site named TrustedReviews so your browser doesn't go into a tailspin... It's like having Sid Fernwilter smile at you and say "Trust me!"

    Anyway, 192kbps MP3s are good enough for most people, so I don't really see the point of FLAC unless you are an audiophile, in which case you don't touch lossy/compressed music anyway.

  • I've never been able to hear the difference but my hearing isn't great and I'm not a music person so I wasn't completely sure. But this isn't that surprising. Note how the audiophile community has so many strange ideas about what sounds better that James Randi has actually bothered to include some of their claims as acceptable for his million dollar challenge (this is a prize if you can demonstrate supernatural or paranormal abilities under controlled conditions- http://www.randi.org/site/index.php/1m-chal [randi.org]
  • by Cowclops ( 630818 ) on Tuesday November 17, 2009 @10:26PM (#30139130)

    I've been saying this for years - it is not hard to reach a point where an MP3 is indistinguishable from the uncompressed source, "even if you have top notch equipment and well-practiced hearing skills."

    It is basically a scale of bitrate vs. the odds that the recording will be indistinguishable at that bitrate.

    My personal experience tells me that most songs are audibly degraded at 128kbps, some songs are audibly degraded at 160kbps, few songs are audibly degraded at 192kbps, and nothing I've yet experienced is audibly degraded at 256kbps. And this is being conservative... with a superior modern encoder like LAME, MP3 may be even harder to distinguish at 128kbps than you might expect. Other codecs besides MP3 could be even better, but I don't have enough experience with them, so I can't comment there. Plus, VBR makes the situation even better: you could have a lower average bitrate but still achieve a signal that's indistinguishable from the original.

    Nonetheless, I just rip all my music as .wav now for archiving. To me it's not even worth the effort to convert that to FLAC or other lossless codecs, because that just means an additional decoding step if I ever want to use the music for purposes besides playing it live in Winamp. An $80 1TB hard drive can hold $19,000 worth of uncompressed CDs. Sure... in FLAC format I could store more like $60,000 worth... but who has a $20,000 CD collection, let alone a $60,000 one?

    Anyway, one set of counterarguments I've heard comes from neurotic audiophiles who think "mathematically lossy" means "audibly lossy." People in that category justify multi-thousand-dollar power cables for their amplifiers and claim night-and-day differences, so their opinions can safely be ignored.

    The other end of the fence says low-bitrate stuff sounds "perfect." In my experience, when presented with a reasonable comparison, even audio-ignorant people can tell the difference between a crap 128kbit MP3 and the original, but that difference might not be immediately obvious on, for example, built-in laptop speakers.

  • by gringer ( 252588 ) on Tuesday November 17, 2009 @10:28PM (#30139150)

    Sure, MP3s sound better than FLAC, but if you used *both*, you'd get even better sound.

  • MPEG-1 Layer 3 (MP3) encoding was designed as a "perceptual encoding" algorithm where less "effort" (fewer bits) is given to signals that fall below a threshold based on the other signals present. For example, a quiet tone close in frequency to a loud tone cannot be heard by the human ear, so no effort needs to be expended on reproducing it. All we're debating is whether the engineering behind this is sufficient. Certainly at lower encoding rates the distortion characteristics get very weird, though
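
    As an illustration of the masking idea only - a deliberately crude toy rule, not the psychoacoustic model of any real encoder; the -16 dB offset and the 10 dB-per-octave fall-off are made-up round numbers:

      import math

      def probably_masked(masker_db, masker_hz, probe_db, probe_hz):
          # Toy rule: a loud tone hides tones more than 16 dB below it at the
          # same frequency, with the masking effect falling 10 dB per octave.
          octaves_apart = abs(math.log2(probe_hz / masker_hz))
          threshold_db = masker_db - 16.0 - 10.0 * octaves_apart
          return probe_db < threshold_db

      # A 40 dB tone at 1.1 kHz next to an 80 dB tone at 1 kHz is judged masked,
      # so an encoder could spend almost no bits on it; a 70 dB tone two octaves
      # away at 4 kHz is not.
      print(probably_masked(80, 1000, 40, 1100))  # True
      print(probably_masked(80, 1000, 70, 4000))  # False
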
  • by Leebert ( 1694 ) * on Tuesday November 17, 2009 @10:35PM (#30139214)

    A good part of the reason that people use FLAC et al. is NOT for listening, but to avoid re-ripping CDs or transcoding when switching lossy formats.
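
    For example, keeping a FLAC master means switching lossy formats is always a single encode from the lossless source instead of a lossy-to-lossy transcode. A sketch assuming an ffmpeg build with libmp3lame is on the PATH:

      import pathlib
      import subprocess

      def flac_to_mp3(flac_path, bitrate="192k"):
          # Write an MP3 copy next to the FLAC master; the master stays untouched.
          src = pathlib.Path(flac_path)
          dst = src.with_suffix(".mp3")
          subprocess.run(
              ["ffmpeg", "-i", str(src), "-codec:a", "libmp3lame", "-b:a", bitrate, str(dst)],
              check=True,
          )
          return dst

      # Switching to AAC or Ogg later just means re-running against the same
      # FLAC files rather than stacking a second generation of lossy artifacts.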

  • by bmo ( 77928 ) on Tuesday November 17, 2009 @10:36PM (#30139224)

    With most real music (as in not coming out of a sequencer with the highs already filtered out), yes, you can tell if your upper frequency hearing is toasted by too many rock concerts. You can tell most definitely with some specific songs that sound like crap even in the vocal range if it's lossy ("Sad To See the Season Go" by Cowboy Junkies, in particular).

    Hi-hats or any other cymbal, bells, glockenspiels, etc. all sound like shit in anything below 256. I can't describe the distortion other than to say it sounds hissy. Go ahead, listen to ANY Police tune at a low bitrate. I defy you not to cringe at how MP3 ruins Stewart Copeland's percussion.

    The only music that doesn't suffer badly from mp3's lossy distortion is electronica and its related genres. Erasure sounds just fine at 192.

    --
    BMO

  • by the eric conspiracy ( 20178 ) on Tuesday November 17, 2009 @10:36PM (#30139226)

    I've been exposed to people who write audio codecs for a living. They can tell, because they've become sensitive to the artifacts present in MP3s. They can also pick up problems with CDs that haven't been dithered properly. They can easily pick out MP3 even at 320kbps. These are specialists. But even in this study there was one individual who had a high success rate.

    At 192K, with a good pair of headphones and good material, I think most people could learn pretty quickly to pick up the difference - loss of stereo image at higher frequencies is pretty easy to hear.

    There are also studies available that point out the advantages of high bit rate recordings - these enable the use of sophisticated filters that eliminate some of the issues present with CD sound. If you are interested and have a mathematical bent, look up the work of Meridian's Peter Craven. Again the differences can be detected by specialists. I'm old enough so that my ears are not good enough to pick up these improvements.

    I rip to FLAC and convert for my portables because of these factors.

    If you want to try some testing yourself, visit Hydrogenaudio. They have apps set up to do ABX comparisons so you can test yourself.

  • by addikt10 ( 461932 ) * on Tuesday November 17, 2009 @10:37PM (#30139238)

    For me, it is easy. If I spend hours listening to lossy compressed music, I start to get headaches. It doesn't happen when I'm listening to losslessly compressed music.

    For me, that is end of story.

  • Who cares? (Score:4, Insightful)

    by mister_playboy ( 1474163 ) on Tuesday November 17, 2009 @10:43PM (#30139290)

    The small size of lossy audio was an important factor when storage capacity was limited. That is no longer an issue, so there's not much reason to bother with lossy music given the storage capacity of current devices. 100GB of music would be an absolutely massive collection, yet it would occupy less than 10% of a US$100 1TB drive. The 16GB common in portable devices is enough for more FLAC than you would listen to on even a fairly lengthy journey (rough numbers after this comment). Lossy audio is certainly still of use in streaming media, but the bar for quality isn't usually set very high in that area. Full-CD-quality FLAC streams should be usable on home broadband within 5 years, I would hope...

    The reasons to argue against FLAC just aren't that relevant anymore. Bits are cheap, who cares if you save a few?
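
    Rough numbers behind that, assuming the ~300MB-per-album FLAC size mentioned elsewhere in the thread and a nominal 45-minute album:

      flac_album_mb = 300    # rough full-CD FLAC size cited earlier in the thread
      album_minutes = 45     # assumed typical album length

      albums_in_100gb = 100_000 // flac_album_mb            # ~333 albums
      albums_in_16gb = 16_000 // flac_album_mb              # ~53 albums
      hours_in_16gb = albums_in_16gb * album_minutes / 60   # ~40 hours of music

      print(albums_in_100gb, albums_in_16gb, hours_in_16gb)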

  • Most people have spent so much time with iPod earbuds that they've killed their hearing, and that's why they can't tell the difference between formats. Besides, I think most audiophiles would agree that it's file format + speakers/headphones that make a difference.

    Now, I'm not saying that everything should be in FLAC and you should blow your budget on $500 headphones, since most people probably won't be able to tell the difference. However, I consider it just an accomplishment if people can enjoy their music
  • by evilviper ( 135110 ) on Tuesday November 17, 2009 @11:42PM (#30139740) Journal

    I did a study myself... One at a time, I took people off the street, and told them to make a rocket that could go into space. None of them could. The result is clear: space travel is impossible.

    Lossy audio coding is an area of intensive scientific study. All the comments here amount to a bunch of 6 year-old kids debating where babies come from...

    The answer to the question is quite simple, and has been known since the 1980s. The rule of perceptual entropy is that you need a minimum bitrate of 176kbps for 44.1kHz stereo (the arithmetic is sketched after this comment). If you're encoding below that, it can't possibly be indistinguishable from the original. ITU-R BS.1116-1 testing has borne that simple fact out over and over again.

    And don't bother claiming your 192kbps MP3s sound perfect, either. MP3 is certainly not the ideal audio format, so it doesn't come that close. But much more importantly, it (like all low-bitrate audio codecs) is a frequency domain codec, making it impossible to avoid pre-echo and the like AT ANY BITRATE. MP3, AAC, Vorbis, et al. just can't possibly do it.

    The only possible competitors for indistinguishable (transparent) lossy audio coding are time domain codecs, primarily: MPEG-1 Layer II, and Musepack. Some hybrids like AC-3 exist as well.

    Amateur testing is pretty pointless... You're no longer judging which sounds more like the original; you're picking the one whose distortions you like more. Low-bitrate codecs often throw in a relatively small amount of noise, which masks artifacts and simply sounds sufficiently different that it's no longer the same audio. Compare a song (from a CD) to the same song after normalizing the volume, and you'll have the same problem... You'll probably pick the modified version as sounding better, even though both are lossy and, at first glance, the same audio.

    I can certainly imagine the next generation of lossy audio codecs will pitch-shift music to an octave people generally prefer, to get a higher rating on such "tests". Cheap digital cameras often do the same thing... over-correcting gamma to make every picture more white (bluish, really) and turning up the contrast to make it more vivid, so much so that it looks "better than the real thing".
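
    The arithmetic behind the 176kbps figure quoted above, assuming the roughly 2-bits-per-sample perceptual-entropy estimate that the figure implies:

      bits_per_sample = 2      # assumed perceptual-entropy estimate behind the figure
      sample_rate_hz = 44_100
      channels = 2

      min_bitrate = bits_per_sample * sample_rate_hz * channels
      print(min_bitrate)       # 176400 bits/s, i.e. the ~176kbps quoted above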

  • by DynaSoar ( 714234 ) on Wednesday November 18, 2009 @12:50AM (#30140230) Journal

    The present study suffers from that methodological malady known in scientific circles as being "fucked". Please bear with me as I explain this technical term.

    The question posed in the text is 'can we tell the difference'. One assumes from this that the answer is yes or no. Testing this question would require playing two versions and asking whether they're the same (can't tell the difference) or different (can etc.).

    But that's not what gets asked. The subjects get asked to tell which version sounds better. The question assumes they can tell the difference. Even if they can't tell the difference they are forced by the design to choose one over the other as if they can.

    Since they are forced to say which sounds better even if they can't tell the difference (something impossible to determine from this design) then they are simply guessing or picking one arbitrarily, and there is no way to determine if or when this occurred. Thus, the results are not only unable to answer the original question, they are unable to answer anything because the data do not even necessarily represent answers.

    The design is so fatally flawed that there is nothing that can be pulled out of it. It's complete garbage.

    As an aside, I'm not familiar with the musical pieces used, but I'm betting they're fairly new. For years now recordings have been increasingly compressed by the engineers. Most popular works produced in this decade are already so compressed that you can't tell much difference between the original and a recording of it having been compressed yet again, no matter by what method.

    To tell the difference between compressed versions one should start with an uncompressed source. And for a person to be able to hear a difference in two versions, they should already be familiar with the original in uncompressed form so they can try to say whether one sounds more like the original than the other (the alternative being both sound worse or both sound like it). If they have no clue what it's supposed to sound like, any attempt to say which sounds better is badly broken due to having no reference with which to compare them.

    No attempt was made to determine whether the subjects even had normal hearing. And I don't mean just asked (though that should be done) but tested. People can have frequency drop outs that they're unaware of and that would affect the results.

    There are so many problems with the study that it is completely useless. The problems were of the authors' making. Thus, they did not know what they were doing. This is what we mean by "fucked".

    I want to know who determined that 'trusted' was a good name for the magazine/blog/honey wagon in which the article appears. I wouldn't trust them to test light bulbs to see if they're burnt out.

    • Since they are testing people's perceptions this is in part a psychological test. You cannot conduct perceptual tests directly because perception is affected by the conscious mind. Thus if you asked people "do you hear a difference," you are likely to get many false positives since you are predisposing people to seek a difference. Instead you ask people which one sounds better.

      "even if they can't tell the difference (something impossible to determine from this design) then they are simply guessing or pickin

    • Re: (Score:3, Informative)

      by ljw1004 ( 764174 )

      It's a fine test design...

      If you have two identical pieces of music and you require people to rank them in order of preference, then the results will necessarily be perfectly random. This provides a built-in calibration.

      Conversely, if the results are not random, then it follows that people could tell the two apart.

      Imagine if you merely asked people to say whether they perceived a difference but without asking them which one they prefer. Such a design would have no built-in "calibration", in the sense
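
      A small sketch of that built-in calibration: if the two versions really were indistinguishable, the preference split should look like fair coin flips, which plain binomial arithmetic can check (nothing beyond the Python standard library assumed):

        from math import comb

        def prob_at_least(hits, trials):
            # P(at least `hits` heads in `trials` fair coin flips).
            return sum(comb(trials, k) for k in range(hits, trials + 1)) / 2 ** trials

        # If listeners were purely guessing, 14 or more out of 20 preferring the
        # same version would happen only about 5.8% of the time.
        print(prob_at_least(14, 20))  # ~0.058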

  • by leereyno ( 32197 ) on Wednesday November 18, 2009 @01:39AM (#30140502) Homepage Journal

    Some audiophiles will tell you that tube-based amplifiers produce less distortion than transistor-based models.

    The truth is that they often produce MORE distortion; it's just that the distortion they produce is pleasing to the ear, whereas the distortion created by transistor-based amps tends to be unpleasant to listen to.

    If listeners are rating MP3s as superior to FLAC, it is most likely because the psychoacoustic models used by the codecs are introducing artifacts that improve the sound of the music, at least in the subjective opinion of those listeners.

    What you have to realize is that there is no perfect recording of music or any other form of audio. All music is distorted compared to what it actually sounded like in the studio. Some of this distortion is deliberate, which is why you have all those knobs and dials on the mixing console. A lot of music nowadays is compressed, which creates more deliberate distortion. Encoding that analog data into a 16-bit digital stream at 44kHz produces yet more distortion.

    At the end of the day you have to figure out what sounds best to you because all of it will have distortion of some sort or another.

  • by gordguide ( 307383 ) on Wednesday November 18, 2009 @01:43AM (#30140522)

    Fraunhofer spent considerable time and effort to build a lossy codec that was indistinguishable, to most listeners, from uncompressed music (44.1kHz/16-bit) files. MP3 codecs (and the improved codecs that followed, such as AAC or Ogg) all craft the file in such a way as to make the parts "thrown out" the least noticeable and the parts "we keep" the most important cues. Unlike other digital audio compression methods that preceded them, MP3 codecs are built from the ground up to retain most or all of the music signal that human hearing and the brain need to enjoy a satisfying musical performance, and to concentrate on discarding what seems unnecessary to that end.

    That they succeeded is hardly groundbreaking news. That some listeners can tell the difference is also hardly groundbreaking news; there was a significant minority amongst Fraunhofer's listening panels who were almost always able to discern which was which. At some point the majority of casual listeners were not able to do so with any consistency. That's when they said "OK, we'll use this method, then."

    There is nothing wrong with well engineered lossy codecs, as anyone who has even a passing familiarity with sat radio or mp3 via computer or music player can easily attest. To say there is no difference, or that an mp3 is "CD quality", is the kind of hyperbole that can't go unchallenged. To be a bit more honest and say "it sounds pretty good" or "I like the way it sounds" is fine, however.

    Most people are OK with some form of lossy codec; in the environments where we most often listen these days, its limitations are not drawbacks, and possibly not even evident (i.e. in a car, there is plenty of extraneous sound to mask most limitations of compressed audio; and as anyone who has ever used a sound pressure meter in a running vehicle on even a deserted road can tell you, the low-frequency noise of any automobile just going about its business is very high, and much of it is subsonic, which we can't normally hear but which nonetheless masks lower-level detail in the music we might be listening to). It's not a crime to say you're OK with MP3, even if you can tell the difference between lossy and lossless formats.

    There's a saying in the sound industry: "Musicians have the worst stereos". And, generally, they do. The reason has more to do with how they listen than what they're listening on: musicians will mentally fill in the sound by following the notes themselves, and things like the beat, the rhythm, the tone, and the timing of the players and their instruments. It's as if they are playing the notes themselves, in their heads, and they need only the elemental cues to do so.

    If you love a song, you don't have to hear it under ideal conditions to enjoy the performance. These are the kinds of things Fraunhofer concentrated on making sure remained in the mp3 after compression. It's supposed to sound good; that was the whole point, and that's why the Fraunhofer codecs succeeded, despite the royalty payments due.

    All that still does not take away from the enjoyment of uncompressed formats, reproduced competently by accurate equipment, in the appropriate environment. Your car or via earbuds on the street are not those types of environments, and mp3s etc are perfectly reasonable compromises between quality and the need for reduced data footprints. There is a place for both uncompressed and compressed formats; they are not mutually exclusive.

  • by zuki ( 845560 ) on Wednesday November 18, 2009 @06:44AM (#30141882) Journal
    While I would tend to agree with the opinions already expressed that:
    • by now people have become so used to MP3 'sizzle' artifacts that they think it is part of the music, and have started preferring it to the cleaner original sounds.
    • it takes someone working in the studio or with an extremely keen ear listening on reference-grade monitoring to detect the artifacts in the compressed versions

    I find it really disturbing that no one ever brings up the fact that these results will vary widely depending on the size of the listening space. While my own experience is that at home (Genelec 1031A, Shure EC530 in-ear monitors, etc.) or in a small studio it is somewhat difficult to pick those differences out, as soon as the same test is conducted in a larger acoustic space they jump out to the point that it is obscene and hard to ignore.

    As I have already said many times in previous posts here, we can sit and argue all day long about which looks better on our laptop's LCD monitor: a 65-Kbyte .jpg or an 82-Meg .tiff file of the same photo. Yet when we take these same two files and print them at 5' x 12' billboard size, the .jpg will appear grossly grainy and pixelated, while the .tiff will maintain a much more coherent presentation of what the original picture looked like.

    In other words, large-scale sound systems tend to act as magnifiers for the minute artifacts and differences which lossy audio compression introduces, and there is no question in my mind that when the same tests are performed in an auditorium or a reasonably anechoic concert hall (in open air even better, because there are no reflections), it immediately becomes quite apparent how much the lossy encoding process actually messes with the information.

    It is not merely a function of frequency response, distortion and other lab specs, but rather a more fundamental one of the poorly understood characteristics that give music its inner dynamic: the "punch" in the low frequencies, the cleanliness in the top end and the tails of reverbs, as well as, many times, the resultant waveforms of many combined harmonic sounds in the midrange - probably a bit more so on acoustic instruments, but not always necessarily so.

    I would welcome similar tests done on a reasonable sound reinforcement rig, like a typical line-array system with 50,000 watts of power in a room which can accommodate 1,500 people - a pretty standard setup for concerts and DJ gigs (keeping in mind that in such systems there is a digital processor in the chain through which the sound will pass).

    There are much deeper implications to this, such as the fact that vinyl and open-reel, while flawed to some extent, still offer the human ear a much smoother experience in acoustic spaces of that size, as CD and DVD players do a very poor job of reconstituting the "slices" of digital audio after D/A conversion. Yes, great master clocking will make the signal sound more bearable, but there is a continuity between the waveforms which analog seems to handle much better than most digital systems ever can at the sampling rates they are currently working at - rates which, I am sad to report, haven't really changed a lot since 1981 when the CD spec was developed, SACD being a step in the right direction.

    That these older analog formats are not even included in the tests is pretty much the equivalent of the one-eyed man being crowned leader of the kingdom of the blind. Which is why, to this day, many of the top professional DJs insist on playing from analog sources such as vinyl, which, while they have certain inconvenient artifacts of their own, do offer something else that the human ear craves and is really keenly attuned to: continuity of sound, and the smoothness of a natural waveform. This effect is clearly demonstrated by making a high-quality open-reel tape copy of a CD and playing the two side-to-side

  • by daffmeister ( 602502 ) on Wednesday November 18, 2009 @06:52AM (#30141926) Homepage

    As many people have said, a lot of this is just what you are used to.

    An anecdotal tale from my previous life as a shop-floor assistant at a hi-fi store: we used to sell cheap consumer stuff alongside the serious amps and speakers, but probably about a quarter of my customers genuinely preferred the sound coming from a cheap boombox to a more serious setup. It would make my ears bleed, it was so bad, but it was the sort of sound they were used to, and they liked it.

  • by gatkinso ( 15975 ) on Wednesday November 18, 2009 @07:30AM (#30142106)

    Eliminate the stuff which most of us can't hear?
