1/3 of People Can't Tell 48Kbps Audio From 160Kbps

An anonymous reader writes "Results of a blind listening test show that a third of people can't tell the difference between music encoded at 48Kbps and the same music encoded at 160Kbps. The test was conducted by CNet to find out whether streaming music service Spotify sounded better than new rival Sky Songs. Spotify uses 160Kbps OGG compression for its free service, whereas Sky Songs uses 48Kbps AAC+ compression. Over a third of participants thought the lower bit rate sounded better."
  • by Anonymous Coward on Monday October 19, 2009 @01:13PM (#29796225)

    I don't care if 99% of the population can't tell the difference between the two; I can, and I want all my audio to be 320Kbps.

  • bad comparison? (Score:5, Insightful)

    by MacColossus ( 932054 ) on Monday October 19, 2009 @01:14PM (#29796233) Journal
    I would be more impressed if the same encoding format had been used. Both samples should have been Ogg or AAC, not a mix. Would the results differ comparing AAC at 48 and 160? Same goes for Ogg at 48 and 160.
  • by Chris Mattern ( 191822 ) on Monday October 19, 2009 @01:14PM (#29796237)

    People who can't tell the difference have a 50-50 chance of getting it right. So for every inferred member who couldn't tell and guessed wrong, there's another who couldn't tell but guessed right; adding those in, we can deduce that over *two-thirds* of the population can't tell the difference.
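
    A minimal sketch of that inference, plugging in the one-third figure from the summary (an illustration of the parent's arithmetic, nothing more):

        # If everyone who can't tell guesses with 50/50 odds, the wrong
        # answers all come from guessers, and only half of the guessers
        # answer wrong. So the guessing population is twice the wrong answers.
        wrong = 1 / 3            # fraction who picked the 48Kbps clip
        cant_tell = 2 * wrong    # wrong guessers + lucky guessers
        print(f"Estimated fraction who can't tell: {cant_tell:.1%}")  # 66.7%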

  • Apples and Oranges (Score:5, Insightful)

    by Shag ( 3737 ) on Monday October 19, 2009 @01:14PM (#29796247) Journal

    Do it with 48kbps AAC vs. 160kbps AAC, or 48kbps OGG vs. 160kbps OGG, and you might have something meaningful.

    Or, 48kbps AAC vs. 48kbps OGG, and 160kbps AAC vs. 160kbps OGG, if you want a flamewar...

  • by Chairboy ( 88841 ) on Monday October 19, 2009 @01:15PM (#29796255) Homepage

    If the higher compression audio had simply used this $500 Denon ethernet cable, the results would have been different:

    http://www.usa.denon.com/ProductDetails/3429.asp [denon.com]

    But seriously, can you make a sweeping statement like "People can't tell 48k audio from 160k" if you're also switching compression technologies? OGG vs. AAC is a whole article on its own; you just muddy the waters by making this about the compression rate.

    This is just a new version of the old megahertz myth from the CPU wars. Two different 2GHz processors from different manufacturers are not equal; we all finally figured that out for the most part, right? Now we've moved onwards... to the Kbps myth?

  • bad title (Score:2, Insightful)

    by mbuimbui ( 1130065 ) on Monday October 19, 2009 @01:16PM (#29796291)

    >> 1/3 of People Can't Tell 48Kbps Audio From 160Kbps

    Correction: Over a third of participants thought the lower bit rate sounded better.

    Those are not the same thing. To find out how many people thought they sounded exactly the same, I would have to RTFA.

  • Some other factors (Score:2, Insightful)

    by arugulatarsus ( 1167251 ) on Monday October 19, 2009 @01:17PM (#29796295)
    There are a lot of things to mention in this article. They used VERY high-end hardware that can interpolate the sound and minimize clipping (which makes things sound metallic). They also didn't mention what songs were chosen. A lot of music is mastered to sound good on poor-quality speakers, so 48Kbps may actually not be the limiting factor.
    At least there's going to be a new reason to sell audio snake oil now.
  • by Chris Mattern ( 191822 ) on Monday October 19, 2009 @01:17PM (#29796309)

    I'd pay for it if I got to watch you do a blind listening test.

  • by Rei ( 128717 ) on Monday October 19, 2009 @01:18PM (#29796319) Homepage

    To elaborate: in my testing, I took a few random tracks (two Coulton rock tracks and two classical Christmas tracks, all FLAC), encoded them at 96k, 128k, 160k, and 192k Ogg Vorbis, decoded each back into its own WAV file, then distributed the re-encoded WAVs and a WAV generated straight from the FLAC (all with randomized filenames) to the people who wanted to take part in the test. There was a statistically significant (although not universal) recognition that the 96k was the worst. There was a correlation on the 128k track, but not a statistically significant one (I may want to do this again with a larger sample size). And the 160k, 192k, and original tracks were as good as random.

    Most people hear 128k and think, "How can a person possibly not get *that*?" But that's really a stereotype from the olden days. There's a huge difference between a 128kbps fixed-bitrate mp3 and a 128kbps VBR ogg. VBR makes a *huge* difference.
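
    For anyone wanting to reproduce that setup, here is a rough sketch of the prep step, assuming ffmpeg with libvorbis is installed; the input names and the quality-to-bitrate mapping are my approximations, not Rei's exact commands:

        # Encode each FLAC at several Vorbis quality levels, then decode
        # everything (including the untouched original) back to WAV under
        # random names so listeners can't identify the files.
        import subprocess
        import uuid
        from pathlib import Path

        SOURCES = [Path("track1.flac"), Path("track2.flac")]     # hypothetical inputs
        QUALITIES = {"96k": 2, "128k": 4, "160k": 5, "192k": 6}  # approx Vorbis -q levels

        out = Path("blindtest")
        out.mkdir(exist_ok=True)
        answer_key = {}

        for src in SOURCES:
            clips = [src]
            for label, q in QUALITIES.items():
                ogg = out / f"{src.stem}_{label}.ogg"
                subprocess.run(["ffmpeg", "-y", "-i", str(src), "-c:a", "libvorbis",
                                "-q:a", str(q), str(ogg)], check=True)
                clips.append(ogg)
            for clip in clips:
                anon = out / f"{uuid.uuid4().hex}.wav"
                subprocess.run(["ffmpeg", "-y", "-i", str(clip), str(anon)], check=True)
                answer_key[anon.name] = clip.name

        print(answer_key)  # keep this secret from the listeners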

  • Summary misleading (Score:5, Insightful)

    by spinkham ( 56603 ) on Monday October 19, 2009 @01:20PM (#29796341)

    The summary is quite misleading.
    It sounds like 100% of the participants could tell the difference between the two encodings; just 1/3 of the people thought the simpler, cleaner, highly compressed version sounded better. 2/3 of people thought the high-bitrate version sounded better.

    When choosing compression, the better approach is to shoot for transparency [wikipedia.org] versus the uncompressed source, not whichever audio sounds better to your ears.

    That's why ABX [wikipedia.org] is the industry standard for compression comparison, not a simple AB test as in this experiment.
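
    In code, a bare-bones ABX loop might look like the sketch below; the "play" command is a stand-in for whatever command-line player is available (SoX ships one), and the filenames are hypothetical:

        # ABX: X is randomly A or B on each trial; the listener must say which.
        import math
        import random
        import subprocess

        def play(path):
            subprocess.run(["play", "-q", path], check=True)  # placeholder player command

        def abx(file_a, file_b, trials=16):
            correct = 0
            for _ in range(trials):
                x_is_a = random.choice([True, False])
                for label, path in (("A", file_a), ("B", file_b),
                                    ("X", file_a if x_is_a else file_b)):
                    input(f"Press Enter to hear {label}...")
                    play(path)
                guess = input("Is X the same as A or B? ").strip().upper()
                correct += (guess == "A") == x_is_a
            # chance of scoring at least this well by pure guessing
            p = sum(math.comb(trials, k) for k in range(correct, trials + 1)) / 2**trials
            print(f"{correct}/{trials} correct; guessing p-value = {p:.4f}")

        abx("clip_160k.wav", "clip_48k.wav")  # hypothetical filenames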

  • let's be clear (Score:5, Insightful)

    by Vorpix ( 60341 ) on Monday October 19, 2009 @01:22PM (#29796401)

    this summary is misleading. they were asked to choose which they thought sounded better. the listeners DID notice a difference between the two, and for some reason 1/3 of the participants preferred the lower-bitrate version. perhaps it had less harsh high tones, or something else about it was more pleasurable to them... that doesn't mean the higher bitrate didn't honestly sound more accurate to the source material. Perhaps uncompressed audio should also have been incorporated into the test. If listeners still chose the lower bitrate over uncompressed, then it's clear that some of them prefer the changes inherent to compression.

    this was a very unscientific study, with a very small sample size, and really shouldn't be front page on slashdot.

  • by Anonymous Coward on Monday October 19, 2009 @01:24PM (#29796413)

    This wasn't a proper repeated ABX double-blind listening test, nor an ABC-HR, but just a single-trial single-blind AB for each person, with one track and no hidden reference. Pathetic and unscientific, and definitely shouldn't be presented as a valid listening test, given how susceptible audio research is to error.

    Dear C|Net: If you're going to do a listening test, please don't just do something that'd get you laughed at on (then banned from) Hydrogenaudio. It's easy to do it properly - Hydrogenaudio have been doing it for years, and that's how the encoders are tuned. Doing it wrong tells you nothing of value.

    Previous, proper ABX double-blind listening tests have proved that Vorbis -q5 (using AoTuV b5.5), which is what Spotify use, is perceptually transparent on almost all listeners on almost all audio. Meanwhile, 48kbps AAC-HE+SBR with a good encoder is best-in-class for its bitrate at the moment, but is very poor at some sounds which spectral band replication tends to make too prominent or artificial; electronic music encodes well, but classical most certainly does not. It almost always is distinguishable in ABX, although it ranks moderately highly in ABC-HR, especially for its bitrate, on untrained listeners. It's not even remotely a competitor to Vorbis -q5, though (or LAME 3.98 -V2 for that matter).

    Want research sources? Hydrogenaudio listening tests, and/or peer-reviewed papers conducted using similar/the same methodologies. Want to contradict those? Do your tests properly first.
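
    To put a number on how little a one-shot AB with 16 listeners can show, assuming the 10-to-6 split other comments mention (my arithmetic, not Hydrogenaudio's):

        # If all 16 listeners were guessing at random, how often would the
        # vote split at least as unevenly as 10 vs 6?
        import math

        n = 16
        p = sum(math.comb(n, k) for k in range(n + 1) if abs(k - n / 2) >= 2) / 2**n
        print(f"P(a split at least 10-6 from pure guessing) = {p:.2f}")  # ~0.45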

  • by fuzzyfuzzyfungus ( 1223518 ) on Monday October 19, 2009 @01:25PM (#29796425) Journal
    While, as you say, most people have crap speakers/headphones, and anything above 128kbps is largely wasted on them, there is one major reason to go higher anyway.

    If you ever upgrade your hardware, dealing with all your old low-quality tracks is a pain. You can re-rip, or suffer through, or throw them all away and get new ones; but it is a hassle. With storage so cheap these days, you might as well include a little extra, in case you upgrade later.
  • Preferences (Score:5, Insightful)

    by gorfie ( 700458 ) on Monday October 19, 2009 @01:26PM (#29796435)
    I used to sell audio equipment as a teenager, and I recall different people had different ideas about what constituted quality audio. Some people liked deep, muddy bass; other people liked loud midranges; etc. I think the study's conclusion is all wrong... it's not that people can't tell the difference, it's that people sometimes prefer the lower-quality bit rate. Personally, I just want things to sound representative of the real-life equivalent. :)
  • by beelsebob ( 529313 ) on Monday October 19, 2009 @01:29PM (#29796489)

    No, just the headline is massively misleading.

    The article actually states that people (a) could hear the difference, and (b) over a third thought the lower bit rate stuff sounded better.

    The key being that the two were encoded with two totally different codecs.

  • by whyde ( 123448 ) on Monday October 19, 2009 @01:29PM (#29796501)

    Today's low-bitrate MP3/AAC will be tomorrow's vinyl.

    I firmly believe that you prefer what you're accustomed to hearing in the first place. Most kids today have grown up hearing nothing better than highly-compressed FM or low-bitrate MP3 music. They don't know anything better, and given the option of hearing better music, perhaps even uncompressed, with a much larger dynamic range and a lower noise floor, they'll still gravitate to what their ears and brain have been trained to appreciate.

    Tomorrow's world will have "128Kbps MP3 Aficionado" publications extolling the virtues, "warmth", and "naturalness" of the low-bitrate MP3. And audiophiles will pay top dollar for crippled hardware and overcompressed, undersampled music tracks.

  • Re:In other news (Score:3, Insightful)

    by BESTouff ( 531293 ) on Monday October 19, 2009 @01:31PM (#29796537)
    Moreover, their math is off: if 1/3 of participants gave the wrong answer, it means 2/3 of participants couldn't tell the difference and chose randomly.

    ... given a sufficient sample size, as you noted of course.

  • I stopped at (Score:4, Insightful)

    by obarthelemy ( 160321 ) on Monday October 19, 2009 @01:33PM (#29796571)

    "we tested with Billie Jean"

    I don't hate that song, but as a testing ground for music hardware/software, it sucks. And you should always test with different types of music.

    Also: small sample size (16); only 1 song in 2 versions, presumably always in the same order; hardware that has nothing to do with what anybody actually uses (does that lessen or worsen compression artifacts?); no control group (wanna bet that with 2 identical versions, song A or song B consistently comes out on top? Coke and Pepsi worked that one out long ago); no indication of how responses were collected (group? interviewer? biased?).

    made me chuckle. amateurs.

  • by Hatta ( 162192 ) * on Monday October 19, 2009 @01:45PM (#29796753) Journal

    As long as they're not passing around the MP3s, and never want to upgrade their stereo system, 128 is fine. If you ever want to do either of those, 128kbps MP3s will not be good enough.

  • by rgviza ( 1303161 ) on Monday October 19, 2009 @01:50PM (#29796851)

    I was all excited that my new car came with a satellite radio receiver, then bitterly disappointed with the sound quality, and didn't subscribe. I'm considering replacing it with an HD receiver, once I hear one to find out what "HD" really means. Satellite radio is utter crap for sound quality.

    320kbps is about as close to CD quality as MP3 gets. I can understand not being able to tell 48kbps from 160kbps, especially when a different codec is used for each; the quality of the codec and its configuration are key. It's hard to tell the difference between crap and sh*t. The test is only meaningful as a bitrate test if the same codec and encoding settings are used. Otherwise it's apples to oranges. The bitrate isn't nearly as important as how it's encoded, unless both streams are done exactly the same way (except for bitrate).

    This test smacks of Apple fanboism. Do a real bitrate test using the same codec and settings (outside of bitrate) and I guarantee you'll get better listener accuracy.

    Why on earth would you do a bitrate test with two different codecs unless the test was really marketing propaganda for one of the codecs? /filed in the Apple marketing bullshit drawer

    This is a codec test, not a bitrate test. As a "can a user tell the difference between these bitrates" test, the results are completely worthless. It's more like a "AAC rulez! look a 48k AAC stream sounds as good as a 160kbps Ogg stream!" /barf

  • by MBGMorden ( 803437 ) on Monday October 19, 2009 @01:50PM (#29796855)

    Indeed, it had to have been the cable.

    There is some wisdom in their story though, even if their data is a bit extreme. I've never encoded at 48Kbps, but I certainly have at 64Kbps (if you honestly want to know: when ripping "adult" movies from DVD and trying to keep the file a certain size, it makes more sense to give the video extra bitrate and the audio less...), and I could certainly tell the difference between 64 and 128Kbps (which is what I rip most regular videos' audio tracks at). Now, I've always ripped audio-only tracks at 192Kbps just because it doesn't take too much extra space, but truthfully, once you get above 128Kbps I can't tell the difference between the compressed and the original anymore. And truthfully, I'd wager that MOST "audiophiles" can't tell either.

    A lot of it, in my mind, is just pure elitism. Hell, I love music and I still just don't get it. Having taken up electric guitar lately, I've found it's even worse with guitarists describing the sound of a particular instrument. I kid you not, you can use ANY adjective you want when describing a sound to these people and they won't think anything of it. Walk up to one and say, "I just put these new pickups in my guitar. They sound a bit buttery. A little on the salty side but not too lazy. On the low end though they are TOTALLY dark and shiny." Your test subject isn't likely to even bat an eye before agreeing, but will recommend you switch to XYZ if you'd like your sound a bit more flimsy and dry.

  • Re:Yes but.... (Score:4, Insightful)

    by Metasquares ( 555685 ) <slashdot.metasquared@com> on Monday October 19, 2009 @01:51PM (#29796871) Homepage
    Then the summary is misleading in presenting this as a comparison of bitrates. The article is really comparing the audio quality of the two services.
  • Seems a poor test (Score:3, Insightful)

    by JerryLove ( 1158461 ) on Monday October 19, 2009 @01:53PM (#29796909)

    Based on the article, the testing seems to have very little in the way of meaningful results.

    A single instance of a single song, with two different encoders, given to listeners who hear "more bass" as a mark of quality, where the results were nearly an even split (two people shy of 50/50).

    To gather meaningful data, songs must be switched quickly; you should go through a variety of material (it's worth noting that some codecs have more trouble with certain types of sounds than others); and, ideally, there should be a reference to work from.

    The goal of compression, in theory at least, is to maintain meaningful fidelity. Yes, that means "the part we notice most" is most important: but that's no excuse for rating "a pleasant error" above "correct reproduction".

    Of course, I've never tested these encoders. It's possible that the lower bitrate encoder did a better job.

  • There's no news here. The HE AAC codec (called AAC+ in the Coding Technologies implementation, and now called Dolby Pulse after Dolby's acquisition) is a highly advanced spectral band replication codec, and can be pretty darn transparent down to around 48 Kbps. That there was about a 2:1 preference for the high bitrate Ogg in a highly nonscientific small sample size test like this is a yawner. http://en.wikipedia.org/wiki/HE_AAC [wikipedia.org]
  • by Anonymous Coward on Monday October 19, 2009 @02:32PM (#29797529)

    When listening to audio in a car, there is a problem with background noise. The sound pressure level in an average car at 70 mph will be in the region of 70 dB. The threshold of pain is at a sound pressure level of about 120 dB. So in a car there is a maximum dynamic range of no more than 50 dB and in reality far less.

    As CDs spread from homes to cars, CDs which made use of the full dynamic range available in the technology were difficult to hear clearly (and painlessly), as either the loud passages were too loud for most passengers or the quiet passages were inaudible. So engineers adapted to the new common audio setup and began mastering CDs with a smaller dynamic range. Since in-car listeners are a major audience, many FM stations also began using automatic dynamic range compression technology. Not all radio stations do -- BBC Radio 3 (in deference to its listeners' preference for better audio setups) does not use it in the evening.

    As much as I enjoy my iPods, one of my greatest fears related to them is that the increasing dominance of portable digital audio players with earbuds will ignite something akin to "Loudness Wars -- Round Two": selling audio pre-recorded at lower bit rates.

  • by FunkyELF ( 609131 ) on Monday October 19, 2009 @02:39PM (#29797627)

    I paid to get my TV ISF-calibrated. It looks amazing. But if you brought it inside a Best Buy and sat it next to their other TVs, your average Joe would think it looks like crap.
    The TV manufacturers increase the amount of blue to make things appear brighter. People's faces turn green, so they up the amount of red. Then they over-sharpen, which introduces artifacts, and over-contrast, which creates banding.

    Encoding audio in a lossy format no doubt does the same thing. They make sure the music still "pops", to the point where it is exaggerated, causing the music to "sound" better.

    The people who say that 48Kbps sounds better than 160 would probably say the same thing compared to the original.

  • by A Friendly Troll ( 1017492 ) on Monday October 19, 2009 @02:50PM (#29797771)

    (although not as low as 46kbps) and reached the same conclusion. Most people vastly overestimate their ability to distinguish tracks encoded at different bitrates. And I've seen study after study that backs this up. This includes self-professed audiophiles, the original authors of particular tracks of music, and so forth.

    This is true. Mostly.

    On most material, you cannot hear the difference; however, every once in a while (though rarely), there will be a song which not even a 320 kbps mp3 can encode properly[1], and there'll be distortion on cymbals or applause or a snare drum or a weird synth. If you don't know how the original is supposed to sound, you won't notice anything strange, but if you do and if you can recognize that "mp3 sizzle", which is far easier for those of us who have been dealing with mp3s ever since the days of early FhG and Xing at 112/128 kbps, the track or even the album is ruined entirely. And I do mean entirely. There's no point in listening to it anymore because you know it's flawed and you'll just be spending time trying to listen to the sound waves instead of music.

    For example, today I listened to a straight FLAC encode of an Armin van Buuren live set / album from Ibiza 2008 (or something) and several tracks were totally messed up - the "sizzle" was perfectly clear, as AvB either used mp3s directly, or burned mp3s to CDs.

    [1] I remember ripping a Cranberries album in the late 90s, when lossless container formats didn't exist and the best encoder was the original Fraunhofer, which couldn't deal with Dolores and her band at all, even with 256 kbps. The situation has since improved greatly (though it's not anywhere near perfect), but we have the opposite circumstances: those were the days of 1-2 GB hard drives and .wav lossless was just out of the question, whereas today we have 1-2 TB drives and 300-500 MB for a FLAC is a drop in the storage ocean. I don't care about mp3s at all anymore. I buy a CD, I rip it and encode to FLAC - what the hell else am I going to spend disk space on?

  • by Chapter80 ( 926879 ) on Monday October 19, 2009 @02:53PM (#29797831)

    Since the 16 subjects were asked "which sounds better" and were not given the alternative "there's no difference", it's actually possible that 12 of the 16 thought there was no difference and randomly picked A or B, with 6 of them landing on A.

    So it's possible that only 25% could tell the difference and selected the higher bit rate.

    Great study. Very Scientific.
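
    Spelling that worst case out, assuming the 10-to-6 split implied by the article:

        # 6 votes for the low-bitrate clip; if every one of those was a coin
        # flip, an equal number of guessers must have landed on the high-bitrate
        # clip, leaving only 4 deliberate votes.
        participants = 16
        chose_low = 6
        guessers = 2 * chose_low
        discerning = participants - guessers
        print(f"{discerning}/{participants} = {discerning / participants:.0%}")  # 25%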

  • ipod users... (Score:0, Insightful)

    by Anonymous Coward on Monday October 19, 2009 @02:53PM (#29797833)
    >> They simply don't have the mental acuity to care.

    In other words, the ipod users. The ones who just think how cool it looks, not how good it sounds.
    e.g. iriver kicks ipod ass any time when it comes to sound quality. So does Sony (I hate them too, but this is my experience).
  • by JerryLove ( 1158461 ) on Monday October 19, 2009 @03:08PM (#29798109)

    The human memory for details seems to be measured in seconds. One big issue in blind-testing speakers is to make sure that you can switch almost instantly between two volume-matched pairs.

    Playing to the end of a song then listening to it again is not going to yield the best objective results; although it does say something subjective pretty strongly.

  • by DJRumpy ( 1345787 ) on Monday October 19, 2009 @03:09PM (#29798133)

    This 'test' seems rather lacking. It doesn't note whether the AAC is HE or LC. That can have a very big impact on quality, as HE takes more processing power but delivers much better quality at low bitrates. Each codec also has its quirks and 'tricks' that establish its strong and weak points. Some people will simply like one aspect of a codec's compression methods over another, whether that pertains to filtering out high frequencies, chopping out repetitive or white noise that is typically not heard, or whatnot.

    The fact that they only tested 16 people should tell the rest of the story. It's not even remotely a good sampling of users, and considering the source, it probably consists of users who are 'in the know' about compression techniques and what to listen for.

    I would be very interested in a larger study with a random sampling of the users of these two services, with a much larger study group to see what it shows.

  • by Korin43 ( 881732 ) on Monday October 19, 2009 @03:10PM (#29798143) Homepage
    Variable bitrate produces a file where the average is the given bitrate, but it changes the bitrate at any given point based on need. So for example, any parts of the song that are silence will be encoded at very low bitrates, and complicated parts will be encoded at higher bitrates. If you encode with a fixed bitrate, you're wasting space encoding simple parts and not encoding the complicated parts as high as you should.

    So for your second question, we should turn it around into "is 128 kbps VBR better than 192 kbps fixed?", and the answer is that it depends on the song, but you probably wouldn't be able to tell the difference anyway. A more interesting question is "is 128 kbps VBR better than 128 kbps fixed?", and the answer is yes, always.

    Of course, unless you're some kind of audio god (and I mean, completely inhuman), you probably can't guess what bitrate you need anyway, which is where quality-based encoders come in. If you want to encode a file in ogg vorbis, you just say what quality you want, and it figures out what the average bitrate needs to be to encode it.
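
    To make the fixed-versus-quality-based distinction concrete, a small sketch assuming ffmpeg with libmp3lame is available; the input name and quality level are placeholders:

        import subprocess

        src = "input.wav"

        # Fixed bitrate: every second gets ~128 kbits, silence included.
        subprocess.run(["ffmpeg", "-y", "-i", src, "-c:a", "libmp3lame",
                        "-b:a", "128k", "cbr128.mp3"], check=True)

        # Quality-based VBR: -q:a 5 maps to LAME's -V5 (roughly 120-150 kbps
        # on average); the encoder spends bits where the signal is complex.
        subprocess.run(["ffmpeg", "-y", "-i", src, "-c:a", "libmp3lame",
                        "-q:a", "5", "vbr.mp3"], check=True)
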
  • by thisnamestoolong ( 1584383 ) on Monday October 19, 2009 @03:14PM (#29798213)
    CDs are encoded at 44.1 kHz, 16-bit stereo. If you do a little math, this comes out to about 1.4 Mbps, meaning that to get your audio down to such a low bit rate you need to eliminate roughly 29 out of every 30 bits. If anybody out there is incapable of hearing the difference, they need to go get a hearing test right away, as that level of compression is EXTREMELY destructive to the quality of the audio.
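
    The arithmetic, spelled out:

        sample_rate = 44_100   # Hz
        bit_depth = 16         # bits per sample
        channels = 2
        pcm_kbps = sample_rate * bit_depth * channels / 1000
        print(pcm_kbps)        # 1411.2 kbps, i.e. ~1.4 Mbps
        print(pcm_kbps / 48)   # ~29.4: a 48Kbps stream keeps about 1 bit in 30
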
  • by yurtinus ( 1590157 ) on Monday October 19, 2009 @03:18PM (#29798307)
    You blame the people, I blame the editors.

    There is no study here. From TFA (which itself is barely longer than TFS), sixteen people were asked to state which song clip they thought sounded better. I'm surprised the results were better than 50/50. From TFA, all listeners could *tell* a difference, and the report was on which one they *preferred*.

    Really, there's nothing to see here.
  • Re:ipod users... (Score:5, Insightful)

    by cayenne8 ( 626475 ) on Monday October 19, 2009 @03:29PM (#29798493) Homepage Journal
    I agree.

    I think a LOT of this has to do with so many of today's kids not KNOWING what good sound reproduction CAN sound like.

    I've been building my stereo system ever since I was a kid. I walked into a high-end audio shop at about age 12... and first heard Klipschorns [klipsch.com] hooked to a McIntosh [mcintoshlabs.com] tube amp, and I couldn't believe my ears...

    It was right then that I started building my system so I could have that some day. And today... after buying a piece here, a piece there, getting a deal on this, selling it and improving one piece at a time (ok, thieves and insurance helped with the speakers at the end), I almost have that setup.

    People that come over and hear it...are often amazed how good it sounds....they often exclaim they hear new things and nuances in familiar songs they'd never heard before.

    Sure, I like an iPod, I have a couple of them...a shuffle for the gym, and a classic for travel, in the car..etc. I have good earphones for them, Shure 530's I think....but, I do realize that these are for very POOR listening environments. I try to get my music in the best source I can (this means CD's at this time, can't buy lossless online yet), I rip them to flac for home stereo usage..and decently high quality mp3 for portable use.

    Unfortunately, somewhere between now and when I was a kid... people stopped buying good home audio systems. I don't quite know when or why it happened. Somewhere along the line... ONLY portable players came into vogue... and it is sad that so many are missing out on how good sound reproduction can be. I dunno if it is cause or effect... but so much of today's music is mixed so poorly, overly compressed with no dynamic headroom anymore. So maybe there isn't much point in getting good gear, if new music is no longer mixed to get the most out of it.

    But, as far as good gear goes... you needn't go overboard on the super-audiophile nonsense and voodoo that is out there; still, with respect to solid audio gear, to a certain extent, you do get what you pay for...

  • by StrategicIrony ( 1183007 ) on Monday October 19, 2009 @03:47PM (#29798849)

    Hertz and bits are independent. You can sample something at 150kHz but then compress it to 14kbps. You can likewise sample something at 14kHz and then encode it at 1.5Mbps.

    There was absolutely no (none, zero) discussion of the sampling rate, and I was correcting that.

  • Re:ipod users... (Score:5, Insightful)

    by droopycom ( 470921 ) on Monday October 19, 2009 @04:35PM (#29799669)

    Unfortunately, somewhere between now and when I was a kid... people stopped buying good home audio systems. I don't quite know when or why it happened.

    Maybe they didn't enjoy it.

    Why do you think people enjoy music and songs? Most likely, not because it's reproduced faithfully, or because they care about the nuances.

    Almost nobody cares about the nuances; they like beats and bass or dancy tunes, gangsta lyrics or love stories, stuff that is accessible and that they can relate to.

    Sure, some people care about that little uptick from the violin in the 3rd measure of the 6th symphony of whoever, the one you can only hear on a hi-fi system in a quiet room. But most people just listen to music to get some energy for their workout, have fun at parties or concerts, or drown out the sounds of their miserable commutes, dreary jogs, or boring lives.

    Same reasons people eat fast food instead of fine cuisine I guess.

  • by g00ey ( 1494205 ) on Monday October 19, 2009 @04:55PM (#29799967)
    And the equipment they use. The chain is never stronger than its weakest link. There is no point in testing, say, 24-bit/96kHz uncompressed if the audio equipment cannot deliver it.
  • by brunes69 ( 86786 ) <[slashdot] [at] [keirstead.org]> on Monday October 19, 2009 @04:59PM (#29800007)

    If the only way you can tell the difference is in a "controlled environment" by someone with "listening experience" (whatever that means), then the difference is completely irrelevant, both to the average person and to the marketplace.

  • Re:ipod users... (Score:3, Insightful)

    by VeNoM0619 ( 1058216 ) on Monday October 19, 2009 @05:24PM (#29800359)

    People that come over and hear it...are often amazed how good it sounds....they often exclaim they hear new things and nuances in familiar songs they'd never heard before.

    Same is true when I turn up the treble or turn up the bass: I can hear different "parts" of a song. My headphones sound different from my speakers; each has a unique character, so I hear the song differently on each. Which version is better is, up to a point, entirely subjective. Sure, audible clipping is a dead giveaway of poor quality, but not when (in effect) the equalizer settings differ from one system to the next.

  • Re:ipod users... (Score:5, Insightful)

    by Damek ( 515688 ) <adam&damek,org> on Monday October 19, 2009 @05:41PM (#29800567) Homepage

    I grew up in the 80s and early 90s and most people I knew just had off-the-shelf radio/cassette/record-players from Target or wherever. Myself included. And the music always sounded good enough. It still does. I had a couple friends who turned audio hobbyist but I never saw the point. They spent loads of money and seemed to enjoy the music less.

    And nowadays, emphasis should really be on enjoying music live, anyway. I might be wrong but I expect distribution will bring less and less money, but not less fame - and fame will bring performances and money.

    If I want to carry my favorite artists with me, or listen to them at home, I have bigger things to worry about and spend on than the quality of the audio. Good enough is good enough for that.
