Music Media

Does Portable Music Have to be Compressed? 540

FunkeyMonk writes "The Christian Science monitor has an article discussing the gap between music fans and audiophiles when it comes to portable music. Would you pay a few cents more to have lossless downloads from iTunes and other online music retailers? As a classical musician myself, I choose not to download most of my music, but rather rip it myself in lossless format."
This discussion has been archived. No new comments can be posted.

  • more for non-DRM (Score:5, Informative)

    by yagu ( 721525 ) * <{yayagu} {at} {gmail.com}> on Sunday December 03, 2006 @11:23AM (#17089242) Journal

    Actually I'd like to be able to get an "original" image a la the CDs you buy, but allow single CD tracks. Would I pay more for that? I don't know. I've never bought any of the DRM'ed crap because it's DRM'ed, so I don't know how badly (or well) compressed they are.

    If there are audible compression artifacts anywhere in today's downloadable DRM'ed music, I'd insist on lighter compression or none at all; after all, I'm paying for music, and to me a compression artifact is analogous to a stuck pixel on a monitor or camera... my threshold of tolerance for that is zero.

    (I had one of the original Sony MiniDisc recorders, and I remember a passage of a Doobie Brothers track where some high-pitched bells, instead of sounding like high-pitched bells, sounded like someone sneezing... unacceptable... it completely altered my experience of MD (along with numerous other things about Sony).)

    So, bottom line, DRM aside, I consider it the responsibility of the music industry to deliver what they claim they are delivering... music (usually). I'm willing to bet what they are delivering has artifacts... I wouldn't pay more to get rid of that, I'd demand they replace the defective product.

    The nice thing about my CDs and my derivative MP3 collection (ripped at 320 VBR) is that if I hear an artifact in a track, I still have the untouched original and can re-rip at a higher quality until the artifact is gone.

    (As an aside, I think the article makes an exceptionally great point not directly related to the users:

    That's important to sound engineers, too. "You spend a long time training your ears and striving to perfect your craft and put out a better product," says Jeff Willens, an audio-restoration specialist at Vidipax in Long Island City, N.Y. "When you finally discover that these things are being listened to on cellphones and through pea-size earphones, it's kind of disheartening."

    So, in addition to giving consumers short shrift with a less-than-perfect (to the ear) product, the movers of downloadable music thumb their noses at the collective profession of sound engineers and engineering... pretty rude.

    Granted, a lot of the music out there is crap -- it's no justification for compromise on the medium.

    Oh, and re the subject line of my post... I'd pay a little more for non-DRMed music, not uncompressed music.

  • by tomstdenis ( 446163 ) <tomstdenis AT gmail DOT com> on Sunday December 03, 2006 @11:31AM (#17089296) Homepage
    A properly mixed CD (i.e. one whose dynamic range hasn't been squashed in mastering) has 96dB of SNR in each channel. That's plenty, given that human hearing isn't that sensitive anyway. SACD and DVD-Audio can offer a bit more range, but honestly the difference is lost on most people.

    What you should really get in a knot about is the consistently low quality of the shite music being promoted. Payola's a bitch.

    Tom
  • by Anonymous Coward on Sunday December 03, 2006 @11:35AM (#17089328)
    Ogg Vorbis is lossy, Ogg Flac isn't.
  • Double blind test (Score:5, Informative)

    by theLOUDroom ( 556455 ) on Sunday December 03, 2006 @11:37AM (#17089342)
    The right way to answer this question is with double blind testing.
    "Audiophiles" like to make all sorts or ridiculous claims that lead to things like $2000 speaker cables, gold CDs and just a general proliferation of nonsensical technobabble.

    Psychology simply has too strong of an effect on questions like this to get an actual answer from a forum like this.

    What you'd really find is that as the bitrate of an mp3 goes up, the number of people who can tell the difference goes down. At some point the number of people who can tell the difference becomes a statistically insignificant sample. This would be a good project for some grad student.
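
For anyone curious what "statistically insignificant" means in practice for a listening test, here is a minimal sketch of the usual ABX arithmetic. It is plain binomial statistics; the trial counts are made-up examples, not results from any actual test.

```python
# Plain binomial arithmetic behind an ABX listening test: how likely is a score
# at least this good if the listener is just guessing (p = 0.5)?
# The trial counts below are made-up examples.
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided p-value: probability of >= `correct` hits under random guessing."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

for correct in (9, 12, 16):
    # 12/16 is the usual "can hear a difference" threshold (p < 0.05);
    # 9/16 is indistinguishable from coin-flipping.
    print(f"{correct}/16 correct -> p = {abx_p_value(correct, 16):.4f}")
```
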
  • Re:What's the point? (Score:1, Informative)

    by Anonymous Coward on Sunday December 03, 2006 @11:40AM (#17089366)
    If you're referring to the iPod and the standard earbuds then you're absolutely correct. DAPs with higher quality hardware (like the X5 or the fabulous but discontinued iRiver H100 series) are not bottlenecks. With a quality headphone amp and headphones they become extraordinarily high quality listening devices that need either lossless or high bitrate lossy files.
  • by Overzeetop ( 214511 ) on Sunday December 03, 2006 @11:43AM (#17089404) Journal
    WAV? He uses WAV? Why on god's green earth would you bother using WAV to listen to your music when there are a plethora of lossless codecs out there? You can get roughly 2:1 compression with any of them -- heck, he could even use WavPack if he's so stuck on having "wav" in the name. Heck, most audiophiles worth their $3000 interconnects are appalled at the harshness and "cold, digital" feel of that 44.1kHz/16-bit crap that was forced on the public when we got CDs.

    Lossless is coming soon to most of us. With the 5.5G iPod at 80GB and the Zune hackable to 80GB as well, all but the top 3-4% of consumers can fit their entire (legal) collection on a single portable device in lossless compression. I've got about 6500 tracks, most as FLAC rips, and I'm right around 81GB (plus about 40GB in books, but those are all low-bitrate). If I jettisoned the extra downloaded stuff I didn't like (but never got around to deleting), I'd probably drop to 75GB or so. I suspect my entire family (three of us) buys less than 5GB worth of content each year, and there's no reason to expect that player capacities won't keep growing. As for those with bigger collections... well, just get more portables, or learn to live with a smaller subset on your player (or a higher compression).

    As long as the high-quality masters are available, portables can become a calculated compromise. Since my threshold for accuracy happens to be at about 256kb/s LAME, that's where I transcode my FLAC library for my portable. If I had a car player, it would probably be more like 160kb. Heck, it's practically impossible to hear artifacts at 128kb in my Pilot at 70mph at a normal volume. My wife's 8GB flash player will be encoded in the 160-192 range, because I know she doesn't have the gear to hear much more, and she's just not that picky. With a good music manager you can automagically sync and transcode at the same time (I use MediaMonkey). Transcoding is a bit slow right now, but as PCs get faster, the sync/transcode process will get better and better.

    I do agree that it is a travesty that the online services will not offer home-archival-quality tracks, but I'm probably a top-10% listening geek. I buy all my music on CD and rip to FLAC. Okay, okay -- I've bought some at AllOfMp3.com too, but I can get lossless there. The key is that the studios will continue to have quality masters -- but will they be willing to sell that quality to the public?
  • by mushadv ( 909107 ) on Sunday December 03, 2006 @11:57AM (#17089550)

    Who the hell uses 128 kbps MP3 anymore? If you use iTunes, like a sizeable group of mainstream consumers, then you're getting 128 kbps AAC, which is indistinguishable from the source when it comes to loud, over-compressed pop music. When it comes to something like classical, that's when you probably need to move up to 160 or 192 (which iTunes doesn't offer, unfortunately). I don't have a clear idea of WMA's quality, which is the other mainstream consumer digital music format. My point is that you probably have nothing to worry about concerning MP3 becoming the standard format, at least through official means of distribution. After all, it's too hard to DRM it and lock your customers into one unshiftable format and player.

    That said, I really like Bleep [bleep.com], which distributes music in non-DRM, high-quality VBR MP3 and sometimes FLAC, both of which create sample-perfect representations of whatever's encoded with them.

  • by Yvan256 ( 722131 ) on Sunday December 03, 2006 @12:05PM (#17089616) Homepage Journal
    Okay, show me non-lossy, non-PCM digital audio. You can't? Well, too bad. Digital music is usually PCM, and most of us refer to CDs as "lossless" since they're our only "source" to convert to other formats.
  • by timeOday ( 582209 ) on Sunday December 03, 2006 @12:22PM (#17089738)
    Let's not perpetuate the myth that music can be recorded losslessly in the first place. All sampling is lossy. CDDA specifies a certain sample rate, beyond which you lose higher frequencies, and a fixed number of bits per sample, so you lose precision. For the same bitrate, you would get better results by starting with a high-resolution master and using lossy compression down to the CDDA bitrate.

    I'm not arguing that a lossy encoding of CDDA is as good as CDDA; it isn't. Just that there's no law of nature establishing CDDA as the gold standard in the first place.

  • by hedronist ( 233240 ) * on Sunday December 03, 2006 @12:39PM (#17089890)
    A while ago I ripped our entire CD collection (about 1200 discs) to FLAC, a lossless codec. Each minute of audio takes approximately 5.5MB, so it lives on a 750GB drive (x 2 because I mirrored that sucker -- don't want to have to go through *that* again). I then did a batch down-convert to Ogg Vorbis to go onto my iRiver player (no, not all of it). I ripped to FLAC so that if/when better lossy codecs come along, I can simply do a batch down-convert without reripping. Note: you do *not* want to convert one lossy codec to another lossy codec; all you will get is the worst of both codecs in one file. (A rough sketch of the batch down-convert step appears after this comment.)

    I became curious about just how the various compressions stacked up against each other. I knew Vorbis was better than "normal" MP3 by a long shot, but newer MP3 variations have definitely gotten better. Here are the formats tested: WAV (straight from the CD), FLAC, Vorbis, and about 15 different MP3 variations (VBR, CBR/ABR, 32k to 320K). I tried both down-convert from FLAC and ripped-direct-from-CD (there should be no difference, and I certainly couldn't hear any). This was done on a variety of material, choosing particularly demanding/revealing passages from acoustic guitar, cafe jazz trios, brass ensembles, Beethoven's 6th, piano (jazz and classical), rock and vocalists (Streisand, Baez, Queen - Bohemian Rhapsody).

    I did a few tests and verified that I could not distinguish between WAV and FLAC -- no surprise there -- so for convenience the other formats were compared to FLAC as the baseline.

    I did extensive A-B, B-C, A-C, etc., etc. comparisons using my main system (Marantz A/V amp with Magneplanar MG-IIIa speakers) and also with Sennheiser HD595 headphones. Below 128k, MP3 is complete crap. Starting at 128-CBR, it got more difficult to hear the difference. At CBR/192 or VBR/medium, I could rarely distinguish MP3 from FLAC, although sometimes the high-hat cymbals sounded like they lost a little bit of brilliance.

    Although I'm a fairly discerning listener, I do have high-frequency hearing damage in my right ear. So I brought in a friend who is a serious audiophile. We did a lot of listening and comparing (many hours over several days because your ears get "tired"), both on my system and back at his house.

    The Verdict: Vorbis is good, really good. But MP3s produced by LAME at VBR/Medium to VBR/High are also really, really good, maybe even better. MP3/VBR/Medium is approximately the same size as Vorbis/Normal (-q 4.99) at about 1MB/minute -- 1/5 the size of the FLAC files. Although there are players out there that can handle Vorbis, there are many more that can't.

    P.S. We're not going to throw out the FLACs, because something better *will* come along. By that I mean something smaller than MP3/VBR/High.
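
For readers who want to try the same archive-then-down-convert workflow, here is a rough sketch of a batch FLAC-to-Vorbis conversion. It assumes ffmpeg is installed and uses placeholder paths and an arbitrary quality setting; it is an illustration, not the poster's actual script.

```python
# Rough sketch of a batch down-convert: walk a FLAC archive and produce Ogg
# Vorbis copies for a portable player. Assumes ffmpeg is installed; the paths
# and the quality setting are placeholders, not the poster's actual setup.
import subprocess
from pathlib import Path

SRC = Path("/music/flac")   # lossless archive (hypothetical path)
DST = Path("/music/ogg")    # copies for the portable (hypothetical path)

for flac in SRC.rglob("*.flac"):
    ogg = DST / flac.relative_to(SRC).with_suffix(".ogg")
    ogg.parent.mkdir(parents=True, exist_ok=True)
    if ogg.exists():        # skip tracks that were already converted
        continue
    # Always encode straight from the lossless source; never chain one lossy
    # codec into another. -q:a 5 is roughly Vorbis "Normal" quality.
    subprocess.run(["ffmpeg", "-i", str(flac), "-c:a", "libvorbis",
                    "-q:a", "5", str(ogg)], check=True)
```
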
  • Re:Double blind test (Score:4, Informative)

    by Alcari ( 1017246 ) on Sunday December 03, 2006 @12:45PM (#17089946)
    I recall a test at the university here. The local audiophile group set up their best-of-the-best gear, worth about as much as a Lexus, and inserted one CD holding the original tracks plus 128, 160, 192, 224, and 256 kbit/s MP3s, all at 44.1kHz. Most people could generally pick out 128kbit as 'not quite as good as the rest', but all the others sounded pretty similar. However, when the platinum-encrusted CD player got replaced by a generic MP3 player, everything sounded a lot worse.
  • by h2g2bob ( 948006 ) on Sunday December 03, 2006 @12:53PM (#17090016) Homepage

    Ahem, http://flac.sf.net/ [sf.net]

    It's used for Magnatune downloads (among others), and supported by decent media player software and a handful of MP3 players.
  • by hankwang ( 413283 ) * on Sunday December 03, 2006 @12:54PM (#17090030) Homepage
    Yes, there will be a quality difference between #4 and #1, but it'll be the same minuscule PSNR loss as from #1 to #2. So unless you transcode a dozen times or something, it won't really hurt you.

    Every encoder will generate ringing and other artifacts. Every good encoder tries to put those artifacts just a bit below the hearing threshold according to an algorithm that has been tested extensively with normal music. However, encoders are generally not fine-tuned to deal with the unnatural type of noise that results from another encoding process, resulting in the noise ending up above the hearing threshold after the second time.

    You might wish to check some double-blind test results on HydrogenAudio [hydrogenaudio.org]. Short version: reencoding 256 kbps MP3 to 128 kbps MP3 sounds horrible compared to 128 kbps MP3 straight from the lossless source.
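
If you want to reproduce that kind of comparison yourself, here is a minimal sketch that produces both a straight 128 kbps encode and a 256-to-128 kbps transcode of the same track for your own ABX test. It assumes ffmpeg built with LAME support; "track.flac" is a placeholder filename.

```python
# Produce two 128 kbps MP3s of the same track: one encoded straight from the
# lossless source, one that passed through a 256 kbps MP3 first. ABX them.
# Assumes ffmpeg built with LAME; "track.flac" is a placeholder filename.
import subprocess

def encode(src: str, dst: str, bitrate: str) -> None:
    subprocess.run(["ffmpeg", "-y", "-i", src, "-c:a", "libmp3lame",
                    "-b:a", bitrate, dst], check=True)

encode("track.flac", "direct_128.mp3", "128k")                # lossless -> 128 kbps
encode("track.flac", "intermediate_256.mp3", "256k")          # lossless -> 256 kbps
encode("intermediate_256.mp3", "transcoded_128.mp3", "128k")  # two lossy generations
```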

  • by Superfarstucker ( 621775 ) on Sunday December 03, 2006 @12:56PM (#17090040)
    I have to disagree. Anyone who really cares to know the truth can see the difference for themselves. Foobar2k is packaged with a plugin called ABX Comparator. http://en.wikipedia.org/wiki/ABX_test [wikipedia.org] You don't need special equipment. Anyone can do it if they actually own a cd.
  • by Yvan256 ( 722131 ) on Sunday December 03, 2006 @01:05PM (#17090126) Homepage Journal
    FLAC and Apple Lossless are both PCM-encoded, which to some people amounts to "lossy" (and they're technically right).

    My original post did say "show me non-lossy, non-PCM".
  • Re:Double blind test (Score:5, Informative)

    by Anonymous Coward on Sunday December 03, 2006 @01:08PM (#17090164)
    No need for grad students. Hydrogenaudio [hydrogenaudio.org] regularly does double-blind listening tests (there's a new one [maresweb.de] currently [hydrogenaudio.org] underway) and the results are damning for "audiophiles" everywhere.


    Using up to date encoders, for the vast majority of people, for the vast majority of tracks, 128 kbps is indistinguishable from source.

    Link. [maresweb.de]

    Everyone should try to ABX at least once. You'll be shocked how much worse your ears are than you believe them to be... ABX Just Destroyed My Ego [hydrogenaudio.org] is a very informative read for any would-be audiophiles:

    I think the reason is in large part due to the common misconception that audio compression heavily alters the sound: less dynamics, weaker bass, and all those other descriptions "audiophiles" like to throw around, which are in fact nothing more than placebo. In reality, the artifacts are much more subtle, and often require actual training for an inexperienced listener to be able to hear them.
  • by Yvan256 ( 722131 ) on Sunday December 03, 2006 @01:11PM (#17090198) Homepage Journal
    I'm well aware of the existence of FLAC and Apple Lossless, thank you.

    But last time I checked, all sampled music was PCM, and that's lossy by definition. You're limited in the sampling rate and the bit resolution, which makes it lossy when compared with the original (i.e. "real-life") source.

    Then again, like my original post says, audio CDs are what most of us have to use as the "original lossless" source.

    So no, FLAC isn't "lossy" in the MP3/AAC/VQF/WMA sense, but it is PCM, which my original post clearly pointed out (I asked for "non-lossy, non-PCM"; FLAC is non-lossy but is PCM).

  • Re:Double blind test (Score:3, Informative)

    by SonicSpike ( 242293 ) on Sunday December 03, 2006 @01:20PM (#17090286) Journal
    I am an audio engineer, and the college I graduated from has an MFA program that does this sort of thing all the time. Check it out:
    http://mtsu.edu/~record/ [mtsu.edu]
  • by Inoshiro ( 71693 ) on Sunday December 03, 2006 @01:23PM (#17090330) Homepage
    Since it seems you fooled your mods with handwaving, I'm going to explain what you mean and why you're wrong.

    Taking an analog signal and representing it digitally is an application of Nyquist-Shannon sampling [wikipedia.org]. The important bit to understand (for those of you who've never heard of it) is that the Nyquist rate [wikipedia.org], the minimum sampling rate you need, is twice the highest frequency you want to capture.

    A 44.1kHz sampling rate perfectly captures signals up to 22.05kHz, 48kHz handles up to 24kHz, and so on. Human hearing tops out at 20kHz for most people, and many people spend a good chunk of their lives destroying their upper hearing range anyway (rock concerts, overly loud headphones, etc.). 48kHz is marginally better, but 44.1kHz is more than enough to sample anything most people can hear.

    "Let's not perpetrate the myth that music can be recorded losslessly in the first place. All sampling is lossy." -- so, since we're directly sampling (sector-by-sector) the raw bit values, or sampling a perfect reconstruction of a 22Khz signal, there is no loss either way (although the 2nd one has to deal with cables and other noise in the electrical system, since you pass through DAC -- analog -- ADC). At least, not loss humans can hear.
  • by ucblockhead ( 63650 ) on Sunday December 03, 2006 @01:29PM (#17090396) Homepage Journal
    You are confusing terms. "Lossy" and "lossless" are terms that apply only to compression. They have very specific meanings that have nothing to do with recording accuracy.

    You are correct that it is impossible (even theoretically) to record music perfectly accurately... but this doesn't have anything to do with "lossless". CDDA encoding is lossless: it is a perfectly accurate representation of what was recorded, even though the recording itself is not a perfectly accurate representation of the sound wave.

    This is an important distinction in that you can perfectly accurately convert between anything compressed in a lossless manner, but you lose accuracy every time you convert between anything compressed in a lossy manner. That is, I can convert CDDA->FLAC->Apple Lossless->CDDA over and over ad infinitum and still get exactly the same CDDA file. On the other hand, if I were to try this CDDA->MP3->WMA->CDDA, I'd end up with crappier and crappier reproductions.
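
A small sketch of that round-trip claim, assuming ffmpeg is available and "track.wav" stands in for any 16-bit PCM source file: encode to FLAC and to MP3, decode both back, and compare the raw samples.

```python
# Encode the same PCM to FLAC and to MP3, decode both back to raw samples, and
# compare. The FLAC path comes back bit-identical; the MP3 path does not, even
# at 320 kbps. Assumes ffmpeg is installed and "track.wav" is a 16-bit PCM file.
import hashlib
import subprocess

def pcm_hash(path: str) -> str:
    """Decode to raw 16-bit little-endian PCM on stdout and hash the samples."""
    raw = subprocess.run(["ffmpeg", "-i", path, "-f", "s16le", "-"],
                         check=True, capture_output=True).stdout
    return hashlib.sha256(raw).hexdigest()

subprocess.run(["ffmpeg", "-y", "-i", "track.wav", "track.flac"], check=True)
subprocess.run(["ffmpeg", "-y", "-i", "track.wav", "-c:a", "libmp3lame",
                "-b:a", "320k", "track.mp3"], check=True)

print("original :", pcm_hash("track.wav"))
print("via FLAC :", pcm_hash("track.flac"))   # identical to the original
print("via MP3  :", pcm_hash("track.mp3"))    # different: the samples were altered
```
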
  • by Bluesman ( 104513 ) on Sunday December 03, 2006 @02:02PM (#17090752) Homepage
    You're right, but again we have to consider what people are able to hear. The number of quantization levels used ensures that the human ear can't tell the difference between two adjacent levels.

    It's funny, I have an audiophile acquaintance who swears that records are superior in every way to "digital," and for the same reasons described above. The funny thing is, because of the large number of quantization levels used in a CD, the CD's dynamic range far surpasses that of any record player. More info here [georgegraham.com]

    Theoretically, yes, analog would always be superior. But in reality, physical limitations of the stylus on a record player limit that medium far more than quantization limits the CD. Those same physical limits exist in the human ear, too.

    So, while digital might not be "perfect" theoretically, it's "perfect enough" allowing for the limitations of the human ear.

  • by spineboy ( 22918 ) on Sunday December 03, 2006 @02:03PM (#17090762) Journal
    I'm a musician in my spare time, and I've noticed that other musicians really don't care much about the quality of the recordings they listen to. Recording songs in a studio is an exception, but many of my friends just listen to their music on crappy boom boxes, etc. Is it a function of being poor? Nope, haven't seen that. What I have noticed is that the majority of "audiophiles" are not musicians. Of course there are exceptions, but generally musicians are interested in the chord progression, melody, rhythm, instrumentation, and so on. The recording quality is the last thing we care about when listening to a song.
  • by baffled ( 1034554 ) on Sunday December 03, 2006 @02:55PM (#17091272)
    The Nyquist rate, or twice the highest frequency, is adequate for a signal that doesn't change. However, audio consists of a set of frequencies that are constantly changing, and this reduces the highest frequency that is accurately represented at a given sampling rate.

    While I don't have any reference to give you, I find it a matter of common sense. If you sample a 1hz signal @ 2hz, you'll see consistent peaks & valleys, and the signal can be assumed almost immediately, after 3 samples (ignoring issues of quantized amplitude sensitivity over time). If you sample a 0.9hz signal @ 2hz, you'll see peaks & valleys alternating as before, but their amplitudes are both approaching zero, then cross zero, approach peak, and repeat. After analyzing this signal for a duration, you could assume it was a 0.9hz signal because of the relationship between the rate of amplitude change and the rate at which those amplitudes cross zero.. although this also assumes that you'd never see a 1hz signal simply increasing and decreasing amplitude at that same rate - considering this condition places stipulations on both frequency AND amplitude over time, whereas a 0.9hz signal only stipulates the frequency over time, we can only make a definitive assumption if we know the frequency doesn't change over time.

    Hence, considering the frequencies are changing over time, we can't possibly accurately reconstruct an audio signal using a sample rate at twice the highest frequency, unless you get very lucky. As we consider a lower and lower highest frequency, our chosen sampling rate becomes more and more accurate, though I don't believe you ever reach perfect 100% reconstruction because of the irrational nature of true time-varying frequencies. One could, theoretically, calculate the accuracy of a given sampling rate for a given maximum frequency - I'm sure someone has at some point.

    In fact you could analyze the typical audio signals that are digitized today, and develop some rough statistical analysis of how often a given frequency changes at a rate that could be interpreted as another frequency. This would likely vary depending on the individual frequency, the relative location within a song, and the musical genre. You could use these numbers to select an appropriate sampling rate to achieve N% accuracy of frequencies up to a X-hz maximum.
  • by Anonymous Coward on Sunday December 03, 2006 @03:30PM (#17091590)

    although this also assumes that you'd never see a 1hz signal simply increasing and decreasing amplitude at that same rate - considering this condition places stipulations on both frequency AND amplitude over time, whereas a 0.9hz signal only stipulates the frequency over time, we can only make a definitive assumption if we know the frequency doesn't change over
    A 0.9hz signal that changes over time is no longer a 0.9hz signal.


    For example, an AM radio signal has its carrier frequency, the one you see on your radio dial, modulated by an audio signal. This means its amplitude changes with the audio signal; the effect on the frequency content is to spread it out around the original unmodulated carrier. The amount of spreading is related to the frequency content of the modulating signal.
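
A quick numerical illustration of that spreading effect, using NumPy; the frequencies are arbitrary round numbers chosen so every component lands exactly on an FFT bin.

```python
# Amplitude-modulating a carrier spreads its spectrum into sidebands at
# carrier +/- modulating frequency. Frequencies are arbitrary round numbers
# chosen so every component lands exactly on an FFT bin.
import numpy as np

fs = 8192                       # 1 second of signal at 8192 Hz -> 1 Hz FFT bins
t = np.arange(fs) / fs
carrier, mod = 1000, 64         # Hz
signal = (1 + 0.5 * np.cos(2 * np.pi * mod * t)) * np.cos(2 * np.pi * carrier * t)

spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
peaks = np.flatnonzero(spectrum > 0.01)   # bin index equals frequency in Hz here
print("spectral peaks at:", peaks.tolist(), "Hz")   # expect [936, 1000, 1064]
```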

  • by jrockway ( 229604 ) <jon-nospam@jrock.us> on Sunday December 03, 2006 @06:13PM (#17092924) Homepage Journal
    > since almost anyone can hear the differences between different digital formats

    He's not talking about formats, he's talking about the way samples are recorded. Each sample is a number from 0 to 2^16-1. He's saying that human ears can't hear the difference between 2^16-1 and 2^16-2 (and so on, down to 0). This means there's no point in adding more bits to each sample, since you can't hear the difference anyway. (The only reason to add more bits is if you have a really small signal that you're going to amplify later. Try listening to music through an amplifier fed from a digital source while your music player digitally reduces the volume: it sounds really weird, because you're reducing the number of bits. A small numeric sketch of this appears after this comment.)

    > Red Book audio, the standard for CDs, is not the highest quality humans can hear

    Any proof here? I don't have anything to test with personally, but considering that CDDA can sample any sound that your ears can hear, and that each level it represents is indistinguishable by ear from the adjacent levels, it's probably pretty close to perfect. The output of a CD and a live performance will look different on an oscilloscope, but they'll probably sound the same to your ears.
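
A small numeric sketch of the "digital volume control costs you bits" point made in the comment above; the attenuation, frequency, and sample values are illustrative only.

```python
# Attenuate a full-scale 16-bit signal by 30 dB in the digital domain, then turn
# the (analog) gain back up. The re-quantization costs roughly 30/6 = 5 bits of
# resolution, which shows up as a much larger error floor. Numbers are
# illustrative only.
import numpy as np

fs = 44_100
t = np.arange(fs) / fs
full_scale = np.round(32767 * np.sin(2 * np.pi * 1000 * t)).astype(np.int16)

gain = 10 ** (-30 / 20)                               # -30 dB digital volume
quiet = np.round(full_scale * gain).astype(np.int16)  # re-quantized to 16 bits
restored = quiet / gain                               # amp turned up to compensate

err = restored - full_scale
print("peak sample after attenuation:", int(np.max(np.abs(quiet))))   # ~1036 of 32767
print("RMS error after re-amplification:", float(np.sqrt(np.mean(err ** 2))))
```
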
  • by tepples ( 727027 ) <tepples.gmail@com> on Sunday December 03, 2006 @07:19PM (#17093418) Homepage Journal

    I am very interested in the theoretical basis of your assertion that the quantisation errors in a CD are small enough to be unnoticable by the human ear

    One step of CD mastering involves quantizing a signal to linear PCM at 16-bit. This process introduces quantization error [wikipedia.org], which shows up in the reconstructed signal as a noise floor at roughly 93 decibels below full scale (-93 dBFS). This means if a recording is played with volume set such that full scale = 100 decibels sound pressure (100 dB SPL), quantization noise will be about 7 dB SPL. The human ear cannot hear sounds below the absolute threshold of hearing [wikipedia.org], and this threshold is much higher above 8000 Hz than it is in the region of peak sensitivity (1000 to 6000 Hz). Noise shaping [wikipedia.org] algorithms have been developed that move most of the quantization noise above 10000 Hz. Therefore, at comfortable listening levels, a properly mastered CD moves all quantization noise out of range of the human auditory system.
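
As a back-of-the-envelope check of those figures: the textbook formula for uniform 16-bit quantization gives roughly 98 dB for an undithered full-scale sine, and TPDF dither (standard mastering practice) costs a few dB, which lands near the -93 dBFS noise floor quoted above. A short sketch of the arithmetic:

```python
# Textbook figures for 16-bit uniform quantization. The undithered SNR for a
# full-scale sine is ~98 dB; TPDF dither adds about 4.8 dB of noise, putting the
# floor near the -93 dBFS figure quoted in the parent comment.
bits = 16
snr_undithered = 6.02 * bits + 1.76          # ~98.1 dB
snr_tpdf_dithered = snr_undithered - 4.77    # ~93.3 dB

playback_full_scale_spl = 100                # dB SPL at full scale, as in the parent
print(f"undithered SNR: {snr_undithered:.1f} dB")
print(f"dithered noise floor: -{snr_tpdf_dithered:.1f} dBFS "
      f"(~{playback_full_scale_spl - snr_tpdf_dithered:.0f} dB SPL at that volume)")
```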

  • by evilviper ( 135110 ) on Sunday December 03, 2006 @09:11PM (#17094302) Journal
    AAC is not "Apple's".

    No it isn't, but perhaps he is SPECIFICALLY talking about Apple's implementation.

    WMA is a container, not a compression codec.

    Completely wrong. ASF is the container used by WMA and WMV files.

    WMA is indeed the name of the audio codec, and WMV is a video codec.

    AVI is a container and not a compression codec.

    He didn't say these were codecs. Included in your own quotation, he said: "audio file formats."

    WAV is not a lossless format. It is limited in its dynamic range (bits per sample) and sample rate. Compared to analog or a raw sound source, raw WAV/PCM data loses a lot of the sound.

    Yes it is. You'll get exactly the bits out that you put in. Your complaints are about DIGITAL SAMPLING OF ANALOG AUDIO AND HAVE NO SPECIFIC RELEVANCE TO WAV.

    FLAC and other lossless codecs produce identical byte-to-byte output when compared to wav/pcm.

    FLAC is not a lossless format. It is limited in its dynamic range (bits per sample) and sample rate. Compared to analog or a raw sound source, FLAC loses a lot of the sound.

  • by PyrotekNX ( 548525 ) on Sunday December 03, 2006 @10:39PM (#17094848)
    Even though a CD has a large dynamic range window, most new CDs are mastered too hot and there is significant loss due to over normalization and clipping. Older CDs were mastered for Hi-Fi systems and have a great deal of dynamic range. Newer CDs are mastered to be heard on the radio or a portable CD player. Records on the other hand are still mastered to have more dynamic range and therefore are superior recordings.

    Analog recordings have a soft window, so there isn't hard clipping like there is in the hard window of digital.
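
A toy illustration of that hard-versus-soft clipping point, using NumPy; the drive level, frequency, and tanh saturation are arbitrary stand-ins, not a model of any particular analog medium.

```python
# Push a sine 6 dB past full scale and compare digital hard clipping with a
# tanh-style soft saturation (a rough stand-in for analog "soft window"
# behaviour). Drive level and frequency are arbitrary.
import numpy as np

t = np.linspace(0, 1, 44_100, endpoint=False)
hot = 2.0 * np.sin(2 * np.pi * 440 * t)   # mastered "too hot": peaks at twice full scale

hard = np.clip(hot, -1.0, 1.0)            # digital: flat-topped waveform
soft = np.tanh(hot)                       # analog-ish: rounded peaks

def distortion_fraction(x: np.ndarray) -> float:
    """Fraction of signal energy outside the 440 Hz fundamental."""
    power = np.abs(np.fft.rfft(x)) ** 2
    return float(1.0 - power[440] / power.sum())   # 1 Hz bins: index == frequency

print("hard clip:", round(distortion_fraction(hard), 3))   # more distortion energy
print("soft clip:", round(distortion_fraction(soft), 3))
```
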
  • by Anonymous Coward on Sunday December 03, 2006 @11:28PM (#17095136)
    Amen brotha,
    (posting anonymously because I've already modded)
    a good friend of mine is the principal contrabass in one of Germany's most respected symphony orchestras (NDR.de). When it comes to (classical) music, he knows what he's talking about. He uses an iPod for pop AND classical, and he repeatedly tells me that when music is properly ripped (depending on what it is, anywhere from 128kbit up to 196kbit or VBR), he can hear his cello colleague's fart in Bruckner's D minor symphony. He generally doesn't give a shit about so-called overpriced "audiophile" equipment. Maybe he went deaf from performing too many Mahler symphonies ;-)
