Music Media

Musical Machines Gain Recognition

vena writes "CNN has an insightful article on the increased role of computers in the production of music. While Musikmesse, the world's largest musical instrument show, rapidly increases its support of the computer as a musical instrument, there are still limitations to the power and ability of software synthesizers. However, the ability of a computer to make the everyman a musician could herald a coming age of increased play and experimentation in music. Software such as Reason by Propellerheads Software brings unprecedented power to the hobby musician, and the presence of laptops as part of a live band's performance is becoming commonplace. The days of playing to your sequences off a DAT tape may be numbered, as musicians gain more control of their digital music in a live setting with the aid of new, powerful software and portable computers."


  • Question? (Score:2, Funny)

    by 56ker ( 566853 )
    Anyone ever thought of a slashdot theme tune?
    • by Anonymous Coward on Saturday March 16, 2002 @11:15AM (#3173014)
      This is the theme tune to the second series of slashdot, after they introduced advertising:

      First I was afraid
      I was very sad
      Kept thinking I could never read
      a slashdot full of ads
      But I had oh so many posts
      Smacked down for saying jamie's wrong
      I grew strong
      I learned how to carry on..
      So now there's ads
      More of the same
      I just logged on to find them here
      Between the news and all the flames
      I should have changed my fucking hosts
      I should have switched my uid
      If I had known for just one second
      they'd be back to bother me

      So off I go - I'm out the door
      Just turn around now
      'Cause I'm not reading anymore
      Weren't you the one who hit me with $rtbl
      You think I'm quelled
      You think I'd just go to hell --
      Oh no, not I
      I won't subscribe
      As long as I know how to post
      I know I'll be alive
      I've got all my life to live
      I've got all my posts to give
      I won't subscribe
      I won't subscribe

      It took all the strength I had
      Not to read this thread
      Kept trying hard to ban
      slashdot addiction from my head
      And I spent oh so many nights
      Just posting crap at minus one
      Used to be fun ...
      But now I want to cut and run
      And you see me at
      Another site
      I'm not that stupid little user
      Reading every night
      And so you felt like dropping in
      And just expect me to be free
      Now I'm saving all my comments
      For someone who's loving me

      So off I go - I'm out the door
      Just turn around now
      'Cause I'm not reading anymore
      Weren't you the one who hit me with $rtbl
      You think I'm quelled
      You think I'd just go to hell --
      Oh no, not I
      I won't subscribe
      As long as I know how to post
      I know I'll be alive
      I've got all my life to live
      I've got all my posts to give
      I won't subscribe
      I won't subscribe

      --anonymous
    • I suggest "Unlicked Stamps Of Tehigue" [ampcast.com] (scroll to bottom of page) :D

      (since I own copyright to that one, and God knows nobody else will have written anything like that, I COULD actually license it ;) or maybe "Bone Dragon" [ampcast.com] with its perky marimbas and short-circuiting electronic device solo?)

      ...just another actual musician...

  • by tlotoxl ( 552580 ) on Saturday March 16, 2002 @11:00AM (#3172989) Homepage
    If you're talking about all-in-one solutions, I would think that highly-programmable software such as pure data [ucsd.edu], being free and fairly (?) open, would top the list, along with less open, but also powerful, packages like MAX/MSP [cycling74.com]. And if you're talking about Reason, I would think that all-in-one (cheap) packages such as Orion [orion-central.com] would deserve a mention.

    I don't really use (beyond experimentation) any of that software, though - sticking to my own buggy stuff and my hardware synths - so I'm no expert - but next time I update my own (very limited and crash-prone) software synth, it will certainly be a DirectX instrument and maybe a pure data object.
    • Screw Reason - Fruity Loops. http://www.fruityloops.com.

      Try it for a month. Best software I ever bought. Free updates FOREVER.
      • Reason absolutely STOMPS on FruityLoops in *every* single aspect - except the price...

        ...but then again, isn't /. the "Free Software and Open-Source Love-Fest"???

        Samples played via Reason are cleaner...more patching and signal-routing capabilities, as well as patch modification capabilities...Re-Wire support so I can hook-up Reason to Cubase VST/32.

        FruityLoops = Musical Cheeze-whiz.
  • If you think Reason is powerful, you need to be shot. It is a simple sequencer with a few built in samplers, synths and effects. Perhaps you were thinking of Reaktor [nativeinstruments.de], made by Native Instruments. There are countless sequencers, effects and instruments out there that can be combined in any way you can imagine. Here are a few links to get you started:

    Native Instruments [native-instruments.net]
    Cubase VST [steinberg.net]
    K-v-R [kvr-vst.com] (huge VST resource)
    • Shot?!? Somebody NEEDS to be shot? A little fucking overzealous, don't you think?

      I do agree that it's a sequencer with some samplers, but 2 things:

      (1) they're all together in one package with GREAT sounds;

      (2) a computer is just wires and silicon. It's all with how you put them together. Reason is an easy interface with routing capabilities. Did I mention it's easy? Put that in the context of the object of your flame and it's a worthwhile mention.

      Calm down.

      Or join the army.
    • Um, Berklee, one of the nation's most respected music schools, opted to use Reason as the tool of choice to teach electronic music. Maybe you're using some other definition of "powerful" that I was previously unaware of.

      from Propellerheads site: [propellerheads.se]
      "Berklee has chosen Reason's virtual on-screen equipment to teach signal flow, routing, mixing, synthesis, sampling, and sequencing. Never before has one software application been able to provide students with virtual "hands on" experience using so many different pieces of electronic music gear."
      • Probably the reason they opted for it is that it's good value for money: it has all the elements of a software recording studio in one package. If you are interested in music production as a career, though, Reason is just a toy. I have never been to a studio that used Reason.
    • Re:Purlease (Score:2, Insightful)

      If you think Reason isn't powerful, then you simply don't know how to use it very well.

      Luckily for me, and plenty of other people who are actually working professionally as electronic music producers, Reason exists as a simple, highly configurable environment for sound design and composition. Furthermore, if you want to use it in conjunction with apps like Reaktor, just use the ReWire protocol to fly the audio and midi data into the sequencer of your choice (Cubase, Logic, Nuendo) and you can use the two side-by-side with sample accurate sync.

      I love the "Reason is a toy" mentality. I really think that it's just GUI prejudice - if an app or an environment looks too clean and is too easy to use, it must be junk.

      No matter. My last 2 12"s and my upcoming full-length were all made primarily in Reason. The remixes that I've been doing with other people on my label have been greatly simplified by flying Reason files back and forth over the net, because plenty of them are using it as well.
  • I have used computer-based composing and synthesizing quite a bit, and I have listened to quite a bit of it as well. Sure, some guy might have a $14,000 setup, but that doesn't mean he has any talent. There are mountains of bad techno/trance out there. Perhaps we should concentrate on developing sources of talent, instead of synthesizers. I have a crappy 2000 dollar system with an iMac, and it's all I need to do techno, classical, jazz, and even some alternative stuff.
    • Sure, some guy might have a $14000 setup, but that doesn't mean he has any talent.

      That's what's great about good software on a laptop!

      Software is consistently cheaper than its hardware counterpart. This means there will be people with talent who couldn't afford the $14,000 synth setup that can now create their own innovative music.

      Sure, there might be more bad musicians, too, but they will get ignored, as always. Cheaper music gear means a larger pool of talent to draw from.
    • Talent is just half of the battle, and gimmicky quick fixes will never replace careful study and hard work. On this note, I know at my school, the Crane School of Music, we have very few courses dedicated to music technology. I know that it is hard to stay at the cutting edge of technology, but nevertheless the potential of music technology is staggering, and I feel that at least my school is years behind. It would be nice to have courses in synth programming, sampling, and other basic techniques. The only bright spot is a course on digital audio using ProTools on a Macintosh. There is no mention of software synthesizers though. Do they offer courses like these at other schools?
    • A friend of mine who had never touched a drum machine in his life was messing around with Fruity Loops on my PC and produced some of the most delicate loops I've ever heard. Without relatively cheap software packages he'd never have had the opportunity.

      You have to decide whether you want small amounts of high quality music by a few truly talented people, or masses of noise with some really good work that would otherwise remain undiscovered.

      Up to you.
  • Buzz (Score:4, Interesting)

    by embeesh ( 560485 ) on Saturday March 16, 2002 @11:16AM (#3173016)
    If you don't have the $10k to set up a Pro Tools studio, check out Buzz at www.buzzmachines.com [buzzmachines.com]. You can set up virtual versions of your instruments and run them through effects, etc, then run all those into the master. It's very intuitive, give it a try.
    • It's very intuitive, give it a try.

      Utter, utter bullshit. I know it's a powerful and flexible app, but intuitive it is not.

  • Heck, I used computers on stage starting around 1985. My trusty C64 (later an SX64, the luggable version) ran Song Producer, software published by Moog that managed a multi-synth MIDI setup. Later, I wrote my own live performance software for the C64 and Atari ST. I brought the Atari on stage "headless"; text messages were sent via System Exclusive messages to the display window on one of my synths.
  • Helpful Links (Score:5, Informative)

    by laxian ( 174575 ) <digitalstruggle@y a h o o . c om> on Saturday March 16, 2002 @11:24AM (#3173035)
    This is definitely an area which I have devoted almost too much time to in the past year. Here are some links:
    • http://www.kvr-vst.com [kvr-vst.com] - My favorite VST (softsynth and effect plugin) news and discussion site.
    • http://www.em411.com [em411.com] - Another computer music news site.
    • http://www.computermusic.co.uk/ [computermusic.co.uk] - Lovely Computer Music magazine
    • http://www.steinberg.net [steinberg.net] - Steinberg, makers of "Cubase" ... a software sequencer, music work environment and more.
    • http://www.emagic.de [emagic.de] - Makers of "Logic". A lot like Cubase. Sequencer holy warrior fanatics will track me down and rip me apart for mentioning Cubase first.
    • http://www.cycling74.com/ [cycling74.com] - Makers of sound programming thingies Max/MSP and Pluggo. Pretty complicated, but reportedly worthwhile.
    • http://microsound.org/ [microsound.org] - Home of arguably the snobbiest "experimental music" and computer music mailing list on the net. Plenty of interesting stuff here too. Prepare to listen to various 30-minute-plus "masterpieces" consisting only of quiet shuffling sounds.
    • http://www.nativeinstruments.de/index.php?home_us [nativeinstruments.de] - (English Link) Stylish softsynth and plugin rockstar company. They make some incredible products. Geeks will have hard-ons for Reaktor.
    • http://www.refx.net [refx.net] - Maker of interesting VST plugins, notably "QuadraSID" which is a sound plugin based on the Commodore 64's famous, classic "SID" chip.
    I'm sure I left plenty of stuff out ... so put up your own links! :)
    • Arboretum Systems [arboretu.com] These guys created tons of amazing VST (among other formats) plugins, noise reduction, and click-and-pop removal software back in the day. They also came up with the (sure, it's obvious now, but back then it wasn't) system that lets a user tweak multiple parameters in 2 dimensions instead of using sliders (1 dimension).

      While I hadn't heard about them in a long time, I got to see them at the AES and NAMM shows, where they demoed a program for OS X that is supposedly able to make audio and video compositions with effects, etc... all without tracks.

      If there's one thing that has always bothered me with audio programs, it's that they try to emulate physical devices, not taking into account the fact that computers are excellent at presenting information in a different way. Hence my pleasure to see a powerful-looking trackless system in the works for my beloved OS X box.

    • You can't forget the wonderful fruityloops program... http://www.fruityloops.com

      Pretty fun vst and synth stuff.
    • I'm currently working on a final for a computer music class using Max/MSP. There's a fairly large research group at UCSD involved in computer music, and lots of other computer arts things.

      http://www.crca.ucsd.edu/old/crca.html [ucsd.edu]
    • Help me out here. I'm a guitarist and web geek, so I know shit about keyboards and synths. One thing I need to know: do I need to buy Cubase in order to play VST plugins? What other software will run VSTs? Oh yea, I just bought a dual 1GHz Mac, so obviously I'd prefer Mac-centric software. I'm planning on basing a home studio around my Mac. I've also got 5 or 6 x86 PCs laying around, so I'd have no real problem running that kind of stuff also.

      I'm looking for recommendations here gang. Help out a poor, addled, mostly deaf guitarist.

      • What you're looking for is a VST host program. This could come in all sorts of forms, but the most common is usually a sequencer/editing program that supports the use of VST plugins within it. Do NOT buy Cubase just for that. Cubase is overfeatured and expensive. There are a number of free or rather inexpensive host programs out there for the Mac platform. Two that come to mind are VSTi Host [hitsquad.com] and Bias-Inc's Vbox [bias-inc.com]. The latter is the more impressive-looking one; it functions as a plugin managing system and can run either integrated within a larger host (such as Cubase) or on its own. It retails for $99.

  • by sam_handelman ( 519767 ) <samuel...handelman@@@gmail...com> on Saturday March 16, 2002 @11:29AM (#3173055) Journal
    When I was a kid, I used to play around with Concertware a lot - it was (as far as I know) the first software that let someone with only the most minimal knowledge of music write something down and hear what it sounded like. This is really neat, I'll grant, and I happily churned out hours and hours of bad chamber music.

    However, after I started really playing with other people (band in school doesn't count) I concluded that computers were not really capable of producing music on their own. The computer plays whatever you type in perfectly, which is not what you want. The other musicians, if they're any good, adapt to what you play (this is particularly true of more improvised sorts of music, of course) which is a very "resource intensive" (in terms of your nervous system) proposition. Even if the players are producing exactly what the composer tells them, they're providing subtle variations in the sound (I don't want to mire myself in music-speak) that Concertware's great-grandchildren still cannot duplicate, at least that I've ever heard (although, to be fair, they can do a lot better than concertware.)

    This is one of the reasons I don't like most electronic music - you can take a recording of tuvan chanting and sing/play along with it, remix it, what have you, but the technology does not successfully duplicate what the monks would do if they were in the room playing with you. When and if it can, I'll call whatever device an AI.

    I suppose the people who actually operate cameras and draw cartoons have the same reservations about CG. As much as they and I may love computer-assisted editing (which is what most of the toys in the article are about), truly computer-generated music still sounds like the stuff that plonked out of my Mac SE30.
    • I'm not entirely clear about what your reservation is. If you're saying that an arrangement, programmed on a computer, can never capture human interactions or unpredictability, then I agree. I would also agree that most programmed music lacks that spontaneity -- but I would even have reservations about that, since I have heard professionally arranged MIDI-based files which (at low bitrates, mind you) rivalled orchestral versions of the same songs. (Never mind that I have also listened to music by the Black Dog which surprised me at every turn.)

      Most of all, I wonder why anyone would listen to computer arrangements of the sorts of songs which benefit the most from human inflection; I would hardly hold it against computer compositional and synthesis software that it cannot mimic monks chanting. What computers can do, however, is allow musicians to explore all sorts of other avenues which cannot be created by chanting monks or even gifted jazz players, and in the best of all worlds the chanting monks would take advantage, where appropriate, of interactive software (like pure data/max-msp/nato.0+55) to realize whatever sort of sonic texture they wanted while maintaining whatever sort of spontaneity they felt they needed.

      Moreover, while computers recreate perfectly whatever you enter, there's still plenty of room for serendipitous results in patches that one can never fully predict, or results that one could never have completely foreseen until they are programmed; particularly in the more abstract compositional programs, it is rare that what you programmed is exactly what you had heard in your mind beforehand.
    • Utter nonsense. "The computer plays whatever you type in perfectly, which is not what you want." Every instrument I know does that. The instrument always plays perfectly how it's supposed to play. There are things like MIDI which record timing to the 100ths of a second and reproduce what you just played.

      Since your experience with computers is with Concertware only, I assume it's where you input scores and the computer plays it back to you. Then, the computer is not an instrument but a player.

      Important distinction.

    • The problem with synths playing back stuff 'perfectly' has been tackled by Cakewalk [cakewalk.com]. The repetition that you typically hear in synthed music is due to all the notes having the same velocity and exact timing. With Cakewalk, you can use "Groove Quantize" and manual adjustments to provide more realism (i.e., make a note just slightly off, so it's still on the beat, yet perceptible to the human ear that it's not perfect). Also, at least with Cakewalk, when you record yourself through the MIDI port, you'll have it exactly how you played it.

      That all said, I don't think Dream Theater's going to be using software to make their music anytime soon. :-)
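
      Not Cakewalk's actual algorithm, but a minimal sketch of the "humanization" idea described above: nudge each note's start time and velocity by a small random amount so a sequenced part stops sounding machine-perfect. The (start_tick, velocity) note format, tempo, and offset ranges are just assumptions for illustration.

      import random

      def humanize(notes, ppq=480, tempo_bpm=120, max_shift_ms=12.0, vel_jitter=8):
          """Nudge (start_tick, velocity) pairs by small random amounts."""
          ticks_per_ms = ppq * tempo_bpm / 60000.0          # ticks in one millisecond
          max_shift_ticks = max_shift_ms * ticks_per_ms
          out = []
          for start, vel in notes:
              start += random.uniform(-max_shift_ticks, max_shift_ticks)
              vel += random.randint(-vel_jitter, vel_jitter)
              out.append((max(0, int(round(start))), max(1, min(127, vel))))
          return out

      # A perfectly quantized four-on-the-floor kick pattern, then its humanized version.
      kick = [(i * 480, 100) for i in range(4)]
      print(humanize(kick))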

    • but the technology does not successfully duplicate what the monks would do if they were in the room playing with you.

      Neither does Johnny Cash's new CD successfully duplicate what you would hear if he were in the room playing with you. Live music and recorded music are two completely different animals, as different as painting and acting. In live music, one or more people play instruments (guitars or samplers or tinfoil--anything that makes a sound) in a unique way. Doing so, they impart something intangible to the audience. An extreme example of this is Son House:

      "I remember seeing Son House at the Gaslight Cafe in NYC. He had just been rediscovered and was still quite nervous to play before people. He slowly rambled up to the stage and took a seat. The lights were bright and made it almost impossible to see the audience. Next, the steel guitar was handed to him and he fumbled to get a brass piece of tubing from his vest pocket. The Cafe was full of noise and excitement. There was little recognition of Son's being on stage. Then, to quiet the place, an announcement was made introducing the "legendary bluesman from the Mississippi Delta." Still noise, as most of the audience were very unfamiliar with Delta music or Son House.

      "Then the amazing part of the night occurred. Son slid the slide down the fingerboard of the guitar. The sound cried out. Everyone stood and looked. Next Son started his singing moan. His eyes rolled, arms shook, sweat quickly rolled down his forehead. Everyone remained standing, amazed at the sound. The song ended and from stunned silence a wave of applause emerged. Son played four more songs. The blues brought tears to people who had never been exposed to this type of sound. Those familiar with Son and his music cried for the joy of seeing him perform and the wailing sounds of the guitar."
      --Stefan Grossman

      Recorded music, on the other hand, is not merely a matter of recording the above performance. Sure, that's what people do, and some will try to convince you that their expensive mikes and high bit rate make it just like being there, but that's impossible. Let me make a tech-oriented sweeping generalization: no recording will ever capture a live performance in full. But here's the thing: recordings are a no less valid art form than performances. Once you accept the fact that you can't duplicate a live performance; once you embrace that fact, then you can use the CD medium to its true potential. No more is making a CD just a matter of getting the band to play one track without screwing up.

      People have realised that on a CD, that's not a guitar, that's not a voice, it's just a bunch of waveforms generated by 1s and 0s. No matter what you recorded, it's now electronic. So it doesn't make any difference if you loop one sound over and over. It doesn't matter if you apply massive effects to a vocal. And it's not cheating if every sound is programmed and not performed. The computer isn't making the music, it's still the person, just in a different way. You might say that your hard drive is your blank canvas, and when you start recording tracks to it, it's like you're painting, like you're constructing a song. Then when it's finished, through the glory (and I use that word with all seriousness) of technology, you can burn identical copies of that to a CD as many times as you want, and an unlimited number of people can enjoy your work in exactly the way it was intended.
    • There are two distinct things you speak of.

      One is how the machine interacts with a human who is also producing music. This seems to me like it would be *very* difficult, but purely computer generated stuff doesn't have this to deal with.

      The other is how precisely the machine generates music in the first place. This is very easy to change. Throw in some suitably tweaked randomness and you can probably have something sounding very "human" but it'd take a while to get just the right tweakedness.
  • Yeah, digital audio has really taken off. I didn't even think that was news anymore. The way a lot of it is implemented, and what with the number of snake-oil salesmen in the music industry, it's still a laborious process though.

    For one, there is the proliferation of formats: you have SCSI, ADAT, S/PDIF (coaxial & optical), USB (1.0 & 2.0), 8/16/32 bits + different sample rates, and you have to buy outrageous (and outrageously expensive) "converter boxes" for every one of them. Then of course you have to pay extra for equipment that does not honor some kind of copyright protection scheme.

    Add to that the fact that the software has become so complex and outlandish that you really need a manual to go with your pirated copy, and you almost start to get the sense that they don't really trust the people to make music.

  • Sure, the computer makes a fine tool for producing musical recordings, and for actually composing certain genres of music as well (particularly electronica, but also "classical," from what I understand). I somehow doubt that the computer will increase creative experimentation, however. First of all, we've already reached the point where anyone can pick up an instrument and learn to play. Second, most people don't "experiment," they just copy what other people have done.

    If you've made music on the computer and you've played a real instrument as well then you should know that only a real instrument gives you true, uninhibited power of expression. So much of music flows from the irrational part of us, and the computer can never help us with that.
    • The computer is now being used to create all-new types of genres. Like ones that didn't exist before. Additionally, stuff that was incredibly time consuming before is now a lot easier. Like lots of old "experimental" music ... which involved physically making collages of various tape recordings. While I admire the patience and vision of the artists that used to do that stuff, I realize that they probably really would have liked digital sampling capabilities.
    • only a real instrument gives you true, uninhibited power of expression.

      What the fuck is that supposed to mean, anyway?

      One thing that's really annoying about making music on the computer is the time-delay factor of it all. Even with a MIDI keyboard input device, it's still "hit some notes, fuck around with some settings on the laptop, eventually hear some results". You don't get the kind of immediate feedback you get on a "real instrument".

      But saying you can't express yourself using a computer to make music is clear, 100% bullshit.
      • While I totally 100% agree with your last sentence ... I'm troubled at the content in your middle paragraph.

        What you're talking about is commonly referred to as "latency". Really cool audio cards and hardware, plus great improvements in audio drivers and what-not have mostly made this disappear. I won't say that it's been made a "non-issue" ... because there are still people with bad audio cards or bad configurations ... but latency of under 10ms is very, very common out there. Personally, I have 10ms latency and I don't even notice. Physical knob turns correspond instantly with on-screen knob turns.

        As a funny side note, I recently read a letter in Computer Music where a guy wanted to know how to get higher latency ... because his computer was giving him less latency than on his real-life piano. This can also be a drums issue as well.

        • I wasn't actually talking about latency.

          What I was talking about was how, when composing on the computer, for me it's a process of "tap in a bass drum rhythm; tap in a snare rhythm; tap in some other percussion; pick out a decent program to use for a bassline; play a simple bassline; pick out a decent program for a melody; layer a melody on top".

          Only once you've done all these things can you really start the process of Making Music. Contrast with "rounding up a bunch of friends and starting to play".

          Perhaps that's unfair, because the computer is actually giving you an opportunity to do something you otherwise couldn't: make an entire band with one person. I just find that when I'm making music on the computer it feels less interactive and less instantaneous than when I'm at a friend's house banging out some crappy rock.

          YMMV, I guess.
          • Only once you've done all these things can you really start the process of Making Music. Contrast with "rounding up a bunch of friends and starting to play".

            Keep in mind that in the former you are able to produce all aspects of the track on your own, without imposing on other people's time. This is not necessarily a good thing -- getting immediate feedback from musical peers is a highly creative experience.

            Why not combine the "rounding up a bunch of friends" into your production? I work with another fellow DJ to churn out tracks, and we each work on different portions of tracks on different computers (plus headphones) with one central computer for sequencing it all together.

          • An average human being has an ear resolution of 25-35ms [utexas.edu], which means that if you hear different sounds with 30ms between each of them, they'll appear continuous to you. As you mentioned, most cards achieve less than 10ms latency, which is more than enough for an average human.
            • That is certainly untrue. The human ear is in fact sensitive to differences in timing as small as a millionth of a second. This fact has a lot to do with how we resolve the direction of a sound source. Tests of ability to resolve direction demonstrate accuracy down to 1 degree -- work out the trig, that's a pretty small difference in arrival time. (Yes, the head-related transfer function is also involved.)

            For real time performance the maximum acceptable latency is generally agreed to be about 6 ms.

            To put this into perspective, remember that sound travels about 1 foot in 1 ms. Try playing in time with a drummer who is 30 feet away -- it won't work. This is also why when the audience claps at a concert they always slip out of time.
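
            (A quick sanity check of the "1 foot per millisecond" rule of thumb, assuming roughly 1,125 ft/s for the speed of sound in air:)

            SPEED_OF_SOUND_FT_PER_S = 1125.0    # approximate, at room temperature

            def acoustic_delay_ms(distance_ft):
                """Milliseconds for sound to travel the given distance in air."""
                return distance_ft / SPEED_OF_SOUND_FT_PER_S * 1000.0

            for d in (1, 6, 30):
                print("%2d ft -> %.1f ms" % (d, acoustic_delay_ms(d)))
            # 1 ft is ~0.9 ms, and a drummer 30 ft away arrives ~27 ms late --
            # well past the ~6 ms figure quoted above for real-time performance.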
            • That is certainly untrue. The human ear is in fact sensitive to differences in timing as small as a millionth of a second. This fact has a lot to do with how we resolve the direction of a sound source.

              That number may be applicable in the case of reverb/acoustics, but be careful not to apply it across the board. It's also easy to demonstrate that two notes played within about 35 milliseconds are heard as one sound by the ear.

              A lot more fuss is made about latency (e.g. with respect to MIDI or audio cards) on theoretical grounds than is worth considering in the real world. Capture a recording of a good drummer or virtuoso pianist and you'll see double-digit delays in high-hat hits or the notes of a chord. These delays have not been troublesome simply because the ear/mind doesn't have that kind of resolution... and because they are part of what makes a human performance different from the performance of a sequencer.

              Mix delays of that sort with the playing of a band ... where distance between players and other factors push timing into a secondary consideration ... and real-world performance delays of up to perhaps 100 ms are tolerated among even professionals without comment.
              • Why do you think a conductor is necessary for an orchestra? Yes -- they would fall out of time without one because of latency!

                Also in advanced musicianship there is a phenomenon called microtime deviation, where the musician intentionally modifies the time-arc... (this is part of the humans != robots thing)

                ref. Psychology of Music, Diana Deutsch 1999
    • Classical music does use computers for composition. Many musicians use Sibelius software [sibelius.com] to write down their compositions.

      True, a computer can never compare to a live performance, particularly as far as solo work is concerned. However, some of the recent top British musicians have been working to produce purely synthesised classical music. As one whose father owns the main woodwind company and producer of oboes in the UK (shameless plug for Howarths [uk.com]), I know that a top oboist, Malcolm Messiter [messiter.com], produced a totally synthesised orchestra, "The Virtual Orchestra". My father brought the CD home one night and put it on. I merely thought it was a poor recording and performance on terrible instruments. We tested it out on everyone we had to dinner - nobody commented, and all were astounded that it was totally synthesised.

      So computers can be used much more than you think in real "classical" music in addition to the obvious uses in popular music.
    • I agree with what you're saying in a sense. Right now the computer is most suitable for making electronic music. But -- I think electronic music is far more experimental than in the "real instruments" world. I think we've covered almost all of that territory, and most of it was done as far back as say, Beefheart.

      For sure, digital instruments are not so good at replicating their analog equivalents. (Except perhaps for a nice digital piano.) I don't think anybody is claiming that.

      One thing that the computer definitely does is make it cheaper for artists to record at home. I recorded about 200 songs last year, almost all were digitally multi-tracked, with effects and editing done on the computer. I used real instruments. But I was able to do it by myself, without a trip to an expensive studio ... THAT certainly increased my ability to express myself.

  • Computer music is really amazing... You can create the most abstract sounds, but you can also create the most beautiful and complex string arrangements.

    Laptops won't replace musicians, they will just aid them. Anyone can pick up a paint brush and paint.... this might just make music available to more people.
  • by Darlington ( 28762 ) on Saturday March 16, 2002 @12:34PM (#3173248)
    A little perspective on Taco's summary and the use of music technology in general.
    • The days of playing sequences off a DAT are not numbered -- they're already long gone. Laptops have been used as sequencers to drive outboard MIDI gear for almost as long as there have been laptops (for me it started in 1992 with an Atari STacy). The new development, as mentioned in the CNN article, is using software synths (usually VSTi's) as live performance tools.

    • I disagree that there are "limitations to the power and ability of software synthesizers". By example, I offer Absynth from Native Instruments [nativeinstruments.de]. From the 68-stage envelopes(!) to the wave fractalization and spectral editing tools, this offers sound shaping tools that no hardware synth can compete with.

    • Up until recently, you could argue that the latency problem with software synths kept them second-class citizens behind hardware boxes -- you'd hit a key and get your note a split-second later. This final limitation has been defeated with the advent of faster computers and cheap professional audio hardware. I use a 1.2 GHz computer and a $300 Emagic [emagic.de] EMI audio interface [emagic.de], and my softsynth latency is about 2.5ms. Not perfect, but it actually beats some of my hardware synths. (Hit a fat chord with layered patches on an Emu Morpheus sometime and you'll see what I mean -- you get a flam, not a unison attack.) And when you play back sequenced software instruments, they're sample-accurate.
    So the story is not laptops on stage, or computers making everyone a musician (if you can't write songs, the computer will not help you), but rather, software synths coming into their own as valid replacements for hardware on stage and in the studio.
    • The days of playing sequences off a DAT are not numbered -- they're already long gone.


      I'd have to offer a dissenting view. I play in two industrial bands in Seattle, and know several others in the same area, and we all play live shows to DAT. It's not that we're opposed to soft synths (I've listened to Reason (har har), and several other bands are selling their gear to move to software-based solutions), but it's a matter of expense and reliability.

      It's a lot cheaper to have a backup DAT tape if something goes wrong with the first one than to have a backup PC with all your settings on it. If your DAT deck dies, you can find or borrow another one that will play your tape at short notice, relatively quickly and inexpensively, even on the road. If power dies or someone trips on a cord, all you have to do when it comes back on is skip ahead to the next song on the DAT, which takes at most three seconds. If power dies with a laptop, you have to reboot, reload the software, and get back to where you were in the setlist. That could take minutes, and that dead time makes for a nasty crowd. Even for different setlists, it's cheap to make different DAT tapes, and it's a matter of spending five minutes before the show programming the DAT to skip around the tape if you want to change the play order on an existing DAT.

      You never have to worry about hard drives crashing, the LCD display cracking, or optimizing the performance of the DAT player. You can mount a DAT deck in a SKB case and knock it around and know it will still work at the show. And believe me, after several weeks on the road, carrying your own equipment up and down stairs into dark, skanky clubs, the last thing you want is any hassle. If everything just plugs in and works, you can direct all your effort to putting energy into your live performance, and arguing with the club owner about getting paid, instead of trying to diagnose problems with a laptop.


      If you're in a band that can afford a dedicated sound/synth/computer tech on the road, then I say go for it. It certainly could open up vast new ways of doing things live. But until I reach that level and see the kind of rock solid reliability I get from a DAT, I personally won't be changing anytime soon.

    • (* and my softsynth latency is about 2.5ms. Not perfect, but it actually beats some of my hardware synths.*)

      How do you even measure it down at that level? Interference patterns?

      However, I agree that latency is probably directly related to one's equipment, setup and configuration skills. Perhaps that person should argue that getting good latency is more complex than with physical instruments, not that it is not achievable.
      • How do you even measure it down at that level? Interference patterns?

        OK, this is going to sound like I have no life, but I swear I do.

        What I did was set up an electronic drum pad to trigger a percussive sample in Logic (using the EXS24 software sampler). I put a contact microphone on the drum pad and recorded the sound of the stick striking the pad in the left channel, with the triggered EXS24 output in the right channel. Then I loaded the resulting WAV file into Sound Forge, selected the distance between the two peaks, and worked out the math based on the sampling rate and number of intervening samples.

        Maybe not the most scientific method, but there it is!
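
        A rough sketch of the same arithmetic in code, under the assumptions that the capture is a 16-bit stereo WAV with the contact mic in the left channel and the triggered softsynth in the right, and that an "onset" is simply the first sample over a threshold (the file name below is hypothetical):

        import wave, struct

        def measure_latency_ms(path, threshold=0.2):
            """Latency between the left-channel (trigger) and right-channel (synth) onsets."""
            with wave.open(path, "rb") as w:
                assert w.getnchannels() == 2 and w.getsampwidth() == 2   # 16-bit stereo
                rate = w.getframerate()
                frames = w.readframes(w.getnframes())
            samples = struct.unpack("<%dh" % (len(frames) // 2), frames)
            left, right = samples[0::2], samples[1::2]

            def first_onset(channel):
                limit = threshold * 32768.0
                for i, s in enumerate(channel):
                    if abs(s) >= limit:
                        return i
                raise ValueError("no onset found")

            return (first_onset(right) - first_onset(left)) / rate * 1000.0

        # print(measure_latency_ms("drumpad_vs_exs24.wav"))   # hypothetical capture file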

  • In response to some previous comments about perceived limitations of software synthesis for live performance, or emotional expression:

    Good synth software like Reason (and its synthesizers, drum machines, and samplers), can be controlled realtime via standard MIDI devices. A couple of interesting ones (I don't work for Midiman!):
    Midiman Oxygen8 [midiman.com]
    Midiman Surface One [midiman.net]

    Tactile interfaces like these allow for a huge range of expression and compensation in a live OR recording environment. A mouse/keyboard can be used too, but I often find the onscreen controls are not large enough or truly designed for exacting real-time control.

    Some may not find electronic sounds familiar or comfortable, but I truly believe a GOOD electronic musician has all the tools to add variance and emotion to a musical performance. Do many do it? I dunno, but the capability and potential is there.
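
    (Not Reason's actual API, just the arithmetic a host typically does when a knob on one of these controllers sends a MIDI control-change message: map the 0-127 CC value onto a synth parameter, exponentially here so a filter sweep sounds even across the range.)

    def cc_to_cutoff(cc_value, low_hz=40.0, high_hz=18000.0):
        """Map a MIDI CC value (0-127) to a filter cutoff frequency, exponentially."""
        cc_value = max(0, min(127, cc_value))
        return low_hz * (high_hz / low_hz) ** (cc_value / 127.0)

    # Turning a hardware knob from 0 to 127 sweeps the cutoff smoothly:
    for cc in (0, 32, 64, 96, 127):
        print(cc, round(cc_to_cutoff(cc)), "Hz")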

    • "synthesizers, drum machines, and samplers), can be controlled realtime via standard MIDI devices"
      Midiman Oxygen8 [midiman.com]
      Midiman Surface One [midiman.net]

      Right, and the market for unique controller interfaces continues to grow (see links below). Frankly, I think as we move toward faster processors and better design, we're going to see some startlingly unique ways to control digital musical events.

      There is a lot of power coming at us in the way of computer-based music software apps, but control of these computer tools via controller tools that permit maximum and ultimate degrees of freedom for the body is the real next revolution in musical expression.

      This doesn't mean the end of acoustic instruments, by the way, but rather their augmentation by tools that permit people who have a cognitive skew for a specific way of movement to express themselves in ways that they would otherwise not be able to.

      Here are a few more general umbrella sites to look for unique controllers and gestural input:
      http://www.cs.sfu.ca/~amulder/personal/webpages.html

      http://www.ircam.fr/equipes/analyse-synthese/wanderle/Gestes/Externe/

      http://citeseer.nj.nec.com/86255.html

      http://www.lgu.ac.uk/mit/cnmi/

      http://www.infusionsystems.com/

      Some 'wind' controllers:

      Yamaha
      http://www.yamaha-music.co.uk/PRODUCTS/MUSIC_PRODUCTION/MIDI_CONTROLLERS/WX5.ASP?sectionid=65

      Synthophone - employs a real sax as the body - elegant
      http://www.softwind.com/

      Akai EWI
      http://www.akaipro.com/defaultF.htm

      Others:
      Don Buchla's superb instruments - with some history
      http://www.buchla.com

      Starr Labs wonderful products:
      http://starrlabs.com

      Wind controller list:
      http://www.windsynth.org/wind_list/index.shtml

  • the computer is gradually becoming the instrument itself.

    Gee, considering much of my CD collection has been computer music for almost a decade, I'm glad to see a mainstream article about it!

    Computing machines have been used for music for quite some time. Other posters have mentioned the software like Reaktor, Absynth, FruityLoops, Max/MSP, Reason, etc, etc. Here are some random artists you can check out:

    Richard Devine [schematic.net] Uses Reaktor on several computers to create complex industrial electronic beats. His stuff is pretty unbelievable when you listen close to all the detail. He's written music for Nike ads recently so he's fairly accessible.

    kid 606 [brainwashed.com]: An up-and-coming laptop punk. He's written silly stuff and serious stuff too and done a lot for the live electronic scene. He pretty much uses only Reaktor on a laptop as well. Look for the track "Catstep/My Kitten/Catnap Vatstep DSP Remix By Hrvatski" on your favorite music-sharing service, off his "Down With the Scene [opuszine.com]" album, you won't be disappointed! Or at least you'll laugh at the singing robot voice.

    Autechre [autechre.nu] are the masters of abstract electronic music (imho). For the past few albums, they've slowly gravitated toward generative music (i.e., write a program to write the music). They use Max/MSP and other stuff (not entirely computers all the time). Their last album Confield is very abstract and almost unlistenable. But fascinating.

    Taylor Deupree [12k.com] and his 12k label from New York are into the minimalist side of things, very minimal electronic noise, very art-school stuff. Some of 12k's stuff combines very well with the noise a computer makes, which I like to play when working so that my computer's fan noise is "remixed". Pretty cool if you're into the abstract. They use all sorts of software for their art.

    Another Electronic Musician [anotherele...sician.com] is a guy in the California scene who makes nice unpretentious electronic beats with Reaktor and other stuff.

    Grooves magazine [groovesmag.com] is one of many independent magazines on electronic music. If you see an issue at your local leftfield bookstore, flip through it. They review music software too.

    There are plenty of academics into electronic music too. Paul Lansky [princeton.edu] is one off the top of my head. Several music schools have electronic music programs that use a lot of this software too (Berklee uses Reason, I believe).

    So, there is a pretty huge scene for electronic music. There are plenty of young musicians who have chosen the laptop as their primary instrument, and don't even think twice about it.

    • Also check out LOWRES [lowres.com], which carries some sweet computer-generated old school vinyl,
      and their sister company rematter [rematter.com], which has this group called "himawary" which is just the most amazing computer generated / live music show around.
      And also check out another cool computer generated group, NPFC [npfc.org].

      All from the Detroit scene.
    • I still find it hilarious that kid 606's most popular song is one that Hrvatski made. Why not just link to Hrvatski? I think he is much more clever...
    • You know, if you listen to some Hrvatski [reckankomplex.com], I think you'll agree that the 606 remix he does was probably more Hrvatski than kid606. Probably some 606 samples thrown in or something, but it sounds much more like a Hrvatski track than a 606 track.

      He's on tour [reckankomplex.com] right now. You should go see him if you get the chance.

      Nice other choices, BTW. Richard Devine and the rest of the Schematic label are all incredible. Unfortunately Richard didn't make it to the Schematic tour when it came to my town last month. Sad.
  • I think people are turned off by the interface, but it is quite capable for the price. It's free to use, though all you can do is export to mp3/wav and you can't save your work and come back later. The cost is a little over $100 IIRC. Should be well worth it though, if you plan to make tons of music. It's actually the fastest pattern-based composer I've found. I was using ModPlug tracker, but that gets extremely tedious.
  • I've had a link to Raymond Scott's [raymondscott.com] web site in my sig off-and-on. He's the guy who wrote so much of the music of Looney Tunes, although ironically he "probably" died not knowing that he was immortalized because of it! In particular, he wrote "Powerhouse", which is the "mechanical, assembly line" music you would know right off if you heard it. He also wrote "The Toy Trumpet" and "Dinner Music for Cannibals".

    But he's also an interesting guy in his own right. He probably developed the first music sequencer, and some of the first synthesizers. In fact, a young Bob Moog was inspired by visits to his massive music laboratory.

    I highly recommend checking out his site (although he died sometime in the 90s).

  • The open source software and tons of articles online about sound generation and synthesis will give you enjoyable hours of reading and playing. All this material can be very intimidating for the newcomer or uninitiated, so I would recommend the book published about Csound, which gives both newbies and experienced people plenty of material written by top experts in the field. And did I mention that it's all open source and running on many platforms?

    PPA, the girl next door.
    • A brief clarification: Csound has source code available, but does not meet anyone's definition of "open source" except perhaps a few corporations trying to abuse the term. You are not permitted to redistribute the source, nor to use it for anything except educational and research purposes. Not that this has stopped lots of cool things from happening with Csound, but it pays to understand licenses, sometimes. It's still a cool program, at least for users.
      • What didn't you understand in my original post? I said "open source". I didn't mention a license type.

        When you can readily download the source code of the software, then it's called open source.

        When you are limited in how you may use the software's source code, with restrictions (or not), then that is licensing.

        Just for you to know,

        PPA, the girl next door.
    • Csound is still pretty cool. However, it is rather primitive, as far as synthesis languages go. There are no for() loops, nor any other way of looping other than if...goto. This makes constructing complicated instruments a rather tedious cut 'n' paste effort.

      Under the hood, Csound has some inefficient unit generators. A lot of this has to do with the fact that many people learn how to program through working with Csound (I know that my first C coding went into Csound unit generators). Some of the unit generators do things that are really ugly - like having (value/anotherValue) for every sample, where you could have easily precomputed 1/anotherValue at initialization time. I have the feeling that this inner ugliness is why the various ports of Csound to real-time applications always seem to run slower than other software synths.

      Another problem with the Csound source code: Many of the people who have coded the more complicated Csound UGs seem to be allergic to writing comments. You have huge, arcane unit generators, with NO comments whatsoever.

      As far as the Csound book, I was disappointed in it. A lot of the chapters were written by members of the Csound list, who volunteered their time (I have some stuff on the CD ROM). This is great, but the book is often pitched as a good source of audio DSP techniques. Quite frankly, many of the algorithms in the book are sub par, and demonstrate a lack of comprehension of the literature from which they borrow. Nothing is wrong with publishing these algorithms - hey, if it makes sound, it works. My beef is with this book being used as a textbook, as I think it will result in a lot of bad algorithms being taught as state of the art. Many of the authors are not "top experts in the field" - they are enthusiastic newbies, who would be fired if they tried to put their inefficient, unstable algorithms into a commercial product.

      As far as open source programs, check out Miller Puckette's PD. This program was the basis for MAX/MSP, although MSP adds a number of useful extensions. PD is open source, and is designed by a true expert in computer music and real time DSP.
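
      (A tiny illustration of the per-sample inefficiency described above, as a toy gain "unit generator" sketched in plain Python rather than actual Csound source: dividing by a constant parameter on every sample versus computing the reciprocal once at init time and multiplying.)

      def gain_slow(samples, denom):
          # divides on every sample -- the pattern criticized above
          return [s / denom for s in samples]

      def gain_fast(samples, denom):
          # precompute the reciprocal once at "init time", then multiply per sample
          inv = 1.0 / denom
          return [s * inv for s in samples]

      block = [0.1, -0.3, 0.25, 0.9]
      # With a power-of-two denominator the two are bit-identical; in general they
      # differ by at most a rounding error, which is inaudible.
      assert gain_slow(block, 2.0) == gain_fast(block, 2.0)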

  • It's very interesting to see all the mildly different viewpoints on this.

    Yes, computers making music has been around for decades - listen to some of the music on Forbidden Planet. Yes, it's been commonplace on stage for quite some time. re: power - I see a lot of 'my dick is bigger than yours' posts on that: Reason's not a powerhouse, Reaktor is. Here's my $.02 - Reaktor's got nothing on Csound when it comes to synthesis power. However, I think Reason's a hell of a lot easier to use than both of them :p

    I'm currently debating with my roommate over which recording tool we use - he likes Nuendo, I still like Vegas. He argues that he can get so much more done more efficiently with Nuendo. I show him all the songs I've put together in Vegas and ask him to show me one full song he's done in the past six months ;)

    Electronic musicians 'performing' on laptops is just plain boring. When I went to see Autechre, I didn't expect much more, I just thought it was cool to see Autechre. They turned out all (most) of the lights, and were just a couple of guys with Laptops and Nord Modulars. A little more interesting, Telefon Tel Aviv had a pair of laptops, but also played along with electric bass and guitar, and at least twiddled knobs. Twine did the laptop thing, but had a fantastic video showing to go along with it, and I think that was the most interesting - It's less of a 'lets go see our favorite musician perform' than it is 'lets go see a light and sound show' - unfortunately most of the kids who make electronic music can't afford a good light show ;)

    Getting back to something useful - I think it would be really smart if a program like Reason would be included with computers the way that programs like RealPlayer and MusicMatch are included. I had access to music software in the form of ScreamTracker and ImpulseTracker when I was about 14, and slowly got into making music because of that. What if a kid had access to a program like Reason at age 5? We start them on violins young, why not start them doing full compositions early on ;) The fact that you can mimic a $100,000+ studio to great effect for the meager price of Reason is just fantastic. What if instead of trading MP3s, we were all trading the file formats that contained all the master tracks and sequence information? You like a song, but think it could use just a little more of something else? Open it up, and remix it! (Of course, this would probably just lead to even -more- crappy club-style remixes)

    No, computers won't ever replace traditional instruments. But computers are becoming a factory for new, inventive instruments - and not just bleepy-bloopy stuff - and bringing production, mixing, mastering all into that beige box I'm resting my feet on right now.
  • There is a flamewar out there between music hardware and software: which is best?
    Well, most professional musicians still use hardware (Akai samplers, synths etc.), since hardware is faster and the sound quality is much better; you can't beat a Virus/Triton/Nord Lead synth with just software, but the time will come.

    In professional studios there is a combination of both. G4 Macs are the most common computers in studios, running sequencers like Cubase/Logic/Pro Tools.
    • Virus = virtual analog using (I believe) generic Motorola DSPs.

      Nord Lead = virtual analog using (I believe) generic Motorola DSPs.

      Triton = proprietary, but still virtual analog (the MOSS card). Or, a simple 2-sample playback engine with some nice modulation options. You can build this in Reaktor in about an hour.

      So, what you're saying is that software running on Motorola DSPs is somehow better than software running on my AMD? Because that's all those hardware boxes are. If you were making your point using, for example, a Minimoog, a Prophet 5, and a Buchla, then you might be on to something.

      But software synths can indeed beat modern hardware synths because they're basically doing exactly the same thing. The difference is simply that the software models are a tenth the price and don't have ridiculous 4 meg memory caps.
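
      (To the point that a "hardware" virtual-analog box and a softsynth are both just code computing samples: here is about the simplest possible voice, a naive aliasing sawtooth through a one-pole lowpass, sketched in plain Python; nothing in it cares whether the chip underneath is a Motorola DSP or an AMD.)

      import math

      def saw_voice(freq_hz, cutoff_hz, seconds, sample_rate=44100):
          """Naive sawtooth through a one-pole lowpass; returns a list of samples."""
          a = math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)   # lowpass coefficient
          phase, y, out = 0.0, 0.0, []
          for _ in range(int(seconds * sample_rate)):
              saw = 2.0 * phase - 1.0          # ramp from -1 to +1
              y = (1.0 - a) * saw + a * y      # smooth it
              out.append(y)
              phase += freq_hz / sample_rate
              phase -= int(phase)              # wrap phase back into [0, 1)
          return out

      samples = saw_voice(110.0, 800.0, 0.5)   # half a second of filtered A2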

  • by jasno ( 124830 ) on Saturday March 16, 2002 @02:25PM (#3173725) Journal

    I'm surprised this one hasn't been mentioned yet (heck, it's worthy of a front page story)...

    But what about The Creativity Machine [newscientist.com]? From the article:

    The Creativity Machine's basic design can be used for myriad purposes, says Thaler. One weekend, for example, he showed the machine a smattering of popular songs - actually just short phrases of about 10 notes without any accompanying harmonies - then turned it loose to imagine some new ones. The filtering network selected 11 000 of the best themes and Thaler sent them to the US Library of Congress to be copyrighted. "That makes me technically the most prolific songwriter of all time," he boasts.
  • There's been a very significant underground electronic music movement gaining momentum extremely rapidly for the past six or seven years that has been dubbed intelligent dance music (IDM) by the media, and the music generated is almost entirely computer-based. Aphex Twin is the name from the genre that most people have heard, along with somewhat lesser-known but still easily found artists like Squarepusher, Autechre, Boards of Canada, µ-Ziq, etc. This is without any doubt one of the most innovative groups of artists around, and their influence has been noted by and heard in the music of N*Sync, Michael Jackson, and Radiohead, just to name a few.

    Computers are definitely the center of musical creation in this genre, to the point where one of the genre's biggest issues at present is the artists trying to figure out how to make their performances more interesting than just standing in front of a laptop moving the mouse around. In addition to the use of computers, the internet is also a major component of the artists' music-making and distribution processes. There have been numerous collaborations created by sending audio tracks back and forth via ICQ, each artist changing and adding to it and then sending it back. In addition, the labels' web sites and accompanying message boards are frequented very regularly by both fans and artists alike, and much of the genre's direction is discussed and even determined there. Also, because the genre is so small at present, record pressings rarely exceed a few thousand, so it is very often difficult to find out-of-print records, and file-sharing tools like AudioGalaxy and SoulSeek come to the rescue. Trading MP3s is far more acceptable to the artists in this genre than it is to those in more popular ones.

    For a taste of the genre, check out these record labels:

    Warp Records [warprecords.com]
    Planet Mu [planet-mu.com]
    tigerbeat6 [tigerbeat6.com]

    Check out the artists Aphex Twin, Squarepusher, µ-Ziq, Kid606, Autechre, Boards of Canada, Venetian Snares, Plaid, and Leafcutter John, just to name a few.

  • Well, although in my opinion open source music software is not as mature as its windoze counterparts, we still have some really good representation.
    • SpiralSynth [pawfal.org] is becoming an almost self-contained music production program. With basic sequencers, good synths, samplers, and effects, it is one of my favorite programs.
    • If you want to play DJ, go check out TerminatorX [terminatorx.cx] to fill all your scratching needs. They even hacked a turntable that works with the program!!
    • Finally, for some real-time guitar effects, check Stompboxes2 [mrbook.org], which is my own project. (BTW, I'm looking for developers.)

    The day that we have a fully functional program that is as good as Buzz or Orion, I'll be a happy man and I'll have to reboot my machine less often.
  • I agree, and this is why I think we can finally do without the RIAA. In fact -- I think we can do without copyright on music entirely; there are now plenty of people who can and do make music just for fun and distribute it at very low cost. In such a world, piracy is a feature, not a bug. While I'm at it, here's my site of freeware plugins that you can use in most of these digital music programs: http://www.smartelectronix.com/~destroyfx/ [smartelectronix.com]
  • I fear a lot of you might be missing the point of my post (yes I, not Taco, wrote it and said "DAT tape" and made everyone mad).

    I'm not saying any of this is new, and I'm certainly not saying Reason is the be-all/end-all of software sequencers and synths. All I'm saying is that mainstream instrument makers are starting to take notice. This is, believe it or not, a very big deal. The CNN article doesn't do it justice.

  • Use Buzz --> http://www.buzzmachines.com -- It's free, better, free, and community supported.
  • Might I suggest some interesting links on where things might be headed in the future:

    NIME [nime.org]

    CCRMA program [stanford.edu]

    Joe Paradiso @ MIT Media Lab [mit.edu] doing some interesting stuff

    enjoy

  • Soft-synths, improved digital recording software, better sequencers, and the proliferation of digital effects plugins will not turn Average Joe into a virtuoso performer. These things are tools. One still needs to have either innate talent or a basic schooling in musical theory and practice to take advantage of them.

    I have a small digital home studio, comprised of an Alesis keyboard, an Athlon-based audio workstation, and packages such as Reason, ReBirth, Cubase, Reaktor, B4, and a stack of others. None of these made me a better musician. They did, however, provide me with a banquet of options from which to pick and choose as my skills develop.

    The primary benefit of the digital and electronic home recording industry is this: people with talent who couldn't afford to produce professional-quality work can now do so. The hardware and software combinations that I've spent around $8,000 on rival the capabilities of a $150-an-hour studio of ten years ago. In addition, I have full control as musician and engineer.

    Another benefit of software-based synthesizers is the accessibility of their parameters. Though slightly less convenient than the analog beasts of years past, a soft-synth with individual on-screen controls is many times easier to deal with than a digital hardware synth sporting a 4-line LCD in which to do all your parameter editing.

    I use discrete hardware, still, for various purposes. However, digital sound generation, editing, post-production, and mixdown make my life much easier. That is its main appeal.
  • As a professional musician (dance music, specifically trance), let me share my experience. Software synths such as Reason or Acid have a lot of potential for the future, but right now they just don't cut it. Compare the sounds you get out of Reason to the sounds that come out of a piece of pro audio equipment, such as a Roland JP-8080, the Novation Nova, or even the two-decade-old Roland TB303. The sounds from Reason are much thinner and lacking in character. If you want thickly layered leads, sweeping pads, or strong, phat bass, you want hardware.

    Why is it that they've yet to duplicate that richness in software synths? I'm not sure - I guess they just haven't been doing it as long. I have no doubt that in a few years - maybe as few as five - software synths will be rapidly outpacing their hardware counterparts.

    But for the time being, if you want to create professional-quality audio, the kind that a top name DJ will spin into their set, forget about software. It's just not good enough yet.
    • "But for the time being, if you want to create professional-quality audio, the kind that a top name DJ will spin into their set, forget about software. It's just not good enough yet."

      Hmmm...ever heard of "BT"???

      BT's Site [btmusic.com]

      BT uses Reason, FruityLoops, and DSP software from Spectral Noise [spectralnoise.com] in his productions, as well as ProTools for mixing and hardware synths.

      Joe Satriani (Joe's site is Satriani.com [satriani.com]) used nothing but MIDI hardware and software to provide backing tracks for his "Engines of Creation" CD - totally amazing work, including "The Power Cosmic, Part II", "Borg Sex", and "Attack".

      I am, among other things, a professional musician/guitarist as well, and am working on a solo project with only the hardware and software sitting in my home office.

      So much for "It's just not good enough yet". If you think "digital" sounds thin, run your final output stage through a warm-sounding tube preamp (Rotel made a very sweet-sounding one back in the early 60's - if you can find one), and you'll recapture the supposed "warmth" that's missing from digital.

    • ...obviously something created by Reason's going to sound like crap if you're just running the output straight and uneffected. Thin, lacking character? That's what professional EQ and multi-band compression are for (there's a rough sketch of the multi-band idea at the end of this comment).

      I had a TR-606. Had a Future Retro 777. Got rid of 'em. Well, the 777 wasn't really mine, but I used it more than the fellow who owned it, and god it was fun. But your argument seems like it could be easily remedied if you knew a little more about engineering and production.
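      For what it's worth, here is a crude Python/numpy sketch (a toy under stated assumptions, not how any professional mastering tool actually works) of the multi-band compression idea: split the signal into bands, compress each band on its own, then sum the bands back together.

      # Hypothetical multi-band compression sketch: one crossover, two bands,
      # a crude static compressor per band. Thresholds/ratios are made-up values.
      import numpy as np

      SR = 44100

      def one_pole_lowpass(x, cutoff_hz):
          """Simple one-pole low-pass; the high band is the residual x - low."""
          alpha = 1.0 - np.exp(-2 * np.pi * cutoff_hz / SR)
          y = np.zeros_like(x)
          acc = 0.0
          for i in range(len(x)):
              acc += alpha * (x[i] - acc)
              y[i] = acc
          return y

      def compress(band, threshold=0.3, ratio=4.0):
          """Crude static compressor: reduce gain on samples above the threshold."""
          out = band.copy()
          over = np.abs(out) > threshold
          out[over] = np.sign(out[over]) * (threshold + (np.abs(out[over]) - threshold) / ratio)
          return out

      def multiband_compress(x, crossover_hz=300.0):
          low = one_pole_lowpass(x, crossover_hz)
          high = x - low  # complementary high band
          return compress(low, threshold=0.3) + compress(high, threshold=0.2)

      # Example: a bass-heavy test signal gets its low end tamed relative to the highs.
      t = np.arange(SR) / SR
      signal = 0.8 * np.sin(2 * np.pi * 60 * t) + 0.2 * np.sin(2 * np.pi * 2000 * t)
      processed = multiband_compress(signal)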
  • I haven't seen anyone mention Supercollider yet. From what I have seen, Supercollider [audiosynth.com] is the most powerful computer music language out there, barring C, C++, and assembly. Supercollider is a real-time, object-oriented computer music system, with a syntax similar to Smalltalk. It is text-based, which is really nice when constructing complicated sounds (unlike Csound, it has the type of control flow you would expect from a modern programming language). The sounds I have heard from Supercollider are beautiful; after working in Csound, it is amazing to hear such complex sounds being generated in real time, using a small fraction of the CPU. The current unit generator list is VERY powerful. It is easy to implement subtractive synthesis, modulation synthesis, granular techniques, frequency domain synthesis - pretty much everything except some physical models, which require finer control over the output vector size than is currently available in Supercollider.

    Right now, Supercollider is Macintosh-only. However, the author of Supercollider is working on an OSX version, which he feels could be the basis of a Linux port.
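    For readers who have never seen this kind of thing spelled out, here is a hedged Python/numpy sketch of one of the techniques mentioned above - two-operator FM, i.e. "modulation synthesis". This is not Supercollider code; the point is only that a language like Supercollider lets you express roughly the same idea as a one- or two-line combination of unit generators and hear it immediately, in real time.

    # Hypothetical two-operator FM sketch: a modulator oscillator drives the
    # phase of a carrier oscillator. All parameter values here are made up.
    import numpy as np

    SR = 44100  # sample rate in Hz

    def fm_tone(carrier_hz=220.0, mod_hz=110.0, mod_index=3.0, seconds=1.0):
        """out = sin(2*pi*fc*t + I*sin(2*pi*fm*t))"""
        t = np.arange(int(seconds * SR)) / SR
        modulator = np.sin(2 * np.pi * mod_hz * t)
        return np.sin(2 * np.pi * carrier_hz * t + mod_index * modulator)

    tone = fm_tone()  # a metallic-sounding 220 Hz FM tone, one second long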

  • This title was great. You could write music for the nice EMU sound chip in the IIGS. I used it when I was in grade school to have a LOT of fun with music. One had to be able to read music to use it, though.
  • This article is really just a light summary of something that's been going on gradually for about 20 years. The writing was on the wall when Tangerine Dream and Klaus Schulze appeared with the Crumar GDS and Fairlight CMI in 1980: this is when it became clear that synthesis was really just a software issue, just waiting for cheap (commodity) hardware.

    These days it's a religious issue. My personal religion is that the hardware units are always going to be ahead. (I don't mean keyboards or pure MIDI modules specifically; I'm also counting computer-hosted hardware products like the Creamware Pulsar, the Korg OasysPCI, and souped-up breakout boxes like the Kyma Capybara and the Nord MicroModular.) Sure, latency is getting pretty damn tight on the native software, but if you're sharing a processor with Windows or MacOS it's hard to make it predictable. And the plug-in world is plagued with compatibility and reliability problems. The jury is out over sound quality - again, I hold that hardware units sound better because of the dedicated hardware (my OasysPCI, with its five dedicated DSP's and filter/modelling algorithms to match, sounds fabulous, to a degree which native software is not going to match just yet).

    Having said all that, I'm a firm believer in computers as audio/MIDI processing tools. I've been using Cycling '74's Max for ten years, and am now doing projects with MSP, the digital audio toolkit portion, most recently a high-profile commission for Ballett Frankfurt, so this stuff can be used in professional contexts. (Nano-plug and disclosure: I reviewed Max/MSP for RECORDING magazine last October, so I had to look at these issues quite closely.)

    There are laptop-only performers around, some of whom even write good music, but there's one other area where hardware will win out: laptops have dreadful ADC/DAC hardware so we'll always have external converter boxes.

    Epilog: in the Mac world, none of this stuff works under OS X. OS X has a nice audio/MIDI framework but nobody's using it yet (except perhaps EMagic) so we Mac users are sitting in a MacOS 9.2.2 limbo right now.

  • "there are still limitations to the power and ability of software synthesizers". Wow, really??

    Nowadays we mostly listen to music coming from speakers, whether from CDs or the radio at home or even from live bands. I am sorry, but that is POOR. Absolutely poor in comparison with live music coming directly from the instruments, even if we use a high-end hi-fi.

    Now, synthesizers are often a good solution if we are producing music to be played through speakers. They produce nice sounds that "blend" well and create a good overall result. Talented musicians and composers can even make good themes using ONLY synths.

    But if you think that synths can even come close to live musicians, go to a small club and listen to some live music: hear that jazz trumpet carrying straight from the brass to your ears, listen to that cymbal on one side and the snare drum right next to it. No speaker membrane will vibrate like those two. I'm not trying to convince anyone. I have played electronic drums myself, getting good "recording" quality. But don't bring the electronics to a jazz cafe, because everything will sound POOR, even the state-of-the-art Roland Virtual Drums. And I also play trumpet, and that HAS to be played for real, even in recordings...

    So yes, there are STILL some limitations.
    • BTW, somebody should invent a virtual Latin jazz band: say about 7 "robots" that play traditional drums, conga, trumpet, trombone, sax, acoustic bass, and piano. All connected to a sequencer and all well distributed around the concert room of your house. Now that's electronic music.

