
Codec2 — an Open Source, Low-Bandwidth Voice Codec

Bruce Perens writes "Codec2 is an Open Source digital voice codec for low-bandwidth applications, in its first Alpha release. Currently it can encode 3.75 seconds of clear speech in 1050 bytes, and there are opportunities to code in additional compression that will further reduce its bandwidth. The main developer is David Rowe, who also worked on Speex. Originally designed for Amateur Radio, both via sound-card software modems on HF radio and as an alternative to the proprietary voice codec presently used in D-STAR, the codec is probably also useful for telephony at a fraction of current bandwidths. The algorithm is based on papers from the 1980s, and is intended to be unencumbered by valid unexpired patent claims. The license is LGPL2. The project is seeking developers for testing in applications, algorithmic improvement, conversion to fixed-point, and coding to be more suitable for embedded systems."
This discussion has been archived. No new comments can be posted.

  • by Bruce Perens ( 3872 ) <bruce@perens.com> on Tuesday September 21, 2010 @01:14AM (#33645996) Homepage Journal
    I'll be presenting on Codec2 at the ARRL/TAPR Digital Communications Conference this weekend in Vancouver, Washington, near Portland. I'll try to get the video online.
    • Re: (Score:2, Funny)

      by Anonymous Coward

      But will you be presenting IN Codec2?
      That would be very impressive.

    • Bruce, have you guys done any testing of performance in the presence of background noise? I know that in the PMR area, there are a lot of firemen who are very unhappy with what happens to AMBE when there is background noise (e.g. saws, Personal Alert Safety System alarms, fire) getting into the mike - while AMBE does OK at encoding just speech, throw the noise of a saw in the background and all you get is garbage.

      While the initial application of CODEC2 is hams in their shacks with their noise-canceling mikes, It Woul

      • by jmv ( 93421 )

        I don't know how codec2 actually does, but noise is a fundamental problem for all low-bitrate codecs. One thing that can sometimes help is applying some (conservative) noise reduction on the input to reduce the effect of noise on the codec.
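
        A minimal sketch of that idea, assuming nothing about codec2 itself: a crude energy gate in C that attenuates blocks near an estimated noise floor before they reach the speech encoder. Real noise suppressors work in the frequency domain; this only shows where such a conservative pre-processing stage would sit in the chain, and every name and threshold here is made up for illustration.

        /* Hypothetical pre-processing sketch: attenuate near-silent blocks
         * before handing them to a speech encoder. Not real noise reduction,
         * just the shape of the idea. */
        #include <stdio.h>
        #include <math.h>

        #define BLOCK 160                         /* 20 ms at 8 kHz */

        static float block_rms(const float *x)
        {
            float e = 0.0f;
            for (int i = 0; i < BLOCK; i++)
                e += x[i] * x[i];
            return sqrtf(e / BLOCK);
        }

        /* Attenuate (don't zero) blocks near the noise floor: "conservative". */
        static void noise_gate(float *x, float noise_floor)
        {
            if (block_rms(x) < 2.0f * noise_floor)
                for (int i = 0; i < BLOCK; i++)
                    x[i] *= 0.3f;                 /* roughly -10 dB */
        }

        int main(void)
        {
            float quiet[BLOCK], speech[BLOCK];
            for (int i = 0; i < BLOCK; i++) {
                quiet[i]  = 0.01f * ((i % 7) - 3);                 /* low-level noise */
                speech[i] = 0.5f * sinf(2.0f * 3.14159f * i / 40); /* 200 Hz-ish tone */
            }
            float noise_floor = block_rms(quiet);

            noise_gate(quiet, noise_floor);   /* gated: well below threshold */
            noise_gate(speech, noise_floor);  /* untouched: far above it     */
            printf("quiet block rms after gate : %.4f\n", block_rms(quiet));
            printf("speech block rms after gate: %.4f\n", block_rms(speech));
            /* The gated blocks would then be fed to the speech encoder. */
            return 0;
        }
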

  • Original Rationale (Score:5, Informative)

    by Bruce Perens ( 3872 ) <bruce@perens.com> on Tuesday September 21, 2010 @01:19AM (#33646022) Homepage Journal
    The original rationale for Codec2 is at Codec2.org [codec2.org]. I've been promoting this issue for about four years, as I was bothered by the proprietary nature of the AMBE codec in D-STAR. But I didn't have the math, etc., to do the work myself. It was really fortunate that David became motivated to do the work without charge. He has a Ph.D. in voice coding. By the way, look over his web site rowetel.com for the other work he's done: two really nice Open Hardware projects - a PBX and a mesh telephony device, an Open Source echo canceler for digital telephony, used in Asterisk and elsewhere, and his own electric car conversion. He'd be my nomination for the MacArthur grant.
    • by Yaur ( 1069446 ) on Tuesday September 21, 2010 @02:04AM (#33646224)
      In a nutshell it looks like the rationale for not just using Speex is:
      • better resilience to bit errors
      • better performance at ultra low bitrates
      • by Bananatree3 ( 872975 ) on Tuesday September 21, 2010 @02:10AM (#33646240)
        That is basically it. Speex is built (as I understand it) for lossless transmission methods with little/no error correction needed. Radio, by its very nature, is a very lossy medium, so something with better error tolerance is needed. Hence, Codec2 provides a nice route.
        • Re: (Score:3, Informative)

          by adolf ( 21054 )

          (Stating the obvious for those with sufficiently low UIDs and/or those who remember VAXen, or similar, or at least those with a proper beard...)

          That is basically it. Speex is built (as I understand it) for lossless transmission methods with little/no error correction needed. Radio, by its very nature, is a very lossy medium, so something with better error tolerance is needed. Hence, Codec2 provides a nice route.


          • by Yaur ( 1069446 ) on Tuesday September 21, 2010 @03:50AM (#33646610)
            With UDP the typical loss scenario is dropped packets but with radio single bit errors are more likely. This difference means that FEC strategies for one scenario are not directly applicable to the other.

            For UDP, in-packet FEC data is useless: your error correction scheme needs to be prepared to deal with losing a whole packet's worth of data to be useful. For voice this is going to introduce too much latency, so instead a typical codec might just try to interpolate the lost data. With radio, on the other hand, there is value to in-packet error correction bits within the stream, and in the event of an error you are going to have more data with which to guess what the audio should be like, especially if you know which bits are errored (or possibly errored).
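
            As a concrete illustration of "in-packet error correction bits within the stream" (a generic sketch, not anything codec2 or D-STAR actually specifies): a Hamming(7,4) code spends 3 parity bits per 4 data bits and can repair any single flipped bit in a 7-bit codeword - exactly the damage a radio channel produces, and exactly the kind of help that is useless when UDP drops the whole packet.

            /* Sketch: Hamming(7,4) encode/decode with single-bit error correction. */
            #include <stdio.h>
            #include <stdint.h>

            /* Encode 4 data bits into a 7-bit codeword with 3 parity bits. */
            static uint8_t hamming74_encode(uint8_t nibble)
            {
                uint8_t d0 = nibble & 1, d1 = (nibble >> 1) & 1;
                uint8_t d2 = (nibble >> 2) & 1, d3 = (nibble >> 3) & 1;
                uint8_t p1 = d0 ^ d1 ^ d3;      /* covers positions 1,3,5,7 */
                uint8_t p2 = d0 ^ d2 ^ d3;      /* covers positions 2,3,6,7 */
                uint8_t p4 = d1 ^ d2 ^ d3;      /* covers positions 4,5,6,7 */
                /* codeword bit order, LSB first: p1 p2 d0 p4 d1 d2 d3 */
                return (uint8_t)(p1 | (p2 << 1) | (d0 << 2) | (p4 << 3) |
                                 (d1 << 4) | (d2 << 5) | (d3 << 6));
            }

            /* Decode, correcting any single flipped bit; returns the 4 data bits. */
            static uint8_t hamming74_decode(uint8_t cw)
            {
                uint8_t b[8];
                for (int i = 1; i <= 7; i++)
                    b[i] = (cw >> (i - 1)) & 1;
                int syndrome = (b[1] ^ b[3] ^ b[5] ^ b[7])
                             + 2 * (b[2] ^ b[3] ^ b[6] ^ b[7])
                             + 4 * (b[4] ^ b[5] ^ b[6] ^ b[7]);
                if (syndrome)                   /* syndrome = position of the bad bit */
                    b[syndrome] ^= 1;
                return (uint8_t)(b[3] | (b[5] << 1) | (b[6] << 2) | (b[7] << 3));
            }

            int main(void)
            {
                for (int n = 0; n < 16; n++) {
                    uint8_t cw = hamming74_encode((uint8_t)n);
                    for (int bit = 0; bit < 7; bit++)        /* flip each bit once */
                        if (hamming74_decode((uint8_t)(cw ^ (1u << bit))) != n)
                            printf("FAIL: nibble %d, bit %d\n", n, bit);
                }
                printf("all single-bit errors corrected\n");
                return 0;
            }
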
            • by tepples ( 727027 )

              With radio, on the other hand, there is value to in-packet error correction bits within the stream, and in the event of an error you are going to have more data with which to guess what the audio should be like, especially if you know which bits are errored (or possibly errored).

              But wouldn't the underlying link just automatically FEC the packets at a lower layer, even if only to get the packet drop rate down?

            • Re: (Score:3, Informative)

              This works on 51-bit frames.

        • by jmv ( 93421 ) on Tuesday September 21, 2010 @05:41AM (#33647094) Homepage

          The fundamental difference is not so much the lossless vs lossy transmission, but the actual bit-rate. I designed Speex with a "sweet spot" around 16 kb/s, whereas David designed codec2 for a sweet spot around 2.4 kb/s. Speex does have a 2.4 kb/s mode, but the quality isn't even close to what David was able to achieve with codec2.

      • by wowbagger ( 69688 ) on Tuesday September 21, 2010 @06:31AM (#33647476) Homepage Journal

        If you've ever heard AMBE in the presence of bit errors, it doesn't do so well either. It isn't the vocoder's job to deal with bit errors; it is the protocol's job. Over half the bits in an APCO-25 voice frame are forward error correction for the voice payload: Golay encoding, Reed-Solomon, bit-order scrambling (interleaving), you name it.

        The codec is the wrong place to put resistance to bit errors.

        Now, making the codec use fewer bits, so the protocol layer has more bits for FEC, makes sense.
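
        To make the interleaving part of that concrete (a toy sketch with made-up dimensions, not APCO-25's actual interleaver): bits are written into a small matrix row by row and transmitted column by column, so a burst of consecutive channel errors comes out spread across many codewords, each of which is then left with few enough bit errors for its FEC to repair.

        /* Sketch: an 8x9 block interleaver spreading a burst error across rows. */
        #include <stdio.h>

        #define ROWS 8
        #define COLS 9            /* one row = one (imaginary) FEC codeword */

        static void interleave(const unsigned char *in, unsigned char *out)
        {
            for (int r = 0; r < ROWS; r++)
                for (int c = 0; c < COLS; c++)
                    out[c * ROWS + r] = in[r * COLS + c];
        }

        static void deinterleave(const unsigned char *in, unsigned char *out)
        {
            for (int r = 0; r < ROWS; r++)
                for (int c = 0; c < COLS; c++)
                    out[r * COLS + c] = in[c * ROWS + r];
        }

        int main(void)
        {
            unsigned char bits[ROWS * COLS], tx[ROWS * COLS], rx[ROWS * COLS];
            for (int i = 0; i < ROWS * COLS; i++)
                bits[i] = (unsigned char)(i & 1);       /* dummy payload bits */

            interleave(bits, tx);
            for (int i = 20; i < 26; i++)               /* 6-bit burst on the channel */
                tx[i] ^= 1;
            deinterleave(tx, rx);

            /* The 6-bit burst lands in 6 different rows: at most one error
             * per codeword, which a code like Golay or Hamming can then fix. */
            for (int r = 0; r < ROWS; r++) {
                int errs = 0;
                for (int c = 0; c < COLS; c++)
                    errs += (rx[r * COLS + c] != bits[r * COLS + c]);
                printf("codeword %d: %d bit error(s)\n", r, errs);
            }
            return 0;
        }
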

    • Re: (Score:3, Interesting)

      by slimjim8094 ( 941042 )

      Looks really cool. I haven't messed around with D-STAR since I don't like the idea of being tied into a specific system (seems to contravene the point of amateur radio). I'll definitely be keeping an eye on this to see where it heads.

      I had a really awesome idea just now for transmitting this at 1200 bps using AFSK Bell 202 (like APRS) and hacking up live voice using entirely existing equipment (TNCs, etc). But the given example of 1050 bytes/3.75s works out by my math to 2240 bps. I guess you could run it ove

    • Re: (Score:3, Interesting)

      by the way ( 22503 )

      By the way, look over his web site rowetel.com for the other work he's done: two really nice Open Hardware projects - a PBX and a mesh telephony device, an Open Source echo canceler for digital telephony, used in Asterisk and elsewhere, and his own electric car conversion.

      I've got one of his little ip01 telephony boxes, and it is quite fantastic - a tiny, cheap, fanless, (embedded) Linux computer with plenty of memory and CPU grunt, and of course telephony hardware on board. It also has a package manager, with quite a few pieces of software available, and regular firmware updates. It's much more powerful than the various Linux-based consumer routers that are available - it's a great option if you're looking for a small Linux server to run Asterisk, a little web site, DNS s

  • Err Speex (Score:2, Informative)

    by Knee Socks ( 1600375 )
    Speex: Speex is based on CELP and is designed to compress voice at bitrates ranging from 2 to 44 kbps. Some of Speex's features include: Narrowband (8 kHz), wideband (16 kHz), and ultra-wideband (32 kHz) compression in the same bitstream
    • by Bruce Perens ( 3872 ) <bruce@perens.com> on Tuesday September 21, 2010 @01:43AM (#33646134) Homepage Journal
      Jean-Marc Valin is on the project mailing list and David is another Speex developer and the person Jean-Marc recommended to me. We are trying for an improvement over Speex at low rates.
    • Re:Err Speex (Score:4, Informative)

      by Gordonjcp ( 186804 ) on Tuesday September 21, 2010 @01:49AM (#33646162) Homepage

      Speex isn't great in this application, because at low bitrates there is a significant delay through the codec and the output stream requires far too much bandwidth to be useful. Consider that digital speech systems like Mototrbo, TETRA, P25 and Iridium typically have less than 6 kbps throughput once you've taken FEC into account.

    • by jmv ( 93421 )

      Sure, Speex does 2 kbps, but if you compare that to codec2, there's a hell of a difference. The 2 kbps Speex mode is something I put together quickly -- mainly to encode comfort noise at low rate. On the other hand, David put a lot of effort into codec2 and it actually sounds decent for voice at that rate (IMO better than Speex sounds at 4 kb/s).

  • what about LATENCY? (Score:4, Interesting)

    by Kristopeit,MichaelDa ( 1905518 ) on Tuesday September 21, 2010 @01:25AM (#33646064)
    Why is seemingly the most important aspect of communication technology so often overlooked?

    I assume it's acceptable... but it angers me that someone thought it was relevant to give the exact number of bytes for a seemingly arbitrary 3.75 seconds of audio, but failed to say how long it takes to encode that 3.75 seconds of audio, or what average latency can be expected after buffer conditions are met.

    • by Bruce Perens ( 3872 ) <bruce@perens.com> on Tuesday September 21, 2010 @01:32AM (#33646090) Homepage Journal
      Right. Sorry. Real time on the x86 workstation I'm using. Not converted to fixed-point for weaker CPUs yet. Not tested on ARM, Blackfin, AVR, etc. Waiting for you to do that :-) Downloadable code. Reasonably portable. Type make and let fly.
      • Re: (Score:3, Informative)

        by KliX ( 164895 )

        I think he probably means it in a 'how many samples does the codec need before it can send a packet' type of latency.

        • by Bruce Perens ( 3872 ) <bruce@perens.com> on Tuesday September 21, 2010 @02:47AM (#33646420) Homepage Journal
          There are currently 51 bits in a frame. That is the minimum that you can send, and you'd send 40 of those per second as the codec is presently implemented. A real data radio would add bandwidth for its data encapsulation, but would have to meet the time and bandwidth requirements of the codec payload.
          • by sahonen ( 680948 )
            What we're trying to ask is: if you pipe a real-time stream of samples from a microphone into one end, encapsulate the data in UDP packets, bounce the stream off 127.0.0.1, unencapsulate them, pipe them into a decoder and from there into a sound card and speaker... how much time is there between me saying "hi" into the mic and hearing "hi" out of the speaker? This is by far the most important consideration for modern voice protocols. Low bandwidth is nice. Low CPU is nice. Error tolerance is nice. Latency is c
            • There is no reason that you can't send a packet for each frame. There isn't any important state, so far, that persists between frames. That's 7 bytes (really 51 bits) 40 times per second. CPU speed doesn't seem to be a problem for latency from what we have seen so far.
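
              A minimal sketch of what "7 bytes (really 51 bits) 40 times per second" looks like when packed up (hypothetical framing for illustration; this is not the project's actual over-the-wire format):

              /* Sketch: pack one 51-bit frame into 7 bytes; payload rate math. */
              #include <stdio.h>
              #include <stdint.h>
              #include <string.h>

              #define FRAME_BITS   51
              #define FRAME_BYTES  ((FRAME_BITS + 7) / 8)   /* 7 bytes, 5 bits spare */
              #define FRAMES_PER_S 40

              /* Pack the low FRAME_BITS bits of 'frame', MSB first, into out[]. */
              static void pack_frame(uint64_t frame, uint8_t out[FRAME_BYTES])
              {
                  memset(out, 0, FRAME_BYTES);
                  for (int i = 0; i < FRAME_BITS; i++) {
                      int bit = (int)((frame >> (FRAME_BITS - 1 - i)) & 1);
                      out[i / 8] |= (uint8_t)(bit << (7 - (i % 8)));
                  }
              }

              int main(void)
              {
                  uint8_t payload[FRAME_BYTES];
                  uint64_t dummy = 0x123456789ABCDULL & ((1ULL << FRAME_BITS) - 1);

                  pack_frame(dummy, payload);
                  printf("packed frame:");
                  for (int i = 0; i < FRAME_BYTES; i++)
                      printf(" %02x", payload[i]);
                  printf("\n");

                  /* 51 bits x 40 frames/s = 2040 bit/s of raw codec payload;
                   * packet headers and any FEC are the transport's overhead. */
                  printf("payload rate: %d bit/s\n", FRAME_BITS * FRAMES_PER_S);
                  printf("one frame every %.1f ms\n", 1000.0 / FRAMES_PER_S);
                  return 0;
              }
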
              • by sahonen ( 680948 )
                So basically, 25 ms of encoding latency, plus the latency of your audio hardware input and output buffers, plus network/medium propagation (5-10 ms for satellites?), plus any network jitter buffering. That's pretty good. CELT claims 3-9 ms but I'd like to hear a comparison of audio quality at 24 kbps, especially considering the differences between their designs.
    • Re: (Score:2, Insightful)

      by Garridan ( 597129 )

      Well, the source is right there on the webpage. Why don't you download & compile it, and see for yourself? It's an alpha release so I'll guess that it's slower than it could be.

      • Re: (Score:2, Interesting)

        It could take 16 MB/s and still function in real time over the internet for me... my problem isn't that the latency wasn't shown, it's that the bitrate WAS shown BUT the latency wasn't shown.

        Also, considering the advantages of using lower-bitrate voice codecs, the ability to implement the encoder and decoder algorithms directly in very low transistor count custom hardware would appeal to the same crowd... so not just latency in terms of x86 instructions per second, but the ability to implement those instru

        • The final destination for Codec2 *isn't* X86 processors, but DSP chips. If, for some reason latency is an issue when it's first shoehorned into a DSP chip, Codec2 will be refined until it works well on a DSP chip, in real real time.
          • Re: (Score:2, Interesting)

            Yes, of course... but "refining" a codec for hardware implementation works in the exact opposite direction for the quality of the signal.

            Why not refine the DSP chip architecture until it works well with the original codec? I know masks are expensive... but why not do it all the way?

          • Re: (Score:3, Informative)

            by vlm ( 69642 )

            If, for some reason latency is an issue when it's first shoehorned into a DSP chip, Codec2 will be refined until it works well on a DSP chip, in real real time.

            I think you are not using the definition of latency that most in the field would use.

            Latency is how long it takes to process the data. It's a computer science type of thing. If you understand Knuth and his tape drive sorting examples, this is pretty obvious...

            For example, here's a nice, simple, hopelessly useless codec that has almost exactly 100 ms of latency:

            1) Get yerself a buffer that holds 1000 samples.
            2) Run an A/D converter at 10 ksamples/sec until the buffer is full.
            3) Run "gzip" on the 1000 sample bu

        • by Bruce Perens ( 3872 ) <bruce@perens.com> on Tuesday September 21, 2010 @02:33AM (#33646352) Homepage Journal

          It encoded those 3.75 seconds in 0.06 seconds and decoded in 0.04 seconds on my AMD Phenom 9750 2.4 GHz, one core only, compiled with GCC and the -O3 switch. That's all of the overhead of the program starting and exiting, too. It's using floating point, not fixed point.

          This, it seems, bodes well for low latency of the final implementation on a DSP chip.

          • Yup, Core 2 Duo P8700 @ 2.53 GHz, compiled with -O3:

            time ./c2enc ../raw/hts1a.raw hts1a_c2.bit

            real 0m0.062s
            user 0m0.060s
            sys 0m0.000s

            time ./c2dec hts1a_c2.bit hts1a_c2.raw

            real 0m0.048s
            user 0m0.044s
            sys 0m0.004s

            Thanks for promoting this, it's a fascinating project.

          • Bruce, you've replied to this question several times, but you are not understanding the question. Almost every encoder buffers some data then compresses it. Generally, the larger the buffer, the better the compression, but the greater the delay between starting to put audio into the encoder and starting to get audio out. The same thing happens at the decoder end. The question is how much (in terms of milliseconds of audio) does the encoder need to buffer before it starts compressing and how much does the

            • by Goaway ( 82658 )

              I don't know why you people keep badgering Bruce about this, when I could figure out the answers to all that within minutes of looking at the linked site. How about going and reading for yourself?

              • by vlm ( 69642 )

                I don't know why you people keep badgering Bruce about this, when I could figure out the answers to all that within minutes of looking at the linked site. How about going and reading for yourself?

                Because almost all codecs have a certain inherent fixed latency. And it's by far the most important figure of merit in the real world. And no one wants to discuss it, therefore it must be horrifically bad.

                The number one priority for a codec designer is always: will it fit in the available B/W goal? This is a simple T/F, Y/N, 1/0: either it fits or it doesn't.

                The number two priority is minimum inherent codec latency. Humans don't talk so well above 100 ms or so (debatable). That doesn't mean you get 100 ms to blow in

            • by fbjon ( 692006 )
              51-bit frames. Now visit the site; spoonfeeding demands are frowned upon.
        • by sjames ( 1099 )

          Keep in mind, this is alpha code that hasn't yet been converted to fixed point. The final performance is just a guess at this point. The intrinsic latency will be 25 milliseconds due to the frame size.

          To put that 25 milliseconds into perspective, I've found that most people won't even perceive it if I drop 25 milliseconds out of an audio stream.

          People who do have a use for it would probably be much better at judging what level of performance is acceptable. People with no use for it have no feel for the trad

    • by jmv ( 93421 )

      Don't worry. The frame size is 20 ms and there's probably (haven't looked at that detail) around 10 ms of look-ahead, so latency shouldn't be an issue. I'd actually argue that it could be increased *if* there's a way to reduce the bit-rate by doing that.

  • Serendipity. (Score:3, Interesting)

    by firstnevyn ( 97192 ) on Tuesday September 21, 2010 @01:41AM (#33646122)
    As a newly licenced ham in an area where D-STAR repeaters are everywhere (VK) and a free software advocate, I have recently become aware of the issues with D-STAR and have been reading about this work, so it's quite surreal to have it pop up on /. in the week where I get my licence. I haven't had a chance to read the D-STAR specifications, but am wondering if the voice codec is flagged in the D-STAR digital stream, and if it would be possible to create translating repeaters, i.e. dual-output repeaters with differently coded data streams. It'd take more spectrum but would also allow for a migration path (at least for repeater users?)
    • Re:Serendipity. (Score:5, Informative)

      by Bruce Perens ( 3872 ) <bruce@perens.com> on Tuesday September 21, 2010 @01:52AM (#33646174) Homepage Journal

      Congratulations on the license, OM. We haven't yet explored how to wedge this into D-STAR, but sending it as data rather than voice would be one way. All of the D-STAR radios except the latest one, the IC-92AD, use a plug-in daughter board to hold the AMBE chip, and it might be that somebody could make a dual-chip version of this board sometime. Since AMBE is proprietary we are stuck using their chip if we want to be compatible, unless the repeater does the conversion for us using a DV-Dongle. They sell TI DSP chips with their program burned in, and don't give out the algorithm.

      It may be that on D-STAR the AMBE chip also does the modulation for a data transmission, just doesn't run the codec. But the modulation is known and there is a sound-card software implementation of D-STAR that interoperates with it. I don't have any D-STAR equipment to test. The folks on dstar_development@yahoogroups.com know a lot more about D-STAR.

      73
      K6BP

      • Why does a repeater need to understand the encoding? Can't it just rebroadcast the data, or even the analogue signal?

        • Re:Serendipity. (Score:5, Informative)

          by Bruce Perens ( 3872 ) <bruce@perens.com> on Tuesday September 21, 2010 @02:21AM (#33646294) Homepage Journal

          The repeater can rebroadcast the data, but that data would be AMBE encoded, and AMBE is both trade-secret in its implementation and patented in some of its algorithms. There may be an AMBE chip in the repeater, I've not played with one. The usual way one converts to and from AMBE on a PC is with a device called the DV-Dongle, which contains the AMBE chip. This costs lots of money and is not nearly so powerful as the CPU of the computer it's plugged into, which is one reason to be fed up with proprietary codecs.

          So, if you had some newer, Codec2-based radios, and some older D-STAR radios, linking repeaters might be a good way to get them to talk to each other.

          This is hand-waving about a lot of issues; for example, we've not designed the next generation of data radio to put Codec2 into. One might guess that such a thing could use IPv6, better modulation than just FM, FEC, etc.

    • Re:Serendipity. (Score:5, Insightful)

      by __aajfby9338 ( 725054 ) on Tuesday September 21, 2010 @02:18AM (#33646280)

      Congratulations on your new license!

      The proprietary AMBE codec bothers me, too. I think that a closed, license-encumbered, proprietary codec is entirely inappropriate for ham radio use.

  • Great news (Score:2, Informative)

    by Anonymous Coward

    >3.75 seconds of clear speech in 1050 bytes

    That's 2240 bps, about 2.24 kbps, quite impressive. Maybe one day they can beat MELP (which goes as low as 600 bps) and remain open.

    Excellent work.

    • Re:Great news (Score:5, Interesting)

      by Bruce Perens ( 3872 ) <bruce@perens.com> on Tuesday September 21, 2010 @02:10AM (#33646246) Homepage Journal
      I think you could cut the sample rate in half and get acceptable performance, but I've not tried. Currently I think it's 25 microsecond frames, and each frame has one set of LSPs and two sets of voicing information so it's interpolated into 12.5 microsecond frames. Those lower bandwidth codecs do 50 microsecond frames. Go forth and hack upon it if you'd like to see. Also, there are some optimizations that are obvious to David and Jean-Marc (and which I barely understand) that haven't been added yet. One is that the LSPs are monotonic and nothing has been done to remove that redundancy. Delta coding or vector quantization might be ways to do that. I understand delta coding but would not be the one to do VQ. Another is that there is a lot of correlation of the LSPs between adjacent frames, so you don't necessarily have to send the entire LSP set every frame. And there is probably lots of other opportunity for compression that I have no concept of.
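
      A minimal sketch of the delta-coding idea (illustrative only, with made-up LSP values and quantizer step; this is not codec2 code): because the LSPs within a frame are strictly increasing, sending the first value plus the differences takes fewer bits than sending every LSP as an absolute frequency.

      /* Sketch: bits needed for absolute vs delta-coded monotonic LSPs. */
      #include <stdio.h>
      #include <math.h>

      #define NUM_LSP 10

      /* Rough bits needed for a uniform quantizer over 'range' Hz at 'step' Hz. */
      static int bits_for_range(float range, float step)
      {
          return (int)ceilf(log2f(range / step));
      }

      int main(void)
      {
          /* One frame of made-up, monotonic LSPs (Hz) for a 4 kHz bandwidth. */
          float lsp[NUM_LSP] = { 250, 480, 820, 1150, 1600,
                                 2000, 2450, 2900, 3300, 3700 };
          float max_delta = 0.0f;
          for (int i = 1; i < NUM_LSP; i++) {
              float d = lsp[i] - lsp[i - 1];     /* always positive: monotonic */
              if (d > max_delta)
                  max_delta = d;
          }

          /* Absolute coding: each LSP can lie anywhere in 0..4000 Hz.
           * Delta coding: each difference only has to cover 0..max_delta Hz. */
          int abs_bits   = NUM_LSP * bits_for_range(4000.0f, 25.0f);
          int delta_bits = bits_for_range(4000.0f, 25.0f)
                         + (NUM_LSP - 1) * bits_for_range(max_delta, 25.0f);

          printf("absolute coding: %d bits/frame\n", abs_bits);
          printf("delta coding   : %d bits/frame (max delta %.0f Hz)\n",
                 delta_bits, max_delta);
          return 0;
      }

      Vector quantization goes further still, coding the whole LSP vector (or the deltas) as one index into a trained codebook.
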
      • by Yaur ( 1069446 )
        You mean milliseconds... 25 microseconds is less than one sample at 44 kHz. Somewhere around 100 ms is the lower edge of where it's "noticeable" in the flow of the conversation.
      • by jmv ( 93421 )

        Hi Bruce,

        Just a minor correction, the frame size is 20 milliseconds, not 20 microseconds :-). As for VQ, the concept is not that hard really. Of course, as for many things, the devil's in the details, many of which I got wrong in the Speex LSP VQ anyway.

  • I use digital almost exclusively and have wondered about when a suitable open source voice project would emerge. I look forward to seeing it developed further. Tim VK4YEH
  • I hope this takes off. It would be great to have a good OSS voice codec for amateur radio.
  • Packet loss? (Score:4, Interesting)

    by Amarantine ( 1100187 ) on Tuesday September 21, 2010 @02:30AM (#33646332)

    I didn't see it mentioned when quickly scanning TFA, but how does this codec handle packet loss?

    It is all nice and well to develop a codec to cram as much speech as possible in as few bits as possible, but in this case, one lost packet could mean a gap of several seconds. The success of a low-bandwidth codec, at least when it comes to IP telephony, also depends on how well it can handle lost packets. Low bandwidth codecs are usually used in low bandwidth networks, such as the internet, and there the packet loss is the highest.

    Same goes for delay and jitter, by the way. If a stream of packets is delayed, and more voice is crammed in fewer bits, then the delays in the voice stream will get longer too.
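
    For reference, the usual decoder-side answer to lost packets, sketched generically (this is a standard packet-loss-concealment pattern, not something codec2 currently implements): keep the last good frame, and when a frame goes missing replay it with the level faded down until good frames resume.

    /* Sketch: repeat-and-fade concealment for lost frames. */
    #include <stdio.h>
    #include <string.h>

    #define FRAME_SAMPLES 160      /* e.g. 20 ms at 8 kHz */

    static float last_good[FRAME_SAMPLES];
    static float fade = 1.0f;

    /* 'received' is NULL when the packet carrying this frame was lost. */
    static void decode_frame(const float *received, float *out)
    {
        if (received) {
            memcpy(last_good, received, sizeof(last_good));
            fade = 1.0f;                        /* reset on a good frame */
            memcpy(out, received, sizeof(last_good));
        } else {
            fade *= 0.5f;                       /* halve the level per lost frame */
            for (int i = 0; i < FRAME_SAMPLES; i++)
                out[i] = last_good[i] * fade;   /* replay last frame, attenuated */
        }
    }

    int main(void)
    {
        float good[FRAME_SAMPLES], out[FRAME_SAMPLES];
        for (int i = 0; i < FRAME_SAMPLES; i++)
            good[i] = 0.25f;                    /* dummy decoded audio */

        const float *stream[] = { good, good, NULL, NULL, good };  /* two lost */
        for (int f = 0; f < 5; f++) {
            decode_frame(stream[f], out);
            printf("frame %d: level %.3f%s\n", f, out[0],
                   stream[f] ? "" : "  (concealed)");
        }
        return 0;
    }
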

    • Re:Packet loss? (Score:5, Informative)

      by Bruce Perens ( 3872 ) <bruce@perens.com> on Tuesday September 21, 2010 @02:43AM (#33646406) Homepage Journal

      We don't know yet, but I don't see how it could be worse than AMBE in D-STAR, which makes various eructions when faced with large packet loss. I did various sorts of bit-error injection inadvertently while debugging yesterday, and right now you still get comprehensible voice with significant corruption of the LSP data. This, IMO, indicates an opportunity for more compression. Handling the problems of the radio link is more a problem for forward error correction, etc.

      • It would be great to be able to get this on phones. I know most VoIP/SIP type applications work fairly well on 3G, but if you don't have 3G coverage (or are on a smaller cell company who only licenses EDGE from the other GSM carriers) then it kinda-sorta works with 3 second delays and the occasional garbled audio. For example, my Nokia N95 on Immix doesn't get 3G (Immix didn't opt for 3G coverage from T-Mobile or AT&T even if the phone supports it and you're on their networks) but does EDGE at around 35
        • You still need lots of small packets if you don't want high latency. So this is better but still uses lots of networking.
  • English only ? (Score:5, Interesting)

    by Yvanhoe ( 564877 ) on Tuesday September 21, 2010 @02:37AM (#33646374) Journal
    At such high compression rates, one might wonder whether the optimizations to transmit clear speech make assumptions about the language used. Does it work well with French? Arabic? Chinese?
    • Re:English only ? (Score:5, Interesting)

      by Bruce Perens ( 3872 ) <bruce@perens.com> on Tuesday September 21, 2010 @02:56AM (#33646452) Homepage Journal
      The basic assumptions are based on the mechanics of the vocal tract, and I suspect not high-level enough to differ across languages, but obviously it would be nice to hear from speakers of other languages who test it. We could also use a larger corpus of spoken samples for testing.
    • by Ecuador ( 740021 )

      The languages you mentioned don't really use very different sounds. If you want a real test, try the click consonants in Zulu, Xhosa, etc.

    • by jmv ( 93421 )

      Actually, this is not low enough for language to really have an effect other than tonal vs non-tonal languages. As long as you "train" quantizers with multiple languages you're fine. I would not expect language-dependencies to actually kick in until you hit something like 100 bps or below (i.e. when you need to do speech-to-text in the "encoder" and text-to-speech in the decoder).

    • by sootman ( 158191 )

      Spanish is spoken so quickly, compressing it is like trying to make an MP3 smaller by zipping it--it just won't work. French, though, with all its mushy pronunciation, compresses very well, like how a blurry image responds well to JPG encoding.

  • Mumble integration ? (Score:4, Interesting)

    by Anonymous Coward on Tuesday September 21, 2010 @02:48AM (#33646422)

    One of the fastest ways to ensure its testing and distribution is to use it in Mumble - the low latency voice chat software (with an iPhone client as well).
    Mumble is typically used by gaming clans for their chat rooms, and there Codec2 would be tested in real-life conditions.

  • 1050 bytes for 3.75 seconds of speech is the equivalent of 2240 bits per second - good enough that an old-school 2400 baud modem would be able to transfer speech in real time. Impressive. But I seem to recall that the speech synthesizer of the TI-99 stored voice audio in as little as 1200 bits per second. It was well-documented enough that TI emulators emulate the speech synthesizer as well. But the sound quality left something to be desired, which is probably one area where codec2 shines. I've listened to the exampl
  • by sootman ( 158191 ) on Tuesday September 21, 2010 @08:38AM (#33648802) Homepage Journal

    Who wants to be the first to make a web service based on this codec and 3.75-second messages? :-)

    • At 4.5 letters per word, a 160-character text can hold about 29 non-abbreviated words. You'd have to speak at 464 words per minute to do that in 3.75 seconds. The world record is 595 wpm. Normal reading comprehension is in the range of 200-300 wpm.

      However, let's look at this from a different perspective.

      A non-abbreviated text message is about 4.8 bytes per word (Bpw?). At, say, 200 wpm speaking, this codec comes out to about 84 Bpw.

      Honestly, a 17x difference to go to audio is remarkable. Text is probably the most co

  • by briankwest ( 1905914 ) on Tuesday September 21, 2010 @10:10AM (#33650448)
    I have been working on mod_codec2.c for FreeSWITCH, which is committed in a WIP module. The library for codec2 isn't a library at all just yet. I'm working with David and Bruce to make sure we can get a working libcodec2 in place ASAP so we have a real VoIP demo that people can compile, call and test against. /b

"Pok pok pok, P'kok!" -- Superchicken

Working...