Television Media

Full-Screen Video Over 28.8k: The Claims Continue

gwernol writes "Over at Screen Daily they are claiming that an Australian company has demonstrated a high quality, full-screen video-on-demand service that is delivered over a 28.8k modem. They claim this will 'eliminate the need for broadband.' If this is true, then they'll change the world. Of course, the basic technology has been around for a while, see this article from 1998 or this one from earlier this year. I remain extremely sceptical. If this is real, why won't they allow proper independent testing? But it is interesting that they're getting funding. Could this be the last great Internet scam?"

Several readers also pointed out this brief report at imdb.com as well. We've mentioned this before, but the news here is the reportedly successful demo. It would be a lot easier to swallow if he'd let people test it independently, but video-over-28.8 sure is tantalizing.


  • Yes, and I am able to compress all of Slashdot down to 10 bytes.
    • by istartedi ( 132515 ) on Thursday August 30, 2001 @03:49PM (#2236158) Journal
      "Yes, and I am able to compress all of Slashdot down to 10 bytes."

      FIRST POST
      0123456789

      Well, what do you know, he's right!
    • Remember the DOS trojan that was floating around about 8 or 9 years ago that claimed to be a fractal compression program with amazing results? It could compress a 2 megabyte file down to a few hundred bytes. How did it achieve these amazing results? By deleting the file and filling the rest with junk. :-)
    • you probably could...

      compression is more efficient when there is lots of predictable duplicate data, right?

      Well, just think about how many "Linux is great" posts there are on Slashdot...
    • hell, it should be pretty easy if you just erase all the Jon Katz articles.
    • Yes, because compression works wonderfully on a corpus of massively redundant data...

    • Yes, and I am able to compress all of Slashdot down to 10 bytes.

      Actually, if you permit the use of a diff against previous stories and postings, it is possible to compress Slashdot down to an average of 3 bytes:)


      News item:

      Late Thursday, researchers announced that one of their Perl modules had successfully passed Turing's test for mimicking human intelligence via machine.

      Authors claimed the module was able to replicate Slashdot stories and user postings, including not only classic Trolls, but also the previously difficult to analyze AC postings at Score:-1.

      "It was a hard task to get that last set of Score:-1 set of responses correct, but we knew we had to do it if we were to present a credible emulation of Slashdot."

      Researchers were gratified that all of their hard work to simulate Slashdot paid off, demonstrating such a degree of fidelity that a recent audience of Slashdot users were transparently convinced of its reality.

      However, distribution of any prize monies is pending an appeal from Alan Turing's estate, who claim that Slashdot stories and postings do not represent intelligent life.



  • by Trak ( 670 ) on Thursday August 30, 2001 @03:43PM (#2236108) Homepage Journal
    You click on download, the viewer launches, and the status bar reads "Buffering..." for eight hours, then the full-screen video plays in full detail. It's amazing!
  • by FortKnox ( 169099 ) on Thursday August 30, 2001 @03:44PM (#2236111) Homepage Journal
    Pshah!

    With all the great things I get with broadband (at the same cost as 28.8 service), why would I give it up? Besides, if you can compress a stream for 28.8, imagine what you can do with broadband!

    This won't eliminate broadband. It'll strengthen it!
  • End of Broadband? (Score:4, Insightful)

    by JanneM ( 7445 ) on Thursday August 30, 2001 @03:44PM (#2236112) Homepage
    Funny, I find a broadband connection incredibly useful, and yet I never watch video over the net...

    The real advantage of a broadband connection is that you are always connected; you are accessible to others via mail and messaging at all times (just imagine if you had to explicitly connect your telephone to use it, then disconnect it again afterwards). The speed, while very nice, is actually not as important.

    /Janne
    • by garcia ( 6573 ) on Thursday August 30, 2001 @04:05PM (#2236235)
      umm, how is this any different from having a 24/7 dialup connection (or at least close to one)? PPP on demand would do just about the same thing as well.

      Broadband is great b/c I don't have to wait 5 mins while my porn comes up, I don't have to wait 15 hours for my whatever.tar.gz to download, and I certainly don't have to worry about my roommate stealing all the bandwidth downloading MP3s.

      Even if 28.8kbps can support full motion video you can't do much else while it is downloading.
      • Here we go again, Americans assuming that everyone lives in their country.

        Well some of us live in the poor, deprived, third-world old United Kingdom, where speed cameras and CCTV monitor our every move, and dialup access is (mostly) metered! Broadband only covers about 10% of the population (thankfully me :). The USA is AFAIK one of the only countries with unmetered local calls.

        In many cases, broadband is the only unmetered access.

        --Russ
    • Always connected and broadband are two totally separate things. Lots of people did always-connected over a standard voice line. If you wanted to avoid having to get a second line, it would be very easy to make a device which gave you two channels over a standard copper line; one of them could be used for voice, and one for data. Maybe even a little left over for control purposes. You'd have to give it a catchy acronym to compare with ADSL, maybe something like ISDN.
  • by omnirealm ( 244599 ) on Thursday August 30, 2001 @03:44PM (#2236114) Homepage
    With the advent of wireless technology, speed is not the only issue at hand. Energy is going to be a major factor to consider. While we may be able to compress video into oblivion, the processing power required to perform the compression/decompression may be too high for handheld wireless devices with limited battery power. Broadband availability for desktop computers is rapidly becoming a non-issue.

    People are going to want to send and receive video emails from their handhelds. We need a technology that will be able to strike a balance between energy required to transmit the signal (bandwidth) and the energy required to compress and decompress the signal (signal processing).
  • easily done (Score:4, Funny)

    by LocalYokel ( 85558 ) on Thursday August 30, 2001 @03:44PM (#2236115) Homepage Journal
    Lossy compression can do wonderful things. =)
    http://lzip.sourceforge.net/ [sourceforge.net]

    I hope this isn't another Pixelon...

    • by Anonymous Coward
      The only kind of compression for a BOFH. "You want your files compressed, eh? Well, OK - whatever you want."
  • by pogofish ( 514289 ) on Thursday August 30, 2001 @03:45PM (#2236119) Homepage

    Yes sir, full screen video over a 28k connection.


    So what am I seeing? It looks rather blank.


    Well sir, that's a white cow in a snow field. It just scared out some snow hares.


    Over 28k you say? Where do I sign?

  • Smoke and Mirrors (Score:5, Insightful)

    by topham ( 32406 ) on Thursday August 30, 2001 @03:45PM (#2236122) Homepage
    A 28.8Kbps modem delivering good video and sound? Uh-huh. It's the Holy Grail. The last guy I heard demoing it ended up on a wanted list for fraud. For all we know the machine had an 802.11b wireless card and was receiving multiple transmissions of the datastream. (Assuming any level of auditing was actually done to verify that the only data came over the 28.8 connection.)


    I don't even think it would be that hard to fake.

  • MP3... (Score:2, Informative)

    by Nate Fox ( 1271 )
    I'm just as skeptical as the next geek, but remember: MP3 changed everything in audio. Compressing a 60M song to ~6M?!? 10-12X compression with only minor quality loss? No one believed it when they were told, but once we started hearing it ourselves, we couldn't believe our ears. I hope they have made the next quantum leap in compression. I doubt it, but I hope.
    • Re:MP3... (Score:3, Informative)

      by CaseyB ( 1105 )
      but remember: MP3 changed everything in audio.

      People shouldn't have been that impressed with MP3. The concept of lossy compression algorithms was already in common use, in the form of JPEG compression of image data. (Now, I recall how impressed we were with JPEG back in the GIF days...) Getting 10:1 compression was pretty much the expected result of applying the same principles to audio data.

      Today, we would be just as skeptical of a new audio algorithm advertising 50:1 compression over MP3 -- which is effectively what these people are asking us to believe, since their ratios are versus existing compression schemes, not raw data.

    • Re:MP3... (Score:3, Interesting)

      by shepd ( 155729 )
      Video already compresses surprisingly better than any audio format I know of.

      For example, take a 10 second clip of 640x480, 24-bit, RGB, 29.97 fps video (no audio). The math sez it's:

      640 x 480 x 3 x 29.97 x 10 = 263.41 MB (approx).

      Yet 10 seconds of 10 Mbit/s MPEG-2 video (very high quality) takes up about 12 megabytes of space. That's a compression ratio of about 22:1!

      Over a 28.8kbps modem on the internet we are looking at about 2.6 kB/s of usable data (headers and other overhead removed). This means the above 263 MB video is supposed to compress down to less than (don't forget about the sound!) 26 kB. That's a compression ratio of 10374:1!

      I can believe a leap of 10x, *maybe* 50x. But a leap of nearly 500x is just something I have to try on my own terms before I believe it.
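      A quick back-of-the-envelope in Python, using the numbers from this post and nothing else (the 2.6 kB/s usable-throughput figure is the post's estimate, not a measurement):

      MB = 1024 * 1024
      raw = 640 * 480 * 3 * 29.97 * 10      # bytes in 10 s of raw 24-bit video
      mpeg2 = 10e6 / 8 * 10                 # bytes in 10 s at 10 Mbit/s
      modem = 2.6 * 1024 * 10               # bytes in 10 s at ~2.6 kB/s usable

      print(f"raw:    {raw / MB:7.1f} MB")
      print(f"MPEG-2: {mpeg2 / MB:7.1f} MB  ({raw / mpeg2:5.0f}:1)")
      print(f"modem:  {modem / MB:7.3f} MB  ({raw / modem:5.0f}:1)")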
      • This means the above 263 MB video is supposed to compress down to less than (don't forget about the sound!) 26 k.


        Nah, you can pretty much leave out the audio: get the words by "reading" the lips of the actors, then use text-to-speech with an already-downloaded database of actor voices. Splice in a few "" messages, and you have your special effects.


        As for the video, you just put a 3D image of all the actors into your distribution DVD, then you can just send position vectors of their movements. Add in a couple "" messages, and you have a full action-feature. The advantage being that you don't even have to hire the actors any more. You just feed the screenplay into the decompression algorithm, and out pops the movie. As an advantage you can change the actors on the fly, in case you'd rather see certain ones during the nude scenes, for instance.


        I'm joking of course, but one question to ponder is whether that would even be enough; you can use information theory to see if it would be possible. Certainly a screenplay could be sent over 28.8kbps, but you'd have to include the information coding all the decisions made by the actors/directors/DPs/etc. too.

    • Re:MP3... (Score:2, Funny)

      by Carmody ( 128723 )
      "I hope they have made the next quantum leap in compression. I doubt it, but I hope."

      Recall that a "quantum leap" is by definition the smallest possible leap. The next quantum leap in compression will be just compressing one more bit. Why hope for that?

    • The one thing that sets VOD apart from the normal uses of MP3 or MPEG is that VOD can require a Cray to compress, as long as the decompression is easy. Just like fractal compression, which is theoretically possible, just REALLY time-consuming to compress.

      One way might be to send executable code: stored procedures that manipulate image regions. Think of it as motion prediction on steroids. Now, I don't have a clue as to how the compressor would figure out these code snippets. Exhaustive search? Mind you, it doesn't necessarily have to be Turing complete; perhaps a very advanced command language.

      In fact, VOD compression doesn't have to be 100% automatic. You can very well justify having an operator select regions where the compressor should try harder to optimize, and what kind of optimization is likely to be needed. Say, you mark _this_ as background for the next X frames, _this_ is foreground, _that_ is a repetitive element, so store that in our image cache -- see, _there_ it is again.

      These sorts of human-driven super-optimisations might be able to achieve a very high level of compression with acceptable results.
  • Sound good (Score:2, Funny)

    by bomek ( 63323 )
    As long as I can put The Matrix at DVD quality on a floppy disk.
  • by Whyte Wolf ( 149388 ) on Thursday August 30, 2001 @03:47PM (#2236140) Homepage
    These 'secret' proprietary processes always seem to generate a lot of hype, investment/funding, whatever, and never seem to deliver the proposed technology. A good example's a Calgary company that hyped its 'new' large-scale flat-screen (non-mortised screens) technology. It turned out that the founder had fraudulently demonstrated 'their' tech to shareholders using a competitor's equipment.

    I can't help but think of 'The Spanish Prisoner.'

  • by rjamestaylor ( 117847 ) <rjamestaylor@gmail.com> on Thursday August 30, 2001 @03:49PM (#2236156) Journal
    Remember Pixelon [google.com]?

    The above is all that is necessary to say on this subject, but due to the postercomment compression filter, I have to add this meaningless paragraph.

  • Could this be the last great Internet scam?

    Of course not. It's obviously a scam, but it's equally obvious that scams are not going anywhere. Human nature hasn't changed. As long as there are people who are desperate to believe, there will be people willing to tell them what they want to hear. As long as the net is less than people want it to be - which is to say as long as it exists - there will be snake oil salesmen promising that they can make it into what people want.

  • Pixelon (Score:2, Informative)

    by Ioldanach ( 88584 )
    For anyone who's ever heard of Pixelon [redherring.com], we'll believe it when we can test it ourselves.
  • Here's Slashdot's last article about a company like this:
    http://slashdot.org/articles/00/06/27/1156210.shtml

    Good thing he doesn't have it patented, tho. As soon as he releases software, the algorithms will be available to everyone.
  • The problem isn't just the bandwidth. The problem with the idea of video-on-demand is that, unlike broadcast, your costs scale with your audience. The technological problem of fitting high-quality signal over a tiny pipeline is a great one to solve, but video-on-demand's real problem is that the cost scales.

    It's like choosing an O(n) algorithm when you know an O(1) algorithm is available.

    See, if I start broadcasting a signal, the more people that tune in, the more I can charge for pay-per-view or advertising. But the neat thing is that my cost is fixed; no matter how many people tune into that signal, it costs me the same amount to spray EM waves all over the place.

    But with VoD, every new viewer means new bandwidth. Meaning that my costs go up with each new customer. And since the cost of additional bandwidth is not a linear equation, at some point there's diminishing returns, regardless of how small the stream is. My profit margins wither and die if there's enough demand for my video stream.
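    To make the scaling argument concrete, here's a toy cost model in Python. Every number in it is invented purely for illustration; the point is only the shape of the two curves:

    # Broadcast: one fixed cost no matter how many people tune in.
    # Unicast VoD: a per-viewer bandwidth cost that never stops growing.
    BROADCAST_FIXED = 50_000      # hypothetical transmitter cost
    COST_PER_STREAM = 0.05        # hypothetical cost per viewer served

    for viewers in (1_000, 100_000, 10_000_000):
        print(f"{viewers:>10,} viewers: "
              f"broadcast ${BROADCAST_FIXED:>9,}  "
              f"unicast ${viewers * COST_PER_STREAM:>12,.2f}")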

    The only real solution for this from a business perspective is...get this...distributed file sharing, such as Napster or Gnutella. With tools like these, I'm able to avoid the added demands on my server by making the folks who want the service into servers themselves.

    So the real technical problem to solve with VoD is not to make the streams smaller, although that certainly doesn't hurt, but to make money off of folks' file transfers. Obviously a direct tax on each transfer is going to cause problems, but an advertising-based model, where each transferred file has an advertisement attached to it, could work wonderfully.

    Too bad for the RIAA and MPAA that they're too busy suing file-sharing users and pushing unsuccessful VoD goose-chases to figure this out, eh?

    This is a cool technology if it's real. I wouldn't be surprised if it is real. But it won't make the internet into the great media-delivery tool the media corporations want it to be.
    • This is why caching proxies need to become more widely used.

      • Caching proxies only ameliorate the problem slightly, and aren't effective for true Video-on-Demand. There are a lot of things you can do for scheduled, live events, but for watching (say) a movie when I want to, multicasting, caching proxies, and even really tiny 28.8kbps video streams don't solve the problem of video-on-demand for millions of users.

        The only way to do it is to build a system that's distributed, just like the internet is, such as Gnutella or Napster, where each person who downloads a movie then becomes a distributor for the movie.

        A great advantage for a napster-like system aside from the distributed bandwidth is for the people like us who actually watch the movies: Once we get a movie, we own it. We don't have to download it a second time. We can watch it as many times as we want, bandwidth-free.
  • It's certainly possible that they can compress video/audio data this much. There are types of compression available far more powerful than what's commonly used... the reason being that they demand way too much computing power to encode and decode. For example, neural networks have been used to compress data like pictures to tiny, tiny sizes. But if you've ever seen neural network algorithms, you know that there's a lot of computation going on.

    That said, assuming they have the compression, probably nobody has a CPU fast enough to decode it.

  • Which will probably be never. Though the claims in the article are so vague as to be impossible to evaluate meaningfully. What does "excellent picture quality" mean, really? Are we talking "looks better than a crappy antenna on a 13-inch set," or "you could play it on a 10-foot screen and people would think you had a movie projector?" How about "CD-quality audio?" To some people, a 64kbps MP3 qualifies; others claim any existing lossy audio compression sounds unacceptably bad to them.

    But the outfit's complete unwillingness to do anything but canned demos is what really makes me think the guy in charge is more than just coming across like a snake-oil salesman.

    If it's for real, they'll file for patent protection and we'll all get to see how it works. And if it's for real, they deserve a nice solid patent or three, but my guess is it's just a scam.

  • I don't believe the claims of the story are even remotely possible, but what about using wavelet lossy compression (e.g. JPEG 2000) for video? Any experts know what kinds of compression it would be able to achieve? As far as I know, all current video compression still uses discrete cosine transformations for the lossy portion of compression.
    • Real compression uses frame-to-frame correlation. The DCT is merely there to transform the residual difference into something simpler to code. It's also used in MPEG for key frames, but those are usually inserted only once every half second and are generally coded at reasonably high quality.

      During the MPEG-4 competition, people proposed wavelets for the key frames, but in practice it didn't look much better, since most of the compression came from inter-frame motion-compensated prediction; the difference wasn't big enough to justify changing things...

      In JPEG 2000, you generally don't have multiple frames to compress, so using wavelets makes a bunch of sense. Wavelets didn't perform very well on coding the residuals, so they aren't used for that... (the residual is mostly noise)
    • I tried this.... (Score:2, Informative)

      by SealClubber ( 260672 )
      I experimented with this last year. I was trying to prototype a client-server system on which graphics were rendered on a central server then compressed and piped to clients.

      I played with some wavelet video compression/decompression cards based on the analog devices ADV601 chip (you can google it). It can achieve high compression ratios on grayscale images working on a frame by frame basis (kinda like MJPEG but with wavelets).

      After playing with the server a bit (it was a Beowulf cluster :) I wrote a software wavelet codec which I then tried to integrate with MPEG2 interframe compression. This turned out to be very tricky because a lot of the interframe motion vector compression relies on the DCT blocks from the JPEG-style intraframe stage (you've probably seen the obvious 'boxes' of pixels when viewing a very highly compressed JPEG image).

      Anyway, the results I was getting (for grayscale) *sound* impressive. 200:1 was possible for most images but only pictures with smooth contrast changes looked any good after decompression. Any sharp edges (e.g. graphical overlays) were completely destroyed at any compression rate over 10:1. Throwing the MPEG interframe stuff into the mix didn't really help much (partly due to the problem outlined above), although I can't say I explored all the possibilities along this route.

      After becoming more interested in coding proper parallel apps for Beowulves rather than hacking the MPEG's source I let the project drop. Code available if you'd like a look.

      My personal opinion on this fullscreen video with CD-quality sound over 28.8 is that it's complete tosh. It's absolutely impossible to compress that much information into such a small pipe. Unless this guy has discovered something that invalidates an awful lot of our current mathematical thinking, this claim is nonsense.

  • There are SO many ways to rig an evaluation without resorting to such lame techniques as showing a completely rigged video. ;)

    For example, if you know the exact parameters of a data set, you can optimize your compressor for just that data set. Like, for example, allocating a lot of bits to pink in a pr0n pic.

    You can get insane compression with fractal/wavelet algorithms if you sit down and figure them out by hand or brute force.

    And then, of course, there is the question of what's on the system running things.

    I mean, seriously, you could store four mini-streams and composite them to form the "real" stream. If you think of it that way, Flash already gives you streaming full-screen video over a 28.8 modem.

    Oh yeah, and I forgot about doing really high-quality resizing to make fewer pixels look like more.. ;)
  • Think about it for a minute. Video CD and Super VideoCDs compress MPEG at anywhere between 1.15 and 2.25 Mbit/s. With transport encoding, that's between 1.25 and 3.0 or so (give or take).

    Now, bear in mind that they're not transmitting over the net - so there's no lag, no reassembly - they're just squirting a continuous packet stream.

    28800 bps is more like 26400 bits per second after overhead - which is about 0.026 Mbit/s.

    So that's a factor of 40 to 100 difference. With some clever algorithms (e.g. DivX), making use of the fact that NTSC is generally lossy (and thus letting you throw away a lot more of the signal than a videophile would like), you might get away with it. You could just about squeeze VideoCD quality down that pipe. Not bad.

    Simon
    • So that's a factor of 100 difference. With some clever algorithms (eg. Div-X), making use of the fact that NTSC is generally lossy...


      Umm, wait a sec -- in the VideoCD compression phase you've already taken all the advantage you can of the slop in the original NTSC. You're talking about 100X compression of an already tightly-compressed data stream, which is to say that you're going to find sufficient redundancy in the data to remove 99% of it.


      Pull the other one.

  • Well, even with all of the advancements in video compression, I still HIGHLY doubt that we're at the point where decent, broadcast-quality video can be streamed at ~2.5 KB/sec. Unless they have some "magic" means of compression that nobody else will even come close to for at least a decade, I remain doubtful. Broadcast-quality video can go anywhere from 26+ MB/sec (uncompressed NTSC) down to ~3.7 MB/sec (DV/DVCam/etc.) for decent compression. But a decent picture at approximately 0.000676 times that size? I'll believe it when I see it. Besides, there's only talk so far, no REAL proof that outside people can test, review, and confirm or deny.

    This all reminds me of a friend who thought he could compress his whole hard drive onto a floppy by just zipping his files up hundreds of times. You know how that goes...

    But there's no doubting how cool something like this will be once the technology in compression advances to this point. Screw MPEG-4 or MP3; if someone could successfully do this, it would change how TV and the Internet are separated (or combined, in some cases) forever.
  • ...substantial compression. Right now, I'm pleased watching (while I work) a 2inch x 2inch video of the Simpsons in the corner of my screen, which is allegedly at 350kbps, but that's still not Amazing Quality at Low Low Rates - unless I have my figures wrong (i.e. bits/bytes, which is entirely possible), this would be an additional compression of, what, 12 times? That would be groundbreaking, but wouldn't we have seen some intermediate steps?

    Hell, maybe not. Maybe it is genuine. But I'm with gwernol - let's see some independent testing.

    ...also, does anybody else remember that April Fool's joke about lossy data compression, where it actually just deleted the files? Sure, you get 100% compression - but it's lossy.

  • However, they're not exactly on the cheap side. Any sane (but greedy) manufacturer would be well advised to hide such costs for as long as humanly possible, to rake in the investors before fleeing across the border.


    First, you'd need hardware fractal compression. It's the only compression system capable of the sorts of compression ratios required, for the type of information being delivered. However, it's PAINFULLY slow, which is why it's not in general use, and the only companies touching it are using ultra-powerful dedicated hardware.


    Second, "full-screen" is a bit of a suspect term, when it comes to video. Television uses interleaved frames. In principle, this means that you only really need to send over half the information, and do simple interpolation for every other scan-line.


    Third, that the modem couldn't be checked is itself a bit suspect. It really wouldn't take much to conceal a DSL circuit, especially if it was an internal modem. At which point, your 28.8k suddenly becomes 28.8M. A somewhat more plausible speed.


    Lastly, although I doubt it was done this way, if you run -enough- 28.8k modems in parallel (say, a thousand of them) and stripe the data across them, you could easily reach high enough speeds, AND "legitimately" claim that you had video over a low-speed modem.

  • Okay, sounds pretty bogus, huh? I mean, take full-quality video and stereo CD sound and you're talking about 310 megabits of data every second at the sizes they talk about.

    Even if you use lossy compression such as DivX to reduce the video size, you're still talking about 100 kbit/s for decent video and 1 Mbit/s for anything close to full-screen quality.

    But we're talking data here... what about information? Data is bits. Information is the meaning of the bits, and a lot of information is highly redundant. Take English. I heard once that there are about 1.2 bits of information per character in the English language; that's why text files get such good compression ratios with gzip.

    Video is not so highly compressible, mainly because the codec doesn't understand images. Codecs generally just split the image up into smaller and smaller blocks and look for exactly repeating patterns. Lossy compression allows them to look for roughly repeating patterns, and pretend they're exact. Not exactly rocket science.

    Take a scene; any one. Like the one from the Matrix. Where Keanu Reeves is in his trench coat, black t-shirt, and black jeans, and an evil computer agent is standing in the background firing at him. You see Keanu bent over at the knees and there's 5 bullets coming at him with a particular trajectory pattern, with cool spiral air deformations coming off the back. Know the one I'm talking about?

    Guess what? I just described it in 312 characters - at 1.2 bits per character, that's about 400 bits of information. Throw in another 100 to precisely place everything and another 500 to describe background scenery, etc. Sure, it was REALLY lossy compression, but that's an example of the kind of thing you can do if you have an understanding of what's in video. At the very least, you can decide WHAT you can ignore and focus on preserving the really important stuff.
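    For what it's worth, you can sanity-check that bits-per-character figure with a few lines of Python (the description string below is a paraphrase, and zlib's per-stream overhead makes it a loose upper bound on short inputs):

    import zlib

    desc = ("Keanu in trench coat, black t-shirt and jeans; evil agent in "
            "the background firing; Keanu bent over at the knees; five "
            "bullets incoming with spiral air deformations off the back.")

    print(len(desc), "chars")
    print("raw ASCII:        ", len(desc) * 8, "bits")
    print("at 1.2 bits/char: ", round(len(desc) * 1.2), "bits")
    print("deflate:          ", len(zlib.compress(desc.encode())) * 8, "bits")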

    Like, most people won't notice if the sky isn't the exact same shade of blue. Or if the flat blue areas of the sky have a slightly different texture applied to them.

    Okay, this is all pure pie-in-the-sky theorizing so far... I just wanted to set all that up to point out that this seems possible. HOW could it be done? Well, this is pure speculation but...

    A few years ago lots of people were looking at using various types of fractals to compress images down. This flourished briefly as the IFS file format (c. 1995), but the patents on the algorithm allowed the author to charge an exorbitant royalty, so it never got off the ground other than for a few high-end video conferencing systems. These systems used (you guessed it!) regular phone lines. Sure, maybe not 28.8 modems and maybe not full screen (though I distinctly remember that the frame rate was between 24 and 30 fps, depending on what kind of processor you used), but from there it's just process improvements.

    Plus, I imagine that MP3 has taught us a lot about lossy compression that could be applied to this sort of thing. I don't personally know anything about the details of MP3, but assume that its methods can be applied to fractal compression at approximately the same rate, e.g. 3x-6x compression with negligible quality loss and 12x at maximum compression... and that would be enough to take this technology to the levels this guy is talking about...

    Ok, I'm done dreaming. Anyone have any comments? Does anyone remember this IFS format or have any more info on it than my hazy recollection?
    • If (somehow) we've never seen "The Matrix" or have no idea who Canoe Reeves is, your description doesn't do much. I mean, what does an "evil computer agent" look like? I could imagine the MCP or one of its guards, and think "The Matrix" is some lame Bill and Ted rip-off of "Tron."
    • Congratulations, you have just invented vector-based graphics. It's already possible to make streaming cartoons of decent quality in Flash and related programs, so all we need now is a scene-description language capable of generating Keanu Reeves from a small file.

      (Alternative objection: You have merely passed me a query to my brain's database which happens to contain a large amount of preprocessed information regarding The Matrix. Had I not seen the movie, I would not be able to decompress your scene.)
  • by ArcadeNut ( 85398 ) on Thursday August 30, 2001 @04:13PM (#2236281) Homepage
    Ok,

    Let's assume a video frame size of 320x240x16-bit. We can scale this up fairly well; however, it's nowhere near TV quality.

    Each frame takes 153,600 bytes uncompressed. Now let's say you can get 80% compression on each frame. That would bring us down to 30,720 bytes per frame.

    A typical 28.8K modem is going to see 2800 bytes a second (on a good day, more like 2400 bytes in the real world). Note: This is a 28.8K modem and not a 56K modem.

    Based on these numbers, it would take about 10.9 seconds per frame (30,720 / 2800 = 10.9).

    Obviously there are tricks that one can do such as deltas between frames rather than actual frames, etc...

    However, in order to get 24 FPS (3,686,400 bytes) in real time, they would have to get a compression rate of about 99.92% (for the 24 frames).

    It just doesn't add up. I think they are full of it and this product will never go beyond vaporware.
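    The arithmetic above, in runnable form (same assumptions as the post: 320x240x16-bit frames and an optimistic 2,800 bytes/sec through the modem):

    frame_bytes = 320 * 240 * 2          # one uncompressed 16-bit frame
    modem_bytes_per_sec = 2_800          # optimistic 28.8k throughput
    fps = 24

    budget = modem_bytes_per_sec / fps   # bytes available per frame
    reduction = 1 - budget / frame_bytes # compression needed to fit

    print(f"per-frame budget:   {budget:.0f} bytes")
    print(f"required reduction: {reduction:.2%}")   # ~99.92%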
    • by Dr. Zowie ( 109983 ) <slashdotNO@SPAMdeforest.org> on Thursday August 30, 2001 @05:01PM (#2236566)
      80% (a factor of five) compression is unreasonably inefficient. Even without frame-to-frame similarities, wavelet image compression schemes can achieve 50x compression with no visible degradation (I know, I did experiments last year as part of a spacecraft proposal effort). That's a factor of 10 from your figures -- about 1.1 seconds per frame. Using the similarities between frames, it's not unreasonable to think that another factor of 10 applies (MPEG achieves factor-of-100 compression where JPEG only gets factor-of-10), bringing the frame rate up to around 10 per second.

    • Comment removed based on user account deletion

      • Suppose, for example that the camera is slowly panning across a static image. MPEG would see that as the *whole frame* differing from its predecessor, where a location-independent approach like fractal compression would still be able to take advantage of the redundancy.


        No, MPEG would not. Do you think it was designed by a group of monkeys? MPEG would see this as a simple translation and code the correct motion vectors into the P- and B-frames of the stream. There is more to MPEG than simple DCT blocks. You're thinking of MJPEG.

  • by color of static ( 16129 ) <smasters&ieee,org> on Thursday August 30, 2001 @04:16PM (#2236289) Homepage Journal
    Back around 1990, there was a similar thread going around Usenet about a company called Web Technologies. They claimed to have some fantastic compression ratios, and to be able to compress already-compressed data again. They got a lot of press, but on Usenet it was quite obvious that they were full of &%$#.
    In fact, someone came up with a mathematical statement showing that the only way their claims would hold water was if they just gave out 64-bit serial numbers and stored the data somewhere else. Not too different from what we call Freenet now.
    Needless to say, these guys ended up going under after the investors figured out they were not only full of it, but 10 lbs of it in a 5 lb bag.
  • There are a few articles where people claim stuff, but they don't really say anything. Okay, they say that they have full-screen video over 28.8. Where's the evidence? What are they using to do this? And if this really can be done, then why is it NOT being done anywhere? I think it's crap.

    Show me the money baby!!!

  • University backstep (Score:5, Informative)

    by DHartung ( 13689 ) on Thursday August 30, 2001 @04:21PM (#2236322) Homepage
    The new article as well as the earlier one both say that the technology is "backed by a report from Monash University" {in Melbourne}, but back in April, Monash vigorously disputed claims of their support [mycareer.com.au]. They conducted an independent review but the compression algorithm was black-boxed. The company may be misrepresenting the purpose and parameters of the review, from the university's point of view.
  • Much like this one ...

    http://lzip.sourceforge.net/

    :-)
  • The current state of the art in compression technology is benchmarked by Jeff Gilchrist at his site [compression.ca] which includes current benchmarks in image compression technology too.
  • Last year I did some work on image compression using wavelet transforms. We were able to get 50:1 compression on scientific image data, with 12-bit dynamic range. That compression ratio was without any use of interframe similarities -- a movie compression algorithm could probably get another 20:1 compression without much trouble. At 30 fps and 0.33 MB per frame, that's 10 MB of image data per second. Compressed 1000 to one, you're only talking about 10 kilobytes per second. If you're willing to suffer with less dynamic range around spiky bits of data, it's not unreasonable to think that another factor of four could come out of that, giving 2.5 kB/sec or 20 kbps -- leaving 8 kbps for the sound.
    • We were able to get 50:1 compression on scientific image data, with 12-bit dynamic range.

      Ok, what is "Scientfic Image Data"? Pictures of planets?

      What is "12-bit dynamic range"?

      At 30 fps, 0.33 MB per frame, that's 10 MB of image data per second. Compressed 1000 to one, you're only talking about 10 kilobytes per second.

      Ok, what is your source resolution and color depth? How did you come to .33MB per frame?

      Even assuming you could get that down to 10K, a 28.8K modem runs at about 2.8 KB a second. It would take you 3.5 seconds to download those 30 frames. That would bring your frame rate down to about 8.5 FPS. This doesn't even include audio.

      If you're willing to suffer with less dynamic range around spike bits of data, it's not unreasonable to think that another factor of four could come out of that...

      So now you are talking about a 4000:1 compression ratio? Sign me up! The highest I've read about is between 10:1 and 20:1 compression for MPEG-4!

      Even if you had a typo and meant 100:1, then another factor of four would put the compression ratio at 400:1. That is hardly realistic.

      • We were looking at images of the solar corona. It's a distributed object with faint gradations in intensity. The biggest problem we had in general with compression was that cosmic ray spikes and stars in the field of view tended to cause "ringing" with JPEG and similar Fourier-type compression schemes.

        I figured 0.33 MB per frame because 640x480 is about a half-megapixel, and you'd probably be happy binning it down to 320x240 (more typical of VHS video), yielding an eighth of a megapixel. Putting in three color planes takes you back up to something like a third of a megapixel. Eight bits per color plane gives you a third of a megabyte. (Note that that's not really a good way to think about it -- usually there's a LOT more information in the luminance signal [the RGB common mode] than in the hue and saturation signals -- so you might need fewer initial bits...)

        Our 50:1 figure came from a single, noisy image plane with the criterion that 99% of the pixels had to be within 1 DN (12 bit DN) of the original value, after compression and restoral. The test image on which we applied 50:1 compression was from the TRACE [lmsal.com] satellite -- click the link for some sample images.

        • I'm not sure your example is relevant, then. You see, all the images you were working with were very similar. It is of little surprise that you were able to find a wavelet codec that worked very well for these images. However, if you took the same wavelets and applied them to a wide range of image types, do you really expect your compression to work as well?

          This is a common mistake that people make. Someone designs a compression scheme that works really well for specific cases and thinks that it will work in the general case. Hell, I once designed a custom lossless scheme for handling certain classes of bitmaps that beat LZW by a factor of 5:1, but I guarantee you that if you applied it to bitmaps we were not interested in, it would have been very unimpressive. I suspect the same can be said for the wavelets your group was using.
  • These guys have been listening to their own snake oil pitch too long.


    Even assuming that they can produce great full screen video with a 28.8 connection, there is no evidence that broadband will no longer be needed. They seem to AssUMe that the only thing broadband is used for is streaming video.


    How will this miracle technology help me download the latest Linux kernel in a few minutes over 28.8? It will not. Speed up my binary newsgroup downloads so I can get gigs of possibly copyright-infringing binaries every day? No. Will it even speed up my web browsing so that I don't have to wait 30 to 60 seconds for CNN.com to show up? No, not that either.


    Broadband is safe whether or not their claims are real.

  • What's the frame rate? Sure, I could do HDTV over 28.8 -- if I had 1 frame per minute.

    This is pretty absurd. Let's say 10 frames/second, which I think is probably the minimum for a decent experience. 28.8k = 3,600 bytes/second (yes, it's 8 bits, not 10 bits). That's only 360 bytes per frame! Full screen? 320x240x24-bit = 230 KB uncompressed. That's 640:1 compression -- without sound. With sound??

  • Could this be the last great Internet scam?

    Surely not the last...

    • And it's really not that great. Everyone here realizes it is a scam. I would love to be proved wrong, but something tells me I won't be.
  • The Romans used a system of compression that is still unrivaled today (in terms of compression ratio, unfortunately not in terms of speed). Very simply, you have two men on each side of the valley, one with a flag and one with a bowl and a jug of water. On the sides of the bowls are little notches.

    The sender raises his flag, and both sides start pouring water into their bowls at the same rate. When the sending side's bowl is filled high enough, he stops pouring and his flag man raises the flag again to signal the other side to stop pouring as well.

    So what was the point of this? Well, now both sides of the valley have the same number of notches filled in their bowls. Each notch, of course, was a particular battle plan that was to be carried out. But for our purposes, it could be an ASCII byte of information.

    This kind of "compression" is essentially one with an infinite compression ratio, i.e. any amount of data can be "sent" using only two bits of information (the start and the stop bit). The only real problem with using this kind of system is one of time: clocks are just not accurate enough to make this kind of system any faster than just sending data the normal way.

    Anyway, I'll leave it up to the rest of you to figure out a way to make this into the "next big thing", but I just wanted to note that, while 99.99999% of these claims are fraudulent, there is a basis for such a scheme to exist.
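    A toy simulation of the water-clock channel, just to show where the catch lives (the pour rate and jitter are made-up numbers):

    POUR_RATE = 1.0    # notches filled per second, agreed on by both sides
    JITTER = 0.1       # combined timing error, in seconds

    def send(notches):
        return notches / POUR_RATE         # how long the flag stays raised

    def receive(elapsed):
        return round(elapsed * POUR_RATE)  # decoded notch count

    assert all(receive(send(n)) == n for n in (1, 5, 42))

    # The catch: you can only resolve about POUR_RATE / JITTER distinct
    # notch values per second of pouring, so the "infinite" compression
    # ratio buys almost no actual throughput.
    print(POUR_RATE / JITTER, "distinguishable values per second of pouring")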

    • Right, the clock accuracy implies a bandwidth limitation. If you can only accurately clock to 1/10 of one second, your method can send at most ~10 pieces of data per second (flag up, flag up, flag up, etc.).

      Indeed, bandwidth limitations are usually down to clocking accuracy, via intersymbol interference and clock skew.
    • Reminds me of an SF story I read about (haven't actually read it myself) where an alien comes to Earth, learns a load of stuff from us and decides to take back to his home planet a copy of our entire knowledge base. So he gets our Encyclopaedia Terra (or whatever), and converts it into a computer format.

      This is, of course, just a single really big number, if read as such. Which our alien takes the reciprocal of, giving him a number between 0 and 1. He then marks this point on a rod of some type, where the fraction of the way along the rod at which he makes the mark is the same as this reciprocal number he's got.

      Voila! He has encoded our entire bank of knowledge as a single mark on a rod. And he could easily put other marks on the same rod as well, indicating other civilisations' banks of knowledge. All he has to do is work out how far along the rod the mark is as a fraction of the length of the rod, take its reciprocal, and he's got it all back.

      (Now for a bonus point, why won't it work? :)
  • ...AAlib, perhaps? :-)

    -Karl
  • by Saeger ( 456549 ) <farrellj@nosPAM.gmail.com> on Thursday August 30, 2001 @05:17PM (#2236669) Homepage
    A 28.8 link can do 3 KB/s at best. Even with some super-duper 10x-better-than-DivX codec, there's only so much data you can cram down a pipe that thin without resorting to tricks.

    My first guess is that these Aussies have impressed clueless execs with ordinary tech.

    My second guess is that maybe someone finally got around to applying foveation [nyu.edu] in a way that works really well.

    Perhaps these Aussies are hooking up test audiences to eye-tracking devices, and recording their average gaze during a film so that they can get even higher compression [stanford.edu] by throwing out what's outside most people's field of view?

    *shrug*

  • It may kind of "work", but just good enough to lure investors. Black box demo's lie this are very suspicious.


    Did the auditors get to pick a movie of their own choice?


    Did the auditors supply the test HW, to ensure no tricks could be done?


    If their compression is as efficient as they claim, they could patent it and submit it to the MPEG group. If it blows the competing codecs out of the water, they'll make a bundle on licensing. Instead they are staging suspect demos, hoping to lure investors. The same kind of investors who will buy stuff from ads with the "as seen on TV" logo.

  • I've looked at the articles - and while it seems likely to be a scam (the 5GB player application, for one), one possibility does not seem to have occurred to any of the other posters.

    Just because he's using a modem doesn't mean that he's actually transmitting digital data over the phone line. What sort of video compression can be achieved when you don't need (or get) bit-perfect transmission, but rather encode video properties directly in the analog signal? Errors then show up as slight inconsistencies from the original color or position - but on motion video, this would be irrelevant.

    The compression would still need the common video codec functionality to remove redundancy, and to send the changed areas more frequently than static images, but if the modem link mapped QAM data directly to position and color signals, it might just be possible to paint a fairly high-quality picture.

    For that matter, some fractal compression techniques are quite tolerant of minor errors in their probability and/or mapping factors - combine this with sending color information as analog data, and now you might be able to have a link that is unidirectional (the whole audio bandwidth can be dedicated to the video stream without need for a reverse channel) and error tolerant (no re-transmit on error or dropouts due to transient line noise).

    Maybe it isn't a scam.
  • This is all from memory from many years ago, and another lifetime....

    Back in the good ole C64/Apple days we wanted to stream gfx over a modem, with ASCII and by reprogramming the characters into 8x8x2 bitmaps. Using character mappings you could make little guys run, little cars drive, etc.

    Then someone came up with Megabignum (no joke), which used A-Z, a-z, 0-9, !@#., etc. to get a larger set of characters to work with.

    Then there was RLE-type gfx, which was black-and-white bitmaps. (I think 4 bits, actually.)

    You map a 320x200 RLE image into 40x25 ASCII-type characters. So 1000 characters per frame, or let's round up to 1K per frame. I don't think anyone did anything this big, maybe in some demos.

    Using this character set mapping conversion was a simple trick, but it worked.

    I don't see why you couldn't take this character set idea and expand it with compression and do larger 640x480 b/w 30fps images over a 56K modem.

    Maybe someone smart could come up with a way to add color.
  • Hacker1: Wow, what kind of modem is that?

    (cool graphics coming from another machine over modem are on the screen; yes, this modem is definitely broadband, otherwise it would be impossible to show such neat graphics)

    Hacker2: It's a 28k8!!!

    Hacker1: Amazing, marvellous, etc. etc.


    (forgive me for not remembering the names, the movie wasn't that good :-)
  • MPEG works by sending a stream of key frames interspersed with a number of delta frames.

    Persistence of vision becomes really flaky at under 25 frames per second. With the overhead of stop bits, start bits, PPP protocol, etc., 28.8 kbit/sec is actually more like 22,000 bits/sec. That means that there are fewer than 900 bits to encode the delta between one frame and the next.

    There might be something to be had out of using second-order derivatives, a delta encoding of the delta encodings. There might be something to be had out of more powerful delta encoding techniques, with more complex transformations from one piece of screen to the next.
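    A one-dimensional toy of that delta-of-deltas idea (a smooth sine wave standing in for a slow pan; the sizes are whatever zlib reports, nothing more):

    import zlib
    import numpy as np

    t = np.arange(1000)
    pan = (128 + 100 * np.sin(t / 50)).astype(np.int16)  # smoothly varying signal

    d1 = np.diff(pan)            # first-order deltas
    d2 = np.diff(d1)             # delta encoding of the delta encodings

    for name, arr in (("raw", pan), ("delta", d1), ("delta-of-delta", d2)):
        print(f"{name:15s} {len(zlib.compress(arr.tobytes()))} bytes compressed")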

    The law of diminishing returns applies here, though: however good the delta encoding is, there is still the need to send key frames from time to time. At the very minimum once per scene change; in practice, very much more often. It is quite likely that a scheme substantially better than MPEG is possible, but the scheme claimed here is just too close to the fundamental limits.

    There are two ways to cook a compression demo. The first is to pre-load the cached data; the second is to choose the content to be compressed very carefully. For example, Larry King Live compresses quite well because the video shows only two talking heads from fixed camera angles. Star Trek TNG would be much harder because the camera is often moving.

    Einstein reported that he was often accosted by people who would say something like 'how do we get to the next solar system if we can't go faster than the speed of light?', to which he would reply 'I don't set the laws of physics, I am just telling you what they are'.

    Seems to me that the reason so many people invested so much in Pixelon was that they believed that because they needed the solution so badly, it had to exist, even if Shannon's law dictated otherwise.

    Similar thinking runs rampant in the GOP mania for ABM technology. There has not been a single successful test that has not been cooked; in their last test the target had a radio beacon sending out its GPS-measured position to the interceptor. But because they want to believe in the technology, they will believe their own cooked figures and threaten with jail the MIT professors who try to tell them they are being had [washingtonpost.com].

  • It is real, it was developed in my home town...

    Let's Get Skase, the film he produced based on...

    So they want me to believe that a film producer in a small town woke up one day and developed video over 28.8k when nobody else in the world could do it?
