Television Media

Full-Screen Video Over 28.8k: The Claims Continue

gwernol writes "Over at Screen Daily they are claiming that an Australian company has demonstrated a high quality, full-screen video-on-demand service that is delivered over a 28.8k modem. They claim this will 'eliminate the need for broadband.' If this is true, then they'll change the world. Of course, the basic technology has been around for a while, see this article from 1998 or this one from earlier this year. I remain extremely sceptical. If this is real, why won't they allow proper independent testing? But it is interesting that they're getting funding. Could this be the last great Internet scam?"

Several readers also pointed out this brief report at imdb.com. We've mentioned this before, but the news here is the reportedly successful demo. It would be a lot easier to swallow if he'd let people test it independently, but video-over-28.8 sure is tantalizing.

  • by rjamestaylor ( 117847 ) <rjamestaylor@gmail.com> on Thursday August 30, 2001 @03:49PM (#2236156) Journal
    Remember Pixelon [google.com]?

    The above is all that is necessary to say on this subject, but due to the postercomment compression filter, I have to add this meaningless paragraph.

  • by Tensor ( 102132 ) on Thursday August 30, 2001 @03:53PM (#2236185)
    Yes, but the article is extremely short on technical data.

    What is this FULL-SCREEN video? 320x200? 640x480? True NTSC at 29.97 fps? DVD resolution? HDTV?

    Or is it 1/4 TV resolution zoomed to fit the screen, at half the fps?

    Maybe all they did was improve the zoom, interpolation and anti-aliasing algorithms in the player, so they send a crappy video and it ends up looking OK.

    Anyway, it's all hot air until we get some technical data.

  • by zealot ( 14660 ) <xzealot54x@yahoo ... m minus language> on Thursday August 30, 2001 @03:57PM (#2236200)
    It's certainly possible that they can compress video/audio data this much. There are types of compression available far greater than what are commonly used... the reason being that they demand way too much computing power to encode and decode. For example, neural networks have been used to compress data like pictures to tiny, tiny size. But if you've ever seen neural network algorithms, you know that there's a lot of computation going on.

    That said, even assuming they have the compression, hardly anybody has a CPU fast enough to decode it.
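
    As a rough illustration of the "learned, compute-heavy codec" idea, here is a minimal Python sketch using a PCA basis as a stand-in for a trained network (the random patch data and the 8:1 ratio are made up for the example):

        import numpy as np

        rng = np.random.default_rng(0)
        patches = rng.random((1000, 64))    # stand-in for 8x8 grayscale image patches
        mean = patches.mean(axis=0)

        # "Training": find an 8-dimensional basis for the patches.
        # This is the expensive, offline part, analogous to training a network.
        _, _, vt = np.linalg.svd(patches - mean, full_matrices=False)
        basis = vt[:8]                      # keep the top 8 principal directions

        codes = (patches - mean) @ basis.T  # encode: 64 numbers -> 8 numbers per patch
        recon = codes @ basis + mean        # decoding is cheap once the basis is known
        print("mean abs reconstruction error:", np.abs(recon - patches).mean())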

  • by color of static ( 16129 ) <smasters@iee e . o rg> on Thursday August 30, 2001 @04:16PM (#2236289) Homepage Journal
    Back around 1990, there was a similar thread going around Usenet about a company called Web Technologies. They claimed to have some fantastic compression ratios, and to be able to compress already-compressed data again. They got a lot of press, but on Usenet it was quite obvious that they were full of &%$#.
    In fact, someone came up with a mathematical argument showing that the only way their claims would hold water was if they just gave out 64-bit serial numbers and stored the data somewhere else. Not too different from what we call Freenet now.
    Needless to say, these guys ended up going under after the investors figured out they were not only full of it, but 10 lbs of it in a 5 lb bag.
  • Re:MP3... (Score:1, Interesting)

    by Anonymous Coward on Thursday August 30, 2001 @04:25PM (#2236339)
    Remember, though, that MP3 was only practical thanks to the advent of new sound-card technology and new Pentium instruction sets. MP3s came about because a change in mainstream technology made the compression feasible. There's no mention of a new instruction set or type of hardware relating to this 'new' video compression.
  • by ArcadeNut ( 85398 ) on Thursday August 30, 2001 @05:09PM (#2236610) Homepage
    We were able to get 50:1 compression on scientific image data, with 12-bit dynamic range.

    Ok, what is "Scientific Image Data"? Pictures of planets?

    What is "12-bit dynamic range"?

    At 30 fps, 0.33 MB per frame, that's 10 MB of image data per second. Compressed 1000 to one, you're only talking about 10 kilobytes per second.

    Ok, what is your source resolution and color depth? How did you come to 0.33 MB per frame?

    Even assuming you could get that down to 10 KB, a 28.8k modem moves about 2.8 KB a second. It would take you 3.5 seconds to download those 30 frames, which would bring your frame rate down to about 8.5 fps. This doesn't even include audio.

    If you're willing to suffer with less dynamic range around spike bits of data, it's not unreasonable to think that another factor of four could come out of that...

    So now you are talking about a 4000:1 compression ratio? Sign me up! The highest I've read about is between 10:1 and 20:1 compression for MPEG-4!

    Even if that was a typo and you meant 100:1, another factor of four would still only put the compression ratio at 400:1. That is hardly realistic.
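
    A quick sanity check of that arithmetic, as a Python sketch (assuming roughly 2.8 KB/s of usable throughput on a 28.8k modem, as above):

        frame_mb = 0.33                     # quoted size of one uncompressed frame, in MB
        fps = 30
        raw_kb_s = frame_mb * fps * 1000    # ~10,000 KB of image data per second
        compressed_kb_s = raw_kb_s / 1000   # at 1000:1, ~10 KB per second of video
        modem_kb_s = 2.8                    # ~2.8 KB/s usable on a 28.8k modem

        stretch = compressed_kb_s / modem_kb_s
        print(f"{stretch:.1f} s to deliver 1 s of video")        # ~3.5 s
        print(f"effective frame rate: {fps / stretch:.1f} fps")  # ~8.5 fps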

  • Re:MP3... (Score:3, Interesting)

    by shepd ( 155729 ) <slashdot.org@nOSpAm.gmail.com> on Thursday August 30, 2001 @05:23PM (#2236714) Homepage Journal
    Video already compresses surprisingly better than any audio format I know of.

    For example, take a 10-second clip of 640x480, 24-bit RGB, 29.97 fps video (no audio). The math sez it's:

    640 x 480 x 3 x 29.97 x 10 = 263.41 MB (approx).

    Yet 10 seconds of 10 Mbit/s MPEG-2 video (very high quality) takes up roughly 10 megabytes of space. That's a compression ratio of over 26:1!

    Over a 28.8 kbps modem connection on the Internet we are looking at about 2.6 KB/s of actual data (headers and other overhead removed). This means the above 263 MB of video is supposed to compress down to less than (don't forget about the sound!) 26 KB. That's a compression ratio of 10374:1!

    I can believe a leap of 10x, *maybe* 50x. But a leap of 400x is just something I have to try on my own terms before I believe it.
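
    The required ratio is easy to reproduce; a quick Python sketch (assuming, as above, about 2.6 KB/s of usable payload):

        raw_bytes = 640 * 480 * 3 * 29.97 * 10   # ~263.4 MB of uncompressed 24-bit RGB video
        payload = 2.6 * 1024 * 10                # ~26 KB deliverable over 28.8k in 10 seconds
        print(f"required compression ratio: {raw_bytes / payload:,.0f}:1")   # ~10,374:1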
  • by Liquor ( 189040 ) on Thursday August 30, 2001 @05:39PM (#2236800) Homepage
    I've looked at the articles, and while it seems likely to be a scam (a 5GB player application, for example), one possibility does not seem to have occurred to any of the other posters.

    Just because he's using a modem doesn't mean that he's actually transmitting digital data over the phone line. What sort of video compression can be achieved when you don't need (or get) bit-perfect transmission, but rather encode video properties directly in the analog signal? Errors then show up as slight inconsistencies from the original color or position - but on motion video, this would be irrelevant.

    The compression would still need the common video codec functionality to remove redundancy and send the changed areas more frequently than static areas, but if the modem link mapped QAM data directly to position and color signals, it might just be possible to paint a fairly high-quality picture.

    For that matter, some fractal compression techniques are quite tolerant of minor errors in their probability and/or mapping factors - combine this with sending color information as analog data, and now you might be able to have a link that is unidirectional (the whole audio bandwidth can be dedicated to the video stream without need for a reverse channel) and error tolerant (no re-transmit on error or dropouts due to transient line noise).

    Maybe it isn't a scam.
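
    To make the analog idea concrete, here is a toy Python sketch (purely illustrative, not anything the company has described): pairs of sample values are mapped straight onto the I and Q amplitudes of a symbol, so line noise shows up as a small brightness error instead of a corrupted bitstream.

        import numpy as np

        rng = np.random.default_rng(1)
        pixels = rng.integers(0, 256, size=1000).astype(float)   # fake 8-bit luma samples

        # Map pairs of samples to one complex symbol in [-1, 1] x [-1, 1].
        symbols = (pixels[0::2] - 128) / 128 + 1j * (pixels[1::2] - 128) / 128

        # Simulate a slightly noisy phone line.
        noisy = symbols + 0.02 * (rng.normal(size=symbols.shape)
                                  + 1j * rng.normal(size=symbols.shape))

        # "Decode" the in-phase channel back into sample values.
        recovered = np.clip(np.round(noisy.real * 128 + 128), 0, 255)
        print("mean error per sample:", np.abs(recovered - pixels[0::2]).mean())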
  • Re:Uh, no (Score:2, Interesting)

    by Rothron the Wise ( 171030 ) on Thursday August 30, 2001 @06:22PM (#2236989)
    Maybe _your_ eyes are 10-12fps

    Actually, our eyes don't have a fixed fps, as so many of you nerdlings tend to think. There IS a limit to how rapid a change we are able to see, but it is heavily dependent on brightness. We have trouble seeing dark changes that happen in tenths of a second, but no one will miss a bright flash even if it lasts only 1/200th of a second.

    In normal lighting, 10-12 fps is not even in the same ballpark as our vision; 75 fps is more like it.

  • by trixillion ( 66374 ) on Thursday August 30, 2001 @06:32PM (#2237034)
    I'm not sure your example is relevant then. You see, all the images you were working with were very similar. It is little surprise that you were able to find a wavelet codec that worked very well for those images. However, if you took the same wavelets and applied them to a wide range of image types, would you really expect your compression to work as well?

    This is a common mistake people make: someone designs a compression scheme that works really well for specific cases and assumes it will work in the general case. Hell, I once designed a custom lossless scheme for handling certain classes of bitmaps that beat LZW by a factor of 5:1, but I guarantee you that if you applied it to bitmaps we were not interested in, it would have been very unimpressive. I suspect the same can be said for the wavelets your group was using.
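
    That point is easy to demonstrate with even the crudest specialised codec; here is a minimal Python sketch using plain run-length encoding as a stand-in:

        import os

        def rle(data: bytes) -> bytes:
            """Trivial run-length encoder: (run length, byte value) pairs."""
            out = bytearray()
            i = 0
            while i < len(data):
                run = 1
                while i + run < len(data) and data[i + run] == data[i] and run < 255:
                    run += 1
                out += bytes([run, data[i]])
                i += run
            return bytes(out)

        blank_bitmap = bytes(4096)       # the data it was built for: 4096 -> ~34 bytes
        random_data = os.urandom(4096)   # "general" data: RLE roughly doubles it
        print(len(rle(blank_bitmap)), len(rle(random_data)))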
  • by Anonymous Coward on Friday August 31, 2001 @02:55AM (#2238031)
    Let's see.

    Let's take 640x480x16. That's around 614K. We try a fractal compression approach here, so we divide the picture into 8x8 blocks, each described by x, y, offset, scale and rotation coordinates. We end up with 9600 blocks. An arithmetic coder lets us achieve 415 Kbit = 9600 * (log2(640) + log2(480) + 16 + 7 + 2), or 52 KB per frame. For 320x240x16 this goes down to 6.2 KB (320/8 * 240/8 * (log2(320) + log2(240) + 16 + 7 + 2)).

    This is about twice as big as 2.8K, yes? And we get nasty side effects because of the big blocks (4x4 is far better for small-resolution pictures).

    I've seen (a long time ago) a paper on 3D fractal compression. Let's see. The bit count for this scheme will be

    BC = X*Y*Z/(N^3) * (log2(X) + log2(Y) + log2(Z) + O + S + R),

    where X, Y and Z are the sizes along the X, Y and time axes, O is the bit count for the offset (16), S is the bit count for the scale factor (7), and R is the bit count needed to encode rotations (one bit per axis, to swap or not to swap direction around it, and there are three axes).

    For 320x240x24 (24 frames of 320x240 pictures) and N=4 we get BC = 1.2 Mbit. But for 3D compression a bigger block size N (compared with 2D compression) does not introduce the same nasty-looking artifacts; as far as I can remember, even a 16x16x16 block looks pretty good. So let's choose N=8, and we get (don't hold your breath) 157 Kbit. Still breathing? OK, for 1 second and N=16 we get an estimate of BC = 21 Kbit, well within 28800. For four seconds encoded we get an estimated throughput of 22000 bits per second. For 4 seconds of 640x480 with N=16 we get a throughput of 91 Kbit/s.

    The other side is the memory requirement. We hold RGB (or YCrCb) in three separate bytes, and we have to double the buffer because we keep both the scaled and the original version of the picture. So we need X*Y*Z*3*2 bytes for each compressed block of video, which is 44 MB for 320x240 at 4 seconds.

    Two links:

    http://inls.ucsd.edu/y/Fractals/ - for the uninitiated; includes a reference to the "Three-Dimensional fractal video coding" paper.

    http://www.cse.sc.edu/~culik/ - Karel Culik invented the Weighted Finite Automata transform, which is more efficient than the fractal-based approach but uses a similar idea. This page includes links to several WFA software systems to experiment with.
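
    Those bit-count estimates are easy to re-run; a small Python sketch of the same formula (a transcription of the estimate above, using O=16, S=7, R=3):

        from math import log2

        def bc(x, y, z, n, o=16, s=7, r=3):
            # Bits for the 3D scheme: one (position, offset, scale, rotation)
            # record per n x n x n block.
            return x * y * z / n**3 * (log2(x) + log2(y) + log2(z) + o + s + r)

        print(bc(320, 240, 24, 4) / 1e6)    # ~1.3 Mbit (vs. the ~1.2 Mbit figure above)
        print(bc(320, 240, 24, 8) / 1e3)    # ~169 Kbit (vs. the ~157 Kbit figure above)
        print(bc(320, 240, 24, 16) / 1e3)   # ~21 Kbit for one second of 320x240 at 24 fps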
