Full-Screen Video Over 28.8k: The Claims Continue
gwernol writes "Over at Screen Daily they are claiming that an Australian company has demonstrated a high quality, full-screen video-on-demand service that is delivered over a 28.8k modem. They claim this will 'eliminate the need for broadband.' If this is true, then they'll change the world. Of course, the basic technology has been around for a while, see this article from 1998 or this one from earlier this year. I remain extremely sceptical. If this is real, why won't they allow proper independent testing? But it is interesting that they're getting funding. Could this be the last great Internet scam?"
Several readers also pointed out this brief report at imdb.com as well. We've mentioned this before, but the news here is the reportedly successful demo. It would be a lot easier to swallow if he'd let people test it independently, but video-over-28.8 sure is tantalizing.
If you believe that... (Score:3, Funny)
Yes, and I am able to compress all of Slashdot down to 10 bytes.
Re:If you believe that... (Score:4, Troll)
FIRST POST
0123456789
Well, what do you know, he's right!
Re:If you believe that... (Score:2, Insightful)
Fractal compression (Score:2, Funny)
Re:If you believe that... (Score:2, Funny)
Compression is more efficient when there is lots of predictable duplicate data, right?
Well, just think about how many "Linux is great" posts there are on Slashdot...
Re:If you believe that... (Score:2, Funny)
Re:If you believe that... (Score:2)
Re:If you believe that... (Score:2)
Yes, and I am able to compress all of Slashdot down to 10 bytes.
Actually, if you permit the use of a diff against previous stories and postings, it is possible to compress Slashdot down to an average of 3 bytes:)
News item:
Re:If you believe that... (Score:2, Funny)
However, the decrypt key is 4.5 GB long
Re:I can watch me (Score:2, Funny)
It's all in the buffering (Score:3, Funny)
Re:It's all in the buffering (Score:2, Funny)
Re:It's all in the buffering (Score:3, Funny)
Eliminate Broadband? (Score:5, Insightful)
I get all the great things broadband offers (at the same cost as 28.8 service). Plus, if you can compress a stream enough for 28.8, imagine what you can do with broadband!
This won't eliminate broadband. It'll strengthen it!
Re:Broadband for the same price as Dialup? (Score:2, Informative)
I have 2 x 100Mbit full-duplex switched Ethernet to my apartment (currently only using one).
The area of (about) 100 connected users is connected through a Gigabit link to our local ISP, which has (today) a 96Mbit connection to the internet, a 155Mbit connection to the Swedish University Network and 100-1000Mbit connections to the other networks in our town.
End of Broadband? (Score:4, Insightful)
The real advantage of a broadband connection is that you are always connected; you are accessible to others via mail and messaging at all times (just imagine if you had to explicitly connect your telephone to use it, then disconnect it again afterwards). The speed, while very nice, is actually not as important.
/Janne
Re:End of Broadband? (Score:4, Funny)
Broadband is great b/c I don't have to wait 5 mins while my porn comes up, I don't have to wait 15 hours for my whatever.tar.gz to download, and I certainly don't have to worry about my roommate stealing all the bandwidth downloading MP3s.
Even if 28.8kbps can support full motion video you can't do much else while it is downloading.
Re:End of Broadband? (Score:2, Informative)
Well, some of us live in the poor, deprived, third-world old United Kingdom, where speed cameras and CCTV monitor our every move, and dialup access is (mostly) metered! Broadband only covers about 10% of the population (thankfully including me).
In many cases, broadband is the only unmetered access.
--Russ
Re:End of Broadband? (Score:2)
Re:End of Broadband? (Score:2)
No you dweeb (Score:2)
The rub is if you could only receive incoming calls while the phone is in your hand. Oh, and every time you hang up, your phone number would change.
That's what he's trying to get at. That's also what your typical dialup ISP user has to deal with.
I got a static IP address from Telocity, and I could never go back. The download speed is OK, and the upload bandwidth and latency are horrendous.
Still, the availability makes it all worth it.
I can log into my box from the net at large, with full confidence that it'll be there. (There's no way dialup-on-demand can open up a PPP connection from the outside, that I know of.)
It's Not Only About Speed... (Score:5, Insightful)
People are going to want to send and receive video emails from their handhelds. We need a technology that will be able to strike a balance between energy required to transmit the signal (bandwidth) and the energy required to compress and decompress the signal (signal processing).
easily done (Score:4, Funny)
http://lzip.sourceforge.net/ [sourceforge.net]
I hope this isn't another Pixelon...
Re:easily done (Score:2, Funny)
What's that? (Score:4, Funny)
Yes sir, full screen video over a 28k connection.
So what am I seeing? It looks rather blank.
Well sir, that's a white cow in a snow field. It just scared out some snow hares.
Over 28k you say? Where do I sign?
Smoke and Mirrors (Score:5, Insightful)
I don't even think it would be that hard to fake.
Re:Smoke and Mirrors (Score:4, Funny)
Perhaps a GPS beacon as well.
Re:Smoke and Mirrors (Score:2)
Re:Smoke and Mirrors (Score:2)
Nah, use a PowerBook or one of the newer laptops with 802.11b built in. A PCMCIA card with an antenna hanging out the side would be a dead giveaway.
MP3... (Score:2, Informative)
Re:MP3... (Score:3, Informative)
People shouldn't have been that impressed with MP3. The concept of lossy compression algorithms was already in common use, in the form of JPEG compression of image data. (Now, I recall how impressed we were with JPEG back in the GIF days...) Getting 10:1 compression was pretty much the expected result of applying the same principles to audio data.
Today, we would be just as skeptical of a new audio algorithm advertising 50:1 compression over MP3 -- which is effectively what these people are asking us to believe, since their ratios are versus existing compression schemes, not raw data.
Re:MP3... (Score:3, Interesting)
For example, take a 10 second clip of 640x480 24-bit, RGB, 29.97 fps video (no audio). The math sez its:
640 x 480 x 3 x 29.97 x 10 = 263.41 MB (approx).
Yet 10 seconds of 10 Mbit/s MPEG-2 video (very high quality) takes up only about 12 Megabytes of space. That's a compression ratio of roughly 22:1!
Over a 28.8kbps modem over the internet we are looking at about 2.6 kB/s of data (headers and other overhead removed). This means the above 263 MB video is supposed to compress down to less than (don't forget about the sound!) 26 kB. That's a compression ratio of 10374:1!
I can believe a leap of 10x, *maybe* 50x. But a leap of roughly 500x is just something I have to try on my own terms before I believe it.
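For what it's worth, the same arithmetic in a few lines of Python (the ~2.6 kB/s modem payload figure is taken from the post above; variable names are just illustrative):

```python
# Reproducing the post's arithmetic: raw size of a 10-second 640x480,
# 24-bit, 29.97 fps clip versus a 10 Mbit/s MPEG-2 stream and the
# ~2.6 kB/s of payload a 28.8k modem actually delivers.
MB = 1024 * 1024

raw = 640 * 480 * 3 * 29.97 * 10     # bytes, no audio
mpeg2 = 10e6 / 8 * 10                # bytes at 10 Mbit/s for 10 seconds
modem = 2.6 * 1024 * 10              # bytes deliverable in 10 seconds

print(f"raw    : {raw / MB:7.1f} MB")
print(f"MPEG-2 : {mpeg2 / MB:7.1f} MB  ({raw / mpeg2:5.0f}:1)")
print(f"28.8k  : {modem / MB:7.3f} MB  ({raw / modem:5.0f}:1)")
```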
Re:MP3... (Score:2)
This means the above 263 MB video is supposed to compress down to less than (don't forget about the sound!) 26 kB.
Nah, you can pretty much leave out the audio, get the words by "reading" the lips of the actors, then using text to speech in the already downloaded database of actor voices. Splice in a few "" messages, and you have your special effects.
As for the video, you just put a 3D image of all the actors into your distribution DVD, then you can just send position vectors of their movements. Add in a couple "" messages, and you have a full action-feature. The advantage being that you don't even have to hire the actors any more. You just feed the screenplay into the decompression algorithm, and out pops the movie. As an advantage you can change the actors on the fly, in case you'd rather see certain ones during the nude scenes, for instance.
I'm joking of course, but one question to ponder is whether that would even be enough; you could use information theory to see if it would be possible. Certainly a screenplay could be sent over 28.8kbps, but you'd have to include the information encoding all the decisions made by the actors/directors/DPs/etc. too.
Re:MP3... (Score:2, Funny)
Recall that a "quantum leap" is by definition the smallest possible leap. The next quantum leap in compression will be just compressing one more bit. Why hope for that?
Re:MP3... (Score:2)
One way might be to send executable code: stored procedures that manipulate image regions. Think of it as motion prediction on steroids. Now, I don't have a clue as to how the compressor would figure out these code snippets. Exhaustive search? Mind you, it doesn't necessarily have to be Turing complete; perhaps a very advanced command language.
In fact, VOD compression doesn't have to be 100% automatic. You can very well justify having an operator select regions where the compressor should try harder to optimize, and what kind of optimization is likely to be needed. Say, you mark _this_ as background for the next X frames, _this_ is foreground, _that_ is a repetitive element, so store it in our image cache -- see, _there_ it is again.
This sort of human-driven superoptimisation might be able to achieve a very high level of compression with acceptable results.
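To make the "image cache" idea concrete, here is a deliberately naive sketch in Python (not the poster's scheme, and certainly not the Australian company's): each 8x8 block is sent either as a short reference to an identical block already in a shared cache, or as literal pixels. The names (encode, cost, BLOCK) and the 3-bytes-per-reference cost are invented for illustration.

```python
import numpy as np

BLOCK = 8

def encode(frame, cache):
    """frame: 2-D uint8 array whose sides are multiples of BLOCK."""
    h, w = frame.shape
    tokens = []
    for y in range(0, h, BLOCK):
        for x in range(0, w, BLOCK):
            block = frame[y:y + BLOCK, x:x + BLOCK]
            key = block.tobytes()
            if key in cache:
                tokens.append(("ref", cache[key]))    # costs a small index
            else:
                cache[key] = len(cache)
                tokens.append(("lit", block.copy()))  # costs the raw pixels
    return tokens

def cost(tokens):
    # Rough byte cost: ~3 bytes per cache reference, 64 bytes per literal block.
    return sum(3 if kind == "ref" else 64 for kind, _ in tokens)

rng = np.random.default_rng(0)
cache = {}
flat = np.zeros((240, 320), dtype=np.uint8)               # static scene
noisy = rng.integers(0, 256, (240, 320), dtype=np.uint8)  # pure noise
print("first flat frame :", cost(encode(flat, cache)), "bytes")
print("second flat frame:", cost(encode(flat, cache)), "bytes")  # all cache hits
print("noisy frame      :", cost(encode(noisy, cache)), "bytes")
```

On a static scene almost everything becomes a cache hit; on noise, nothing does, which is roughly why two talking heads are so much easier to squeeze than an action movie.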
Sound good (Score:2, Funny)
Bad news for the MPAA - Here comes DVDster! (Score:3, Insightful)
It's still fairly difficult for users to encode, post and/or download entire DVD movies. Most computer users wouldn't have a clue where to begin.
If this codec does what it proclaims to do, however, can you see this company *not* licensing encoders one way or the other? Real's MPEG-2-based compressor was pretty revolutionary at the time, yet they still offered a 'free' version.
DivX, which is free, but questionable, is even more revolutionary in terms of quality and filesize.
Both these codecs have drawn people into the whole movie/video trading scene.
If this codec *does* allow for compression of videos to make them the same size as the average MP3 (and think about that comparison... for this to work, they'll have to reliably encode video at a lower bitrate than MP3 audio), the movie trading scene will take off in a way that will make Valenti's asshole shrivel up.
Of course, this company can try to keep the codec and/or encryption secret. To that I have this to say... Jon Johansen and DeCSS
Proprietary 'secret' Technologies (Score:3, Insightful)
I can't help but think of 'The Spanish Prisoner.'
The last? It was one of the first and best (Score:4, Interesting)
The above is all that is necessary to say on this subject, but due to the postercomment compression filter, I have to add this meaningless paragraph.
Not by a long shot (Score:2)
Of course not. It's obviously a scam, but it's equally obvious that scams are not going anywhere. Human nature hasn't changed. As long as there are people who are desperate to believe, there will be people willing to tell them what they want to hear. As long as the net is less than people want it to be - which is to say, as long as it exists - there will be snake oil salesmen promising that they can make it into what people want.
Pixelon (Score:2, Informative)
shades of pixelon? (Score:2)
http://slashdot.org/articles/00/06/27/1156210.s
Good thing he doesn't have it patented, tho. As soon as he releases software, the algorithms will be available to everyone.
Neat, but it still doesn't solve The Real Problem. (Score:3, Insightful)
It's like choosing an O(exp) algorithm when you know an O(1) algorithm is available.
See, if I start broadcasting a signal, the more people that tune in, the more I can charge for pay-per-view or advertising. But the neat thing is that my cost is fixed; no matter how many people tune into that signal, it costs me the same amount to spray EM waves all over the place.
But with VoD, every new viewer means new bandwidth. Meaning that my costs go up with each new customer. And since the cost of additional bandwidth is not a linear equation, at some point there's diminishing returns, regardless of how small the stream is. My profit margins wither and die if there's enough demand for my video stream.
The only real solution for this from a business perspective is...get this...distributed file sharing, such as Napster or Gnutella. With tools like these, I'm able to avoid the added demands on my server by making the folks who want the service into servers themselves.
So the real technical problem to solve with VoD is not to make the streams smaller, although that certainly doesn't hurt, but to make money off of folks' file transfers. Obviously a direct tax on each transfer is going to cause problems, but an advertising-based model, where each transferred file has an advertisement attached with it, could work wonderfully.
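As a back-of-the-envelope illustration of that cost argument (all numbers here are invented; origin_bandwidth and the assumption that a P2P origin only seeds about 10 peers are purely hypothetical):

```python
# Toy model: broadcast cost is flat, per-viewer streaming grows linearly,
# and a peer-assisted scheme keeps the origin's share roughly constant
# because each new viewer also becomes a source.
STREAM_KBPS = 28.8  # even a tiny stream...

def origin_bandwidth(viewers, model):
    if model == "broadcast":
        return STREAM_KBPS                      # one transmitter, any audience
    if model == "unicast":
        return STREAM_KBPS * viewers            # ...still scales with demand
    if model == "p2p":
        return STREAM_KBPS * min(viewers, 10)   # origin seeds ~10 peers (assumed)
    raise ValueError(model)

for n in (10, 1_000, 100_000):
    row = {m: origin_bandwidth(n, m) for m in ("broadcast", "unicast", "p2p")}
    print(n, {m: f"{v / 1000:.1f} Mbps" for m, v in row.items()})
```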
Too bad for the RIAA and MPAA that they're too busy suing file-sharing users and pushing unsuccessful VoD goose-chases to figure this out, eh?
This is a cool technology if it's real. I wouldn't be surprised if it is real. But it won't make the internet into the great media-delivery tool the media corporations want it to be.
Re:Neat, but it still doesn't solve The Real Probl (Score:2)
This is why caching proxies need to become more widely used.
Re:Neat, but it still doesn't solve The Real Probl (Score:2)
The only way to do it is to build a system that's distributed, just like the internet is, such as Gnutella or Napster, where each person who downloads a movie then becomes a distributor for the movie.
A great advantage for a napster-like system aside from the distributed bandwidth is for the people like us who actually watch the movies: Once we get a movie, we own it. We don't have to download it a second time. We can watch it as many times as we want, bandwidth-free.
Re:Neat, but it still doesn't solve The Real Probl (Score:2)
Doesn't work for Video-on-Demand.
No, it's not linear. It gets cheaper per unit as you buy more, so there is no point of diminishing returns (as far as delivering the same stream to more people goes; more bandwidth on one physical link, yes, there are diminishing returns).
That's the problem with VoD, because with VoD, each stream connection is a different program, started at a completely different time.
And to think, with that great business model, the RIAA and MPAA still have all the money and your[sic] posting it on slashdot.
Yeah, well, I'm a nice guy that way.
Re:Neat, but it still doesn't solve The Real Probl (Score:2)
Yes, but the beauty of it is different pipes. Once someone has the content, they not only don't need to download it again, but they become a distributor as well! So the second person to download can get it from me, or from the first person. The third person can get it from me, the first person, or the second person, etc.
How do you get revenue? Advertising, embedded within the stream, so that it's not easy to remove. Just like a TV show that's been recorded off of the TV. Porn sites (always ahead of the curve with new media technologies) have been doing this for years with great success.
Re:Neat, but it still doesn't solve The Real Probl (Score:2)
Decoding, not compression (Score:2, Interesting)
That said, assuming they have the compression, probably nobody has a CPU fast enough to decode it.
I'll believe it when I see it (Score:2)
But the outfit's complete unwillingness to do anything but canned demos is what really makes me think the guy in charge is doing more than just feeling like a snake-oil salesman.
If it's for real, they'll file for patent protection and we'll all get to see how it works. And if it's for real, they deserve a nice solid patent or three, but my guess is it's just a scam.
Any compression experts know.......? (Score:2)
Re:Any compression experts know.......? (Score:2)
transform the residual difference into something simpler to code. It's also used in MPEG for key frames, but those are usually inserted only once every half second and are generally coded at reasonably high quality.
During the MPEG-4 competition, people proposed wavelets for the key frames, but in practice it didn't look much better, since most of the compression came from inter-frame motion-compensated prediction; the difference wasn't big enough to justify changing things...
In JPEG 2000 you generally don't have multiple frames to compress, so using wavelets makes a bunch of sense. Wavelets didn't perform very well on coding the residuals, so they aren't used for that... (the residual is mostly noise)
I tried this.... (Score:2, Informative)
I played with some wavelet video compression/decompression cards based on the Analog Devices ADV601 chip (you can Google it). It can achieve high compression ratios on grayscale images working on a frame-by-frame basis (kinda like MJPEG but with wavelets).
After playing with the server a bit (it was a Beowulf cluster
Anyway, the results I was getting (for grayscale) *sound* impressive. 200:1 was possible for most images but only pictures with smooth contrast changes looked any good after decompression. Any sharp edges (e.g. graphical overlays) were completely destroyed at any compression rate over 10:1. Throwing the MPEG interframe stuff into the mix didn't really help much (partly due to the problem outlined above), although I can't say I explored all the possibilities along this route.
After becoming more interested in coding proper parallel apps for Beowulves rather than hacking the MPEG's source I let the project drop. Code available if you'd like a look.
My personal opinion on this full-screen video with CD-quality sound over 28.8 is that it's complete tosh. It's absolutely impossible to compress that much information into such a small pipe. Unless this guy has discovered something that invalidates an awful lot of our current mathematical thinking, this claim is nonsense.
Yah... (Score:2)
For example, if you know the exact parameters of a data set, you can optimize your compressor for just that data set. Like, for example, allocating a lot of bits to pink in a pr0n pic.
You can get insane compression with fractal/wavelet algorithms if you sit down and figure them out by hand or brute force.
And then, there of course is a question of what's on the system running things.
I mean, seriously, you could store four mini-streams and composite them to form the "real" stream. If you think of it that way, Flash already gives you streaming full-screen video over a 28.8 modem.
Oh yeah, and I forgot about doing really high-quality resizing to make fewer pixels look like more.
This could work... (Score:2)
Now, bear in mind that they're not transmitting over the net - so there's no lag, no reassembly - they're just squirting a continuous packet stream.
28800 bps works out to about 26,400 bits per second after overhead, or roughly 0.03 Mbit/s.
So that's a factor of 100 difference. With some clever algorithms (eg. Div-X), making use of the fact that NTSC is generally lossy (and thus letting you throw away a lot more of the signal than a videophile would like), you might get away with it. You could just about squeeze VideoCD quality down that pipe. Not bad.
Simon
Re:This could work... (Score:2, Insightful)
Umm, wait a sec -- in the VideoCD compression phase you've already taken all the advantage you can of the slop in the original NTSC. You're talking about 100X compression of an already tightly-compressed data stream, which is to say that you're going to find sufficient redundancy in the data to remove 99% of it.
Pull the other one.
Doubt it, but... (Score:2)
This all reminds me of a friend who thought he could compress his whole hard drive onto a floppy by just zipping his files up hundreds of times. You know how that goes...
But there's no doubting how cool something like this would be once compression technology advances to this point. Screw MPEG-4 or MP3; if someone could successfully do this, it would change how TV and the Internet are separated (or combined, in some cases) forever.
This would require... (Score:2)
Hell, maybe not. Maybe it is genuine. But I'm with gwernol - let's see some independent testing.
There are ways to do this... (Score:2)
First, you'd need hardware fractal compression. It's the only compression system capable of the sorts of compression ratios required, for the type of information being delivered. However, it's PAINFULLY slow, which is why it's not in general use, and the only companies touching it are using ultra-powerful dedicated hardware.
Second, "full-screen" is a bit of a suspect term, when it comes to video. Television uses interleaved frames. In principle, this means that you only really need to send over half the information, and do simple interpolation for every other scan-line.
Third, that the modem couldn't be checked is itself a bit suspect. It really wouldn't take much to conceal a DSL circuit, especially if it was an internal modem. At which point, your 28.8k suddenly becomes 28.8m. A somewhat more plausable speed.
Lastly, although I doubt it was done this way, if you run -enough- 28.8k modems in parallel (say, a thousand of them) and stripe the data across them, you could easily reach high enough speeds, AND "legitamately" claim that you had video over a low-speed modem.
How it works (pure speculation) (Score:2, Insightful)
Even if you take lossy compression such as DivX and reduce the video size, you're still talking about 100 kbit/s for decent video and 1 Mbit/s for anything close to full-screen quality.
But we're talking data here... what about information? Data is bits. Information is the meaning of the bits, and a lot of information is highly redundant. Take English. I heard once that there are about 1.2 bits of information per character in the English language; that's why text files get such good compression ratios with gzip.
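A rough illustration of that redundancy point in Python (this is an easy case: a short passage repeated, which flatters the ratio; ordinary English prose typically gzips to roughly 2.5-3 bits per character, still well below the raw 8 bits and above Shannon-style ~1 bit/character estimates):

```python
import gzip

text = ("An Australian company has demonstrated a high quality, "
        "full-screen video-on-demand service delivered over a 28.8k "
        "modem, and claims this will eliminate the need for broadband. ") * 20
raw = text.encode("ascii")
packed = gzip.compress(raw, compresslevel=9)
print(f"{len(raw)} raw bytes -> {len(packed)} gzipped bytes, "
      f"{8 * len(packed) / len(raw):.2f} bits per character")
```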
Video is not so highly compressible, mainly because the codec doesn't understand images. Codecs generally just split the image up into smaller and smaller blocks and look for exactly repeating patterns. Lossy compression allows them to look for roughly repeating patterns, and pretend they're exact. Not exactly rocket science.
Take a scene; any one. Like the one from the Matrix. Where Keanu Reeves is in his trench coat, black t-shirt, and black jeans, and an evil computer agent is standing in the background firing at him. You see Keanu bent over at the knees and there's 5 bullets coming at him with a particular trajectory pattern, with cool spiral air deformations coming off the back. Know the one I'm talking about?
Guess what? I just described it in 312 characters. At the ~1.2 bits per character mentioned above, that's about 400 bits. Throw in another 100 to precisely place everything and another 500 to describe background scenery, etc. Sure, it was REALLY lossy compression, but that's an example of the kind of thing you can do if you have an understanding of what's in video. At the very least, you can decide WHAT you can ignore and focus on preserving the really important stuff.
Like, most people won't notice if the sky isn't the exact same shade of blue. Or if the flat blue areas of the sky have a slightly different texture applied to them.
Okay, this is all pure pie-in-the-sky theorizing so far... I just wanted to set all that up to point out that this seems possible. HOW could it be done? Well, this is pure speculation, but...
A few years ago lots of people were looking at using various types of fractals to compress images down. This flourished briefly as the IFS file format (c. 1995), but the patents on the algorithm allowed the author to charge an exorbitant royalty, so it never got off the ground other than for a few high-end video conferencing systems. These systems used (you guessed it!) regular phone lines. Sure, maybe not 28.8 modems and maybe not full screen (though I distinctly remember that the frame rate was between 24 and 30 fps, depending on what kind of processor you used), but from there it's just process improvements.
Plus, I imagine that MP3 has taught us a lot about lossy compression that could be applied to this sort of thing. I don't personally know anything about the details of MP3, but assume that its methods can be applied to fractal compression with approximately the same rate, e.g. at 3x-6x compression at negligible quality loss and 12x at maximal compression... and that would be enough to take this technology to the levels this guy is talking about...
Ok, I'm done dreaming. Anyone have any comments? Does anyone remember this IFS format or have any more info on it than my hazy recollection?
Not a good analogy (Score:2)
Re:Not a good analogy (Score:2)
You're trying to tell me it wasn't?
Re:How it works (pure speculation) (Score:2)
(Alternative objection: You have merely passed me a query to my brain's database which happens to contain a large amount of preprocessed information regarding The Matrix. Had I not seen the movie, I would not be able to decompress your scene.)
Lets do the math... (Score:5, Insightful)
Let's assume a video frame size of 320x240x16-bit. We can scale this up fairly well; however, it's nowhere near TV quality.
Each frame takes 153,600 bytes uncompressed. Now let's say you can get 80% compression on each frame. That would bring us down to 30,720 bytes per frame.
A typical 28.8K modem is going to see 2800 bytes a second (on a good day, more like 2400 bytes in the real world). Note: This is a 28.8K modem and not a 56K modem.
Based on these numbers, it would take about 10.9 seconds per frame (30,720 / 2800 = 10.9).
Obviously there are tricks that one can do such as deltas between frames rather than actual frames, etc...
However, in order to get 24 FPS (3,686,400 bytes) in real time, they would have to get a compression rate of 99.93% (for the 24 frames).
It just doesn't add up. I think they are full of it and this product will never go beyond vaporware.
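The same back-of-the-envelope in Python, using the post's own figures (the 2,400 bytes/s throughput is the poster's "real world" estimate, not a measurement):

```python
frame_bytes = 320 * 240 * 2          # 153,600 bytes per uncompressed frame
fps = 24
modem_bytes_per_sec = 2400           # the post's real-world 28.8k estimate

raw_per_sec = frame_bytes * fps      # 3,686,400 bytes of raw video per second
needed = 1 - modem_bytes_per_sec / raw_per_sec
print(f"raw video           : {raw_per_sec:,} bytes/s")
print(f"required compression: {needed:.2%} "
      f"({raw_per_sec / modem_bytes_per_sec:.0f}:1)")
```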
Re:Lets do the math... (Score:4, Informative)
inefficient. Even without frame-to-frame similarities, wavelet image compression schemes can achieve 50x compression with no visible degradation (I know, I did experiments last year as part of a spacecraft proposal effort). That's a factor of 10 better than your figures -- about 1.1 seconds per frame. Using the similarities between frames, it's not unreasonable to think that another factor of 10 applies (MPEG achieves factor-of-100 compression where JPEG only gets factor-of-10), bringing the frame rate up to roughly 10 per second.
Re: (Score:2)
Re:Lets do the math... (Score:2, Informative)
Suppose, for example that the camera is slowly panning across a static image. MPEG would see that as the *whole frame* differing from its predecessor, where a location-independent approach like fractal compression would still be able to take advantage of the redundancy.
No, MPEG would not. Do you think it was designed by a group of monkeys? MPEG would see this as a simple translation and code the appropriate motion vectors into the P- and B-frames of the stream. There is more to MPEG than simple DCT blocks. You're talking about MJPEG.
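A minimal sketch of motion-compensated prediction (not MPEG's actual bitstream, just the idea), under the toy assumption of a pure 3-pixel horizontal pan and edge wrap-around:

```python
import numpy as np

def shift(frame, dy, dx):
    """Shift a 2-D frame by (dy, dx) pixels, wrapping at the edges."""
    return np.roll(np.roll(frame, dy, axis=0), dx, axis=1)

rng = np.random.default_rng(1)
prev = rng.integers(0, 256, (240, 320)).astype(np.int16)
curr = shift(prev, 0, 3)                 # camera panned 3 pixels right

plain_residual = curr - prev             # naive frame differencing
mc_residual = curr - shift(prev, 0, 3)   # predict using the motion vector

print("nonzero residual, no motion comp :", np.count_nonzero(plain_residual))
print("nonzero residual, with (0, 3) MV :", np.count_nonzero(mc_residual))
```

With motion compensation the residual is essentially empty; without it, nearly every pixel appears to have changed.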
Re: (Score:2)
Sounds like WEB technologies (Score:3, Interesting)
In fact, someone came up with a mathematical statement that said the only way their claims would hold water was if they just gave out 64-bit serial numbers and stored the data somewhere else. Not too different from what we call Freenet now.
Needless to say, these guys ended up going under after the investors figured out they were not only full of it, but 10 lbs of it in a 5 lb bag.
it says nothing (Score:2)
Show me the money baby!!!
Re:it says nothing (Score:2)
University backstep (Score:5, Informative)
No doubt they use a lossy compression scheme (Score:2, Redundant)
http://lzip.sourceforge.net/
:-)
Re:No doubt they use a lossy compression scheme (Score:2)
Compression Tech Link (Score:2, Informative)
Claim is not unreasonable... (Score:2, Informative)
using wavelet transforms. We were able to get 50:1 compression on scientific image data, with 12-bit dynamic range. That compression ratio was without any use of interframe similarities -- a movie compression algorithm could probably get another 20:1 compression without much trouble.
At 30 fps, 0.33 MB per frame, that's 10 MB of image data per second. Compressed 1000 to one, you're only talking about 10 kilobytes per second. If you're willing to suffer with less dynamic range around spike bits of data, it's not unreasonable to think that another factor of four could come out of that, giving 2.5 kB/sec or 20 kbps -- leaving 8 kbps for the sound.
Re:Claim is not unreasonable... (Score:2, Interesting)
Ok, what is "Scientfic Image Data"? Pictures of planets?
What is "12-bit dynamic range"?
At 30 fps, 0.33 MB per frame, that's 10 MB of image data per second. Compressed 1000 to one, you're only talking about 10 kilobytes per second.
Ok, what is your source resolution and color depth? How did you come to that figure?
Even assuming you could get that down to 10K, a 28.8K modem runs at about 2.8K a second. It would take you 3.5 seconds to download those 30 frames. That would bring your frame rate down to 8.5FPS. This doesn't even include Audio.
If you're willing to suffer with less dynamic range around spike bits of data, it's not unreasonable to think that another factor of four could come out of that...
So now you are talking about a 4000:1 compression ratio? Sign me up! The highest I've read about is between 10:1 and 20:1 compression for MPEG-4!
Even if you had a typo and meant 100:1, then another factor of four would put the compression ratio at 400:1. That is hardly realistic.
Re:Claim is not unreasonable... (Score:2)
We were looking at images of the solar corona. It's a distributed object with faint gradations in intensity. The biggest problem we had in general with compression was that cosmic ray spikes and stars in the field of view tended to cause "ringing" with JPEG and similar Fourier-type compression schemes.
I figured 0.33 MB per frame because 640x480 is about a half-megapixel, and you'd probably be happy binning it down to 320x240 (more typical of VHS video), yielding an eighth of a megapixel.
Putting in three color planes takes you back up to something like a third of a megapixel. Eight bits per color plane gives you a third of a megabyte. (Note that that's not really a good way to think about it -- usually there's a LOT more information in the luminance signal [the RGB common mode] than in the hue and saturation signals -- so you might need fewer initial bits...)
Our 50:1 figure came from a single, noisy image plane with the criterion that 99% of the pixels had to be within 1 DN (12 bit DN) of the original value, after compression and restoral. The test image on which we applied 50:1 compression was from the TRACE [lmsal.com] satellite -- click the link for some sample images.
Re:Claim is not unreasonable... (Score:2, Interesting)
This is a common mistake that people make. Someone designs a compression scheme that works really well for specific cases and thinks that it will work in the general case. Hell, I once designed a custom lossless scheme for handling certain classes of bitmaps that beat LZW by a factor of 5:1, but I guarantee you that if you applied it to bitmaps we were not interested in, it would have been very unimpressive. I suspect the same can be said for the wavelets your group was using.
Eliminate need for broadband? (Score:2)
Even assuming that they can produce great full screen video with a 28.8 connection, there is no evidence that broadband will no longer be needed. They seem to AssUMe that the only thing broadband is used for is streaming video.
How will this miracle technology help me download the latest Linux kernel in a few minutes over 28.8? It will not. Speed up my binary newsgroup downloads so I can get gigs of possibly copyright-infringing binaries every day? No. Will it even speed up my web browsing so that I don't have to wait 30 to 60 seconds for CNN.com to show up? No, not that either.
Broadband is safe whether or not their claims are real.
The missing question (Score:2)
What's the frame rate? Sure, I could do HDTV over 28.8 -- if I had 1 frame per minute.
This is pretty absurd. Let's say 10 frames/second, which I think is probably the minimum for a decent experience. 28.8k = 3600 bytes/second (yes, it's 8 bits, not 10 bits). That's only 360 bytes per frame! Full screen? 320x240x24-bit = 230 KB uncompressed. That's 640:1 compression -- without sound. With sound??
Re:The missing question (Score:2)
No, I believe it's a hold-over from serial ports. The normal serial port is configured with 1 start bit and 1 stop bit, so for serial ports you do normally divide by 10 to convert to bytes. For modems, most everyone assumes that modems are just dumbly transmitting every bit you send to them (and that might have been the case back in the old days). Modern modems, on the other hand, are much more sophisticated in their encodings. As one modem engineer put it to me, "do you really think we're going to waste 20% of the bandwidth for stop and start bits?"
There is some overhead for CRC checks, but it's not nearly that large. I don't know what the packet size is, but it's at most 4 bytes for every 1024.
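Putting rough numbers on that (the 4-bytes-per-1024 check overhead is the parent's guess, not a spec):

```python
line_rate = 28800                          # bits per second on the wire
async_bytes = line_rate / 10               # 8N1: start + 8 data + stop = 10 bits/byte
sync_bytes = line_rate / 8 * 1024 / 1028   # synchronous framing, ~4 check bytes/1024
print(f"8N1 async framing: {async_bytes:.0f} payload bytes/s")
print(f"synchronous link : {sync_bytes:.0f} payload bytes/s")
```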
Priority! (Score:2)
Surely not the last...
Re:Priority! (Score:2)
Thought experiment. (Score:2)
The sender raises his flag, and both sides start pouring water into their bowls at the same rate. When the sending side's bowl is filled high enough, he stops pouring and his flag man raises the flag again to signal the other side to stop pouring as well.
So what was the point of this? Well, both sides of the valley now have the same number of notches filled in their bowls. Each notch, of course, was a particular battle plan that was to be carried out; but for our purposes, it could be an ASCII byte of information.
This kind of "compression" is essentially one with an infinite compression ratio, i.e. any amount of data can be "sent" using only two signals (the start and the stop). The only real problem with using this kind of system is one of time: clocks are just not accurate enough to make this kind of system any faster than just sending the data in the normal way.
Anyway, I'll leave it up to the rest of you to figure out a way to make this into the "next big thing", but I just wanted to note that, while 99.99999% of these claims are fraudulent, there is a basis for such a scheme to exist.
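A minimal sketch of the same idea in Python, assuming a 1 ms "tick" (send and receive are made-up names; on a real machine, scheduler jitter is exactly the timing problem described above):

```python
import time

def send(value, tick=0.001):
    """Encode a value purely as the gap between a start and a stop event."""
    start = time.monotonic()
    time.sleep(value * tick)          # "pour water" for `value` ticks
    return time.monotonic() - start   # what the receiver would measure

def receive(elapsed, tick=0.001):
    return round(elapsed / tick)

payload = 42
print(receive(send(payload)))   # hopefully 42, jitter permitting
```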
Re:Thought expierement. (Score:2)
Indeed, bandwidth limitations are usually down to clocking accuracy, which is limited by intersymbol interference and clock skew.
Re:Thought expierement. (Score:2)
This is, of course, just a single really big number, if read as such. Which our alien takes the reciprocal of, giving him a number between 0 and 1. He then marks this point on a rod of some type, where the fraction along the rod from one end that he makes the mark is the same as this reciprocal number he's got.
Voila! He has encoded our entire bank of knowledge as a single mark on a rod. And he could easily put other marks on the same rod as well, indicating other civilisations' banks of knowledge. All he has to do is work out how far along the rod the mark is as a fraction of the length of the rod, take its reciprocal, and he's got it all back.
(Now for a bonus point: why won't it work?)
And they can do this by using... (Score:2)
-Karl
My guess: Foveated Imaging... (Score:3, Insightful)
My first guess is that these Aussies have impressed clueless execs with ordinary tech.
My second guess is that maybe someone finally got around to applying foveation [nyu.edu] in a way that works really well.
Perhaps these Aussies are hooking up test audiences to eye-tracking devices and recording their average gaze during a film, so that they can get even higher compression [stanford.edu] by throwing out what's outside most people's field of view?
*shrug*
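For what it's worth, a minimal sketch of the variable-resolution idea in Python, assuming a single known gaze point (the foveate function, its radii and its subsampling factors are all invented for illustration):

```python
import gzip
import numpy as np

def foveate(img, gaze, steps=((30, 2), (60, 4), (120, 8))):
    """Copy of a 2-D image whose resolution drops with distance from the
    gaze point: beyond each radius, pixels come from a grid subsampled
    by the paired factor. Radii and factors are arbitrary choices."""
    h, w = img.shape
    out = img.copy()
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.hypot(yy - gaze[0], xx - gaze[1])
    for radius, factor in steps:
        coarse = img[::factor, ::factor].repeat(factor, 0).repeat(factor, 1)[:h, :w]
        out[dist > radius] = coarse[dist > radius]
    return out

rng = np.random.default_rng(2)
frame = rng.integers(0, 256, (240, 320), dtype=np.uint8)   # worst case: noise
fov = foveate(frame, gaze=(120, 160))
print("gzip, full detail:", len(gzip.compress(frame.tobytes())), "bytes")
print("gzip, foveated   :", len(gzip.compress(fov.tobytes())), "bytes")
```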
I think it's a scam for sure. (Score:2)
Did the auditors get to pick a movie of their own choice?
Did the auditors supply the test HW, to ensure no tricks could be done?
If their compression is as efficient as they claim, they could patent it and submit it to the MPEG group. If it blows the competing codecs out of the water, they'll make a bundle on licensing. Instead they are staging suspect demos, hoping to lure investors. The same kind of investors who will buy stuff from ads with the "seen on TV" logo.
Is it actually DIGITAL compression? (Score:2, Interesting)
Just because he's using a modem doesn't mean that he's actually transmitting digital data over the phone line. What sort of video compression can be achieved when you don't need (or get) bit-perfect transmission, but rather encode video properties directly in the analog signal? Errors then show up as slight inconsistencies from the original color or position - but on motion video, this would be irrelevant.
The compression would still need the common video codec functionality to remove redundancy, and to send the changed areas more frequently than static images, but if the modem link mapped QAM data directly to position and color signals, it might just be possible to paint a fairly high quality picture.
For that matter, some fractal compression techniques are quite tolerant of minor errors in their probability and/or mapping factors - combine this with sending color information as analog data, and now you might be able to have a link that is unidirectional (the whole audio bandwidth can be dedicated to the video stream without need for a reverse channel) and error tolerant (no re-transmit on error or dropouts due to transient line noise).
Maybe it isn't a scam.
Old school compression for Video over Modem. (tm) (Score:2, Insightful)
Back in the good ole C64/Apple days we wanted to stream gfx over a modem, using ASCII and reprogramming the characters into 8x8x2 bitmaps. Using character mappings you could make little guys run, little cars drive, etc.
Then someone came up with Megabignum (no joke), which used A-Z, a-z, 0-9, !@#., etc. to get a large set of characters to work with.
Then there was RLE-type gfx, which was black and white bitmaps. (I think 4 bits actually.)
You map a 320x200 RLE image into 40x25 ASCII-type characters. So 1000 characters per frame, or let's round up to 1K per frame. I don't think anyone did anything this big, maybe in some demos.
Using this character set mapping conversion was a simple trick, but it worked.
I don't see why you couldn't take this character set idea and expand it with compression and do larger 640x480 b/w 30fps images over a 56K modem.
Maybe someone smart could come up with a way to add color.
Movie 'Hackers' predicted it (Score:2)
(cool graphics coming from another machine over modem are on the screen; yes, this modem is definitely broadband, otherwise it would be impossible to show such neat graphics)
Hacker 2: It's a 28k8!!!
Hacker 1: Amazing, marvellous, etc. etc.
(forgive me for not remembering the names, the movie wasn't that good)
900 bits per frame. (Score:2)
Persistence of vision becomes really flaky at under 25 frames per second. With the overhead of stop bits, start bits, the PPP protocol, etc., 28.8 kbits/sec is actually more like 22,000 bits/sec. That means there are fewer than 900 bits to encode the delta between one frame and the next.
There might be something to be had out of using second-order derivatives, a delta encoding of the delta encodings. There might be something to be had out of more powerful delta encoding techniques, more complex transformations from one piece of screen to the next.
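As a toy 1-D illustration of the "delta of deltas" idea (nothing video-specific; it just shows that for a smooth signal the higher-order differences are far more compressible than the raw samples):

```python
import gzip
import math

samples = bytes(int(127 + 100 * math.sin(i / 40)) for i in range(4096))
d1 = bytes((samples[i] - samples[i - 1]) % 256 for i in range(1, len(samples)))
d2 = bytes((d1[i] - d1[i - 1]) % 256 for i in range(1, len(d1)))

for name, data in (("raw", samples), ("delta", d1), ("delta of delta", d2)):
    print(f"{name:15s}: {len(gzip.compress(data))} bytes compressed")
```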
However the law of diminishing returns applies here and however good the delta encoding is, there is still the need to send key frames from time to time. At the very minimum once per scene change. In practice very much more often. It is quite likely that a scheme substantially better than MPEG is possible, but the scheme claimed is just too close to the fundamental limits.
There are two ways to cook a compression demo. The first is to pre-load the cached data, the second is to chose the content to be compressed very carefully. For example Larry King Live compresses quite well because the video shows only two talking heads from fixed camera angles. Star Trek TNG would be much harder because the camera is often moving.
Einstein reported that he was often accosted by people who would say something like 'how do we get to the next solar system if we can't go faster than the speed of light?', to which he would reply 'I don't set the laws of physics, I am just telling you what they are'.
Seems to me that the reason so many people invested so much in Pixelon was that they believed that because they needed the solution so badly, it had to exist, even if Shannon's law dictated otherwise.
Similar thinking runs rampant in the GOP mania for ABM technology. There has not been a single successful test that has not been cooked; in their last test the target had a radio beacon sending out its GPS-measured position to the interceptor. But because they want to believe in the technology, they will believe their own cooked figures and threaten with jail [washingtonpost.com] MIT professors who try to tell them they are being had.
Film producer? (Score:2)
Let's Get Skase, the film he produced based on...
So they want me to believe that a film producer in a small town woke up one day and developed video over 28.8k when nobody else in the world could do it?
Re:Fullscreen, but... (Score:2)
Re:Well if its full screen over 28.8 (Score:2, Interesting)
What is this FULL-SCREEN video?? 320x200? 640x480? True NTSC @ 29.97 FPS? DVD resolution? HDTV?
Or is it 1/4 TV resolution zoomed to fit the screen? With 1/2 the fps?
Maybe all they did was improve the zoom, interpolation and anti-aliasing algorithms in the player, so they send a crappy video and it ends up looking OK.
Anyway, it's all hot air until we get some technical data.
Re:Broadband and video... (Score:2)
I can get 30 fps video no problem on my megabit DSL...
'course, I'm using a proprietary transmission protocol... [apple.com]
Re:Broadband and video... (Score:2)
Yeah - but it's not full-screen.
Re:Uh, no (Score:2, Interesting)
Actually, our eyes don't have a fixed fps, as so many of you nerdlings tend to think. There IS a limit to how rapid a change we are able to see, but it is very dependent on brightness. We have problems seeing dark changes that happen in tenths of a second, but no one will miss a bright flash even if it lasts 1/200th of a second.
In normal lighting, 10-12 fps is not even in the same ballpark as our vision. 75 fps is more like it.