Mega Bandwidth Achieved

PDG writes "The German engineering firm Siemens has achieved a rate of 1.2 Tbit/s (YES, terabits per second) over a SINGLE strand of fiber, thus proving the limitless power of fiber." One step closer to the ultimate goal for humanity: infinite bandwidth. Or maybe that's just my ultimate goal. Never mind.
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward
    Correct! I did some research on these erbium amplifiers during my undergrad work, and here's the skinny: it turns out that the most efficient wavelength in fiber optics can be reproduced by a certain energy-level delta in erbium (since its higher atomic number provides many deltas to work with). What they do is excite the erbium atoms with a laser. The fiber is looped along the doped section of fiber--bent fiber = loss of signal. As the signal is lost, it collides, in phase, with all the excited erbium atoms, which all release this efficient wavelength. This method provides amplification on the order of 40 dB, which is immense. It also vastly cleans up any bandwidth spreading that would be present.
    Interestingly enough, it sounds like this multiplexing method, while drastically increasing bandwidth, may make this elegant method unusable. Different wavelengths would need different energy levels, all of which probably could not be produced by erbium. I would be interested in seeing how they get around this, if they can. If they actually have to go to an electric amp, they will probably lose most of the bandwidth they are trying to create! I suppose they could dope the fiber with multiple atom types to simultaneously amplify all bandwidths. However, the entire fiber would need to be amplified at a rate consistent with the most inefficient wavelength in the signal, making the entire process more expensive.
  • Of course, you'll need something a bit faster than PCI to keep up with that.. :)
  • they were using it to try and saturate IEG's SGI server cluster and thereby set a world record for downloading p0rn!

    Ahem... That's "pr0n" to you.

  • by jandrese ( 485 )
    They probably aren't using a drive at all. They've probably just designed some program that sends out a specific stream of bits (maybe incrementing numbers) and another end that reads them and then forgets them. A HD would just slow down the test; besides, you'd need a solid-state disk with Fibre Channel just to keep up :)
  • I don't know about 100BaseT, but I KNOW your garden-variety Pentium II can't fill a Gigabit Ethernet. Heck, an UltraSPARC or a MIPS R10k can only fill a Gigabit link to about 50% before topping out. In fact, on the large machines where you see Gigabit installed, you generally find a processor devoted to handling the gigabit.
  • Posted by tdibble:

    Not quite. This works for a while, but you will always reach a limitation in the underlying media. Copper's limit was very low by today's standards (although I can tell you that 28.8 was pretty damned pie-in-the-sky astonishing when most of us had 300, 1200, or 2400 baud modems, and we went to, I believe, double the previously accepted theoretical upper limit anyway--but still there was a limit). Optical's limit will be a result of imperfections in the fiber causing signal degradation at ultra-fine tunings. Boy, doesn't that sound exactly like the problem with copper? Well, it should.

    It's absolute basic signal transmission theory here. As someone else said, when you get right down to it, eventually you always come back to the basics.

    Fiber is limitless bandwidth only so far as, to paraphrase badly, it is sufficiently advanced to look like magic. Once it becomes commonplace, we'll hit the limit. Call that the first law of bandwidth and write a book about it :->

  • Posted by tdibble:

    A few off the top of my head:

    • Internet backbone (duh!)

    • Cable/interactive TV (uncompressed HDTV at 1920x1024, 24-bit color, 30fps, would take up 1.4 Gbit/channel; think 847 channels of beautifully rendered pure drivel! Add compression and a resolution more in tune with today's machines, and think more along the lines of tens of thousands of channels, 90% of which could be video-on-demand, etc., type of channels)

    • Massively parallel systems

    Aw, heck, why am I even trying?!? Today's systems are so incredibly bandwidth-constrained it would be a pleasant breath of fresh air to have to worry about something besides how many bits can physically be fit into the pipe for once!
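    The channel arithmetic in that comment checks out; here is a quick back-of-the-envelope sketch (Python, purely illustrative; the 1.2 Tbit/s figure is from the story):

```python
# Bandwidth of one uncompressed HDTV channel, per the figures above.
width, height = 1920, 1024   # resolution
bits_per_pixel = 24          # 24-bit color
fps = 30                     # frames per second

bits_per_second = width * height * bits_per_pixel * fps
link = 1.2e12  # the 1.2 Tbit/s fiber from the story

print(bits_per_second / 1e9)          # ~1.4 Gbit/s per channel
print(int(link // bits_per_second))   # 847 uncompressed channels
```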

  • Posted by tdibble:

    Now, if we could create an Alcubierre warp drive, or even an ordinary wormhole, run this fiber through *that* ... faster than light transmission ... hmmmmm ... physics be damned! :->
  • Posted by Lord Kano-The Gangster Of Love:

    Single-fiber bandwidth may not be limitless, but the fact that the data may be sent along MULTIPLE optical fibers makes the potential bandwidth limitless.

    1.2 Tbit/s on a single strand. How about 100 strands? How about 1,000 strands, for 1.2 Pbit/s (petabits per second)?

    Many of us will probably live to see exabit rates, either in network or CPU speeds.

    Through a conduit the size of a Pepsi can you could have more bandwidth than all of the networks currently in the world.

  • I haven't read the article itself yet, but I'm almost positive it uses some form of WDM (wavelength division multiplexing). So it's not a single 1.2 Tbit/s channel, it's many slower channels, all on one fiber; the total is 1.2 Tbit/s. I know Lucent has what is essentially an optical router on a chip that is used for splitting something like 20 channels from a single fiber.
  • kilo 10^3
    mega 10^6
    giga 10^9
    tera 10^12
    peta 10^15
    exa 10^18
    zetta 10^21
    yotta 10^24

  • by sjames ( 1099 )

    Unfortunately, that is true. Still, it's much cheaper than burying new fibre.

    Somewhat off topic, I wonder how much bandwidth is being wasted now either in the voice networks or in data networks that could stand a hardware upgrade?

  • It's easy to acheive mega bandwidth. Achieving it is a bit more difficult.
  • Of course, the bottleneck would then be loading the files off of the server's hard disk, so unless you've got a really fast hard disk, it still wouldn't be faster.
  • So now, from what I understand, they split the fiber virtually into 60 channels... as the tech gets better they'll be able to split into more and more channels, and the receivers will be able to correctly differentiate between them... Similar to how modern modems operate (i.e. the better you can synthesize and detect different phases and amplitudes in a medium, the denser your constellation patterns become and therefore the more bits per second you can shove through it).

    Interesting how everything always comes back down to the basics... :-)
  • I'm pretty sure this isn't a world first. I'm pretty sure I read about people getting over 1 Tbit/s more than a year ago...

    Whatever. Most people won't see this stuff for real for ages. Though I happen to know that SGI are investigating using fibre instead of PCB traces for ludicrous-speed main memory connections--for high-end SMP machines, of course. Apparently they have stuff running in the lab at several hundred GBytes/sec. Most of today's PCs have 0.8 GBytes/sec of main memory bandwidth.

  • Fiber does have a specific bandwidth limit--if we assume that blue (4000 Angstrom) light is the highest frequency the fiber can carry with acceptable loss, the total bandwidth of the fiber works out to about 7.5e14 Hz. In practice, this will limit the data rate to somewhere on the order of 2e14 bits per second, which, while a lot, is only about a hundred times the rate posted in the story.

    Further advances in optical fiber technology may push the data rate up to 1e15 bits per second, and on-the-fly compression will help get even more out of them.
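    The carrier-frequency figure above follows directly from the wavelength; a sketch (Python, using the parent's assumption that 4000 Angstrom blue light is the shortest usable wavelength):

```python
c = 3.0e8              # speed of light, m/s
wavelength = 400e-9    # 4000 Angstrom blue light, the assumed cutoff

f_carrier = c / wavelength
print(f_carrier)       # 7.5e14 Hz, the raw carrier bandwidth of the fiber
```

    The ~2e14 bits per second practical figure is the parent's own estimate of how much of that carrier bandwidth survives real-world losses.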

  • In any case, given the resource hungry nature of today's applications, I'm sure we'll smack up against the bandwidth limit of fiber RSN.

    The future of Microsoft Office... One copy located on a microsoft server being shared out to anyone on the internet with a license.
  • Unfortunately, the 1.2 tb/s connection filled solid within minutes of being hooked up. Further research found that all the engineers at Siemens were trying to use it to access Slashdot.

  • According to the article:
    The 60 separate channels were created using Siemens' Electronic Time Division Multiplexing process...

    Well, let's see, multiplexing is time division, and we can assume digital vs. analog...

    In other words, it's doing pretty much what you'd expect it to?

  • What can push that kind of bandwidth?

    Nothing we have just now.

    However it is "just" a bunch of 20Gbit/sec links we need to fill, so "all" we need is something to make use of 20Gb/sec, and to buy a whole buttload of them.

    Let's see, I think Juniper's current product (see 0-brochure.htm) is capable of 2*8*2.5 Gb/sec, or 20 Gb/sec as a theoretical max. So in theory you could use a few racks of the highest-capacity, highest-density routers to drive one of these monsters. In practice I expect it would take at least another spin of Juniper's hardware to do it, but in reality they have time for another spin or three before this stuff is likely to be for sale anyway.

    I guess we have just solved the "what can we build the backbone out of to support upgrading all the current modem connections to DSL" question...
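    The arithmetic behind "a few racks," assuming 20 Gb/s per router as estimated above (a sketch, not vendor specs):

```python
fiber = 1.2e12      # the 1.2 Tbit/s link from the story
per_router = 20e9   # assumed 20 Gb/s per router, as estimated above

print(fiber / per_router)   # 60.0 routers, one per 20 Gbit channel
```

    Conveniently, that matches the 60 channels the article says the fiber is split into.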

  • It isn't 2:30 yet, even in the eastern time zone. Where are you posting from?
  • How about using it for a true merging of voice, data, and video? I'd love to fire up my computer and have it deliver me whatever movie I want at the moment I want it and feed it off to my TV.

    ** Martin
  • Unless, of course, you replace the fiber with a tube made of MIT's new "perfect reflector" material. Which is, unfortunately, even more involved.

  • It doesn't work this way. Fibers have different transmission characteristics for different frequencies, and increasing modulation rates (barring clever encoding schemes) increases the maximum frequencies of the sidebands. Eventually, the frequencies become high enough that they begin to be attenuated, and data is lost. For fiber, though, the limit is "pretty damn high" (as someone else said here). I've heard estimates of the theoretical maximums on typical fiber ranging from 25 Tbps on up. MIT recently developed a "perfectly reflecting mirror" that could be spun into tubes, i.e. a replacement for fiber that wouldn't need repeaters, etc. I'd like to know what the bandwidth would be on such systems.

  • by Kythe ( 4779 )
    This is really cool. Now if we can just improve memory access times, life will be wonderful.

  • I'd love to have a terabit pipe running into my home... now I just need a terabit interface for my brain. Yay!

  • Yeah, but /. isn't out there, it's in here.
  • More bandwidth wouldn't make a world-wide cluster possible - the killer with a big cluster is the latency, which increasing the bandwidth doesn't help much with.

    Check out the Globus project, which is actually trying to build something like this.

  • Careful with that word, "limitless." Even fiber has a finite bandwidth, even if it is very large. I don't know for sure what it is, but since we can propagate femtosecond optical pulses in fiber, I would guess on the order of tens of THz.

    A week or two ago I posted an estimate on this based on signal processing; the ultimate limit for all techniques using visible light is on the order of 1.0e15-1.0e17 bps, which leaves plenty of breathing room.

    In practice, the optical properties of the fiber will impose a more strict limit. Another person has posted an estimate in this thread (2.0e14 bps and up, IIRC).

    For a more detailed description of where my estimate comes from, read through the posts on the "chaotic laser" thread (or select "User Info" above).

  • Ok, seeing as there are now several posts giving different estimates, I'll explain where my estimate comes from, and what some of the limits to bandwidth for fiber and for optical data transmission come from.

    The theoretical upper limit to data transmission using visible light can be estimated by considering the properties of the visible light beam that is carrying the signal. Treat the beam as a stream of photons. We'll call the "amplitude" of the modulated signal in any time slice the number of photons that arrive in that time slice. Due to the nature of light, the shortest timeslice that it is meaningful to define is the time required for the light to propagate one wavelength. Picking 600 nm for simplicity of calculation, that gives 5.0e14 time slices (and hence samples) per second.

    Now, we have to figure out how many amplitude levels are available to us in each sample. The short answer is that we can stuff in as many as we want, but at an ever-increasing power cost. The measurement of the number of photons in a given sample isn't perfect. Even under the best conditions possible, the error will be roughly on the order of the square root of the number of photons transmitted. So, in order to get n data levels, we'll need about n squared photons per sample. The energy of each photon is equal to Planck's constant times the frequency of the photon, or about 3.3e-19 J. As we need 2^n levels to transmit n bits of information, the energy required per sample is (in the worst case) 3.3e-19 * 2^(2n).

    Let's say that we want no more than about 10 watts of power dissipated in the worst case. This gives us 2.0e-14 J/sample, which means that 2^(2n) must be equal to about 60600. For simplicity, we'll bump the power up slightly and call this 65536 (2^16). This gives n=8. So, at something like 11 watts, the maximum data rate that can be achieved using a visible light carrier is somewhere in the realm of 4.0e15 bps.

    You can get higher bandwidth by increasing the power, but this gets very ugly very quickly. Therefore claims of anything greater than this over a single fiber or single laser beam should be taken with a very large grain of salt.

    In practice, this is not what limits the maximum data rate over fiber. As you modulate a carrier, you spread out its frequency spectrum. This means that your 600 nm laser beam, after being chopped up into sample elements and modulated, winds up not being purely 600 nm any more. For relatively low data rates, this isn't much of a problem. However, when the frequency of modulation approaches that of the frequency of the carrier itself, it starts becoming significant. An optical fiber, like any other optical medium, transmits different wavelengths at different speeds. This causes signals that are time-domain modulated to smear out, limiting the data rate that can be used. Similarly, a fiber's transmitting properties only apply over a certain frequency range. No matter what the modulation method used, the optical properties of the fiber will place limits on what can be reliably transmitted. Further, signal boosting for transmission over long distances is performed by feeding the signal into an erbium-doped fiber configured to act as a laser. This will have an even narrower range of operating frequencies than the fiber itself has.

    I am not an expert on the optical properties of fibers or on erbium-doped fiber lasers. People with more knowledge re. this than myself have posted on slashdot already, and have given estimates in the range of 1.0e11 and up. However, the fundamental limits to optical data transmission remain very high, as illustrated above.
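    The derivation above reduces to a few lines of arithmetic; here it is as a sketch (Python, with constants rounded the same way as in the post):

```python
import math

h = 6.63e-34         # Planck's constant, J*s
c = 3.0e8            # speed of light, m/s
wavelength = 600e-9  # carrier wavelength picked for simplicity, as above

f = c / wavelength               # samples per second: 5.0e14
photon_energy = h * f            # ~3.3e-19 J per photon
power_budget = 10.0              # watts, worst case
energy_per_sample = power_budget / f                 # 2.0e-14 J
levels_squared = energy_per_sample / photon_energy   # this is 2^(2n)
n_bits = round(math.log2(levels_squared) / 2)        # bits per sample

print(levels_squared)   # ~60,000 (the post rounds up to 65536 = 2^16)
print(n_bits)           # 8
print(f * n_bits)       # ~4.0e15 bps, the quoted ceiling
```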

  • You've got a cable capable of sending a whole spectrum of light, so why wouldn't you divide it up into the different colors? The fact that dividing it up into different areas of the spectrum is a new idea and hasn't been utilized for years now almost disgusts me.

    This is called "frequency domain multiplexing" and has been used for years with analog transmission. Some schemes of fiber transmission use it too.

    However, it doesn't matter whether you transmit at a low data rate on several frequencies or at a high data rate on one frequency, because the physical effect is the same. When you modulate data on to a carrier, you blur out the carrier's frequency spectrum. The amount by which the carrier spreads out is directly related to the bandwidth/sampling rate of the data being modulated on to it. If you have a beam of light at, say, 600 nm (frequency 5.0e14 Hz), and modulate data on to it at 100 THz (1.0e14 Hz), your resulting beam will actually have its spectrum spread from (roughly) 4.0e14 Hz to (roughly) 6.0e14 Hz (about 500 nm to 750 nm).

    So, in summary, you _do_ use a range of frequencies even when you are doing time-domain multiplexing on a single-frequency carrier.
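    The spreading numbers in that post can be verified directly (a sketch; the 600 nm carrier and 100 THz modulation rate are the post's example values):

```python
c = 3.0e8                # speed of light, m/s
carrier = c / 600e-9     # 600 nm carrier: 5.0e14 Hz
modulation = 1.0e14      # 100 THz modulation rate

low, high = carrier - modulation, carrier + modulation
print(low, high)                       # 4.0e14 Hz to 6.0e14 Hz
print(c / high * 1e9, c / low * 1e9)   # ~500 nm to ~750 nm
```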

  • No. Any wavelength of light is possible. Energy is quantized, not wavelength. Therefore, if I have a photon of 632 nm light, its energy is given by E = hv, where h is Planck's constant, and v is its frequency (c/632nm). I can still have a photon with wavelength 632.0000001 nm.

    How are you going to produce your arbitrary photons, though? Photons emitted by electron transitions will only come out at fixed wavelengths, determined by the electron energy levels. And my understanding of lasers is that all the light from one is produced by the same compound, and by the same electron transition in that compound. So you're not going to be able to pick any arbitrary wavelength, because you probably won't be able to find just the right compound and be able to excite it perfectly to produce your desired wavelength.

    Of course, I grew to hate physics in college, so go ahead, everyone, and point out just how I'm completely wrong and stupid on all this. I won't mind a bit. :-)

  • Quite frankly, I'm extremely disappointed to see the plain lack of ingenuity in the entire computer industry.

    You've got a cable capable of sending a whole spectrum of light, so why wouldn't you divide it up into the different colors? The fact that dividing it up into different areas of the spectrum is a new idea and hasn't been utilized for years now almost disgusts me. It seems almost common logic for this to be the next logical step. I honestly expected technology to utilize potentials like this. Do modems do the same, utilizing only on/off pulses, or do they take advantage of the possibility of changing amplitude and frequency to allow for a potential increase?

  • It's Siemens, not Siemans.

  • Tera not Mega! What is this, the 80s? C'mon!
  • by jwriney ( 16598 )
    My only question is, what in the heck were they feeding this connection with? That's a bunch of data to either read or generate in one second. Their test system must have a stupidly fast hard drive. :)

    John Riney III
  • Siemens is now Oce, but they are still a cool company.

    They make great giant laser printers. I worked on a couple in my Computer Operator days. We had a couple of old IBMs we got rid of in favor of another Siemens and had very few problems. The IBMs were much higher maintenance (chuckle chuckle).
  • So this means fiber lines that already exist can be terabitized just by switching the routers on both ends (once they exist).

    Cool :)
  • You wouldn't use this thing for one machine. You would have a dedicated box at each end of the line to divide it into many, many DSL-speed channels, or a very good number of gigabit connections. Even still, 1,000 gigabit channels would be great to have. A normal machine these days can handle gigabit ethernet, provided you have a gigabit ethernet card. Luckily, all even slightly newish or even a bit old Macs have built-in gigabit ethernet. My 180 MHz 604e PPC Mac has gigabit ethernet built in, and it's more than a year old!
  • NTT had about the same demonstration in 1995 (search for 'soliton'): 1 Tbit/s with 10 Gbit sources x 10 TDM x 10 WDM, but they also claim to have these lines in the market this year (see c't-Magazin 1/98, p. 64). And yes, together with optical routers, switches, and the other stuff.

    To have something working in the labs does not necessarily mean you can deliver a fair implementation that is useful to your customers.

    But I will not believe it is really true before I see it on their list of offers. Even NTT is better at announcing than implementing.

    And by the way: a terabit per second exceeds even memory bandwidth (RAM) by at least a few orders of magnitude!
  • and I don't mean the 80k reach...
    What's the most home users get right now? Cable, ADSL, etc. They are cheap, but they are also shared bandwidth (on the local loop or at the central office). There are some other access methods, like ATM, but they are not widely used (I think in New Brunswick, Canada, eh!).
    For guaranteed bandwidth, businesses pay big bucks: $800/mo for a T1, $2k/mo for a T3 (~45M); I don't know for a SONET OC-3 (155M) or OC-12 (622M), but only the very big companies can afford/justify leasing that type of bandwidth...
    I'll be happy when I can hook up a T3/DS3 to my basement gateway/firewall (or when I can afford one).
  • I'd love to have that kind of speed, but what would I do with it? What kind of ridiculous storage system would be needed to store, say, the data collected if this link ran at full speed for 24 hours?

    Maybe there will be uses for long-distance links with some multiplexing, but otherwise I cannot think of what it could be used for in the private/small-business sector.
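    For scale, that storage question has a concrete answer (a sketch; assumes the link runs flat-out at the story's 1.2 Tbit/s):

```python
link = 1.2e12             # bits per second
seconds_per_day = 24 * 3600

bits_per_day = link * seconds_per_day
petabytes = bits_per_day / 8 / 1e15

print(petabytes)   # ~13 PB of storage needed per day at full speed
```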
  • Check out their site: they have built an external optical modulator to increase the bandwidth. Then, by multiplexing OC-48s together, they have demonstrated 200 Gb/s over 200 miles and boast figures in the 10 Tb/s range.
  • Just to let you know, 2.5 Gbps is easily achievable currently, 10 Gbps is a little more difficult but do-able. By the time you get to 20 or 40 Gbps dispersion is pretty bad and you run into other attenuation problems, that's why it's best to break the stream up into multiple wavelengths.

    Also, current commercially available capacity on a fibre is about 320 Gbps, 32 x 10 Gbps.

  • I'm in the east, why? Is 2:30 too early for stories?

    PDG--"I don't like the Prozac, the Prozac likes me"
  • Imagine the routers!

God made machine language; all the rest is the work of man.