News

Future I/O Standards 87

hardave writes "Here's an interesting article from Performance Computing about future I/O protocols and standards." This piece talks about the most recent gathering of the minds about I/O. In the end, it means what we've expected all along: faster throughput and the benefit of creating open standards.
  • Jeez... soylent and then the alpha/omega bit... put the #@(*&#@ing Xenogears away already. =p
  • by Money__ ( 87045 ) on Sunday January 02, 2000 @06:04AM (#1417875)
    Some of the results of switching to a serial I/O architecture will include:

    the implementation of external, non-shared, non-blocking switched connections

    lower latency communications between multiple channel entities, particularly between systems in a cluster

    dynamic system configuration and hot swapping

    virtual controllers implemented in software, eliminating many host adapters

    smaller system dimensions due to the elimination of host adapters and the reduction in power and cooling requirements

    new memory and memory controllers for connecting to the serial channel

    an increase in the number of host-based storage and data-management applications

    the blurring of the distinction between I/O and networking

    _________________________

  • by tzanger ( 1575 ) on Sunday January 02, 2000 @06:07AM (#1417876) Homepage
    Serial is taking over. Practically anybody could have predicted that. Firewire, USB, etc...

    Two very interesting points, however, are a) they're considering fibre in a consumer application and b) they're very seriously considering security of the link.

    I haven't worked terribly much with fibre but just how sturdy is it? They're claiming up to 10kft which is a long long way... people are gonna run this under carpet, trip over it, the cat's gonna chew it... I thought that fibre was a pretty resilient technology from an EMI point of view, but what about the Home Factor? Copper wires are usually pretty good about being tripped over and ripped out of sockets. What of fibre? If you kink a fibre cable, what happens to it?

    The other point was security. Basically they're arguing over two methods. One is "closed-source" and switched, while the other is "open-source".
    They go as far as to say that a closed implementation is a big flashy waving sign for hackers, as it's an irresistible challenge. They're bang-on there. I mean, a fast standard that doesn't need 200+ connections? I'd be all over that in a heartbeat! It's refreshing to see a gathering of industry leaders actually see that aspect.

    ... I think it's neat... the philosophy for years has been more parallel connections. Transfer more per clock and you up your bandwidth and therefore your throughput. What's next? Serial processors? A couple megs of cache on the chip, maybe a serial bus to system memory, another to system I/O and one to the video subsystem? I mean, they're talking throughput greater than PCI 2.1 here... Why NOT reduce the CPU to a dozen pins?
  • by Rupert ( 28001 )
    Gigabit ethernet from system memory to the disk? Or to the NIC? Sounds like a bunch of highly specialized NCs in a single box.

    Now if only they'd concentrate on this rather than the pointless MHz race we could actually see some real improvement in performance.
  • I noticed people are mentioning FireWire, USB, etc. I think the article is centering on the memory bus in the computer, and not so much an external bus. Of course, if they are considering moving the memory bus to a serial fashion, then it would be possible to simply place your digital camera straight on the memory bus, ignoring your USB connector. My guess is, however, that by moving to a serial memory bus it becomes easier to do multiprocessing, since you don't have to connect almost a hundred wires to another processor to share memory, timing signals, etc.

    ---
  • There are many times when quoting something in a post at slashdot can be very informative. Particularly when the discussion has some mistaken ideas in it. I wonder why it's necessary to cut and paste parts of the article in the 5th and 7th posts with nothing to say of your own. Slashdot linked the article. They could have easily included the text.
  • In ten years I wouldn't doubt seeing a CPU with just a handful of pins (more than a dozen, but not much more). We've just about pushed parallel buses to the limit of DC signalling. Hell, my motherboards now have a spread spectrum signalling option to reduce RFI. We can maybe double the clock speed, but that would require a good bit of work.

    With serial we can actually use RF. Lay out the motherboard in stripline to filter the signal between components. The IC runs parallel to an interface section, which is a high speed shift register. If you wanted parallel busses you could then add a pin or two, modulate the signal up to another frequency, and/or spread it out with a Direct Sequence spreader.

    Then when we max out the viable speed of the serial bus we will aggregate them together, having learned as much about serial bus implementation as we have about parallel. In the end a happy mix between the two will be found.
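
    Just to make the shift-register picture concrete, a toy C sketch (my own illustration, not anything from the article): a 16-bit parallel word is shifted out over a single "wire" one bit per clock and reassembled at the far end. Real SerDes hardware adds encoding, framing and clock recovery; the width and names here are made up.

```c
#include <stdio.h>
#include <stdint.h>

/* Toy model of a parallel-to-serial interface section: the "IC" hands a
 * 16-bit parallel word to a shift register, which wiggles one wire 16
 * times; the receiver shifts the bits back into a parallel register. */

#define WIDTH 16

/* Transmit side: emit bits MSB-first onto the single "wire" (an array here). */
static void serialize(uint16_t word, int wire[WIDTH]) {
    for (int clk = 0; clk < WIDTH; clk++)
        wire[clk] = (word >> (WIDTH - 1 - clk)) & 1;
}

/* Receive side: shift the sampled bits back into a parallel word. */
static uint16_t deserialize(const int wire[WIDTH]) {
    uint16_t word = 0;
    for (int clk = 0; clk < WIDTH; clk++)
        word = (uint16_t)((word << 1) | (wire[clk] & 1));
    return word;
}

int main(void) {
    int wire[WIDTH];
    uint16_t out = 0xBEEF;
    serialize(out, wire);
    uint16_t in = deserialize(wire);
    printf("sent 0x%04X, received 0x%04X\n", out, in);
    return 0;
}
```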
  • just a handful of pins (more than a dozen, but not much more)
    How much do you pay for gloves?

  • Even the most die-hard S-bus or PCI fan will have to admit that the plug-and-play functionality provided by USB and IEEE1394 is pretty neat, and that high-end alternatives like fibre channel offer some pretty cool stuff as well. It's only logical that a next-generation I/O bus will combine the best of both worlds, and although using fibre-optic cables might sound excessive, prices will soon be down to about the same level as copper, so again: why not use it?

    One not-so-nice thing about all this high speed local connectivity is that it worries the Copyright Mafia to no end. The MPAA and others already see people copying entire DVDs in the privacy of their own homes, and are proposing draconian control schemes (like 5C does for IEEE 1394 -- see http://www.dtcp.com/ [dtcp.com] -- in short: how would you like your TV to send a message to your cloned DVD player in order to disable it remotely??).

    But fortunately, the same technology can also be used by sane people to implement flexible certificate-based link-level security. Using IPv6, for example, would automagically enable IP-sec, and there should be enough address space left there (~85%) to give manufacturers a way to do autoconfiguration...

  • Serial protocols are useful for anything long-range, but when you need to deliver data between a few devices located in the same box, and have to do it fast, one wire instead of 64 means theoretically 64 times less bandwidth, and in reality at least a factor of two less, no matter what. Only when the length of the line is enough to cause distortion/desynchronization of the signals does a serial protocol become superior to a parallel one, and even that isn't true in all cases.

  • Your Russian sucks so much, I have made a CRT using it.
  • Two months ago I attended the Linux University Road Tour, sponsored by Silicon Graphics. One of the presentations was made by Intel, who emphasized their support for Linux and presented a road map for the next few years. The Intel presenter briefly spoke about SIO (System I/O), showing a slide that illustrated how SIO was at least partly inspired by the IBM System/390 I/O architecture.

    As we all know, processor speeds have been going through the roof year after year. I/O performance, on the other hand, has improved at a slower pace. Perhaps now we can look forward to an increased rate of I/O performance improvement.

    By the way, Intel said that 2003 would be the earliest that a "cheap" version of the Itanium would be available, cheap enough for desktop or home use. Deerfield is the name of this home version of Itanium. 2003 is a long way off; perhaps that will give Compaq enough time to produce an equivalent Alpha.

  • Less than ten dollars is cheap for a card. I wonder if USB isn't cheaper still to add to peripherals than adding a 10/100BaseT interface. Think about keyboards and mice. The interface for those needs to be pretty cheap. Also what kind of power consumption does a PCI lan card use? I doubt power consumption was a very critical requirement on the PCI bus. This isn't to say it's a bad idea, because I think most of these issues could be worked out to create a peripheral standard for a lan type connection.
  • They were discussing this new bus architecture as an alternative to PCI, Sbus, MCA, etc. I get the impression that they were discussing something -different- from firewire/usb or the memory bus.

    The architecture they described for an SMP system looked something like this:

    (See Figure 3 [performancecomputing.com] from the article. I tried to do it as ASCII art, but preview says "that doesn't work")

    The difference between this and the current layout for a PCI system is that the memory/channel controller is replaced by the PCI controller, and the switch is replaced by the PCI bus.

    Personally, I see USB as a controller hanging off the switch, converting between the (high speed) I/O bus serial protocol and the (lower speed) USB protocol. The same would be true with most existing protocols: IDE, SCSI, Firewire, etc, if for no other reason than to take advantage of existing storage media.

  • Your Russian sucks so much...

    It's not Russian. Does "Mehanicheskij Apel'sin" mean anything to you?

    Kaa
  • Vendors whose products were wrongfully disabled en masse will catch HELL from consumers. For this reason, "features" like this will never make it to the market. THANKS TO CRACKERS!

    Oh, some people will be infuriated by this, but you can count on this becoming mandatory for most devices anyway (hey, ever read the Digital Millennium Copyright Act?). Most consumers simply could care less, even if you managed to explain the issues to them. And while I agree that crackers will find a workaround to this right away, the control issue is interesting even for non-insane applications.

    It basically comes down to: who do you trust to have any kind of authority on your serial bus? Your hardware manufacturer? (5C shows this might not be a good idea...) Do you purchase your own $125 VeriSign certificate for I/O purposes? Questions, questions, questions...

  • just a handful of pins (more than a dozen, but not much more)

    How much do you pay for gloves?


    Twelve pins, not 12 hands... <whack> :-)
  • I think ethernet would have been a better peripheral bus choice. It's already a platform independent standard. Ethernet controllers have already shrunk to single chip solutions (10/100Base-T PCI cards for less than $10! So don't say it's expensive.)

    The Java crowd would love it. Finally they could do systems programming without having to grok pointers .. :-)

  • Re: your comments: Serial protocols are useful for anything long-range, but when you need to deliver data between a few devices located in the same box, and have to do it fast, one wire instead of 64 means theoretically 64 times less bandwidth, and in reality at least a factor of two less, no matter what. Only when the length of the line is enough to cause distortion/desynchronization of the signals does a serial protocol become superior to a parallel one, and even that isn't true in all cases.

    I would agree with your assessment of the limitations listed above, but I would point out that what's changing is the definition of 'the box'. Lines are increasingly being blurred between where the box ends and the network begins. High-speed external I/O is proving to be a necessity in this networked world of ours.

    There was a time when a user was happy with just an isolated box. Then LAN functionality became increasingly needed (got boxen without a NIC? no?).

    Today, without massive connectivity, the 'puter will quickly become a doorstop. That is why these new standards make logical sense for today, and into the near future.
    _________________________

  • by tzanger ( 1575 ) on Sunday January 02, 2000 @07:00AM (#1417898) Homepage
    Then when we max out the viable speed of the serial bus we will aggregate them together, having learned as much about serial bus implementation as we have about parallel. In the end a happy mix between the two will be found.

    Exactly! Massively parallel serial busses. It almost sounds oxymoronic but wow... You could even distribute data across the various parallel serial channels in order to help bottleneck issues. Each channel could have its own throttling / prioritizing management. Redundancy is kind of a lame concept here but the other aspects are sweet.

    We're headed into ATM-style busses for intra-system connections!
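
    A minimal sketch of what that could look like, assuming fixed-size cells dealt round-robin across a few independent serial channels (ATM-style); the channel count, cell size and names are invented purely for illustration, and a real design would add per-channel queueing and prioritization.

```c
#include <stdio.h>
#include <string.h>

/* Toy round-robin striping of fixed-size "cells" across N serial channels,
 * loosely in the spirit of ATM-style cell switching. */

#define NUM_CHANNELS 4
#define CELL_SIZE    8      /* bytes per cell (ATM payloads are 48) */
#define MAX_CELLS    16

struct channel {
    char cells[MAX_CELLS][CELL_SIZE];
    int  count;
};

static void stripe(const char *buf, size_t len, struct channel ch[NUM_CHANNELS]) {
    size_t ncells = (len + CELL_SIZE - 1) / CELL_SIZE;
    for (size_t i = 0; i < ncells; i++) {
        struct channel *c = &ch[i % NUM_CHANNELS];   /* round-robin lane pick */
        size_t off = i * CELL_SIZE;
        size_t n = (len - off < CELL_SIZE) ? len - off : CELL_SIZE;
        memset(c->cells[c->count], 0, CELL_SIZE);
        memcpy(c->cells[c->count], buf + off, n);
        c->count++;
    }
}

int main(void) {
    struct channel ch[NUM_CHANNELS];
    memset(ch, 0, sizeof ch);
    const char msg[] = "data distributed across parallel serial channels";
    stripe(msg, sizeof msg, ch);
    for (int i = 0; i < NUM_CHANNELS; i++)
        printf("channel %d carries %d cell(s)\n", i, ch[i].count);
    return 0;
}
```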
  • I am not an Electrical Engineer, but I have a couple of friends who are. The problem with parallel buses, as stated in the article, is that signal degradation occurs when the signal paths get too long. The problem is that at bandwidths that will be needed in the future, the bus must be either 128-512 bit parallel, or must run at extremely fast speeds. The problem with being massively parallel is that the bus is now physically very wide, it is difficult to build, and it is difficult for the 1st bit and the 512th bit to be set at the same time. Running at higher speeds means shorter paths before degradation occurs. The PCI spec is right around a 15cm bus length before repeaters now; increasing the speed significantly will lower this to the order of centimeters, not long enough for a peripheral or I/O bus, but fine for a memory or CPU bus, which is what the article said.

    A wealthy eccentric who marches to the beat of a different drum. But you may call me "Noodle Noggin."
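
    A rough back-of-the-envelope of the length/skew tradeoff described above; the propagation delay and skew budget below are assumed ballpark figures, not values from any spec.

```c
#include <stdio.h>

/* Back-of-the-envelope illustration of why wide parallel buses are hard to
 * speed up: if the longest and shortest traces differ in length, the bits
 * arrive skewed, and the clock period must stay comfortably larger than
 * that skew.  All numbers below are rough assumptions, not spec values. */

int main(void) {
    double prop_delay_ns_per_cm = 0.07;            /* ~7 ps/mm on FR-4, assumed */
    double mismatch_cm[] = {0.5, 2.0, 5.0, 15.0};  /* trace length mismatch */
    double skew_budget = 0.25;                     /* let skew eat 25% of a cycle */

    for (int i = 0; i < 4; i++) {
        double skew_ns = mismatch_cm[i] * prop_delay_ns_per_cm;
        double min_period_ns = skew_ns / skew_budget;
        printf("%5.1f cm mismatch -> %5.2f ns skew -> max clock ~%7.1f MHz\n",
               mismatch_cm[i], skew_ns, 1000.0 / min_period_ns);
    }
    return 0;
}
```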
  • It seems intuitively obvious that parallel communications should be faster than serial, but actual real-world implementations are showing that the opposite is the case.

    This is undoubtedly a gross oversimplification, but parallel communication systems must ensure that the signals from the many channels arrive synchronized, whereas serial systems get this for free. Maybe some EEs can provide a more rigorous explanation of this, or at least some good links.

    Single mode fibre optics, where there is one and only one path along the fibre, can provide throughput which is not physically achievable by other means. S Novym Godom (Happy New Year).

  • by Effugas ( 2378 ) on Sunday January 02, 2000 @07:25AM (#1417903) Homepage
    OK, something is seriously, hardcore, balls-out to-the-mat bugging me about this article. It's as if two people wrote it--one with a clue (and an impressive amount of such at that--lots of very fascinating stuff embedded within this article!), and then the one who went without.

    I'm not kidding--I've actually never read an article that on certain levels provided a fascinating glimpse at things to come, but on others rang so wrong that I was left in shock.

    Bottom line: Somebody's agenda is leaking. Let's look at the Parallel v. Serial chart:


    Parallel I/O Bus vs. Serial I/O Channel
    Max Physical Bus Length: 1 meter vs. 10,000 meters
    Conductors/Pins: 90+ vs. 4 to 8


    Granted.

    Conductor Materials: Copper vs. Copper, fiber optic

    What? You can't deploy a fiber solution with multiple cables? None exist?

    Given the range on fiber cabling, a rather intriguing method of avoiding data interception is rotating your bits through the available transmission lines, then routing each line through a different path. Now, you could always have the same bit travel over the same cable, or you could use a pseudorandom algorithm with a shared secret seed (see spread spectrum), but you'd most assuredly have a parallel architecture that was fiber optically based. (A toy sketch of this lane-rotation idea appears at the end of this post.)

    Slots/Fanout: 3 to 16 slots for adaptors vs. Hundreds of channel addresses

    Uhm, really? Serial doesn't necessarily possess hundreds of channel addresses any more than parallel must necessarily not be implemented over fiber lines. RS-232, HSSI, pretty much any serial standard outside of USB/Firewire/That funky serial PCI replacement that was hangin' around the last Linuxworld is strictly point to point.

    The fact that Serial is much, much less tricky to physically handshake is the reason we've seen so many R&D development dollars poured into it. Make no mistake--Serial may be awesome, but this is a new thing. The general attempt has been to spooge parallel design style into a serial interface. The sheer fact that you have more channels to deal with generally means that it's far, far simpler to design for (how many of these serial systems just have a "magic chip" that expands the incoming serial stream into the parallel bus everybody knows and loves?). But there's no conspiracy going on here; the advantages one gets from ridiculous quantities of theoretical bandwidth and easier hardware development are rather offset by the advantages of flexible cabling, smaller devices (ever seen those minimodems that aren't even the full size of the slot?), and a blurring between internal and external interfaces. Let's not forget the ability to Kill The Beige Box ;-)

    Power Supplied: Yes vs. No

    Gee, small problem, you have twenty cards in your machine, now you have twenty more wires...anyway, this is ridiculous. They're pitching a specific implementation and calling it the architecture as a whole. You can power hard drives off of Firewire, which last I checked wasn't 90 pins in a fanned slot formation.

    Addressing Scheme: Physical address bus vs. Network addressing

    There's a mantra embedded in this that screwed USB rather royally for all sorts of reasons. Turned out USB provided no way to verify which instantiation of a device is which--in other words, if I plug two Super Nintendo controllers into a Super Nintendo, the console knows that the controller plugged into the "Player 1" slot is the 1st controller, and the controller plugged into the "Player 2" slot is the second controller.

    You can't do that with USB--every time you boot up, the order randomly shifts. They were so keen on network-centric addressing, and so loath to demand addressing be physically built onto every single device, that they completely broke multiplayer gaming on the same system.

    Again, a flaw with the implementation, not the overall architecture.

    Total Bandwidth: Single session, unidirectional vs. Multiple sessions, bi-directional

    Oh my. Is that so. I would have thought it was easier with those aforementioned 90 pins of parallel joy to have quite a few streams of data traveling over physically independent traces, as opposed to a multiplexed, time lagged, two wire system, which incidentally has no requirement to be bidirectional at all thank you very much.

    I'm not one to go ballistic--check my posts, this is rather out of character. But reading something like this pretty much just forces me to go a bit out of character and post the following, care of Richard Heritage, circa 1995:


    God is this [stupid]. I mean, this is rock-hard stupid. Dehydrated-rock-hard stupid. Stupid so stupid that it goes way beyond the stupid we know into a whole different dimension of stupid. It is trans-stupid stupid. Meta-stupid. It is stupid collapsed on itself so far that even the neutrons have collapsed. Stupid gotten so dense that no intellect can escape. Singularity stupid. It is a blazing mid-day sun on Mercury stupid. It emits more stupid in one second than our entire galaxy emits in a year. Quasar stupid. This has to be a troll. Nothing in our universe can really be this stupid. Unless this is some primordial fragment from the original big bang of stupid. Some pure essence of a stupid so uncontaminated by anything else as to be beyond the laws of physics that we know. I'm sorry. I can't go on.


    That being said, let's take a look at the rest of the article, which appears to be quite good:

    the blurring of the distinction between I/O and networking

    This is significant. There's an artificial distinction between networking and system I/O, propagated by the belief that all the essential components that a system requires should be held as physically close and as quickly accessible as possible. As individual device speeds fail to scale in comparison with available bandwidth (how many megs a sec are we pulling off of hard drives nowadays... now how fast can UDMA66 go? How fast can PCI 2.1 go?), aggregation of large numbers of individual devices becomes the primary design goal. The difference between multiprocessor boxes and Beowulf style clusters will blur, as systems literally become able to blob together--individual cache space for local processing, but it will end up no slower accessing the hard drive of a neighbor than accessing your own.

    (Incidentally--I did some experiments a while back with two computers having their external SCSI adapters connected, thus appearing to make a single CDROM show up on both machines. Fascinating stuff, but it's not usable--one computer would freeze as the other initiated SCSI connectivity to the CD drive. Of course, this was on a friend's pair of Windows machines...)

    Without adapters full of hardware providing a barrier to access for incompetent or wayward coders, device-level hackers will have unprecedented access to system internals. Obviously, this is a technology direction that needs to take security very seriously.

    Somebody's trying to sell hardware that provides a barrier to access against incompetent or wayward coders. What, are they saying that device driver writers right now can't embed trojans in a mouse driver that send data from sensitive blocks of the hard drive to a drop point on a remote network? Give me a break--device drivers have low level system access. There are schemes to address limiting a given driver to a given range, but the entire concept of a driver(the segment in kernelspace that directly interfaces with some hardware) bristles pretty harshly at the reality of being unable to issue calls to given hardware addresses.

    Actually, a general design where a driver must declare what bus addresses it plans to use--and is then held to that by the operating system--is a pretty good way to prevent faulty drivers from taking down excessive amounts of hardware.

    No, the real thing to worry about isn't so much untrustable drivers as untrustable hardware. What happens when your network bus is your keyboard bus is your hard drive bus is your memory bus? Answer: You've suddenly got lots and lots of meaningless, inconsequential hardware on the same bus as mission critical, highly secured equipment. Imagine a rootmouse that, upon being plugged in, was able to query the harddrive for the contents of /etc/shadow, completely independent of the directives from the underlying operating system. This must remain a top priority of I/O designers, and actually stands as a reason for separating heavily trafficked interfaces from less traveled, more justifiably locked-off ports.

    It'll be interesting to see what comes out of the whole SIO gambit. As long as it isn't utterly bungled by Firewire style licensing, it should be interesting.

    Yours Truly,

    Dan Kaminsky
    DoxPara Research
    http://www.doxpara.com
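
    To make the lane-rotation idea above concrete: a toy C sketch that stripes a byte's bits across eight lanes in an order derived from a shared secret seed. This is purely illustrative -- rand() stands in for a real keyed generator, all names are hypothetical, and none of it comes from the article.

```c
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

/* Toy illustration of striping a byte's bits across 8 physical lanes in a
 * pseudorandomly permuted order derived from a shared secret seed, so a tap
 * on any one lane sees only a scrambled bit position.  NOT a real cipher. */

#define LANES 8

/* Derive a bit-to-lane permutation from the shared seed (Fisher-Yates). */
static void make_permutation(unsigned seed, int perm[LANES]) {
    srand(seed);
    for (int i = 0; i < LANES; i++) perm[i] = i;
    for (int i = LANES - 1; i > 0; i--) {
        int j = rand() % (i + 1);
        int t = perm[i]; perm[i] = perm[j]; perm[j] = t;
    }
}

int main(void) {
    unsigned shared_secret = 0xC0FFEE;   /* both ends agree on this */
    int perm[LANES];
    make_permutation(shared_secret, perm);

    uint8_t byte = 0xA5, lanes[LANES];
    for (int bit = 0; bit < LANES; bit++)          /* transmitter */
        lanes[perm[bit]] = (byte >> bit) & 1;

    uint8_t rx = 0;
    for (int bit = 0; bit < LANES; bit++)          /* receiver, same seed */
        rx |= (uint8_t)(lanes[perm[bit]] << bit);

    printf("sent 0x%02X, reassembled 0x%02X\n", byte, rx);
    return 0;
}
```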
    • What does the cable cost?
    • how long can it be?
    • is it flexible enough to get around the room?
    • can it move data at (or close to) DRAM transfer speeds?
    • can you hot-plug things into it?
    • how many things can you plug into it?
    • how is it clocked? - probably the most important - clock skew between multiple wires on a bus severely limits max clock speed over long distances - while a single self-clocked data stream can go on almost forever [or until the bits smear together]
  • Rambus is strictly limited in the distance it can cover (the article is talking about 10,000 ft buses; Rambus is really limited to a few inches). For this reason, while it may make a great memory bus (esp. for big memory systems with lots of banks) it's not going to make a good I/O bus.

    It also doesn't support multiple masters, so the host would have to poll I/O devices to get data, or have other external pins to control data flow.

  • I was recently reading about Bluetooth. A vast number of vendors are supporting its development. It has got a good speed also (can't remember it right now). Bluetooth is a very viable technology for home users and the speed is sufficient for printers, scanners, keyboards, mice, and other such devices.

    I am not sure if firewire/usb can be useful for home users or for anyone else for that matter.

    What do you fellas think?

    CP
  • by taniwha ( 70410 ) on Sunday January 02, 2000 @08:03AM (#1417907) Homepage Journal
    Maybe it's time we get a bit proactive about demanding better thought out I/O solutions for our systems.

    In particular I want to make sure that future I/O controllers handle scatter-loaded pages well - this means some sort of MMU/TLB type structure in the I/O interface (either a fully fledged page table walker or a unibus-adaptor style software managed mapper) - these always seem to get added on after the fact (for example AGP's GART that doesn't handle cache coherency well). The problem is that such an object isn't part of a bus interface protocol - it's part of an interface chip and it's going to be a different, complex register interface for every manufacturer - the manufacturers are going to provide drivers for WinXX - Linux (and other OSs) are going to have to write drivers for all of them - we need either a standard piece of hardware (register interface) or a BIOS flexible enough to be used by all potential client OSs.

    On the OS side we need to be thinking ahead too - I'm also looking forward to seeing closed-box computers - they're going to get smaller and cheaper, there's no reason why I should have these monster computer boxes all over my room - what it costs to make an enclosure EMI proof is amazing - I want sealed ones - a whole bunch of little ones that I can plug new stuff in to upgrade - want a faster CPU - replace the old one - it's just a box with a CPU and memory - want more disk - buy a new box, drop it on the desk and plug it in (don't reboot - why would we want to do that), want to watch DVD? bring the TV from the other room, plug it in. We're starting to see some of this with USB - we're going to see more in the coming year with Bluetooth .... devices that appear while they are close to their hosts and disappear as they move away - I suspect that this technology is going to become ubiquitous for things like headsets, laptops, PDAs, maybe even printers.

    Up until now we've required people to shut off the power and open a box in order to stuff a card into a slot when we add new functionality to a computer - I think that in the future that will be the exception (maybe only for memory upgrades) rather than the rule.
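
    A minimal sketch of the kind of MMU/TLB-style mapper being asked for, assuming a simple page-granular table that redirects a device's contiguous bus addresses to scattered physical pages; the page size, table and addresses are all made up for illustration.

```c
#include <stdio.h>
#include <stdint.h>

/* Toy I/O address mapper: a device is handed a contiguous "bus address"
 * window, and a small table (an IOMMU/GART-like structure) redirects each
 * page to whatever scattered physical page the OS actually allocated. */

#define PAGE_SIZE  4096u
#define MAP_PAGES  4

static uint64_t io_page_table[MAP_PAGES] = {   /* scattered physical pages */
    0x0003A000, 0x00172000, 0x0000F000, 0x00881000
};

/* Translate a device-visible bus address into a physical address. */
static uint64_t iommu_translate(uint64_t bus_addr) {
    uint64_t page   = bus_addr / PAGE_SIZE;
    uint64_t offset = bus_addr % PAGE_SIZE;
    if (page >= MAP_PAGES) return (uint64_t)-1;    /* fault: not mapped */
    return io_page_table[page] + offset;
}

int main(void) {
    /* The device believes it is doing DMA into one 16 KB contiguous buffer. */
    for (uint64_t bus = 0; bus < MAP_PAGES * PAGE_SIZE; bus += PAGE_SIZE)
        printf("bus 0x%05llx -> phys 0x%08llx\n",
               (unsigned long long)bus,
               (unsigned long long)iommu_translate(bus));
    return 0;
}
```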

  • From reading the article, it seems that the author does not have a firm grasp of the concepts of computer architecture.

    1) System I/O (SIO) was renamed to Infiniband in October.

    from the article
    "agreed to merge their technology initiative with the Compaq-led FIO (future I/O) group supported by Hewlett-Packard Co., IBM Corp., and others."

    2) This was an IBM led effort.

    3) I thought MCA had a top clock rate of 40MHz?

    from the article:
    "But PCIx, which is backward-compatible with existing PCI cards, does not provide balanced throughput and is otherwise saddled with the same
    restrictions that existing PCI implementations have."

    4) By utilizing multiple peer PCI busses (2 to 3 slots per bus) the I/O does indeed become balanced. The only major restriction is on overall PCI bus length, which is not a major concern if all your devices are in the same box.

    from the article:
    "Parallel buses are also restricted in the number of system board slots allowed for connecting I/O adapters. Serial channels have no slots and are limited more by protocol design than by physical signal-transfer capabilities."

    5) So this means all Infiniband busses are point-to-point or in a star configuration. Also, signal quality plays a huge role in the max capabilities of any bus, parallel or serial.

    from the article:
    "Most importantly, parallel I/O buses are a shared resource that implements some kind of prioritization scheme and interrupt processing that determines which controller is transferring data on the bus. In contrast, serial channel cables extend directly from the system without needing intermediary adapters, or interrupt processing, to establish communications between entities on the channel."

    6) This may be true in an external sense but, in a serial channel system, you have just moved the location of all these activities. They still occur (inside the host controller), just not on an external bus.

    from the article:
    "The integration of network switching technology in the channel allows simultaneous multiple data transfers or commands in the channel. In other words, the aggregate transfer rate of a serial I/O channel can be several multiples of the single-link transfer"

    7) Huh? How can you exceed the max transfer rate of the physical medium? If the author is writing about multiple delayed transactions improving efficiency of the bus, I would agree. However, PCI, and to a greater degree PCI-X, support multiple delayed transactions as well.

    from the article:
    "smaller system dimensions due to the elimination of host adapters and the reduction in power and cooling requirements

    new memory and memory controllers for connecting to the serial channel.
    "

    8) Elimination of host adapters? I don't believe that elimination of host adapters was the intent of Infiniband. A complete system will always have to convert data from one format to another. I imagine Infiniband will be used mainly to cluster machines and to connect to remote I/O boxes that are full of... what... yes... PCI slots.

    Also, what new memory is the author talking about? Apparently he knows something that JEDEC doesn't.

    from the article:
    "How Much Data Can a Data Chip Chuck?"

    9) This whole section was confusing. Most system performance will not be limited by the memory bandwidth but by the processor bandwidth. Since most memory transactions are cacheable, the host controller must snoop the processor bus on a large portion of I/O transactions, thus slowing things down. In order to get a significant speed up, the operating system must mark I/O buffers as non-cacheable so snoops don't occur (a la AGP).

    10) To get efficiency up, the packet size must be pretty large. I would not expect the Infiniband protocol to follow any existing protocol. The overhead in current protocols is just too much to get effective MMIO performance. (A rough efficiency calculation appears at the end of this post.)

    11) Remember, Infiniband is a server architecture. Don't expect it to get to your home PC in the next 7-10 years. PCI is more than plenty (and cheap) for a home or even workstation system.

    12) Security: What IT wizard would even consider hooking up a user to his machine's memory subsystem? That's just silly.

    Infiniband is a breakthrough in PC system bus architecture, providing low-latency, high-speed connections to attached components. However, I don't think it is the holy grail of computer architecture.

    -Anonymous Coward
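
    Following up on points 7 and 10, a rough sketch of why packet size matters: usable bandwidth is payload over payload-plus-overhead times the raw link rate. The 24-byte overhead and 2.5 Gb/s link rate are assumptions for illustration, not Infiniband figures.

```c
#include <stdio.h>

/* With a fixed per-packet overhead, the fraction of raw link bandwidth that
 * carries useful data rises with the payload size.  Overhead and link rate
 * below are assumed values, not spec numbers. */

int main(void) {
    double link_gbps = 2.5;        /* assumed raw serial link rate */
    int overhead = 24;             /* assumed header + CRC bytes per packet */
    int payloads[] = {32, 64, 256, 1024, 4096};

    for (int i = 0; i < 5; i++) {
        double eff = (double)payloads[i] / (payloads[i] + overhead);
        printf("%5d-byte payload: %5.1f%% efficient, ~%.2f Gb/s of data\n",
               payloads[i], 100.0 * eff, link_gbps * eff);
    }
    return 0;
}
```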
  • Dan! ma man!

    Well put!

    Especially the middle rant from Richard Heritage. It's just stupid.
    _________________________

  • Most consumers simply could care less, even if you managed to explain the issues to them.

    That's true while things are being implemented, but just wait till some hax0r manages to disable everyone's TV during the super bowl! Blood pressure will rise, then water pressure will drop, then the switchboards will melt down as millions call for product support.

    It's not that consumers don't get irritated by these things, it's just that you have to get a critical mass all irritated at once. Then it becomes anger.

  • by Anonymous Coward
    Parallel signals must be synchronized. Any difference in path length or loading results in the edges not lining up - known as skew. For example, Rambus PCB routing is approaching impossible due to the need to keep each trace the same length, keep the impedance constant, and avoid crosstalk.

    Serial signals can self-clock. The receiver can lock on, decode the bits, and send them through a FIFO to reconcile the send and receive clocks. This does cost latency, though.

    In other words, it's easier to make a single pin wiggle 8 times faster than to keep the edges lined up on 8 pins. Since pins are the bottlenecks in IC packages and circuit boards, it's also cheaper.
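
    Manchester coding is the textbook example of a self-clocking line (real gigabit links use other codes such as 8b/10b); a toy C sketch of the idea, with the polarity convention chosen arbitrarily:

```c
#include <stdio.h>
#include <stdint.h>

/* Manchester coding as an example of a self-clocking serial line: every data
 * bit becomes a transition (1 -> high-then-low, 0 -> low-then-high in this
 * convention), so the receiver can recover the clock from the data itself. */

static void manchester_encode(uint8_t byte, int line[16]) {
    for (int i = 0; i < 8; i++) {
        int bit = (byte >> (7 - i)) & 1;
        line[2 * i]     = bit ? 1 : 0;   /* first half-cell  */
        line[2 * i + 1] = bit ? 0 : 1;   /* second half-cell */
    }
}

static uint8_t manchester_decode(const int line[16]) {
    uint8_t byte = 0;
    for (int i = 0; i < 8; i++)
        byte = (uint8_t)((byte << 1) | (line[2 * i] ? 1 : 0));
    return byte;
}

int main(void) {
    int line[16];
    uint8_t tx = 0x5A;
    manchester_encode(tx, line);
    printf("decoded 0x%02X (sent 0x%02X), waveform: ",
           manchester_decode(line), tx);
    for (int i = 0; i < 16; i++) printf("%d", line[i]);
    printf("\n");
    return 0;
}
```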
  • The fact that Serial is much, much less tricky to physically handshake is the reason we've seen so many R&D development dollars poured into it.

    Actually I think all the money's being poured into the design of fast serial because parallel interfaces need to be short and very rigidly controlled to be fast. And to get faster you need to add more lines (beyond the double-edge clocking and stuff). Serial offers more in this arena, and if you need to be faster than that yet, you can parallelize individual serial lines.

    I won't argue with you that there's an agenda. However I don't see what's necessarily wrong with it.

    Make no mistake--Serial may be awesome, but this is a new thing. The general attempt has been to spooge parallel design style into a serial interface.

    This isn't necessarily true. Serial's been around forever. This LVD stuff you mention below is a relatively new commercial venture, but the idea for it has been around for quite some time. I believe some of our industrial controllers have used the idea of a differential serial signal to get the point across a noisy environment, and with low EMC, for years now.

    The fact that someone (TI and National are the biggest ones here) has thrown the design into a wire 'n go chipset doesn't make the whole idea new. Cheaper and faster, yes. :-)

    (how many of these serial systems just have a "magic chip" that expands the incoming serial stream into the parallel bus everybody knows and loves?).

    I was taking the article a different way. Replace the parallel busses with serial busses. Right up to the memory/cpu/video subsystems. In other words, make the controllers serial themselves. As a previous poster mentioned, the computer now becomes a series of NCs.

    The conversion from serial to parallel wouldn't happen until right inside the silicon, and even then it can be split off properly to maximize chip space (i.e. split off the bits that are required for 'x' to 'x's spot, and for 'y' to 'y's spot). You aren't converting the whole thing to parallel at once, just where necessary. And if done correctly, the subsystems inside the die could possibly deal with the data at a lower data rate.

    Oh my. Is that so. I would have thought it was easier with those aforementioned 90 pins of parallel joy to have quite a few streams of data traveling over physically independent traces, as opposed to a multiplexed, time lagged, two wire system, which incidentally has no requirement to be bidirectional at all thank you very much.

    90 pins of parallel joy with independent pins are a waste of space and design time. Let's say you've got 6 slots sharing those 90 pins. There are only so many ways you can split them up, and even if you took those 90 pins and split them into 10 control pins and 10 independent 8-bit busses, you still need to put the switching fabric on each and every device you plug into the bus, including the motherboard. It's a waste. Not to mention if some device wants to use all 10 busses to transfer something doubleplusfast, every other device must wait. In a cell-oriented serial architecture that doesn't happen. Can't happen if properly done.

    You're correct in stating that you can do it with a parallel bus but I don't feel it's as flexible as a serial bus.

    As far as time-lagged goes, if you designed it correctly (a la ATM) you could implement a Class of Service to the entire subsystem. And if your two wires just weren't pushing enough data, you can parallelize them then.

    No, the real thing to worry about isn't so much untrustable drivers as untrustable hardware. What happens when your network bus is your keyboard bus is your hard drive bus is your memory bus? Answer: You've suddenly got lots and lots of meaningless, inconsequential hardware on the same bus as mission critical, highly secured equipment. Imagine a rootmouse that, upon being plugged in, was able to query the harddrive for the contents of /etc/shadow, completely independent of the directives from the underlying operating system. This must remain a top priority of I/O designers, and actually stands as a reason for separating heavily trafficked interfaces from less traveled, more justifiably locked-off ports.

    This is significant. I believe this is the exact reason why they're taking a very hard and critical look at security on what they're proposing. Personally I think it's great. The idea of security and even encryption should be placed in Layer 1 or Layer 2 of the OSI network model, not higher up where it commonly sits.

    I'm glad they're looking at the benefits of an open protocol as well. They hit the nail on the head with their statement about closed source drawing attention just because it's a challenge.

    Your idea of a rootmouse is intriguing. How would one make sure that devices couldn't access parts they weren't supposed to? Perhaps a separate bus for critical system components, still allowing you to place everything on one but without the security? Perhaps a Bus Administration Unit which components must authenticate against to get access to other devices?
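
    A toy sketch of what such a "Bus Administration Unit" check might look like, assuming each device carries an access class and each target a minimum class; all names, classes and address ranges are invented for illustration.

```c
#include <stdio.h>
#include <stdint.h>

/* Toy "Bus Administration Unit": every device on the unified serial bus gets
 * an access class, and transactions aimed at protected targets are checked
 * before being switched through. */

enum access_class { DEV_UNTRUSTED, DEV_STORAGE, DEV_SYSTEM };

struct device { const char *name; enum access_class cls; };

struct target {
    const char *name;
    uint64_t base, len;
    enum access_class min_class;   /* minimum class allowed to touch it */
};

static const struct target targets[] = {
    { "main memory", 0x00000000, 0x40000000, DEV_SYSTEM  },
    { "disk blocks", 0x80000000, 0x10000000, DEV_STORAGE },
};

static int bau_allow(const struct device *dev, uint64_t addr) {
    for (size_t i = 0; i < sizeof targets / sizeof targets[0]; i++) {
        const struct target *t = &targets[i];
        if (addr >= t->base && addr < t->base + t->len)
            return dev->cls >= t->min_class;
    }
    return 0;   /* unknown address: deny */
}

int main(void) {
    struct device mouse = { "rootmouse", DEV_UNTRUSTED };
    struct device ctrl  = { "disk controller", DEV_STORAGE };

    printf("%s -> main memory: %s\n", mouse.name,
           bau_allow(&mouse, 0x00001000) ? "allowed" : "DENIED");
    printf("%s -> disk blocks: %s\n", ctrl.name,
           bau_allow(&ctrl, 0x80000200) ? "allowed" : "DENIED");
    return 0;
}
```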
    I was recently reading about Bluetooth. A vast number of vendors are supporting its development. It has got a good speed also (can't remember it right now). Bluetooth is a very viable technology for home users and the speed is sufficient for printers, scanners, keyboards, mice, and other such devices.

    Wireless will play an important part in the future, but it will not replace wired connections, ever. At least IMNSHO.

    Yes it's fun, yes it's 31337, yes it can be fast. You'll always have interference problems when you want it most, or be just out of range, or something. Perhaps a combination of wireless and high speed serial will get us to where we need to be. The world is very very unfriendly towards wireless.
  • Less than ten dollars is cheap for a card. I wonder if USB isn't cheaper still to add to peripherals than adding a 10/100BaseT interface.

    Cypress puts out a USB controller for under $3 in quantity, and no media transformer is required. I believe Microchip now puts out a USB-enabled PIC too.
  • I think the article is centering on the memory bus in the computer, and not so much an external bus.

    They clearly talk about multiple slots, power supply issues, the classes being storage, network, video, and cluster, and so on. It is obviously about external buses and not system memory. Indeed, there is a discussion about how to connect the serial buses to system memory.

    Admit it: You didn't read the article. :)
  • by Anonymous Coward
    I gather they're going to sell this to the SOHO market.

    They intend to make the CPU, storage, and RAM just another connection to the (intra)net. The author briefly cogitates on the subject of security.

    How am I, the business user, going to implement security in Linux? You're on the third firewall model: ipfwadm -> ipchains -> iptables. How are we mortals going to keep up?

    You're going to have to settle on a single model, then whack the shortcomings, a non-glory type of task.

    Are you up to it?

  • Consider:
    • Is it better to have 64 serial lines that, with a bit of buffering, can handle a diverse set of concurrent communications?
    • Or is it better to have those lines used to create a single channel that can "burst mode" impressively, but which is hard to keep busy?

    Parallel is great for integrated buses, where you're going to try to have a bunch of fast devices share those "64" lines.

    But, if you can get some really fast serial connections that only use a wire or two apiece, this can simplify the individual circuits.

    And as the serial connections operate in an asynchronous manner, "bursting" goes away, and the system is liable to cope more gracefully with diverse kinds of "traffic."

    Would I rather have:

    • A 64 bit SUPER-SCSI channel that can burst data across at 64GB/s, at those few moments when I'm trying to do so, and which more typically only handles 8GB/s because there's not a disk drive that can keep up.
    • 32 independent channels providing 1 GB/s of bandwidth apiece, from which I can get a sustained load of 24 GB/s?
    I think I'd rather have the latter, even though it has lower "burst speed."
  • These two posts are awesome! I agree that the author is deliberately hiding some key details under the rug, and I wonder if he has a commercial reason for doing so. Don't get me wrong: serial is very cool. It's easier to string a bunch of devices onto the same (properly terminated) cable than it is to build a long, synchronous, parallel bus. But the key issue is synchronous vs. asynchronous, not parallel vs. serial. It is quite true that parallel synchronous busses with hardware arbitration schemes are pushing their upper limits...but

    The next generation busses are only pushing the cost of disambiguating signals from hardware onto software.

    This may be a good engineering decision. Feature for feature, software/firmware is and always has been much cheaper than hardware. But that flexibility comes at a cost: simply put, more features means more errors. Expect to see the software arbitration schemes for NGIO/FIO released on Flash ROM...so that errors can be patched.

    Also, dynamic software arbitration schemes have a bunch of real hidden costs. There are the security costs (how can I keep my mouse from reading my password file?). There are the address assignment/boot-up time costs. But most importantly, there are the overhead costs of the protocol itself. The nominal bandwidth of a serial line employing a software arbitration protocol is much higher than its actual bandwidth. Parallel busses have real limitations, but we know that they really can send information at their nominal saturation rate. Serial busses can't.

    Bottom line: serial isn't necessarily better than parallel. Performance will not improve as much as it seems. And those gains will come at the cost of incompatibility and instability.

  • I'd like to see a "Bluetooth port" on my computer in the future. That way my computer would be able to interact directly with laptops, PDAs, watches, etc. But I personally would rather have a cable to my modem/monitor/whatever. If for no other reason than that it would be more practical. I really don't move my monitor often; I don't *need* it to be remotely connected.

    And then you don't have to worry about someone scrambling or listening in on your communications. Or someone trying to hijack your hardware. (If you build it, someone will try to hack it!)

    But with a technology like Jini, this kind of thing could be great. Let's not hook up everything like this, just like you don't use a PCI bus for everything today.
  • Are there any types of slotted cards now that would be unsuitable under the new form of architecture? What about graphics cards, for example?

    Also, can all of the I/O ports that we currently have on the back of a PC be comfortably handled by this new architecture?

    Then there's the discussion of impacts to case and motherboard designs assuming one removes the slots entirely. Power supply was mentioned, but are there any less obvious ones, such as drive bays?
  • Give me a break, they want us to give up 64-bit I/O transfers at 66MHz (the article computes 264 megabytes/second for PCI 2.1)... for a serial standard? This is 2.1 gigabits/second, folks... and it's a heck of a lot easier to push this in parallel than it is to use specialized Gallium Arsenide components to try to spit it out a serial port. At the least we're talking about signals with a clock rate of 2 GHz.


    If you throw the "double clocking" trick onto PCI (seems fair to me), you get the amazing speed of almost a gigabyte/second! Where are we going to get 10 GHz network clocks? Why waste the effort, when it should be relatively easy to extend the 64x performance increase in parallel bus design for a very long time into the future?

    I'll tell you why... Intel, and others, want to keep the cost of entry high, and since there is no other way to do it, they are willing to have us pay for an unnecessary layer of hardware that only they can afford the research to create, to keep their market share!


    I say no to this obvious attempt to raise the price of computing. We need to keep hardware prices falling in line with Moore's law, and not restrained in such a stupid manner. Serial may be fine for external interfaces, but parallel remains the only sane choice for a very long time to come.


    In my opinion, this idea of making internal busses all serial is NUTS!


    --Mike--
  • Why NOT reduce the CPU to a dozen pins?

    Power distribution.

    Modern CPUs are high power, low voltage devices that need large numbers of power and ground pins. Each pin is limited to a relatively small amount of current. Many pins are needed for a high current, low impedance connection to the power supply.
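
    The arithmetic, with assumed ballpark numbers rather than any particular chip's specs:

```c
#include <stdio.h>
#include <math.h>

/* Rough arithmetic behind "power distribution": a low-voltage, high-power CPU
 * needs a lot of current, and each package pin can only carry so much, so you
 * need many power pins (and roughly as many grounds).  All figures below are
 * assumed ballpark values. */

int main(void) {
    double watts = 30.0;          /* assumed CPU power */
    double core_volts = 2.0;      /* assumed core voltage, circa 2000 */
    double amps_per_pin = 0.5;    /* assumed safe current per package pin */

    double amps = watts / core_volts;
    int power_pins = (int)ceil(amps / amps_per_pin);

    printf("%.0f W at %.1f V -> %.0f A -> at least %d power pins "
           "(plus about as many grounds)\n",
           watts, core_volts, amps, power_pins);
    return 0;
}
```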

  • If you've used a Sun disk array you will know that it's on a fibre loop with a serial fibre cable. The cable and array are relatively cheap (especially an old SPARCstorage FC array) and if you have fibre, bandwidth is not a problem. Parallel fibre, on the other hand, is a pain.
  • I haven't worked terribly much with fibre but just how sturdy is it? They're claiming up to 10kft which is a long long way... people are gonna run this under carpet, trip over it, the cat's gonna chew it... I thought that fibre was a pretty resilient technology from an EMI point of view, but what about the Home Factor? Copper wires are usually pretty good about being tripped over and ripped out of sockets. What of fibre? If you kink a fibre cable, what happens to it?

    Nobody has any experience with "fibre" - there's no such thing. The spelling was changed to "fibre" to denote media independence for the protocols that were originally designed to run over "fiber". So, I wouldn't worry about carpets and hungry cats and what not... you'll still see copper for "short distance" applications.

    As for sturdiness of "fiber", it depends on whether or not you use glass or plastic for the core, the tradeoff being dB drop (glass being much less than that of plastic). I usually run into plastic core for multimode fiber applications, and glass core for single mode (where the distances can be up to 50km and dB drop is a big concern, even though your signal strength in single mode is much higher).
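
    A rough link-budget illustration of that glass-vs-plastic tradeoff; the attenuation figures and the 15 dB budget are ballpark assumptions, not measured values.

```c
#include <stdio.h>

/* Loss in dB grows linearly with distance, and plastic fiber loses far more
 * per km than glass, so the reachable distance on a fixed optical budget is
 * very different.  Attenuation figures are ballpark assumptions. */

int main(void) {
    double budget_db = 15.0;   /* assumed transmitter-to-receiver margin */
    struct { const char *core; double db_per_km; } fibers[] = {
        { "glass, single mode (~1310 nm)", 0.4   },
        { "glass, multimode (~850 nm)",    3.0   },
        { "plastic (~650 nm)",             150.0 },
    };

    for (int i = 0; i < 3; i++)
        printf("%-32s -> roughly %6.2f km reach on a %.0f dB budget\n",
               fibers[i].core, budget_db / fibers[i].db_per_km, budget_db);
    return 0;
}
```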

  • Whoo-hoo! Someone else mentioning Bluetooth.

    I think you're absolutely right - USB, Firewire & Bluetooth all show the advantages of a PNP serial interface that is fast and easy to connect (with Bluetooth of course being much easier to connect than it is fast). Already two companies, Ericsson [zdnet.co.uk] & Idei [msnbc.com], have announced (links go to articles) a desire to develop Bluetooth-enabled Flash RAM for storage. Great for PDAs, digital cameras and MP3 players, once they support Bluetooth. While the Bluetooth implementations are not the world's most rapid anything, the logical structure is very attractive - need some more {whatever - storage, CPU time, interfaces}, just add it to the piconet.

    Hey, I just thought - has anyone commented on the power requirements of a serial I/O bus vs. a parallel one of similar throughput? Are they more, less or about the same?

    The fact that Serial is much, much less tricky to physically handshake is the reason we've seen so many R&D development dollars poured into it.

    Actually I think all the money's being poured into the design of fast serial because parallel interfaces need to be short and very rigidly controlled to be fast. And to get faster you need to add more lines (beyond the double-edge clocking and stuff). Serial offers more in this arena, and if you need to be faster than that yet, you can parallelize individual serial lines.


    Yes. It's very difficult to build high-speed parallel interfaces, even if you have proper handshaking similar to serial. When you have a data cable with 8, 16, etc. data lines (parallel), you start to worry that when you change the data on those lines and handshake, the data appearing at the far end hasn't fully propagated. So what appears on the end is a few changed bits, some indeterminate bits (in the process of changing), and some bits from the old bit of data. It's much easier to worry about 1 line than 8/16/2^n.

    Related note: 'Clock Skew' is an important part of IC design, especially as processors get faster. The clock pulse changes at one end, but the change doesn't arrive until some time later at the other end, which messes up timing.
  • Perhaps a combination of wireless and high speed serial will get us to where we need to be. The world is very very unfriendly towards wireless.
    I've been looking closely at Bluetooth vs. USB and I've come up with one significant issue: power. Wireless devices have to have batteries, or at least they have to get power from somewhere. USB, on the other hand, can deliver data and power down the one cable. Those scanners and modems coming out with just the one cable are great.

    Now, as a user of a portable PC, the power benefits of USB do tend to get reduced, since the main device may well be running on batteries anyway. Also, the wireless connectivity of Bluetooth is incredibly useful - and the Bluetooth technology is very low power. So, I'm not saying one is always better than the other, but I am saying that USB's place might well be when you're going to need a cable for power, since you might as well use one that delivers data too...

  • If networking and I/O are to become increasingly difficult to tell apart (loosely from the article), will we also see a possible sharing of hardware devices across computers? In today's office space, there is a lot of redundancy going on, with machines and especially hard drive capacities of ridiculous proportions relative to what they are actually used for (since certain software seems to need this and that spec).
    If anyone could enlighten me on the aspect of using these serial "networking" I/O solutions to share resources based on external hardware, as well as a possibly migrating I/O structure, that would be great. It seems this is a better solution than today's "exported network drive" system.
  • [USB is] just too slow. Firewire is better but... I think ethernet would have been a better peripheral bus choice. Ethernet doesn't supply power, does it? One big advantage of USB is providing power to low-power devices.
  • Why NOT reduce the CPU to a dozen pins?

    Power distribution.


    Is that all? Then take the power and ground in through the top of the chip. There's a big powered fan sitting there anyway, it would be nothing new.
  • Although this info is old news, I am surprised that no one has picked up on the implications Infiniband should have on our purchasing decisions today. The nature of the spec (see: www.infinibandta.org - site should be complete by end of Jan 2000 with more details and the various reports referenced) means that in order to retain 'legacy' devices in the new architecture they will need to be I2O compliant. The physical architecture of the 'chassis' and the new processor board size and specification will make PC boxes obsolete. The great advantage I see is the superb scalability inherent in the few publicly available details. The 'just slot in another processor board' may mean 'workstation' to 'Beowulf' cluster in very quick order. I hope everything will be a lot clearer by Jan 31.
  • Personally, having a mouse that could access my hard drive would really piss me off. I'm not sure what to think about replacing parallel buses with serial. Having a box that was a giant serial network might be cheap and speedy, but what happens when people figure out how to circumvent software privacy controls and read all your keystrokes or control devices directly?

    I think if you were going to move to a serial system, each component set should have a private serial connection (i.e. processor to RAM) and also a connection to the rest of the system. That way all private connections remain isolated. Right now my DMA devices can communicate directly with the RAM and likewise the RAM with the DMA device, but the DMA device can't sit back and watch what is going on between my CPU and RAM, because there is a memory controller between the two. You COULD have firewall chips in front of certain system resources, but wouldn't those add as much cost to the system as the elimination of parallel controllers deducted?

    I like serial connections for hardware devices like hard disks and the like, because the commands/information and whatnot have termination; I can connect them while they are in the middle of an operation and not have the system wonder where the hell it went. Firewire and USB are two great examples of this in effect. I'm not so sure I'd like my internal devices like the CPU and RAM on the same serial link with the mouse and keyboard, though.
  • If you kink it ---- it breaks.

    All the fibre channel cable I have worked with has a "minimum radius" of about 2 feet, i.e. if you bend it more than the curve of a four-foot-wide circle -- it doesn't break, but too much light leaks out of the fibre to get a coherent signal.

    At about one foot radius it starts to break.

    This would not be very practical in the hostile environment we call home.

  • That's too bad, the previous post is IMHO a very good post that should be moderated up.

    There is a problem with the moderation system: I am never a moderator when I want to be, and when I am a moderator I rarely find something interesting to moderate...
    Oh well.
  • which is surprising considering that Scalable Coherent Interface (SCI) [scizzl.com] is already standardized and available.
