Future I/O Standards 87
hardave writes "Here's an interesting article from Performance Computing about future I/O protocols and standards." This piece talks about the most recent gathering of the minds about I/O. In the end, it means what we've expected all along: faster throughput, and the benefits of creating open standards.
Re:#RIT WILL 0WN FUTURE I/0 (Score:1)
Rambus not mentioned? (Score:4)
- the implementation of external, non-shared, non-blocking switched connections
- lower-latency communications between multiple channel entities, particularly between systems in a cluster
- dynamic system configuration and hot swapping
- virtual controllers implemented in software, eliminating many host adapters
- smaller system dimensions due to the elimination of host adapters and the reduction in power and cooling requirements
- new memory and memory controllers for connecting to the serial channel
- an increase in the number of host-based storage and data-management applications
- the blurring of the distinction between I/O and networking
Interesting article... (Score:5)
Two very interesting points, however, are a) they're considering Fibre in a consumer application and b) they're very seriously considering security of the link.
I haven't worked terribly much with fibre, but just how sturdy is it? They're claiming up to 10kft, which is a long, long way... people are gonna run this under carpet, trip over it, the cat's gonna chew it... I thought that fibre was a pretty resilient technology from an EMI point of view, but what about the Home Factor? Copper wires are usually pretty good about being tripped over and ripped out of sockets. What of fibre? If you kink a fibre cable, what happens to it?
The other point was security. Basically they're arguing over two methods. One is "closed-source" and switched, while the other is "open-source".
They go as far as to say that a closed implementation is a big flashy waving sign for hackers, as it's an irresistible challenge. They're bang-on there. I mean, a fast standard that doesn't need 200+ connections? I'd be all over that in a heartbeat! It's refreshing to see a gathering of industry leaders actually see that aspect.
... I think it's neat... the philosophy for years has been more parallel connections. Transfer more per clock and you up your bandwidth and therefore your throughput. What's next? Serial processors? A couple megs of cache on the chip, maybe a serial bus to system memory, another to system I/O, and one to the video subsystem? I mean, they're talking throughput greater than PCI 2.1 here... Why NOT reduce the CPU to a dozen pins?
NC? (Score:1)
Now if only they'd concentrate on this rather than the pointless MHz race we could actually see some real improvement in performance.
Don't Get Confused (Score:1)
---
Re:Rambus not mentioned? (Score:1)
Re:Interesting article... (Score:3)
With serial we can actually use RF. Lay out the motherboard in stripline to filter the signal between components. The IC runs parallel to an interface section, which is a high-speed shift register. If you wanted parallel busses you could then add a pin or two, modulate the signal up to another frequency, and/or spread it out with a Direct Sequence spreader.
Then, when we max out the viable speed of the serial bus, we will aggregate serial lines together, having learned as much about serial bus implementation as we had about parallel. In the end a happy mix between the two will be found.
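To make the Direct Sequence aside concrete, here's a toy sketch of a spreader (my own illustration, nothing from the article: the 8-chip PN code and all names are invented, and a real system would use a much longer, carefully chosen code shared by both ends):

    /* Toy direct-sequence spreader: each data bit is XORed against an
       8-chip pseudonoise (PN) code, so one bit becomes 8 chips on the
       wire; a receiver holding the same code de-spreads by XORing again. */
    #include <stdio.h>
    #include <stdint.h>

    #define CHIPS_PER_BIT 8
    static const uint8_t pn_code[CHIPS_PER_BIT] = {1,0,1,1,0,0,1,0};

    /* Spread one byte, LSB first, into 64 chips. */
    static void spread_byte(uint8_t data, uint8_t chips[8 * CHIPS_PER_BIT])
    {
        for (int bit = 0; bit < 8; bit++) {
            uint8_t b = (uint8_t)((data >> bit) & 1);
            for (int c = 0; c < CHIPS_PER_BIT; c++)
                chips[bit * CHIPS_PER_BIT + c] = b ^ pn_code[c];
        }
    }

    int main(void)
    {
        uint8_t chips[64];
        spread_byte(0xA5, chips);
        for (int i = 0; i < 64; i++)
            putchar('0' + chips[i]);
        putchar('\n');
        return 0;
    }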
Polydactylism (OT) (Score:2)
Yup, serial I/O is here to stay... (Score:2)
One not-so-nice thing about all this high-speed local connectivity is that it worries the Copyright Mafia to no end. The MPAA and others already see people copying entire DVDs in the privacy of their own homes, and are proposing draconian control schemes (like 5C does for IEEE 1394 -- see http://www.dtcp.com/ [dtcp.com] -- in short: how would you like your TV to send a message to your cloned DVD player in order to disable it remotely??).
But fortunately, the same technology can also be used by sane people to implement flexible certificate-based link-level security. Using IPv6, for example, would automagically enable IP-sec, and there should be enough address space left there (~85%) to give manufacturers a way to do autoconfiguration...
Huh? (Score:2)
Serial protocols are useful for anything long-range, but when you need to deliver data between a few devices located in the same box, and have to do it fast, one wire instead of 64 means theoretically 64 times less bandwidth, and in reality a penalty of at least 2x, no matter what. Only when the length of the line is enough to cause distortion or desynchronization of the signals does a serial protocol become superior to a parallel one, and even that isn't true in all cases.
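Some rough numbers behind that claim (my figures, not the poster's): a 64-bit parallel bus at 66 MHz moves 64 x 66 MHz = 4.2 Gbit/s, so a single serial wire would have to toggle at better than 4 GHz just to break even. That's exactly why the serial designs lean on much higher line rates per wire, and why a real-world penalty per wire is hard to avoid.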
Re:OPEN SOURCE IO (Score:2)
Intel discussed SIO at Linux University Road Tour (Score:1)
As we all know, processor speeds have been going through the roof year after year. I/O performance, on the other hand, has improved at a slower pace. Perhaps now we can look forward to an increased rate of I/O performance improvement.
By the way, Intel said that 2003 would be the earliest that a "cheap" version of the Itanium would be available, cheap enough for desktop or home use. Deerfield is the name of this home version of Itanium. 2003 is a long way off; perhaps that will give Compaq enough time to produce an equivalent Alpha.
Re:Can we please kill USB? Please? (Score:1)
Re:Don't Get Confused (Score:1)
The architecture they described for an SMP system looked something like this:
(See Figure 3 [performancecomputing.com] from the article. I tried to do it as ASCII art, but preview says "that doesn't work")
The difference between this and the current layout of a PCI system is that in the PCI layout, the PCI controller stands where the memory/channel controller is, and the PCI bus stands where the switch is.
Personally, I see USB as a controller hanging off the switch, converting between the (high speed) I/O bus serial protocol and the (lower speed) USB protocol. The same would be true with most existing protocols: IDE, SCSI, Firewire, etc, if for no other reason than to take advantage of existing storage media.
Re:OPEN SOURCE IO (Score:1)
It's not Russian. Does "Mehanicheskij Apel'sin" mean anything to you?
Kaa
Re:Virus writers will SAVE US from remote disable. (Score:1)
Oh, some people will be infuriated by this, but you can count on this becoming mandatory for most devices anyway (hey, ever read the Digital Millennium Copyright Act?). Most consumers simply couldn't care less, even if you managed to explain the issues to them. And while I agree that crackers will find a workaround to this right away, the control issue is interesting even for non-insane applications.
It basically comes down to: who do you trust to have any kind of authority on your serial bus? Your hardware manufacturer? (5C shows this might not be a good idea...) Do you purchase your own $125 VeriSign certificate for I/O purposes? Questions, questions, questions...
Re:Polydactylism (OT) (Score:1)
How much do you pay for gloves?
Twelve pins, not 12 hands... <whack>
TCP/IP for peripherals (Score:1)
The Java crowd would love it. Finally they could do systems programming without having to grok pointers .. :-)
Re:Outside 'the box' (Score:2)
I would agree with your assessment of the limitations listed above, but I would point out that what's changing is the definition of 'the box'. Lines are increasingly being blurred between where the box ends and the network begins. High-speed external I/O is proving to be a necessity in this networked world of ours.
There was a time when a user was happy with just an isolated box. Then LAN functionality became increasingly needed (got boxen without a NIC? no?).
Today, without massive connectivity, the 'puter will quickly become a doorstop of functionality. That is why these new standards make logical sense for today, and into the near future.
Re:Interesting article... (Score:3)
exactly! massively parallel serial busses. It almost sounds oxymoronic, but wow... You could even distribute data across the various parallel serial channels in order to help bottleneck issues. Each channel could have its own throttling / prioritizing management. Redundancy is kind of a lame concept here but the other aspects are sweet.
We're headed into ATM-style busses for intra-system connections!
But, you are forgetting... (Score:2)
A wealthy eccentric who marches to the beat of a different drum. But you may call me "Noodle Noggin."
It's counter-intuitive but... (Score:1)
This is undoubtedly a gross oversimplification, but parallel communication systems must ensure that the signals from the many channels arrive synchronized, whereas serial systems get this for free. Maybe some EEs can provide a more rigorous explanation of this, or at least some good links.
Single-mode fibre optics, where there is one and only one path along the fibre, can provide throughput which is not physically achievable by other means. C Novom Godom (Happy New Year).
Serial and Parallel in a SchizoPhrenic article? (Score:4)
I'm not kidding--I've actually never read an article that on certain levels provided a fascinating glimpse at things to come, but on others rang so wrong that I was left in shock.
Bottom line: Somebody's agenda is leaking. Let's look at the Parallel v. Serial chart:
Parallel I/O Bus vs. Serial I/O Channel
Max Physical Bus Length: 1 meter vs. 10,000 meters
Conductors/Pins: 90+ vs. 4 to 8
Grantable.
Conductor Materials: Copper vs. Copper, fiber optic
What? You can't deploy a fiber solution with multiple cables? None exist?
Given the range on fiber cabling, a rather intriguing method of avoiding data interception is rotating your bits through the available transmission lines, then routing each line through a different path. Now, you could always have the same bit travel over the same cable, or you could use a pseudorandom algorithm with a shared secret seed (see spread spectrum), but you'd most assuredly have a parallel architecture that was fiber-optically based. (A sketch of the idea follows, before the next row.)
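Here's my own toy illustration of that lane-rotation idea, with an invented, non-cryptographic generator standing in for whatever keyed sequence a real design would use:

    /* Deal successive bits across LANES physically separate fibers, with
       the lane order driven by a generator seeded with a shared secret,
       so a tap on any one fiber sees only a scrambled fraction of the
       stream. The LCG here is NOT cryptographic -- illustration only. */
    #include <stdio.h>
    #include <stdint.h>

    #define LANES 4

    static uint32_t prng_state;

    static uint32_t prng_next(void)   /* simple LCG, illustration only */
    {
        prng_state = prng_state * 1664525u + 1013904223u;
        return prng_state >> 16;
    }

    /* Both ends run this with the same seed to agree on bit -> lane. */
    static void assign_lanes(int nbits, int lane_of_bit[])
    {
        for (int i = 0; i < nbits; i++)
            lane_of_bit[i] = (int)(prng_next() % LANES);
    }

    int main(void)
    {
        int lanes[8];
        prng_state = 0xC0FFEE;        /* the shared secret seed */
        assign_lanes(8, lanes);
        for (int i = 0; i < 8; i++)
            printf("bit %d -> lane %d\n", i, lanes[i]);
        return 0;
    }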
Slots/Fanout: 3 to 16 slots for adapters vs. hundreds of channel addresses
Uhm, really? Serial doesn't necessarily possess hundreds of channel addresses any more than parallel must necessarily not be implemented over fiber lines. RS-232, HSSI, pretty much any serial standard outside of USB/Firewire/that funky serial PCI replacement that was hangin' around the last LinuxWorld is strictly point-to-point.
The fact that serial is much, much less tricky to physically handshake is the reason we've seen so many R&D dollars poured into it. Make no mistake--serial may be awesome, but this is a new thing. The general attempt has been to spooge parallel design style into a serial interface. The sheer fact that you have more channels to deal with generally means that it's far, far simpler to design for (how many of these serial systems just have a "magic chip" that expands the incoming serial stream into the parallel bus everybody knows and loves?). But there's no conspiracy going on here; the advantages of parallel--ridiculous quantities of theoretical bandwidth and easier hardware development--are rather offset by the advantages of serial: flexible cabling, smaller devices (ever seen those minimodems that aren't even the full size of the slot?), and a blurring between internal and external interfaces. Let's not forget the ability to Kill The Beige Box.
Power Supplied: Yes vs. No
Gee, small problem: you have twenty cards in your machine, now you have twenty more wires... anyway, this is ridiculous. They're pitching a specific implementation and calling it the architecture as a whole. You can power hard drives off of Firewire, which last I checked wasn't 90 pins in a fanned slot formation.
Addressing Scheme: Physical address bus vs. Network addressing
There's a mantra embedded in this that screwed USB rather royally for all sorts of reasons. It turned out USB provided no way to verify which instantiation of a device is which--in other words, if I plug two Super Nintendo controllers into a Super Nintendo, the console knows that the controller plugged into the "Player 1" slot is the first controller, and the controller plugged into the "Player 2" slot is the second controller.
You can't do that with USB--every time you boot up, the order randomly shifts. They were so keen on network-centric addressing, and so loath to demand addressing be physically built onto every single device, that they completely broke multiplayer gaming on the same system.
Again, a flaw with the implementation, not the overall architecture.
Total Bandwidth: Single session, unidirectional vs. Multiple session, bi-directional
Oh my. Is that so. I would have thought it was easier with those aforementioned 90 pins of parallel joy to have quite a few streams of data traveling over physically independent traces, as opposed to a multiplexed, time-lagged, two-wire system, which incidentally has no requirement to be bidirectional at all, thank you very much.
I'm not one to go ballistic--check my posts, this is rather out of character. But reading something like this pretty much just forces me to go a bit out of character and post the following, care of Richard Heritage, circa 1995:
God is this [stupid]. I mean, this is rock-hard stupid. Dehydrated-rock-hard stupid. Stupid so stupid that it goes way beyond the stupid we know into a whole different dimension of stupid. It is trans-stupid stupid. Meta-stupid. It is stupid collapsed on itself so far that even the neutrons have collapsed. Stupid gotten so dense that no intellect can escape. Singularity stupid. It is a blazing mid-day sun on Mercury stupid. It emits more stupid in one second than our entire galaxy emits in a year. Quasar stupid. This has to be a troll. Nothing in our universe can really be this stupid. Unless this is some primordial fragment from the original big bang of stupid. Some pure essence of a stupid so uncontaminated by anything else as to be beyond the laws of physics that we know. I'm sorry. I can't go on.
That being said, let's take a look at the rest of the article, which appears to be quite good:
the blurring of the distinction between I/O and networking
This is significant. There's an artificial distinction between networking and system I/O, propagated by the belief that all the essential components a system requires should be held as physically close, and accessed as fast, as possible. As individual device speeds fail to scale in comparison with available bandwidth (how many megs a sec are we pulling off of hard drives nowadays... now how fast can UDMA66 go? How fast can PCI 2.1 go?), aggregation of large numbers of individual devices becomes the primary design goal. The difference between multiprocessor boxes and Beowulf-style clusters will blur, as systems literally become able to blob together--individual cache space for local processing, but it will end up no slower accessing the hard drive of a neighbor than accessing your own.
(Incidentally--I did some experiments a while back with two computers having their external SCSI adapters connected, thus appearing to make a single CDROM show up on both machines. Fascinating stuff, but it's not usable--one computer would freeze as the other initiated SCSI connectivity to the CD drive. Of course, this was on a friend's pair of Windows machines...)
Without adapters full of hardware providing a barrier to access for incompetent or wayward coders, device-level hackers will have unprecedented access to system internals. Obviously, this is a technology direction that needs to take security very seriously.
Somebody's trying to sell hardware that provides a barrier to access against incompetent or wayward coders. What, are they saying that device driver writers right now can't embed trojans in a mouse driver that send data from sensitive blocks of the hard drive to a drop point on a remote network? Give me a break--device drivers have low-level system access. There are schemes that address limiting a given driver to a given range, but the entire concept of a driver (the segment in kernelspace that directly interfaces with some hardware) bristles pretty harshly at the reality of being unable to issue calls to given hardware addresses.
Actually, a general design where a driver must declare what bus addresses it plans to use--and is then held to that by the operating system--is a pretty good way to prevent faulty drivers from taking down excessive amounts of hardware. (Something like the sketch below.)
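For flavor, this is roughly what declare-your-addresses already looks like for plain I/O ports on Linux (a minimal sketch; the device name and port range are made up, and the details vary by kernel version):

    /* A driver claims the port range it intends to touch; if another
       driver already owns it, the claim fails and this driver backs off
       without ever touching the hardware. */
    #include <linux/module.h>
    #include <linux/ioport.h>

    #define MYDEV_BASE 0x300   /* hypothetical port base */
    #define MYDEV_LEN  8

    static int __init mydev_init(void)
    {
        if (!request_region(MYDEV_BASE, MYDEV_LEN, "mydev"))
            return -EBUSY;     /* range already claimed: back off */
        /* ... safe to use inb()/outb() in the claimed range ... */
        return 0;
    }

    static void __exit mydev_exit(void)
    {
        release_region(MYDEV_BASE, MYDEV_LEN);
    }

    module_init(mydev_init);
    module_exit(mydev_exit);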
No, the real thing to worry about isn't so much untrustable drivers as untrustable hardware. What happens when your network bus is your keyboard bus is your hard drive bus is your memory bus? Answer: You've suddenly got lots and lots of meaningless, inconsequential hardware on the same bus as mission-critical, highly secured equipment. Imagine a rootmouse that, upon being plugged in, was able to query the hard drive for the contents of
It'll be interesting to see what comes out of the whole SIO gambit. As long as it isn't utterly bungled by Firewire style licensing, it should be interesting.
Yours Truly,
Dan Kaminsky
DoxPara Research
http://www.doxpara.com
Here are some of the issues .... (Score:1)
Re:Rambus not mentioned? (Score:2)
It also doesn't support multiple masters, so the host would have to poll I/O devices to get data, or have other external pins to control data flow.
The future is wireless (Score:1)
I am not sure if firewire/usb can be useful for home users or for anyone else for that matter.
What do you fellas think?
CP
Things we need from an I/O solution ..... (Score:4)
In particular I want to make sure that future I/O controllers handle scatter-loaded pages well - this means some sort of MMU/TLB-type structure in the I/O interface (either a fully fledged page-table walker or a unibus-adaptor-style software-managed mapper) - these always seem to get added on after the fact (for example AGP's GART, which doesn't handle cache coherency well). The problem is that such an object isn't part of a bus interface protocol - it's part of an interface chip, and it's going to be a different, complex register interface for every manufacturer - the manufacturers are going to provide drivers for WinXX - Linux (and other OSs) are going to have to write drivers for all of them - we need either a standard piece of hardware (register interface) or a BIOS flexible enough to be used by all potential client OSs. (A sketch of such a mapper follows.)
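To illustrate what I mean by a software-managed mapper, here's a toy version (all names and sizes invented; a real one would live behind an interface chip's registers, which is exactly the standardization problem):

    /* Toy software-loaded I/O map: the OS loads bus-page -> physical-page
       entries, and the I/O interface translates device addresses through
       them, so a device can stream into scatter-loaded pages. */
    #include <stdio.h>
    #include <stdint.h>

    #define IO_PAGE_SIZE 4096u
    #define IO_MAP_SLOTS 1024          /* hypothetical 4 MB I/O window */

    typedef struct {
        uint64_t phys_page;            /* physical page frame address */
        unsigned valid    : 1;
        unsigned writable : 1;
    } io_map_entry;

    static io_map_entry io_map[IO_MAP_SLOTS];

    /* OS side: map one bus page onto one physical page. */
    static void io_map_load(unsigned slot, uint64_t phys, int writable)
    {
        io_map[slot].phys_page = phys & ~(uint64_t)(IO_PAGE_SIZE - 1);
        io_map[slot].writable  = writable ? 1 : 0;
        io_map[slot].valid     = 1;
    }

    /* Interface side: translate a device-visible bus address, 0 = fault. */
    static uint64_t io_translate(uint64_t bus_addr, int is_write)
    {
        unsigned slot = (unsigned)(bus_addr / IO_PAGE_SIZE);
        if (slot >= IO_MAP_SLOTS || !io_map[slot].valid)
            return 0;                  /* unmapped */
        if (is_write && !io_map[slot].writable)
            return 0;                  /* read-only page */
        return io_map[slot].phys_page + bus_addr % IO_PAGE_SIZE;
    }

    int main(void)
    {
        io_map_load(3, 0x12345000u, 1);
        printf("bus 0x3010 -> phys 0x%llx\n",
               (unsigned long long)io_translate(3 * IO_PAGE_SIZE + 0x10, 1));
        return 0;
    }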
On the OS side we need to be thinking ahead too - I'm also looking forward to seeing closed-box computers - they're going to get smaller and cheaper, there's no reason why I should have these monster computer boxes all over my room - what it costs to make an enclosure EMI proof is amazing - I want sealed ones - a whole bunch of little ones that I can plug new stuff in to upgrade - want a faster CPU - replace the old one - it's just a box with a CPU and memory - want more disk - buy a new box, drop it on the desk and plug it in (don't reboot - why would we want to do that), want to watch DVD? bring the TV from the other room, plug it in. We're starting to see some of this with USB - we're going to see more in the coming year with Bluetooth .... devices that appear while they are close to their hosts and disappear as they move away - I suspect that this technology is going to become ubiquitous for things like headsets, laptops, PDAs, maybe even printers.
Up until now we've required people to shut off the power and open a box in order to stuff a card into a slot when we add new functionality to a computer - I think that in the future that will be the exception (maybe only for memory upgrades) rather than the rule.
Does this guy know what he is talking about? (Score:1)
The author does not have a firm grasp of the concepts of computer architecture.
1) System I/O (SIO) was renamed to Infiniband in October.
from the article
"agreed to merge their technology initiative with the Compaq-led FIO (future I/O) group supported by Hewlett-Packard Co., IBM Corp., and others."
2) This was an IBM led effort.
3) I thought MCA had a top clock rate of 40 MHz?
from the article:
"But PCIx, which is backward-compatible with existing PCI cards, does not provide balanced throughput and is otherwise saddled with the same
restrictions that existing PCI implementations have."
4) By utilizing multiple peer PCI busses (2 to 3 slots per bus) the I/O does indeed become balanced. The only major restriction is on overall PCI bus length, which is not a major concern if all your devices are in the same box.
from the article:
"Parallel buses are also restricted in the number of system board slots allowed for connecting I/O adapters. Serial channels have no slots and are limited more by protocol design than by physical signal-transfer capabilities."
5) So this means all Infiniband busses are point-to-point or in a star configuration. Also, signal quality plays a huge role in the max capabilities of any bus, parallel or serial.
from the article:
"Most importantly, parallel I/O buses are a shared resource that implements some kind of prioritization scheme and interrupt processing that determines which controller is transferring data on the bus. In contrast, serial channel cables extend directly from the system without needing intermediary adapters, or interrupt processing, to establish communications between entities on the channel."
6) This may be true in an external sense but, in a serial channel system, you have just moved the location of all these activities. They still occur (inside the host controller), just not on an external bus.
from the article:
"The integration of network switching technology in the channel allows simultaneous multiple data transfers or commands in the channel. In other words, the aggregate transfer rate of a serial I/O channel can be several multiples of the single-link transfer"
7) Huh? How can you exceed the max transfer rate of the physical medium? If the author is writing about multiple delayed transactions improving the efficiency of the bus, I would agree. However, PCI, and to a greater degree PCI-X, support multiple delayed transactions as well.
from the article:
"smaller system dimensions due to the elimination of host adapters and the reduction in power and cooling requirements
new memory and memory controllers for connecting to the serial channel.
"
8) Elimination of host adapters? I don't believe that elimination of host adapters was the intent of Infiniband. A complete system will always have to convert data from one format to another. I imagine Infiniband will be used mainly to cluster machines and to connect to remote I/O boxes that are full of... what... yes... PCI slots.
Also, what new memory is the author talking about? Apparently he knows something that JEDEC doesn't.
from the article:
"How Much Data Can a Data Chip Chuck?"
9) This whole section was confusing. Most system performance will not be limited by memory bandwidth but by processor bandwidth. Since most memory transactions are cacheable, the host controller must snoop the processor bus on a large portion of I/O transactions, thus slowing things down. In order to get a significant speedup, the operating system must mark I/O buffers as non-cacheable so snoops don't occur (a la AGP).
10) To get efficiency up, the packet size must be pretty large. I would not expect the Infiniband protocol to follow any existing protocol. The overhead in current protocols is just too much to get effective MMIO performance. (Some rough numbers after point 12.)
11) Remember, Infiniband is a server architecture. Don't expect it to get to your home PC in the next 7-10 years. PCI is more than plenty (and cheap) for a home or even workstation system.
12) Security: What IT wizard would even consider hooking up a user to his machine's memory subsystem? That's just silly.
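Putting rough numbers on point 10 (my arithmetic, not the poster's): if a packet carries, say, 16 bytes of header and CRC, then a 32-byte MMIO-sized payload is only 32/48 = 67% efficient, while a 2048-byte payload is 2048/2064 = 99% efficient. Small register-poke traffic gets eaten alive by per-packet overhead unless the protocol lets you batch it into large packets.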
Infiniband is a breakthrough in PC system bus architecture, providing low-latency, high-speed connections to attached components. However, I don't think it is the holy grail of computer architecture.
-Anonymous Coward
Re:Who pissed in your corn flakes? (nice post) (Score:2)
Well put!
Especially the middle rant from Richard Heritage. It's just stupid.
Re:Virus writers will SAVE US from remote disable. (Score:2)
Most consumers simply couldn't care less, even if you managed to explain the issues to them.
That's true while things are being implemented, but just wait till some hax0r manages to disable everyone's TV during the Super Bowl! Blood pressure will rise, then water pressure will drop, then the switchboards will melt down as millions call for product support.
It's not that consumers don't get irritated by these things, it's just that you have to get a critical mass all irritated at once. Then it becomes anger.
Re:It's counter-intuitive but... (Score:2)
Serial signals can self-clock. The receiver can lock on, decode the bits, and send them through a FIFO to reconcile the send and receive clocks. This does cost latency, though.
In other words, it's easier to make a single pin wiggle 8 times faster than to keep the edges lined up on 8 pins. Since pins are the bottlenecks in IC packages and circuit boards, it's also cheaper.
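One concrete way a line self-clocks is Manchester coding: every bit is sent as a transition, so the receiver gets an edge at least once per bit cell and can recover the sender's clock from the data itself. A toy encoder (my illustration; polarity conventions vary, and real gigabit links use richer codes like 8b/10b):

    /* Manchester-encode one byte, LSB first: each bit becomes two
       half-cells with a guaranteed mid-cell edge (0 = high->low,
       1 = low->high in this convention). */
    #include <stdio.h>
    #include <stdint.h>

    static void manchester_encode(uint8_t byte, uint8_t out[16])
    {
        for (int bit = 0; bit < 8; bit++) {
            uint8_t b = (uint8_t)((byte >> bit) & 1);
            out[2 * bit]     = b ? 0 : 1;   /* first half-cell  */
            out[2 * bit + 1] = b ? 1 : 0;   /* second half-cell */
        }
    }

    int main(void)
    {
        uint8_t line[16];
        manchester_encode(0x5A, line);
        for (int i = 0; i < 16; i++)
            printf("%d", line[i]);
        printf("\n");
        return 0;
    }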
Re:Serial and Parallel in a SchizoPhrenic article? (Score:5)
Actually I think all the money's being poured into the design of fast serial because parallel interfaces need to be short and very rigidly controlled to be fast. And to get faster you need to add more lines (beyond the double-edge clocking and stuff). Serial offers more in this arena, and if you need to be faster than that yet, you can parallelize individual serial lines.
I won't argue with you that there's an agenda. However I don't see what's necessarily wrong with it.
Make no mistake--Serial may be awesome, but this is a new thing. The general attempt has been to spooge parallel design style into a serial interface.
This isn't necessarily true. Serial's been around forever. This LVD stuff you mention below is a relatively new commercial avenue, but the idea for it has been around for quite some time. I believe some of our industrial controllers have used the idea of a differential serial signal to get the point across a noisy environment, and with low EMI, for years now.
The fact that someone (TI and National are the biggest ones here) has thrown the design into a wire 'n go chipset doesn't make the whole idea new. Cheaper and faster, yes.
(how many of these serial systems just have a "magic chip" that expands the incoming serial stream into the parallel bus everybody knows and loves?).
I was taking the article a different way. Replace the parallel busses with serial busses, right up to the memory/CPU/video subsystems. In other words, make the controllers serial themselves. As a previous poster mentioned, the computer now becomes a series of NCs.
The conversion from serial to parallel wouldn't happen until right inside the silicon. And even then it can be split off properly to maximize chip space (i.e. split off the bits that are required for 'x' to 'x's spot, and for 'y' to 'y's spot). You aren't converting the whole thing to parallel at once, just where necessary. And if done correctly, the subsystems inside the die could possibly deal with the data at a lower data rate.
Oh my. Is that so. I would have thought it was easier with those aforementioned 90 pins of parallel joy to have quite a few streams of data traveling over physically independent traces, as opposed to a multiplexed, time-lagged, two-wire system, which incidentally has no requirement to be bidirectional at all, thank you very much.
90 pins of parallel joy with independent pins are a waste of space and design time. Let's say you've got 6 slots sharing those 90 pins. There are only so many ways you can split them up, and even if you took those 90 pins and split them into 10 control pins and 10 independent 8-bit busses, you'd still need to put the switching fabric on each and every device you plug into the bus, including the motherboard. It's a waste. Not to mention if some device wants to use all 10 busses to transfer something doubleplusfast, every other device must wait. In a cell-oriented serial architecture that doesn't happen. Can't happen, if properly done.
You're correct in stating that you can do it with a parallel bus but I don't feel it's as flexible as a serial bus.
As far as time-lagged goes, if you designed it correctly (a la ATM) you could implement a Class of Service across the entire subsystem. And if your two wires just weren't pushing enough data, you could parallelize them then. (Something like the cell sketch below.)
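To make the cell idea concrete, here's a sketch of what a fixed-size, prioritized cell might look like (field names and sizes are my invention; real ATM uses 53-byte cells with VPI/VCI identifiers rather than this layout):

    /* Fixed-size cells with an address and a priority field let the
       switch interleave transfers and service latency-critical traffic
       first; a bulk transfer can never hog the fabric the way a long
       burst on a shared parallel bus can. */
    #include <stdio.h>
    #include <stdint.h>

    #define CELL_PAYLOAD 48            /* ATM-ish payload size */

    struct io_cell {
        uint16_t dest;                 /* channel address of target  */
        uint16_t src;                  /* channel address of sender  */
        uint8_t  priority;             /* 0 = bulk ... 7 = latency-critical */
        uint8_t  seq;                  /* reassembly sequence number */
        uint8_t  payload[CELL_PAYLOAD];
        uint16_t crc;                  /* per-cell integrity check   */
    };

    /* A switch port dequeues the highest-priority cell first. */
    static int cell_beats(const struct io_cell *a, const struct io_cell *b)
    {
        return a->priority > b->priority;
    }

    int main(void)
    {
        struct io_cell bulk = { .priority = 0 }, urgent = { .priority = 7 };
        printf("urgent first? %s\n", cell_beats(&urgent, &bulk) ? "yes" : "no");
        return 0;
    }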
No, the real thing to worry about isn't so much untrustable drivers as untrustable hardware. What happens when your network bus is your keyboard bus is your hard drive bus is your memory bus? Answer: You've suddenly got lots and lots of meaningless, inconsequential hardware on the same bus as mission-critical, highly secured equipment. Imagine a rootmouse that, upon being plugged in, was able to query the hard drive for the contents of
This is significant. I believe this is the exact reason why they're taking a very hard and critical look at security in what they're proposing. Personally I think it's great. The idea of security and even encryption should be placed in Layer 1 or Layer 2 of the OSI network model, not higher up where it commonly sits.
I'm glad they're looking at the benefits of an open protocol as well. They hit the nail on the head with their statement about closed source drawing attention just because it's a challenge.
Your idea of a rootmouse is intriguing. How would one make sure that devices couldn't access parts they weren't supposed to? Perhaps a separate bus for critical system components, still allowing you to place everything on one bus, but without the security? Perhaps a Bus Administration Unit which components must authenticate against to get access to other devices?
Re:The future is wireless (Score:1)
Wireless will play an important part in the future, but it will not replace wired connections, ever. At least IMNSHO.
Yes it's fun, yes it's 31337, yes it can be fast. You'll always have interference problems when you want it most, or be just out of range, or something. Perhaps a combination of wireless and high speed serial will get us to where we need to be. The world is very very unfriendly towards wireless.
Re:Can we please kill USB? Please? (Score:1)
Cypress puts out a USB controller for under $3 in quantity, and no media transformer is required. I believe Microchip now puts out a USB-enabled PIC too.
Don't post without reading the article (Score:2)
They clearly talk about multiple slots, power supply issues, the classes being storage, network, video, and cluster, and so on. It is obviously about external buses and not system memory. Indeed, there is a discussion about how to connect the serial buses to system memory.
Admit it: You didn't read the article.
sturdiness and security (Score:1)
They intend to make the CPU, storage, and RAM just another connection to the (intra)net. The author briefly cogitates on the subject of security.
How am I, the business user, going to implement security in Linux? You're on the third firewall model: ipfwadm -> ipchains -> iptables. How are we mortals going to keep up?
You're going to have to settle on a single model, then whack the shortcomings -- a non-glory type of task.
Are you up to it?
Asynchronicity (Score:2)
Parallel is great for integrated buses, where you're going to try to have a bunch of fast devices share those "64" lines.
But, if you can get some really fast serial connections that only use a wire or two apiece, this can simplify the individual circuits.
And as the serial connections operate in an asynchronous manner, "bursting" goes away, and the system is liable to cope more gracefully with diverse kinds of "traffic."
Would I rather have:
Re:Serial and Parallel in a SchizoPhrenic article? (Score:1)
The next generation busses are only pushing the cost of disambiguating signals from hardware onto software.
This may be a good engineering decision. Feature for feature, software/firmware is and always has been much cheaper than hardware. But that flexibility comes at a cost: simply put, more features means more errors. Expect to see the software arbitration schemes for NGIO/FIO released on Flash ROM...so that errors can be patched.
Also, dynamic software arbitration schemes have a bunch of real hidden costs. There are the security costs (how can I keep my mouse from reading my password file?). There are the address-assignment/boot-time costs. But most importantly, there are the overhead costs of the protocol itself. The nominal bandwidth of a serial line employing a software arbitration protocol is much higher than its actual bandwidth. Parallel busses have real limitations, but we know that they really can send information at their nominal saturation rate. Serial busses can't. (Rough numbers below.)
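Some rough numbers for that claim (mine, not the parent's): a serial link signaling at 1 Gbaud with 8b/10b-style coding delivers at most 800 Mbit/s of actual bytes, and headers, acknowledgments, and arbitration packets all come out of that 800 before any payload moves. A dedicated parallel trace clocked at 100 MHz really does carry its nominal 100 Mbit/s.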
Bottom line: serial isn't necessarily better than parallel. Performance will not improve as much as it seems. And those gains will come at the cost of incompatibility and instability.
Re:The future is wireless (Score:1)
And then you don't have to worry about someone scrambling or listening in on your communications. Or someone trying to hijack your hardware. (If you build it, someone will try to hack it!)
But with a technology like Jini this kind of thing could be great. Let's not hook up everything like this, just as you don't use a PCI bus for everything today.
A few questions (Score:1)
Also, can all of the I/O ports that we currently have on the back of a PC be comfortably handled by this new architecture?
Then there's the discussion of impacts to case and motherboard designs assuming one removes the slots entirely. Power supply was mentioned, but are there any less obvious ones, such as drive bays?
Fast Serial I/O?? -- Reality Check (Score:1)
If you throw the "double clocking" trick onto PCI (seems fair to me), you get the amazing speed of almost a gigabyte/second! Where are we going to get 10 GHz network clocks? Why waste the effort, when it should be relatively easy to extend the 64x performance increase in parallel bus design for a very long time into the future?
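For the curious, the arithmetic behind "almost a gigabyte/second" (my working, assuming 64-bit/66 MHz PCI): 64 bits x 66 MHz x 2 transfers per clock = 8.4 Gbit/s, or just over 1 GB/s.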
I'll tell you why... Intel, and others, want to keep the cost of entry high, and since there is no other way to do it, they are willing to have us pay for an unnecessary layer of hardware that only they can afford the research to create, to keep their market share!
I say no to this obvious attempt to raise the price of computing. We need to keep hardware prices falling in line with Moore's law, and not restrained in such a stupid manner. Serial may be fine for external interfaces, but parallel remains the only sane choice for a very long time to come.
In my opinion, this idea of making internal busses all serial is NUTS!
--Mike--
Re:Interesting article... (Score:2)
Power distribution.
Modern CPUs are high power, low voltage devices that need large numbers of power and ground pins. Each pin is limited to a relatively small amount of current. Many pins are needed for a high current, low impedance connection to the power supply.
Re:Fast Serial I/O?? -- Reality Check (Score:1)
Re:Interesting article... (Score:1)
Nobody has any experience with "fibre" - there's no such thing. The spelling was changed to "fibre" to denote media independence for the protocols that were originally designed to run over "fiber". So, I wouldn't worry about carpets and hungry cats and what not... you'll still see copper for "short distance" applications.
As for sturdiness of "fiber", it depends on whether or not you use glass or plastic for the core, the tradeoff being dB drop (glass being much less than that of plastic). I usually run into plastic core for multimode fiber applications, and glass core for single mode (where the distances can be up to 50km and dB drop is a big concern, even though your signal strength in single mode is much higher).
Re:Things we need from an I/O solution ..... (Score:2)
I think you're absolutely right - USB, Firewire & Bluetooth all show the advantages of a PNP serial interface that is fast and easy to connect (with Bluetooth of course being much easier to connect than it is fast). Already two companies, Ericsson [zdnet.co.uk] & Idei [msnbc.com], have announced (links go to articles) a desire to develop Bluetooth-enabled Flash RAM for storage. Great for PDAs, digital cameras and MP3 players, once they support Bluetooth. While the Bluetooth implementations are not the world's most rapid anything, the logical structure is very attractive - need some more {whatever - storage, CPU time, interfaces}, just add it to the piconet.
Hey, I just thought - has anyone commented on the power requirements of a Serial I/O bus vs. a Parallel one of similar throughput? Are they more, less or about the same?
Re:Serial and Parallel in a SchizoPhrenic article? (Score:1)
many R&D dollars poured into it.
Actually I think all the money's being poured into the design of fast serial because parallel interfaces need to be short and very rigidly controlled to be fast. And to get faster you need to add more lines (beyond the double-edge clocking and stuff). Serial offers more in this arena, and if you need to be faster than that yet, you can parallelize individual serial lines.
Yes. It's very difficult to build high-speed parallel interfaces, even if you have proper handshaking similar to serial. When you have a data cable with 8, 16, etc. data lines (parallel), you start to worry that when you change the data on those lines and handshake, the data appearing on the lines hasn't fully propagated to the end. So what appears at the end is a few changed bits, some indeterminate bits (in the process of changing), and some bits from the old piece of data. It's much easier to worry about 1 line than 8/16/2^n.
Related note: 'clock skew' is an important part of IC design, especially as processors get faster. The clock pulse changes at one end, but the change isn't seen until some time later at the other end, which messes up timing.
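Rough numbers (mine): a signal travels about 15 cm/ns in FR-4 board material, so a 3 cm mismatch in trace length is about 0.2 ns of skew. At a 500 MHz bus clock the whole bit cell is only 2 ns, so a few centimeters of routing slop already eats 10% of the timing budget -- on every one of your 8/16/64 lines.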
Re:The future is wireless (Score:2)
Now, as a user of a portable PC, the power benefits of USB do tend to get reduced, since the main device may well be running on batteries anyway. Also, the wireless connectivity of Bluetooth is incredibly useful - and the Bluetooth technology is very low power. So, I'm not saying one is always better than the other, but I am saying that USB's place might well be when you're going to need a cable for power, since you might as well use one that delivers data too...
Serial I/O - Sharing resources (Score:1)
If anyone could enlighten me on the aspect of using these serial "networking" I/O solutions to share resources based on external hardware, as well as a possible migrating I/O structure, that would be great. It seems this is a better solution than today's "exported network drive" system.
Re:Can we please kill USB? Please? (Score:2)
Power distribution? (Score:2)
Power distribution.
Is that all? Then take the power and ground in through the top of the chip. There's a big powered fan sitting there anyway; it would be nothing new.
Implications TODAY of 'Infiniband' (Score:1)
Hmm.. (Score:2)
Re:Interesting article... (Score:1)
If you kink it ---- it breaks.
All the fibre channel cable I have worked with has a "minimum radius" of about 2 feet. I.e., if you bend it more tightly than the curve of a four-foot-wide circle, it doesn't break, but too much light leaks out of the fibre to get a coherent signal.
At about a one-foot radius it starts to break.
This would not be very practical in the hostile environment we call home.
Would someone moderate up the previous post? (Score:1)
There is a problem with the moderation system: I am never a moderator when I want to be, and when I am a moderator I rarely find something interesting to moderate...
Oh well.
Also, there's no mention of SCI ... (Score:1)