Canada to Launch Countrywide Virtual SuperComputer

LadyCatra writes "A serious shortage of world-class computing power in Canada prompted University of Alberta scientists to create the next best thing -- a countrywide, virtual supercomputer. On Nov. 4, thousands of computers from research centres across the country will be strung together by a U of A effort to create the most powerful computer in this country. The full story is here"

  • by httpamphibio.us ( 579491 ) on Wednesday October 23, 2002 @04:19AM (#4511056)
    Why didn't they just make a client program for distributed computing so the entire country/world could help out?
    • As a Canadian, I'd donate some spare CPU% to the UofA.
      I'm sure many others in the world would too.
    • And what do you call this?

      The computers will be linked by the Internet, but involve a simple networking system, Lu said. Keeping the linkage as simple as possible was the goal.

      Read the article the next time, will you?
      • It's not a system that just anyone can use, which is what I was suggesting.
      • Actually the article does not state anything about the implementation, other than it is internet based [please correct me if I'm wrong]. There's no reference to this project on the UoA web pages, either. UoA dept of Physics seems to have a Beowulf cluster, but this "Virtual Supercomputer" sounds more like Globus (http://www.globus.org) to me.
    • by FTL ( 112112 ) <slashdot.neil@fraser@name> on Wednesday October 23, 2002 @06:04AM (#4511281) Homepage
      >Why didn't they just make a client program for distributed computing so the entire country/world could help out?

      Because there will always be creeps who won't play fair. Much of the work that SETI@home does is security, combatting those who would submit false or abbreviated results in order to get higher stats. UofA wants to do real computing on a variety of applications. They've concluded that it is more efficient (for their purposes) to go for a small pool whose results they can trust, than to go for a large pool whose results they have to check and double-check.

      Each approach has significant advantages and disadvantages. It depends on the type of work you are interested in performing.
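
      A rough sketch of what that double-checking looks like in an open pool (purely illustrative; the three-way redundancy, the names and the numbers here are assumptions, not how SETI@home or the U of A system actually works):

      # Hypothetical illustration: verify results from untrusted volunteers by
      # sending each work unit to several clients and accepting the majority answer.
      from collections import Counter

      REDUNDANCY = 3  # each work unit is computed by three independent clients

      def accept_result(submissions):
          """submissions: list of (client_id, result) pairs for one work unit.
          Returns the agreed result, or None if there is no clear majority yet."""
          if len(submissions) < REDUNDANCY:
              return None                        # still waiting for more copies
          counts = Counter(result for _, result in submissions)
          result, votes = counts.most_common(1)[0]
          return result if votes > len(submissions) // 2 else None

      # A closed pool of trusted machines can run each unit exactly once and skip
      # all of this, which is why a small trusted pool is more efficient per cycle.
      print(accept_result([("a", 42), ("b", 42), ("c", 41)]))   # -> 42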

    • Why didn't they just make a client program for distributed computing so the entire country/world could help out?

      We used to call it Napster, but now it's called deadmeat something.
    • Thanks for the interest in our project. We have composed an FAQ: http://www.cs.ualberta.ca/~ciss/CISS/faq.html [ualberta.ca]. I hope it is useful.
  • And.... (Score:3, Funny)

    by Anonymous Coward on Wednesday October 23, 2002 @04:20AM (#4511060)
    ...on Nov. 5, someone will find a way to temporarily use all of this virtual power to play a round or two of half life....
  • Wow (Score:4, Interesting)

    by Jezza ( 39441 ) on Wednesday October 23, 2002 @04:23AM (#4511068)
    This seems like a really good idea, I don't really understand why more places don't do this. I mean most of us work in offices where the computer power is amazing and largely untapped.

    I think what this really needs is to be made easier for the mainstream, so anyone could do it. Perhaps bundle the tools (programming and deployment) with mainstream operating systems?

    It's just an idea, my NeXT had Zilla (its version of this) years ago - seems a shame that this hasn't caught on more widely. So come on Apple - let's see it, put it in the Darwin project and put a nice UI on it in Mac OS X.
    • It's just an idea, my NeXT had Zilla (its version of this) years ago - seems a shame that this hasn't caught on more widely.

      And before that, other people did the same thing. And there are at least a dozen projects worldwide already doing this on a wide scale.

      So come on Apple - let's see it, put it in the Darwin project and put a nice UI on it in Mac OS X.


      And what, pray tell, should that "nice UI" actually do that current software isn't already doing?
      • Re:Wow (Score:3, Insightful)

        by Jezza ( 39441 )
        Well, I guess the first things needed are developer tools to ease the creation of programs that run on the platform - a new project type in Project Builder would help, and maybe even some language additions to let people more easily create these programs. There are a number of challenges involved with creating programs of this type: how do nodes communicate? What happens when a node goes away (someone starts to use the computer, for instance)? What happens when a new node becomes available? And how easy is it to deploy these programs? What you'd like to do is "feed" these programs in via some kind of queue, and allow that queue to be reordered - how does that work? You possibly want to prevent the machines from sleeping or being shut down, which will also need some UI changes - maybe a machine needs to be shut down for an upgrade or simply to be moved, so how do you override the settings? You might also want to see how the programs impact the network (you can imagine a program swamping the network with IP traffic if you weren't careful). Some form of debugging software that could run on a single machine to simulate deployment would also be useful.

        Of course Apple has some good tools here - perhaps Rendezvous (Apple's dynamic discovery of services over IP) could help. Such tools could make it much easier to provide "community supercomputers". This would be especially useful in higher education, a place where Apple has traditionally been strong.
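
        A toy sketch of the scheduling behaviour being described (not Apple's or the U of A's actual tooling; the class and node names are invented): hand tasks to idle nodes, and put a task back in the queue when its node disappears.

        # Toy scheduler: tasks are handed to nodes; if a node goes away (its owner
        # sits back down at the machine), its task is simply re-queued.
        from collections import deque

        class ToyScheduler:
            def __init__(self, tasks):
                self.queue = deque(tasks)     # tasks waiting to run
                self.running = {}             # node_id -> task currently assigned

            def node_available(self, node_id):
                if self.queue:
                    task = self.queue.popleft()
                    self.running[node_id] = task
                    return task
                return None

            def node_lost(self, node_id):
                task = self.running.pop(node_id, None)
                if task is not None:
                    self.queue.appendleft(task)   # put it back at the front

            def task_done(self, node_id):
                self.running.pop(node_id, None)

        sched = ToyScheduler(["chunk-1", "chunk-2", "chunk-3"])
        sched.node_available("lab-mac-04")    # node picks up chunk-1
        sched.node_lost("lab-mac-04")         # chunk-1 is re-queued, nothing is lost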
    • Re:Wow (Score:5, Informative)

      by popeyethesailor ( 325796 ) on Wednesday October 23, 2002 @06:06AM (#4511283)
      Google is doing this. Click on a button in the Google Toolbar, and your computer starts number crunching in its idle time.
      Check out the Google Compute Faq [google.com] and the Kuro5hin discussion [kuro5hin.org] on the subject.
    • Re:Wow (Score:5, Informative)

      by sql*kitten ( 1359 ) on Wednesday October 23, 2002 @06:30AM (#4511324)
      I think what this really needs is to be made easier for the mainstream, so anyone could do it. Perhaps bundle the tools (programming and deployment) with mainstream operating systems?

      Sun has Grid Engine [sun.com] and I believe Intel has something similar. The issue is that this kind of distributed processing is only useful for problems that can be divided into many discrete subtasks which do not need to interact with other nodes while they are running; otherwise the work you need to do to communicate between nodes slaughters performance (that's why clustering hasn't taken over the world; vertical scaling on an active backplane is still the best solution for most jobs). The typical corporate large-compute job is data mining or decision support, neither of which scales particularly well horizontally.
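
      To illustrate the "discrete subtasks" case with an invented stand-in workload (this is not Grid Engine code): each chunk is computed independently and the results are only merged at the end, so no inter-node communication is needed while the work runs.

      # Embarrassingly parallel sketch: independent chunks, results merged once.
      from multiprocessing import Pool

      def score_chunk(params):
          # stand-in for one independent work unit (e.g. one energy calculation)
          x, y = params
          return (x * x + y * y) ** 0.5

      if __name__ == "__main__":
          work = [(i, i + 1) for i in range(10_000)]    # independent chunks
          with Pool() as pool:
              results = pool.map(score_chunk, work)     # no inter-chunk traffic
          print(len(results), "results, merged at the end only")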
    • Because contrary to what you may not have thought of, it's not free... And with the likes of Enron manipulating energy costs, it's best for companies to use as little power as possible.
      • by Jezza ( 39441 )
        I see your point (it was me describing defeating the machine's power-saving features, wasn't it?) but as long as the display powers down and the hard disk can spin down, most of the power savings can still be achieved.

        On the NeXT there wasn't any power saving - such things hadn't been thought of, so this wasn't an issue. But as the display and the hard disk aren't needed for this kind of application, they can shut down as normal. I guess once the "community supercomputer" had finished doing whatever it was asked to do, it should restore all the power-saving features (and be able to suspend them again if it had a new problem issued to it).

        Thinking ecologically about it, remember how much energy was used in the manufacture and delivery of those computers - we should use them as much as we can to make best use of the resources already invested in them.
    • According to SETI Stats by Country [berkeley.edu], there are 212334 Canadian PCs running SETI@Home. I don't know how real these stats are (lots of these may be people who ran the thing once in the past, or who don't run it full time, and obviously this includes lots of computers slower than what you'd build into a modern Beowulf cluster), but it's still quite a bit larger than the network these guys are building. While some of the SETI@Home network is still listening for space aliens, it's also running a number of earthbound projects like studying protein folding and searching for cancer drugs.

      There are real benefits for Canadian research that can come from this project - certainly there are a number of problems that are numerical and parallelizable, so there can be a lot of future to it if they do enough coordination, but most of Canada's academic supercomputing is currently driven by SETI. Besides scientific research, the other traditional users of supercomputers are weather prediction, oil exploration, and sometimes financial modelling - Canada may have more total supercomputer-based computing than anybody realizes, in addition to SETI. However, the June 2002 top500.org list [top500.org] doesn't show anything in Canada above #227.

      Other results from the Top500.org list - SETI@Home is still about 7 times as large as the largest single machine on the list, Japan's NEC Earth Simulator, which is about 5 times as large as the #2 machine, LLNL's ASCI White.

  • by Anonymous Coward on Wednesday October 23, 2002 @04:23AM (#4511070)
    Anyway, before activating It, make sure It doesn't have any access to a spare nuclear warhead in orbit around Earth.
  • Sun is Right (Score:5, Insightful)

    by e8johan ( 605347 ) on Wednesday October 23, 2002 @04:26AM (#4511074) Homepage Journal
    "The Network is the Computer"

    It would be nice to see a worldwide system. If this is going to work there must be some CPU time quota system, perhaps a quota that can be bought and sold. This could make it interesting for ordinary home users to join (earn quota, sell quota, make $$$). There are many projects in the academic world that could never make a SETI@home-style launch, since the research is too boring. Still, we need to use all that idle time burning away across the world.
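
    Purely as a sketch of what a tradable CPU-time quota could look like (the ledger, the "work unit" credit and the trade rule are all invented for illustration; nothing like this exists in the project):

    # Hypothetical quota ledger: donate idle cycles to earn credits,
    # spend credits to run your own jobs on the pool, trade any surplus.
    class QuotaLedger:
        def __init__(self):
            self.balances = {}                        # user -> credits

        def credit_donation(self, user, work_units):
            self.balances[user] = self.balances.get(user, 0) + work_units

        def submit_job(self, user, cost):
            if self.balances.get(user, 0) < cost:
                raise ValueError("not enough quota; donate or buy more")
            self.balances[user] -= cost

        def trade(self, seller, buyer, amount):
            self.submit_job(seller, amount)           # reuse the balance check
            self.credit_donation(buyer, amount)

    ledger = QuotaLedger()
    ledger.credit_donation("home_user", 120)    # 120 work units donated overnight
    ledger.trade("home_user", "chem_lab", 100)  # surplus sold to a research group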
    • Make $$$ (Score:3, Informative)

      by tamnir ( 230394 )

      You mean like Popular Power [popularpower.com] tried (and failed) to do? Check their old site [popularpower.com] to see what they used to propose.

      Looks like selling CPU cycles is not a lucrative business...

      • This kind of thing might work, however, if a given company's IT department (say, the large nameless Networking company I work for) decided to install a client on all the company machines that would do this and ran it systematically. It could loan out the CPU cycles of an entire building at a time to institutions. The biggest issue is, in our case, that we're pushing laptops, and any non-personal computer (i.e. server) is doing real work as it is.
    • Re:Sun is Right (Score:4, Interesting)

      by fruey ( 563914 ) on Wednesday October 23, 2002 @05:58AM (#4511273) Homepage Journal
      Still, we need to use all that idle time burning away across the world.

      Do we? Idle time means the CPU is using less power, and generating less heat. I suppose that theoretically you are also shortening the life of your processor's transistors slightly, although there are probably arguments that a constant 50% CPU utilisation is not a bad thing because it will be more likely to maintain a constant temperature...

      In any case, multiplied up by many millions of installed PCs, using that idle time means increasing energy consumption by a not insignificant amount. We need to use less energy, not more! Indeed, saying that idle time is "burning" away is quite the opposite of the truth.

      • Re:Sun is Right (Score:5, Interesting)

        by e8johan ( 605347 ) on Wednesday October 23, 2002 @06:09AM (#4511288) Homepage Journal
        Not all CPUs power down when idle. Most OSs have an idle task, burning away computer power in an endless loop.

        When usage is 50%, the CPU is probably not turned off at all, since turning on and off clock trees (and getting the PLLs to sync) take time.

        Since most home computers will not power down, we can use that potential computing power to save energy by not running supercomputers elsewhere.
        • Re:Sun is Right (Score:2, Interesting)

          by fruey ( 563914 )
          Fair enough, good point. Can you confirm that the idle task in question makes the CPU heat up as much, and uses as much energy as a floating point operation continuously looping? I have a hunch that it doesn't...
            I suppose that a small idle loop (x: jmp x) would only affect the program counter and a few pipeline registers, so it would probably use less energy. Also, the memory bus would stay untouched if the CPU has an I-cache. So you're probably right.
            But I don't think the difference in power use is big. Still, no need for supercomputers at the universities, i.e. less power consumption there.
            As the problem of energy conservation in these kinds of situations is very complex, we'd better focus on the economic benefits for the users (i.e. you don't need to buy a supercomputer to do a few minutes' worth of number crunching).
        • Re:Sun is Right (Score:3, Informative)

          by P-Nuts ( 592605 )
          Not all CPUs power down when idle. Most OSs have an idle task, burning away computer power in an endless loop.
          When there isn't much load, the idle task issues the HLT (halt) instruction. This lowers the energy consumption of the CPU. If you're using Linux, you can disable this feature by adding no-hlt=1 to your Lilo/Grub boot string. Some notebook machines are cleverer and allow the CPU to underclock itself when it has less load.
          • Does the Athlon have a hardware command like this? I know that Intel's chips do... but I suspect it's only the newest Athlons that do; older ones do not have it and so get no benefit.
          • It's up to the OS to issue a HLT. Linux does this. AFAIK, Windows (any version) does not.
      • Not everybody's against global warming, eh? I once got yelled at by a Canadian ex-pat for wearing a Greenpeace t-shirt, because I was clearly an enemy of Canada.


        Actually, climate change has been a real problem for the ecosystem of the north coast, with a lot of ice melt and more open water than usual. One of the effects is that seals can find open water for more of the year, or in some places year-round, instead of having to make breathing holes in the ice. Polar bears and the traditional Inuit hunting methods both depend on catching seals at their breathing holes, so their hunting is much less effective.


        On computer-related topics - laptop batteries really don't like background CPU-burners. I used to run GIMPS, the Great Internet Mersenne Prime Search, and I used to commute by train with about an hour of battery time each way. NiMH batteries don't have the same failure behavior as NiCd, and they're nowhere near as nasty a toxic-waste disposal problem, but they really don't like this kind of treatment. To compound matters, for some of that time I was running Windows NT 3.51, which was much more stable than Win95, but it insisted on being a *server* operating system that didn't need laptop power-management drivers, so when it got a hardware low-power shutdown signal, instead of going into hibernation mode (see, the polar bears *were* relevant), it would blue-screen and die. I had to stop running the prime search.

    • Re:Sun is Right (Score:3, Informative)

      by lnixon ( 619827 )

      Yeah. There's actually quite a lot of research going into this currently. It's called the Grid (think "power grid", ubiquitous, simple to use), and I predict it will be the next big buzzword.

      See Global Grid Forum [gridforum.org], Grid Today [gridtoday.com] and the Globus project [globus.org] for starters.

      The problem of buying and selling computation power on some sort of broker basis is quite an interesting problem in itself. Exactly what are you selling? Hardly CPU hours, since the value of those depends on the hardware.

      • Re:Sun is Right (Score:5, Interesting)

        by e8johan ( 605347 ) on Wednesday October 23, 2002 @07:10AM (#4511398) Homepage Journal
        "Exactly what are you selling?"

        I'd like to suggest something like the JavaVM, i.e. a standard virtual machine, from which you buy and sell basic ops, i.e. byte-code instructions.

        The biggest problem will probably be that you will not make any real money from letting your CPU be used. Perhaps a good idea would be to let a university supply you with internet access in exchange for CPU time. They usually have quite a lot of bandwidth.
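
        To make "selling byte-code ops" concrete, here is a toy metered stack machine that simply counts the instructions it executes; the instruction set is invented for the example, and real grid metering would be far more involved.

        # Toy metered VM: runs a tiny stack-based byte code and counts ops,
        # the hardware-neutral unit the parent comment proposes selling.
        def run(program):
            stack, ops = [], 0
            for instr, *args in program:
                ops += 1
                if instr == "push":
                    stack.append(args[0])
                elif instr == "add":
                    b, a = stack.pop(), stack.pop()
                    stack.append(a + b)
                elif instr == "mul":
                    b, a = stack.pop(), stack.pop()
                    stack.append(a * b)
            return stack.pop(), ops

        result, ops_billed = run([("push", 6), ("push", 7), ("mul",)])
        print(result, "computed;", ops_billed, "ops to bill")   # 42 computed; 3 ops to bill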
    • This would be a cool way to get unrestricted high speed internet into homes...for example, if they were to say, "let us use your idle processes in return for free, unlimited, un-portblocked, static-IP internet that you can run a Domain off!" Sounds like a pretty decent trade to me.
  • by Anonymous Coward
    SkyNet!
  • by stud9920 ( 236753 ) on Wednesday October 23, 2002 @04:36AM (#4511095)
    That's a fantastic idea! If this works, we'll be able to use it for useful computation! It might sound crazy, but with such a virtual computer, one could make the computations that help SETI or cure cancer skyrocket! How did they come up with such a great idea?
  • by Goat In The Shell ( 320974 ) on Wednesday October 23, 2002 @04:36AM (#4511097)
    ...all Beowulf posts under this thread, including (but not limited to):

    - standard Beowulf trolls mixed with standard Canadian accent lexicon ("eh?", "aboot")

    - posts about how a Beowulf cluster could perhaps help Canada out with a stereotypical Canadian "problem" (lousy beer, socialized medicine)

    - jokes combining the word Beowulf with the name of the mentioned U of A chemist Wolfgang Jaeger

    Thank you.
    • by Anonymous Coward

      ...how a Beowulf cluster could perhaps help Canada out with a stereotypical Canadian "problem" (lousy beer, socialized medicine)

      Dude, Canadian beer and socialized medicine are the solution, not the problem.

  • by jukal ( 523582 ) on Wednesday October 23, 2002 @04:36AM (#4511098) Journal
    The article does not seem to mention whether they use a ready-made grid/distributed computing platform or whether they are whipping it up themselves. Or am I blind? Does anyone know more about this? And what do they mean by:

    "The computers will be linked by the Internet, but involve a simple networking system, Lu said. Keeping the linkage as simple as possible was the goal."

    Based on the article I would assume that they have made a custom-tailored system (if not a kludge) for one specific purpose ("for calculating energy shifts as two molecules are manipulated around 3-D space") - and not a platform which could be easily tailored and managed to solve different kinds of tasks with different kinds of relationships between the tasks.

    Ohh, I could also link my grid computing links [cyberian.org].

    • by spditner ( 108739 ) on Wednesday October 23, 2002 @05:16AM (#4511191)
      Actually, they are Linux Clusters.

      I was visiting the Vancouver site a couple of months ago when they were assembling it. It looks sweet. A nice big array of dual Athlons. The system is being linked together over CA*Net 3, a nationwide OC-192 fibre network.

      They're also experimenting with distributing different parts of the system in different locales. Like disk storage in one part of the country, heavy number crunchers in the other, to see how distributed a system can really be and still function well.

      CA*Net is still looking for applications; the network is severely underutilized. http://www.canarie.ca/advnet/canet3.html
    • SHARCNET (Score:3, Informative)

      by WEFUNK ( 471506 )
      I believe they will use high-speed networks of Linux-based Beowulf clusters (actually clusters of clusters of clusters). Ontario has already established SHARCNET [sharcnet.ca] between a number of universities with a total of over 500 COMPAQ Alphas (mostly four-processor, 833MHz Alpha SMPs) and some Pentiums, all running Linux. A press release [comms.uwo.ca] from last year gives a good overview of the project, already first in Canada and the 11th most powerful academic computing system in North America. I believe the Canada-wide project will essentially form a cluster of these clusters of clusters.

      SHARCNET has been up and running for a while and last year accounted for about 27% of supercomputing power in Canada (half of all supercomputing power in Canadian universities), with three sites on the Top 500 list and total power exceeding institutions like Cambridge, Princeton, Cornell and Caltech. There's loads of information available about the hardware and software [sharcnet.ca] used at each facility, as well as CPU load and usage statistics at member sites, like these status charts [sharcnet.ca] from the most powerful individual site, at the University of Western Ontario. As for applications, a number of researchers are already using the system for a variety of projects across science, engineering, and economics.
  • Big Iron. (Score:3, Insightful)

    by jericho4.0 ( 565125 ) on Wednesday October 23, 2002 @04:46AM (#4511130)
    I think they're talking about linking together several (5-20?) large computers over fat pipes, rather than many small ones. Although seeing that all of Canada's research computing power is less than that of the University of South Florida, that might not mean much.

  • The computers will work jointly on a molecular chemistry research question that would take a single computer as long as six years to complete. Jonathan Schaeffer and Paul Lu, professors in the U of A's department of computer science, expect their virtual supercomputer will do the work in one -- one day, that is.

    So how is this different from DC or SETI?
  • by archeopterix ( 594938 ) on Wednesday October 23, 2002 @05:16AM (#4511197) Journal
    The article isn't very specific about the kind of problems they will try to solve. The 'search' problems, where you have a big search space that can be easily divided into smaller chunks, are easy. Unfortunately some problems cannot be easily split into many independent parts - simulations generally fall into this category. Weather simulations, nuclear explosion simulations, well, simulations in general :-). You can't just assign each computer a square mile of terrain, do the computations for the whole simulation, then merge the results - the neighboring squares interact, so computers have to communicate after each time slice. This is where communication will probably slow your 'network supercomputer' down. No matter how fat the pipes are, they will be several orders of magnitude slower than an internal supercomputer bus in terms of latency. To put it short: this might be of some use, but they'd better start gathering money for a real supercomputer.
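
    A small sketch of why that is (a toy 1-D heat diffusion, invented numbers, one process standing in for several machines): every time step each chunk needs its neighbours' edge cells before it can advance, so there is one round of communication per step.

    # Each "node" owns a slice of the rod; every time step it needs the edge
    # values of its neighbours (a halo exchange) before it can update its slice.
    CHUNKS, CELLS, ALPHA = 4, 8, 0.1
    nodes = [[20.0] * CELLS for _ in range(CHUNKS)]
    nodes[0][0] = 100.0                      # initial heat spike at the left end

    def step(nodes):
        new = []
        for i, cells in enumerate(nodes):
            left = nodes[i - 1][-1] if i > 0 else cells[0]                  # halo from left neighbour
            right = nodes[i + 1][0] if i < len(nodes) - 1 else cells[-1]    # halo from right neighbour
            padded = [left] + cells + [right]
            new.append([padded[j] + ALPHA * (padded[j - 1] - 2 * padded[j] + padded[j + 1])
                        for j in range(1, len(padded) - 1)])
        return new

    for _ in range(100):        # 100 time steps -> 100 rounds of neighbour exchange;
        nodes = step(nodes)     # over the Internet each round pays full WAN latency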
  • UofA called me and asked me if I still had my Commodore Amiga and could they borrow it! ;-)

    (I live in the Great White North, so I'm allowed to say this!)

    -psyco
  • by XinuXP ( 617662 ) on Wednesday October 23, 2002 @05:23AM (#4511208) Homepage Journal
    step 1:build the largest virtual supercomputer in canada

    step 2: ???

    step 3: global domination!
  • More info (Score:4, Informative)

    by Anonymous Coward on Wednesday October 23, 2002 @05:31AM (#4511224)
    Another article [canada.com]


    From the article: Gerald Oakham and his fellow physicists have a problem. In the hunt for the most elusive speck of matter known to science, they are about to generate more data than any computer on the planet can analyse.

  • by Chutzpah ( 6677 ) on Wednesday October 23, 2002 @05:34AM (#4511227)
    My school [ubishops.ca], in conjunction with the Université de Sherbrooke [usherb.ca] (mostly the U de S), is setting up a world-class Beowulf cluster for general scientific work. A physics professor at my university, who also happens to be a world-class astronomer (Dr. Lorne Nelson), has a research grant that he is using to help fund this cluster.
  • by magnum3065 ( 410727 ) on Wednesday October 23, 2002 @05:39AM (#4511238)
    rather than joining a currently existing project? I'm a student at the University of Virginia and we have a project like this that's been going on for 5 years now: http://legion.virginia.edu/

    They talk about how they feel that Canada should be pursuing its own supercomputing, but why not join up with other universities that have been pursuing similar projects and give Canada access to the computing power of other countries as well? Isn't the goal here for people to work together for mutual benefit? I don't understand why they feel the need to isolate their Canadian initiative, rather than giving Canada access to computing power far greater than they can achieve on their own.

    Check out photos of UVA's branch of Legion: http://legion.virginia.edu/centurion/Photos.html
    (I think these are a little out of date. There's a bunch of rack-mount machines in there now too)
    This room has big glass walls, and every time I walk by it I wish I had a room like it.
  • by el_flynn ( 1279 ) on Wednesday October 23, 2002 @05:40AM (#4511240) Homepage
    The article says that the computers "will be linked by the Internet, but involve a simple networking system". How many of you are willing to bet that someone is already gleefully planning a DDoS party?
  • by g4dget ( 579145 ) on Wednesday October 23, 2002 @05:42AM (#4511242)
    I don't quite get what the news is. I mean, these kinds of efforts have been around for a couple of decades, in various forms. Nor are Canadian academics particularly deprived--people in the US and Europe feel that they have to set up the same kinds of projects to get the cycles they need.

    So, why is this news? Is there some new technology they are using?

  • Why don't they mandate that companies must run such an application on their workstations?

    Between 7pm and 7am we have 50 PCs doing nothing at all in my office; I'm sure they could be doing some useful math, especially if there's a national emergency.
  • Raw Power (Score:1, Funny)

    by Anonymous Coward
    Imagine the game of solitaire that the secretary will be able to play on THAT supercomputer.

    Those cards will FLY to their place, thereby improving productivity ten-fold.
  • by Lumpy ( 12016 ) on Wednesday October 23, 2002 @06:51AM (#4511359) Homepage
    I clipped this out of reuters....

    Today, the Canadian Ministry for Computing announced the initial tests of its Canada-wide massive computer project.

    Computer scientist Thom Serveaux had this to say: "When we switched it on, every command was answered with the word 'eh?', it kept calling us 'knobs', and it kept asking for 'back bacon'. We are trying to see if there are any problems in the northern nodes like the Quebec nodes, which started a fight with the other nodes by demanding every command be repeated in French."

    Updates will be posted on their progress.

  • Shatner (Score:4, Funny)

    by Dannon ( 142147 ) on Wednesday October 23, 2002 @06:57AM (#4511372) Journal
    Isn't William Shatner from Canada? Maybe this is an attempt to develop a more powerful 'Priceline SuperComputer'....

    A supercomputer capable of creating more convincing commercials, perhaps?
  • by west ( 39918 ) on Wednesday October 23, 2002 @07:00AM (#4511376)
    Saving up for a "real" supercomputer is a pipe dream. Supercomputers cost several million dollars a year in upkeep, and that's the killer. You might easily get grants to allow a project to use 'x' dollars worth of computing, but nobody is going to approve a capital grant that requires millions each year.

    When the University of Toronto did purchase a Cray in the mid-eighties, there was a massive fight. Many felt that the resources to support the Cray were sucking money desperately needed everywhere else. (Although, boy, we in meteorology were a happy bunch...)

    While lower profile and somewhat more painful to use, this is a far more practical solution for the realities of academic computing today.

    • Here in Finland at least, we have a national supercomputing centre, which manages the supercomputers used by the universities; commercial companies can also buy computing time there. They certainly get cost benefits from an arrangement like that compared to everyone buying their own supercomputer, which would then sit underutilized most of the time.
  • Though I knew about it around two years ago... My undergrad thesis advisor is on the director's committee... =)

    Our, ehrm... Big Iron... (It'll get bigger!! Really!!)

    http://www.cs.unb.ca/acrl/
  • Grid Computing (Score:5, Interesting)

    by npch ( 53012 ) on Wednesday October 23, 2002 @07:33AM (#4511471)
    As many of the other posters have pointed out, this work isn't necessarily new, but it is news.

    There are other tools out there which do this - Legion, Avaki, Sun Grid Engine, Globus, to name a few - but the goal is to create a network of (mostly) supercomputers which doesn't require a lot of reconfiguration at each site. What differentiates this work from many other approaches is that it is transparent to the system administrator.

    For those who ask "why can't you just do something like seti@home?", the answer is that not all problems in science and business can be easily decomposed into small chunks. Bandwidth requirements and latency may also be a problem. A lot of scientific programmers have to worry about communications much more than about processing power (although this tradeoff has been seesawing backwards and forwards with new advances in both technologies).

    There's a worldwide effort through both business and academia to create a number of good, interoperating frameworks for doing this sort of transient, virtualised supercomputer.

    Have a look at the Global Grid Forum [ggf.org] (which is becoming the focus for Grid computing standards) for more information.
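
    Back-of-envelope arithmetic with assumed round-number latencies (roughly microseconds on a machine-room interconnect, tens of milliseconds across the Internet) shows the scale of the problem for a code that exchanges data every time step:

    # Rough arithmetic, not a benchmark: wall-clock time spent just waiting
    # if a solver does one neighbour exchange per time step.
    STEPS = 1_000_000                 # time steps in a long run (assumed)
    CLUSTER_LATENCY = 10e-6           # ~10 microseconds on a local interconnect (assumed)
    WAN_LATENCY = 30e-3               # ~30 ms between cities over the Internet (assumed)

    for name, lat in [("cluster interconnect", CLUSTER_LATENCY),
                      ("Internet WAN", WAN_LATENCY)]:
        print(f"{name}: ~{STEPS * lat / 3600:.1f} hours lost to latency alone")
    # cluster interconnect: ~0.0 hours; Internet WAN: ~8.3 hours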
  • by vu2lid ( 126111 ) on Wednesday October 23, 2002 @07:40AM (#4511493) Homepage
    Looks similar to the Grid Computing project [vnunet.com] from India, announced sometime back ...
  • See the supercomputer top500 [top500.org].

    I think that a year ago or so, the Japanese supercomputer for earthquake simulations had more power than the other top 499 supercomputers combined.

    Sure, they'll be able to build a large, loose network of computers, but the access-speed will hardly compare to a single-site computer.

  • CISS (Score:3, Informative)

    by bartman ( 9863 ) on Wednesday October 23, 2002 @08:46AM (#4511735) Homepage Journal
    The 'Canadian Internetworked Scientific Supercomputer (CISS)' website is located here: http://www.c3.ca/ce/ciss_t.html [c3.ca]

    It seems that November 4th they will be doing a full 'production' test. Cool.
    • Looks like they based their protocol on ssh.

      No MS Passport or .NET... odd, I thought MS was in the market for Universities. :>
      • Re:Based on SSH (Score:3, Informative)

        >Looks like they based their protocol on ssh.

        Heh heh, the U of Alberta hosts the web and ftp space for OpenSSH and OpenBSD.
        $ ftp ftp.openbsd.org
        Connected to openbsd.sunsite.ualberta.ca.
        220-
        220- Welcome to SunSITE Alberta
        220-
        220- at the University of Alberta, in Edmonton, Alberta, Canada

        [SNIP]
        Also, Bob Beck [ualberta.ca] works at U of A. Bob helped develop the first OpenSSH release [openssh.com], not sure how active he is these days.

        For U of A, that all adds up to "premium class" tech support for anything to do with SSH.
  • First Task? (Score:4, Funny)

    by Hard_Code ( 49548 ) on Wednesday October 23, 2002 @09:06AM (#4511834)
    Search for the elusive beer molecule [sky-watch.com]. Eh.
  • by maddskillz ( 207500 ) on Wednesday October 23, 2002 @09:29AM (#4512000)
    The new supercomputer will be used to determine when the Maple Leafs will win the Stanley Cup next.
    • I was wondering myself what use a supercomputer would be for Canada. I mean, as far as I know they don't have a nuclear weapons testing program, and how hard is weather prediction in Canada, anyway?

      "Cold again, eh?"

      Reminds me of the weather forecaster character that I think was played by Steve Martin in "The Single Guy," who pre-recorded his LA weather forecasts.

  • by SETY ( 46845 ) on Wednesday October 23, 2002 @09:41AM (#4512079)
    How about the High Performance Computing Virtual Laboratory of Eastern Ontario?


    The High Performance Computing Virtual Laboratory (HPCVL) was formed by a consortium of four universities located in Eastern Ontario (Carleton University, Queen's University, The Royal Military College of Canada, and the University of Ottawa).
    http://www.hpcvl.org/


    It's also in the Top 500 supercomputer list, so it must be half-decent. So if four universities can have a decent computer in Canada, others probably do too.

  • by altadel ( 89002 ) on Wednesday October 23, 2002 @10:56AM (#4512643)
    The University of Alberta has over a dozen clusters. Their central computing facility (CNS) has two clusters, Physics has three or more, CS at least one, Chemistry has seven clusters (0.5 THz total cycles), MechEng at least one, EE at least one, ...

    The U of A (U of Eh?) also participates in MACI (www.maci.ca) and houses three SGI Origin computers, and is involved with the WestGrid project (www.westgrid.ca).

    Prof. Schaeffer's point isn't that we don't have "computrons", but that research is increasingly using simulations (see Jaeger's work) and other computational methods, and computational resources are becoming increasingly overloaded as budgets are not growing as quickly as research advances.
  • I have launched my own shared memory virtual SMP supercomputer, that does not need any PVM or MPI network memory fetches. The "virtual" part is that for an N-processor virtual supercomputer, every Nth clock cycle belongs to a different "virtual" processor, all sharing the same physical processor in one machine.

    </joke>

  • The government has made all four of its 386s available for this venture, which brings, along with all private computers in Canada, the entire total up to seven.

    However, the government is concerned that the supercomputer doesn't properly reflect Canadian values. For example, the color of the cases is a light beige, which is racist. So all computers that are part of this net will be painted in multiple colors.

    In addition, they must be used to do CANADIAN computation from a CANADIAN programmer at least one run time out of three.

    And, to further show off Canadian innovation, they've developed a new language for this cluster, C eh eh.

    Here's an example.

    #define EH EH
    #include "political_correctness.eh"
    #include "Liberal_Party_Donation.eh"
    #include "Kickback_via_golf_course.eh"
    #include "Payoff_to_Bombardier.eh"

    eh main (eh)
    eh
    printefe(FONT_DOUBLE_SIZE, "Bonjour, monde")eh
    printf(FONT_HALF_SIZE, "Hello, world")eh
    eh
  • WestGrid (Score:2, Informative)

    by chhamilton ( 264664 )
    I wasn't aware of a truly national effort, but for some time now there's been a project called WestGrid [westgrid.ca] with exactly the same goals, though it's been a western Canada thing only. It's a grid computing effort between UBC, SFU, UoA, UoC, UoM, UoSask, and ULeth, where most of the contributed computing power is home-brew Linux clusters. The lab I work in at Simon Fraser University (SFU) in Vancouver, BC, houses one of the contributing nodes, a 192-CPU (96 dual Athlon 1800s) Beowulf cluster (it made the Top 500 [top500.org] list [top500.org] this year, barely ;).
  • (sandfly-journal, ED)

    Today, University of Alberta computer scientists announced an experiment involving hundreds of computers around the country in an attempt to build what may be the largest supercomputer in Canada.

    "Mindnumbing" was the word Dr. Al Koholic used to describe the system devised by himself and his graduate students.

    The system will include hundreds of 286s and 386s strung together with dental floss and toothpicks. "With recent budget cutbacks and lack of funding we had to head back to the drawing board and devise this cost-saving, efficient design".

    Dr. Koholic asks Canadians to "... donate any computer hardware they may have that is collecting dust ..." in attics, spare bedrooms and being used as doorstops. "8088, 8086, Apple 2Es... No problem".

    Once the supercomputer is complete it will be used to sort through the megabytes of data generated by the atom-smashing lab also located at the UofA. Physicists there are in search of the elusive 'Hangoverless-drunk' state obtained by smashing together beer molecules.


  • "On Nov. 4, thousands of computers from research centres across the country will be strung together by a U of A effort to create the most powerful computer in this country."

    Thereby surmounting the previous record holder [apple-history.com].
