Seti@Home Bandwidth Problems

reflexreaction writes: "With so many of the /. users actively using and supporting Seti@home, many of you have noticed that in the last couple of weeks Seti has had some serious problems receiving completed data and getting new data to process from its 3 million members because of network bandwidth problems. All the gritty details are here. The article details some things that users can do to alleviate the problems, including connecting during off hours and downloading more than one unit at a time using programs like SetiQueue for PC and Seti Unit Manager for Mac. Donations are also accepted. There is also a plea for bandwidth donations. It will be truly unfortunate if this page becomes /.ted without benefit from /. users."
  • There goes whatever remaining bandwidth they had...
    • by d.valued ( 150022 ) on Monday February 18, 2002 @08:53PM (#3029512) Journal
      I'm not sure whether this is a good thing or a bad thing. Lemme elaborate.

      Disclaimer: I have never been part of SETI@home; I feel that statistically it's a colossal waste of time. I've been part of both the GIMPS project [mersenne.org] and the distributed.net RC5-64 [distributed.net] projects for about four years now. I've got the Kevlar body armor halfway on.

      The good, I guess, is that there's such a colossal interest in this. I mean, hell, if KzAplOcQQ and boB are sharing the Encyclopaedia Galactica (or the Hitchhiker's Guide, whatever) over radio waves, then hopefully we'll eventually find it in something that resembles paEr Unicode.

      However, I see a great many downsides to this.

      First off, if the aforementioned theoretical KzAplocQQ and boB of the paEr race have to use radio waves, then there's a pretty good chance they haven't been able to go superphotonic, in which case we're going to have a long wait before we can even think of going to their New York and flipping them the left tentacle.

      Secondly, how will we be able to decode a xenic dataset, much less their language? I mean, what if they can transmit trits or quaytes while we're looking for bits or bytes? How do we know what a newline would look like? Hell, do we even know if it would even be necessary? And what about the characters? What if the Chinese language is easier to interpret than paEr?

      Third, there are much better uses of free cycles, at least fiscally. GIMPS will provide a hundred kilobucks to the first person to successfully find a ten megadigit Mersenne prime. distributed.net provides a two kilobuck prize and a large donation to the FSF, EFF, or other worthy charities. Even the commercial distributed computing projects at least pay for the use of your rig.

      (PS: paEr is a theoretical name for a xenic (alien) species, contrived by randomly entering characters on the number pad. KzAplocQQ is an unpronounceable name, unless you're lucky or high. boB just sounds funny.)
      • Contrary to your claim, there is no better use for 'free cycles' than what I decide to use them for. My computer, my decision - I own the machine and I don't owe it to anyone to dedicate my 'free cycles' to any project other than those that I choose. If I want to give them to SETI@Home, who's to gainsay me?

        Now, do I believe that there's intelligent life out there just yearning to have its radio signals read? Nope, I don't; although I think it's silly to believe that humans are the only intelligent life in the galaxy, I do believe that intelligence is so rare that in all likelihood our nearest neighbors are too far away to communicate with. So why allow SETI to suck up my extra cycles? Because although I think the project has zero chance of discovering intelligent life, the work and the hopes of all of these dedicated folks appeal to me. I let them use my cycles so they can get closer to answering the question near and dear to their hearts, even though ultimately I don't think they'll like what they find (i.e., silence).

        Still, it doesn't matter if anyone else thinks I'm 'wasting' my cycles. They're mine to waste as I please.

        Max
    • The web pages aren't subject to the bandwidth restrictions. Only the S@H data servers are.
  • Another solution (Score:1, Interesting)

    by spt ( 557979 )
    Another solution to Seti@home's bandwidth problems is for the clients to do something more useful. Like cure cancer [ud.com].
    • Who says ET doesn't already have a cure? And who even knows if there is a cure?
      • I'm not taking any drugs whose recipe came from a potential alien invasion force.
        • Umm.. think about that for a minute. Take the drug and possibly live, or maybe die a quicker, shorter, less painful death. Don't take it, and die the slow, lingering spiral into oblivion (and pain) that is cancer. I'd take it without hesitation.
    • Distributed.net [distributed.net] is also looking for new members!

      willy
        • Dnet is useless. Brute-forcing an algorithm with a known number of computational cycles needed to exhaust the whole keyspace is the most stupid thing I could think of for tasking in parallel. If people are not doing it for the money, I have no clue why they would waste power on something whose only value is for those who like to stoke their egos over how many keys/sec their latest and greatest or oldest and most obscure rig can pump out.
    • Re:Another solution (Score:1, Informative)

      by Anonymous Coward
      I second that. Not that one is necessarily more useful than the other but perhaps there should be more... distrobution... of computing power to different causes. United Devices [ud.com] seems to be a worthy alternative to SETI@Home.
    • Maybe if they had clients for something other than Windows, I would!
    • by zAmb0ni ( 214345 )
      Cure Cancer with UD? Think again.
      If you didn't see the story last week here it is (http://www.theinquirer.net/15020202.htm)

      "THE INTEL/UD cancer project is about to close, but there is confusion as to whether this is due to a shortage of funds or because the work has been completed. According to Andy Prince, Director of Corporate Communications at UD, the cancer programme is about to be terminated because its goals have been met.

      Said Prince: "Absolutely. We have actually exceeded our goals as far as the cancer project goes. According to the contract, we agreed to analyze 250M molecules against 8 proteins. We are close to finishing 3.5B molecules against 12 proteins and will be announcing the close of the project soon - not a premature close, but the actual end of the project. "
    • distributedfolding seems to have been having problems for a while too. I haven't been able to upload for some time. Bandwidth?
  • by asv108 ( 141455 ) <asv@@@ivoss...com> on Monday February 18, 2002 @08:40PM (#3029453) Homepage Journal
    Current network bandwidth problems
    2/6/2002
    The problem

    When your SETI@home screensaver downloads a work unit, the data flows from a server in our laboratory, through the University of California at Berkeley campus network, and through a connection to the commercial Internet. This connection is shared by all UCB Internet users - departmental web and FTP sites, email, SETI@home, and so on. The University pays for bandwidth on this connection; it is currently buying 70 megabits per second (Mbps). The student residence halls have a separate 40 Mbps connection.

    Until recently, SETI@home was given about 25 Mbps, and the remaining 45 Mbps was shared by the rest of campus. But starting last month (January 2002) the bandwidth used by the rest of campus increased in an unexpected and unexplained way. During peak periods the demand now exceeds 70 Mbps. If SETI@home continued to use 25 Mbps, the performance of all other outgoing traffic would suffer.

    The UCB network administrators have worked hard to balance the bandwidth needs of SETI@home and the rest of campus. Currently, SETI@home traffic is given lower priority than other traffic. During peak periods (typically 10 AM - 10 PM PST) SETI@home averages 6 Mbps, and sometimes gets no bandwidth. During non-peak periods SETI@home gets as much as 50 Mbps.

    When SETI@home is not getting enough bandwidth, our data server backs up - all of its processes are waiting to send data, and it can't accept new connections. During these periods, your screensaver will report that it "can't connect to server".

    The impact on our overall computing rate is significant but not too serious - the rate has dropped about 25%. But many SETI@home users are unhappy that their computers are sitting idle for many hours, waiting for data. We share this unhappiness, and are working to solve the problem.

    Short-term solutions
    We're working on several short-term solutions:

    Increase the bandwidth of UCB's network connection. We hope to "expand the pipe" by about 10 Mbps - enough to ease, but not eliminate, the crisis. The issue is money - bandwidth costs about $300 a month per megabit, and neither SETI@home nor the university has budgeted for this cost.

    Send data more efficiently. Currently work units are encoded as text. By sending them in binary, we can shrink them by about 25%. (Note: data compression isn't effective for our data, which is primarily random noise.) This change will require a new version of the client software.

    Increase the amount of computation per work unit. Doubling the CPU time per work unit - by looking at more chirp rates, for example - will reduce bandwidth by 50%. There is scientific justification for doing this, although the law of diminishing returns applies. This will also require a new version of the client software.

    Long-term solutions
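The text-versus-binary saving can be sketched in Python (purely illustrative; the 16-bit sample width and the sample values are assumptions, not the actual SETI@home work-unit format):

```python
import struct

# Hypothetical work unit: a stream of 16-bit signed samples (assumed width).
samples = [1234, -567, 89, -10321] * 1000

# Text encoding: one decimal number per line, roughly like a text format would.
as_text = "\n".join(str(s) for s in samples).encode("ascii")

# Binary encoding: a fixed two bytes per sample, little-endian.
as_binary = struct.pack("<%dh" % len(samples), *samples)

print(len(as_text), len(as_binary))  # binary comes out well under half the size here
```

The exact saving depends on the real format; the point is just that fixed-width binary avoids the per-digit and per-separator overhead of text.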

    The long-term solution is to allow work units to be sent from servers outside UC Berkeley. This could be done, for example, by sending work units to servers at organizations - companies and universities - that are willing to donate part of their outgoing network bandwidth to SETI@home. In addition to solving the current problem, this could greatly increase our overall data capacity, enabling us to search for ET signals in a wider frequency band.

    This solution represents a significant change to our software; we will use this approach in our next-generation software. We are seeking funding to develop this software, and it won't be ready for at least 6 months.

    What you can do

    There are a couple of things you can do to keep your computers busy processing SETI@home data:

    If you connect manually (e.g., over a modem) try connecting during off hours (23:00 to 3:00 Pacific Standard Time, or 7:00 to 11:00 UT). You can check the Server status page to see if we're currently dropping connections.

    Download more than one work unit when you connect. This can be done manually, or by automated workunit caching software. Example programs include SetiQueue for Windows, or Seti Unit Manager for Macintosh. For more information about other SETI@home add-ons see our links page.
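What SetiQueue-style caching tools do can be sketched roughly like this (a toy model; the class and method names are invented, not taken from any real tool):

```python
import queue

class WorkUnitCache:
    """Toy sketch of a work-unit cache: fetch several units per
    connection, then feed them to the cruncher one at a time so the
    CPU never idles between connections. Names are hypothetical."""

    def __init__(self, depth: int = 10):
        self.units = queue.Queue(maxsize=depth)

    def refill(self, fetch_unit) -> int:
        """Call once per connection window; `fetch_unit` downloads one unit.
        Returns how many units were fetched this time."""
        fetched = 0
        while not self.units.full():
            self.units.put(fetch_unit())
            fetched += 1
        return fetched

    def next_unit(self):
        """Hand the cruncher its next unit from the local cache."""
        return self.units.get()

cache = WorkUnitCache(depth=3)
cache.refill(lambda: "workunit")  # e.g. grab 3 units during off hours
print(cache.next_unit())
```

The design point is simply decoupling download time (bursty, during off hours) from crunch time (continuous).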

    To help us achieve a short-term solution, you can help in two ways:

    Donate to SETI@home. This will enable us to buy network bandwidth.

    Help us find "bandwidth sponsors". We hope that a major commercial ISP might donate bandwidth to UC Berkeley to help SETI@home. If you work for, or have contacts in, such a company, please contact us.

    • But starting last month (January 2002) the bandwidth used by the rest of campus increased in an unexpected and unexplained way.

      Someone tell those guys at Berkeley to stop downloading so much freakin' PR0N!!!

      -Russ
  • Use Google Mirrors! (Score:4, Informative)

    by joebp ( 528430 ) on Monday February 18, 2002 @08:40PM (#3029454) Homepage
  • by mshomphe ( 106567 ) on Monday February 18, 2002 @08:43PM (#3029468) Homepage Journal
    Doubling the CPU time per work unit - by looking at more chirp rates, for example - will reduce bandwidth by 50%.

    Man, that's why my computer is so damn slow! I need to replace my bird!
  • Easy solution (Score:4, Insightful)

    by xX_sticky_Xx ( 526967 ) on Monday February 18, 2002 @08:44PM (#3029472) Homepage Journal
    If their BW problems stem from the fact that the rest of the campus has experienced a "mysterious" increase in network traffic, a good start may be to block access on ports used by popular file sharing programs. I'll bet that this is where a lot of the BW demand is coming from since the increase happened at the beginning of a new semester.
    • Not that easy - most of the file sharing program BW is used by the residence halls, which are on a separate bandwidth allocation. I'll bet very few UC Berkeley computers on campus, which are used by staff or students for educational purposes, involve file sharing or related programs. The BW increased at the start of the new semester, but last time I checked, computers being used tend to use more BW than computers not being used...
    • Unfortunately, Berkeley has two pipes, one for the residence halls and one for the rest of campus. It seems odd that they can't figure out where all the data is coming from, but I don't think it's students in the dorms. It's possible that someone is running a public proxy or an FTP on their dept. network, but you'd think a renowned computer school like Berkeley could afford staff and software that could figure the simple stuff out.

    • And who says that SETI is more important than file-sharing? I'm not saying it isn't, but your "easy solution" sounds a little knee-jerk to me.
    • Perhaps they should move to a more peer-to-peer system, with new work units downloadable by "peers." Everyone would be a bandwidth sponsor.

      add some digital signatures, and you could avoid (or detect) tampering.
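A minimal tamper-detection sketch along those lines, using Python's standard hmac module (note: an HMAC's verification key must stay secret, so a real peer-to-peer deployment would want public-key signatures instead; the key and payload here are made up):

```python
import hashlib
import hmac

SERVER_KEY = b"hypothetical-server-secret"  # assumption: only the project holds this

def sign_unit(payload: bytes) -> bytes:
    """Tag a work unit before handing it to volunteer mirrors."""
    return hmac.new(SERVER_KEY, payload, hashlib.sha256).digest()

def verify_unit(payload: bytes, tag: bytes) -> bool:
    """A client accepts a unit only if the tag checks out; a mirror
    that altered the data cannot recompute a valid tag without the key."""
    return hmac.compare_digest(sign_unit(payload), tag)

unit = b"workunit-1234: radio samples..."
tag = sign_unit(unit)
print(verify_unit(unit, tag), verify_unit(unit + b"x", tag))  # True False
```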
    • Re:Easy solution (Score:2, Informative)

      by Anonymous Coward
      This presentation [nlanr.net] from the Berkeley network admin (Ken Lindahl) shows exactly how the BW has increased, and the problems they encountered in rate-limiting traffic.

      In fact, more presentations about the BW problem at several universities are here [nlanr.net]. They'd like to use traffic shapers, but traffic shapers are only designed to handle T1-level traffic, not OC3-level traffic.

      I saw the presentations in person (and I'm from Berkeley). They don't want to get in the business of deciding what is valid traffic, nor investing time to block the various workarounds (e.g., HTML proxies) that people will use to get around the filters.

      A temporary solution is to use proxies at other campuses to send the traffic to Berkeley via Internet2 [internet2.edu], since that traffic is free and isn't being restricted at Berkeley.
  • Distributed.net just uses a network of proxies, are the SETI people idiots or did they just not have the forethought that the distributed.net people had?
    • Despite the fact that nothing new has come out of distributed.net for a while now, it's still the best-run distributed computing network. They have the most clients, for the most platforms with the most features, and that's why I continue to install the client on several PCs a month.

      I've used SETI@Home and United Devices before, but frankly, I didn't like them much.

      SETI has more users than it needs; last time I checked, the same data was being tested over and over again, simply because they have more volunteers than they need. I'd much rather see that CPU time go to the projects that need it.

      United Devices has an admirable goal, curing cancer, but a lack of SMP support in their clients, and the lack of a Linux or Mac client pretty much rules them out for me. I use Windows, Linux, and Mac OS X every day, I can't run United Devices on all those platforms...

      So come on everybody that's running SETI, save them some bandwidth, come join distributed.net, and we can power through the rest of RC5-64!!!

      Just don't get me started on the OGR projects, they've been open for too long, and no one seems to know how to close them. OGR-24 should have been done a long time ago, but isn't, due (apparently) to a lack of managerial oversight, or poor planning.
      • Why did they jump straight to OGR24? I thought we didn't know the OGRs higher than 19 yet?

        SETI has more users than it needs; last time I checked, the same data was being tested over and over again, simply because they have more volunteers than they need.

        Wrong. Learn, before you speak.

        From one of the FAQ pages [berkeley.edu]:

        If a signal is observed two or more times, and it's not RFI or a test signal, the SETI@home team will ask another group to take a look. This other group will be using different telescopes, receivers, computers, etc. This will hopefully rule out a bug in our equipment or our computer code

        Need you still wonder why the same Work Unit is processed by 2 or 3 machines?


        Didn't think so.
  • Gritty details? (Score:2, Insightful)

    by e5z8652 ( 528912 )
    "But starting last month (January 2002) the bandwidth used by the rest of campus increased in an unexpected and unexplained way."

    Doh. I was looking for the gritty details. Massive DDOS bot invasion? SNMP exploit? Warez? Rogue Quake III servers? Son of Napster? Backhoe dug up a cable? There has to be at least an educated guess as to where the bandwidth is going.

    I think the network admins at UC Berkeley are just cutting back on Seti, but don't want to admit it publicly. Bad press and all.
    • Re:Gritty details? (Score:5, Insightful)

      by acoopersmith ( 87160 ) on Monday February 18, 2002 @09:05PM (#3029567) Homepage Journal
      Actually, the network admins have pointed the finger at Kazaa & gnutella. According to the UCB Director of Communications & Network Services, "kazaa and gnutella account for more than half the bits in aggregate". And it's not just SETI that's suffering - all network users have been affected. Unfortunately, a lower priority or outright ban on those services has been rejected due to policy and legal issues.
      • Policy and legal issues? How about "illegal filesharing is making the network unusable for educational purposes?" I think that'd let them clear out the problem real fast, and I KNOW it's in the policies.
      • They should set up an internal sharing network... no reason to waste 5Mbps with everyone downloading the latest bs when they can download it once and spread it through their LAN...
      • Yeah - write a gnutella "invisible proxy" that sits right on the gateway.

        Tell it to cache replies for 45 seconds (so that external gnutella clients always show up later than local clients) and provide a list of IP addresses of local clients that've made requests in the last 10 minutes (so that other gnutella users can hook up to the local gnutella network).

        Leave e/t else alone, and I bet gnutella usage drops by 80%, while still allowing students to download the latest Britney mpeg. Block Morpheus.
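The 45-second reply cache could look roughly like this (a toy sketch with invented names; a real gateway would of course have to speak the gnutella protocol itself):

```python
import time

class ReplyCache:
    """Hold query replies for a TTL so locally cached answers can be
    served ahead of fresh external ones. All names are hypothetical."""

    def __init__(self, ttl: float = 45.0):
        self.ttl = ttl
        self._store = {}  # query -> (reply, expiry timestamp)

    def put(self, query: str, reply: str, now: float = None) -> None:
        now = time.time() if now is None else now
        self._store[query] = (reply, now + self.ttl)

    def get(self, query: str, now: float = None):
        now = time.time() if now is None else now
        entry = self._store.get(query)
        if entry is None or now > entry[1]:
            self._store.pop(query, None)  # expired or missing
            return None
        return entry[0]

cache = ReplyCache(ttl=45.0)
cache.put("britney.mp3", "host 10.0.0.5", now=0.0)
print(cache.get("britney.mp3", now=30.0))  # still cached
print(cache.get("britney.mp3", now=60.0))  # expired -> None
```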
  • "... But starting last month (January 2002) the bandwidth used by the rest of campus increased in an unexpected and unexplained way. During peak periods the demand now exceeds 70 Mbps. ..."

    Student goes home for Xmas. Student gets new Windows XP box. Student chats like a 20-something adult using built-in chat SW.
    Bandwidth disappears Jan 03, 2002.
  • Doh. (Score:2, Funny)

    by rudib ( 300816 )
    Go ask the little green men if they could perhaps lend some bandwidth =)
  • by Black Parrot ( 19622 ) on Monday February 18, 2002 @08:49PM (#3029494)


    Possibly of related interest, there is an article on Internet Scale Operating Systems [sciam.com] in the newest Scientific American.

  • Scaleability (Score:4, Interesting)

    by WndrBr3d ( 219963 ) on Monday February 18, 2002 @08:50PM (#3029499) Homepage Journal
    You have to give the Seti@Home team their props for making a system that's scalable and able to handle the user load from the first 100,000 users to the now 3,000,000.

    I've always believed the bottleneck in distributed computing was the data packets being sent/received, because the demand will grow exponentially the more users you acquire.

    Most applications seem to remedy this problem by limiting the data packet sizes to 5 - 15k compressed packets. This has worked for projects like Distributed.net.

    I can only foresee the future of this problem being the same one that plagues video card chipsets, which is instead of re-engineering the device to make a more robust and lower-overhead solution, they'll just throw a bigger pipe on the line (much like memory bandwidth demand).

    But again, my respect goes out to the Seti@Home team and their sponsors for architecting a technological data mining marvel.
    • Re:Scaleability (Score:1, Insightful)

      by Anonymous Coward
      > because the demand will grow exponentially the more users you acquire

      Call me crazy, but I'd guess that demand on seti's servers grows linearly with the number of users. Unless each new user gets sent all of the data ever sent to all the previous users, of course.
      • This would be true if each user only ran 1 instance of Seti@Home, but I'm sure you know that "score whores" run it on about 40 different machines.
      • Call me crazy, but I'd guess that demand on seti's servers grows linearly with the number of users.

        However, the number of users grows exponentially with respect to time. Grandparent specified only that "the demand will grow exponentially" and that it will increase as the number of users increases. A colloquial meaning of "grow exponentially" is to grow following the early exponential-like stages of a logistic model [google.com], which describes the spread of information such as a web site URL or a Warhol worm [berkeley.edu].

  • When SETI@Home first started, they were having quite a difficult time with people resubmitting completed work units and forged results in order to skew their group statistics. To keep things honest, they resend the same work unit to more than one system and compare the results.

    This has to be a difficult balancing act for them; while they don't give details about the exact nature of the doublechecking (so that people don't try to bypass it), this has to be eating the bandwidth for them.

    Maybe a better solution is not to increase bandwidth but to encrypt the data to prevent tampering?
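SETI@home doesn't publish the exact check, but the general redundancy idea - accept a result only once independently computed copies agree - can be sketched like so (the function name and quorum value are made up):

```python
from collections import Counter

def accept_result(results, quorum=2):
    """Accept a work unit's result only if at least `quorum` independently
    computed copies agree; otherwise the unit should go out again."""
    if not results:
        return None
    value, count = Counter(results).most_common(1)[0]
    return value if count >= quorum else None

print(accept_result(["peak@1.42GHz", "peak@1.42GHz", "forged"]))  # the agreed value
print(accept_result(["a", "b"]))  # None -> no quorum, resend the unit
```

Note this is why redundancy trades bandwidth for integrity: every extra copy sent out is extra traffic, which is exactly the tension the comment above describes.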
  • by m_evanchik ( 398143 ) <michel_evanchikATevanchik...net> on Monday February 18, 2002 @08:56PM (#3029525) Homepage
    In a way, this hurdle could prove a boon, by forcing the SETI@home developers to make their system more efficient.

    Necessity is, after all, the mother of invention.

    As their own statement points out, two of the short-term solutions include making the data sent out more efficient (binary instead of text) and letting each node do more computation.

    SETI@home was originally developed to make up for the shortcomings of processing power of any single computer. To solve the problem, they took a bit of a free ride on networking bandwidth to distribute the problem.

    Now their success is also forcing them to be more efficient when it comes to network bandwidth, as well as processor utilization.

    So this forced economy will hopefully make the system more efficient through improvement of the system.

    Pie-in-the-sky and we have all the computing power and bandwidth we need, but then who would have an incentive to innovate?

    Ultimately, SETI@home's legacy will probably have less to do with discoveries of extraterrestrial intelligence and more to do with the evolution of better computing techniques!
  • by Anonymous Coward on Monday February 18, 2002 @08:57PM (#3029527)
    I would have expected UC Berkeley to have a higher bandwidth connection to the Internet.

    Internet2's goal is 1Tbps connections -- That's faster than 70Mbps by a factor of over 10^4. Pretty funny.
    • by guacamole ( 24270 )
      The residence halls have a separate 40Mbps pipe, so it is 110Mbps combined. Also UC Berkeley connects to Calren-2 and Internet-2, which run at much higher speeds, but the problem with those is that they connect to large universities only.
  • My solution... (Score:2, Interesting)

    by Hal-9001 ( 43188 )
    ...I don't run SETI@home. It's my understanding that the SETI@home project now provides more processing power than they really need, as they have not optimized the client and do not support multiple processors.
    • Re:My solution... (Score:2, Informative)

      by jokrswild ( 247507 )
      Sure, it supports multiple processors... Just use a program like SetiDriver, and you can tell it how many WorkUnits you want to crunch simultaneously. Though the client really is optimized for Intel processors for the most part.
  • by jinx90277 ( 517785 ) on Monday February 18, 2002 @09:06PM (#3029573)
    Given that Google has massive bandwidth and storage capabilities, perhaps SETI@Home should simply ask Google to host their servers. It's a win-win situation:
    • The kiddies get to keep downloading their MP3s and warez without that pesky space junk clogging their bandwidth.
    • Google gets to add yet another feature to their front page: "Search galactic transmissions for..."
  • Seriously - I shut off the seti@home search on all my machines and my electric bill dropped $10, and I'm not kidding in the slightest.

    What is the point anyhow? I mean this is collectively costing them (probably) billions of dollars a month to do this - between everyone's increased power bills. And seriously - what are the chances that their algorithm is going to find something worthwhile?
    • Especially if you live in CA, home of the eternal power crunch.

      I'm as guilty as any though. I keep my 2 boxen on all the time.
  • Need I say more? Even my web browser can handle gzip...

    Aside from that : WHY can't SETI get the TINY amount of cash it needs to handle this problem?
  • by mactari ( 220786 ) <rufwork AT gmail DOT com> on Monday February 18, 2002 @09:22PM (#3029647) Homepage
    The one thing that interested me about the blurb from the Seti@Home site that was linked from this article was the following quote:

    > But starting last month (January 2002) the
    > bandwidth used by the rest of campus increased in
    > an unexpected and unexplained way.

    I wonder if this isn't a byproduct of the intense bandwidth issues associated with peer to peer apps like Gnutella and Morpheus, popular music "sharing" applications that seem to get a bit of use on college grounds nationwide. I'd guess (if I had to; definitely talking out ye old arse here) the reason bandwidth usage wasn't noticed sooner is that many places (my place of work included -- I'm a gov't contractor) are placing a pretty high priority on "Homeland Security", including taking a fresh look at internet usage.

    These things aren't exactly bandwidth friendly (see http://people.cs.uchicago.edu/~matei/PAPERS/gnutella-rc.pdf for a great discussion on the perils of the flaws in the first generation Gnutella protocol).

    Anyhow, that's what came to mind when I read the blurb. I think their best short term solution might be to chase down unattended Gnutella and Morpheus/KaZaA applications and get back that bandwidth.
  • Seti@Home? (Score:2, Informative)

    by jensend ( 71114 )
    Why even bother their servers at all? SETI should wait until we have our own world's problems figured out. Please visit Folding@Home [slashdot.org] or Genome@Home [slashdot.org] for two ways you can help solve actual problems. If solving geeky problems is more your style, visit d.net [distributed.net].
    • by rho ( 6063 )

      Dear unwashed heathen,

      Do not question the Church of Slashdot. Whereas we routinely mock Creationists as the lunatic fringe, we do not hold with the questioning of the legitimacy of a project that one day may lead to the Faithful having sex with aliens. Just like on Star Trek (or Babylon 5, according to the Orthodox branch of the CoS).

      Regardless of your feeble thinking, SETI@Home is deserving of all your base.

      For Great Justice!

      The Cleansed and Purified

    • Re:Seti@Home? (Score:4, Insightful)

      by dstone ( 191334 ) on Tuesday February 19, 2002 @01:32AM (#3030380) Homepage
      SETI should wait until we have our own world's problems figured out.

      Humans are made of meat, and sure, cancer is a problem we'd like to solve. But humans are also uniquely explorers and thinkers, and Not Knowing(tm) IS genuinely one of our problems. Some believe that SETI is a step towards solving that problem. File it under "motivation" or "purpose" (by simply "knowing").

      A future generation may answer the eternal question for us. And if they do, every generation that follows will be affected in their daily outlook, their goals, their attitudes, their comforts, their concerns, etc. That's at least as profound as a cure for cancer.
  • An easy solution (Score:5, Informative)

    by mosch ( 204 ) on Monday February 18, 2002 @09:26PM (#3029666) Homepage
    I don't mean to be rude but another solution, if you're running windows, is to try to find a cure for cancer, or alzheimers, or anthrax, instead of looking for extra-terrestrial life. This can be done by downloading this [intel.com].

    Go, do it now, I swear you'll feel all warm and fuzzy.

    • by hyrdra ( 260687 )
      I don't mean to be rude but another solution, if you're running windows, is to try to find a cure for cancer, or alzheimers, or anthrax, instead of looking for extra-terrestrial life.

      Yeah. Why bother looking for such trivial things as life in the universe besides us? Why should we have gone into the rain forest just for the sake of going? Let's forget that we found a new type of antibiotic in the process.

      Why should we do things with no clear prospect of return? Well, one could argue we do them for science. You know? That old thing that leads to new advances in humanity? One could argue that great discoveries are often made by accident. That means looking at and doing something new -- not always directed toward solving the problem at hand -- can lead to a solution of a major problem.

      What does all of this have to do with searching for aliens? Well, it means we shouldn't stop doing something that some here might think of as trivial or un-worldly just because there are other issues at home. There will always be issues at home. Curing cancer, in many ways, is just as big a task as SETI@Home. It's the same argument as those who questioned the spending of millions of dollars on the space program, saying that sending a man to the moon was stupid since we couldn't even solve our own problem of where to put and feed our own people.

      Well, what has going to the moon given us? Certainly not a cure for cancer, at least not directly. What it has done is capture the imaginations of all those who were glued to the TV when those famous images were sent back... Maybe a few of those millions have actually gone on to become doctors, engineers, etc. who have cured a disease or solved a new problem for humanity. It represented something new, raised hopes for people during that time and allowed many to live vicariously and not be concerned with current "at home" issues like finding a job or worrying about the war.

      There is a lot of merit in science dedicated toward application, and I don't have any problem with, say, searching for a cure for cancer or Alzheimer's. But the argument that there are better things to do is like the argument that "people are wasting bandwidth on trivial uses; that's why the Internet is so slow."

      We should all dedicate our efforts toward solving our present problems, but we should always save a little to go to the moon once in a while...

      • Spare CPU cycles are free.

        Looking around for a signal that has a vanishingly small chance of being found during the existence of the human race is of pretty little value compared with adding known results to a database of likely candidates for effective treatments of painful and costly diseases that might just save your life.

        Or your sister's. Or your mom's.

        But of course, if we find the space aliens, they will bring us the cures for these diseases anyway.

        And maybe in return for our contacting them on a nice quality of letterhead, they won't turn us into brood hives for their younglings.

        --Blair
        "Dumbass."
        • Spare CPU cycles are *not* free.

          I think you'll find they cost energy (power consumption), cause extra noise pollution (localized; fans running faster) and will contribute to shortened hardware lifespan.

          My CPUs dropped by about 7 °C when I stopped running d.net. Of course, that also meant I rapidly fell out of the top 100 charts :-(

        • Spare CPU cycles are not free.
          I was running d.net clients on a Solaris farm for about a month. The temperature in the room dropped 5 °C one hour after shutting the clients down.
      • Yes, well, if only the Folding project got 1/5 of the CPU power that SETI does. Right now there are only 39,087 active CPUs in the Folding project. I am sure that with 1/5 of SETI's CPUs, the Folding project would achieve great results far faster than SETI has.

        But then again they might run into server problems too. :)
      • "Curing cancer, in many ways, is just as big a task as SETI@Home.... I don't have any problem with, say, searching [for] a cure for cancer."

        I would like to go to the moon. That would be cool. Watching alien TV would be cool too.

        It would have been somewhat cooler however if I hadn't lost 4 relatives under 50 to cancer in the last 5 years.

        ET or Cancer... ET or Cancer... ET or Cancer... we have to ask???
        • I would like to go to the moon. That would be cool. Watching alien TV would be cool too.

          It would have been somewhat cooler however if I hadn't lost 4 relatives under 50 to cancer in the last 5 years.

          ET or Cancer... ET or Cancer... ET or Cancer... we have to ask???


          Yes, we do have to ask. Let me give you an example: the laser. When lasers were first developed, they were called a solution in search of a problem. A cool toy, sure, but of no use to anyone.

          But today, lasers are in CD and DVD players, surveying equipment, surgical tools, weapons, communications devices and machine tools. I don't think there's anyone in the Western world who doesn't (whether they are aware of it or not) use a laser or a product or service depending on lasers every day.

          People have been searching for a cure for cancer for a long time, without success. That suggests that the avenues of research that are being pursued don't lead to it. This is why basic research is so vital, because once a solution like a laser is found, whole new classes of problems can be solved.

          Since I'm not given to futurology, I won't say that SETI research is in any way relevant to cancer, but here's the thing: no one knows yet.
      • I'm not saying that seti is without value, I still run it on one of my machines. I'm merely noting that there are other ways to spend your spare cycles that might be worth exploring, since apparently seti is getting more users than it can handle, and let's face facts, dnet is an exercise in proving the obvious.

        For me, the prospect of helping to cure cancer and alzheimers is more immediately important than searching for extraterrestrial life, though I'm not so short-sighted as to ignore the importance of research for the sake of research.

  • Conspiracy (Score:5, Funny)

    by Max the Merciless ( 459901 ) on Monday February 18, 2002 @09:37PM (#3029706) Homepage
    These bandwidth problems aren't technical, they're political. We're getting too close, so they're shutting us down.
  • This isn't good, how's ET ever gonna phone home now???
  • More than one... (Score:3, Interesting)

    by Wiwi Jumbo ( 105640 ) on Monday February 18, 2002 @10:52PM (#3029765) Homepage Journal
    I'm currently trying to run Seti@Home and the UD Cancer Cure program but it's not going well... Seti won't give up any cycles to UD.... and in light of this I'll be shutting down Seti for a while.

    But what I really wish existed is a single program for which all tasks of this nature could be set up as plug-ins... each plug-in getting all the unused cycles until it completes a unit, and then the next plug-in gets its turn... maybe even letting you decide how to weight the processing:

    5 SETI@Home units, then 12 UD units, then 4 Folding@Home units, etc...

    There are a lot of projects out there I'd like to help with.... if only they'd play nice...
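
The batching scheme described above ("5 SETI, then 12 UD, then 4 Folding") can be sketched as a tiny scheduler. This is purely hypothetical illustration -- the plug-in names and quotas come from the comment, and the framework itself does not exist:

```python
# Hypothetical sketch of the plug-in scheduler idea: each "plug-in"
# gets a quota of work units per turn, and the scheduler runs them
# in rotation. Names and quotas are illustrative only.

def build_schedule(plugins):
    """Return a list of (plugin_name, unit_number) pairs in run order.

    `plugins` is a list of (name, units_per_turn) pairs, e.g. the
    5 SETI / 12 UD / 4 Folding split suggested above.
    """
    schedule = []
    for name, quota in plugins:
        for unit in range(1, quota + 1):
            schedule.append((name, unit))
    return schedule

plan = build_schedule([("SETI@home", 5), ("UD", 12), ("Folding@home", 4)])
# First five entries belong to SETI, the next twelve to UD, and so on.
```

A real implementation would loop this schedule forever and hand each plug-in the idle cycles only while its unit is in progress, but the ordering logic is the core of the idea.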
    • Two things:

      1) I think it's great that you're running the Cancer thing, but:
      2) It's NOT possible to run 2 apps that want 100% of your CPU at the same time.

      I run SETI. I have been for about 2 years now. If I hadn't already dedicated machines to SETI, I'd be doing the cancer thing.

      I smoke...
      • It's NOT possible to run 2 apps that want 100% of your CPU at the same time.

        But SETI doesn't need 100% of your CPU. It's not real-time! Ever heard of timeslicing? I have setiathome and foldingathome running just fine on my Linux box, at the same time. You just have to make sure that they have the same nice value (aka "priority").

        If I hadn't already dedicated machines to SETI, I'd be doing the cancer thing.

        You should be able to do so, unless you have a crap operating system.
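
The timeslicing point above can be demonstrated concretely. This is a minimal sketch, assuming a Unix-like system: `os.nice()` raises a child process's niceness before it execs, and two equally-niced CPU-bound children simply share the processor:

```python
# Launch children at maximum niceness so they only consume idle cycles,
# the same way the distributed-computing clients do (Unix only).
import os
import subprocess
import sys

def launch_niced(cmd, niceness=19):
    """Start cmd as a child whose nice value is raised before exec."""
    return subprocess.Popen(cmd, preexec_fn=lambda: os.nice(niceness))

# Two equally-niced workers would each get roughly half of one CPU;
# here they are trivial stand-ins that exit immediately.
p1 = launch_niced([sys.executable, "-c", "pass"])
p2 = launch_niced([sys.executable, "-c", "pass"])
p1.wait()
p2.wait()
```

Raising one's own niceness never requires privileges, which is why these clients can politely yield to everything else on the box.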

  • Until recently, SETI@home was given about 25 Mbps, and the remaining 45 Mbps was shared by the rest of campus. But starting last month (January 2002) the bandwidth used by the rest of campus increased in an unexpected and unexplained way. During peak periods the demand now exceeds 70 Mbps. If SETI@home continued to use 25 Mbps, the performance of all other outgoing traffic would suffer.

    So it sounds like all they need to do is ban students from running Windows XP ("Do you want to download a patch? How 'bout a passport account? You know you want one. All your friends are getting them. And I've got another security update for you...what'd you say? Come on, give it a try. The first one's free you know..." etc. etc. That's probably 80% of the bandwidth right there.)

    -- MarkusQ

    P.S. Note for the humour impaired...oh, what's the use.

  • <troll>

    It's == It Is
    Its == possessive version of 'it'

    The rules of the apostrophe for it/its/it's are a special case and do not follow "Bob's Quick Guide to the Apostrophe, You Idiots [angryflower.com]."

    </troll>

  • Sounds like they could use some mirror sites for work units. Distribution could either be done late at night or by sneakernet.

    Also, the big "work_unit.sah" file appears to have most of its content in a uuencoded-style format, which makes it 33% larger than its binary equivalent. And I don't know what format the binary data is in, but could it be compressed further?
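
The 33% figure comes straight from how uuencode-style encodings work: they pack 3 raw bytes into 4 printable characters. A quick sketch using base64 (a close cousin of uuencode with the same 3-to-4 expansion) shows the overhead; the work-unit contents here are just random stand-in bytes:

```python
# Demonstrate the ~33% size penalty of text-armoring binary data.
import base64
import os

raw = os.urandom(3000)           # stand-in for binary work-unit data
encoded = base64.b64encode(raw)  # 3 bytes in -> 4 characters out

ratio = len(encoded) / len(raw)  # 4/3, i.e. about 1.33
```

Whether the underlying binary compresses further depends on the data: raw radio noise samples are nearly incompressible, but shipping the binary directly would recover the full third that the text encoding wastes.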
  • Different Provider (Score:3, Interesting)

    by cdn_Gfunk ( 559946 ) on Tuesday February 19, 2002 @12:11AM (#3030091)
    This may sound funny if you can't raise money at $300 per megabit, but have you ever thought of using a provider like cogent [cogentco.com]? You could be provisioned a 100 Mbps Cat5 link for $3000 per month and use all you want. Just a thought.
  • by Anonymous Coward on Tuesday February 19, 2002 @12:12AM (#3030097)
    There are some posts about how the recent spike in Berkeley's campus bandwidth usage could be attributed to some popular filesharing programs. This is, in fact, the case [google.com]. The reason that blocking Kazaa/Morpheus/et al. is not a viable solution is mentioned in the previous link.

    UCB net admins and other interested parties have been discussing how to deal with the increased bandwidth demand on the ucb.net.discussion newsgroup: Google Groups thread: "latency from off-campus" [google.com].

    I live across the street from the Berkeley CS building where half the EECS servers are housed, and my connection to those machines can get pretty lagged. Having an inconsistent ISP certainly exacerbates the situation, but my experience with off-campus latencies has been quite bad for the past two years.

    Sure it's sad that Seti@home users can't use their computer's idle cycles quite so effortlessly anymore, but the bigger picture is that everyone trying to connect off-campus is suffering, especially people who are trying to get work done.

    The surprising thing for me is that detaching the dorm network (with all the student-run servers) leaves very few computers that could be sucking up all the bandwidth. We've suffered through DoS attacks from time to time, but the fact that Kazaa is still the number one bandwidth hog makes me wonder who runs these apps (professors? grad students? janitors?) and where they run them from (lab computers aren't the best places to store all those warez, mp3s, and divx files, unless you don't care that they all get erased every day).

  • Internet 2 (Score:3, Informative)

    by Perdo ( 151843 ) on Tuesday February 19, 2002 @01:14AM (#3030320) Homepage Journal
    CalREN-2 consists of two giant loops - called CalREN North (serving UC Berkeley) and CalREN South (in the Los Angeles area). Each loop is a gigaPoP, providing the high-speed connection into the nationwide Internet 2. Each loop provides OC-48 (2,488 Mb/s) connections to member campuses.

    Now, since this equipment has been in place since the middle of last summer, why are they still using their dual 45 Mb/s connections? Just get some cable dogs out there to run some fiber. Hell, I'll get out there and run fiber for them. Remember when some yahoos cut their fiber while stealing copper to recycle? They were down for like two weeks. Well, it took them two weeks to run fiber across the campus again. If they get started now, they could have as much bandwidth as they could possibly want by running fiber to their Internet 2 PoP.

    I have seen the I2 PoP at the Sonoma County Office of Education. It is running at OC-3 (155 Mb/s). That means a bunch of elementary schools have twice the bandwidth of the most prestigious computer science program currently running in the world. Prestigious? Yes, they have effectively harnessed millions of desktops to create the fastest computer on the planet by a huge margin. They push 27 Tflop/s on 25 Mb/s, compared to ASCI White, which just passed 10 Tflop/s. My computers, like everybody else's, have wasted a lot of cycles waiting for data. Imagine if they had 2,488 Mb/s available to them and enough users to create the first 2+ petaflop/s computer. Of course, they would need 240 million users to achieve that.

    Just to be a pessimist, that is probably exactly what all the distributed modules in Win2K/XP are for. Bill is going to have a really nice computer one of these days.
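
As a sanity check on the numbers above, here is a back-of-the-envelope sketch. It assumes throughput scales linearly with server bandwidth, which is a big simplification, and uses OC-48's nominal ~2,488 Mb/s rate:

```python
# Rough scaling of SETI@home throughput if bandwidth were the only limit.
current_bw_mbps = 25.0          # the project's current share of the pipe
current_tflops = 27.0           # aggregate throughput quoted above
current_users = 3_000_000       # member count quoted in the article

# Each user contributes on the order of 9 MFLOP/s of sustained work.
flops_per_user = current_tflops * 1e12 / current_users

oc48_mbps = 2488.0
scale = oc48_mbps / current_bw_mbps        # roughly 100x more bandwidth
projected_users = current_users * scale    # about 300 million users
projected_flops = projected_users * flops_per_user  # a couple of PFLOP/s
```

So an OC-48 really would put a multi-petaflop machine within reach, bandwidth-wise; the hard part is finding a few hundred million volunteers.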
  • Last month there was a presentation [nlanr.net] by the Berkeley campus net admin regarding the issues being discussed here. It shows the traffic flows, how they increased when the students came back, how problems occurred when controlling traffic, and more!

    In fact, you can look here [nlanr.net] to get the story on what various universities are doing to manage traffic.

    One possible solution is to run SETI proxies at other universities that will route the traffic to Berkeley via Internet2, since that traffic is free and isn't being regulated/restricted. However, this may not work given that the problem is with transmitting the large data sets to clients, rather than receiving their relatively small responses.
  • by Paranoid ( 12863 ) <my-user-name@meuk.org> on Tuesday February 19, 2002 @04:30AM (#3030695)
    Like I said. [slashdot.org]

    This wasn't very hard to see coming, but it's still unfortunate.

    For those who are looking for a workunit-caching program for Linux, I've written a Perl script which has done quite a good job of it. I've decided to release it tonight, to help everyone out, but it's a bit rough around the edges. It does the job, though. Read the README, and download it here [glines.org]. Also, mirrors are welcome - my connection sucks far worse than theirs does =)

    • Minor bugfix release here - this should allow you to specify upload/download time periods that include midnight (like their suggestion of 23:00 to 3:00 PST).

      I've also created an actual webpage for it.

      You can find it here. [glines.org]
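
The midnight-wrapping window mentioned above (23:00 to 3:00) is a classic gotcha: a naive `start <= t <= end` test fails whenever the interval crosses midnight. A sketch of the usual fix, with a hypothetical helper name and times expressed as minutes since midnight:

```python
# Check whether a minute-of-day falls in a window that may wrap midnight.

def in_window(t, start, end):
    """True if t is in [start, end), where the window may span midnight."""
    if start <= end:
        return start <= t < end
    # Window wraps midnight: it covers [start, 24:00) plus [00:00, end).
    return t >= start or t < end

assert in_window(23 * 60 + 30, 23 * 60, 3 * 60)   # 23:30 is inside
assert in_window(1 * 60, 23 * 60, 3 * 60)         # 01:00 is inside
assert not in_window(12 * 60, 23 * 60, 3 * 60)    # noon is outside
```

The same `or`-instead-of-`and` trick applies regardless of how the script represents times, which is presumably what the bugfix above addresses.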

  • I know that at least 4 universities have a 10 Gbit uplink connecting each other. Most others have 1 Gbit. The Surfnet network, which interconnects all Dutch universities, is connected to several other research networks (one of them the US Internet2) at gigabit speed or better. Read more about this at this [gigaport.nl] website. Since the network is there, and it is clearly meant to be used for research purposes, I hope some Dutch university (or the Surfnet organisation itself) will raise its hand and help out.
  • Wouldn't it make MORE sense to try and find out what's causing the sudden and obviously unexpected BW usage?

    I mean, surely they have ruled out file-sharing services etc. They wouldn't overlook something so simple. (slight sarcasm intended.) Data isn't something that leaks out of Ethernet wire, it has to go SOMEWHERE. At worst, it's a bug that needs fixing.
