Slashdot.org News

Assorted Slashdot Updates 93

As you probably noticed, the server really bogged down today. The reason is quite simple: Katz's story, with about 500 comments, weighed in at about a meg. 70 httpds trying to serve a one meg file (which takes 2-3 minutes for most of you to download) perhaps a dozen times a second. Do the math *grin*. So I now have a server-level setting that will enforce comment limits when we get bogged. This will annoy the heck out of some of you, but it vastly speeds up page loads. After that mess, I'm glad to have some good news: I brought back online the 2500 stories that I yanked awhile back when we were getting overloaded (mostly stories from late '97 and early '98), so you can search for them again. More random musings are attached below.
I changed the numbered links on the homepage (and the word 'XX Bytes in Body') to link to the article instead of to the comments page, so you can cleanly use them instead of the 'Read More' link to browse at -1 or 'No Comments' mode, and still read the article contents. Thanks to the 8 billion of you who asked for that one :)

I added a Default Comment Limit now. Originally the limit was 50,000 (in other words, never :) but I've changed that to 100. This won't happen much, but it will be a good line of defense when those stories get really huge. You can still change that number in your user preferences if it annoys you. Your user preference will always take priority, unless the server goes into Overload mode.

Finally, I wanna strut a bit: Since we brought the old archives back online, Slashdot now has over 5400 stories in the database. Over 3300 of them were posted by yours truly, which strangely seems to explain my lack of a social life at this point :) Anyway, I just thought that was cool.

This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by Anonymous Coward
    I know this is a bit off topic, but I'm curious about something, and didn't feel the need to e-mail Rob 'cause I'm sure he gets enough of it, and I really want him to have more time to get out and find a woman. (Although we would expect you to keep the site running smoothly the whole time.. hey Rob.. ever considered hiring? I make a great bug tester eheh..) arg.. I should have logged in if I'm gonna kiss up like this.. hehe but I didn't log in, to respect the rule of not talking about having moderation. Anyhow, yesterday I finally received moderator status (whoho) and set about doing my job diligently. I was under the understanding that we received a point per 100 comments posted. So I quickly used up my 4 points trying to be a good little moderator and show off the good, and hide the "first post" (actually I hid a "darn, I couldn't get the first post" post).
    Well after my 4 points were gone, that was the end of it, no more points since then or anything. I'm wondering if this is just how it is and I was misinformed, and we aren't supposed to use up our last point but wait for more, or is this some kinda bug? I'm confused. Just wanted to ask.. oh btw about the job.. (You could probably figure out who this is just by the style of writing and the heavy use of parentheses.. and bad spelling :)
  • by Anonymous Coward
    v0.3 is nowhere near recent.

  • Why not serve the topic images from the same server as the html, i.e. good old sebastian.slashdot.org? Ever since slashdot started getting really popular in late 1998, images.slashdot.org [slashdot.org] has been getting even slower than slashdot. That's where the topic images come from, so rendering of web pages gets delayed too unless you browse with images turned off.

  • Question: It says the comment limit is 30. Does this mean I'll miss all but the 30 most recent comments posted?

    No. Say there are 300 comments. You will first go to a page with the 30 top comments (depending on how you have them sorted -- I use highest score first). At the bottom of the page you will see something like:

    1 2 3 4 5 6 7 8 9 10

    Meaning you are on the first page of 10, each with 30 comments.

  • A mod_perl process can get pretty huge, because it contains the Apache server code and a Perl interpreter, which, as any Perl monger will tell you, consumes memory profligately. A single int consumes about eighteen bytes, IIRC. Not to mention all the Perl modules that may be loaded.

    The problem is compounded by the fact that perl code looks like data to the virtual memory system, which means that children sharing pages rapidly start to dirty them and thus get their own copies.

    In the meantime, the mod_perl process handles the actual request very quickly. The trouble is that it then spends precious time dribbling bytes back to the client browser: valuable time that could otherwise be spent creating dynamic content. And so more and more mod_perl processes are spawned, trying to deal with the load, when at any given time most of them are twiddling their thumbs.

    Enter a proxy, such as a Squid cache. The mod_perl process emits the dynamic page at bus speed, and the proxy picks it up. The mod_perl process is then free to handle another client request. The proxy then doles out the page as fast (or as slow) as the client can accept it. Such proxy processes can be built to have an extremely small footprint, due in part to a strong propensity to share VM pages.

    It is therefore not uncommon to see one mod_perl process doing the grunt work that is shipped out by ten or so lightweight caching processes. In any case, far fewer heavy mod_perl processes are required, thus much less memory is consumed by IO-blocked processes. People have reported dramatic savings on the mod_perl list using this approach. (Like, the difference between having to build a server farm or not).

  • If things get really tough then stick a Novell box running BorderManager FastCache in front.

    OK, I know that's not P.C. round here, but Novell have a better clue when it comes to real-time network data serving than Linux does. Not that I have anything against Linux being the source server..... it is very good at that.
  • I use it on NT workstations and Linux boxes with little memory. It's great, and it runs sooo fast when things are bogged down, unlike Apache or IIS.

    the URL is http://www.imatix.com/
  • Not really since that's exactly what I just did. "Button 2" in netscape, or "Button 3" and drag down to the menu option.
  • Perl likes to sling strings, so the copy-on-write functionality gets hit more often than you'd like.

    I'm not sure about the difference between code space and data space, but I think that code space is marked "read-only", so copy-on-write never becomes an issue (there are no writes).

    - doug
  • I and others have said it before, but it seems to me it would hugely reduce Slashdot bandwidth if the comments were accessible via NNTP. Perhaps we could persuade Rob to work on that rather than submitting so many stories? Rob, I promise I'll buy a Slashdot T-shirt if you do...

    If the stories themselves were only accessible via WWW, the banner hit count shouldn't be changed much. (I'm kind of surprised I haven't seen any rude remarks about the recent Ziff-Davis banners, BTW...:-)
  • Whatever happened to cachedot? Did I miss a post
    or something some time ago? cachedot stopped
    working more than two weeks ago. I mailed Rob
    about this, but we all know how busy he is...

    Mathijs
  • by jd ( 1658 )
    I'll repeat a suggestion I made a while back, then. DON'T serve straight from httpd, but use either Apache's built-in caching system or Squid. That way, if you've got a megabyte that everyone wants, it's ALREADY in RAM!


    Again, this is only a suggestion. If there's a technical reason this isn't what's already being done, or a personal preference thing, etc, that's cool. Or even if there's no "reason" and it's just 'cos you don't want to, that's also cool.


    The fact that the problem is being dealt with -somehow-, whether by the method outlined in the intro to the thread or whatever, is what's important.

  • Yes, but if you could afford a Cray you could fricking well afford a large pipe...
  • And/or put it in CVS so everyone can get read access to the truly current code.
  • It's been done. Go to your user preferences and select "Sort by newest first". Maybe ACs don't get to use this.
  • I can certainly agree with that - Katz, if you're reading this: If you keep pumping out articles like that then I will make a point of reading every single one of them, and beating other people into doing so as well. It was fantastic to find there *were* people who felt the same way I did. :)

    --
    ashp.
  • by bhmit1 ( 2270 ) on Monday April 26, 1999 @05:59PM (#1914705) Homepage
    slashdot has been bad for a while now, even while viewing the static pages (index.shtml). cachedot was much better for that. Please fix the dns entry. Thanks CT.
  • That doesn't work -- the pages aren't static between users right now. Notice if you're logged in, your username shows up on the page, and there are a multitude of combinations of settings that can make pages show up differently. The only fix is to restructure the page URLs, content, and options to allow cacheable pages, or use Sybase or something like that and multiple front end servers that can handle dynamic pages...
  • Ach, you're right. My bad, I forgot about that. Stupid, especially since I've set up networks like that before, although not to solve that problem... I never thought about using it that way, when you're not actually even aiming to cache the page itself, just the sending of the page.

    I usually design network architectures for high volume of small hits not large hits, which is why I hadn't thought that through. Good bit to file away in the back of my head though!

    5MB seems high for apache -- are you sure that's taking into account the shared code between apache processes? I'd figured the overhead at more like a meg and a half... but then, I'm not running mod_perl on any of my servers.
  • That's not the problem. If Rob could send that 1 meg file in anything close to 5 seconds, there'd be no problem. The issue is the number of httpd processes that can be running at a time (on most servers somewhere around 100 with Apache), and the fact that it takes two or three minutes to grab that file over a modem. If you want to do the math, this is the right math:

    Minute #1: 50 new users on modems
    Minute #2: Those first 50 users (still) + 50 more

    Oops, 100 users, we can't fork that many httpd processes. Some have to wait!

    Minute #3: Half those first minute users are done, they've got good 56k connections... maybe a dozen or two people can get back in and start transfers...

    See the problem? You get stuck; it's like a traffic jam -- once it gets started it takes ages to correct itself. 12 connections per second over the course of a minute to 56k modem users is just under the point where a T3 would begin to be saturated.

    It's important to remember that it's not just the number of connections that causes web performance problems, it's the number of connections from slow users. Slashdot's problem is not hit-rate... look at the stats for the last 24 hours -- if anything the hit rate seems low. It's the time it takes the processes to free up.
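The traffic-jam arithmetic above can be sanity-checked with Little's law (concurrent connections = arrival rate x average connection time); the numbers below are the commenter's own assumptions (50 modem users a minute, a 2-minute download, ~100 Apache children), just a rough sketch:

```python
# Little's law: steady-state concurrency = arrival rate * time in system.
# All three numbers come from the comment above; none are measured.
arrivals_per_minute = 50    # new modem users starting a download each minute
download_minutes = 2        # time to pull the ~1 MB page over a modem
max_httpd_children = 100    # rough Apache process limit cited above

concurrent = arrivals_per_minute * download_minutes
print(concurrent)                        # 100 simultaneous downloads
print(concurrent >= max_httpd_children)  # True: every new arrival has to wait
```

So even a modest hit rate saturates the process pool as soon as each hit takes minutes to drain.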
  • The problem isn't the OS, the problem is users on modems downloading large files. The way Slashdot's pages are put together, a caching proxy in front of the server (like cachedot) doesn't work, because even pages that really don't need any variance (or much variance) in content among users still have differences that keep caching from happening. My username (tgd) shows up on the page, even though it doesn't really need to be there.

    Best way to improve performance, other than using a more robust database and multiple servers that can generate dynamic pages, is to restrict the number of possible combinations that a subpage can fall into, get rid of the stuff that's different between two users who have the same page settings (ie, get rid of the user-specific stuff), and make sure whatever differentiates it is in the PATH_INFO part of the URL so it caches and the caching proxy doesn't think it's a form submission. I bet Slashdot's got enough clout to get a donation of little ones from Corel or some place like that to throw Squid on.

    Only the homepage really needs to be different per-user. The rest of the site should be comment pages in various combinations (for each of the sort methods, and a handful of comment options... moderator pages could be served by bypassing the cache servers... Even if every page was served to the proxy with a two or three minute timeout, you'd get good performance, and could spread the load on long-download pages among a few more servers... getting a few hundred simultaneous downloads instead of 70 (which seems low anyway... Apache can usually grok more than that, although maybe the mod_perl stuff can't...)
  • Rob should release the source code to slashdot at regular intervals (along with the warning that it's beta) this would mean people who want to put all the latest slash technology into their site can do and they can also submit bugs to Rob - which would rapidly move the beta code to stable code.
    --
  • Read some mod_perl documentation. When Apache is compiled with mod_perl your httpd processes take up MEGABYTES of RAM. So why have your images coming from these httpd processes? Serving images off a separate machine makes things all that better for everyone. (send flames to Rob on his choice of mod_perl)

    James Turinsky
    slash-faq editor
  • Jon Katz's article was an excellent article! For me, what happened to me in junior high school and high school is much more interesting on a deep emotional level than yet another mainstream media outlet mentioning Linux.

    I find that I still have resentments against the way some people treated me in junior high school and, to a lesser extent, high school. This is something I have kept secret, and Jon Katz's article has really helped me get in touch with and start to get over what happened over ten years ago.

    One of my disillusionments with geek culture is that it is hard for people to open up with their feelings. I am glad to see Jon Katz's article giving people the courage to open up and get in touch with their feelings.

    For me, what those kids did was unacceptable. Then again, I have a part in the blame. If I knew then what I know now in terms of how people are emotionally, I would not have projected myself as someone to pick on.

    One advantage of all the moving I did in those years is that whatever image others had about me was wiped clean every time I went to a new school. By my last three years of high school, things were reasonable, if not ideal.

    - Sam Trenholme
  • Yes, it does make more sense in some ways ...

    but it's clunkier, not as pretty, and unless we all use Netscape for news (the easiest thing to do in X, since just clicking on a news url in Navigator brings up Messenger and automatically downloads the group), we won't be able to post in HTML (unless everyone agrees not to bitch), which I kinda like.

    That said, it's less convenient (but hey, if you can't figure it out, maybe you don't belong on slashdot), and a totally different dynamic. It'd be a hell of a lot faster, though, and MUCH easier to read.

    Of course, news.slashdot.org could have other uses too, like permanent slashdot newsgroups. I can think of a few right now.

    I also think we would get to know each other better, and it would build a better sense of community.
  • I myself was surprised at the outpouring of /. readers after the first Katz piece, "why kids kill". A lot of the stuff was very personal and touching.

    I hate to use such a trite term, but I think it was a bonding experience for us all.

  • Katz was asking the question: "Why Kids Kill?" and I would like to ask Katz and all others this question: "WHY ASK WHY ?!"

    I mean, look at what we have around us today

    --- NATO bombs Yugoslavia killing civilians in the name of "peace"

    --- Cops beating up people in the name of "upholding the law"

    --- Parents beat the crap out of their children in the name of "teaching them values"

    --- People killing/maiming/hurting/torturing/abusing each other in the name of "community interests"

    So why should we expect the kids to behave like CIVILIZED BEINGS while we grown ups behave like bloody @$$holes ?!

    Stop blaming the kids. Blame ourselves for making this world a pathetic place for the kids to grow up in.

    Stop blaming the guns. Blame ourselves for behaving so violently.

    All the questions asked -- "Why do kids do that?!" "Are guns dangerous?!" -- are nothing but Monday-morning armchair quarterbacking.

    If we don't want the kids to behave like animals, we adults have to stop our barbaric behavior first.
  • I ran my HttpSniffer script over a couple of slashdot sessions to see if you were chunk-encoding stuff in your replies (it's ok, you're not, AFAIK). I also noticed you're not using content-coding (where you can return gzip'ed or deflate'd content that will be unpacked by the client).

    I've never used content-coding myself; can anyone comment on how successful it is?

    Most of the bandwidth you're using is relatively plain text, I would have thought gzip on a low setting could reduce the number of bytes to be sent by a fair bit IF you've got some CPU power to spare AND the chunk of text to be returned is a decent size (say > 10K). The browser request indicates which encodings are allowed (Accept-Encoding header field).

    I haven't looked at the slashdot perl source recently, but I would think that it would be a pretty small mod to use Compress::Zlib to do an in-memory deflate of large text bodies for users with browsers that will accept it.
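The comment above proposes Compress::Zlib in Perl; here is a minimal sketch of the same idea using Python's zlib to show the kind of ratio repetitive comment-page markup gets at a cheap compression level (the sample HTML string is fabricated for illustration):

```python
import zlib

# ~10 KB of deliberately repetitive, made-up comment-page markup.
html = b"<p>Comment text here.</p>\n" * 400

# Level 1 is the cheapest deflate setting: minimal CPU, still a big win
# on plain text like this.
packed = zlib.compress(html, 1)

print(len(html), len(packed))
print(len(packed) < len(html) // 5)  # True: shrinks to well under 20%
```

Real comment pages are less repetitive than this sample, but plain HTML text still typically compresses several-fold, which is the point being made about modem users.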
  • I submitted an argument such as this many moons ago.

    Why can't it be possible to link in the NNTP server to the articles database on the Web version of slashdot?

    e.g. replies to articles posted to "news.slashdot.org" get copied to "slashdot.org".

    Replies to articles on "slashdot.org" get posted to the relevant forum on "news.slashdot.org".

    OR... what about making the Slashdot web page a front-end to the real article/replies repository - "news.slashdot.org", a la Dejanews ?!?!?

    Some clever jiggery-pokery and you could probably maintain the look 'n feel of Slashdot, but with the added advantage of people being able to use an NNTP server to read/post articles. Otherwise you're always going to be battling with ever-increasing demand for HTTP downloads.

  • ...should Slashdot use an NNTP server for posting/replying to articles, in addition to having its original web-based appearance?

    [ ] Yes
    [ ] Nah!
  • Gee Rob, since you added all those new articles,
    I can't seem to get ANYTHING back for my queries.
    I checked by searching for "linux" to see if I'd
    get anything back, and voila!! Nothing. . .
  • It is you who knows not of what you speak. With Perl, your programs are treated by unix-like OSs as DATA pages, not CODE pages. This means that, after a few requests, each of your mod_perl httpd processes inherits a private copy of your perl code and data.

    So in practice, your httpd processes share a few megs of Apache, and do not share any of your perl code or data. This makes them very expensive.

  • Rob,

    You silly doof! Don't let your expensive mod_perl+mysql processes sit there pushing bytes down 28.8 & 56K pipes. Use squid in reverse proxy mode to buffer the output of mod_perl and then let squid, which has extremely cheap threads, twiddle its thumbs waiting for the client to receive.

    This is all well documented in recent discussions on mod_perl@apache.org. See also the mod_perl guide [apache.org] and the mod_perl mailing list archives [swarthmore.edu].
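A minimal sketch of the Squid-2.x httpd-accelerator setup the comment above describes; the hostname and port numbers are illustrative assumptions (Apache/mod_perl moved to port 81 on the same box), not Slashdot's actual configuration:

```
# squid.conf fragment: Squid 2.x in httpd-accelerator ("reverse proxy") mode.
# Squid answers on port 80 and fetches pages from the backend Apache,
# then dribbles them out to slow clients itself.
http_port 80
httpd_accel_host localhost      # where the real Apache/mod_perl now lives
httpd_accel_port 81             # assumed backend port, pick your own
httpd_accel_with_proxy off      # accelerator only, not a general proxy
httpd_accel_uses_host_header on
```

The heavyweight mod_perl children then only stay busy for the milliseconds it takes to hand a finished page to Squid.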

  • aha, that assumes you SEE the banners. I use squid/squirm to replace them with a 1x1 transparent gif as the page loads... it seriously improves /. performance. :)
  • I actually work for a group at my school which does something similar to that: only partially dynamic pages with a perl/mod_perl backend. Our loads are phenomenally low (of course we're not getting Slashdot-level load at all!)
    I invite Rob et al to take a look; I suggested using the Slash code but we did this one pretty much in a homegrown fashion.

    http://www.sin.wm.edu
    click the "guest" button, no login or password
  • See, even the mighty Linux can be brought down :)

    (but then again, so can freeBSD, my OS of choice, or so I've been told ;P)

    rob, have you considered using a multithreaded server like Xitami? That might improve performance somewhat.

  • Ah hah! That's how it works!

    The other guy was right... If your limit is set fairly low, you only get [limit] number of comments *per page*. Then there's a 1|2|3|4 etc in a grey dividing bar at the top and bottom of the comments area, which are the pages, each with [limit] number of additional comments.

    The wording in the preferences area isn't exactly clear about that, though. I think someone ought to add the words "per page" to the description of what that setting does.

  • [x] yes

    Err... waitaminnit, I think I said that already a few weeks back. *grin*

  • But it *is* all the kids' faults. I mean, if parents weren't so busy working their asses off to support the kids (who simply MUST have the $300 pair of shoes, or they'll go kill someone and take theirs), and to support their own parents, and the kids' drug-addicted babies, and all the while the government is taking 1/3 to 1/2 of the parents' paychecks, and the parents still have to pay for health care, car insurance, the mortgage, the endless counselors and medications for the entire family so they aren't accused of neglect, and so forth and so on...

    But if the parents hadn't become parents in the first place, there'd be no kids to worry about, and the kids' kids wouldn't be there, the house could be smaller, the bills would be a heck of a lot less, there probably wouldn't be any counselor or medication bills, and things would probably be a whole lot better.

    My suggestion: birth control. Use it early and often, folks!

    Or, we could just go with your idea: that as long as someone somewhere is acting like an asshole, everyone everywhere is free to act like an asshole. Hey, it's not OUR fault, it's that guy over there! How do you expect all of US to behave when he's doing that??

  • How will we be able to tell if the server's in overload mode?

  • Oh. Never mind. We just hit overload mode, and it says there on the dividing bar.

    Question: It says the comment limit is 30. Does this mean I'll miss all but the 30 most recent comments posted?

    If so, that really sucks... I'll have to reload about every two minutes to avoid missing anything.

  • So who's going to take up the collection for slashdot's next hardware upgrade: the Katz Server? If jon's stories and associated comments were served from a different machine than the front page, problem solved, right? Even better would be a small array of boxes, all serving stories on a least-busy basis, so whoever had the bandwidth available would be called upon when someone clicked "read more". I think that as slashdot posts more original content, and accumulates more regular comment-posters, this may become necessary.

    By the way, Katz article was brilliant. I thought Jonathan Yardley had the last word in the Washington Post yesterday, but Katz absolutely blew him away. Actually being in touch with the people you're claiming to represent makes a HUGE difference, and it shows.
    ----------------------

  • I want 'reply' to be a text link for the article so I can 'open in new window' and continue reading the existing replies.

    Anybody else miss this?

  • I'm still getting a lot of failures...

    It took three attempts to get to the page to submit this comment, for example.

  • All the "Anonymous Cowards", and everybody who just randomly surfs on in and never even posts at all -- they can't very well, by definition, have any user preferences set; so they must get the static page, no?

    Christian R. Conrad
    MY opinions, not my employer's - Hedengren, Finland.
  • Is the Katz Article setting some sort of record?
    I don't recall any article ever getting 700 talkback posts....
  • Didn't we have that less than a month ago? And wasn't the answer something like 95% male?

    Sigh.

    D

    ----
  • How about keyword boolean filters?
    I'd like to screen out stories that have both "ZDNet" and "Linux" in them.
    I'd also like to screen out comments that have the words ("sucks"&&"Microsoft")||"first post".
    I'm sure many would use that.

    It could help you browse the main page easily,
    and shrink overcommented stories by eliminating the posts you don't want to see
    (if you don't think the moderators score well, or there are certain subjects that don't interest you).


    ---
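The boolean filter proposed above is simple to sketch; this is a hypothetical illustration (the `hide` function and its rule are mine, built from the commenter's example expression ("sucks"&&"Microsoft")||"first post"):

```python
# Hypothetical comment filter: hide a comment when it matches
# ("sucks" AND "Microsoft") OR "first post", case-insensitively.
def hide(comment: str) -> bool:
    text = comment.lower()
    return ("sucks" in text and "microsoft" in text) or "first post" in text

print(hide("Microsoft sucks!!!"))        # True
print(hide("FIRST POST"))                # True
print(hide("Interesting kernel patch"))  # False
```

A real implementation would parse arbitrary user-supplied boolean expressions rather than hard-coding one, but the per-comment test stays this cheap.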
  • /. got /.ed :) IMHO, that is a good thing as it goes to show that we are all mere mortals. Not that Rob and the /. gang hadn't foreseen such a possibility; they just don't like imposing themselves :) And, no, I am not getting paid by anyone to write this, either.
  • Some of the slashboxes are having problems, too, though this is unrelated to the overload today.

    x86.org/Intel Secrets hasn't updated in a week.

    MacOSRumors is missing. :)
  • by Fizgig ( 16368 ) on Monday April 26, 1999 @05:48PM (#1914742)
    That Katz article hit a nerve. It got linked to from other sites. My girlfriend had been depressed about the shootings, so I thought I'd email her a link to the Katz story (not that it would lift her spirits or anything), but she had already read it! My girlfriend, the English major, had read an article on Slashdot without me pointing it out (which, if I recall, I've never done before anyway).
  • You obviously didn't read the links he provided.

    It WILL work. The idea is to let the memory-hogging httpd processes that are producing the dynamic content exit as quickly as possible, then let squid, which doesn't use as much memory, pump out the data...

    This design would obviously improve the delivery of the static content more than the dynamic content. But that doesn't mean that the memory usage etc... wouldn't decrease for dynamic content.

    While I'm not sure what type of overhead squid takes per process... my Apache processes normally chew about 5MB of RAM per connection. Assuming that squid uses 1MB of overhead and you have another 1MB of content that has to be held in memory while it's being sent out, you'd only be chewing 2MB per process. Assuming that all the content was pulling that 1MB page (which we know it isn't), you would be able to serve 250% (or 2.5 times) more connections than you would with the same amount of RAM. That's a pretty good...
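The 250% figure above follows directly from the commenter's per-connection estimates (5MB per Apache process, 2MB per squid process); the total RAM figure below is an arbitrary illustration, since the ratio is independent of it:

```python
# Capacity ratio for a fixed RAM budget, using the comment's estimates.
ram_mb = 500       # illustrative pool size; any value gives the same ratio
apache_mb = 5      # commenter's estimate: mod_perl Apache, per connection
squid_mb = 2       # commenter's estimate: squid + 1MB buffered page

apache_conns = ram_mb // apache_mb
squid_conns = ram_mb // squid_mb
print(apache_conns, squid_conns)       # 100 vs 250 simultaneous connections
print(squid_conns / apache_conns)      # 2.5, i.e. the "250%" above
```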

  • I'll confess I liked it better when there were more moderators, or they had more points, or whatever... It really moved good comments up and bad comments down. I think Rob has lowered the limit and made it so people gain points more slowly than normal... He seems to like the idea of 10% of comments being moderated, whereas I wish it were something like 25%. Ah well...
  • I've seen links to it from all over.

    At least five from different places on BIX.

    The message count is still climbing.
  • The goal is 10% moderation, right?
    But moderation points are granted at a rate of 1%

    Granted, prior posts would give it a little leeway, but....

    That rate is not sustainable.
  • If I'm not very wrong then apache perl modules are compiled and cached, and hence treated as code pages that are shared between processes.
  • Has JonKatz's article surpassed the Linux 2.2 announcement to become Slashdot's busiest article ever?

    In my opinion, it was the BEST article I've read on the Littleton, Colorado subject; the BEST article I've read by JonKatz; and the BEST news-related article I've read on Slashdot. We all give JonKatz hell for articles like his Linux newbie article... but this one was extremely well-written and hit a nerve with all of us. (Probably us "geeks" and "nerds" moreso than others. As well as those of us like myself, fresh out of high school and into college.)

    Ryan
  • Rob needs better bandwidth.

    gigabit-Ether
  • Where do you see this??? I get a listing of the first 30 comments. Then it just simply stops. No next-comments button or anything. I really scared myself for a second there 'cause I thought one of my comments had been deleted. And I was on threshold -1 so I knew it had to be there. Then on a whim I changed my threshold to 1 and the comment was there. This was because it was now included in the first 30. Also I tried changing my threshold in my preferences page to 500 and it did absolutely nothing.
  • But moderation points are granted at a rate of 1%

    You're doing the math a bit wrong. Yes, each individual is allowed to moderate 1 out of every hundred posts, BUT!! you have to remember there are some 400+ moderators. This means that for every 100 posts, 400+ points are given out, which means that potentially 100% of articles can be moved up 4 points. Which is extremely excessive, but it works out right. The problem is obviously that the points are not being given out properly. Unless it is actually only giving them to 400 people at a time, chosen randomly from the people who qualify.
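The arithmetic in the reply above checks out as a sketch, taking the commenter's own figures (roughly 400 moderators, each earning 1 point per 100 comments posted):

```python
# Points generated by a batch of 100 comments, per the comment's figures.
moderators = 400           # the "some 400+ moderators" estimate above
points_each_per_100 = 1    # 1 moderation point per 100 comments posted

points_per_100_comments = moderators * points_each_per_100
print(points_per_100_comments)        # 400 points per 100 comments
print(points_per_100_comments / 100)  # 4.0 potential points per comment
```

So in aggregate the supply of points is ample; the observed drought would have to come from how they are distributed, not from the rate itself.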
  • by kend ( 22868 ) on Monday April 26, 1999 @06:49PM (#1914755)
    Um, folks -- with all due respect to BSD,
    and using squid to reverse-proxy, what I
    think Rob may have been getting at is this:
    1 Mb file, served 12 times/sec.:

    1,000,000 x 12 x 8 =~ 100 Mbit/sec. Last
    time I checked, a T-3 was approx. 45 Mb/s.
    You could be running a Cray, and as long
    as your pipe ain't large enough, your pages
    are gonna be slow.
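kend's arithmetic above is easy to verify; using the same assumptions (a 1 MB page served 12 times a second, a T3 at roughly 45 Mbit/s):

```python
# kend's back-of-the-envelope: bytes/sec -> Mbit/sec vs a T3's capacity.
page_bytes = 1_000_000      # the ~1 MB Katz comment page
requests_per_second = 12    # "served 12 times/sec." above
t3_mbit = 45                # approximate T3 capacity in Mbit/s

mbit_needed = page_bytes * requests_per_second * 8 / 1_000_000
print(mbit_needed)            # 96.0 -- in the ballpark of "~100 Mbit/sec."
print(mbit_needed > t3_mbit)  # True: the pipe saturates regardless of CPU
```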
  • In the slash0.2 source there is a slashd daemon which creates static pages from the database content.

    Does slashdot not do that anymore due to all the configuration options?
  • I'm relatively new to slashdot, just recently
    set my preferences. But I've already found
    some articles that really moved me, and some
    of the comments are sometimes even better than
    the article itself.
    I actually wanted to comment on the story "Voices from Hellmouth" but it was so filled with comments
    that there really was no point..
    No one would really read my comment anyway.
    I guess some things really get to you, and people
    with a geeky background (which I partly have)
    are a lot more alike than imaginable, and a lot
    of people have gone through the same things.

  • No kidding! But who are you to complain, Mr. "I'm on a T-1 thanks to my college"? There are real modem users who need cachedot to be fixed! :-)

    -jason


  • Click the link on the left, near the top that says "Code". :-)
  • You missed the point - the idea is not to actually cache any pages.

    When modem users pull down a large file it takes a long time, and that "heavy" apache process is required for the entire duration they're pulling it down.

    By using the squid accelerator, the apache can QUICKLY offload the page to the cache, which then feeds it out to the user at a slower speed with less system load.
  • I don't get it.

    I need slashdot for dummies.

    -cebe
  • For those who know me, I'm not the one that posted that rant about roxen =)

    I'd just like to notice that Roxen would work exceptionally well for this task. One of the issues though is that Malda is a perl nut, not a pike nut. =)

    The threading alone doesn't make roxen faster (in fact, it isn't faster, just more scalable, which is what is needed here..); it's its use of select() or poll() along with a server-side implemented caching system, so you don't end up reprocessing crap that you don't need to keep doing. The only benefit of threading, of course, is that all the threads share the same memory.

    Roxen Challenger [roxen.com]
    Pike [idonex.se]

    Something someone noted a while back is that roxen actually responds faster to the slashdot effect.. =) Probably due to the fact that it stores more stuff in ram because of the higher load. Doesn't make the OS freak out with 1000 connections.
  • This article has a lot more than 20 comments as advertised on the front page.
