Tarpits for Microsoft Worms 245

Digital_Quartz writes: "Wired News is reporting on a clever little tool by Tom Liston called LaBrea, which uses unused IP addresses on a network to create virtual computers for worms and hackers to attack. LaBrea responds to requests in such a way as to keep the connection open forever, creating a 'tarpit' in which worms like Code Red get 'stuck'."
This discussion has been archived. No new comments can be posted.

Tarpits for Microsoft Worms

Comments Filter:
  • temporary measure! (Score:5, Insightful)

    by Pooh22 ( 145970 ) on Thursday September 20, 2001 @08:36AM (#2324691)
    Ok, so the next version will close the connection in 1 minute. I don't see this helping in the future...
    • It doesn't really help. People with a lot of extra IP space are the ones with a decent connection (at least in theory). As a two-IP DSL user, it isn't going to do much for me.

      The god damn worm is still hitting my machine every 15s from a new IP and the god damn thing isn't stopping.

      Unless someone finds a way to stop the whole thing altogether, without each owner having to manually patch their machine, I am not terribly interested.

      • Ban Microsoft OS from your network. Easier said than done-- especially when the central administration at your workplace goes about replacing the universally accessible mainframe (via terminal interface) with a snazzy new website that only IE 5+ on Windows with JavaScript and ActiveX enabled can use.

        Not that I'm bitter, mind you.
    • I just received an email with the subject:

      Civil society has become one of- Foreign policy magazine winter 19992000

      with the attachment:

      Civil_society_has_become_one_of-_Foreign_policy_magazine_winter_19992000.doc.bat

      Didn't open it, but it comes from someone I don't know. Can't be good.
    • by interiot ( 50685 )
      From observation, it seems to me that it takes quite a while for obvious improvements to be included in new worms.

      For instance, it's somewhat obvious to me that it'd be good (for a virus) to leave open backdoors, so that your future viruses can springboard off your installed user base. If an old virus can spread from one computer to 20,000 in a week, then a virus that piggybacks on another virus could go from 20,000 infections to 40,000,000 in a week. AFAIK, CodeRed + Nimda was the first to do this.

    • by Phil Gregory ( 1042 ) <phil_g+slashdot@pobox.com> on Thursday September 20, 2001 @10:47AM (#2325355) Homepage

      Well, LaBrea operates below the level of most Windows network programs. From the program's point of view, it establishes a TCP connection to the server and issues the necessary HTTP commands. More things happen "on the wire" though. Here's a simplified timeline:

      1. Program calls connect() to reach the other host.
      2. OS's TCP/IP stack sends a SYN packet to the other host.
      3. One of several things happens:
        • The host does not respond and the connection eventually times out. Result: failed connection attempt after a short timeout.
        • The host is reachable but isn't listening on that port. It sends a RST packet. Result: The connection fails almost immediately.
        • Some other network error. At most, the connection will time out in a relatively short period of time.
        • The host is listening on the port and sends back a packet with both the SYN and ACK bits set.
      4. Presuming the TCP/IP stack got a SYN ACK, it sends its own ACK and considers the connection established.
      5. The TCP/IP stack reports to the calling program the result of the connection attempt. If the three-way handshake went as normal (SYN, SYN ACK, ACK), it considers the connection open.
      6. If the connection was successful, the program starts sending data.
      7. The TCP/IP stack accepts the data, breaks it into packets to be sent, and doesn't return to the program until it's done.
      8. For each packet of data the TCP/IP stack sends out, it waits for an ACK of that packet, retransmitting the packet if it doesn't receive the ACK within a certain period of time. It will wait longer and longer after each retransmission before eventually giving up on the connection altogether. The ultimate timeout on an established TCP session is relatively large (and possibly implementation-dependent; I don't remember exactly that part of the spec).

      Most firewalls only deal with the first SYN used to set up the connection. Either they reject it (send a RST) or drop it, leaving the connection to time out. LaBrea responds to the initial SYN, then ignores everything else, leaving the TCP session to time out. That timing out usually takes a while (the author estimates about 15 minutes for Windows machines), and the program is unable to do anything while it's waiting for the TCP/IP stack to finish sending its data.
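To get a feel for how the exponential backoff in step 8 adds up, here is a small arithmetic sketch. The initial RTO and retry count below are illustrative assumptions, not values taken from any particular stack:

```python
# Rough sketch of how TCP retransmission backoff accumulates once a
# connection is established but the peer (here, LaBrea) stops ACKing.
# The RTO value and retry count are illustrative assumptions only.

def total_retransmit_wait(initial_rto, retries):
    """Sum the waits in an exponentially backed-off retransmit series."""
    total = 0.0
    rto = initial_rto
    for _ in range(retries):
        total += rto
        rto *= 2  # classic exponential backoff: RTO doubles each retry
    return total

# With a hypothetical 3-second initial RTO and 5 retransmissions:
print(total_retransmit_wait(3.0, 5))  # 3 + 6 + 12 + 24 + 48 = 93.0
```

Even this conservative series ties the sender up for a minute and a half per connection; a stack that keeps probing an established-but-silent session can be held far longer, which is where estimates like the 15-minute figure come from.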

      OK, enough background. The point is that, with standard programming practices, the program doesn't get to pick how long it waits before giving up. That is dependent on the OS's TCP/IP stack. The ways around it are harder. One workaround is to use raw TCP sockets (and this may not even work; I'd have to check my copies of Unix Network Programming, and they're at home). (Anyway, Windows XP will be the first Windows to support raw TCP sockets.) The other is to use nonblocking I/O, which requires a lot more state keeping in the program. (But it would allow the program to fire off connections to a number of hosts and wait to deal with the responses as they come in.) I suspect most worm writers would just count on very few people running things like LaBrea and write the simpler code.

      So, in summary: LaBrea is pretty nifty and a program can't just shorten its timeout period to get around the delay. The only workaround I can think of at the moment is nonblocking I/O, but that has its own drawbacks (and, depending on the program design, could still be slowed down by LaBrea).
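The nonblocking-I/O workaround described above can be sketched in a few lines. This is a minimal illustration, not anyone's actual worm or scanner code; the target list and timeout are arbitrary:

```python
# Sketch of the nonblocking-connect approach: fire off many connection
# attempts at once and harvest results as they arrive, instead of
# blocking on each one (and getting stuck in a tarpit one at a time).
import errno
import select
import socket
import time

def scan(targets, timeout=2.0):
    """Nonblocking connect to each (host, port); return {target: errno} (0 = open)."""
    pending = {}
    for host, port in targets:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setblocking(False)
        s.connect_ex((host, port))  # returns immediately, usually EINPROGRESS
        pending[s] = (host, port)
    results = {}
    deadline = time.monotonic() + timeout
    while pending and time.monotonic() < deadline:
        _, writable, _ = select.select([], list(pending), [], 0.05)
        for s in writable:
            # A connect finished: SO_ERROR is 0 on success, an errno on failure.
            results[pending.pop(s)] = s.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR)
            s.close()
    for s, target in pending.items():  # anything left never completed
        results[target] = errno.ETIMEDOUT
        s.close()
    return results
```

Note that even here the per-host timeout is bounded by the caller, which is exactly why this style defeats a tarpit during connection setup; it does not help once data is queued on an established connection, as the comment above points out.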


      --Phil (Crazy network programmer.)
  • Interesting utility (Score:2, Interesting)

    by skroz ( 7870 )
    I've been looking into this for about a day... looks like it might have a chance, though it wouldn't be difficult to circumvent this kind of "tarpit" in future generations of viruses. By using multiple scanning threads, monitoring existing threads that might be getting stuck, and keeping an internal log of tarpits, a virus could learn which IPs to avoid. Handy in the short term, though, if enough people implement it.
    • Yes and at least it keeps them working on it. They tie up our time fixing it, we tie up their time making them rewrite it. Not the best solution but for now it is better than nothing.

      Maybe a few will get tired of having to work at it and do something useful.
    • And the script kiddies could call the fix "Briar Patch".

      "Oh, please br'er Bear, whatever you do, don't throw me in the Briar Patch!" - Br'er Rabbit.

    • I'm just waiting for this to make it in as a kernel option (iptables) so that I can trap all inbound connections to ports that aren't listening.
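Until something like that lands in the kernel, the hold-them-open idea can be approximated in user space. A minimal sketch (not LaBrea itself, which works lower in the stack by answering SYNs for unused IPs): accept connections on a port and simply never read or reply.

```python
# User-space approximation of a tarpit: accept every connection on a
# port and hold it open forever, never reading and never replying.
# The client's own stack is left to time the session out.
import socket
import threading

def tarpit(host="127.0.0.1", port=0):
    """Listen on (host, port); hold every accepted connection open.

    Returns the bound port and the list of held connections."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(16)
    held = []  # keep references so the sockets are never closed

    def run():
        while True:
            conn, _addr = srv.accept()
            held.append(conn)  # hold it; never read, never reply

    threading.Thread(target=run, daemon=True).start()
    return srv.getsockname()[1], held
```

A client that connects here can send its request into the kernel's buffers, then sits waiting for a response until its own timeout fires, which is the "stuck" behavior the article describes.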
  • How long do you think it'll take till a new release of one of these worms just spawns a new process for each attack? Now it might eventually bring your machine down, but the attacks keep going on till it does. The parent process could even kill the children after ten minutes just to help keep the machine viably attacking hosts.
  • I think this is a great tool for catching worms, but worms mostly come from the outside, where you _don't_ have IPs to spare, and you will probably run it on the inside, where it won't be that much use.

    There is no good way to block worms; the real solution is an easy way to check whether you've patched your servers against a central patch library..... the way windowsupdate was supposed to work.

    // yendor

  • A very sweet trap.. (Score:2, Interesting)

    by TheHawke ( 237817 )
    It's the equivalent of the Honeypot Project, and it shows promise. But the main problem with these kinds of projects is that one has to wonder where the funding comes from to provide the bandwidth..
    Honey usually doesn't come without a few stings, you know...
    • one has to wonder where the funding comes from to provide for the bandwidth..

      Irrelevant; the worm will use bandwidth whether it's funded or not. At least with this you gain some degree of control.

  • For a more detailed version of this same type, see the Honeynet Project [honeynet.org]. Knowledge is power, as the saying goes.
    • No, tarpits are not honeynets. Honeynets attempt to catch infections and analyze them, tarpits attempt to immobilize them by keeping connections stuck to them. Entirely different things.
  • by GreenJeepMan ( 398443 ) <josowski@tyTIGERbio.com minus cat> on Thursday September 20, 2001 @08:38AM (#2324707) Homepage Journal
    This way, 10,000 years into the future, the viruses will be magically rediscovered in pristine condition.
  • Verily, the PC is developing into an organism in its own right. How long until hackers develop the first "immunodeficiency", anti-LaBrea attacks, I wonder?
  • The fundamental problem with this is that yes, you may trap an instance of the worm with your fake IP address on your local network, but it's only one in a million instances all over the internet. You will preserve a copy indefinitely, but this is not a tarpit; a tarpit implies that the worm gets trapped and cannot escape, while in reality only one copy gets trapped and the others still roam free.
    • Actually, there were only about 300,000 known instances of Code Red. By this gentleman's estimation, only 1000 or so Tarpits would be necessary. If put in the right places, a single tarpit could stop multiple attackers.

      Incidents.org is gathering information about Nimda to determine what networks are most heavily affected, and therefore in most need of LaBrea traps. I don't know if it's going to work, but the theory is a good start.
      • I don't know if it's going to work, but the theory is a good start.

        Yes, or people could patch their fscking boxes...

        I'm not saying the tarpit idea is bad, it could help to some small degree. But it's a solution that we, Unix admins, are having to use because some Windows "admins" who double-clicked on "Install Web Server" don't know WTF they're doing...

        Of course I can't think of a better solution either. People have tried emailing admins of known infected boxes, etc, and so far none of this has helped...
    • I doubt that his intent was to create just one tarpit. The article says "This is just a drop in the bucket." If a lot of networks ran tarpits, they could stop a few copies and SLOW, not stop, the spread of the viruses, maybe giving other administrators time to patch.
      And it IS a tarpit, because the worm that attacked the tarpit is stuck and cannot spread. That's like saying a real tarpit is not a tarpit because it didn't trap all animals, but only stopped a small number.
    • Re:NOT A TARPIT (Score:3, Interesting)

      by Erasei ( 315737 )
      This is exactly like a tarpit. Tarpits don't instantly trap all T-Rexes everywhere in the world. A tarpit will trap the one T-Rex that got a little too close. If there are thousands of tarpits in the world, then a lot more T-Rexes will get trapped. I don't think the author of LaBrea aims to have his program stop all worms on the internet, but if it were to be run on a few thousand machines, it would certainly help.
    • Re:NOT A TARPIT (Score:3, Interesting)

      by interiot ( 50685 )
      Well, for one, this will immensely help corporations with large networks... once you filter incoming email for the specific .exe at the firewall, you need to start cleaning up internal IIS servers. LaBrea will slow the IIS servers down to keep them from hosing your network while you hunt them down and clean them up.

      For two... (drum roll please...) What if we had a LaBrea Beowulf? If major network providers (eg. UUNet) implement this across their networks, it would save themselves bandwidth and thus cost, and would make their customers happier.

      It doesn't kill worms, it just greatly reduces their impact on the network. Sounds good to me.

  • Pointless (Score:3, Interesting)

    by scott1853 ( 194884 ) on Thursday September 20, 2001 @08:46AM (#2324733)
    It's a cool little program. Its purpose: to use up your own resources to prevent other people's resources from being used up. There seems to be a little flaw in that logic to me. Personally, I like the scripts that connect to servers that have tried to infect them and send those servers a bit of code to reboot the machine. I'd rather they install the patch automatically and then reboot the machine, though. That seems like a much more efficient use of resources.

    Why has nobody either sent out a worm to patch machines, or created a script to patch the sender of a worm? The bandwidth used would be minimal to what is being eaten by these worms, and it would SOLVE the problem. Of course, in this day and age, nobody wants to actually solve a problem, they have to create some technically incredible way of ignoring a problem, or placing blame on the common scapegoat of MS or stupid admins, or doing some trivial task just to prove they can do the same type of thing as the virii spreaders.

    BTW, this article was posted on Wired yesterday afternoon, why did it take so long to get here?
    • Re:Pointless (Score:2, Interesting)

      by Red Weasel ( 166333 )
      According to the article someone did make a worm that patched the infected machines. The programmer was apparently arrested for spreading a worm onto the net.

      They could catch the white hat but the rest run free. That's just funny.
      • No, the article was talking about another worm that was patched by Max Vision (or whatever the heck he's calling himself nowadays). The worm was infecting DoD servers, so he sent out another worm to patch them. However (and this is why he was arrested), he put a backdoor in the patched version.
    • Re:Pointless (Score:2, Interesting)

      by Mawbid ( 3993 )
      It's been suggested many times, and a few people have even done it (cleaning the attacker, that is -- I'm not aware of a patch-worm having been released).

      The problem is that it's illegal. No matter how well meaning you are, no matter how much it helps the owners of the machines cleaned by the retaliation script/worm and the Net as a whole, it's still illegal and can get you in trouble. Like entering someone's parked car to turn off their headlights for them. (Sorry to use a computer/car analogy.)
    • Re:Pointless (Score:4, Interesting)

      by Gleef ( 86 ) on Thursday September 20, 2001 @09:26AM (#2324897) Homepage
      scott1853 writes:

      It's a cool little program. Its purpose: to use up your own resources to prevent other people's resources from being used up. There seems to be a little flaw in that logic to me.

      It's a program to use a little bit of resources on one machine to reduce large resource impacts on many other machines. In addition, it allows you to detect and contact the owner of the infected host, hastening repair of the system and speeding up recovery of the net.

      If you have a large network, you might very well be helping yourself far in excess of the bandwidth used by the tarpit, certainly a win in my book. Even for those with small networks, some people might well be interested in sacrificing a small, controllable amount of bandwidth to help the general health and well-being of the internet as a whole.

      Why has nobody either sent out a worm to patch machines, or created a script to patch the sender of a worm? The bandwidth used would be minimal to what is being eaten by these worms,

      That is highly debatable.

      and it would SOLVE the problem.

      But the problem isn't "Code Red", that's just a symptom of the problem. The problem is a combination of low security on the internet and the fact that Microsoft's monopoly has the side effect of making many identical security holes on thousands of machines.

      Of course, in this day and age, nobody wants to actually solve a problem,

      Nobody particularly wants to waste a great deal of bandwidth to put a band-aid on other people's sites for each worm that comes out, which is what you seem to recommend.

      Real solutions to the problem aren't easy, but most of them are being actively worked on:
      * Increase competition in internet server platforms and applications;
      * Improve the distribution of security information and patches to the end users;
      * More commercial internet monitoring and response services (e.g. Counterpane);
      * Security-conscious internet insurance plans;
      * Segregate the typical broadband customer behind transparent firewalls (I'd pay extra for a premium broadband service to give me a real IP if it would get the bozos who shouldn't have a computer, much less an internet server, off the real IP space).
      • Re:Pointless (Score:4, Interesting)

        by scott1853 ( 194884 ) on Thursday September 20, 2001 @10:16AM (#2325171)
        Don't give me "it's a symptom of the problem" bullshit. The PROBLEM as it is right now is the worm itself. Stop this worm, stop the next, give the people time to make their servers secure and all the idiots time to figure out what they've gotten themselves into by assuming they can run W2k. So your plan would be to just wait for MS to fix ALL their security holes and make it so my grandma can set up a W2k box and never have a problem? How long will that take, 5, 10, 15 years? And the fixes will introduce new bugs. So the answer is to do what gives the biggest response NOW, not a decade from now.

        I don't know what you're referring to in saying that I want everybody to waste their bandwidth. Somebody would need to release a worm that fixes the hole, spreads itself, and removes itself. I'm not saying everybody should install the script that simply reboots the machine; that does nothing but give the machine a 2-minute break in between infections. I'm not saying the worm should scan a thousand IP addresses to see what machines are infected. Let it check log files if they exist, find any machines that tried to infect it, check and see if those are still infected, and if not, the worm should delete itself.
        • Re:Pointless (Score:3, Informative)

          by Gleef ( 86 )
          scott1853 writes:

          Don't give me "it's a symptom of the problem" bullshit. The PROBLEM as it is right now, is the worm itself. Stop this worm, stop the next, give the people time to make the server secure and all the idiots time to figure out what they've gotten themself into by assuming they can run w2k.

          OK, we disagree on what the basic problem is. No big deal, we can talk about how to deal with an arbitrary worm (the worm du jour seems to be Nimda).

          So your plan would be to just wait for MS to fix ALL their security holes and make it so my grandma can setup a W2k box and never have a problem? How long will that take, 5, 10, 15 years? And the fixes will introduce new bugs. So the answer is to do what gives the biggest response NOW, not a decade from now.

          That wasn't my plan, although a piece of what I was discussing does involve Microsoft (and other vendors) streamlining their security patch process. There is no way that *any* vendors can fix *all* security holes. Waiting for that would be ludicrous. Regardless, I was referring to how to reduce the impact of future worms (and other internet badness), not how to deal with a worm in the wild now.

          Worm in the wild now: As of this writing, the last three major worms were "Code Red", "Code Red II" and "Nimda". All three of these exploit holes in Microsoft software, and these holes were discovered and a patch written months ago. In addition, Nimda exploits holes opened up by an active Code Red II infection. Any competent administrator unfortunate enough to have to manage an IIS installation has taken their machine offline, made sure their machine is worm-free, patched NT/2000 and IIS, and put it back online. Your main concern is those admins who have not done this, and there are a disappointingly large number of them.

          I don't know what you're referring to in saying that I want everybody to waste their bandwidth. Somebody would need to release a worm that fixes the whole, spreads itself, and removes itself.

          Where do you think the bandwidth issue comes from? When a worm scans host machines to look for places to spread, it uses a lot of bandwidth. This is what most people here are complaining about. Your proposed worm may fix bad IIS installations, but it would have to use at least as much bandwidth as the worm it's designed to fix.

          The people here (me included) won't thank you, since they care more about how these worms impact bandwidth than whether someone has an infected machine somewhere. The administrator of the machines you've "fixed" won't thank you, because now they've had two or three intrusions while they were napping, rather than one.

          If the repair worm has a minor bug in it, it could potentially do more damage than the original worm, or open up a new security hole as it fixes the others. In such a case, at best you are looking at a lawsuit against you; at worst, multiple felony convictions in multiple countries.

          I'm not saying everybody should install the script that simply reboots the machine, that does nothing but give the machine a 2 minutes break in between infections.

          Good, because while I'm not sure what you're talking about here, it doesn't sound like a good idea.

          I'm not saying the worm should scan a thousand IP addresses to see what machines are infected.

          In order for a worm like you describe to work, it probably would have to scan thousands of machines for a vulnerability, infect the machine with your worm, and then detect whether or not the worm is present from the inside.

          You *might* be lucky and target a worm which leaves external evidence, so you can scan thousands of machines for the presence of the worm. Both Code Red II and Nimda can be detected from the outside, but the check I know of for Nimda uses a lot of bandwidth. Regardless, a worm would have to scan thousands of machines to implement your idea; it's just a question of what it scans for.

          Let it check log files if they exist, find any machines that tried to infect it, check and see if those are still infected, if not the worm should delete itself.

          What log files are you talking about? None of the worms leave a log that I know of. Neither NT nor 2000 logs intrusion attempts without extra software. I would wager that very few of the infected machines have IDS software installed. In order to write a worm to effectively track down and eliminate worms, you have to use scans at least as extensive as the ones the target worms are using. Unless the target worm has a buggy scanning algorithm, any repair worm would kill at least as much bandwidth as the original worm.

          This cure is worse than the disease, in my book. I'd rather focus my attention on long-term solutions that will reduce the overall problem.
          • Ok, this is getting a little absurd, this is my last explanation:

            I don't know what you're referring to in saying that I want everybody to waste their bandwidth. Somebody would need to release a worm that fixes the hole, spreads itself, and removes itself.

            Where do you think the bandwidth issue comes from? When a worm scans host machines to look for places to spread, it uses a lot of bandwidth. This is what most people here are complaining about. Your proposed worm may fix bad IIS installations, but it would have to use at least as much bandwidth as the worm it's designed to fix.


            It wouldn't use as much bandwidth, because the corrective worm would only fix machines it knows to have the virus, by analyzing the server's log files to see what machines have infected it and then making sure those machines are no longer infected. There would be no random connections made. After all the infected servers have been hit, that's it. Code Red continually tries random addresses. I'm not going to figure out the math, but my solution would require a lot less bandwidth overall. Not that it wouldn't require an equal amount of bandwidth at its peak, but there would be a peak, and then a dropoff of both Code Red/Nimda and the corrective worm itself.
            • scott1853 wrote:

              Ok, this is getting a little absurd, this is my last explanation:

              Agreed, this is absurd, this is my last explanation too.

              It wouldn't use as much bandwidth because the corrective worm would only fix machines it knows to have the virus, by analyzing the server's log files to see what machines have infected it and then making sure those machines are no longer infected.

              This is not a feasible answer. Windows NT/2000 logs do not contain the information you seek. Even if you can point to a specific log entry that would implicate, for example, a Code Red infection, there is no guarantee that future worms will cause log entries or even leave the log files intact. Windows has particularly bad logging, but even Linux/BSD/Unix machines cannot protect the log files from a worm with root access.

              I snipped the rest of your post, since it all hinges on the assumption that your worm knows which machines are infected without scanning.
          • Any competent administrator unfortunate enough to have to manage an IIS installation has taken their machine offline, made sure their machine is worm-free, patched NT/2000 and IIS, and put it back online. Your main concern is those admins who have not done this, and there are a disappointingly large number of them.

            No, there's a vast majority of machines running IIS because the clueless user who installed 2k on his own box saw "Web Server" check box during the install, and instantly thought "I surf the web" and thusly checked it. So he's running IIS at default security, with the default pages with their holes and all. And he has no idea he's doing it. And he doesn't understand "security patches". And so, his computer is attacking others completely without his knowledge because "he doesn't run a web site".

            Those are the people whose computers you have to fix. They're certainly not going to do it.

            • I would still call these people admins (clueless, incompetent admins, but admins nonetheless). In my book, anybody in charge of a machine (i.e. has the root/admin password and doesn't have someone else to admin for them) that is directly connected to the internet is an admin, whether they know it or not. This includes anyone who plugged their new Windows ME Gateway machine into their cable modem just to play Everquest.

              I call them admins because they are in a position where they need to be responsible about how their computer is configured and interacts with other computers on the internet. The fact that they haven't the faintest idea what they've gotten themselves into is very sad.

              This is why, in an earlier post, I advocated having broadband services by default set people up with fake addresses and transparent proxy servers. People (like me) who need or want direct connections would have to know enough to at least ask for them. This measure alone would reduce some of the worst stupidity on the internet (e.g. huge zombie farms).
        • If your script is going to go around patching everyone else's holes, why would those 'idiots' ever know they had a problem in the 1st place? Wouldn't they just think they had the most secure system in the world that never had a security problem?
          • Why do they need to know? If they were told, would they understand? Code Red was referenced on LOCAL news channels for 2 weeks, even giving users instructions on how to fix their machines. Telling them didn't help. And if informing the users that they are stupid is required, this corrective worm has complete control of the machine just as Code Red does. Change their wallpaper to a bitmap of instructions. That's really a minor point though.
      • If you have a large network, you might very well be helping yourself far in excess of the bandwith used by the tarpit, certainly a win in my book.

        A variant of this that stickied up ALL the ports rather than just port 80 might be interesting. Deploy that on your net, and anybody who tries to portscan the phantom machines might spend a LONG time trying to categorize them. B-)

        Similarly, making some of the otherwise unused ports on a REAL machine sticky would also be a problem for portscanners - though somewhat impolite to people who are attempting to connect for legitimate reasons.
    • Re:Pointless (Score:3, Informative)

      by PhilHibbs ( 4537 )
      Why has nobody either sent out a worm to patch machines, or created a script to patch the sender of a worm?
      Already happening [vnunet.com]. Unfortunately I think that self-destructing worms are by definition going to be less virulent than worms that take over a machine completely and keep trying to spread until they are removed.
  • It seems more of a 'feel-good' measure than anything. After all, Liston's quoted as:

    "I'm holding about 1,000 Nimda scanning threads and 300 Code Red scanning threads at the HackBusters site. I'm holding them hard and I'm not letting them go"

    Well, what about the other threads that are spawned by the virus? If I remember correctly, don't Code Red and Nimda spawn multiple threads to infect/probe several hosts at the same time? How does this really do anything other than just hold one thread captive while the other XX threads go about their daily business?
    • As the author stated, many networks only use 20-30% of their IP space. The other 70-80% can be used as a tarpit. The XX threads that aren't trapped will process their real host quickly, and then likely get stuck in the tarpit on the next try.
    • OK. I have to admit that this tool is pretty neat. But here is a potential problem I see:

      1: Computer running LaBrea picks up a request for 10.0.4.1, and adds it to the IP address list it monitors.

      2: Computer "Atlantis" boots up and requests the ip address 10.0.4.1 from the DHCP server.

      3: The DHCP server has no record of any other computer using this IP address, so it issues this IP address.

      4: "Atlantis" is now cut off from the network.

      Does anyone know if this is a problem? I imagine it could be solved by making it DHCP-aware and using RARP after seeing DHCP requests...
  • sure sure, viruses can be rewritten to timeout to avoid a tarpit, but unless software like this becomes widespread (and I doubt it will), chances are very very few viruses would be built to consider them. Same reason viruses aren't ported for Macs or *nix. I find one of the best ways to avoid these outbreaks is simply through nonconventional software solutions. So I say kudos to this kind of development.

  • It still saps my pathetic bandwidth. (64 k)
    Is there a way that I can re-direct port 80 requests using NAT (FreeSCO Linux Router) so that they go to Microsoft's website and not mine?
    I suppose that it would still sap my bandwidth, but at least it would eventually land in *their* lap...

    Cheers,
    Jim in Tokyo
    • Is there a way that I can re-direct port 80 requests using NAT (FreeSCO Linux Router) so that they go to Microsoft's website and not mine?

      Maybe an ICMP redirect if your firewall / server supports it? You want "default.ida"? Maybe you should try 207.46.230.219 ... ;-)

      • No.

        ICMP redirects only work on packets on your local segment - what you propose wouldn't work. Even if the router you're connected to would accept an ICMP redirect from you (unlikely, most ISPs turn this off on their CPE), you would just create a loop, because the packet is STILL destined for your network. (So you'd just end up soaking up even more of your own bandwidth.)
    • I don't know anything about NAT, but you can put this in your Apache server config:

      RedirectMatch ^.*\.(exe|dll).* http://support.microsoft.com

      • Comment removed based on user account deletion
          Not a redirect - I want to tell the router to send any port 80 request to Microsoft's IP.
          As it is now, the router sends all requests on port 80 to 192.168.1.xxx - I know it's gonna still hit the router, but I want it to send it elsewhere from there...
          I'm not talking javascript here - NAT...
    • Assuming you have forwarding on, I believe so.

      add to /etc/forward.cfg a line that reads:
      t,80,207.46.230.219/80
      and restart forwarding.
      But if you are running any kind of server on port 80, this does not help you, since this redirects all the traffic.

      • Since 90% of my traffic on port 80 is Microsoft-related poop, I can use 8080 - Most of my useful traffic comes from a site with a real IP - I use DynDNS to resolve my dynamic IP (Kickass service!) but most people hit it via mmdc.net, so it wouldn't be a problem.
        Thanks!
        Jim
    • I'm trying to find a way of how to do it using ipchains... but I'm not sure how to distinguish the "bad" packets from the good ones. I don't want to permanently DENY people from my server, since I do run a public web site. More so, just to ban them for a while (ala K-line)
      • What's that from?

        As for IPCHAINS, I would have my standard script in a daily cron job. Block them as they come in, but then dump all the new rules each day - Add them again as they misbehave... Twice daily, if necessary.
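        For the "ban them for a while" idea, here is a hedged crontab sketch. The schedule and the script path are illustrative assumptions - /usr/local/sbin/worm-block.sh stands in for whatever blocking script you actually run (such as the ipchains loop posted elsewhere in this thread):

```shell
# crontab fragment: flush accumulated DENY rules twice a day,
# and re-add blocks for currently-logged offenders every hour.
# m  h     dom mon dow   command
0    */12  *   *   *     /sbin/ipchains -F input
5    *     *   *   *     /usr/local/sbin/worm-block.sh
```

        Note that flushing with -F input also drops any other input rules you have, so on a real box you'd want a dedicated chain for these blocks.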
  • by davidu ( 18 ) on Thursday September 20, 2001 @08:51AM (#2324752) Homepage Journal

    Tools like LaBrea are cool, but aren't much more than hacks. By wasting the TCP timeout on these worms, it just forces the next worm writer to create a multi-threaded worm, which would instantly be immune to such a defense.


    A better defense, which I admit is more costly in terms of CPU is to run border IDS systems and simply have rulesets to filter this kind of traffic out.

    For example: here is a snort ruleset for Nimda and Code Red and possibly other worm variants against Windows OSes:
    alert tcp any any -> any 80 (content: "cmd.exe";msg: "cmd.exe access in HTTP!!";react: block;)
    alert tcp any any -> any 80 (content: "root.exe";msg: "root.exe access in HTTP!!";react: block;)

    If you're running BigIP switches:
    rule block_nimda {
      if (http_uri starts_with "/scripts" or http_uri contains "root.exe") {
        discard
      } else {
        use ( server_pool )
      }
    }

    The point is...
    It's better to stop these things on border routers and on the edges of LANs than on individual machines or IPs. LaBrea does nothing to protect other machines aside from slowing down the worm, which is almost futile.

    Just my $.02,
    dave
    • by Anonymous Coward
      Yeah, but all you have to do is unicode the cmd.exe string, or %u encode it, and then your filter is useless. You have to canonicalize your string before you do the compare.
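      To illustrate the point, a hedged shell sketch - decode_pct is a made-up helper that undoes only plain %XX hex escapes; it is not how Snort canonicalizes, and it does nothing for the %u (UTF-16) trick, which needs a real canonicalizer:

```shell
# decode_pct URI: undo the plain %XX escapes worms hide behind,
# so a substring filter sees "cmd.exe" / "root.exe" again.
decode_pct() {
  printf '%s\n' "$1" \
    | sed -e 's/%2[eE]/./g' -e 's/%2[fF]/\//g' -e 's/%5[cC]/\\/g'
}

# a filter matching the literal string "root.exe" misses this request:
decode_pct '/scripts/root%2Eexe'   # prints /scripts/root.exe
```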
    • A better defense, which I admit is more costly in terms of CPU is to run border IDS systems and simply have rulesets to filter this kind of traffic out.

      No, a better defense is a solid firewall, a border ids, host based detection measures, anti-virus, and additional barriers such as honeypots and LaBrea

      No security technology you mention will solve all problems. To provide good security, one must deploy many different technologies depending on their business or personal needs. LaBrea is another tool in the box to throw up in the way of attacks. It happens to be good with worms and scans, while weak in other areas. That's why a variety of barriers should be used. Even then, there is always more that could be done.

    • if you deny the request, the worm knows right away, and will move on to another machine. If you acknowledge the request, but then ignore it from there on... it will have to time out before it moves on to the next IP. I'm betting you already know this, but thought it was worth clarifying for those of us who aren't as script-enhanced as you appear to be.
    • Those rules of yours would have blocked your own post because it contained "root.exe"... it's not always bad to have that string in your packets.
    • Tools like LaBrea are cool, but aren't more then hacks. By wasting the TCP timeout on these worms it just forces the next worm writer to create a multi-threaded worm which would instantly be immune to such a defense.

      Multi-threaded. You mean it might spin up, maybe 100 or 300 threads and attack other machines? Oh wait! Code Red did that!

      Many worms are multithreaded, and LaBrea would slow them down too. However, a very clever virus might initially take a performance hit but then recover and not hit known tarpits. That would, however, prevent the virus from being very... undetectable.
  • Looking at my Apache logfiles, I see the infected systems trying to obtain many .exe files, like cmd.exe. I was wondering if I could stop those systems by taking a "shutdown.exe" program, renaming it to "cmd.exe", and putting it on my web server - then hoping that they download this "cmd.exe" and execute it.

    OK, it's only a stop-gap solution, just for this particular attack, but it could quiet things down (on my subnet). One problem is that I couldn't find a Windows "shutdown.exe" program that has no GUI and doesn't take any command-line parameters.

    Willem
  • by lar3ry ( 10905 ) on Thursday September 20, 2001 @08:52AM (#2324758)
    Should be simple to write a script that would examine your HTTP error_log file for '\.exe' and insert a rule into IPCHAINS to DENY all connections from that IP. The connection will time out, of course... but it will slow down the virus.

    Much better than having your system get hit 15 times a second from Nimda probes, anyway.
    • A kind fellow posted that yesterday. It looks like:

      for LUSER in `grep "winnt" /var/log/apache/error_log | awk '{print $8}' | sed -e s/]// | sort | uniq`; do
        if [ ! "`/sbin/ipchains -L -n | grep $LUSER`" ]; then
          /sbin/ipchains -A input -s $LUSER -d 0/0 -j DENY
        fi
      done

      Your error_log is likely in a different place, perhaps by a different name.

      I've been running this from cron on several machines. I'd suggest trimming your error_log first to just the last couple of days. And watch not to put the cron jobs too close together, or they'll pile up once the number of attacks in the error_log gets to several thousand - trial and error on that for your system, but keep an eye on the processes.

      Note that if you flush your firewall rules (say, on a reboot) you'll open up to the addresses this has been blocking until this is run again. By trimming your error_log to the last, say, 24 hours and flushing the firewall rules, you can make allowance for systems that have been fixed or taken offline.
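      A hedged variant of the same idea, wrapped as a function so you can point it at a trimmed copy of the log; it assumes the stock Apache error_log format ("[date] [error] [client 1.2.3.4] ..."):

```shell
# extract_ips LOGFILE: print each attacking client IP once.
extract_ips() {
  grep -E 'cmd\.exe|root\.exe|winnt' "$1" \
    | awk '{print $8}' \
    | sed 's/]//' \
    | sort -u
}
```

      Each address it prints can then be fed to the same ipchains DENY rule as above.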

  • Instead of fixing the operating system to avoid these obvious mistakes, we have people creating solutions outside of the operating system. It's like when MS tells people that their systems are buggy, so instead of fixing their own system, they suggest people buy more licenses and more machines to run as backups.

    What happened to fixing the problem where it originated from?
    • Wow. I wish I had some mod points to mod you as a troll. If you're read the article (or any of the thousands of others on the Net about this right now), you'd know that this HAS been fixed for a LONG time now. It was part of Service Pack 2.
  • VMware?

    What is the postercomment compression filter?
  • Filesystem loops (Score:3, Interesting)

    by Ed Avis ( 5917 ) <ed@membled.com> on Thursday September 20, 2001 @08:53AM (#2324768) Homepage
    Within my home directory I have a couple of symlinks pointing back at the root of the home directory. Because it's exported by Samba to Windows machines, and Windows (or rather, Win32) doesn't know about symlinks, the 'Find File' utility from the Windows Start button would get stuck descending forever into these links. I can't say for sure, but it's possible that a few worms like ILOVEYOU were thwarted or slowed down by this, if they do a depth-first search for files to infect.

    Unfortunately, I think that in the end Samba was reconfigured not to serve symlinks :-(. It would be nice to have an option to serve the first level of symlinks but not allow recursive ones.
    • As far as I can tell, samba follows symlinks very nicely. (mixture of home directories in tomcat's webapps and having available space in the "wrong" directories.)
      I think it's the server that would get stuck, not the attacker. :-(
  • by MS ( 18681 ) on Thursday September 20, 2001 @08:56AM (#2324780)
    Some additions w/r to Nimda:

    Strange: of the 27 hosts (IP-based) I run on a single box, the most popular got probed first, not the server with the lowest IP number, so the worm seems not to attack IP numbers sequentially, but rather to follow some reference from somewhere else. This may also explain why it spread so quickly: if the worm could replicate itself from a popular webserver, the chances are good for a quicker spread among many surfers... This worm is really an excellent piece of code - kudos to its author!

    And here are some log entries from another box (NT running Apache):

    First suspect entries on July 12(!):

    My Timezone is GMT+1 (That's mid-europe, one hour ahead of Great Britain)
    (SR) stand for ServerRoot which I omitted here

    [Thu Jul 12 03:39:40 2001] [209.3.150.130] File does not exist: (SR)/scripts/..%5c..%5cwinnt/system32/cmd.exe
    [Thu Jul 12 03:39:42 2001] [209.3.150.130] File does not exist: (SR)/msadc/..%5c/..%5c/..%5c/winnt/system32/cmd.exe
    [Thu Jul 12 03:39:43 2001] [209.3.150.130] File does not exist: (SR)/_vti_bin/../../../../../../winnt/system32/cmd.exe

    I had a few more interesting logs between Jul 28 and Aug 30... but the /. Lameness filter considers it a Junk character post, so I had to shorten it...

    May this information be useful for someone!
    ms

    • Who was it that invented the really fast way to propagate a worm?

      Anyway, it mostly had to do with getting a huge list of vulnerable IPs before you even unleash the worm, giving each worm "process" a chunk of the list to work with. It looks like you were being probed for vulnerability as some other people stated they were, but nobody was actually infected until recently. Very efficient!
    • I did a bit more research:

      nslookup 209.3.150.130 gives mail.worcestercs.org, but the domain worcestercs.org is no longer in use and Network Solutions tells me it is available...
      ARIN, on the other hand, assigns the IP address 209.3.150.130 to: Qwest Communications, WORCESTER COUNTY SCHOOL (they have a block of 128 addresses)
      209.3.150.130 seems now to be a dialup address of IConNet.NET

      :-)
      ms

  • When I saw this thing hit I decided to modify the 404 script I'm running on my web server to log all Nimda attempts. I made a front end script that shows their ip and creates some whois links.

    I then devote a few hours of my time in the evening to click these links and let the netblock owners know that the specific IP is infected. I would hope that people would keep the ball rolling and inform their downstream or shut them off.

    In the multiple hundreds of emails that I have sent out, I have received 2 replies from real people. This tells me that nobody cares. No big surprise, it's been proven again and again.

    what are you doing to help?
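    In the same spirit, a hedged sketch of the "whois links" step - the ARIN CGI URL form here is an assumption from memory; substitute whatever whois front end you prefer:

```shell
# to_whois_urls: read attacker IPs on stdin, print one lookup URL each.
to_whois_urls() {
  while read ip; do
    printf 'http://ws.arin.net/cgi-bin/whois.pl?queryinput=%s\n' "$ip"
  done
}
```

    Feed it the unique IPs from your 404 log and paste the results into your report mails.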
  • Here's what I was just about to submit:

    LaBrea - The Tarpit: Keep your friends close, your enemies closer.
    - -
    With the recent proliferation of worms (Code Red [symantec.com], Sircam [symantec.com], Nimda [symantec.com], etc.), beyond either switching to a more secure? [linuxsecurity.com] webserver or keeping up to date with the patches for your own and hoping that others do the same, approaches to actively dealing with the problem have been limited. One can try to either contact the administrator[s] of the machines infected or take a slightly more risky [wired.com] proactive [bbc.co.uk] approach. 'LaBrea' - The Tarpit offers proof of concept? for an interesting open source approach.
    Linux today [linuxtoday.com], Wired [wired.com] and Linuxsecurity [linuxsecurity.com] have covered this developing project, more information is available from Hackbusters [hackbusters.net] here [threenorth.com], here [hackbusters.net], here [dshield.org], here [incidents.org], here [fwsystems.com], or here [zcsi.net].
    - -
    I'm off to sulk. :)
  • Sounds interesting (Score:4, Insightful)

    by rabtech ( 223758 ) on Thursday September 20, 2001 @09:29AM (#2324906) Homepage
    it sounds rather interesting, but might I suggest securing the server in the first place?

    For any IIS admins out there, you need to download and install URLScan. It is a free tool put out by Microsoft. It scans incoming requests and only allows ones that meet its criteria of rules (with a default blank ruleset, all requests are discarded.)

    <a href="http://www.microsoft.com/Downloads/Release.asp?ReleaseID=32571">http://www.microsoft.com/Downloads/Release.asp?ReleaseID=32571</a>

    There are a variety of other methods that can be used as well, and I am currently working on a guide to security for IIS admins. It isn't that hard... take the time to do it right.
  • when Apache is free and has none of the security bugs of IIS. Even if you are running winNT/2k, I don't see why anyone who had the choice would choose IIS. On my machines at home and at school, one of the first things I do is rip out wu-ftpd and replace it with NcFTPd for the same reason I'd rip out IIS and install Apache.
    • For a lot of reasons. It's easy to install on NT/2K server, it natively supports ASP, it's mind-numbingly simple to administer, it seamlessly hooks into other Microsoft server products, and a large number of third-party server apps require it.


      Now, if you want any good reasons.. I can't help you.

  • I don't recall what the limit is on open connections on a typical *nix system, but wouldn't this tie up connections? The longer you hold each connection open, the more simultaneous connections are being wasted.

    IOW, don't use this on a production machine. Perhaps you could run this on a separate box that doesn't do much, but that sounds like a lot of work (compared to, oh, say, patching the NT boxen).
  • How biased can you get with the title "Tarpits for Microsoft Worms"? Did the Slashdot editors think they were being cute by just associating worms with Microsoft? This kind of behavior only colors the image of the Slashdot geek in a bad way. I know the other side of the fence does it as well, e.g. associating the GNU license with the word viral, but that doesn't justify this unprofessional behavior.

    While I'm here, I'd like to make the observation that bashing Microsoft has now become trendy. It's in the same category as bashing Starbucks and Abercrombie & Fitch. It's so profuse that it has infiltrated my computer science classes. The professors and students try to make jokes and slam Microsoft in such a miserable way that the situation becomes completely inane.
    • How biased can you get with the title "Stop the Microsoft Bashing"? Did you notice that the news item in question is in fact about a software product which bogs down the processing of worm software? Did you notice that these worms only attack Microsoft operating systems? I think the title is not so much "Microsoft bashing" as a good summary of what the article is about.

      Lighten up.
    • They exploit security holes in Microsoft software on Microsoft OSes. Other software and OSes are immune (although if a user has access to the file space, they could place an infected file on the non-MS server, making it an "immune carrier"). So what should we call them?
  • If only we had the power spike from Goldeneye. Anyone who didn't secure their computers properly would get the machine fried by black hat hackers. It would be an excellent mark of shame.

  • Many people are writing about how this is worthless because:
    - It can be circumvented by future worms
    - It does not protect your current hosts
    - Other worm threads continue to scan
    - etc

    While all of these comments are valid, they miss the point of a solid security strategy - defense in depth. This seems to be a valuable addition to an existing security infrastructure. One case in particular is sequential port scans: a scan would most definitely get snagged by such a host if it were walking IPs sequentially.

    Of course virus writers can circumvent tarpits with thread timeouts, etc. but that requires much more code and skill. It would also create a larger amount of code that may be easier to detect.

    This program, just like any other security product, does not prevent any sort of attack, but if installed enough places, it will raise the bar for future attacks.

  • by smooc ( 59753 )
    But they couldn't withstand the biggest hack:

    The /. Effect

    -- site is unreachable
  • Distributed was a great buzzword for a while. Peer networks were wonderful when we were getting music for free (like all this beer everyone keeps promising). Stickin' it to the man is the thing at the moment, but come an opportunity to actually do something useful, and no one "gets it" all of a sudden.
  • It seems to me that on top of wasting bandwidth and other resources, this technique would serve as an immediate spur to write more sophisticated worms. For example, the term "timeout function" immediately springs to mind....
  • IF they detect the luser's home site (cable, dsl mostly) is causing problems for The Net, overall.

    obviously so many systems are infected and going unchecked. I sent mail to postmaster@ so many times in the last few days and have gotten ZERO replies back. shit, they don't even read their own postmaster accounts - how could you expect them to be responsible enough to check their own logs and system resources?

    it appears that the only way to let these turkeys know they have a local problem (one which has global implications) is to shut them down until they clean up their act.

    it isn't really hard to sample traffic on an ISP's port concentrator (router, dslam, switch, etc) and if you see a customer sending out this kind of crap traffic, shut down their port and let them contact you. when they do, inform them how to fix their system and then switch their port over to a non-public lan and monitor to see if the virus has been removed. if and only if it has been removed, then you can switch them back to the common public wan.

    given that M$ lusers tend to install-and-forget their boxes (at least home lusers do), I see no other way to stop this M$ menace from affecting others.

    I, for one, am sick and tired of paying for other peoples' poor choice of o/s.

  • I admit, I didn't read the article, but...

    By the time the thing hits me, it has come from some idiot whose machine is infected. This doesn't stop their machine, nor does it tarpit that machine.

    My own machine is not vulnerable, so it's already not spreading the crap. So what good would installing this thing really do?

    Ok..I'll go read the article now... :)

The most difficult thing in the world is to know how to do a thing and to watch someone else doing it wrong, without commenting. -- T.H. White
