Computer Immune Systems
LL writes "We might soon be seeing commercial delivery of autoimmune security systems. Rather than the surface bit pattern detections of antivirus checkers, these systems attempt to provoke virii in a secure area (IBM) or match network packets against signature tags (Forrest). The interesting plug is that the author suggests that large programs such as operating systems should be made in such a way that no two copies are exactly alike. Now guess what favourite beast has this trait?"
I believe that the point is... (Score:1)
Security holes are found in Linux distributions all the time.
This would allow early detection of problems like wuftp allowing root access. This early detection and the automatic downloading of the patches would allow system administrators to fix security holes before they became problems.
Re:Sexual Reproduction of Computer Virii (Score:1)
Computer code (including computer viruses) is binary characters and commands with specific and important symbolic meanings. To deal with sexual reproduction, one would have to deal with recombination of reproductive data - splicing, crossing over, etc. The problem is that code is too fragile! If you have a protein or genetic mutation, chances are it won't really adversely affect an organism (hey, most of our DNA is garbage introns that can afford to be corrupted); get one character or bit wrong in a virus's code and BAM! a non-working virus. IMHO, at this point in research computer code is too fragile/symbolically dependent to be treated like chemical molecules.
I forget which book mentioned it (either Richard Dawkins or Stuart Kauffman), but there is a criticality beyond which systems can no longer withstand point changes without catastrophic failure.
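Here's a rough illustration of that fragility (my own sketch, not from any of the research mentioned): flip a single random bit in a tiny piece of source code and count how often the result even compiles. The snippet and trial count are arbitrary.

```python
import random

SNIPPET = b"def add(a, b):\n    return a + b\n"

def flip_random_bit(data: bytes) -> bytes:
    i = random.randrange(len(data))          # pick a random byte
    mutated = bytearray(data)
    mutated[i] ^= 1 << random.randrange(8)   # flip one bit in it
    return bytes(mutated)

survivors = 0
TRIALS = 1000
for _ in range(TRIALS):
    try:
        compile(flip_random_bit(SNIPPET), "<mutant>", "exec")  # does it still parse?
        survivors += 1
    except (SyntaxError, ValueError):
        pass

print(f"{survivors}/{TRIALS} single-bit mutants still compile")
```

Most mutants die immediately, which is the point: symbolic code has nothing like the redundancy that lets DNA absorb point mutations.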
Respectfully,
Kevin Christie
kwchri@wm.edu
Re:Evolving viruses. (Score:1)
Er, no. The survival strategy for all living things is to reproduce. (Survival of the genes [code] is what governs an evolutionary process, not survival of the individual.) The way that biological viruses spread is by taking over cells and telling them "stop what you're doing and make me!" When your cells become too busy making a virus to do those nice cell things like respiring and keeping you alive, you die.
Computer viruses, if exposed to evolutionary pressures rather than being designed, would likely do the same - reproduce. The code would insert itself into a process and say "stop what you're doing and send out lots of copies of me to any other process you can!" When too many processes are busy making virus code instead of doing their job, you get a sick computer, even if the virus is "harmless" in terms of not being intended to make your computer crash.
Interestingly, automated immune systems for computers might have the same effect as our immune system does - making us feel sick. Most of our feeling of illness when we have low-grade viruses like colds is not due to anything the virus is doing to us yet, but to the energy lost fighting it and the side effects of our immune response, such as fevers. So with one of these systems, you would get a slowdown of sorts from the extra processing of the anti-virus software even though the virus might not (at that level) be causing any problems.
Just some thoughts from someone who knows biology.
-Kahuna Burger (can't remember my password at work.)
Re:Way off-topic, but... (Score:1)
I think virii has been pretty much accepted as a word, and as Mark Twain said "I have no respect for a man who can only spell a word one way."
~Chris Carlin
Re:Way off-topic, but... (Score:1)
Chris Carlin
The best defence... (Score:2)
As the viable attacks will be the ones which survive, those will be the ones distributed, copied and reused. Within a given timeframe, by creating a "super-defence", you -ALSO- create "super-virii".
The problem with any evolving system is that it will remain, over a long enough time-frame, roughly in balance. Nothing can become super-strong without in turn strengthening its opponents, by natural selection.
Only a "truly perfect" defence will work, but no such defence exists, or even theoretically could exist. This leaves you with the "best practical" approach, which is to make things as protected as reasonably practical, and no more.
This kind of approach has the advantage that you don't accelerate (too much) the development of super-bugs (as medical practices have an unfortunate tendency to do - idiots!) whilst offering a sensible level of protection against more common attackers.
Ideally, though, defences should do more than just defend. The more time you spend defending, the less time you have to do anything else. This, in itself, is a form of DoS attack on your system, via wetware rather than software, making the admins install so much protection that the system becomes unstable and/or unusable, under typical loads.
What you want is a form of defence which actually contributes to the rest of the system in other ways. That way, you are gaining overall by expending the resources, and don't run into the DoS trap.
Re:The best defence... (Score:2)
(This includes coined words, jargon, local dialects & local terms, regional spelling, national spelling, etc, ad nauseam.)
On top of that, I believe "virus" has a Latin root, which makes the plural "virii". This is distinct from a word such as "data", which is a plural and who's singular is datum.
Oh, and "rap" ain't music. It's noise with speech trying to drown it out in the folorn hope nobody'll notice how cruddy it is.
Not unique (Score:3)
Sure, you can play with the config or use patches or whatever, but a lot of the code will come out the same. It's not like the compiler puts some kind of unique fingerprint on the kernel you build.
axolotl
Evolutionary Speed (Score:2)
As much as I would absolutely love to fully envision the Net as a living, breathing organism...it isn't. There are aspects of biology that are appropriate, but I think it's fair to say that these researchers are presuming excessive organic/technical equivalence:
Technology is externally changed, quickly, and often within the same generation of machinery. Organics internally evolve, extremely slowly, and even then almost wholly reserve their changes for the next generation.
The fact that technology is externally changed means that there's no evolved internal consistency--the immune system must be explicitly modified to support the new transplant. As biology and technology have shown us, spooging the new into the old is difficult work. The speed of modifications too is frightening--while it's obvious that the host systems change much faster in a technological environment, I'd be interested in knowing the genetic variation of attacking bacteria and virii vs. the command variation of attacking trojans and computer viruses.
The generational woes are the killer--it is impossible to impose the biological concept of a "homeostatic self" on systems that never stay either frozen in the present or predictable in their growth towards any degree of future.
Now, granted: There are assuredly "all quiet" states on the average network, and recognizing such states is a common tactic of network monitoring systems. (Indeed, there's a free app out there that will generate a firewall config that will pass any traffic it noted on your network during a "trusted state" period, then block anything else.) But that's a rather blunt methodology, and denies the inevitable existence of new services. The big problem is: How does one respond to a deviation? The curse of unpredictability is the inability to automate appropriate responses. The curse of being forced to constantly formulate appropriate responses is that it's burdensome and prone to false positives. The curse of not formulating appropriate responses is that you end up not responding at all.
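To make the "trusted state" idea concrete, here's a toy sketch (mine, not the app I mentioned): record which protocol/port pairs showed up during the quiet window, allow exactly those, drop everything else. The observed list is made up for illustration.

```python
from collections import Counter

# Pretend these (protocol, destination port) pairs were sniffed during the
# trusted window; in real life you'd pull them off the wire with a sniffer.
observed = [("tcp", 22), ("tcp", 80), ("udp", 53), ("tcp", 80), ("tcp", 25)]

seen = Counter(observed)

print("# rules generated from trusted-state traffic")
for (proto, port), count in sorted(seen.items()):
    print(f"iptables -A INPUT -p {proto} --dport {port} -j ACCEPT  # seen {count}x")
print("iptables -A INPUT -j DROP  # anything not observed gets blocked")
```

Which shows exactly the bluntness I'm complaining about: the day you stand up a new service, the generated ruleset silently blocks it.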
I should be fair--I like what I'm hearing from these guys. I've been arguing for quite a while for systems that prevent the results of an instability from being necessarily exploitable (essentially, randomizing and shuffling systems so that there is no predictable "skeleton key" to the system that works every time). Their talk about monocultures is perfectly appropriate here. IBM's work with victim labs is beautiful, if not more than a bit macabre if back-ported to human biology. Even the packet signaturing is interesting. But we should be aware of the limitations of this technology, and I'm interested in just how aware these researchers are of the differences between the evolved and the created.
Yours Truly,
Dan Kaminsky
DoxPara Research
http://www.doxpara.com
Re:Evolutionary Speed (Score:2)
Gotta love the English language. Unlike, say, Spanish or French, there is no central committee which decides which words are valid and which ones aren't. While dictionaries and Trusted Newspapers take some of the responsibility, the general rule is rather democratic: If enough individuals use a given word to represent a consistent concept, and if that word is not a homonym of a word with a slightly different (and more standardized) spelling (their/thier/there), that word is considered coined and valid.
Remember, it is not the purpose of a dictionary to create the language, only to reflect it.
Altavista shows 8,496 usages of the unique word "virii". At bare minimum, "virii" qualifies as an alternative, non-misspelled variant of the word "viruses".
Don't play semantic games with me, AC
Yours Truly,
Dan Kaminsky
DoxPara Research
http://www.doxpara.com
Virus solution - better security models (Score:2)
For a classic virus to work, it must attach itself to an executable, and it spreads when that executable is run (modern email "virus" programs are often technically worms, not vir[ii|uses]). In Windows, this is easy, because the system directories (c:\Windows) are writable by regular users.
In Unix/Linux, the system directories where most binaries live (/usr/bin, /bin, /usr/local/bin and so on) are writable only by root.
If one were to write a Unix/Linux virus, the obvious target program would be
At best, a virus could affect user-owned binaries, say in ~/bin. But except for convenience scripts, who uses that? Anything widely used and standard goes into a directory protected from accidental or deliberate damage. That's just good practice.
If all operating systems followed Unix' wise example, vir[ii|uses] would be merely an interesting theoretical exercise, rather than a serious hazard.
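A quick way to see the point (my own sketch, purely illustrative): check which binary directories the current, unprivileged user can actually write to. On a sanely configured Unix box, only something like ~/bin shows up as writable.

```python
import os

candidates = ["/bin", "/usr/bin", "/usr/local/bin",
              os.path.expanduser("~/bin")]

for d in candidates:
    if not os.path.isdir(d):
        print(f"{d:20} (does not exist)")
        continue
    writable = os.access(d, os.W_OK)   # can this user write here?
    print(f"{d:20} writable by me: {writable}")
```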
---
120 chars is barely sufficient
not all life is ruthless (Score:1)
I suspect that once computer viruses start exhibiting evolutionary-like behavior they will behave just like their biological cousins: some reproducing at a frenetic pace, crippling and destroying everything in their wake; lots of dormant viruses stuck on the wrong sort of OS; and small viruses that reproduce at low rates and aren't malicious.
Just like life; variety.
Re:Evolving viruses. (Score:1)
solution to a problem. The internet is large enough an e-ecosystem to support millions of copies of a virus, so even if the survival rate of the variants produced by breeding and mutation was very low, there might be enough survivors each generation to evolve into a truly dangerous virus.
Except it wouldn't. What's a good survival strategy for a virus? Not to be detected, of course. What's a good way not to be detected? Don't do any noticeable harm.
An evolving virus (if it survived at all; real-world systems are a rather brittle environment for a-life organisms to survive in the wild) would very quickly become very small, very prolific, and completely harmless.
Re:The best defence... (Score:1)
alt.comp.virus FAQ [wisc.edu]
There's another page with some serious analysis of the Latin words right here [perl.com]
Basic Flaw (Score:2)
The human immune system knows what it should find, and anything else is an invader. Computers aren't like this; they change all the time - installing programs, writing files. You can't just, I don't know, look for a different electron on the hard disk?
The only thing I can come up with is that the anti-virus package CRCs every non-data/document file as it hits the hard drive; then if the file is modified, I guess it might have a virus on its hands (or it could just be a valid patch). But in that instance, it would be better for all the base systems in a network to be identical, rather than each one being slightly different - that way you could recognise a difference in one as a potential virus...
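For what it's worth, here's a bare-bones sketch of that checksum idea (my own, using SHA-256 rather than a CRC; the baseline filename is just a placeholder). Snapshot the hashes while the machine is known clean, then flag anything that changes--and note that it can't tell a virus from a legitimate patch, which is exactly the weakness I mean.

```python
import hashlib
import json
import os
import sys

def hash_file(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def snapshot(root, out="baseline.json"):
    """Record a hash for every file under root while the system is trusted."""
    baseline = {}
    for dirpath, _, names in os.walk(root):
        for name in names:
            p = os.path.join(dirpath, name)
            if os.path.isfile(p):
                baseline[p] = hash_file(p)
    with open(out, "w") as f:
        json.dump(baseline, f, indent=2)

def verify(out="baseline.json"):
    """Report anything that changed since the snapshot."""
    with open(out) as f:
        baseline = json.load(f)
    for p, digest in baseline.items():
        if not os.path.exists(p):
            print("MISSING:", p)
        elif hash_file(p) != digest:
            print("CHANGED:", p)   # a virus? or just a valid patch?

if __name__ == "__main__":
    if len(sys.argv) == 3 and sys.argv[1] == "snapshot":
        snapshot(sys.argv[2])
    else:
        verify()
```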
Perfect security is impossible (Score:1)
It all comes back to Godel's Incompleteness Theorem (which is a proof that you can't prove everything).
The consequence is essentially that any sufficiently powerful computer system cannot be made virus/cracker proof. No matter how good your AV software and how tight your security procedures, unless you limit the power of your machine (not how fast it runs, but what sorts of things it can do) you cannot ensure your security.
I've decided that it's really not worth the bother to run a totally secure system, and I don't even run a virus scanner anymore.
Before everybody jumps on my back, I'd better clarify "sufficiently powerful". You could say that a machine that is stored in a locked room w/out any connection to any external network, and that requires a swipecard and a 128-byte password to access, has perfect security. But such a machine is not "sufficiently powerful" to be crackable. It is less powerful than my pokey ol' 486, because my 486 can connect to the internet. If I wanted, I could set it up as a web server. But a machine in a locked room can't do this, and therefore is less powerful.
Even if you have an internet connection, if it refuses all external connections and is behind a good firewall, it may be impossible to break into. But again, it is less powerful than any web server, even one that just displays static pages. It is once you cross a certain threshold of usability that it becomes "sufficiently powerful". If you have an open telnet port and 1 user account, it is probably "sufficiently powerful". That's not a very high threshold.
The threshold for virii is even lower than for cracking. If I want to run outside software, I have to expose myself to virii. If I have good AV software installed and running, I may be able to detect all virii. But then I can't run any programs that appear sufficiently virus-like, because the AV software will flag them, and Godel's Incompleteness theorem shows that if my software catches all virii, it MUST catch some non-virii.
So, security is an impossible goal.
It's still pretty cool to have AV software that automatically looks for 'new' virii though.
Re:immune system analogy flawed (Score:2)
The immune system was successful initially because it could very quickly generate new defense mechanisms that pathogens would take some time to adapt to through evolutionary mechanisms.
Even so, after many millions of years of evolution, there are now numerous pathogens that simply aren't touched by the immune system at all; the only reason why those pathogens haven't wiped us out is because natural pathogens don't have malicious intent, and most of them have co-evolved to co-exist with us.
When it comes to computer viruses, the insight to be concerned about is the insight of the virus writer. Unlike the biological world, where pathogens need to spend millions of years of evolution to figure out general mechanisms for avoiding the immune system, a virus writer can come up with a general purpose strategy for evading a "computer immune system" within days.
If you want secure systems, in a world of human adversaries, the only way to build them is so that they are structurally secure or cryptographically secure, and those are engineering problems that are very different from what biological systems have faced until now.
(As an aside, the next step of evolution of biological pathogens may be interesting. The immune system got us quite far, but it is growing old as a defense mechanism as pathogens have found general purpose ways of evading it. Perhaps its successor is our brain, as we design drugs and treatments rationally. It will be interesting to see how the pathogens will respond.)
immune system analogy flawed (Score:3)
Perhaps the biggest point of departure is that biological systems are evolutionary, while computer systems are designed by humans, with knowledge of the possible countermeasures. That means that many immune system strategies just won't translate.
But even more important is perhaps the observation that most biological systems (even plants and most animals) don't even have immune systems. They rely on other mechanisms for their defense, mechanisms that many engineers would probably consider "good engineering": make it hard for the viruses to get in, destroy viruses that do get in, minimize the effects of infection if it does occur, stop the spread of infection with various barriers, and have lots of redundancy. The evolutionary pressures for some animals to develop immune systems probably simply don't exist for computer systems.
So, if you want to push the biology analogy, it may well be better to do without an immune system and to simply design good, strong systems.
CyberLupus? (Score:2)
Interesting. So if you have one of these AV systems in place, and apply a binary patch to some code (a la id's DOOM patches), your changes will get clobbered. Makes sense, and I can see why it would - the checksums and size changed, after all. But what you're saying is that this AV system could one day decide (or be prodded into) going after stable, unmodified code - having seen it as infected?
As for CyberAIDS, I recall something from circa MS-DOS 5.0/6.0. I'd heard of a virus, aptly named CyberAIDS, which would do nothing more than disable your antivirus software. I don't know specifics, but it was interesting to me that it would trash NortonAV, CentralPoint, whatever, leaving you wide open to conventional bugs. I think (IIRC) that it would leave the TSR running, but disabled. Cold.
Way off-topic, but... (Score:3)
A few years ago, Wired (before they lost their edge) ran a pseudo-retrospective issue from the future, in which they reviewed the turn of the millennium from a few decades ahead. It was a pretty neat diversion. Anyhoo...
One of the main articles dealt with 'The Plague', a super-flu/AIDS/Ebola mutation that threatened to wipe out humanity. (It's striking how biologically apropos the computer virus analogy is, and how well it tracks with real-life problems, solutions and the latest computer developments.) The article was written in retrospect, like the whole issue, and in the form of an interview with one of the top researchers involved in stopping the disease.
The truly neat thing about the story, and what keeps me remembering it, was that the disease was cracked not by medically traditional means but by a mathematician who found a way of attacking the geometric form of the virus. I don't know how unconventional this approach is in virology, but the cross-pollination of medicine and math really struck me.
I'm a very strong believer in gestalt thinking, and in the fact that laws of nature from one field map remarkably well onto seemingly unrelated fields. Take Newton's Laws of Motion, abstract a bit and apply to sociology. Action-reaction. The Law of Entropy seems to hold true when placed in the context of politics.
This is why the article resonated with me, and why the topic of evolving virii triggered me to go OT about memetic cross-breeding.
Re:Evolving viruses. (Score:2)
Perhaps scan the filesystem for email addresses frequently sent to, and send melissa-style mailings to them? Maybe search for common email programs, and infect them?
USUALLY Not unique (Score:1)
You're right about people using the same binaries.
But it will optimise differently if optimised for a 486 vs. a P2. That's not very wide variation.
Also, some people (like myself) recompile to pick what will be drivers, what will be in the kernel and what won't be supported at all - that is, conformed to needs, preferences and hardware.
Also, it doesn't stop with the kernel. Different libraries may be used; even the core library can be compiled differently. The system configuration, etc. - it all changes the defects in the system. A virus might be made to infect Linux, but the defect the virus uses may have been swapped out or may never have been installed.
Not particularly new (Score:2)
The problem and the solution (Score:2)
Yes, but the problem contains within it its own solution. Viruses evolve. So systems must also evolve. There will never be a perfectly secure system... for long. But neither will the most harmful viruses remain viable for long. Tremendous forces (unstoppable forces?) are quickly mobilized against them. The writers of malicious viruses are clever, but I doubt that they're as clever as the combined cleverness of all those who work to stop malicious viruses from doing their damage.
Only a "truly perfect" defence will work, but no such defence exists, or even theoretically could exist. This leaves you with the "best practical" approach, which is to make things as protected as reasonably practical, and no more.
Viruses, as they evolve, can be expected to arrive at the "most practical" approach, rather than the most damaging. Over time, this would lead to the evolution of stealthy viruses that do little or no harm to the systems they infect, use minimal resources, and may even offer some benefit (f'rinstance cool graphics, greater efficiency, protection against other viruses). A "most practical" virus-proofing scheme would not waste its time with these benign viruses, which would drive the evolution of ever more benign viruses.
Evolving viruses. (Score:3)
--
It's October 6th. Where's W2K? Over the horizon again, eh?
Re:Details on Forrest's research (Score:2)
The suggestion that no two operating systems are to be exactly alike is also an interesting one, but hardly practical. First of all, most security holes occur in applications, not operating systems per se. The dangers of monoculture are real, but purposefully avoiding popular software (1) leads to suboptimal solutions to problems (do you want to avoid Apache just because it is the most popular web server?); and (2) strongly smells of security through obscurity. Besides, think of technical support nightmares: does anybody really want to support hundreds and thousands of "slightly different" operating systems?
I feel that the biological metaphors are somewhat overblown and could be misleading. On the other hand, the journalists like them...
Kaa
Re:Details on Forrest's research (Score:2)
The question wasn't kernel fingerprinting. Basically, it's the same old argument: if 90% of the world's computers run Windows, then a single flaw in Windows makes 90% of the world's computers vulnerable. As far as I understood, Forrest was arguing for internal differences in operating systems that would confuse a virus, or a root kit. Checksums are irrelevant here.
Kaa
Re: Artificial this, artifical that (Score:2)
"Any significant advance in technology is indistinquishable from magic."
That someone was Arthur Clark, and I belive the correct quote is "Any sufficiently advanced technology is indistinguishable from magic".
If you put a caveman in front of an Imac, he's going to insist it's a deity
Until he finds a heavy blunt object.
Kaa
sounds nice ..... / (Score:1)
but can or will it actually work when put to the test?
Re:immune system analogy flawed (Score:1)
Why did they do that? Where was the foresight there? Because of the bottom line: There was no immediate need for such measures, and it would've cost money and resources and time-to-market to put them in.
Responding quickly and cheaply to only the immediate needs is a hallmark of both evolutionary systems and market-driven systems.
In the big picture, human foresight is often a good deal less important than we usually think it is. My best working hypothesis is that human ingenuity and intentionality essentially only accelerate what is undeniably an evolutionary system.
Computer AIDS? (Score:3)
So the idea is to increase security in a number of ways including (but not limited to) having each copy of the OS be unique, and having the AV package put the subject in a box and taunt it. (For those of you who haven't seen it, now's a good time to watch that Monty Python "Holy Grail" movie.)
So how strong are the odds that such methods could inadvertently result in some sort of computer auto-immune disorder? Could our anti-virals manage to interpret the kernel as a virulent entity to be removed? Or, are we all just too smart (or lucky) for that to happen?
"Una piccola canzone, un piccolo ballo, poco seltzer giù i vostri pantaloni."
Re: Artificial this, artifical that (Score:1)
Artificial means 'made by human hands'; it is cognate to artifice. It has acquired a negative connotation over the years as artificial flavours and products have been created, but it still retains some of its old splendour.
You make a good point, though: is an AI an intelligence? If it is, then 'artificial intelligence' is the appropriate term. OTOH, if it is not, if it is merely a program which aids a human (even in the absence of said human), then it is more properly called an 'automated intelligence,' as you point out.
The one is the strong AI position, the other the weak AI position. Having just spent a semester working on AI, I must say that I consider the strong AI position bollocks, for all sorts of philosophical, mathematical and practical reasons.
Perhaps I will start calling it 'automated intelligence.'
Re:not all life is ruthless (Score:1)
itachi
Nothing is foolproof (Score:3)
However, like our bodily immune systems, these systems could serve as a first line of defense. Their advantage lies not so much in that they are universal proof against infection (they aren't), but in that against "routine" infections they shut the virus down before it has the opportunity to do any real damage, far faster than would be possible if human intervention were required. Inevitably, some infections will slip through (just as with biological immune systems), and when that happens you need outside intervention; i.e., the computer equivalent of a trip to the doctor's office.
-r
Re:Virus solution - better security models (Score:1)
Re:Evolving viruses. (Score:3)
Some viruses are actually pairs of viruses, which, when they find each other (both infect the same file or piece of memory, etc.), will join and/or manifest some new behavior (start their payload).
Very interesting stuff actually. It's too bad that malicious virus writers have tainted the whole topic. Self-replicating, autonomous programs are very interesting.
Jazilla.org - the Java Mozilla [sourceforge.net]
Re:Viruses / Virii (Score:1)
dictionary.com [dictionary.com]
Now, IANALS (I am not a linguistics scholar), but isn't virus(the computer term) a homonym for virus(the biology term) in the same way that bark(the tree skin) is a homonym for bark(the sound a dog makes)?
If this is true, Virus(computer) is most likely an English word, and no official linguistic rules have been made for it.
The beauty of the English language is that we are free to modify it to suit our needs. It's adaptable, and if we feel like spelling the plural of virus, virii, viruses or vira, it should be accepted.
The way I see it, in biology, it's unlikely to see one viral cell. Virus seems like it would be plural already. I'm probably totally wrong in this paragraph.
I've read the articles you point to, and understand them. This is definitely not meant as a flame, but aren't there more important things to worry about than how we spell the plural of virus?
Re:Details on Forrest's research (Score:1)
*AHEM* Windows?
Maybe not in the *nix (or bsd
Re:Viruses / Virii (Score:2)
I implore you, Mr Penguin, to read this FMTEYEWTK [perl.com] on the matter. Latin just didn't work the way you claim that it did, and neither does English.
Re:!? (Was: Re:Viruses / Virii) (Score:2)
Not all nouns that ended in -us became -i in the nominative plural. Only second declension masculine nouns did so. There are several (I can think of three) other flavors of -us nouns, none of which follows that rule.
So virus fails to follow the focus/foci rule for at least three different reasons:
Re:The best defence... (Score:2)
I can see you haven't read the other postings here lately. You see, your simplified view really was not how Latin worked. Here's the short story [slashdot.org] from today, and here's the long one [perl.com] from some time ago. Thank goodness we don't have to remember all those rules in English!
I find it painfully but amusingly ironic that you should have used who's improperly in the cited passage above. You need the relative pronoun to be in the genitive case--to wit, whose. I believe this falls under the category of throwing stones in glass houses. :-)
Re:Virus solution - better security models (Score:4)
--tom
_______________________________
No, it's really far more complex than that.
You are correct that it is no mean trick to write a program that can damage the system it runs on, largely irrespective of what kind of system we're talking about. And so long as you can hoodwink some unwitting user into executing that program on their system, that program can, of course, cause damages commensurate with the privileges and capabilities of that user.
What you've failed to consider is how the dramatic cultural differences between Unix and the much-maligned consumerist toys serve to affect the issue to our benefit and their detriment.
Probably the most important of these cultural differences is that Unix has historically been a source-only world. Programs are distributed in the form of source code, code which shall be configured, built, and ultimately installed on the target machine. Programs solely accessible in machine language form fall immediately under a taint of mistrust.
Think back to the last time you read a notice from someone whom you've never heard of before that was asking you to go fetch some random binary program from some random place on the net and then to run that program under full sysadmin privileges? I can already see the incredulous Unix sysadmin reading that and bursting out in uncontrollable guffaws. Because the de facto standard for program interchange in Unix is as source code, a Unix programmer will be far less likely to fall for your ploy than would your average Prisoner of Bill, who has been lulled into gullibility by a binary-only culture.
But for the sake of the argument, let's say that you've found a way to effect this trick. Suppose you're an employee of some reasonably respected company that happens to produce a binary-only distribution of their commercial software, and you decide to sneak something wicked into the binary image. You manage to replace the standard, clean copy on your company's ftp or http server, or even floppies or CDs, with your own naughty version. People are accustomed to downloading from your company, or using your company's floppies, so they do as they've always done, run the installation as the superuser, and you thereby have your way with their system.
If this scenario were to play out, just how dangerous--how destructive--could it really prove? Whom could you harm, and who would be immune to your ploy? The answer is that you could only hurt those folks running the exact platform for which your binary had been compiled, and everybody else is unassailable. By platform, I mean the whole feature vector that includes processor chip (e.g. Sparc vs Intel), operating system (e.g. SGI vs BSD), shared libraries (e.g. libc vs glibc), and site-specific configuration (e.g. shadowed vs non-shadowed password files).
Let's not get too full of ourselves and pretend that the Unix culture's predilection for source-only program distribution derives only, or even mainly, from altruism. We have no choice in this matter. If you're on Unix and you don't have the source, then you can't run the program on all your diverse systems. And if Unix programmers do not provide source, they cannot hope to have their program as widely used as it would otherwise be.
Consumer-targeted systems from Microsoft or Apple are two instances of a static monoculture, as vulnerable to mayhem as a field of cloned sweet corn. It only takes one genetically engineered virus to bring down the whole field. Unix is different.
In his acclaimed essay, In The Beginning, Neal Stephenson writes:
There is no one thing called Unix. Instead, Unix comprises a diverse set of subtly (and often not so subtly) variant platforms. A nefarious binary laced with exquisitely designed evil bullets hidden inside it can hurt only a few of us. When Apple and Microsoft laugh at our diversity, be sure to remind them that it is their lack of the same that contributes to their incredible vulnerability--and to our strength. Hybrid vigor ultimately wins out over a monoculture, for the latter is too in-bred and fragile to prove long viable.

Let me now return to your particular suggestion, that of a malignant Perl program activated by a Makefile rule at installation time. Because you're talking source code, and because Perl tries rather hard to attain a high level of cross-platform intercompatibility, this form of subterfuge would appear exempt from the inherent protections stemming from diversity in variant Unix platforms. So, could your trick be done? How much of a problem could this really be? What might happen?
The answer is that of course, it could be done. And in point of fact, a demonstration model is already available, courtesy of Abigail. Guess what? There's no reason to run around like a chicken with its head cut off: the sky isn't falling. This sort of approach stands little chance of making a big splash, because you aren't going to insinuate it into a place that can affect a lot of people. Sure, you might catch a few folks, but just how long do you think this kind of thing will go unnoticed? Remember, it's in source code. That means anybody who wonders what happened can just look at it. There's a very low barrier to entry. And even if the naughtiness removes itself from your copy once its dirty deeds are done, that naughtiness is still sitting there in plain view for easy inspection back wherever you got your copy from.
Is there a way around this? Well, yes, if you're as clever as Ken Thompson. Fortunately, you aren't, and neither are the crackers. If they were, they'd doubtless receive more Turing Awards for their vaunted efforts. :-)
The only way you're going to get good propagation is if you get your nastiness into a copy that a lot of people will download and install. There's a very fine reason why so many archives contain a checksum of the image. It's to help with this problem. Security of course depends on several matters, including the strength of the algorithm and the integrity of the authenticating agent. But better that than nothing.
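The habit is cheap to practice. As a sketch (mine, not any particular tool; the filenames and digest choice are placeholders), compare a downloaded archive against a digest published somewhere you trust, before you ever run its installer:

```python
import hashlib
import sys

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# usage: verify.py foo-1.2.tar.gz <digest copied from a trusted source>
tarball, published = sys.argv[1], sys.argv[2].strip().lower()

if sha256_of(tarball) == published:
    print("checksum matches; go ahead and build")
else:
    print("checksum MISMATCH; do not install")
    sys.exit(1)
```

As said above, this only moves the trust to whoever publishes the digest and to the strength of the hash. But better that than nothing.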
Let's talk about propagation some more. I assume that the goal is to have a notable impact, which means you need to spread your bad code as widely as possible. A hacked-up install script, even if all goes to your liking, just doesn't have a very high rate of reproduction. First of all, how often do how many people install this software? Secondly, how do you plan to trick them into doing so? It's not really much of a challenge to get one person to do this, especially if they trust you. If that's your goal, maybe you'll succeed. But the risk of being traced and apprehended is high.
So how come this stuff can spread like wildfire amongst the OS-challenged? Can't whatever mechanism that's used there be used to get at the rest of us, too?
Over the last few years, a frighteningly frequent conduit of contagion for viral infection on toy systems has been the implicit, automatic execution of code with little or no manual intervention on the part of the box's owner. DOWN THIS PATH LIES MADNESS! That this can ever, ever happen is as plain a symptom of complete and total cretinization in the toybox world as you are ever going to see. It's stupid, it's crazy, and it's dangerous. Any programmer who even suggests it needs to go back to flipping hamburgers. Any user who asks for this feature needs to be quietly taken into the back room by the doleful men in long trenchcoats, where he will be told in no uncertain terms that his request is not only in the best interest of no one but criminals, but that he also now has a permanent record even for asking about it.
No, I don't care that a customer asked for it. Customers are idiots, just like any other user. So what if they pay you? They're still idiots, and it's your professional responsibility to act responsibly, to refuse to go along with their madnesses. The customer is not always right. In fact, they're very often wrong. A physician or a lawyer doesn't do whatever the customer requests, and neither do you. They, meaning the customers or users, simply don't have the background and training; they don't have the experience of seeing why automatic execution from untrustable source is the work of the Devil.
It's not as though we in Unix have never seen this issue before. In fact, we've seen it time and time again. And guess what? We recognized the problem and we addressed it. And we don't cater to that kind of lunacy anymore.
Here are a few concrete examples.
Remember when vi would--or at least, could--automatically execute macro commands embedded in a file in a specific way? That was a dubious feature called modelines. On my OpenBSD systems, if I type :set modeline, the program comes back and says set: the modeline option may never be turned on.
Another example of learning from our mistakes is the issue of shell archives. Instead of automatically running the sharfile through /bin/sh, there are specially made unshar programs that will do the common things, safely, and nothing else.
When CGI was first getting big, owners of toy systems would blindly install compilers and interpreters in such a way that these would easily execute arbitrary content coming in off the wire. Despite my pleas, both Netscape and Microsoft were actually advocating this! After a year of warning admins not to do this, and sending mail to the companies who were saying to just go ahead, nothing changed. So I released latro [perl.com]. Then and only then did various companies retract their suggestions, even though they'd been aware of the nature of the problem for a long, long time. Sure, you could be equally stupid on Unix, but for some reason, we weren't. History counts.
Implicit execution of untrusted material is simply stupid beyond words. And for some reason, the toybox people keep falling for the same chump moves, from MIME attachments to word processor and spreadsheet macros to embedded active scripting controls. I don't know quite why they just keep doing this crap. My hunch, and it's only a hunch, is that this is happening because Microsoft and their moronic minions simply cannot for all the tea in China ever manage to think outside of their quaint but completely fictional little single-user universe. Maybe they don't hire people who come from a background in multiuser and/or networked computing systems. Maybe they don't hire people with real experience at all, just script-kiddies trying to make a buck legitimately but with no true understanding. Maybe the software makers simply can't say no to a customer request, no matter how suicidal they know that request to be. I don't know.
Whatever the cause, decades of history are completely and repeatedly ignored. They keep making the same mistakes, and they don't fix the underlying causes. Sure, there are things that are hard. Denial of service attacks are hard. People who know exactly all the ramifications of IP and go sending maliciously hand-crafted packets aren't much fun either.
But these highly technical ploys aren't why most folks on their toyboxes are being screwed up, down, left, right, and sideways. They're being screwed because of very simple matters. They don't have the notion of a protected execution mode. They don't have file permissions or memory protections. They automatically execute content willy-nilly, often with complete access to the whole machine. They expect a program to show up in binary, not source, form. They don't compare robust checksums from strongly authenticated sources. They live in an infinitely vulnerable monoculture. They expect things to just magically happen for them without a thought or a care, and guess what? Their wishes are duly granted, much to their eventual dismay.
It is possible that mass-market factors may someday end up plaguing Unix systems in ways not so far removed from the stupidities that the toy boxes are riddled with. We just have to tell them no, and to condemn in the strongest and loudest possible terms any backsliding into insecurities that, if we ever had them, we long ago banished. Look at the Winix phenomenon, in which a dozen different vendors put together and ship their own Linux operating systems, all specifically constructed to be user-obsequious and Unix-hostile, all in order to appease the lowered expectations of a hundred million Windows idiots, who, despite their numbers, really can still be wrong. The stupidity of the masses must never be underestimated.
Re:Nothing is foolproof (Score:1)
Actually, though, I was fascinated by thinking about potential active, practical security implications. At least in one respect of one particular example given in the original article.
It seems quite strange when put in a broader context that (to torture it all out a bit) it's amazing computers on the Web last as long as they do. They typically have 'doctors' (sysadmins) on hand keeping an eye on them, but they don't have their own immune systems. From which it follows pretty reasonably that they're either 'immunised' (set up and patched properly) or tend to get thoroughly compromised as soon as someone finds a hole.
A topic for another time: This is (generally) about developing software immune systems. How long before corresponding software pathogens and other marauders are developed that meander about the Web of their own volition looking for victims? Beyond the current implementation of viruses, how quickly do we expect a proper, unattended software arms race to creep out across the Web?
Artificial Immune System (Score:2)
But if you have rough idea what's on the network you're trying to attack, and what hosts are on there, you may well have a good idea of roughly what kind of traffic is going about. If you know what hosts are there and have an idea of what traffic is (probably) there, then why not just bury a false ID somewhere in your packet?
You could attempt to forge an ID from knowledge of the network, and fool the alarm mechanism by effectively masquerading as normal traffic. This is probably preventable by looking at exactly where the ID occurs in the packet and deciding if that's where it should be.
Beyond that, though, what's to stop you quietly trickling a normal-looking flow of do-nothing packets through the network to a given port on a given host? Then when a detector is generated, it'll trigger on your harmless packets and get ditched. Then one day you make your packets do something nefarious, and they get overlooked, something like 'friendly fire'.
My half-cent worth (Score:1)
After reading the article I would say that I much prefer Dr. Forrest's approach. It is an internal defense and does not rely on outside resources. I definitely do not like the idea of my system automatically sending and receiving files without my knowledge. It puts the integrity of my system into the hands of this "central" virus authority.
But the API is the SAME (Score:1)
Dr. Dobbs Dec/99 has an article by Bruce Schneier on Attack Trees. For those interested, it discusses one methodology of breaking security.
Cheers
Re:What favorite beast has this trait? (Score:1)
The big difference I notice between humans and Linux is the extent of the differences in individuality. Yes, I can set up a Linux machine with a different configuration, but that is a far cry from the extent to which my DNA differs from your DNA. We're not able to (yet) reconfigure ourselves; we are a fixed individual with an individual blueprint. We can only add to our defensive (autoimmune) network - gain experience fighting disease, if you will...
Linux configurations (of the same distribution) all have the *ability* to be identical. Linux machines all stem from one set configuration and only begin to act differently based on external stimulus. There is a finite extent to the changes that can be made.
As far as evolving operating systems, I will agree that Linux is the closest to that - with the user getting the ability to choose what patches, updates and fixes they wish to rebuild their kernel with. But it is still driven by a person.
There was an earlier thread about your OS getting updates on its own. This too would only be a limited representation of DNA. The true extent of AI required for a software autoimmune system would be one that sees what you use, checks to see where your system is vulnerable or not satisfying your needs, looks to see what patches/fixes/upgrades exist, considers what other problems those cause, performs some limited impact study to see how badly it would affect you, and then, based on that, grabs the patches and "mutates" itself for your benefit.
Whoa, that's kinda neat when you (or I at least) think about it...
Anybody got the foggiest idea of how to even start coding that... (well, other than #include <stdio.h>)
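For whatever it's worth, here's the barest skeleton of that loop (every function is a hypothetical stub I made up; the hard part is obviously filling them in):

```python
def installed_software():
    """'Sees what you use' - in reality, query the package database."""
    return ["kernel-2.2.12", "wu-ftpd-2.5.0"]

def vulnerable(packages):
    """'Checks where your system is vulnerable' - toy rule for illustration."""
    return [p for p in packages if p.startswith("wu-ftpd")]

def candidate_patches(package):
    """'Looks to see what patches/fixes/upgrades exist' - stub."""
    return [package + "-patched"]

def impact_acceptable(patch):
    """'Limited impact study' - stub; a real system would test in a sandbox."""
    return True

def apply_patch(patch):
    """'Mutates itself for your benefit' - stub."""
    print("applying", patch)

for package in vulnerable(installed_software()):
    for patch in candidate_patches(package):
        if impact_acceptable(patch):
            apply_patch(patch)
```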
Doesn't even pass my BS detector (Score:1)
From the article:
49 bits would hold a single IP address (32 bits), a single TCP or UDP port number (16 bits), and one additional bit. They claim that it's holding two IP addresses and one port. (80 bits). I don't even know what to say about the fact that it's holding only one, and not both port numbers. The article says "stringing together", so they're not generating a hash. I could do a lot of speculation as to what they're really putting in those 49 bits, since the article is obviously not correct, but I won't bother. For all I know, the 49 bits figure could be wrong as well.
So then, they compare these packets with a pool of random 49-bit numbers ("detectors"). 12 contiguous bits in common, and they throw the detector away. A detector must last for two days against this to be actually used. Let's look for ways to prevent any new detectors from ever being used. First, random chance. If there's enough traffic to make such "advanced" software necessary, every sequence of 12 bits will probably occur over the course of two days. Different port numbers (whether they save source or destination doesn't matter, because there will be traffic in both directions). Different IP addresses on either the remote or local network. An attacker purposely causing this to happen. 4096 consecutive legitimate connections from a machine that allocates its ports sequentially and isn't connecting to any other machines in that time. (SMTP, FTP, and HTTP could easily cause this. IRC could with an auto-reconnecting client that keeps getting disconnected.)
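For anyone who wants to poke at the numbers, here's the matching rule as I read it from the article (a sketch of my own: 49-bit strings, r = 12 contiguous agreeing bits; the field layout is the article's claim, not mine):

```python
import random

NBITS = 49   # bits per detector / packet string, per the article
R = 12       # contiguous agreeing bits required for a "match"

def matches(a: int, b: int, r: int = R, nbits: int = NBITS) -> bool:
    """True if a and b agree in at least r contiguous bit positions."""
    agree = ~(a ^ b) & ((1 << nbits) - 1)   # 1 wherever the bits are equal
    run = 0
    for i in range(nbits):
        if (agree >> i) & 1:
            run += 1
            if run >= r:
                return True
        else:
            run = 0
    return False

# A fresh random detector against a couple of days' worth of random "traffic":
detector = random.getrandbits(NBITS)
traffic = (random.getrandbits(NBITS) for _ in range(100000))
doomed = any(matches(detector, pkt) for pkt in traffic)
print("detector discarded" if doomed else "detector survived")
```

Each random packet has only a small chance of matching, but over any realistic two-day traffic volume almost no detector survives, which is the complaint above.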
Let's say a detector manages to get by (maybe their network connection is down for a couple days). Let's see what happens next:
They don't say what a match is. A full match? That's worthless. They're probably using the same threshold, which leaves the same problems with false alarms.
Oh well. It's a really cute idea, as long as you don't throw any facts at it.
Re:Nothing is foolproof (Score:1)
1) Check Bugtraq.
2) pull the patch down to evaluate.
3) deploy the security patch.
That is an interesting proposition if you consider the following example: Joe Friendly Hacker spends days chugging Mt. Dew and pizza while sniffing around another system, sticking all kinds of things into every port he can find. He writes and rewrites code to exploit a small and obscure security hole and gets root.
Joe Friendly celebrates his successful hack with an 'Xtra large meatsa trio' that night, and looks for another network to slip into. Much to his astonishment, every other network has automagically deployed a patch for the exploit he so painfully spent days to find, and further exploitation of that security hole is prevented.
To quote a military tactic "speed kills".
By automating the reporting/testing/fixing/deploying process of keeping up with holes, our Joe Friendly Hacker may indeed pull off one or two successful breaches, but not too many after that.
This does, however, shift the hacking from being directed at a network to being directed at the reporting/testing/fixing/deploying system that everyone is using.
_________________________
Re: Artificial this, artifical that (Score:2)
"Any significant advance in technology is indistinquishable from magic."
If you are shown a card trick, it's 'AI' until you're shown how it's done. If you put a caveman in front of an Imac, he's going to insist it's a deity. Thus, Any AI system (and I may be going out on a limb here by using the term ANY) is also an AI system, untill you read and understand the source code.
Now understand that automating a mundane decision process is what has made automation (in it's current industrial application) such a productivity booster. Afordabley automating physical processes (robots that weld car frames, robots that paint, ect.) has taken decads to come on-line, and continues to evolve. On this same liniage, Automating a decision process (i.e. automated trading systems) can and will also reap huge productivity rewards.
I would agree with you that it truly is automation at work here, and there's nothing artificial about it. Programers work long and hard to coax the code into doing what they want it to do.
_________________________
Re:Artificial this, artifical that (Score:3)
IMNSHO, this term is very overused. Any time a system goes live on a network, it's deemed to be somehow "alive" by putting an Artificial in front of it. A good example of this was when IBM's Deep Blue beat a grand master at chess (Kasparov); it was hyped as a "giant leap forward for Artificial Intelligence".
There's nothing artificial about it. It was the result of many of the greatest programmers and chess masters toiling for years to pull the project off.
Its more accurate name would be Automated Intelligence.
And this 'Artificial Immune System' is also just an automated series of self-updating decisions. Taking the human out of the loop doesn't make it artificial, it just makes it more cost effective.
_________________________
Details on Forrest's research (Score:3)
Re:Evolutionary Speed (Score:1)
Yes, there are some things which are wrong, but "its" and "it's" are two separate words, as opposed to your claim that "virii" isn't a word and therefore should not be used.
There are many "un-official" words.
The deknez has you, ulna ek tuln.
Re:Basic Flaw (Score:1)
Some packages will monitor for writes to files (such as system files) that the package feels a binary has no business writing to, and put up a "permit or deny" dialog. This isn't exactly an attempt to detect viruses, but it is an attempt to detect and stop viral reproduction, even for unknown viruses. [No, I am not going to get into the Plural Wars.]
Carrying the metaphor too far (Score:1)
Put simply, a computer virus is not a living organism in the usual sense. It does not "mutate". (The time you would have to wait for a computer virus to evolve by pure chance is far greater than the lifetime of the universe.) It does not reproduce sexually or asexually.
Moreover, computer operating systems and their virii have not even scratched the surface of the incredible variety and complexity of the immune system of human beings.
You could probably compare the state of computer virii and AV software today to bacteria methylating their own DNA to protect it from restriction enzymes that instead attack foreign DNA (read: virus material).
The best that these AV programs can do today is look for signatures or activity of *known* viruses.
"Taunting" a virus to trigger in a protected space only works if you know the virus phenotype in the first place.
Scanning network packets seems to be an expensive and legally tricky proposition, since most virii will be inside binary files, which means you not only have to look for MIME data inside packets, but decode them too, which involves a whole other security issue altogether. And then you will only catch the virus that you have information on, that you already know about.
Not not unique (the big picture) (Score:2)
I don't think that's what the author means. I think that he's talking about other common components, like web browsers and email clients, which is what most modern viruses exploit.
At the moment a virus author can make huge assumptions - like, it's a Win32 OS with Outlook and Winsock - and use small exploits in each of them to spread the virus.
The Linux kernel may be mostly the same across most installs of a popular distribution, but the differences stack up when you consider all the other permutations of mail client and server, HTML renderer/HTTP server, Java VM, etc, etc; it becomes very hard to create a virus that will work with them all!
Thad
Re:But the API is the SAME (Score:1)
Sorry Spidy, but I'd have to say your OT rant is a little OT itself.
Immune Security Systems Research - (Score:1)
!? (Was: Re:Viruses / Virii) (Score:1)
Furthermore, your Japanese seems as odd as your English. "Watakushi" doesn't mean "I" in the Japanese I learned. It's "watashi", and the plural is "watashitachi" - Watashi no namae wa "RFC959" desu; watashitachi wa kohii nominagara - unless you speak some dialect unlike the Tokyo Japanese I learned.
Viruses / Virii (Score:1)
Both plurals are used: viruses is more common, but in scientific circles virii is used.
It is one of those things like formulas/formulae.
If you want to be proper, this also holds true with Japanese words. I am learning Japanese and one of the interesting things is that for most words there is no plural form, so for kimono, "kimonos" is incorrect. There is a plural for I (I = watakushi, we = watakushitachi) and you (you = anata, you [plural] = anatagata). Doing it otherwise is like saying "yous".
PS: Did anyone know that the singular form of data is datum?
Watashi / Watakushi (Score:1)
According to the book "Mastering Japanese", I is watakushi. I suppose it is one of those regional things or something. Just like I heard that Japan can be called either Nihon or Nippon.
Watakushi wa nihongo ga wakarimasu yo, keredomo takusan wakarimasen ne.
Arigato gozaimasu
Sexual Reproduction of Computer Virii (Score:3)
Theoretically it should be possible to create viruses that reproduce sexually. There are two parents involved and the offspring shares traits of both parents. Have data structures similar to chromosomes that hold traits of the virus such as where it is stored, what it does, how it reproduces, its lifetime...
The viruses would then go around looking for other viruses of the same basic type (species), mix together the chromosomes and create varied offspring. You could even have designated virus breeding grounds.
In the programming side of this, someone would create the basic structure (species) of a virus and a way to insert traits. Virus writers would then come around and specify the traits they want, and send it out (either to a "friend" or to a possible designated virus breeding ground).
This would create a new type of virus: one that will eventually become so varied that no member of the species can really be removed easily.
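Stripped of anything virus-specific, the "chromosome" part is just genetic-algorithm crossover. A harmless sketch (entirely my own; the traits are made-up labels, not working code for anything):

```python
import random

# Two "parents" described purely as trait dictionaries.
parent_a = {"storage": "appended",  "trigger": "date",  "lifetime": 30}
parent_b = {"storage": "prepended", "trigger": "count", "lifetime": 90}

def crossover(a, b, mutation_rate=0.05):
    """Child takes each trait from one parent or the other, rarely mutated."""
    child = {}
    for trait in a:
        child[trait] = random.choice([a[trait], b[trait]])
        if isinstance(child[trait], int) and random.random() < mutation_rate:
            child[trait] += random.randint(-5, 5)   # small numeric drift
    return child

print(crossover(parent_a, parent_b))
```

The variation comes cheap; as other posters note above, getting the offspring to still run is the hard part.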
Next we'll be seeing system AIDS... (Score:1)
I can hardly wait.
Re:Computer AIDS? (Score:1)
Re:Sexual Reproduction of Computer Virii (Score:1)
Re:Details on Forrest's research (Score:1)
Hm, I do think some kind of fingerprint could be created for each compiled kernel. Added to some selected portions of code, it would make checksumming of the programs come out invalid without the right 'unopen' key. Of course, this might also add some problems in detecting changed binaries (a publisher spreading 'pure' binaries), but it would work fine in an open-source version. (A database of file checksums, a local key for the computer, something like that.)
So in part, it might actually be practical and would fix some possible virus infections.
whee, finally managed to html format my post. sorry for the garbled ones before this....
Re:But the API is the SAME (Score:1)
Sorry for the OT rant. Moderate us both down.
Re:Details on Forrest's research (Score:1)
And yes, I did understand Forrest's arguments (partly at least), but my rant was rather badly expressed in this matter. Sorry for causing misunderstandings.
And no, checksumming is not really irrelevant. If each program was evaluated before being executed, it would cause an overhead in load time, yes, but would also decrease the probability of running an infected file.
This would of course require solving a lot of implementation problems to get secure (or
then again, much binary released software uses this sort of thing for protection against 'cracking', and patches or cracks still appear..
yet another rant. Continue in private: spider@darkmere.wanfear.com
Security (Score:2)
What favorite beast has this trait? (Score:1)
Well, people, for one. Elephants, for another. Even penguins. No two penguins are exactly alike.
Wow. Now there's a good springboard for an extended metaphor. "No two penguins are exactly alike. Run Linux for evolutionary viability."
Seriously, though, what about evolving operating systems? Wouldn't that make some sense? Software DNA?
oops. (Score:1)
should have used the preview button. i always ignore the best advice.