News

NYT Magazine Says No Network Is Secure

bw writes "The NYTimes magazine explains why there is no such thing as a secure network. Along the way, it compares the attacks of script kiddies to a million monkeys firing catapults at random -- some attacks are bound to succeed. Also, Eugene Spafford thinks that after Y2K suits dwindle away, hungry lawyers will start looking at how the promiscuous connectivity of modern office apps can have dangerous side effects (think Melissa with a payload). " A truly excellent article! It's quite long, but worth the reading time, and if you don't have a (free) NYT login yet, this is the time to get it.
  • by Anonymous Coward
    There are a couple of major problems with security today.

    Network admins in a lot of corporations simply do not take it seriously. The attitude is "Hey! We're behind a firewall, there's no reason to worry about security." Bzzt! Wrong. In a corporate environment, the outside internet is only half your problem. You have to fear your own users just as much. All it takes is one employee who is disgruntled, or a corporate spy, or someone who simply knows too much and sets up a VPN link from home to work using SSH, and you've got a major problem on your hands.

    Also, users tend to not like security because it makes their lives more difficult. This problem is particularly bad with the clever users who often figure out how to bypass security in order to make their lives easier. Once security is compromised, you've got an opening for those corporate spies and disgruntled employees.

    If you want any measure of security at all, you'll keep your sensitive machines off any network connected to the Internet. Apart from that, I'd suggest running a REALLY secure operating system such as DG/UX with the B2 options installed. In order to get the B2 rating, they had to audit every single function in their C library to ensure that there were no side effects that could compromise security. They also had to audit all of the system programs that the OS ships with. They have Posix.1e functionality and then some. It's really quite an impressive system.
  • by Anonymous Coward
    This may be the first exposure of /. readers to Charles C. Mann, probably the best-informed tech journalist/writer out there.

    In the August issue of the Atlantic Monthly, he has a very clueful article about Linux, and goes into the GNU story. There were a few minor inaccuracies, but the big one is failing to mention RMS' greatest contribution: gcc.

    Unfortunately, the August issue of Atlantic is not yet on-line. But monitor the URL

    http://www.theAtlantic.com/atlantic/issues/current/contents.htm

    and it will appear eventually. Also you can search the Atlantic for previous articles by Mann. Slashdot readers will probably enjoy his multipart series on copyright issues in the digital age.
  • by Anonymous Coward
    that kind of thing never happens on the enterprise network. spock doesn't allow it.
  • by Anonymous Coward
    One serious issue of user convenience vs. network security is passwords. People have to know so many of them that duplicate passwords and/or easily accessible lists of them tend to proliferate. Furthermore, large numbers of passwords discourage changing them regularly. Currently I have six system passwords at work, one for my email software, and one for my phone. Outside of work I have an ATM PIN, PINs for several credit cards, two passwords for my home Linux box, and a password for my ISP. I also still have an account on a machine where I went to school. This is why I'm an AC; I've drawn the line at learning passwords for web sites.

    What's worse is that for better system security, passwords should be hard to guess, which, unfortunately, makes them hard to remember. Over a dozen different hard-to-remember passwords that should be changed every couple of months is nearly impossible to manage.

    We need a better solution.

    Will
  • What about computer terms?

    How about '--verbose'? :)

  • Yet, we see in the NYT article that even the systems that reside on the Secret / Most Secret "air gap" networks are running MS software. WHY IN THE HELL ARE THEY DOING THAT?!?!?

    Microsoft Office. It's not just the number-crunching that needs to be classified -- the resulting statistical analysis, report, and presentation are classified too. In fact, many people with classified data on their computers -- probably a majority -- have it there for communication only.

    For an amazing fraction of the people in this country, these kinds of tasks imply use of MSOffice. And the rest of us have to communicate with them.

  • This is a very good point, and not just for software and system configuration. The same principle holds for policy about what's classified (proprietary, trade secret, whatever). It's seemed to me for quite a while that US government policy is to classify as much as possible, so that the really valuable information is a needle in a classified haystack. This leads people to lose respect for the classification.
  • While I appreciate RMS's ideas, this one doesn't sound practical for general consumption.

    You assume that the only reason people crack computer systems is for the challenge. Crackers like to push this image because it makes them look like the "tormented genius" who breaks into computer systems as an intellectual challenge. In fact, they provide a service by showing us our security holes! What wonderful people! (Sarcasm intended)

    It is a well-known cracker ethic that once you break in, you don't damage the data. However, a number of crackers (especially the current onslaught of 13-16 year-olds who may or may not understand the "community" they have chosen to align themselves with) don't follow these guidelines and just break things. Denial of Service attacks have become more common, yet taking down a computer system is just a few steps less severe than destroying its data. Sometimes having your information when you need it is as important as having it at all.

    And I haven't even talked about the virus writers who have violated the "don't touch the data" principle more often than all crackers combined. That's another use of security: to keep programs from destroying the data of other programs. The only difference between a trojan horse and a buggy program is intent. A simple bug can trash a whole system if the operating system allows it.

    Basically, to argue that security is unnecessary, one has to pretend people don't do things just to be annoying. They do, and they will if you let them. Cracking is driven by curiosity and the desire to destroy. You've only considered one of those motivations.

    By your criteria, the only computers that ought to be hooked to the Internet are the vast majority of home machines that are used for games and web-surfing.

    Everyone else has data to protect and work to get done.

  • Try: make the whole world a graffiti wall and no one will use it. I agree with your meaning, just pointing out the flaw in your analogy.
  • The problem with pervasive networking and content enriched email is that it turns every old application into a security-critical application.

    Not so long ago it seemed that you could get away with not auditing the many large applications which are not set[ug]id and do not directly process data from the network.

    Nowadays even the most innocuous tool is going to have malicious data piped through it sooner or later - ghostscript, libjpeg, your cddb-enabled CD applet.

    While an attacker may not crack root directly through such attacks, it still lets them use your account - i.e. your email, PGP keys and personal files. They may still crack root later using keystroke sniffers, careless passwords or bugs in local setuid apps.

    The solution? We can start by making sure that all developers understand that security is a basic requirement for all software - you would think that this is a given, but alas security is usually an afterthought (if it is considered at all). Compilers like stackguard-gcc and languages with built-in security like Java will help, as will fast virtual machines that we can use to imprison suspicious code.
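
    For what it's worth, Perl's Safe module is one small, existing example of such a compartment. A minimal sketch (the "suspicious" code strings are made up for illustration):

      use strict;
      use Safe;

      # A compartment whose default operator mask already forbids file I/O,
      # system(), backticks, require, and similar operations.
      my $cpt = Safe->new;

      my $harmless = 'my $x = 2; $x * 21';        # plain arithmetic
      my $evil     = 'unlink "important.dat"';    # blocked by the op mask

      my $result = $cpt->reval($harmless);
      print defined $result ? "harmless code returned $result\n" : "refused: $@";

      $cpt->reval($evil);
      print $@ ? "evil code trapped: $@" : "uh oh, it ran\n";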
  • How about that? We all saw the news reports about how much it was costing companies for downtime and cleanup. It is a direct result of Microsoft's poor attention to even the most minimal security. The numbers are big enough to warrant a suit. Companies everywhere (except software companies, probably) would like to see Microsoft's products held to the same standards for safety and fitness for purpose that their own products are. They would probably like to see that Microsoft can't get away with making fraudulent claims in marketing materials anymore. Anyone else think it could work?

  • They won't lose marketshare unless people really believe they are doing something wrong and that their software is not the best available. That won't happen as long as Microsoft is allowed to continue blaming everyone else for their own security problems. Microsoft is a big respectable corporation. They wouldn't lie about this, right? They'd get in trouble if they did, right? The government looks out for consumers on stuff like this, right?

    The only way to expose them is to take them to court and win. That is probably what it will take to wake up the PHBs of the world.

  • Seems to me that the situation should be handled similarly to the way the police deal with someone who left their car open with the keys in the ignition because they "just had to run in and drop something off real quick." When their car gets stolen, the police will do what they can to find the person who did it and punish them, but the person who was dumb enough to leave the thing wide open gets a ticket for that as well. Consider it a ticket for negligence that costs taxpayers money by unnecessarily inviting crime. People shouldn't make things so easy for criminals. This applies to everyday security, and should obviously be taken much more seriously by individuals and companies with more to lose than a car.

  • I'm telling you: This is a very important election cycle. We had better make sure that the public understands the issues, and that the politicians disclose their positions... and then vote!

    Ok. I'll agree with you on this. Now, how do you propose we do this?

    How do you go about explaining these issues to people without sounding like a conspiracy nut?

    How do you get "the people" to pay attention to something they don't really understand?

    How do you give them a reason to pay attention? Probably 70% will be happy if the government "cracks down on crime" and "stops violence in our schools," and/or "saves social security" or "ensures good health care for HMO members." None of which they are likely to really do, but they will claim they did, and people will believe that.

    Then there's the problem of getting ANY politician to state his/her position on things that they don't even want to admit are issues. They don't want this stuff to be public usually. If it does go public, they have to seriously oversimplify it and then spout some cheesy rhetoric that will make people nod their heads and think they understand what's going on.

    Most likely, though, they don't even have a position. In that case, it is likely that they will just go along with the people who are funding their campaign rather than doing what's in the long-term best interests of all of us. This can be applied to all tech-related issues really. That's why I'm asking these things. It's not just the things you mentioned in your post, but many other issues as well. The decisions they make aren't just going to be policies that can simply be changed later. They have a lot of real world changes that will cost a lot of money to make, and even more money to change again later. It's rather important to get it right the first time.

    Finally, who do we talk to about all this? Who are the top geeks that have the ear of Congress and the other various government bodies? Do such people exist? Are they trustworthy? Are they reasonable? Will they listen? I'd like to do something besides just talk about it on /., you know? What I'd like to find out is where I should expend time and energy to make the most impact. Anyone have any ideas about this? I'd like to see the government start doing things right with regard to technology, for a change.

  • ...so that the really valuable information is a needle in a classified haystack.

    That's a good point. The problem is the asymmetry in the process. You can get into a lot of trouble for failing to classify a document that needs to be, but no trouble for classifying a document that doesn't need it. Also, it's easy to get a document classified, but an extensive review is required to de-classify it. That's why we still have classified military secrets from WWI.

    Just imagine the harm if a terrorist group knew how many Sopwith Camels we have stationed around the country!

  • by sjames ( 1099 ) on Sunday July 25, 1999 @08:05AM (#1785718) Homepage Journal

    An important point made in the article is that overly paranoid security causes users to bypass ALL of the security. If you make users accept new randomly generated passwords each week, they will write them down. If you allow your audit procedures to take years, they will quietly install unapproved soft/hardware.

    The two most ignored aspects of security are determining what level of security is actually required in the first place, and minimizing the burden of the security on being productive.

    Of course, even when that's all taken care of, there will still be cases where truly paranoid security actually IS called for. At that point, the problem becomes one of employee education, and an HR issue. You can't have a secure system if your employees won't respect that security need, or if they are black hats. Especially in the latter case, security flaws are not the system admin's fault.

  • US West needs a seminar or two on DSL security. They do not tell people to set up a password on their DSL modems, and I get the impression they might even frown upon such behaviour.

    A friend of mine recently installed DSL. I telneted from his machine to my ISP's shell account. I checked where I had logged in from, and telneted back to his modem. With no password I could easily have made it unusable and inaccessible to him. Obviously after that demonstration he fixed it!

    His IP address completely gave things away: wdsl106.*.*.*. It would not be difficult to check the numeric range and screw everybody up. In fact, I would like to see it done, as it would be bad PR for that crap company US West - it might improve things before DSL is completely rolled out and there is the potential for some real and very expensive damage.
  • Oh please, it was a JOKE! Geez... Read the message again.. He's not a script kiddie, he downloaded the port scanner and compiled it himself. Think about it.
  • The most major problem is that no matter what biometric you choose, there are people who lack that biometric (handprint scans are useless to someone missing that hand, for example). To reduce the number of people who can't theoretically use a biometric system, you have to use multiple biometrics, any one of which grants access. This, however, increases the possibility for error. Furthermore, there is only one biometric which everyone is guaranteed to have, namely DNA. However, this one has two severe problems: privacy concerns and the fact that identical twins (and clones) have the same DNA, so the biometric is not totally unique.

    The other problem comes with theft. Nowadays, people will shoulder-surf or guess your password, or steal your token; either way you're rarely hurt. But I don't want to think of what they'd do to get my handprint or retinas.
  • Hiding behind a legal disclaimer may or may not be sustainable, but I don't think it is legal. There is an act which requires merchantability or fitness for use (something like that). It puzzles me that Joel Klein is going after MSFT for uncompetitive practices, when I think there's a much better case for selling products unfit for use in terms of stability or security, or for false advertising. Incidentally, the government has a stated purpose of protecting consumers against intentionally or otherwise rogue merchants - a purpose that is beneficial to the individual, and thus a concept that some /.ers will find shocking.
  • As tchrist points out, Microsoft aren't the only people who ever made security errors. Even today, there is plenty of free software written that contains security booboos. The difference is, when a free software author sees their name on a security advisory, they get their act together sharpish. When Microsoft sees their name on a security advisory, they damn near laugh it off. They blame the user, they blame the hacker, they blame whoever reported it. They will never learn, ever, because deep down they believe that selling the most software makes them right.
    One of my systems was hacked, rooted, and the account logs (utmp/wtmp/messages) on a Solaris 2.5 box were wiped. A few obvious backdoors were written into the system (a couple of extra inetd and services entries, as well as 2 new users). A web page that was 2 inspection-steps away from going live was hacked, thus causing a full-scale review of all content for that site.

    Why do I consider this helpful? Several reasons. My org is unbelievably averse to change, and my repeated requests to do upgrades (solaris to 2.6 or at least to patch 2.5, upgrade wuftpd/httpd/bind/sendmail, put in antispam measures, remove production stuff from development box, etc) were continually denied or put off. This hack was kind of the 'I told you so' which I could use as ammunition (along with my previous emails, so remember kids, DO EVERYTHING IN EMAIL AND KEEP BACKUPS! The phone should be used for ordering food and holding paper down on the table) in my battle to upgrade. Now I was able to put in BIND 8.2, Sendmail 8.9.2 (+ antirelay, screw the ignorant misconfigured customers trying to go thru it), wuftpd 5.0, sshd 1.2.27, etc...

    I'm also only going to be there for 2 more days, before I start my new job across town for a substantial pay rise (and parking space!). Of course, they don't have a replacement for me, and I'm the only unix sysadmin in the entire company (btw, they have about 10-15 boxes, handling DNS for the org and its customers (about 200 domains), sendmail (forwarding for customers), apache (4-5 sites), samba, etc). They'll probably need a whole bunch of work until they find a replacement.. I'm thinking $200/hr, so I can get 'em for all the agita they gave me... STICK EM UP!

  • This article and some posts really point to the computer user of old. The Word, Excel, PP types.

    The article touched on a more disturbing trend I have seen. Educated Users (or somewhat). Those users who are running Server OS's at home, and understand how to circumvent Security SOP's. I see users bring in modems from home and hook them up. People dialing out of the company to their NT or *nix boxes at home with dedicated inet connections.

    I can deal with users who want to argue about why theirs is a bad password. It is the more educated users, who tend to forget that the workstation/network they use at work is not theirs, that are the real problem.

  • ServerOS meaning NT, Linux, BSD, etc. I purposely used "ServerOS" because specifically mentioning a particular operating system in ANY post seems to attract some petty responses, and also to acknowledge that a larger audience reads Slashdot, not just computer junkies like myself.
  • My perspective as stated below was to accept that a larger audience may read this, not just computer junkies like myself. So if I use Word as an example instead of, say, MultiMate, someone out there may say "oohh, those that just use it for email and docs".
  • Of course by your post I see that NOT mentioning an OS has the same results.
  • "Personally, I like the smarter users, they don't make idiotic mistakes such as bad passwords, and install software. They understand rules better and what is implied in them, along with their consequences."

    AGREED! After reading my original post a second time I don't think I got my point across. Using "Educated" was a poor choice; "Dangerous" would've been more appropriate. Those individuals that know just enough to try something "cool", but don't know quite enough to realize that they shouldn't.

    The problem lies not only with those users, but ultimately with the impotence of our IS department.

    Here's a cool example. IS does auditing of Web traffic so they can bring in some cool charts that show users visiting unauthorized (porn etc.) sites at the next directors' meeting, thereby getting the "OK" to further restrict net access. Unfortunately, after a month of auditing, the only users visiting unauthorized sites were the directors themselves :(

  • ok, bad example... actually shows where the real problem is :) hehe.
  • Who needs mirrors?

  • Is someone who wants the law to blame the victim of the criminal instead of the criminal a hacker or a cracker?

  • If we assume your "equation" is correct, then it is time for some breakthrough theory or implementation to get around it.
  • I disagree... Most open source authors (and users) try to ensure the product is as secure as possible. Look at various security holes in the Linux kernel, for example -- they were fixed as soon as they were found. (That's where the expression "security in Internet time" comes in.)

    It would be hard to prove that Microsoft *doesn't* fix bugs/security holes quickly enough. After all, they are commercial -- how long should it take them to recall a defective product?

    Unless you can prove that either Microsoft or any Linux product specifically ignores certain security holes, they cannot be sued.

    For example, someone might be able to sue Microsoft over the Melissa virus, since Word macros are a well-known security flaw [they run with virtually no protection/sandboxing features (such as disabled file I/O), besides a macro warning box], which Microsoft should have done something about.

    That's basically like GM purposely (and knowingly) shipping defective seat belts, causing the death of 23 people in the past year. Of course GM would get sued for it, and the suing party would likely win. Of course, if GM didn't know about the problem, it would be quite unlikely that any lawsuit against them would be successful.

    It's not a price issue -- we already know that GPL'd software has to follow the law, even though it's GPL'd (so stealing parts of commercial code, reverse engineering illegally, or infringing on trademarks is still off limits). Recently the author of gaim, the GNOME AOL instant messenger, was required to remove "AOL" logos from his product. So he had to follow the law.

    So basically the same laws apply to closed source and opensource projects.

    One thing to note: open source projects would be *MUCH* harder to sue, since you could easily claim that the user:

    1) didn't get the latest updates / didn't read Bugtraq

    2) didn't fully inspect / test the code before installing (since he had everything the author had).

    But still, the author would be responsible if the problem was blatantly obvious (such as if every Caldera employee knew that every 99th CD would destroy the user's Windows partition, or the source code proved that this feature was specifically coded in by a Caldera employee). This would be a definite lawsuit. However, if this wasn't a known bug, the author would not be responsible.

    ***So does Microsoft really know that macros are a real security hole (or just a great, easy-to-use feature that as a side effect makes viruses easy to write), and are they ignoring the fix?

    ***Well, we will leave it up to the lawyers to decide. Obviously, everybody has their own opinions.

    Thanks,

    Andrew B. Arthur aka AArthur
    arthur99@global2000.net
    AIM: aarthurppc
  • I, too, have been in the situation of finding that too many "restrictions" cause the more savvy users ("power lusers") to attempt to subvert them.

    I'm glad to have seen it called out in print.

    This phenomenon points up the fact that most of the security functionality being implemented is aftermarket layers upon software systems which are inherently not secure, and not impedance-matched across platforms. Until information systems are designed from the bare metal up with sound, standardized information security practices in mind, this phenomenon will persist.

    A successful attempt to subvert the security of a system should render it inoperable (like a dead man switch) and the data effectively lost to the author of the subversion and every one else until an authorized principal intervenes.

    The system also needs to distinguish sensitive data and non-sensitive data, secure conduits and insecure conduits -- somewhat like Perl's taint mechanism. If inconveniences are only associated with sensitive operations, the users are less likely to revolt.
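
    For readers who haven't seen it, a minimal sketch of Perl's taint mode (the hostname pattern and the ping command are just illustrative):

      #!/usr/bin/perl -T
      # -T turns on taint checking: data that comes from outside the program
      # (arguments, environment, file input) cannot be used in anything that
      # affects the outside world until it has been untainted.
      use strict;

      $ENV{PATH} = '/bin:/usr/bin';            # a tainted PATH would block system()
      delete @ENV{qw(IFS CDPATH ENV BASH_ENV)};

      my $host = shift @ARGV or die "usage: $0 hostname\n";   # tainted input
      # system("ping -c 1 $host");             # would die: "Insecure dependency in system"

      # Untaint by extracting only what a whitelist pattern allows.
      my ($clean) = $host =~ /^([\w.-]+)$/
          or die "suspicious hostname\n";
      system('ping', '-c', '1', $clean) == 0 or warn "ping failed\n";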

  • Right, hopefully the admin has services protected and packet filtering in place at least, so that someone outside the network who learns a password won't have an opportunity to use it in the first place.
    Gah, security is a pain in the butt.
  • Not only am I concerned that this will inspire a "security crackdown", but I'm afraid that this might be the beginning of a P.R. campaign by those who would benefit from it. The only way for big corporate America to cut down the no-barriers-to-entry competitive environment they are facing is to have Draconian laws passed about who can connect and with what.

    I attended a security seminar, given by a big American corporation, within the last year. The instructor was an ex-Air Force half-wit who regurgitated a bunch of bad ideas for network security. I came away with the feeling that people are already making plans to make the Internet less open: fewer hosts, fewer competitors (Internet 2). It fits right in with Microsoft's new server applications strategy.

    I'm telling you: This is a very important election cycle. We had better make sure that the public understands the issues, and that the politicians disclose their positions... and then vote!
  • The product of security multiplied by convenience is a constant.

    What is more convenient than putting your finger on the scanner embedded in your keyboard, and nothing else, to log in? How is that insecure? You can't steal my finger.

  • Well, actually, no...

    Suppose that all systems were open, then:

    1. no one would hack any system; most files on most systems are not very interesting, and if there were no challenge in cracking them no one would bother.

    If you actually read the article, you would know that attacks come from 'script kiddies'. There is no challenge for them now. They would continue to attack networks for fun.

    2. Sysadmins would have time for more interesting things than building barricades around the systems. more work would get done.

    Sysadmins would spend 100% of their time fighting fires from the lack of security, and no work would get done. Even if the network is never attacked, they would still be fixing problems caused by the lack of security. Another reason for security is to prevent internal users from doing what they shouldn't. Imagine if everyone could 'rm -rf /'.

    3. if someone really needed to get access to some machine he wouldn't be stopped by security measures (I have some files on that machine but I don't have access anymore and all my important work has to wait until tomorrow when the sysadmin comes back in)

    Did you not read about grey networks? Secure data will migrate to insecure networks. Secure work will be done on grey boxes. And you seem to be implying that the sysadmin has supreme access to both networks. In anything but a day-old Unix box, sysadmin privileges are fragmented and customized to individual admins. It's hard to do this with the default security model on Unix or NT. That's what NDS is for. Unix is all or nothing with suids hacked in on top; NT by default grants everyone all access, and you have to explicitly deny rights. Only NDS on NetWare (or Solaris, NT, or Caldera (?)) works sanely with ACLs and such.

  • When it comes down to it, what you're talking about is authentication. Authentication is the root of all security: proving to something (or someone) that you are who you say you are.

    There are three forms of authentication. Something you have, something you know, and something you are. The first is something like a key, or a (magnetic stripe on a) credit card. You use a key to authenticate yourself to a door, or the ignition on your car. You might use a mag stripe to do the same, open a door.

    Something you know, is a password, or a pin, simple enough.

    Something you are is biometrics. Fingerprint scans, retina scans, facial recognition, DNA, etc.

    Good authentication requires two of these, preferably one being biometrics. To get money out of an ATM you need to have a bank card, and you need to know your PIN. To enter a secure room you may need all three: a PIN, a mag card, and a guard to match your face to the picture on your ID badge.

    Since remembering a password is hard (*cough*), people given the choice will pick easy passwords, or else write them down and tape them to their monitors. Neither helps security at all. If logging in requires a fingerprint scan instead of a password, then you're doubly better off. Fingerprint scans are harder to break than good passwords, and you can't tape your finger to your monitor.

    It's not impossible to make a system that is both easy to use and secure. Unfortunately, systems are never both, because sysadmins and developers don't realize that users will subvert security if it's hard to use.

  • If you are a systems admin, and worth a damn, you should have plenty of free time on your hands. If you type it more than once, automate it.

    I'd bet you have never worked for a PHB (assuming you have worked at all). Often one isn't given the resources to do a task once, let alone the time to automate it. And how the heck do you automate something like reinstating passwords? As the article pointed out, the wetware attack is commonly the easiest one, and admins spend a lot of time dealing with it.

  • Let's face it, most people who can work their way around a computer wouldn't trust MS software in an environment requiring high security.

    Yet, we see in the NYT article that even the systems that reside on the Secret / Most Secret "air gap" networks are running MS software. WHY IN THE HELL ARE THEY DOING THAT?!?!? These people are supposed to know better, but they do it anyway. More than likely they were "ordered" to do it by someone who doesn't know any better, and doesn't have the common sense to trust the people that were hired because they know better.

    Running MS software within a designated secure environment should result in charges of high treason.

  • What is more convenient than putting your finger on the scanner embedded in your keyboard, and nothing else, to log in? How is that insecure? You can't steal my finger.

    Anyone in the house have a pocket knife?

  • Imagine if, when the automotive companies had learned that rear-ending a car (think Pinto) triggered a massive explosion, they had first tried to blame the people doing the rear-ending for causing the explosion, and then the victims for getting in the way. That's what's going on now.

    When you buy a car, you have a reasonable expectation that it won't gratuitously explode for lame reasons.

    When you buy Microsoft products, you know that they will fuck up. Microsoft has been around for many years now and the "quality" of their products is well-known. Is ignorance really a believable excuse anymore? If you buy MS, you deserve to lose.

    There's a funny idea going around: "Nobody ever got fired for buying Microsoft." Hopefully, that quote will begin to lose its meaning over the next few years. If I buy Microsoft and my company loses thousands of dollars per month due to screwups, then I deserve to be fired for my stupidity or negligence. And if I were working for the government or military, the word "treason" starts to sound applicable.

    Anyway, MS shouldn't be sued over this. They should simply lose marketshare (or -- *gasp* --- improve their products!) as people start having to take accountability for using known-defective software. Let 'em explain to their stockholders how "goodwill" on the balance sheet has been placed in the liabilities section.

  • The problem is that it doesn't matter how hard you work to secure your network: if your users will tell their passwords to someone else, your work is in vain.

    Melissa, ExploreZip, and Happy99 are good examples of this. You try to build a good, secure system for your users, who are in fact smart enough not to tell anyone their passwords and not to run exe files they get by mail. But the users made the mistake of having friends who use Outlook... Bam! Your mail server is flooded. (I think a solution would be to discard messages with "X-Mailer: Microsoft Outlook" and tell the sender about the security problems with it; a sketch of such a filter follows.)
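
    A minimal sketch of that kind of filter, assuming the message arrives on STDIN (say, from procmail); the header name is real, the rest is illustrative:

      use strict;

      # Read only the header block: it ends at the first blank line.
      my @headers;
      while (my $line = <STDIN>) {
          last if $line =~ /^\s*$/;
          push @headers, $line;
      }

      # Flag anything that claims to have been sent with Outlook.
      if (grep { /^X-Mailer:\s*Microsoft Outlook/i } @headers) {
          print "DISCARD: tell the sender about Outlook's security problems\n";
      } else {
          print "OK\n";
      }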


    What you can do is revoke access for "security hazards". If a user is dumb enough to give out his password, tell him not to do so, and tell him that if people do "bad things" to his data, it's his fault. If the user has access to important things, revoke his access. Disallow insecure software, etc. Also, use SATAN-like tools (Nessus is quite nice).

    Bottom line: a wall is only as strong as its weakest brick, so instead of trying to make a strong brick stronger, try to take care of the weak ones.


    ---
    The day Microsoft makes something that doesn't suck,
  • But I (and presumably the previous poster) don't
    want black-box ActiveX controls to run. Period.

    Why not just pop up a dialog saying "This MS
    product is collaborating with an MS web designer
    to pump an unspecified privileged executable
    (written for your inherently insecure system)
    your way. Would you like to risk losing all
    your valuable data for the opportunity to play
    with an annoying interactive advertisement that
    you probably won't like, and if you do it's just
    because you've become a mass-market puppet?"

    p.s., this forum is in English, please learn
    to use it.
  • So we're going to have Python scripts that steal fingerprints? :)
  • I meant that if/when someone sues Microsoft, the open source community should be in support of Microsoft instead of doing the normal Microsoft bashing..
    Because if/when someone sues Microsoft for bugs, then the Linux community will be next on the list.

    Reputation aside, Microsoft has a lot of money, and that more than anything will attract lawsuits.
    RedHat and other such companies are also easy targets... once people see money in it...

    My original post was a tad slanderous of Microsoft.. That wasn't a smart thing to do...
  • I've heard this type of approach also advocated in the situation of drugs - legalise everything in state owned pharmacies, the drug barons lose out, the proper information is then available, and the organized crime behind it all collapses.

    In the case of opening all systems though, it's a nice pipedream, nothing else. The fact of the matter is that it would not work (the emptying bank account example is enough to show that). I don't see any real conclusion to the current scenario of skR1pt k1dd1e monkeys firing their catapults... There is no coherent "community", there's always going to be someone releasing the exploits in point'n'click form, so there's always going to be the kiddies. Dead end. Please check and try again.

    --Remove SPAM from my address to mail me
  • but this comment is so inane it has to be done...

    Actually, hold on, never mind. I think its inanity is so blatant I need say nothing.

    --Remove SPAM from my address to mail me
  • >Better_Operating_Systems.org

    How about starting with Better-DNS-Knowledge.org?
    IIRC, underscores aren't allowed in domain names.
  • "It took them out to figure out this long?"

    I think you mean:
    "It took them that long to figure this out?"

    That's much better ;)
  • This is just the way I feel about it.

    Who's to blame, and/or are you whining? I have said, for many years, that only users and administrators can secure a system/network. Also, if it breaks, the user is never held responsible: the user can be blamed for system/network problems, but the administrator must always resolve the problem. In other words, never blame the customer-user of a system and/or network for the designed and/or innate weaknesses and problems of the systems/networks.

    A stupid user with the same password on multiple systems or an enterprise/corporate/research network that requires a user to maintain 10 to 20 passwords for day to day work is "almost" as stupid as the user.

    For real security always think point/person to point/person and network I/O ledger/log (monitor payload for terminal/user signature and sensitive content) ... Identify either/both ends of the security breach and eliminate the source/s. Actively defend your systems/networks at the point/terminal/person (in your network) and know all network I/O sources.

    The person at the terminal end, and/or the administrator of the network is always the cause of the security breach. If you don't look at and demand security built in to your H/S, then why should H/S companies put a priority on system/network security and spend profit dollars on developing better H/S security. However, no matter what H/S security is developed, I expect security breaches will happen as long as users and administrators don't work together and brief each other on "How To" secure ....

    Let's not ask Congress for more (forever-lasting) worthless/unworkable laws that allow US to blame the criminals for self-inflicted problems caused by lazy administrators, tech-stupid CEOs/bosses, ..., uninformed/untrained users .... Security is the issue for all of US, not just the TAP of IM/IT/TelCom/... administrators and techs.
  • I hope this doesn't inspire PHBs to pressure the system administrators into tightening network security too much. Those of us behind firewalls already suffer enough.

    Where I'm studying, the only access we have to the net is via a web proxy. Apparently everything else is too insecure, including ssh. Fortunately, if you know the right people, you can get access to the SOCKS5 server, which will let you do most things (although I still haven't found a good telnet client for NT which will work with this particular SOCKS server).

    It's lucky we're even allowed email privileges after last year's mail-bombing that left our mail servers unavailable for a couple of days... ;-)

  • Don't get me wrong, security is definitely important. But as this article points out, it's really the users that are insecure, more than anything else.

    Obviously there's only so much you can do to secure machines and networks, but if you don't have educated users, then you may as well leave the whole system open. ;-)

    I was only complaining because it's frustrating to know you can't access your own machine via ssh from elsewhere because the sys-admin has deemed ssh to be 'too insecure' for use on the network, despite the fact that the only people who would want to use it are those who are probably quite aware of security issues already.

  • Well, Q3 will work through SOCKS, so yes, pretty well everything important will work... :)
  • Your Slashdot password is stored in a cookie, so you don't need to remember it if you don't want to.

    A simpler solution would be to use something like
    g%4d3*af for everything; I usually use the same password. Anyway, you don't need to remember your /. password if you don't want to, and customization is cool too. I encourage you to get an account
    _
    "Subtle mind control? Why do all these HTML buttons say 'Submit' ?"
  • You have a choice of whether or not you want to run an ActiveX control in each case; it doesn't happen automatically
    _
    "Subtle mind control? Why do all these HTML buttons say 'Submit' ?"
  • I can deal with users who want to argue about why theirs is a bad password. It is the more educated users, who tend to forget that the workstation/network they use at work is not theirs, that are the real problem.
    It's the responsibility of the company - the department that generates policy, in coordination with others within it - to generate company policy regarding computer use. It's up to the office administration to say who gets an analog line and why. Furthermore, it's the IS department of the company who decides to blow the whistle on people who try to mess with their machines. Of course, this might be just ignorance or plain old stupidity in different parts of the company.
    I see users bring in modems from home and hook them up.
    A simple clause such as "Installation, deletion or modification of software on any company computer hardware will lead to immediate disciplinary action" would circumvent some problems. And anyone who signs something they don't understand, such as a work contract, usually gets what they deserve.

    Personally, I like the smarter users; they don't make idiotic mistakes such as bad passwords or installing software. They understand the rules better, and what is implied in them, along with their consequences.

  • Who's to blame, and/or are you whining? I have said, for many years, that only users and administrators can secure a system/network. Also, if it breaks, the user is never held responsible: the user can be blamed for system/network problems, but the administrator must always resolve the problem. In other words, never blame the customer-user of a system and/or network for the designed and/or innate weaknesses and problems of the systems/networks.
    That's like blaming the parent whose kid left his key in the front door lock. I would most certainly agree with you that the user or administrator is the one at fault if it comes down to weak passwords.
    A stupid user with the same password on multiple systems or an enterprise/corporate/research network that requires a user to maintain 10 to 20 passwords for day to day work is "almost" as stupid as the user.
    To a point, it's not help-able. NIS and LDAP, for example. What about when, on the same box, a user has shell, POP and FTP? I'll admit, I do it myself. Call me stupid, but for certain levels of security or different groups of services/machines, I use different passwords. There is a level of practicality that a person has to evaluate. Will I be researching passwords for different machines all day long instead of working? For the 30 accounts, for which I can't memorize all the passwords, should I write them down? And if I do, triple encrypt them under one password? But that's bad enough...

    I don't think it's so much stupidity as just ignorance. Was it an end to stupidity or an end to not knowing that too much causes cancer?

    ---
    When in doubt, scream and shout!

  • The problem isn't the users or the administration of the company. It isn't the administrator either. It is really the ignorant. It is the users who fail to use good passwords, the administration who want access from anywhere, and the bad administrators who leave their networks open to accept broadcast packets from anywhere. It is the bad programmers who trust user input to work with strcat in C or who use open(F," $fromUserForm"); in Perl (a sketch of the safer alternative follows this list).
    • Problems from my old job? The usage of sniffers, since everything was on hubs.
    • Unix machines used from inside the network that is not secure.
    • Access from anywhere for ftp without restriction other than username and password
    • Age of oldest backups: 2 weeks. The hard drives also had quadruple the capacity of the tape drives.
    • Bad backup schemes: if you can't fit all of it on one tape, do a full backup of a fraction of the data on different days; otherwise use an incremental for that day. I don't feel like sorting through 25 tapes if a system goes down.
    • Bad passwords galore
    It's too bad: one can talk, but no one listens.
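
    As promised above, a minimal sketch of why the two-argument open(F, "$fromUserForm") is dangerous and what a safer version looks like ($from_user_form and the upload directory are made up for illustration; the three-argument open needs a reasonably recent Perl):

      use strict;

      my $from_user_form = $ENV{QUERY_STRING} || '';   # pretend this came from a web form

      # UNSAFE: the two-argument open lets the user smuggle in modes and pipes,
      # e.g. a value of "cat /etc/passwd |" runs an arbitrary command.
      # open(F, "$from_user_form") or die "open: $!";

      # SAFER: accept only a whitelisted filename, then use the three-argument
      # open, which treats the value strictly as a file to read.
      my ($file) = $from_user_form =~ /^([\w.-]+)$/
          or die "bad filename\n";
      open(my $fh, '<', "/var/spool/uploads/$file") or die "open: $!";
      print while <$fh>;
      close $fh;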
  • One wonders if articles like this will result in more traffic toward operating systems built with security in mind like OpenBSD.

  • I think this quote from Michael J. Miller, editor-in-chief of PC Magazine is appropriate (from his opinion column of May 25, 1999 [zdnet.com]). He is speaking about the Melissa virus:

    "The biggest problem is that the architecture of Word and Excel, with their embedded macro capabilities, makes them great targets for virus writers. Visual Basic for Applications makes writing such macros easy, and in this context, that's the absolute worst news."

    Certainly Windows 9x and other consumer-level products from Microsoft leave much to be desired in the way of security. In fact, the first time I discovered you could bypass the Windows 95 "login" by pressing Cancel, I nearly blew a gasket laughing.

    Microsoft does user interfaces probably better than anyone. But despite what many consider to be a superior "look and feel", I won't use Internet Explorer because I don't like the inherent security risk associated with ActiveX components. Similarly, though it might be more convenient, I won't turn on embedded macros in any Office product because it's not worth the risk.

    The great benefit of the Melissa virus for me is that the widespread coverage got my students asking me about the virus. I was able to take a day explaining the nature of macros and why the fundamental design of Microsoft Office puts them at risk. Now at least those that paid attention are more cognizant of the security issues with the systems they use every day.

  • Go down Main St. and tell all the shop keepers to leave their front door unlocked and instead spend their time/money to get a bigger safe for the cash and other valuables.

    Then, when I woke up at 4:00am and I suddenly had a craving for a candy bar, a book, a new coat, a TV, or a Ferrari, I could just go and get one. Leaving the appropriate amount of cash on the counter... of course.

    Wow. Wouldn't this be nice and convenient.

  • Security is certainly difficult, but network security is totally
    possible


    If you're suggesting that it's possible to totally secure your
    network, then you're just plain wrong.

    It doesn't matter how often you do a "sweep", it doesn't matter what
    tools you use, you can only scan for holes that you know about. What
    happens when a cracker finds a hole that you DIDN'T know about?

    Any sysadmin that believes his/her network is impregnable is a poor
    sysadmin, because they delude themselves that they're better than they
    are. Remember the old adage "Pride goeth before a fall."
  • Well, if the EULA can be considered a contract (which depends on whether it can be read before purchase, or the product returned without prejudice once the license is read but the software not used), then the lawyers will go after the next target. There's always somebody to be sued. For example, the manufacturer who preinstalls Windows.

    What if you buy a used computer from somebody; does the MS EULA prevent you from transferring your license in this manner? If not, is the EULA in effect for the new licensee? If so, can you legally be held responsible for defects in the goods?
  • Simple -- you don't use the fingerprint to unlock the secure resource; you use a hardware key to unlock the resource and the fingerprint to unlock the hardware key. The hardware key has a self destruct so that if it is opened it forgets its passwords and has to be reprogrammed.

    If they got a digital version of your fingerprint, they'd have to either (1) break the encryption scheme in the hardware key, or (2) get physical access to both your finger and your key.

    Suppose the bad guys will be able to break cryptographic keys of length N at some point in the future. My hardware stores a cryptographic key of length 3N. I then break my cryptographic key into three strings of length N and store each piece in a different bank vault. Then the NSA will need to do at least two black bag jobs to be able to access my data.
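
    A literal sketch of that splitting scheme (the key value is made up; a real setup would probably XOR-split the key instead, so that no single slice reveals part of it):

      use strict;

      my $key = '0f1e2d3c4b5a69788796a5b4c3d2e1f0a1b2c3d4e5f60718';  # 3N = 48 hex chars
      my $n   = length($key) / 3;

      # Cut the 3N-character key into three N-character pieces, one per vault.
      my @pieces = unpack("A$n A$n A$n", $key);
      print "vault $_ gets: $pieces[$_]\n" for 0 .. 2;

      # Recombining all three pieces restores the original key.
      die "split/join mismatch" unless join('', @pieces) eq $key;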
  • Taking ideas from an RMS speech - I suggest that we should leave most systems wide open, and totally disconnect from the net anything that requires top-secrecy.

    Suppose that all systems were open, then:
    1. no one would hack any system; most files on most systems are not very interesting, and if there were no challenge in cracking them no one would bother.

    2. Sysadmins would have time for more interesting things than building barricades around the systems. more work would get done.

    3. if someone really needed to get access to some machine he wouldn't be stopped by security measures (I have some files on that machine but I don't have access anymore and all my important work has to wait until tomorrow when the sysadmin comes back in)


  • You'd actually probably need to use 2-3 biometric methods, considering that you shed DNA constantly in the form of blood, skin cells, and cheek cells. DNA could be gathered from the dust in your cubicle, the fork you ate lunch with, or that bandaid you just threw in the garbage. Hell, if you donate blood, someone could rob the blood bank if the information you access is that important.
    I don't personally think DNA is a good primary verification, considering you're shedding it everywhere you go.

    The problem with the retina is that, unlike the fingerprints, the retinal image changes with disease processes such as Diabetes, Hypertension, Hyperlipidemias, Hydrocephalus, etc. It might be beneficial to spot these early b/c you can't log into the system, but once these pathologies start, your retinal scan will change slowly over time.

    However, I would bet that fingerprint verification and retinal scans together would be easy enough to implement and quick enough for a computer to verify that they could be used together for very high accuracy.
  • ...and erase my hard drive while you're at it. ;-)

    --bdj

  • by El Volio ( 40489 ) on Sunday July 25, 1999 @02:31AM (#1785774) Homepage
    This was an outstanding article for the mainstream press that covered a number of key security issues that are fairly subtle to those who do not work in security (it even gets the "cracker"/"hacker" dichotomy right).

    It also makes an interesting point, one that I've had to deal with for a long time, and most security folks have as well: One of the difficulties in securing information is that these measures many times make life difficult for the users, and when those users are technically skilled themselves, life gets that much more difficult.

    The problem lies at the very essence of security. A secure system restricts the flow of information contained within it, but this is counterproductive to what users are trying to accomplish. Unfortunately for the users, sometimes it's more important to have secure information than ease of use. And as long as malicious individuals exist, this will be a "necessary evil".
  • I like this idea -- the mongo company I work for tends towards the "lock-it-down-then-review-user-needs-in-five-years" side of the spectrum. They use SMS to prevent installation of useful software, which interestingly did not save them from Melissa, CIH, or the Explorer worm. I say that because it takes very little time at all for the many power users to subvert the system, either by networking the computer to insecure but usable systems or by partitioning the disk and creating their own stealth version of the "standard desktop."

    All the crackdown really accomplishes is user hostility towards MIS, because it is inflexible and the reasoning behind it isn't open to discussion.
  • >By your criteria, the only computers that ought to be hooked to the Internet are the vast majority of home machines that are used for games and web-surfing.

    And how many of those have a copy of Quicken or MS Money on them?
  • I like this article. It's clueful, balanced, and has the requisite number of quotes. There is the seminal quote by Spaf: "...locked in a safe, surrounded by armed guards, and even then I wouldn't bet on it".

    It goes just deep enough to clarify a bunch of issues for those who have only seen the knee-jerk reactionary articles of the overworked sensationalist press. It does leave a few questions unanswered, and although I would like to see the answers, this article is right in not including them.

    So the FBI caught a teenaged hacker who stole a password and got into a bunch of sensitive computers at SFI, LANL, LLNL, and a few others, and they didn't call in a SWAT team led by Janet Reno. That in itself is a revelation. The press-hungry FBI actually did their jobs instead of sucking some columnist's dick? Stop the presses! Makes you wonder what they did to the stupid guy who mailed his only password to all his cow-orkers where any script kiddie could pick it up. Did the FBI come down on him like a ton of bricks? Did he get a 5-10 year sentence for aiding and abetting a felony involving national security? Probably not.

    There is also a great section on connecting two secure networks together with an encrypted line, and then having one of the nets get compromised. It doesn't matter how strong the encryption is, the end systems are still the weak link in the chain.

    I'm going to have to get reprint permission for this article, third generation photocopies won't do it justice.

    the AC
  • So how do you do this on a weekly basis? Host based scanning, or network scanning?

    This is just out of curiosity, since I've recently been involved in (actively avoiding) a discussion about which is better, host or net scanning. My position is that both are needed. An unpopular answer, because that costs more money :-)

    the AC
  • Recently attended a big sales pitch on the new generation of home cable and DSL boxes. Idea is that a consumer can just buy one of these things, take it home and plug it into a cable TV system and be up and running.

    There were some technical details about how all unregistered boxes would always be directed to a sign-in page, so the consumer would just have to enter a credit card number and the box would then reboot with a real IP address. Then the consumer could start surfing the web within minutes.

    Great idea, but I asked about setting passwords on the modems or the PCs. The horror and shock was obvious. Seems they did some studies, and found that if an average consumer has to enter a password to secure their system, they prefer not to buy or use the product. But the legal department had forced them to design their web site so the consumer would have to scroll through three pages of smallest type legalese, pressing accept at the bottom of each page. Buried in all that was a warning to set passwords. That was acceptable, but forcing it was not.

    So afterwards got a tour of the demo network, with some sample set top boxes and PCs. Whipped out the portable hacking/cracking laptop, and within a few minutes had control of every modem and PC. The big company is going back to the drawing board for the rollout plans, maybe to get each customer to set a line noise type password on their modems, and force them to write it down as part of the login process for the first day or two.

    People never learn, which is why crackers have life so easy.

    the AC
  • I expect the legal language behind the shrink wrap on your new copy of windows 2000 is getting slowly but surely beefed up as we speak.

    The NY times article had the amusing quote about cars: sure they would cost a penny, and do 400 miles an hour etc (the old analogy), but what if every day, someone on the other side of the world caused the car to explode, killing its occupants and several bystanders.

    What if, when we buy a car, there is this piece of paper stuck behind the driver's window, saying, "By opening this door, you agree not to hold Ford Motor Company liable for any drawbacks in the design of this car, or for any damages, monetary or otherwise, that you or your family should suffer through use of this car. We do not warrant this car's fitness for any purpose."

    Like it or not, we are moving to a virtual world, our assets are becoming digital rather than physical, and along with that come the fruits of bad design: damages, responsibility, lawyers, and so on, just as in the physical world. Microsoft, and every other vendor, had better grasp that and either hide behind a barricade of legalese (not a sustainable strategy) or behave as if they were making X-ray machines, cars, industrial saws, and other potentially deadly gadgets.

    So I agree. Picking on Microsoft isn't the whole issue; you can equally well pick on eBay, Oracle, Sun, HP, IBM, and all the rest. But Microsoft does have the most arrogant attitude, so by all means let's kick them first.

  • by asianflu ( 54533 ) on Sunday July 25, 1999 @05:39AM (#1785781)
    I have a test page that invites people to queue up their beloved home PC to get checked from "outside" and to have a few pings of death thrown at it. (www.dslreports.com/r3/dsl/secureme).

    You wouldn't believe what I find... or maybe you would. Many PCs have readable NetBIOS usernames; Back Orifice was found twice out of 100 machines. Cisco 675 home DSL routers/modems with NO password and NO enable password, open shares with guest logins, SOCKS servers, firewalls with web configuration ports visible on the wrong side (my side), web servers meant for internal use with convenient displays of the internal network on them, visible from outside.
    And of course there are machines that blue-screen after they get pinged with one of the many packets that cause Bill's code to scribble where it shouldn't, but you can't blame people for that.
    The recently reported break-ins to home PCs on full-time net access, also in the NY Times (with a Linux box partially compromised through imapd, I believe), could be reduced with some very basic external checking, something ISPs should provide as a free service (a sketch of such a check follows at the end of this comment).
    Right now it would be trivial to construct, with a bit of Perl and a bad attitude, a sweeper that found enough PCs on DSL or cable to go straight to the top of the seti@home charts, or to launch an attack against something harder, all from the bedroom of some guy who uses his PC to balance his checkbook.

    The far worse risk here: imagine somebody has a VPN to their super-secure office network, it runs over Internet DSL, and they are lax about security. How long before somebody writes a VPN scanner that finds insecure, full-time connected PCs and gets onto them to see if there is a VPN to a corporation that can be snooped/cracked/hijacked/watched? Companies think an end-to-end encrypted VPN is secure, but they don't think enough about the fact that the end of their tunnel is managed by an employee with little security knowledge, on a Windows PC with a configuration that is insecure by default.

    -Justin
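    The basic external check described above can be sketched in a few lines. This assumes Python 3; the port list is illustrative (telnet-configurable modems, NetBIOS, web config ports, and the old Back Orifice default, which in the wild actually listens on UDP), and it should only ever be pointed at machines whose owners asked for the check.

        import socket

        RISKY_PORTS = {
            23: "telnet (router/modem configuration)",
            139: "NetBIOS session service (open shares?)",
            8080: "web configuration / proxy",
            31337: "Back Orifice default (TCP probe only; the real thing uses UDP)",
        }

        def check_host(host, timeout=2.0):
            """Return the risky TCP ports on `host` that accept a connection."""
            found = []
            for port, label in RISKY_PORTS.items():
                try:
                    with socket.create_connection((host, port), timeout=timeout):
                        found.append((port, label))
                except OSError:
                    pass  # closed, filtered, or unreachable
            return found

        if __name__ == "__main__":
            # 192.0.2.10 is a documentation address, standing in for a consenting user's IP.
            for port, label in check_host("192.0.2.10"):
                print(f"TCP port {port} is reachable from outside: {label}")

    A real service would add UDP probes, banner checks, and rate limiting, but even this much would have flagged most of the cases listed above.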
  • IMHO, the risk of hungry lawyers turning to security-related lawsuits once the Y2K issues are over seems high. Menacingly high, in fact.
    Lawyers have this ability to turn simple things into gigantic monsters. Set a lawyer to work on security-related cases and one of them will likely make all of us look like the Devil incarnate through misunderstanding of the difference between Hacker and Cracker.
    But what can we do? I think we need to keep working hard at driving the difference between Hacker and Cracker into the public awareness.
    If we don't do this job well enough, we might end up seeing unfortunate cases of public opinion turning against us. Since I aspire to be a representative of Better_Operating_Systems.org and a member of the Open Source movement, I don't like that idea...
    Has anyone any idea just how well the public understands the Hacker/Cracker difference? How much work do we have in this field? Perhaps we can harness the Net itself to find out. Maybe a poll in the right place, or a letter to everyone you know asking them to ask their family and neighbors to see if they understand the difference...
    We ought to get started.
  • Think of what Y2K lawyers could do to the software landscape. Given the patterns of ignorance I see in the public, to say nothing of what lawyers show, I would not be surprised if the Y2K lawyers attempted to attack Linux; it has so many large, tempting targets attached to it: SGI, IBM, and so forth. Linux can also be demonized, after all, because it's distributed with full source code, so any hacker can scan it to find bugs to exploit.

    Even if they don't attack Linux, I would lay good odds that they could change the software landscape so that one cannot use Linux or any other OSS in a corporate environment because it hasn't been 'audited' and studied by a VLO (Very Large Organization) for ISO compliance (or some such garbage). That alone would limit software to being created and sold only by large organizations. If that happens, a corporation could not obtain insurance and would be exposed to huge liability for using OSS.


    On another note, couldn't any commercial entity that supports or distributes OSS be hit with liability?

    OSS as distributed now requires that the original authors bear no liability for any of its effects, planned or unplanned. Authors won't distribute OSS if it opens them up to liability; if releasing code could get them sued at any time in the future, they won't take that risk and will never release it.
  • Well, don't forget VPN technology. Public/private-key encryption over a virtual private network would take years to break... and you know journalists are always shooting for the apocalyptic kind of story...
  • Private keys stored in a smart card are secure, so the equation is now: VPN + public key + smart card = security.
  • Who are "we"? If you are talking about the Free/Open Source community, then who are they going to sue? How can they sue someone for providing a program and saying, "Here, use it if you want, but don't hold me responsible for any problems you have"? The fact that the program is "free" makes quite a bit of difference compared with Microsoft, which charges for its programs...
  • Having a security token, in addition to your password, is the true answer to security. The only problem is that the vendors who make the tokens charge an arm and a leg for them. I believe the latest pricing I've seen listed a 3-year token at $39 USD. Seeing that it can't realistically cost them more than $1 USD to make, that seems like a bit too much margin to me.

    If you could get a token for $5 that lasted three years, then I think they would be much more prevalent and would be incorporated into security schemes much more often.

    Of course, I would only be supportive of their use in internal systems or "services" such as your ISP that you have to give personal information to. I would not support their use in general e-commerce, as it would be too easy to track everybody and the products they purchase. Besides, the way these devices work now is not conducive to e-commerce (you can't share your token ID and serial with multiple vendors, because if they knew it they could pose as you -- there could, though, be a service company set up that multiple vendors could authenticate against). A sketch of how such a token derives its codes follows below.
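    For what it's worth, the core of how such a token derives its codes fits in a few lines. The sketch below, assuming Python 3, computes a six-digit code from a shared secret plus a 30-second time counter, the same basic idea behind time-synchronized one-time-password tokens (later standardized as HOTP/TOTP); the secret and parameters are made up, and real products add provisioning and clock-resync details that this skips.

        import hashlib
        import hmac
        import struct
        import time

        def token_code(secret: bytes, interval: int = 30, digits: int = 6) -> str:
            """Derive the code the token (and the server) would show right now."""
            counter = int(time.time()) // interval            # moving factor: the clock
            mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
            offset = mac[-1] & 0x0F                           # dynamic truncation
            value = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
            return str(value % 10 ** digits).zfill(digits)

        if __name__ == "__main__":
            shared = b"per-token secret provisioned by the vendor"  # hypothetical
            print("code on the token's display:", token_code(shared))

    The parent's point about not sharing a token's ID and secret with multiple vendors falls out directly: anyone who holds the secret can compute the same codes and pose as you, which is why a separate authentication service makes more sense than handing the secret around.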

  • No, you've gotten it all wrong. You wouldn't leave the money on the counter. There would be a ceramic piggy bank sitting on the counter to put the money into. But everybody would be ashamed to admit they'd even think about breaking into it.
  • You know, I'd have to pity armed guards that have to protect something at the bottom of the ocean... :-)
  • (snip)Has anyone any idea just how well the public understands the Hacker/Cracker difference? How much work do we have in this field? Perhaps we can harness the Net itself to find out. Maybe a poll in the right place, or a letter to everyone you know asking them to ask their family and neighbors to see if they understand the difference...We ought to get started.(/snip)

    Naw, just send 3 of your friends an e-mail to the tune of:

    "The cDc and 2600 have decided that the portion of the public that knows the correct definitions of the words 'hacker' and 'cracker' should be rewarded. They are also testing out a new e-mail tracking program, and have decided to use this to reward the people whom they feel deserve it. Define the words 'hacker' and 'cracker', and then forward this message along with your definition to all of your friends. If you're right, 2600 and the cDc will mail you $5000 plus $5 for every person you sent the letter to."

    Then read all the definitions in the letter when you eventually get it back.