Bringing OSS Into a Closed Source Organization?
Piranhaa writes "At the major corporation I work for, a single person currently decides which software is approved or rejected within the organization. I've noticed that user requests for open source Windows programs get denied, nearly instantaneously, on a regular basis. Everything from Gimp to Firefox, even Vim, fails to make the cut for the simple reason that it is open source. Closed source programs from unknown vendors have a much better chance of approval than Firefox does. The whole mentality here is that anybody can change the source of a project, submit it, and you never know what kind of compiled binary you're going to get. I'm a firm believer in open source code, but I also know closed source has its place. So what would be the best way for me to argue, with all the facts, so that these people come to their own conclusion that open source is actually good? Would presenting examples of other big companies moving to open source work, and if so, what are some good examples? Or can you suggest any other good approaches?"
Follow the Money (Score:5, Interesting)
Sounds like this person has a deeply vested interest. I would guess that the real problem with open-source software is that it's free (as in "beer"!), so there's no chance to cash in by playing favourites.
Find out where the kickbacks are coming from and blow the whistle.
Other concerns: OSS creep into commercial code (Score:5, Interesting)
While I was working for a former employer, we were engaged in negotiations with a very large company that would act as a distributor (to a certain market) of our products. In the distribution contract, said unnamed company wanted us to certify that "no open source software products were used in the development process, and that no OSS was present in the product".
Why?
Frankly, I understand the concern. If you are a development shop and OSS creeps into your product (a careless, thoughtless developer copy-pasting code, for instance), the legal ramifications may be grave. Depending on the license, you are potentially required to disclose the entire source of your product and to provide a usage/distribution license to whoever receives that code -- basically, a single minute action can sign away your rights to your software. Your distributors have also violated copyright and are in similar hot water (e.g. their efforts in promoting your product are now potentially worthless).
The result? Some companies are so afraid of this "poison pill" that they simply don't let any OSS in their gates. Does this promote OSS? Maybe. IIRC, some friends working for the dark side (M$) report that no OSS is allowed there (or in some parts thereof).
I use OSS extensively. The former company I worked for had a whole heap of OSS in its development process (but not in the developed chip/product). Actually, considering that a non-OSS company (Altera) used OSS in the supplied development chain (gcc, for instance) that we were using, there really was no conceivable way that the company I worked for could've signed off on the "no OSS" bit of the contract.
Re:Play the game or go to a higher authority (Score:5, Interesting)
If a company has a chief compliance officer, it is likely bound by regulation such as Sarbanes-Oxley, HIPAA, or something else. To keep the officers from going to prison, one of the things they need to demonstrate is "due diligence".
This means making sure that every product in the chain is certified by a vendor in some way. For example, operating systems must be FIPS and Common Criteria certified, encryption products must be listed among the US Government's certified AES libraries, and so on.
Yes, some open source products make this list. SUSE and Red Hat Enterprise Linux both have the certificates. However, not many open source solutions do, which is why businesses just go with a Microsoft stack for their applications.
For example, if a business running an MS stack suffers a serious data breach, it can point to the policies it has in place and show that it did due diligence by using commercial software everywhere, in certified configurations. It then will not have to worry about civil trouble like stockholder lawsuits, or criminal trouble like the SEC coming in with audit papers and handcuffs.
Unfortunately, should a similar breach happen at a company with an open source stack that can't really prove due diligence by showing that every piece of its IT puzzle was certified by someone (usually a US government agency)... well, it is facing a world of civil and criminal liability.
To be honest, the chance of getting open source software into an environment that has to be so heavily audited and regulated is almost zero. Commercial, closed source software does cost money, but part of that cost buys insurance and the ability to blame someone other than the company, its officers, and its staff should something bad happen.
Another legal reason businesses choose closed source solutions is patent indemnification. If a software company doesn't offer this protection to its customers, then should a patent violation turn up in the software, not just the software company but all of its customers can wind up being sued for obnoxious amounts of money, and possibly shut down. Again, Red Hat is one of the companies that offers this protection for an open source product, but few others do.
None of this is related in any way to the quality of open source programming. It's all security theater, but it's what keeps a company in business and its officers out of prison under US regulations.
Re:Play the game or go to a higher authority (Score:5, Interesting)
In my case it is the owner of the company where I work.
While I cannot speak for the personality of the OP's boss - mine is at least a very decent person.
So I walk into work and inherit an old Dell Latitude D600 running WinXP.
A month into the job I trash it and install Linux. I am now the only person in our company using Linux/OSS for everything I need to do.
I inherited a desktop PC that still runs XP - our control software is written in MS Access so I could not run that on Linux.
One day my boss remarked in a meeting, "You know, you need to be able to run Windows-dependent software on your laptop," which is his roundabout way of laying down a kind of challenge to me.
So I set up our proxy server to allow me to SSH in and rdesktop to my desktop when I am on standby. The other techs needed to make an offline backup of the control DB and then merge it with the "live" DB.
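A minimal sketch of that kind of setup, assuming the proxy accepts SSH from outside and can reach the desktop on the LAN (the host names, user name, and port here are placeholders, not the poster's actual configuration):

```shell
# Tunnel the desktop's RDP port (3389) through the SSH-reachable proxy.
# 'proxy.example.com' and 'xp-desktop' are placeholder names.
ssh -N -L 3389:xp-desktop:3389 tech@proxy.example.com &

# With the tunnel up, point rdesktop at the local end of it:
rdesktop localhost:3389
```

Since the Access application keeps running on the XP desktop, the "live" DB never has to leave it; the Linux laptop only ever displays the remote session.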
A week later in another meeting he reminded me to merge the database. "No need, I run the DB live"
So two months ago I was offered part ownership of the company and promoted to tech manager in the interim.
Sometimes you need to play on the ragged edge for a bit in order to get your point across.
I still run Linux on my laptop, and my whole tech team goes for weekly Linux training with our sister company, which is a Linux solutions provider.
Re:Don't bother (Score:5, Interesting)
Forgive me if I'm being stupid, but this is actually something I worry about. I'm a heavy user of open source, but surely it is true that "anybody can change the source of a project, submit it, and you never know what kind of compiled binary you're going to get" - isn't that kinda the point of open source? And we just hope that someone else notices if the changes are bad?
I know this sounds like I'm trolling, but I'm not - it's a serious question. How do you know you can trust open source projects? I've always assumed that large projects - particularly Linux distros and their package repositories - have some kind of QA and code audit system in place, but how do they work? Are a couple of naughty obfuscated lines really going to get caught?
Sure, many eyes on the source code and all that, and the same risk exists from employees at closed source organisations - the only differences being that it's easier to get to work on an open source project, and that if you get caught adding bad code, you don't lose your job.
This sort of thing is becoming an even bigger problem with the web in general; facebook apps, igoogle gadgets, even things like firefox and jquery plugins - the more I think about it, the more paranoid I become.
What processes are in place to protect users from malicious code?
Re:Play the game or go to a higher authority (Score:4, Interesting)
SOX has actually produced almost the opposite reaction: with OSS you can validate the code path, but with closed source software you cannot, and almost every vendor in existence has explicit language in their EULA stating that they are not responsible for basically anything related to any kind of "protection".
Re:Don't bother (Score:3, Interesting)
I know it's a cliche, but unless you actually audit the code (and don't miss something) you can't really trust it. The best that you can do is trust a group like the OpenBSD guys to perform code audits for you.
I didn't see anyone mention the infamous Debian SSL bug, so here's a link: Debian Bug Leaves Private SSL/SSH Keys Guessable [slashdot.org]. The gist of the story is that some random package maintainer was getting warnings about a memory region containing an uninitialized value in some OpenSSL code. Rather than actually looking at the code and trying to understand what was going on, the maintainer incorrectly assumed that their debugging/profiling tool was flagging an actual problem and simply initialized the region to 0. The problem was that this memory region was intended to be used as a source of entropy. To make matters worse, this bug went unnoticed for about two years.
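To see why zeroing that seed was catastrophic, here is a toy Python sketch (the `derive_key` function and its SHA-256 mixing are illustrative assumptions, not OpenSSL's actual code): once the seed is a known constant, the process ID is essentially the only varying input, and PIDs on Linux typically range over about 32,768 values, so the entire keyspace becomes enumerable.

```python
import hashlib

def derive_key(seed: bytes, pid: int) -> bytes:
    """Toy stand-in for key generation: hash the entropy seed plus the PID."""
    return hashlib.sha256(seed + pid.to_bytes(4, "big")).digest()

# The "fixed" code effectively made the seed a constant block of zeros,
# leaving the PID as the only source of variation between keys.
zeroed_seed = b"\x00" * 32
keyspace = {derive_key(zeroed_seed, pid) for pid in range(32768)}
print(len(keyspace))  # → 32768: every possible key, enumerable in seconds
```

An attacker who knows the derivation scheme can precompute all ~32k keys per architecture and key type, which is exactly how the Debian keys were brute-forced in practice.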
So, to answer your question, yes, the QA/audit process is probably broken; it's most likely geared towards testing application functionality versus testing for correctness. And no, two lines of incorrect code are probably not going to be noticed.
As far as real solutions go, I suppose it depends on your level of paranoia. Sure, you can use an OpenBSD-based firewall at home and limit your inbound/outbound traffic, but as soon as you connect to a remote service, you have to trust them as well. In "Secrets and Lies", Bruce Schneier comes to the conclusion that technical measures are simply not enough and that you have to manage software-related risks the same way you would manage risks to your home, automobile, or life: with insurance.