Big Data's Invisible Open Source Community
itwbennett writes "Hadoop, Hive, Lucene, and Solr are all open source projects, but if you were expecting the floors of the Strata Conference to be packed with intense, bootstrapping hackers you'd be sorely disappointed. Instead, says Brian Proffitt, 'community' where Big Data is concerned is 'acknowledged as a corporate resource', something companies need to contribute back to. 'There is no sense of the grass-roots, hacker-dominated communities that were so much a part of the Linux community's DNA,' says Proffitt."
Sorry (Score:4, Insightful)
My basem^H^H^H^H^H hacker cave simply doesn't have any room for a storage array on the order of a petabyte.
Re:Sorry (Score:4, Interesting)
Parent poster nailed it.
Try to get support from "the community" when you discover a bug in a code path that nobody except you encounters. Suddenly the community becomes very small indeed.
There just aren't that many geeks out there who handle petabyte datasets. Prove me wrong, dear reader.
Re: (Score:3, Insightful)
In general, you should always develop code on a tiny sample of the dataset. Once it's fully debugged and works correctly, then you apply it on your petabyte dataset.
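To make that concrete, here's a minimal sketch in Java (class name and the idea of a newline-delimited input file are mine, not from the parent post) that carves a small uniform sample out of a huge file with reservoir sampling, so you can iterate on the sample instead of the full dataset:

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;

    // Sketch: reservoir-sample K records from a huge newline-delimited file
    // so the develop/debug cycle runs on a tiny, representative subset.
    public class SampleDataset {
        public static void main(String[] args) throws IOException {
            final int k = 10_000;            // size of the development sample
            List<String> reservoir = new ArrayList<>(k);
            Random rng = new Random(42);     // fixed seed -> reproducible sample
            long seen = 0;
            try (BufferedReader in = new BufferedReader(new FileReader(args[0]))) {
                String line;
                while ((line = in.readLine()) != null) {
                    seen++;
                    if (reservoir.size() < k) {
                        reservoir.add(line);
                    } else {
                        // replace a random slot with probability k/seen (Algorithm R);
                        // the double-based index is plenty uniform for a sketch
                        long j = (long) (rng.nextDouble() * seen);
                        if (j < k) reservoir.set((int) j, line);
                    }
                }
            }
            reservoir.forEach(System.out::println); // redirect to a sample file
        }
    }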
overflow and "working correctly" (Score:3)
For a trivial example of my point, try using 32-bit signed integers to calculate the Collatz iteration of 113,383.
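If you want to watch it blow up, here's a minimal sketch in Java (class name is made up); Math.multiplyExact and Math.addExact throw an ArithmeticException the moment a 3n+1 step no longer fits in a 32-bit signed int, which the parent says happens for 113,383:

    // Sketch: iterate the Collatz map on 32-bit signed ints and let the
    // exact-arithmetic helpers throw when a value exceeds 2^31 - 1.
    public class CollatzOverflow {
        public static void main(String[] args) {
            int n = 113383;
            long steps = 0;
            try {
                while (n != 1) {
                    n = (n % 2 == 0) ? n / 2
                                     : Math.addExact(Math.multiplyExact(3, n), 1);
                    steps++;
                }
                System.out.println("Reached 1 in " + steps + " steps");
            } catch (ArithmeticException e) {
                // n still holds the last value that fit in 32 bits
                System.out.println("32-bit overflow after " + steps + " steps, at n = " + n);
            }
        }
    }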
Re: (Score:3)
Incidentally, people who don't know about computer architecture wouldn't be aware of overflows, so they wouldn't know to check for these conditions. Something about semi-educated programmers and their ability to debug code?
Re: (Score:2)
True, but again, this overflow will show up much sooner in a smaller setting, say when the algorithm is compiled with 16-bit or even 8-bit integer variables. You haven't shown that 2^64 is an inherent lower bound for the appearance of the overflow bug.
I picked 2^64 only because I'm currently using an AMD64X2. It will vary from one architecture to another, anyway, unless the code uses types with explicit bit-widths, like "uint64" or "float80". The point is, know the hardware and software specs, and their accompanying limitations, and make sure you don't exceed them.
Incidentally, people who don't know about computer architecture wouldn't be aware of overflows, so they wouldn't know to check for these conditions. Something about semi-educated programmers and their ability to debug code?
More like their ability to develop quality code to begin with. I seriously doubt they would get hired, or their software used, by the Big Data companies described in the article.
Re: (Score:2)
I grok this discussion as an exchange between a user who is experiencing a real problem and needs help with it but is unable to find useful answers, and a programmer who is patiently trying to explain to the user that they are somehow asking the wrong questions, while insinuating that the user
Re: (Score:3)
Well, you really shouldn't be debugging code on petabyte datasets to begin with. If there's a bug that shows, there's a minimal dataset on which the bug shows, and that's the dataset you can ask help with.
In general, you should always develop code on a tiny sample of the dataset. Once it's fully debugged and works correctly, then you apply it on your petabyte dataset.
Some bugs and issues don't show up until you get to a certain scale. Consider race conditions that occur only rarely: unless you hit a certain scale, you may never see them. To give another pertinent example, consider something that corrupts one byte in a PB (maybe it's a very infrequent condition or something); until your dataset grows to multiple PB, you may not even see it. Or consider the issue that occurs on RAID arrays where you get a second drive failure when rebuilding an array after a drive has
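As a toy illustration of a scale-only bug (thread and iteration counts are made up), an unsynchronized shared counter in Java: with a small workload the lost updates are easy to miss, but crank the iteration count up and the race shows on nearly every run:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    // Sketch: a racy read-modify-write on a shared counter. At small scale the
    // threads rarely interleave enough to lose updates; at large scale they do.
    public class ScaleOnlyBug {
        static long counter = 0;                 // not volatile, not synchronized

        public static void main(String[] args) throws InterruptedException {
            final int threads = 8;
            final long perThread = 10_000_000L;  // shrink this and the "test" passes
            ExecutorService pool = Executors.newFixedThreadPool(threads);
            for (int t = 0; t < threads; t++) {
                pool.submit(() -> {
                    for (long i = 0; i < perThread; i++) counter++;  // racy increment
                });
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.MINUTES);
            System.out.println("expected " + (threads * perThread) + ", got " + counter);
        }
    }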
Re: (Score:2)
Try to get support from "the community" when you discover a bug in a code path that nobody except you encounters. Suddenly the community becomes very small indeed.
I disagree. If you know how to identify the bug properly and can suggest how to solve it, and show that you did a little research and aren't just a) totally lazy, b) incompetent, or c) whining that it doesn't solve all your problems out of the box without understanding it, then you will often find the folks helpful. The open source community isn't any different from, say, the folks who support the software in your office. If you start talking to a tech with "I can't send email, can you fix my windows?" yo
Re: (Score:2, Interesting)
http://adequacy.org/stories/2001.10.2.33542.4010.html [adequacy.org]
Re: (Score:1)
Re: (Score:2)
Everything was lost here:
Why won't my fucking Linux computer print?
The rest could have been easily avoided by doing a kick/ban at that point.
Re: (Score:1)
Trolololol!
Re: (Score:1)
My point about the community becoming small was not that you get shut out because you don't know how to ask questions politely or properly, but because you genuinely are encountering behaviour so rare that almost nobody in the mainstream community knows how to help you.
For what it's worth, I do my research, including reading the source and using gdb to interrupt running processes.
Re: (Score:3)
Well yes, that is primarily how you do it.
Big Data work is much closer to academic research than it is to casual software development work. As are ML and the like.
It is quite obvious that at higher strata of specialization there are fewer specialists. Ask any scientist seriously involved in research where he finds community specialists to discuss various bugs. The fact is that they don't. They mostly go around asking for opinions and fix the bug themselves (which usually includes writing some documentation
How small is your basement? (Score:4, Informative)
Internet Archive's last published generation Petabox [archive.org] (now more than a year old, so they were using smaller drives) would take two racks ... which is still reasonable, but you could probably fit it in a single rack with today's drives. A Backblaze Pod [backblaze.com] is 45 disks in 4U, so you could do it yourself and, assuming you can get enough large disks after that whole flooding thing, get a PB in a single rack easily. The Sun Thumper took 48 disks in 4U ... I don't know if the X4540 ever supported larger than 1TB disks, though.
My department just got a Nexsan E60 in yesterday ... 60 3TB disks in 4U, so you can squeeze 1.8PB raw in a 42U rack. (usable space ... still more than a PB, even with spares.)
So, space isn't the issue ... power and cooling may be, though.
So... I read the article... (Score:5, Interesting)
And I have to ask...
What was the point of the article? That the trade show is like every trade show ever?
Really, I'll write a report the next time I go to EASTEC and whine about the lack of "Makers" (in the geek culture sense of the word) among the vendors of Big Machinery.
--
BMO
Some open source advocates... (Score:4, Insightful)
... must face the fact that lots of code is boring to maintain and update. Not to mention that unless you are independently wealthy, contributing to open source is a drain on one's time and resources. No one should really be concerned that many corporations see value in open source; it's like seeing value in roads or sewers. There is much code that, just like roads and sewers, would be hard to maintain on a volunteer basis.
Scratching Itches (Score:2, Interesting)
A big part of the grass-roots movement that Linux and other open-source projects benefit from comes about because hackers (in the good sense) contribute to software that they themselves want or need. There probably aren't many programmers who want (or can afford) to store and analyze petabytes of data in their free time. That's important to corporations, though, so I suspect that's why you see primarily corporate interests in open-source Big Data projects.
So, in other words... (Score:3)
A very simple explanation (Score:5, Insightful)
"There is no sense of the grass-roots, hacker-dominated communities that were so much a part of the Linux community's DNA"
This is for one simple reason: most hackers don't need "BigData".
Perhaps if the typical hacker had a cluster of servers to play with, this would change. But as long as most hackers are bound to using a single personal computer, they're just not going to be very concerned with clusterware.
They're also not concerned with plenty of other things that are essential to big corporations, like payroll software and CRM (customer relationship management) software.
Re: (Score:1)
That's generally true, but some of the cluster management software out there installs in pretty low end environments.
The Apache Incubator Tashi project for example allows for fast startup of VMs. These can be used to run a virtual cluster for a specific purpose, at the end of which the instances can be thrown away. This saves on having one-off installs polluting your main machine.
I had it provide VMs inside a single VMware Fusion instance, as well as run a real cluster with >100 large nodes and many diff
Re: (Score:1)
There's a lot of startup activity in the big data area, along with job opportunities for software engineers. But it seems that the majority of it is about mining behavioral trends in consumer activity and enabling targeted ads and other personalized online experiences. It's a little bit creepy.
OTOH I'm sure hadoop and friends would be very useful for the LHC and other big science projects, but they are mostly taxpayer funded and are fighting to keep the dollars they're getting, not looking for new wa
Re:A very simple explanation (Score:4, Insightful)
OTOH I'm sure hadoop and friends would be very useful for the LHC and other big science projects, but they are mostly taxpayer funded and are fighting to keep the dollars they're getting, not looking for new ways to spend it.
HDFS is already used by CMS (one of the detectors at the LHC) to store and manage distributed filesystems at various regional centers. After all, when you are generating multiple petabytes each year and need to process it and keep various subsets of it around for analysis by various groups, you need filesystems that can handle multiple PB of files. And yes, I believe patches are being fed upstream as necessary. Other filesystems being used in the US include lustre, dcache, and xrootdfs.
Although funding is an issue, continuing to run and analyze data from the LHC means that money needs to be spent to buy more storage and servers as needed and to pay people to develop and maintain the systems needed to distribute and analyze all the data being generated. Having multiple PB of particle collision data is useless if you can't analyze it and look for interesting events.
Re:A very simple explanation (Score:5, Informative)
"Most hackers" don't need a lot of things that are, never-the-less developed as successful open source projects. Anybody think there's a huge audience for DReaM?
Storage is getting big... Even a tiny shop can afford obscene amounts of storage. Each 2U server can have 6 x 2TB SATA (3.5") drives pretty inexpensively. As soon as you've got a dataset that needs more space than you can store on one of those, you'd benefit from these "big data" solutions, rather than the standby (more expensive) solution of "throw in a monster SAN".
And you don't even need that much infrastructure. The virtual servers (cloud) service providers aren't very expensive, particularly when you don't care about SLA, and will give you as big of a cluster "to play with" as you could want.
Where's the big data (Score:1)
Given an individual can get their hands on storage and clusters ... Where is the interesting data?
Where is PB-sized data of interest to a hacker that they can download?
Where's the fun payoff ?
Re: (Score:2)
Google's "big data" is just web pages. Start a spider, feed the output to Solr, and see if you can beat Google at web search.
Another reason: JAVA (Score:1)
Those programs named are all written in Java, which is of more interest to corporate programmers than to hackers.
Data is big (Score:1)
Really Really big
You just won't believe how vastly hugely mindbogglingly big it is.
Re: (Score:2)
It's even bigger.
Wrong Conference (Score:2)
I really hate the reporting around Hadoop. Most of these people have absolutely no clue what they are talking about, and this article is just another example of that. Any bit of simple research would have revealed that the actual open source community of developers around Hadoop, Hive, Solr, etc, can be found at ApacheCon. Of course Strata is amazingly commercial: O'Reilly, being a corporate entity, is trying to make cash around the latest craze. If they weren't, they'd make sure the ASF and the other O
The Cloud (Score:2)
Do you want Big Data to take off? (Score:2)
Do you want Big Data solutions to appeal to the masses? For open source hackers to tackle petabyte-size problems? Hundreds or thousands of possible solutions for each variation of a problem, like what is found on SourceForge?
It's dead simple.
Rename the problem to Big Porn and create a couple of frameworks as examples. The technology will just take right off.