Open Source News

Big Data's Invisible Open Source Community 49

itwbennett writes "Hadoop, Hive, Lucene, and Solr are all open source projects, but if you were expecting the floors of the Strata Conference to be packed with intense, bootstrapping hackers, you'd be sorely disappointed. Instead, says Brian Proffitt, 'community' where Big Data is concerned is 'acknowledged as a corporate resource', something companies need to contribute back to. 'There is no sense of the grass-roots, hacker-dominated communities that were so much a part of the Linux community's DNA,' says Proffitt."
This discussion has been archived. No new comments can be posted.

  • Sorry (Score:4, Insightful)

    by discord5 ( 798235 ) on Thursday March 01, 2012 @08:03PM (#39216311)

    My basem^H^H^H^H^H hacker cave simply doesn't have any room for a storage array in the PB order.

    • Re:Sorry (Score:4, Interesting)

      by Anonymous Coward on Thursday March 01, 2012 @08:14PM (#39216371)

      Parent poster nailed it.
      Try to get support from "the community" when you discover a bug in a code path that nobody except you encounters. Suddenly the community becomes very small indeed.
      There just aren't that many geeks out there who handle petabyte datasets. Prove me wrong, dear reader.

      • Re: (Score:3, Insightful)

        Well, you really shouldn't be debugging code on petabyte datasets to begin with. If there's a bug that shows, there's a minimal dataset on which the bug shows, and that's the dataset you can ask for help with.

        In general, you should always develop code on a tiny sample of the dataset. Once it's fully debugged and works correctly, then you apply it on your petabyte dataset.

          If it isn't working correctly on a petabyte dataset, then it isn't "working correctly", period, no matter how well-hidden the bugs are with gigabyte and terabyte datasets. An unhandled overflow error that doesn't manifest until you exceed 2^64 is still an unhandled overflow error.

          For a trivial example of my point, try using 32-bit signed integers to calculate the Collatz iteration of 113,383.
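
          A minimal Java sketch of that example (an editorial illustration, not part of the original comment): the same trajectory is walked with long and with int, and the int run silently wraps negative once 3n+1 passes Integer.MAX_VALUE. For this particular start value the first overflow lands below 2^32, so a simple sign check is enough to spot it.

          // Collatz trajectory of 113,383: the 64-bit run records the true peak,
          // which exceeds 2^31 - 1; the 32-bit run wraps negative at that step.
          public class CollatzOverflow {
              public static void main(String[] args) {
                  long n64 = 113_383L, peak = n64;
                  while (n64 != 1) {
                      n64 = (n64 % 2 == 0) ? n64 / 2 : 3 * n64 + 1;
                      peak = Math.max(peak, n64);
                  }
                  System.out.println("64-bit peak: " + peak + " (> Integer.MAX_VALUE)");

                  int n32 = 113_383;
                  while (n32 != 1) {
                      n32 = (n32 % 2 == 0) ? n32 / 2 : 3 * n32 + 1;
                      if (n32 < 0) { // wrapped past Integer.MAX_VALUE
                          System.out.println("32-bit run wrapped negative: " + n32);
                          break;
                      }
                  }
              }
          }
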
          • True, but again, this overflow will show up much sooner in a smaller setting, say when the algorithm is compiled with 16-bit or even 8-bit integer variables. You haven't shown that 2^64 is an inherent lower bound for the appearance of the overflow bug.

            Incidentally, people who don't know about computer architecture wouldn't be aware of overflows, so wouldn't know to check these conditions. Something about semi-educated programmers and their ability to debug code?

            • True, but again, this overflow will show up much sooner in a smaller setting, say when the algorithm is compiled with 16-bit or even 8-bit integer variables. You haven't shown that 2^64 is an inherent lower bound for the appearance of the overflow bug.

              I picked 2^64 only because I'm currently using an AMD64X2. It will vary from one architecture to another, anyway, unless the code uses types with explicit bit-widths, like "uint64" or "float80". The point is, know the hardware and software specs, and their accompanying limitations, and make sure you don't exceed them.

              Incidentally, people who don't know about computer architecture wouldn't be aware of overflows, so wouldn't know to check these conditions. Something about semi-educated programmers and their ability to debug code?

              More like their ability to develop quality code to begin with. I seriously doubt they would get hired, or their software used, by the Big Data shops described in the article.
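
              As an aside (a hedged sketch, not something either poster wrote): one way to "make sure you don't exceed them" in Java is checked arithmetic, so hitting the declared width becomes a loud error rather than silent wraparound. Math.multiplyExact and Math.addExact throw ArithmeticException on overflow.

              // Checked Collatz step: overflow raises an exception rather than wrapping,
              // so the 64-bit limit is enforced instead of assumed.
              public class CheckedStep {
                  static long step(long n) {
                      return (n % 2 == 0) ? n / 2
                                          : Math.addExact(Math.multiplyExact(3L, n), 1L);
                  }

                  public static void main(String[] args) {
                      long n = 113_383L;
                      try {
                          while (n != 1) n = step(n);
                          System.out.println("reached 1 without exceeding a 64-bit long");
                      } catch (ArithmeticException e) {
                          System.out.println("overflowed long; fall back to BigInteger here");
                      }
                  }
              }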

            • by adolf ( 21054 )

              Incidentally, people who don't know about computer architecture wouldn't be aware of overflows, so wouldn't know to check these conditions. Something about semi-educated programmers and their ability to debug code?

              I grok this discussion as an exchange between a user who is experiencing a real problem and needs help with it but is unable to find useful answers, and a programmer who is patiently trying to explain to the user that they are somehow asking the wrong questions, while insinuating that the user

        • by scheme ( 19778 )

          Well, you really shouldn't be debugging code on petabyte datasets to begin with. If there's a bug that shows, there's a minimal dataset on which the bug shows, and that's the dataset you can ask for help with.

          In general, you should always develop code on a tiny sample of the dataset. Once it's fully debugged and works correctly, then you apply it on your petabyte dataset.

          Some bugs and issues don't show up until you get to a certain scale. Consider race conditions that only occur so often; unless you hit a certain scale you may never see them. To give another pertinent example, consider something that corrupts one byte in a PB (maybe it's a very infrequent condition or something); until your dataset grows to multiple PB, you may not even see it. Or consider the issue that occurs on raid arrays where you get a second drive failure when rebuilding an array after a drive has

      • Try to get support from "the community" when you discover a bug in a code path that nobody except you encounters. Suddenly the community becomes very small indeed.

        I disagree. If you know how to identify the bug properly and present a solution on how to solve it, show that you did a little research and aren't just a) totally lazy, b) incompetent, or c) whining that it doesn't solve all your problems out of the box without understanding it, then you will often find the folks helpful. The open source community aren't any different from, say, the folks that support the software in your office. If you start talking to a tech with "I can't send email, can you fix my windows?" yo

        • Re: (Score:2, Interesting)

          by Hal_Porter ( 817932 )

          http://adequacy.org/stories/2001.10.2.33542.4010.html [adequacy.org]

          The Linux Fault Threshold is the point in any conversation about Linux at which your interlocutor stops talking about how your problem might be solved under Linux and starts talking about how it isn't Linux's fault that your problem cannot be solved under Linux. Half the time, the LFT is reached because there is genuinely no solution (or no solution has been developed yet), while half the time, the LFT is reached because your apologist has floundered wa

          • by Anonymous Coward
            I don't see where the problem is. The solution for jsm's problem was pretty clear from the start. All he had to do was type in the binary driver using gestures from his infrared mouse. All that swearing seemed uncalled for.
          • by amorsen ( 7485 )

            Everything was lost here:

            Why won't my fucking Linux computer print?

            The rest could have been easily avoided by doing a kick/ban at that point.

        • by Anonymous Coward

          My point about the community becoming small was not that you get shut out because you don't know how to ask questions politely or properly, but because you genuinely are encountering behaviour so rare that almost nobody in the mainstream community knows how to help you.
          For what it's worth, I do my research, including reading the source and using gdb to interrupt running processes.

          • Well yes, that is primarily how you do it.

            Big data work is much closer to academic research than it is to casual software development, as are ML and the like. It's quite obvious that at the higher strata of specialization there are fewer specialists. Ask any scientist seriously involved in research where they find community specialists to discuss various bugs with. The fact is that they don't. They mostly go around asking for opinions and fix the bug themselves (which usually includes writing some documentation

    • by oneiros27 ( 46144 ) on Thursday March 01, 2012 @09:57PM (#39216873) Homepage

      Internet Archive's last published generation Petabox [archive.org] (now more than a year old, so they were using smaller drives) would take two racks ... which is still reasonable, but you could probably fit it in a single rack with today's drives. A Backblaze Pod [backblaze.com] is 42 disks in 4U, so you could do it yourself and, assuming you can get enough large disks after that whole flooding thing, be able to get a PB in a single rack easily. The Sun Thumper took 48 disks in 4U ... I don't know if the X4540 ever supported larger than 1TB disks, though.

      My department just got a Nexsan E60 in yesterday ... 60 3TB disks in 4U, so you can squeeze 1.8PB raw in a 42U rack. (usable space ... still more than a PB, even with spares.)
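
      (An editorial back-of-the-envelope check of those figures, taking the numbers in the comment at face value rather than from any vendor spec sheet:)

      // 42U rack filled with 4U enclosures of 60 x 3 TB drives.
      public class RackMath {
          public static void main(String[] args) {
              int enclosures = 42 / 4;              // 10 enclosures per rack
              int rawTb = enclosures * 60 * 3;      // 1800 TB raw
              System.out.printf("%d enclosures, %d TB (~%.1f PB) raw per rack%n",
                      enclosures, rawTb, rawTb / 1000.0);
          }
      }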

      So, space isn't the issue ... power and cooling may be, though.

  • by bmo ( 77928 ) on Thursday March 01, 2012 @08:13PM (#39216363)

    And I have to ask...

    What was the point of the article? That the trade show is like every trade show ever?

    Really, I'll write a report the next time I go to EASTEC and whine about the lack of "Makers" (in the geek culture sense of the word) among the vendors of Big Machinery.

    --
    BMO

  • by blahplusplus ( 757119 ) on Thursday March 01, 2012 @08:13PM (#39216365)

    ... must face the fact that lots of code is boring to maintain and update. Not to mention that unless you are independently wealthy, contributing to open source is a drain on one's time and resources. No one should really be concerned that many corporations see value in open source; it's like seeing value in roads or sewers. There is much code that is just like roads and sewers, which would be hard to maintain on a volunteer basis.

  • Scratching Itches (Score:2, Interesting)

    by Anonymous Coward

    A big part of the grass-roots movement that Linux and other open-source projects benefit from comes about because hackers (in the good sense) contribute to software that they themselves want or need. There probably aren't many programmers that want (or can afford) to store and analyze petabytes of data in their free time. That's important to corporations, though, so I suspect that's why you see primarily corporate interests in open-source Big Data projects.

  • by king neckbeard ( 1801738 ) on Thursday March 01, 2012 @08:30PM (#39216447)
    It's pretty much a purely open source community instead of a free software community.
  • by Anonymous Coward on Thursday March 01, 2012 @08:46PM (#39216505)

    "There is no sense of the grass-roots, hacker-dominated communities that were so much a part of the Linux community's DNA"

    This is for one simple reason: most hackers don't need "BigData".

    Perhaps if the typical hacker had a cluster of servers to play with, this would change. But as long as most hackers are bound to using a single personal computer, they're just not going to be very concerned with clusterware.

    They're also not concerned with plenty of other things that are essential to big corporations, like payroll software and CRM (customer relationship management) software.

    • by Anonymous Coward

      That's generally true, but some of the cluster management software out there installs in pretty low end environments.

      The Apache Incubator Tashi project for example allows for fast startup of VMs. These can be used to run a virtual cluster for a specific purpose, at the end of which the instances can be thrown away. This saves on having one-off installs polluting your main machine.

      I had it provide VMs inside a single VMware Fusion instance, as well as run a real cluster with >100 large nodes and many diff

    • by Anonymous Coward

      There's a lot of startup activity in the big data area, along with job opportunities for software engineers. But it seems that the majority of it is about mining behavioral trends in consumer activity and enabling targeted ads and other personalized online experiences. It's a little bit creepy.

      OTOH I'm sure hadoop and friends would be very useful for the LHC and other big science projects, but they are mostly taxpayer funded and are fighting to keep the dollars they're getting, not looking for new ways to spend it.

      • by scheme ( 19778 ) on Thursday March 01, 2012 @11:06PM (#39217247)

        OTOH I'm sure hadoop and friends would be very useful for the LHC and other big science projects, but they are mostly taxpayer funded and are fighting to keep the dollars they're getting, not looking for new ways to spend it.

        HDFS is already used by CMS (one of the detectors at the LHC) to store and manage distributed filesystems at various regional centers. After all, when you are generating multiple petabytes each year and need to process it and keep various subsets of it around for analysis by various groups, you need filesystems that can handle multiple PB of files. And yes, I believe patches are being fed upstream as necessary. Other filesystems being used in the US include lustre, dcache, and xrootdfs.

        Although funding is an issue, continuing to run and analyze data from the LHC means that money needs to be spent to buy more storage and servers as needed and to pay people to develop and maintain the systems needed to distribute and analyze all the data being generated. Having multiple PB of particle collision data is useless if you can't analyze it and look for interesting events.

    • by evilviper ( 135110 ) on Thursday March 01, 2012 @09:58PM (#39216875) Journal

      This is for one simple reason: most hackers don't need "BigData".

      Perhaps if the typical hacker had a cluster of servers to play with, this would change.

      "Most hackers" don't need a lot of things that are, never-the-less developed as successful open source projects. Anybody think there's a huge audience for DReaM?

      Storage is getting big... Even a tiny shop can afford obscene amounts of storage. Each 2U server can have 6 x 2TB SATA (3.5") drives pretty inexpensively. As soon as you've got a dataset that needs more space than you can store on one of those, you'd benefit from these "big data" solutions, rather than the standby (more expensive) solution of "throw in a monster SAN".

      And you don't even need that much infrastructure. The virtual server (cloud) providers aren't very expensive, particularly when you don't care about an SLA, and will give you as big a cluster "to play with" as you could want.

      • by Anonymous Coward

        Given an individual can get their hands on storage and clusters ... Where is the interesting data?
        Where is PB sized data of interest to a hacker they can download?
        Where's the fun payoff?

        • Google's "big data" is just web pages. Start a spider, feed the output to Solr, and see if you can beat Google at web search.
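
          A minimal SolrJ sketch of the "feed the output to Solr" step, under stated assumptions: a local Solr instance, a core named "webpages", and id/url/content fields in its schema (all illustrative names; the spider itself is out of scope).

          import org.apache.solr.client.solrj.SolrClient;
          import org.apache.solr.client.solrj.impl.HttpSolrClient;
          import org.apache.solr.common.SolrInputDocument;

          public class IndexPage {
              public static void main(String[] args) throws Exception {
                  // Hypothetical core "webpages"; field names depend on your schema.
                  SolrClient solr = new HttpSolrClient.Builder(
                          "http://localhost:8983/solr/webpages").build();

                  SolrInputDocument doc = new SolrInputDocument();
                  doc.addField("id", "http://example.org/");
                  doc.addField("url", "http://example.org/");
                  doc.addField("content", "text the spider extracted from the page");

                  solr.add(doc);     // queue the crawled page
                  solr.commit();     // make it searchable
                  solr.close();
              }
          }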

    • by Anonymous Coward

      Those programs named are all written in Java, which is of more interest to corporate programmers than to hackers.

  • I really hate the reporting around Hadoop. Most of these people have absolutely no clue what they are talking about, and this article is just another example of that. Any bit of simple research would have revealed that the actual open source community of developers around Hadoop, Hive, Solr, etc, can be found at ApacheCon. Of course Strata is amazingly commercial: O'Reilly, being a corporate entity, is trying to make cash around the latest craze. If they weren't, they'd make sure the ASF and the other O

  • Sure, most hackers don't have a personal cluster at their disposal to really test the limits of their BigData, web-scale and - insert buzzword here - deployment. There are, however, some free 'cloud' alternatives (PaaS) (OpenShift by Red Hat, for example: http://openshift.redhat.com/ [redhat.com]) that give you the opportunity to play around a bit.
  • Do you want Big Data solutions to appeal to the masses? For open source hackers to tackle petabyte-size problems? Hundreds or thousands of possible solutions for each variation of a problem, like what is found on SourceForge?

    It's dead simple.

    Rename the problem to Big Porn and create a couple of frameworks as examples. The technology will just take right off.
