
A Note On Thursday's Downtime

If you were browsing the site on Thursday, you may have noticed that we went static for a big chunk of the day. A few of you asked what the deal was, so here's a quick follow-up. The short version is that a storage fault led to significant filesystem corruption, and we had to restore a massive amount of data from backups. There's a post at the SourceForge blog going into a bit more detail, and describing the steps our Siteops team took (and is still taking) to restore service. (Slashdot and SourceForge share a corporate overlord, as well as a fair bit of infrastructure.)
This discussion has been archived. No new comments can be posted.


  • oh okay (Score:5, Insightful)

    by Anonymous Coward on Saturday July 18, 2015 @11:30PM (#50138189)

    oh, I thought some of that shitware they sling got loose and bit them in the ass

  • by nickweller ( 4108905 ) on Saturday July 18, 2015 @11:41PM (#50138233)
    SourceForge is a badware risk: http://i.imgur.com/Hhtgv0H.png [imgur.com]
    • Re: (Score:2, Informative)

      by Anonymous Coward

      SourceForge used to be great but has been serving crapware for a couple of years. You'd have to be off your rocker to use it if you have any choice, either as an author or an end user.

      https://forum.filezilla-project.org/viewtopic.php?t=30240&start=90
      http://www.theregister.co.uk/2015/06/03/sourceforge_to_offer_only_optin_adware_after_gimp_grump/

  • by the_humeister ( 922869 ) on Saturday July 18, 2015 @11:49PM (#50138259)

    Like Unicode support and IPv6.

  • by arglebargle_xiv ( 2212710 ) on Saturday July 18, 2015 @11:51PM (#50138275)
    Could have been far worse...
    • by Anonymous Coward

      Beta is coming. ...basically the meeting went like this:

      "what do you mean they didn't like the change to beta?"

      "fools, they don't know what's good for them"

      "I know, the idiot users are like frogs. We can boil them slowly. Let's start making all of the beta changes gradually over 6-12 months."

      "Genius! That'll show them, they won't even notice that we've changed anything at all"

      "Raises all round?"

      "Sounds good to me chaps!" ...etc...etc... beta is coming whether you like it or not.

    • by Tablizer ( 95088 )

      If it were Beta, we wouldn't know the difference.

  • by cold fjord ( 826450 ) on Saturday July 18, 2015 @11:57PM (#50138295)

    All right! Nobody moves, or the storage gets it! .... Help me! Help me! .... Shut down! ..... Won't somebody help that bad drive?!

    The reboot is near.

  • by cold fjord ( 826450 ) on Sunday July 19, 2015 @12:04AM (#50138323)

    I clicked on a "firehose" link and the most recent story was "YouTube's ready to select a winner" from March 2013.

    But the "help us select the next story" link was ok, as was directly entering Slashdot.org/recent.

    Good luck with the restore / clean up / troubleshooting. That's not a fun way to spend a weekend.

  • by decaffeinated ( 70626 ) on Sunday July 19, 2015 @12:04AM (#50138325)
    Serious question: Just out of curiosity, who pays the bills for all of the infrastructure that keeps Sourceforge running?

    Hardware isn't free and employees aren't free. I seriously don't understand how Sourceforge has kept the lights on all these years.

    And by the way, I'm a very satisfied user of their services. But I do worry about their future.

  • Thank you. (Score:5, Insightful)

    by Etherwalk ( 681268 ) on Sunday July 19, 2015 @12:31AM (#50138401)

    Thank you to the Slashdot team. Bringing systems back up like that is emergency-mode-fun, but a lot of work, and we appreciate it.

    • by mlts ( 1038732 )

      Have to agree here. A lot of people appreciate /. being up and going.

      One can armchair-quarterback about how the corruption wouldn't have happened with this filesystem or that SAN, but corruption and problems happen no matter what the platform.

    • Amen. I've been visiting this site since around user ID 110,000 or so, and I've never actually experienced a full blackout. A static version every now and then, that's all.

  • It was fast as hell!

    here [youtube.com]

  • And right before the Pluto flyby.

    Seriously, though, imagine the thoughts going through the minds at NASA when the probe crapped out a week before the big encounter. Their toilets must have been full of bricks.

    It's not like rover problems where you can continue where you left off after you fix it. New Horizons couldn't stop.

  • Comment removed (Score:5, Interesting)

    by account_deleted ( 4530225 ) on Sunday July 19, 2015 @01:56AM (#50138603)
    Comment removed based on user account deletion
    • by swb ( 14022 )

      The blog post was pretty content-free about what exactly went wrong.

      I would have guessed they'd have the ability to restore either a storage snapshot (to get back an entire LUN) or a VM from a VM-based backup, and maybe they did.

  • by darkain ( 749283 )

    Serious question: how much of this could have been prevented, or recovered from much more quickly, if they had been using ZFS with proper parity, checksumming, snapshotting, and send/receive backups? It really is the one-size-fits-all storage solution at this point. (A minimal snapshot/send sketch follows this thread.)

    • Kinda depends on the failure. If your RAID controller decides to die in a spasmodic on-off-on-off way, you can easily corrupt all your filesystems in one go, ZFS or otherwise. At that point, if you didn't have redundant live storage pools, it gets harder.

      Or of course there is the issue where someone does something stupid, like deleting files from live machines without thinking about what they are.

      • by Anonymous Coward

        Do you have any idea how ZFS works? Since ZFS is copy-on-write, you cannot corrupt already written data, unless your controller writes completely unrelated blocks or some crazy shit like that which I've personally never seen before.

        Also, a good setup separates the redundancy domains onto separate hardware: if you run RAID10, for example, no two disks of a mirror live on the same controller.

        Deleting files is trivially defeated by regular snapshots.

        The best thing about ZFS: you always know the state of your data.

          Given that I have seen a sysadmin delete the backups to free up space, you cannot always handle stupidity.

          And seriously? You cannot corrupt already-written data? WTF. ZFS has a whole system built in to periodically check whether data has been corrupted once on the disk. It's called scrub. Do you think they would have gone to a huge load of effort if no on-disk corruption ever happened?!?!?

          ZFS is very good at ensuring that there has been no "in transit" corruption by doing a CRC check of the written file before removi

          • I say this as someone who runs ZFS on his backup/file server: if you do have to restore or resilver, it can take a long while! A single slow drive in a vdev will limit the entire pool's IO (the extent of which is entirely dependent on topology, but the weakest link always crushes you in ZFS). After a handful of TB of data, even with a pool of mirrored vdevs and a flash cache device, the resilver for a single drive can take a day unless you've got some serious spindle count at high RPMs. Even SAS drives d
            • It gets orders of magnitude worse if you have two vdevs joined together in a single pool. I have 5 x 1.5G and 5 x 2G drives in a joint pool and I lost a 1.5. The resilvering process took days.
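Since several comments in this thread lean on ZFS snapshots and send/receive, here is a minimal sketch of the rolling snapshot-and-prune routine being described, wrapping the standard zfs CLI from Python. The dataset name (tank/data), the retention count, and the backup path are hypothetical placeholders, and this assumes the zfs tool is installed and the script runs with sufficient privileges:

    #!/usr/bin/env python3
    # Minimal sketch of a rolling ZFS snapshot/backup routine.
    # Assumptions: the standard `zfs` CLI is on PATH, the script has
    # privileges to manage the pool, and `tank/data` plus the backup
    # path are hypothetical placeholders.
    import datetime
    import subprocess

    DATASET = "tank/data"   # hypothetical dataset
    KEEP = 7                # snapshots to retain

    def zfs(*args):
        """Run a zfs subcommand and return its stdout."""
        return subprocess.run(["zfs", *args], check=True,
                              capture_output=True, text=True).stdout

    def snapshot():
        """Create a timestamped snapshot, e.g. tank/data@auto-20150719-0300."""
        stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M")
        name = "{}@auto-{}".format(DATASET, stamp)
        zfs("snapshot", name)
        return name

    def prune():
        """Destroy all but the newest KEEP auto- snapshots."""
        out = zfs("list", "-H", "-t", "snapshot", "-o", "name",
                  "-s", "creation", "-d", "1", DATASET)
        snaps = [s for s in out.splitlines() if "@auto-" in s]
        for old in snaps[:-KEEP]:
            zfs("destroy", old)

    def send_incremental(prev, curr, target="/backup/data.zfs"):
        """Stream the delta between two snapshots to a file; in practice
        you would pipe `zfs send` into `zfs receive` on another box."""
        with open(target, "wb") as fh:
            subprocess.run(["zfs", "send", "-i", prev, curr],
                           check=True, stdout=fh)

    if __name__ == "__main__":
        snapshot()
        prune()

As the thread notes, regular snapshots make accidental deletion recoverable, but they live in the same pool; sending the stream to separate hardware is what actually covers the controller-failure case.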

  • [... [sourceforge.net]] This incident impacted all block devices on our Ceph cluster.

    Power/communications/routing down event? Was monitor quorum lost? Inquiring minds that are not trolls are curious, and grateful that the path to restoration was clear. Best wishes.
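On the quorum question: a Ceph cluster stops serving I/O when its monitors lose majority. Here is a minimal sketch of checking that from a host, assuming the ceph CLI is installed with an admin keyring available; the warning threshold presumes a typical 3-monitor cluster:

    #!/usr/bin/env python3
    # Minimal sketch: check Ceph monitor quorum.
    # Assumes the `ceph` CLI is installed and can reach the cluster
    # (e.g. an admin keyring under /etc/ceph). Illustrative only.
    import json
    import subprocess

    def quorum_names():
        """Return the list of monitors currently in quorum."""
        out = subprocess.run(["ceph", "quorum_status", "--format", "json"],
                             check=True, capture_output=True, text=True).stdout
        return json.loads(out).get("quorum_names", [])

    if __name__ == "__main__":
        members = quorum_names()
        print("monitors in quorum:", members)
        if len(members) < 2:  # majority threshold for a 3-monitor cluster
            print("WARNING: quorum lost or degraded; block I/O will stall")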

  • I've negligible experience in this sort of failure and recovery, but...
    Shouldn't Slashdot and SourceForge be entirely separate, so that the failure of one can't bring down the other?
    Shouldn't there be live redundant systems, so that when one fails, one of the redundant systems is switched online in minutes? I don't mean just redundant storage, but 3 or 4 systems running concurrently, taking the same input and monitoring to confirm that the output is the same.

    Is this too expensive or not technically feasible?
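A toy sketch of the lockstep-and-compare idea in the comment above, assuming the replicated work can be reduced to a pure function (real deployments replicate whole request pipelines; the replica functions here are hypothetical stand-ins):

    #!/usr/bin/env python3
    # Toy majority-vote sketch of N-way redundancy with output comparison.
    # The three "replicas" are hypothetical stand-ins for independent
    # systems fed the same input; one is deliberately faulty.
    from collections import Counter

    def replica_a(x): return x * x
    def replica_b(x): return x * x
    def replica_c(x): return x * x + 1   # deliberately faulty

    REPLICAS = [replica_a, replica_b, replica_c]

    def vote(x):
        """Run every replica, return the majority answer, and flag
        any replica whose output disagrees."""
        outputs = [(fn.__name__, fn(x)) for fn in REPLICAS]
        winner, count = Counter(v for _, v in outputs).most_common(1)[0]
        if count <= len(REPLICAS) // 2:
            raise RuntimeError("no majority: %r" % (outputs,))
        for name, value in outputs:
            if value != winner:
                print("replica %s disagrees (%r != %r); take it offline"
                      % (name, value, winner))
        return winner

    if __name__ == "__main__":
        print(vote(7))   # flags replica_c, then prints 49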

  • Your important files encryption produced on this computer: photos, videos, documents, etc. Here is a complete list of encrypted files, and you can personally verify this.

    Encryption was produced using a unique public key RSA-2048 generated for this computer. To decrypt files you need to obtain a private key. The single copy of the private key, which will allow you to decrypt the files, located on a secret server on the Internet; the server will destroy the key after a time specified in this window. After
  • "Storage corruption" is fairly vague. I've been bit by it in the past - once due to a vendor software bug (Oracle block corruption), and once due to hardware (flaky storage controller chip writing garbage (Supermicro MB)) I would like to hear more about the root cause.

    RM

  • by Streetlight ( 1102081 ) on Sunday July 19, 2015 @10:24AM (#50139795) Journal
    It looks like /. had a Plan B ready in case of catastrophic failure. For some sites, one just gets a blank page with some strange message when that happens. /. did the right thing: it let users know they had a problem and were working on it, and then let us know a bit about what happened. Thanks, /. techs.
  • On the same Thursday that Slashdot experienced data storage corruption, the 1TB hard drive on my Windows gaming PC crashed, reporting 4GB of free space available and unresponsive to IO block commands. (I've seen that behavior on USB sticks, but never on a hard drive.) Except for several years of email, all my data was on the file server. Oh, well. I got a good excuse to rebuild my eight-year-old PC, especially with Windows 10 around the corner. Meanwhile, I'm using a $250 Dell laptop for everything except gaming.
  • (Slashdot and SourceForge share a corporate overlord, as well as a fair bit of infrastructure.)

    Nice to see that blurb of text again. Can we get this to happen every time you post a Nerval's Lobster/Dice slashvertisement, too?
