
The 32-Bit Dog Ate 16 Million Kids' CS Homework (code.org)

"Any student progress from 9:19 to 10:33 a.m. on Friday was not saved..." explained the embarrassed CTO of the educational non-profit Code.org, "and unfortunately cannot be recovered." Slashdot reader theodp writes: Code.org CTO Jeremy Stone gave the kids an impromptu lesson on the powers of two with his explanation of why The Cloud ate their homework. "The way we store student coding activity is in a table that until today had a 32-bit index... The database table could only store 4 billion rows of coding activity information [and] we didn't realize we were running up to the limit, and the table got full. We have now made a new student activity table that is storing progress by students. With the new table, we are switching to a 64-bit index which will hold up to 18 quintillion rows of information.
The issue also took the site offline, temporarily making the work of 16 million K-12 students who have used the nonprofit's Code Studio disappear. "On the plus side, this new table will be able to store student coding information for millions of years," explains the site's CTO. But besides Friday's missing saves, "On the down side, until we've moved everything over to the new table, some students' code from before today may temporarily not appear, so please be patient with us as we fix it."
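In MySQL terms (a sketch only: Code.org hasn't published its schema, so the table and column names here are invented), the fix described amounts to rebuilding the table around a 64-bit key:

    -- Illustrative sketch, not Code.org's actual code. An unsigned 32-bit
    -- key tops out at 4,294,967,295 rows; an unsigned 64-bit key tops out
    -- at 18,446,744,073,709,551,615 (the "18 quintillion" in the quote).
    CREATE TABLE student_activity_v2 (
        id       BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,  -- was a 32-bit INT
        user_id  INT UNSIGNED NOT NULL,
        progress MEDIUMTEXT,
        PRIMARY KEY (id)
    );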
  • Well then. (Score:5, Funny)

    by Anonymous Coward on Sunday January 22, 2017 @03:48PM (#53716427)

    That doesn't inspire a whole lot of trust in the system. Who did they get to code this thing, elementary school kids?!?

    • by mmell ( 832646 ) on Sunday January 22, 2017 @03:55PM (#53716467)
      Consider this a real-world lesson for our youth in the ways that design choices can have unanticipated effects on implementation, manageability and viability of software in the long haul. For extra credit, the kids that are affected should be encouraged to explore what they could have done to mitigate the risk caused by some grown-up's oversight.
      • by Anonymous Coward
        The people that originally made the decision to go with a 32 bit table are probably long gone. The real lesson here is don't waste time worrying about things you won't be around to have to deal with.
        • Those of us who worked the overnight shift for the Millennium Bug remember it well. Shouldn't we be coming up on another rollover point soon? Signed 32-bit Unix time hits 2^31 seconds in January 2038, doesn't it?
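          A one-liner confirms the date, assuming a MySQL-style FROM_UNIXTIME (the result renders in the session's time zone; shown here for UTC):

            -- The largest signed 32-bit value, read as a Unix timestamp:
            SELECT FROM_UNIXTIME(2147483647);
            -- -> '2038-01-19 03:14:07'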
    • It wasn't that long ago that YouTube ran into the same problem for view counts (on Gangnam Style). They were even using signed integers - gotta be prepared for negative views, I guess.

    • Who did they get to code this thing, elementary school kids?!?

      Larry Ellison personally set it up.

      ... But he did have to ask his next door neighbor's kid for advice on a few occasions.

    • Note that the WTF isn't the use of a 32-bit index, it's HowTF you can code a system that requires 4 billion rows in a database. Announcing they've fixed it by moving to a 64-bit row index is like announcing that you've fixed a problem with plugging a 100A device into a 10A circuit by replacing the fuse that keeps blowing with a piece of 00AWG wire current-welded across the terminals.
    • by doccus ( 2020662 )

      32 bits oughta be enough for anyone... ;-)

  • by QuietLagoon ( 813062 ) on Sunday January 22, 2017 @03:51PM (#53716439)
    At least there was a back-up... Or not... Not even a 24-hour transaction log... Or not... Way to go code.org... set that example...
    • by halivar ( 535827 ) <`moc.liamg' `ta' `reglefb'> on Sunday January 22, 2017 @04:01PM (#53716493)

      How do you back up data that was never stored? Or logs for transactions that never completed? And how, even if you had those transactions, would you meaningfully restore them when the restoration process itself would simply repeat the result of overflowing the available indexes?

      This isn't a typical disaster recovery scenario. The architecture itself is at fault, and the data is lost.

      • Uh, what kind of data are you talking about that was never stored?
        The obvious thing is to restore at the most recent backup. Some data will be lost of course, but that's better than losing all data, which apparently these people did.
        • Re: (Score:2, Informative)

          by Anonymous Coward

          They didn't lose all data. They lost every insert into the table that occurred after its index reached its maximum value. As the database insert was the method of storing the data, there's nothing to recover.

        • Some data will be lost of course, but that's better than losing all data, which apparently these people did.

          No they didn't.

          Any student progress from 9:19 to 10:33 a.m.

          So a grand total of about 74 minutes' work was lost.

          • So a grand total of about 74 minutes' work was lost.

            Which, from the way some programmers on here talk, could have been a whole three lines of code.
          • by fisted ( 2295862 )

            So a grand total of about 74 minutes' work was lost.

            Times the number of kids working at that time.

            • Doesn't really matter. The loss could be more or less undone in 74 minutes. Doesn't matter whether it's ten kids or a billion kids.

              • by rtb61 ( 674572 )

                Technically speaking, a whole lot less than 74 minutes if they were coding properly, taking notes, and keeping track of their work. So maybe 20 minutes, and for some particularly skilled coders maybe 10, depending on how long it took them to figure out the original code structure.

      • ok, so apparently not all data was lost, I misread the info. Bad on me, I should have read more carefully, and my other comment should be modded down for being wrong.
      • If you were to run the transactions again it would be against the new table which has the 64-bit value instead of the 32-bit value and they would succeed. Of course this would have all been avoided if the person who decided on the original data types had spent five minutes thinking in the first place.

      • Or logs for transactions that never completed?

        You log the transaction just before you execute it. The point of the log is that you can replay it to reconstruct the database in the event of failure. That doesn't work if you only log things that didn't fail.

      • How do you back up data that was never stored? Or logs for transactions that never completed?...

        There's an input-logging capability that can be replayed in times like this. Since it is separate from the part that caused this problem, the data would be retrievable. Simple.

      • You are more correct than you realize about the architecture issue.

        The architecture should have been taking everything, dumping it to a "log" to be processed, and then processed. Once the processing is done, the "logs" could be deleted. If they had done this, recovery would have been easy.

        I mean really, "you" expect data to always be processed perfectly? Not a chance. There is always room for hiccups. Store the raw data until you can process it. Simple. Obvious. And clearly not done in this instance.

        Please.
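        A minimal sketch of that store-first idea, with invented names (this is not Code.org's schema):

            -- Raw intake table: accept every write cheaply, process it into
            -- the real tables later, and keep the row until processing succeeds.
            CREATE TABLE raw_submissions (
                received_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
                payload     JSON NOT NULL
            );
            -- If downstream processing fails (say, against a full index), the
            -- raw rows are still on disk and can be replayed after the fix.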

      • by Holi ( 250190 )
        Why did the device report a successful save? I mean, if they need to tell you your files weren't saved then what kind of caching operation do they have that allows for over an hour of saves failing and no notification sent to the user?
        • by halivar ( 535827 )

          It's probably something as simple and stupid as a catchless try (which I have seen on DB ops more times than I care to think about) in a misguided attempt at "graceful degradation."

    • The prudent student stores the files locally, copies and pastes them into the browser, then hits "send".

      I would love to be able to say I had not personally learned that lesson the hard way.

      • by tepples ( 727027 )

        The prudent student stores the files locally

        Provided said prudent student owns a device with a text editor capable of storing files locally. Does an old hand-me-down iPad with a Bluetooth keyboard count?

        • by Dog-Cow ( 21281 )

          How many kids use a hand-me-down iPad and BT keyboard to submit work to code.org? And yes, the iPad has an app pre-installed which can save text files locally.

          You probably deserve to have the iPad shoved up your anus until it rips through your colon.

  • by aix tom ( 902140 ) on Sunday January 22, 2017 @03:52PM (#53716451)

    Don't trust the cloud as the only place you store your work.

  • by Xarin ( 320264 ) on Sunday January 22, 2017 @03:58PM (#53716483)

    4 billion rows of coding activity is all we will ever need

    • by newcastlejon ( 1483695 ) on Sunday January 22, 2017 @05:14PM (#53716837)
      What sort of DBMS are they using that doesn't notify the admin when a table is nearly full? What sort of client are they using that doesn't tell the user when an attempt to write to a DB fails?
      • by dwywit ( 1109409 )

        Maybe that's how they found out. Tech support tickets start flowing in a bit after 9:30 - "I can't insert my changes." They finally suspend activity and investigate after a few dozen tickets all show the same symptoms.

        There's no excuse for not notifying an admin that a table is about to reach its limit.
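        The check is cheap, too. In MySQL, for instance, one query against information_schema shows every counter's headroom (a sketch; 'studio' is a made-up schema name):

            -- How far along each AUTO_INCREMENT counter is, as a percentage
            -- of the unsigned 32-bit maximum.
            SELECT TABLE_NAME,
                   AUTO_INCREMENT,
                   ROUND(100 * AUTO_INCREMENT / 4294967295, 1) AS pct_of_32bit
            FROM   information_schema.TABLES
            WHERE  TABLE_SCHEMA = 'studio'
              AND  AUTO_INCREMENT IS NOT NULL
            ORDER  BY pct_of_32bit DESC;
            -- Page someone when pct_of_32bit crosses, say, 80.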

      • What sort of client are they using that doesn't tell the user when an attempt to write to a DB fails?

        The kind that is written by code.org developers.

    • by Mashiki ( 184564 )

      If you can't make it fit onto an 8-bit EEPROM chip, you're doing it wrong?

    • Funny you should mention that. The college instructor for the Introduction to CIS class told us in the early 1990s that 4GB (32-bit) was all the memory anyone would ever need for a PC. For the most part, he was right. When I upgraded my PCs last year, I finally broke through the 4GB barrier. Not because 4GB wasn't enough. I had to get new memory modules and 8GB (two 4GB sticks) was on sale.
    • by AmiMoJo ( 196126 )

      Didn't Slashdot have a problem with 32 bit post IDs one time? I can't find the story now, but I could swear I remember a temporary fall back to static pages when the 32 bit counter overflowed many years ago.

  • It is no surprise to me that the ones creating and operating this platform are just as incompetent as the "graduates" they produce. Mediocrity breeds mediocrity...

  • Comment removed (Score:5, Insightful)

    by account_deleted ( 4530225 ) on Sunday January 22, 2017 @04:04PM (#53716511)
    Comment removed based on user account deletion
    • and youre oblivious to the fact that a table with eighteen quintillion rows would never load

      I think you're oblivious to the fact that no one person is going to fetch every single row. You do know how a database works, don't you?

      so not only were you incapable of scaling your infrastructure or your program to handle four billion rows --something every sysadmin on the planet is capable of-- you weren't even competent enough to set up monitoring for it.

      They were capable of the first part, and are now doing it. They just didn't realise it needed to be done.

      the ones that lost all their data dont care.

      No-one lost all their data. At most they lost 74 minutes' work.

      Read the whole thing next time, eh?

    • by Morpeth ( 577066 )

      Wow, sorry someone pissed in your cornflakes.

      There are a lot of good people working on code.org (many volunteers). The whole concept is to teach kids about basic algorithms and logic, and get them excited about programming and tech in general. LOTS of schools and kids have benefited from their efforts.

      Yes, they had something unfortunate happen, they owned up to it, they fixed it, they moved on -- so should you...

    • Way to fly off the handle. You see, it doesn't matter if coding is a skill people rarely use in their careers; in fact, most of the stuff they learn in school won't be part of their career. But if it helps them see that computers are more capable than what is presented in a dumbed-down GUI, they will be able to automate tasks, not be afraid of the command line, etc., and that is a very good thing given the prevalence of computers in the workplace. Otherwise economies of scale favor the simplified "computers as d
    • you dont get it. no one fucking cares about your SQL table limits

      Okay, okay, you're overstimulated. Have a beer and go back to bed.

    • by Dog-Cow ( 21281 )

      if it can only store four billion rows, it isnt "the cloud." its just a KVM instance running on a shared hosting facility then, isnt it.

      There is no relationship between the index's datatype size and the kind of system the RDBMS is hosted on. Also, a KVM instance running in a shared hosting facility (on? is it on the roof of the building?) is running in the cloud. That's what the cloud is... shared (virtual) servers, optionally maintained by someone else.

      so not only were you incapable of scaling your infrastructure or your program to handle four billion rows --something every sysadmin on the planet is capable of-- you weren't even competent enough to set up monitoring for it.

      Sysadmins are not responsible for database schema design or implementation. The issue was not a matter of scaling.

      You have demonstrated that you are more of a fucked-up shit than the code

  • by saboosh ( 1863538 ) on Sunday January 22, 2017 @04:10PM (#53716539)
    Thank you for teaching the kids the importance of taking responsibility and being honest and open about your mistakes. It's okay to make mistakes as long as we learn from them. Too many people today are afraid of making mistakes and cover them up.
    • by saboosh ( 1863538 ) on Sunday January 22, 2017 @04:24PM (#53716631)
      I find people like anecdotes here, so please allow me to add: I was raised by very "tough" parents with a very "tough" form of discipline. Mistakes meant punishment. Today I have a 9-year-old daughter who, like any other human being, makes mistakes. A few years ago I noticed a very strange phenomenon with regard to "dealing" with her mistakes. When I would get upset with her and punish her for spilling on the couch or forgetting to clean her room, I would see her make the same mistake again, and as time went on she would get either more defensive about it or try to lie about it. At some point my fiancee asked that I try a different approach: be kind and loving in my response, take time with her to show empathy, share that I'm not perfect either, and figure out another way of handling whatever the mistake was. The taking-time part is probably the toughest for me because it means work; I'm sure many can relate. But, strangely, I noticed that she was making the mistakes I handled the new way a lot less, and she seemed to be OK with handling them a new way. She started to clean her room on her own, and even though her coordination did not allow her to stop spilling, she was more careful about where she took her drinks and cleaned them up more quickly. It's really ass-backwards to me. And to top it off, she seems less anxious around me and my responses, and seems less defensive. I'm not a psychologist and won't pretend to understand the how or why of it; I just know she seems less distracted and anxious, and I seem to get more hugs from her, and I will take that over trying to "force" her to learn any day.
      • by Anonymous Coward

        It's not ass-backwards (to me at least); it's obvious. Humans are very social animals: they live and work in groups with sophisticated and complex social interactions. Actively wanting to be useful and helpful is as natural a part of that as selfishness is. Punishment as the only way to respond to mistakes assumes that avoiding punishment is the only thing that motivates, and overlooks the fact that cooperation is in itself a strong motivator for social beings, which is what you can see with your daughter.

      • by AmiMoJo ( 196126 )

        I noticed something similar with the way Japanese parents tend to talk to their kids. Not all of them, but it seems to be the more normal way of doing things over there.

        By treating them more like adults - not getting angry and shouty, but instead helping them to understand why soiling the sofa is a problem, involving them in correcting the error (cleaning up), and seeing mistakes as something to learn from and an aid to personal improvement - their kids seem to be a lot more responsible and calmer.

      • I had parents like this and I would do the same thing. I would just get better at hiding my mistakes rather than taking the time to improve, because no matter what I did, it wouldn't be enough.
    • by Morpeth ( 577066 )

      Sorry, I don't have any mod points to give you, but I would.

      Code.org does a lot of good. They had an issue, they talked about it openly, and they addressed it -- not sure why some people are acting like code.org started murdering puppies or something.

    • by Dutch Gun ( 899105 ) on Sunday January 22, 2017 @05:31PM (#53716913)

      Caveat: It's okay to make mistakes as long as no one was hurt or killed by easily preventable errors. Obviously, that doesn't apply here, so I definitely agree. Sharing your experience and turning it into a teachable moment ensures others learn from it as well.

      It would have been less embarrassing for them to just make up some excuse about a temporary outage, or blame a DDOS attack, or Russian hackers. It's good to remember that when lambasting them about what idiots they are for not noticing this before their DB puked on them. It's tempting to do, but really does nothing but stroke your own ego while at the same time encouraging people to try to hide their mistakes to avoid this sort of public shaming.

      So, yeah, kudos for them for owning up to their own mistake.

  • by El Cubano ( 631386 ) on Sunday January 22, 2017 @04:11PM (#53716547)

    Seriously, was not a single developer or architect from Code.org around when Slashdot overflowed its 24-bit index? I know it has been a few years now, but I'm sure there are folks here who remember threading breaking and all other sorts of problems when it happened. Remember: https://slashdot.org/story/06/11/09/1534204/slashdot-posting-bug-infuriates-haggard-admins [slashdot.org]

    Granted, that was Slashdot, and while annoying, it was hardly the end of the world. This problem with Code.org clearly reinforces "cloud bad" to people who are already fearful of putting their data in the cloud.

    I am guessing that Code.org didn't bother tracking things like how close to various limits they were getting, but I bet that they are now. In any event, when this happened to Slashdot 10+ years ago, I suppose you could argue that we weren't as advanced. In 2016-2017 there is no excuse for such a critical architectural flaw. To me, it completely undermines my confidence in their entire platform. What other time bombs are ticking under the surface there?

    • Code.org correctly reinforces "cloud bad" to people who should be fearful of putting their data in the cloud.

      FTFY.

    • by rhazz ( 2853871 )

      To me, it completely undermines my confidence in their entire platform

      So do you avoid all companies who have ever had a free product down one time for at least 74 minutes and were completely open and honest about it?

  • Well duh (Score:5, Funny)

    by cyber-vandal ( 148830 ) on Sunday January 22, 2017 @04:12PM (#53716549) Homepage

    It's code.org not databasedesign.org

  • I admit, I've mostly used partitioning for speed, but my understanding is that the record limit is per partition, so you could also use it to deal with record limits.

    They could either partition based on user IDs (might be faster to select by for the bulk of the queries), or by date (making it easier to manage autonumber fields).
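    In MySQL terms, the user-ID variant might look like the sketch below (names invented). Two caveats: MySQL wants the partitioning column in every unique key, and the AUTO_INCREMENT counter itself stays table-wide, so partitioning buys size and speed headroom rather than a bigger counter:

        CREATE TABLE student_activity (
            id      BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
            user_id INT UNSIGNED NOT NULL,
            code    MEDIUMTEXT,
            PRIMARY KEY (id, user_id)  -- partition column must be in every unique key
        )
        PARTITION BY HASH (user_id) PARTITIONS 64;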

  • 64bit (Score:5, Informative)

    by ledow ( 319597 ) on Sunday January 22, 2017 @05:17PM (#53716849) Homepage

    Honestly don't get why everything these days isn't just 64-bit by default.

    You can hit 32-bit limits just buying a memory chip, or bog-standard storage. 4 billion is not a big number in those terms.

    32-bit times are dead.
    32-bit filesizes are dead.
    32-bit memory sizes are dead.
    32-bit file counters are dead.
    Hell, it's not inconceivable that in some things 32-bit user counters could die - with account recreation and spam accounts, surely the big people are having to deal with that.

    Just stop faffing about and use 64-bit for everything, by default, from the start. 8 bytes isn't a huge amount of overhead nowadays.

    But starting with the assumption "4 billion is enough" when some people have more than 4bn in their bank account, some services have more than 4bn users, and people can buy 4bn-whatevers in their local electronics store is stupid.

    But 4 billion lots of 4 billion is not a limit that you will hit for a very, very, very long time. Even 128-bit isn't unseen - IPv6, ZFS, GPUs - and that's 4 billion lots of 4 billion 64-bit numbers, each of which is capable of holding 4 billion lots of 4 billion.

    Supercomputer architectures did this a long time ago, translating and assuming everything is 128-bit so that you never have to worry about a limit.

    Why does it take so long for basics like web servers and databases to get there? 64-bit by default, MINIMUM. Anything that incurs a performance hit on that is old, and up to the user to resolve.
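    For reference, the two ceilings being argued about can be computed at a MySQL prompt with 64-bit bitwise arithmetic:

        -- ~0 is all 64 bits set; shifting right by 32 leaves the 32-bit maximum.
        SELECT ~0 >> 32 AS max_uint32,   -- 4,294,967,295
               ~0       AS max_uint64;   -- 18,446,744,073,709,551,615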

    • Re: (Score:3, Insightful)

      by thegarbz ( 1787294 )

      But starting with the assumption "4 billion is enough" when some people have more than 4bn in their bank account

      Yep, I should bog down my computer processes because someone else is rich. Incidentally, how many bits does it take to represent the number 4bn? While we're at it, do you realise that the number of planets that humans have colonised is 1? Let's build a database with a 25-year life expectancy; how many bits would you assign to the index? 64 bits? Your approach is the reason computers are frigging slow. It's the reason why I wait for ages to open up Chrome on a quad 1.4GHz Snapdragon.

      How about instead of just bl

    • by Anonymous Coward
      I work for a massive entity. We bought thousands of computers with 8GB of RAM. We then installed 32-bit Windows 7 on them, effectively rendering more than half of that RAM useless. Why? Because 64-bit Windows 7 broke ONE in-house designed, written, and maintained *WEB* application we use, and we allegedly couldn't afford to update it. People do dumb shit.
    • by tepples ( 727027 )

      Why does it take so long for basics like web servers and databases to get there?

      Because the PHP language on 32-bit architectures doesn't support 64-bit integers. All you get are 32-bit actual integers and the 53-bit integer range you get by (ab)using a double-precision floating point value as an integer.

  • Oh the irony (Score:4, Insightful)

    by luis_a_espinal ( 1810296 ) on Sunday January 22, 2017 @05:18PM (#53716855)

    Code.org CTO Jeremy Stone gave the kids an impromptu lesson on the powers of two with his explanation of why The Cloud ate their homework. "The way we store student coding activity is in a table that until today had a 32-bit index... The database table could only store 4 billion rows of coding activity information [and] we didn't realize we were running up to the limit, and the table got full. We have now made a new student activity table that is storing progress by students. With the new table, we are switching to a 64-bit index which will hold up to 18 quintillion rows of information."

    The irony of seeing a programming education site using 32-bit indexes without any form of index space monitoring is both hilarious and surreal.

    Who the hell runs a cloud-based, massively accessible operation with 32-bit indexes? And who the hell runs a production system without database monitoring?

  • Deja vu (Score:4, Funny)

    by jdavidb ( 449077 ) on Sunday January 22, 2017 @05:35PM (#53716941) Homepage Journal
    I remember when Slashdot had this exact same problem with comment ids!
  • by Anonymous Coward

    For trusting the "cloud".

  • According to TFS, nothing was lost. They just can't access their stuff until it's moved over to the new database. No disaster. No lesson. No dog. Just off line for a few days.

    BFD

    • According to TFS, nothing was lost.

      Well, except for any data generated from 9:19 to 10:33 a.m.

  • Why to avoid trusting cloud services with any data that you can't afford to lose.

  • by gb7djk ( 857694 ) * on Sunday January 22, 2017 @07:11PM (#53717395) Homepage
    Perhaps this will kick someone into looking at the database as a whole, on a periodic basis, to check other limits. Maybe do the odd test transaction, or spot unexpected trends in other tables? Maybe run some regression tests? Then use this information to tweak the data model in a controlled fashion before it breaks.

    You know, like grown ups do...
  • by dbIII ( 701233 ) on Sunday January 22, 2017 @09:53PM (#53718161)
    Not long ago there were some posts here about programmers not needing to know any mathematics.
    It didn't take very long for an article to appear that showed the consequences of not cracking open some books.

    Who would have thought - Knuth seems to have a bit more of a point than the guy who taught himself PHP.
  • Glad to see it would never happen to Slashdot.

  • With the new table, we are switching to a 64-bit index which will hold up to 18 quintillion rows of information.

    Is that bigger than a bajillion?

  • In a few million years, the table will be full again. And then nobody will have expected it, again.

  • A single index seems like a weird thing to have in this case anyway. Wouldn't it be better to have a multi-column index on something like userid+item rather than an index of all items?

  • Contrary to popular belief, don't use integers for primary indexes. Multi-column "natural" indexes can handle way more rows.

    Just because the old databases used "record numbers", doesn't mean you have to... ;-)
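    A sketch of that style, assuming one row per student per level (names invented here): the composite primary key means there is no surrogate counter to overflow.

        CREATE TABLE student_progress (
            student_id INT UNSIGNED NOT NULL,
            level_id   INT UNSIGNED NOT NULL,
            solution   MEDIUMTEXT,
            updated_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
                                 ON UPDATE CURRENT_TIMESTAMP,
            PRIMARY KEY (student_id, level_id)
        );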

  • This is the sort of thing that happens when engineers (especially software engineers) don't think outside the box and consider the consequences of the code they write.

"Protozoa are small, and bacteria are small, but viruses are smaller than the both put together."

Working...