
'Leap Seconds' May Be Eliminated From UTC

angry tapir writes "Sparking a fresh round of debate over an ongoing issue in time-keeping circles, the International Telecommunications Union is considering eliminating leap seconds from the time scale used by most computer systems, Coordinated Universal Time (UTC). Since their introduction in 1972, leap seconds have proved problematic for at least a few software programs. The leap second added on to the end of 2008, for instance, caused Oracle cluster software to reboot unexpectedly in some cases."
  • Hmmm (Score:5, Funny)

    by HappyClown ( 668699 ) on Tuesday August 24, 2010 @01:10AM (#33351428)
    Now waaaaaait just one second! Oh, scratch that...
    • by Joce640k ( 829181 ) on Tuesday August 24, 2010 @03:15AM (#33352164) Homepage

      We have to make every clock in the world inaccurate because Oracle's software is crap...?

      • Re: (Score:3, Funny)

        by Chrisq ( 894406 )
        Careful ... bitching about Oracle's patented.
      • by TheRaven64 ( 641858 ) on Tuesday August 24, 2010 @05:50AM (#33352856) Journal

        Do we actually care about that level of accuracy? The leap second is a stupid idea to start with. We have leap years because a calendar year is about a quarter of a day shorter than a solar year. Without them, you'd have the seasons slowly moving around the calendar. The equinox would move by one day every four years, and so on. This was a problem for pre-technical societies, which depended heavily on the calendar for planting crops and avoiding starvation, but it's irrelevant now. We're stuck with it though, and it does make it a bit easier to remember where the seasons are, although they won't change by much over a person's lifetime.

        Leap seconds, in contrast, are completely pointless. They exist because the SI day is slightly shorter than the solar day, by a tiny fraction of a second. This means that, after a few years, the sun will not quite be at its apex precisely at midday. How much is the variation? We've had 24 leap seconds since they were introduced in 1972, but a lot of these were to slowly correct the already-wrong time. In the last decade, we've had two. At that rate, it will take 300 years for the sun to be a minute off. It will take 18,000 years for it to be an hour off. These numbers are slightly wrong. The solar day becomes a bit under 2ms longer every hundred years, so we'd need leap seconds more often later.
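
        To sanity-check those figures (a minimal Python sketch; the two-per-decade rate is taken from 1999-2008 and is an assumption, not a prediction):

            # Rough drift arithmetic, assuming the recent leap-second rate holds.
            leap_seconds_per_year = 2 / 10            # two leap seconds in 1999-2008
            print(60 / leap_seconds_per_year)         # ~300 years before the sun is a minute off
            print(3600 / leap_seconds_per_year)       # ~18,000 years before it is an hour off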

        In the short term, they introduce a lot of disruption (see air traffic control problems for a good reason why we shouldn't have them - safety-critical systems that depend on time synchronisation and don't reliably work with leap seconds. Great). They don't provide any noticeable benefit. Maybe after a thousand or so years, UTC time will be offset enough from the solar day that it will be irritating, but if people still care about the relationship between noon and midday then they can add a leap-minute or two to compensate. Or they can just let it drift. I'd like to think that a significant proportion of the human population will not be on the Earth by that point, and so purely local inconsistencies with the time won't matter to them.

        • The solar day becomes a bit under 2ms longer every hundred years, so we'd need leap seconds more often later.

          Or, given that the Earth started spinning a bit faster after the Chile earthquake [nasa.gov], we'll likely not need any leap seconds whatsoever.

        • by WiglyWorm ( 1139035 ) on Tuesday August 24, 2010 @07:07AM (#33353318) Homepage
          Pretty typical "let future generations deal with it" thinking. Why don't we just have Oracle fix their code?
          • by stdarg ( 456557 ) on Tuesday August 24, 2010 @07:24AM (#33353454)

            Eh, let a future generation fix Oracle's code.

        • Re: (Score:3, Interesting)

          ...and the noonday sun is probably not overhead anyway

          I live in the UK, 2 degrees off the meridian, and it's summertime so we are on BST. So at noon the sun is 1:08:00 away from being overhead; a few seconds is largely irrelevant at this point ...

        • Re: (Score:3, Insightful)

          by paeanblack ( 191171 )

          see air traffic control problems for a good reason why we shouldn't have them - safety-critical systems that depend on time synchronisation and don't reliably work with leap seconds. Great

          The same argument can be made for leap-days.

          December 31 23:59:60 is no less valid than February 29. Throwing out accurate timekeeping because some software designers didn't do their homework is not a good solution. Throw out the bad designers instead...or at least keep them away from "safety-critical systems"

          • by TheRaven64 ( 641858 ) on Tuesday August 24, 2010 @08:48AM (#33354310) Journal

            No it can't, for three reasons. Firstly, leap days are deterministic. They happen based on a set of simple rules. If the year is divisible by 4, you get a leap year, unless the year is divisible by 100 but not by 400. Leap seconds, in contrast, depend on a variable that changes daily, so they are not predictable. They are fudged in with a few months' notice, requiring every computer that needs to deal with them to be updated regularly.
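
            For contrast, that entire leap-year rule fits in a line of code (a Python sketch); no such function can exist for leap seconds, since the IERS only announces them about six months ahead:

                def is_leap_year(year: int) -> bool:
                    # Gregorian rule: divisible by 4, except centuries not divisible by 400.
                    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

                assert is_leap_year(2000) and not is_leap_year(1900)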

            Secondly, leap years don't violate basic sanity checking rules. You can assert that every month is 28-31 days, and that's not broken by leap years. You can assert that every minute contains 60 seconds. In the last decade, that has been true for roughly 5.26 million minutes, and false for 2. In every year before 1972, it was true.

            Finally, leap years solve a real problem. The point of a solar calendar is that the seasons are in the same place every year. Having the seasons move made it difficult for people to plan when to plant crops. It's less important now, but having the seasons move around would be noticeable for everyone and irritating for a lot of people. In contrast, the only 'problem' that leap seconds solve is that the sun is not at its highest above the meridian at precisely 12:00. As the poster above you pointed out, the skew from not having leap seconds for a thousand years makes less of a difference to the position of the sun than simply not living quite on the meridian.

            Leap seconds are a (very) complex solution looking for a problem.

        • by mangu ( 126918 ) on Tuesday August 24, 2010 @08:47AM (#33354294)

          They exist because the SI day is slightly shorter than the solar day, by a tiny fraction of a second.

          Wrong. They exist because the solar day is getting longer over time. The tides caused by the moon are slowing down the Earth's rotation rate.

          safety-critical systems that depend on time synchronisation and don't reliably work with leap seconds

          They should. If a programmer is so incompetent he can't get leap seconds right, I shudder to think what else he did wrong.

        • by cizoozic ( 1196001 ) on Tuesday August 24, 2010 @09:08AM (#33354636)

          Leap seconds, in contrast, are completely pointless. They exist because the SI day is slightly shorter than the solar day, by a tiny fraction of a second. This means that, after a few years, the sun will not quite be at its apex precisely at midday. How much is the variation? We've had 24 leap seconds since they were introduced in 1972, but a lot of these were to slowly correct the already-wrong time. In the last decade, we've had two. At that rate, it will take 300 years for the sun to be a minute off. It will take 18,000 years for it to be an hour off. These numbers are slightly wrong. The solar day becomes a bit under 2ms longer every hundred years, so we'd need leap seconds more often later.

          Well in that case it's probably easier for Oracle to just buy the Sun.

      • by Burdell ( 228580 ) on Tuesday August 24, 2010 @07:58AM (#33353734)

        It wasn't just Oracle. The Linux kernel would deadlock if the system was under load when the leap second happened. I only had one server hang, but a customer with a rack of busy servers had about half of them freeze. Lots of "fun" on New Year's Eve. Even more annoying was that the problem wasn't in handling the leap second; it was in printing a message that the leap second had been handled.

  • Stupid (Score:2, Interesting)

    by Anonymous Coward

    Yeah, leap seconds suck, but the proposed solution (to let UTC drift farther and farther away from reality) sucks even harder. UTC should just be abolished in favor of UT1. Computer clocks are so crude anyway (mine is off by 3 seconds right now) that the supposed benefits of UTC's constant second are really non-existent; every computer needs to have its time adjusted now and then no matter what.

    • Re:Stupid (Score:5, Informative)

      by tagno25 ( 1518033 ) on Tuesday August 24, 2010 @01:26AM (#33351534)

      Yeah, leap seconds suck, but the proposed solution (to let UTC drift farther and farther away from reality) sucks even harder. UTC should just be abolished in favor of UT1. Computer clocks are so crude anyway (mine is off by 3 seconds right now) that the supposed benefits of UTC's constant second are really non-existent; every computer needs to have its time adjusted now and then no matter what.

      And that is what NTP is for: to automatically adjust the computer's clock to account for drift.

    • Re:Stupid (Score:5, Insightful)

      by bickerdyke ( 670000 ) on Tuesday August 24, 2010 @04:01AM (#33352372)

      Why abolish it?

      You're free to CHOOSE your timescale! GPS, UTC, UT1, TAI...

      So if leap seconds confuse you, use a timescale without them. That's what they're for. But keep the timescale that's supposed to be in sync with the Earth's rotation in sync with the Earth's rotation!

    • Re: (Score:3, Interesting)

      by ultranova ( 717540 )

      Yeah, leap seconds suck, but the proposed solution (to let UTC drift farther and farther away from reality) sucks even harder.

      No, it doesn't. Simply use UTC as an abstract "seconds after a certain point" and use time zone data to adjust for local solar time. It's simpler, less likely to result in weird bugs, and allows anyone who wishes to adjust their local time every day.

      • Re: (Score:3, Informative)

        by Tacvek ( 948259 )

        That timescale already exists. It is called TAI. It is identical to UTC except for having no leap seconds, and an initial deviation of exactly 10 seconds. The second ticks occur at exactly the same time as UTC's. It is always an exact number of seconds off from UTC; that delta increases or decreases as leap seconds are inserted into UTC. It is currently offset by exactly 34 seconds.
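
        To make the fixed-offset relationship concrete, here is a minimal Python sketch of a UTC-to-TAI conversion driven by a hand-maintained offset table (only the two most recent real entries are shown; a real implementation needs the full table back to 1972, and GPS time is then just TAI minus a constant 19 seconds):

            import datetime

            # (TAI - UTC) in seconds, effective from the given UTC instant (partial table).
            TAI_MINUS_UTC = [
                (datetime.datetime(2006, 1, 1), 33),  # after the 2005-12-31 leap second
                (datetime.datetime(2009, 1, 1), 34),  # after the 2008-12-31 leap second
            ]

            def utc_to_tai(utc: datetime.datetime) -> datetime.datetime:
                # Pick the most recent entry whose start is <= utc; dates before
                # 2006 would need the older table entries, omitted here.
                offset = next(off for start, off in reversed(TAI_MINUS_UTC) if utc >= start)
                return utc + datetime.timedelta(seconds=offset)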

  • by zebslash ( 1107957 ) on Tuesday August 24, 2010 @01:15AM (#33351454)

    Isn't the problem with Oracle here? It should not be that difficult to fix their software. What's the difference with Summer time change?

    • Isn't the problem with Oracle here? It should not be that difficult to fix their software. What's the difference with Summer time change?

      The difference with spring/fall time changes is that although the local time may change, the UTC time does not. In other words, your offset from UTC (e.g. GMT-8) may get adjusted depending on your location's observance of daylight saving time, but UTC itself simply marches on, oblivious to anything. The leap second is the one exception.

      • Re: (Score:3, Interesting)

        by carini ( 555484 )
        Instead of using "leap" seconds, why doesn't NTP use a longer interval to adjust the time in small steps? With a 1/1000 s adjustment every 1024 seconds (which is the polling interval for most stable NTP clients), the leap second adjustment would need less than two weeks to complete.
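
        The arithmetic of that proposal does check out (a quick Python sketch, taking the 1024-second polling interval as a given):

            step = 0.001       # seconds slewed per adjustment
            interval = 1024    # seconds between adjustments
            steps_needed = 1.0 / step                  # 1000 adjustments per leap second
            print(steps_needed * interval / 86400)     # ~11.9 days, i.e. under two weeks
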
        • Re: (Score:3, Informative)

          by rjch ( 544288 )

          Instead of using "leap" seconds, why doesn't NTP use a longer interval to adjust the time in small steps? With a 1/1000 s adjustment every 1024 seconds (which is the polling interval for most stable NTP clients), the leap second adjustment would need less than two weeks to complete.

          The answer can be found in the Wikipedia [wikipedia.org] article on leap seconds - the need for leap seconds isn't constant and predictable.

          • Re: (Score:3, Insightful)

            by Just Some Guy ( 3352 )

            The answer can be found in the Wikipedia article on leap seconds - the need for leap seconds isn't constant and predictable.

            That doesn't really address his question, though. His proposition is a different way to implement leap seconds, not a way to determine if they're needed. I don't like his idea either, but for different reasons.

        • by Nicolas MONNET ( 4727 ) <<nicoaltiva> <at> <gmail.com>> on Tuesday August 24, 2010 @05:02AM (#33352678) Journal

          Clocks should strive to give the most accurate measurement, not lie to their users.

          The solution exists; it's TAI. You use TAI internally and convert to UTC (or your TZ) when displaying, similar to Unix time.

          • Mod parent up. (Score:3, Insightful)

            by John Hasler ( 414242 )

            The solution, as the parent says, is to continue publishing leap second announcements but to start distributing TAI. Those who feel a need to track UTC can then insert the leap seconds themselves, while others can track TAI and provide lookup tables for conversion to UTC or local time for display, just as we do now for DST and local time zones.

            And no, this does not mean putting the correction off for some future generation to deal with. It means realizing that there is no need for a correction at all and that

        • by mangu ( 126918 )

          With a 1/1000 s adjustment every 1024 seconds (which is the polling interval for most stable NTP clients), the leap second adjustment would need less than two weeks to complete

          The problem is that it would introduce variable seconds, which would cause much worse problems.

          One example is electric power systems. The frequency in the AC power system is what determines how much power should be generated, if the frequency is above or below 60 Hz (or 50 Hz) then each power station should decrease or increase their generation by

    • by TheRaven64 ( 641858 ) on Tuesday August 24, 2010 @05:54AM (#33352884) Journal
      Oracle is just being used as an example in the summary. They are not the only people to develop software that doesn't properly work with leap seconds. Check the Slashdot archives, and you'll see a story about how a lot of air traffic control software doesn't either. ATC software is safety critical - if it goes wrong, planes can crash - and it depends heavily on synchronising clocks with a variety of different places. And these are just the examples that people have already found - how much other code do you think has been tested against an event one second long that's only happened twice in the last decade?
  • Poor solution (Score:5, Insightful)

    by LostMyBeaver ( 1226054 ) on Tuesday August 24, 2010 @01:15AM (#33351456)
    The proper solution is to make programmers aware of leap seconds. There are 86400 seconds in a normal day; however, there is an additional second added once or twice a year to adjust for solar time.

    Wikipedia documents it quite well, and programmers in modern times should be heading to Wikipedia almost constantly anyway. The real problem occurs when the date/time is given in seconds since an "event" such as Jan 1, 1970. Most programmers don't know about leap seconds and, I must admit, I don't generally bother calculating for them. But if it were necessary, it would be relatively trivial to do so.
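
    For instance, POSIX time_t simply pretends every day has exactly 86400 seconds, so leap seconds vanish from the count. A minimal Python sketch of that "relatively trivial" correction (the two entries correspond to the real 2005 and 2008 insertions; the table has to be maintained by hand as new leap seconds are announced):

        # time_t values at which a leap second had just been inserted.
        LEAP_INSERTIONS = [1136073600, 1230768000]  # 2006-01-01 and 2009-01-01 UTC

        def elapsed_si_seconds(t0: int, t1: int) -> int:
            """SI seconds between two time_t values, counting inserted leap seconds."""
            return (t1 - t0) + sum(1 for t in LEAP_INSERTIONS if t0 < t <= t1)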

    We shouldn't remove fixes to the clock just because programmers are undereducated. I'm quite convinced that just posting this on Slashdot will raise awareness across a high percentage of the programming world.

    I also always wondered why undergraduate studies for computer science didn't make time a relevant issue. It seems as if it's one of the more complex topics and yet we don't pay any attention to it. The last formal education I had on time (not physics-related, but calendar-related) was in primary school. There are so many time systems out there that we should pay more attention to educating programmers about them.
    • Re: (Score:2, Insightful)

      by mrnobo1024 ( 464702 )

      Christ, as if programmers don't have enough damn complexity to deal with already. For the purposes of timekeeping, a second should just be defined as 1/86400 of a day. There, problem solved, we never have to screw with the calendar again for thousands of years.

      • Re: (Score:2, Informative)

        by Anonymous Coward

        But the reason we need leap seconds is because "a day" is changing. The earth's rotation is slowing.

        Defining a fundamental physical unit in terms of a moving target isn't a fantastic idea.

        • Re:Poor solution (Score:5, Interesting)

          by LostMyBeaver ( 1226054 ) on Tuesday August 24, 2010 @01:32AM (#33351578)
          Nobo has a point... but it would make it so that the hardware engineers would suffer instead of the software ones. 1/86,400 of a day = 1 second could be a fair solution. All we would need to do then would be to come up with a new atomic clock which allows for the alteration, and then come up with computer crystals that are accurate to the new system (hey, let's get ones that are accurate to begin with, that would be great).

          But, since respectable companies tend to run their own SNTP servers and they themselves adjust against national servers (we hope), it could simply be a good idea to ditch the leap second in favor of fixing all the clocks.

          But I think the real issue of the article is the occasions where "17:59:60" is a valid time. I think for presentation (and databases) it would in fact have been better to simply prolong 17:59:59, or progressively add a millisecond for each of the next 1000 seconds, for example. Although it might throw off scientific calculations during that period, the impact would be less critical.
          • You people ... (Score:4, Informative)

            by Nicolas MONNET ( 4727 ) <<nicoaltiva> <at> <gmail.com>> on Tuesday August 24, 2010 @04:51AM (#33352630) Journal

            There's a reason why the second is defined based on an atomic phenomenon. An Earth day is hilariously unreliable; it varies all the time. A near-Earth asteroid would measurably alter it. Today we can measure time with accuracies of one part in 10^15 or so, possibly even better. And besides, you're confusing the problem of defining the base unit (the second) with choosing its scale and keeping a calendar. The SI second was scaled to match the standard second used for centuries, just defined more precisely. The problem here is that the "real" second in the historical definition (one nth of a day) varies because of astronomical phenomena that cannot be predicted (unless you can solve the n-body problem for n very large and have inventoried the whole solar system); it's not a timekeeping problem.

            There's a solution to all this, it's called TAI. There is no reason not to use it but ignorance and incompetence. Every other "solution" that has been advanced here was completely, utterly stupid.

      Sorry, but that's an "April Fools" solution; it won't fix calendar drift and it would screw up physics. The second is the SI unit of time; its definition has nothing to do with the motion of the Earth.
        • The SI second should continue to be what it's been since 1967: 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom. But there's no reason such an arbitrary unit needs to be part of our calendar.

          It's bad enough that the day doesn't fit into the year evenly -- there's no way around that, so we need leap days to fix it. But why do we have to introduce another annoyance, one that is even worse as it needs constant maintenance (unlike the leap-day system, which hasn't needed adjustment since 1752), by trying to shoehorn the SI second into the day? As far as I can tell, this accomplishes nothing but making life harder for people.

          • Re: (Score:3, Insightful)

            by AGMW ( 594303 )

            But why do we have to introduce another annoyance, one that is even worse as it needs constant maintenance (unlike the leap-day system which hasn't needed adjustment since 1752), by trying to shoehorn the SI second into the day? As far as I can tell, this accomplishes nothing but making life harder for people.

            OK, now hands up: why hasn't the wonderful leap-day system needed adjustment since 1752?
            Anyone?
            Yes, you at the back ... "mumble mumble leap seconds mumble mumble?"

            Speak up, lad!

            "Sorry Sir - is it because the sensible use of regular leap seconds means a leap day is only required once every four years, and they are actually both part of the same time adjustment, because a leap day is actually made up of many leap seconds?"

            Yes indeed, well done. Yes the time adjustment required to keep our calendar in sync

      • by Eivind ( 15695 )

        A day isn't a constant length.

        Sounds fairly complex to me, to deal with a second in 1900 being a different time-period from a second in 2000.

      • Every day is a slightly different length due to tides, etc. Even strong winds can shave off (or add) a microsecond or two.

        The BBC did a documentary on it [bbc.co.uk].

    • Re: (Score:3, Interesting)

      by toastar ( 573882 )
      Why adjust for solar time?

      if you were to count the number of days since the 0AD, would you ignore leap days? UTC is a count of seconds since a specific time. All computers do is count time. It's the user interface that should adjust for leap seconds when it converts to local time.

      Why can't Computer people and farmers use different time metrics?
      • by olden ( 772043 )
        There's really no need to redefine UTC -- especially if it's just because some programmers are ignorant of alternatives.
        Absence of leap seconds is exactly what already-existing scales like TAI [wikipedia.org] are for.
      • Re:Poor solution (Score:5, Insightful)

        by Thorsett ( 5255 ) on Tuesday August 24, 2010 @01:49AM (#33351692) Homepage

        Why adjust for solar time?

        We adjust for solar time because UTC is an astronomical timescale, not a "count of seconds since a specific time." If "computer people" want a timescale that ignores leap seconds, they can use an atomic timescale like TAI (or GPS time, which is a constant offset from TAI). But choosing to standardize the internet on UTC and then complaining it is too hard to do the programming right is a little like buying a house next door to a turkey farm and complaining about the smell.

        • mod parent up.
        • by at10u8 ( 179705 )
          There is no international regulation which specifies either TAI or GPS time, and the agencies which provide those do not want such regulation. For those who are constrained by regulation, only a time scale specified by the ITU-R will suffice.
      • if you were to count the number of days since the 0AD

        You'd get very confused - there was no 0 AD (or BC for that matter).

        1 BC was followed by 1 AD.

        • Re:Poor solution (Score:4, Informative)

          by toastar ( 573882 ) on Tuesday August 24, 2010 @02:46AM (#33352008)

          if you were to count the number of days since the 0AD

          You'd get very confused - there was no 0 AD (or BC for that matter).

          1 BC was followed by 1 AD.

          Well if you want to get technical it was neither.

          I think it was called 753 AUC

          • Re:Poor solution (Score:4, Informative)

            by Dwonis ( 52652 ) * on Tuesday August 24, 2010 @06:26AM (#33353058)

            I think it was called 753 AUC

            According to Wikipedia [wikipedia.org], it tended only to be called that by later historians:

            Renaissance editors sometimes added AUC to Roman manuscripts they published, giving the false impression that the Romans usually numbered their years using the AUC system. In fact, modern historians use AUC much more frequently than the Romans themselves did. The dominant method of identifying Roman years in Roman times was to name the two consuls who held office that year. The regnal year of the emperor was also used to identify years, especially in the Byzantine Empire after 537 when Justinian required its use.

        • Re:Poor solution (Score:4, Informative)

          by Chrisq ( 894406 ) on Tuesday August 24, 2010 @04:11AM (#33352416)

          1 BC was followed by 1 AD.

          Not with ISO 8601 [wikipedia.org] time representation, which more logically has a year zero before year one.

      • Re: (Score:3, Interesting)

        by at10u8 ( 179705 )
    The reason we adjust for solar time is that two standing international agreements demand that we define the day [ucolick.org] as a "mean solar day". Computer people and farmers can use different times if mean solar days are not made illegal and replaced with atomic days; that's what zoneinfo is for [ucolick.org].
    • Re: (Score:3, Interesting)

      by TapeCutter ( 624760 ) *
      "I also always wondered why undergraduate studies for computer science didn't make time a relevant issue."

      They did when I was at uni; I'd never heard of leap seconds or the 100yr & 400yr rules for leap days until I had to redo the same damned calendar assignment every time a new language was introduced.
    • The last thing we want is every programmer inventing his or her own way to deal with this - imagine the mess of mutually incompatible implementations. Moreover, just how do you propose to change time-validation code to accept 23:59:60? With leap-days, it is quite clear when February 29th is acceptable and when not (and look how many people still get it wrong!). With leap-seconds, there are no rules. Just imagine the myriad of ways creative people will be able to foul this up! Leap seconds should be comple
    Non-idiot programmers use a library for this sort of thing. Solve it ONCE.

    • by Zoxed ( 676559 )

      > The proper solution is to make programmers aware of leap seconds. There are 86400 seconds in a normal day; however, there is an additional second added once or twice a year to adjust for solar time.

      You should have checked your favoured Wikipedia first (!!): leap seconds *can* be added in June or December but they are actually only needed every few years, and none have been added for some time.
      (When I studied Computer Studies I did not learn about Leap Seconds. But when I started in the space industry th

    • by Chuck Chunder ( 21021 ) on Tuesday August 24, 2010 @02:10AM (#33351830) Journal
      I think it makes absolutely no sense for most computers or programmers to have to account for leap seconds.

      The reality is that computers already have to allow for their clock drifting from universal time; that's why we have NTP. There's no point getting individual computers to account for leap seconds; it would be easier and less error-prone if reference clocks transparently accommodated leap seconds (i.e. without sending a 23:59:60 to the world) and everyone else just drifted back into sync with them when one occurs.

      There may be a few applications where a computer really does need to accommodate leap seconds (such as a reference clock!) but for the rest of us the additional complexity gives no advantage whatsoever.
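
      For illustration, a reference clock could hide a leap second by smearing it over a window, so clients never see 23:59:60 at all (a hypothetical Python sketch; the 1000-second window is arbitrary):

          def smeared_offset(seconds_until_midnight: float, window: float = 1000.0) -> float:
              """Fraction of the leap second already applied inside the smear window."""
              if seconds_until_midnight >= window:
                  return 0.0
              return (window - seconds_until_midnight) / window

      Clients would simply see slightly long seconds near midnight, well within normal NTP slew tolerances.
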
    • ``I also always wondered why undergraduate studies for computer science didn't make time a relevant issue. It seems as if it's one of the more complex topics and yet, we don't pay any attention to it.''

      I couldn't agree more. Now that most programmers I know have moved on from languages with broken type systems and manual memory management, one of the few recurring issues I see in every project is time. Time zones, in particular, often aren't specified, or not picked up by people reading the specification. S

    • I also always wondered why undergraduate studies for computer science didn't make time a relevant issue. It seems as if it's one of the more complex topics and yet, we don't pay any attention to it. Last formal education I had on time (not talking about physic related, but calendar) was in primary school. There are so many time systems out there that we should pay more attention to educating programmers on it.

      Time-keeping and even leap-seconds are covered as far back as the frickin' K&R*. A sibling mentioned the ancient (and for some odd reason still dreaded) calendar and timer exercises that a huge chunk of CompSci students have had to face. It's not like leap seconds are some sort of big and sudden surprise that just popped up in the last couple of years (their implementation schedule might be, but still...)

      * My copy (2nd ed.) is at home, but I'm somewhat sure that someone on /. can reference one


    Accounting for leap seconds is complicated and error-prone. It's also completely unnecessary; the solution exists and it's called TAI [wikipedia.org].

      Example: how do you figure out how long an operation has taken? With TAI:
      starttime := tai()
      dosomething()
      duration := tai() - starttime

      That's it. How do you do that with leap seconds sneaking up on you? It's impossible to do, at least reliably.
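
      Most operating systems don't expose TAI directly, but a monotonic clock gives the same leap-second-free property for duration measurement. A runnable Python equivalent of the sketch above (dosomething is a placeholder workload):

          import time

          def dosomething():
              sum(range(10**6))  # stand-in for the real work

          starttime = time.monotonic()   # never jumps for leap seconds or NTP steps
          dosomething()
          duration = time.monotonic() - starttime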

  • Oracle (Score:5, Interesting)

    by Anonymous Coward on Tuesday August 24, 2010 @01:15AM (#33351460)

    Perhaps Oracle should concentrate more on making their software reliable, and less on lawsuits.

    From what I recall, Digital VMS didn't have that problem, and even had no problems migrating an always-on system across different processors and keeping the cluster running for more than 15 years. One second and Oracle crashes.

    It's a pity which of those companies survived.

    • Re: (Score:3, Interesting)

      by williamhb ( 758070 )

      Perhaps Oracle should concentrate more on making their software reliable, and less on lawsuits.

      From what I recall Digital VMS didn't have that problem, and even had no problems migrating an always on system over different processors, and keeping the cluster running over more than 15 years. One second and Oracle crashes.

      It's a pity which of those companies survived.

      Speaking empirically (and somewhat cheekily), isn't the lesson from your example that Digital should have concentrated less on making their software reliable, and more on lawsuits, in order to survive then?

      • Re: (Score:3, Interesting)

        by TheRaven64 ( 641858 )

        Actually, Digital should have known better than to bet their long-term strategy on a competitor. They had the fastest (by a significant margin) CPU architecture, in the form of the Alpha. They had an amazing OS in the form of VMS and a decent UNIX, Tru64 (some horrible bits, some really impressive ones). Then Intel came along and said they, and the other 64-bit CPU vendors, could reduce costs by consolidating their platforms on Intel's new Itanium chips.

        Unfortunately, Itanium sucked. It wasn't cheape

  • by NixieBunny ( 859050 ) on Tuesday August 24, 2010 @01:20AM (#33351482) Homepage
    They aren't predictable in advance. They are basically the noise in the solar system's timekeeping. It's impossible to write code that knows in advance when they will occur, since they are only announced six months ahead of time. So any clock that wants to stay in sync with UTC must be connected to NTP, GPS, or a similar timekeeping service.
    If only those darn astronomers didn't care so much about keeping the sun precisely on the meridian at Greenwich at noon, we wouldn't have this problem.
    • by joe_frisch ( 1366229 ) on Tuesday August 24, 2010 @01:41AM (#33351640)

      We could fix this tricky programming issue by regularly adjusting the earth's orbit....

    • by Yvanhoe ( 564877 )
      It seems reasonable to say that a computer program that needs to stay precise to the second over an interval of several years must have an NTP or GPS time source.
      In such a case it also seems reasonable to add a test to your test suite to account for a "leap second event on the 31st of December".
    • by at10u8 ( 179705 )
      Neither are changes to civil time as decreed by local authorities predictable, but zoneinfo manages to handle the problem. If leap seconds went into zoneinfo with the underlying time_t being uniform then the handling of leaps would be in user space, not in kernel space.
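
    A rough Python sketch of that user-space idea, treating each announced leap second like a zoneinfo decree (the uniform count here is hypothetical, not POSIX time_t; the two entries correspond to the 2005 and 2008 announcements):

        # (uniform_seconds_at_leap, seconds inserted) -- maintained like tzdata.
        LEAP_DECREES = [(1136073600, 1), (1230768000, 1)]

        def uniform_to_civil(uniform_t: int) -> int:
            """Map a uniform (leap-free) second count to the civil UTC second count."""
            return uniform_t - sum(n for t, n in LEAP_DECREES if uniform_t >= t)
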
  • by koinu ( 472851 ) on Tuesday August 24, 2010 @01:24AM (#33351520)

    Leap seconds are handled well when the system supports them properly and the software is not utter crap.

    I am always annoyed when people break basic things to make software work (e.g. hardware, also see ACPI). Now they are not only breaking hardware, but redefining measurements to make buggy software work. What comes next?

    I can understand when something is changed for convenience purposes (to have simpler calculations), but justified with buggy software is plain wrong. And I surely don't care if an Oracle database "reboots"... whatever that might mean.

  • by Joosy ( 787747 ) on Tuesday August 24, 2010 @01:33AM (#33351582)

    The original article has a quote from one person who sees through the mess to the root of the problem:

    The revision "doesn't resolve the underlying geophysical issue"

    Simply resolve the "underlying geophysical issue" and the problem will be solved.

  • Ok... (Score:5, Insightful)

    by Chyeld ( 713439 ) <`moc.liamg' `ta' `dleyhc'> on Tuesday August 24, 2010 @01:34AM (#33351586)

    Isn't this like legislating that PI is 3.14 because some people have problems with the idea of irrational numbers? If programs have issues with leap seconds, it sounds like programs weren't written properly, not that the spec needs to be rewritten to accommodate this flaw. Would these same people have demanded that it be 1999 again to avoid all the costs of the Y2K fixes?

  • This is Bull* ... (Score:2, Interesting)

    by garry_g ( 106621 )

    Sounds to me like some programmers are putting the blame on anyone but themselves ... I'm wondering, how do computer systems cope with re-syncing the local clock with a remote time source, e.g. an NTP server? Computer RTCs are _never_ exact, so updating the local time is necessary at regular intervals, which will always lead to time jumps of milli-, micro-, or even complete seconds and more. If your software can't cope with that, fix your software, but don't expect the universe to adapt to fix your shortcomings.

  • by at10u8 ( 179705 ) on Tuesday August 24, 2010 @01:51AM (#33351708)
    The historical record of time_t is already ambiguous [ucolick.org] and cannot be corrected by abandoning leap seconds. There is a way to get leap seconds out of the kernel and into user space [ucolick.org] which amounts to reclassifying them as decrees of change of civil time and putting them into zoneinfo while letting the broadcast time scale not have leaps. It's a matter for posterity whether the word "day" will be re-defined by the ITU-R, changed from the current treaty-specified "mean solar day" to a technically-defined "atomic day".
  • That is UTC with a choice of correction factors: never more than a 10 ns correction but at unpredictable times, a few hundred ns every week at midnight (old UTC) on Sunday, or an occasional annual update of a few seconds. Pick the update scheme that suits your situation best. They could be called UTCa, UTCb, etc. To most people, who don't care, it will all still be UTC.
  • ``Since their introduction in 1972, leap seconds have proved problematic for at least a few software programs. The leap second added on to the end of 2008, for instance, caused Oracle cluster software to reboot unexpectedly in some cases.''

    That just means that the software contains invalid assumptions. And, in this case, it seems to me that it was quite poorly worked out. Time being off by a second causes the system to reboot? I don't think that's what the customers ordered.

  • Getting rid of leap seconds in the representation would be a mistake in the long run. A much superior fix would be to have computers keep track of TAI internally and then convert to UTC with a leap second table, much the same way we convert to local time with a time zone table.

    Cluster software should be running off of a leap-second-free distributed clock, and TAI or the equivalent is the best one we have, handily provided (within a constant) on a worldwide basis courtesy of the GPS system.

    POSIX time_t val

  • They ought to have abandoned leap seconds in the year 2000, which would have made a dandy new epoch, and simplified all date calculations for a millennium or so. There is absolutely no reason to inflict leap seconds on civil time; the amount I'm off from the center of my current time zone already introduces more error than that. It's just not important to anyone but astronomers or masochists. Well, and maybe sadists (especially standards wonks).

  • by Terje Mathisen ( 128806 ) on Tuesday August 24, 2010 @03:09AM (#33352142)

    I've worked with NTP for nearly 20 years now, and the leap second adjustments aren't a real problem.

    The crux of the matter is that we've insisted (in both Unix and Windows) on measuring calendar events in seconds.

    The proper solution is to use Julian Day Number as the basic unit for calendars and (SI) seconds for any kind of short-term measurement. If you really need second/sub-second precision for long-term (multi-day) measurements, then you have to accept that the conversion is not just a matter of multiplying/dividing by 86400.

    Calendar appointments and similar things should be defined by day number and some form of fractional day, not SI seconds.
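
    For reference, a standard integer algorithm for the Gregorian-date-to-Julian-Day-Number conversion (a Python sketch):

        def julian_day_number(y: int, m: int, d: int) -> int:
            a = (14 - m) // 12              # 1 for Jan/Feb, 0 otherwise
            y2 = y + 4800 - a
            m2 = m + 12 * a - 3             # March = 0 ... February = 11
            return (d + (153 * m2 + 2) // 5 + 365 * y2
                    + y2 // 4 - y2 // 100 + y2 // 400 - 32045)

        assert julian_day_number(2000, 1, 1) == 2451545  # J2000.0 reference day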

    NTP is somewhat to blame though: even though it has full support for leap second handling (both adding and subtracting), the core of the protocol pretends that UTC seconds (without leap adjustments) are sufficient. NTP timestamps are defined to be in a 64-bit fixed-point format, with 32 bits counting seconds since 1900-01-01 and 32 bits for the fractional second, which is sufficient to handle a 136-year block with a quarter of a nanosecond of resolution.

    http://www.eecis.udel.edu/~mills/ntp/html/ntpd.html#leap [udel.edu]
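
    Concretely, converting a Unix timestamp to that on-wire format is just an epoch shift plus a fixed-point fraction (a Python sketch; 2208988800 is the well-known leap-free span between 1900-01-01 and 1970-01-01):

        NTP_UNIX_DELTA = 2208988800  # seconds from 1900-01-01 to 1970-01-01

        def to_ntp_timestamp(unix_time: float) -> tuple:
            seconds = (int(unix_time) + NTP_UNIX_DELTA) & 0xFFFFFFFF  # wraps every 136 years
            fraction = int((unix_time % 1.0) * 2**32)  # ~233 ps resolution
            return seconds, fraction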

    This causes small hiccups for an hour or so after each adjustment: the primary servers, and those that either have a leap-second-aware source or a clueful operator, keep in sync throughout the adjustment, while the remainder slowly detect that their local clocks seem to be a full second off. Since this is way more than the default +/- 128 ms which NTP is willing to handle with gradual adjustments, NTPD will instead step the clock (backwards for an added leap second) and restart the protocol engine, after discarding all history.

    Modern versions of NTP have been rewritten to use a vote among all otherwise-good servers: if a majority claim that there will be a leap second at the end of the current day, then the local daemon will believe them, and thereby stay in sync even during the actual leap second itself.

    Terje

  • by kthreadd ( 1558445 ) on Tuesday August 24, 2010 @06:31AM (#33353092)
    Well, obviously the Oracle software worked properly and noticed that the customer had not paid for a license to include the extra unlicensed second of operation.
  • by rossdee ( 243626 ) on Tuesday August 24, 2010 @08:16AM (#33353918)

    Why don't they drop the silly daylight saving time thing?
    It's been proved that nowadays it doesn't save any electricity, and it just messes up people's schedules and biological clocks.
    In the latest issue of SciAm it's listed as one of the inventions humanity would be better off without (along with the space shuttle).
