'Leap Seconds' May Be Eliminated From UTC
angry tapir writes "Sparking a fresh round of debate over an ongoing issue in time-keeping circles, the International Telecommunication Union is considering eliminating leap seconds from the time scale used by most computer systems, Coordinated Universal Time (UTC). Since their introduction in 1972, leap seconds have proved problematic for at least a few software programs. The leap second added on to the end of 2008, for instance, caused Oracle cluster software to reboot unexpectedly in some cases."
Stupid (Score:2, Interesting)
Yeah, leap seconds suck, but the proposed solution (to let UTC drift farther and farther away from reality) sucks even harder. UTC should just be abolished in favor of UT1. Computer clocks are so crude anyway (mine is off by 3 seconds right now) that the supposed benefits of UTC's constant second are practically non-existent; every computer needs to have its time adjusted now and then no matter what.
Oracle (Score:5, Interesting)
Perhaps Oracle should concentrate more on making their software reliable, and less on lawsuits.
From what I recall, Digital VMS didn't have that problem, and even had no trouble migrating an always-on system across different processors and keeping the cluster running for more than 15 years. One second and Oracle crashes.
It's a pity which of those companies survived.
Re:Poor solution (Score:3, Interesting)
If you were to count the number of days since AD 0, would you ignore leap days? UTC is a count of seconds since a specific time. All computers do is count time. It's the user interface that should adjust for leap seconds when it converts to local time.
Why can't computer people and farmers use different time metrics?
Re:Poor solution (Score:5, Interesting)
But since respectable companies tend to run their own SNTP servers, which themselves adjust against national servers (we hope), it could simply be a good idea to ditch the leap second in favor of fixing all the clocks.
But I think the real issue in the article is the occasions where "17:59:60" is a valid time. For presentation (and databases), it would in fact have been better to simply prolong 17:59:59, or to progressively add a millisecond over each of the next 1000 seconds, for example. Although it might throw off scientific calculations during that period, the impact would be less critical.
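Something like this rough Python sketch is what I have in mind (the function, constant, and numbers are invented for illustration; this is not how any real timeserver does it):

# Smear one leap second across the next 1000 seconds by letting the
# displayed clock fall behind the raw count by 1 ms per second.
SMEAR_SECONDS = 1000

def smeared_time(raw, leap_start):
    """raw: naive seconds-since-epoch; leap_start: when the smear begins."""
    elapsed = raw - leap_start
    if elapsed < 0:
        return raw                            # smear not started yet
    if elapsed < SMEAR_SECONDS:
        return raw - elapsed / SMEAR_SECONDS  # 1 ms behind per second
    return raw - 1.0                          # whole leap second absorbed

print(smeared_time(1000500.0, 1000000.0))     # 500 s in: 0.5 s behind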
Re:Poor solution (Score:3, Interesting)
They did when I was at uni; I'd never heard of leap seconds or the 100-year and 400-year rules for leap days until I had to redo the same damned calendar assignment every time a new language was introduced.
This is Bull* ... (Score:2, Interesting)
Sounds to me like some programmers are putting the blame on anyone but themselves... I'm wondering: how do computer systems cope with re-syncing the local clock with a remote time source, e.g. an NTP server? Computer RTCs are _never_ exact, so updating the local time is necessary at regular intervals, which will always lead to time jumps of milliseconds, microseconds, or even whole seconds and more. If your software can't cope with that, fix your software, but don't expect the universe to adapt to fix your shortcomings!
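For what it's worth, NTP copes with small offsets by slewing rather than stepping. A toy Python illustration (not real ntpd code; the 500 ppm rate below mimics ntpd's slew bound, everything else is invented):

def slew_ticks(local, reference, rate=0.0005, tick=1.0):
    """How many ticks to converge on the reference without stepping.
    rate is the max fractional adjustment per tick (500 ppm here)."""
    ticks = 0
    offset = reference - local
    while abs(offset) > 1e-6:
        step = max(-rate * tick, min(rate * tick, offset))
        local += tick + step     # each tick runs slightly long or short
        reference += tick        # the reference advances normally
        offset = reference - local
        ticks += 1
    return ticks

# Slewing away a 0.2 s offset at 0.5 ms per second takes ~400 seconds.
print(slew_ticks(100.0, 100.2))   # -> 400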
The best solution is a robust solution (Score:4, Interesting)
The reality is that computers already have to allow for their clocks drifting from universal time; that's why we have NTP. There's no point having individual computers account for leap seconds. It would be easier and less error-prone if reference clocks transparently accommodated leap seconds (i.e. without sending a 23:59:60 to the world) and everyone else just drifted back into sync with them when one occurs.
There may be a few applications where a computer really does need to accommodate leap seconds (such as a reference clock!), but for the rest of us the additional complexity gives no advantage whatsoever.
Re:Unreliable (Score:4, Interesting)
So you knew that leap seconds should be tested for, did you?
I'm not defending Oracle, but at least give them this much credit: leap seconds don't exactly spring to mind when you're planning a test suite for software. Certainly after this incident I can't imagine they would miss it again, but I'd be surprised if anyone could claim they knew to test for them beforehand.
Re:Oracle (Score:3, Interesting)
Perhaps Oracle should concentrate more on making their software reliable, and less on lawsuits.
From what I recall, Digital VMS didn't have that problem, and even had no trouble migrating an always-on system across different processors and keeping the cluster running for more than 15 years. One second and Oracle crashes.
It's a pity which of those companies survived.
Speaking empirically (and somewhat cheekily), isn't the lesson from your example that Digital should have concentrated less on making their software reliable, and more on lawsuits, in order to survive?
Re:Ok... (Score:2, Interesting)
No, it is not the same thing. Pi can be computed algorithmically, but there is no algorithmic way of predicting leap seconds, since they are added whenever it is deemed necessary.
Thus, you cannot write a program that "handles" leap seconds without constant outside input (for example, an internet connection and Wikipedia). I would be much annoyed if my wristwatch wanted to connect to Wikipedia every couple of months...
I consider the leap second a big mistake, since it makes it impossible to talk precisely about future time. I can state that "X seconds have passed since 2008" but not "Y seconds will pass until 2012".
A leap hour every couple of hundred years (as the article mentions) would cause much less trouble.
The real problem is using seconds for everything (Score:5, Interesting)
I've worked with NTP for nearly 20 years now, and the leap second adjustments aren't a real problem.
The crux of the matter is that we've insisted (in both Unix and Windows) on measuring calendar events in seconds.
The proper solution is to use Julian Day Number as the basic unit for calendars and (SI) seconds for any kind of short-term measurement. If you really need second/sub-second precision for long-term (multi-day) measurements, then you have to accept that the conversion is not just a matter of multiplying/dividing by 86400.
Calendar appointments and similar things should be defined by day number and some form of fractional day, not SI seconds.
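For the curious, here is the standard Fliegel-Van Flandern conversion in Python; representing an appointment as (day number, fraction) is my own illustration of the scheme, not an existing API:

def julian_day_number(year, month, day):
    # Fliegel-Van Flandern: Gregorian date -> Julian Day Number
    a = (14 - month) // 12
    y = year + 4800 - a
    m = month + 12 * a - 3
    return (day + (153 * m + 2) // 5 + 365 * y
            + y // 4 - y // 100 + y // 400 - 32045)

# An appointment as day number plus fractional day:
appointment = (julian_day_number(2012, 1, 1), 0.5)   # noon, 2012-01-01
print(appointment)   # (2455928, 0.5)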
NTP is somewhat to blame, though: even though it has full support for leap second handling (both adding and subtracting), the core of the protocol pretends that UTC seconds (without leap adjustments) are sufficient, i.e. NTP timestamps are defined to be in a 64-bit fixed-point format, with 32 bits counting seconds since 1900-01-01 and 32 bits for the fractional second, sufficient to handle a 136-year block at roughly a quarter of a nanosecond of resolution.
http://www.eecis.udel.edu/~mills/ntp/html/ntpd.html#leap [udel.edu]
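To make the format concrete, a small Python sketch that decodes the timestamp field described above (0x83AA7E80 is the well-known 1900-to-1970 offset; era rollover handling is deliberately omitted):

import struct
from datetime import datetime, timedelta, timezone

NTP_EPOCH = datetime(1900, 1, 1, tzinfo=timezone.utc)

def decode_ntp_timestamp(raw8):
    # 8 bytes, big-endian: 32-bit seconds since 1900, 32-bit fraction
    seconds, fraction = struct.unpack("!II", raw8)
    return NTP_EPOCH + timedelta(seconds=seconds + fraction / 2**32)

# 0x83AA7E80 seconds after 1900-01-01 is the Unix epoch:
print(decode_ntp_timestamp(bytes.fromhex("83AA7E8000000000")))
# -> 1970-01-01 00:00:00+00:00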
This causes small hiccups for an hour or so after each adjustment: the primary servers, and those that either have a leap-second-aware source or a clueful operator, stay in sync throughout the adjustment, while the remainder slowly detect that their local clocks seem to be a full second off. Since this is far more than the default +/- 128 ms that NTP is willing to handle with gradual adjustments, NTPD will instead step the clock (backwards, for an added leap second) and restart the protocol engine after discarding all history.
Modern versions of NTP have been rewritten to use a vote among all otherwise-good servers: if a majority claim that there will be a leap second at the end of the current day, then the local daemon will believe them, and thereby stay in sync even during the leap second itself.
Terje
Re:The problem with leap seconds... (Score:1, Interesting)
I'm an astronomer, and I hate the inclusion of leap seconds in UTC. If I have UTC timestamps for two observations, a few decades apart, and I want to find the exact interval between them (to track changes in the rotation period of a star, for example), then I need to look up all the leap seconds that occurred between them.
Honestly, is there anyone who cares about both (a) keeping the sun on the Greenwich meridian at high noon, and (b) timing to a precision of better than 1 second/year? Removing leap seconds from UTC would make doing either (a) or (b) easier; it only makes things harder for people who want to do both.
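To make the bookkeeping concrete, a Python sketch (the table below is deliberately partial, holding just the two leap seconds of the last decade; the authoritative list comes from the IERS and cannot be generated algorithmically):

from datetime import datetime, timezone

LEAP_SECONDS = [                                  # UTC instants just after an insertion
    datetime(2006, 1, 1, tzinfo=timezone.utc),    # end of 2005
    datetime(2009, 1, 1, tzinfo=timezone.utc),    # end of 2008
]

def true_interval(t0, t1):
    """Elapsed SI seconds between two UTC datetimes, leaps included."""
    naive = (t1 - t0).total_seconds()   # what plain subtraction gives
    leaps = sum(1 for ls in LEAP_SECONDS if t0 < ls <= t1)
    return naive + leaps

a = datetime(2005, 6, 1, tzinfo=timezone.utc)
b = datetime(2009, 6, 1, tzinfo=timezone.utc)
print(true_interval(a, b) - (b - a).total_seconds())   # -> 2.0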
This is ridiculous. Use TAI, problem solved (Score:3, Interesting)
Accounting for leap seconds is complicated and error-prone. It's also completely unnecessary; the solution exists, and it's called TAI [wikipedia.org].
Example: how do you figure how long an operation has taken? With TAI:

starttime := tai()
dosomething()
duration := tai() - starttime
That's it. How do you do that with leap seconds sneaking up on you? It's impossible to do, at least reliably.
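In practice, if all you need is a duration, a monotonic clock already gives you the TAI-like behaviour. A Python sketch of the same measurement (Linux also exposes CLOCK_TAI, but that only helps if the kernel's TAI offset has been configured):

import time

start = time.monotonic()        # never jumps for leap seconds or NTP steps
total = sum(range(1000000))     # stand-in for dosomething()
duration = time.monotonic() - start
print("took %.6f s" % duration)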
Re:Let's see if I've got this right (Score:5, Interesting)
Do we actually care about that level of accuracy? The leap second is a stupid idea to start with. We have leap years because a calendar year is about a quarter of a day shorter than a solar year. Without them, you'd have the seasons slowly moving around the calendar: the equinox would move by one day every four years, and so on. This was a problem for pre-technical societies, which depended heavily on the calendar for planting crops and avoiding starvation, but it's irrelevant now. We're stuck with it, though, and it does make it a bit easier to remember where the seasons are, although they won't change by much over a person's lifetime.
Leap seconds, in contrast, are completely pointless. They exist because the SI day is slightly shorter than the mean solar day, by a tiny fraction of a second. This means that, after a few years, the sun will not be quite at its apex precisely at midday. How much is the variation? We've had 24 leap seconds since they were introduced in 1972, but a lot of those were to slowly correct the already-wrong time. In the last decade, we've had two. At that rate, it will take 300 years for the sun to be a minute off, and 18,000 years for it to be an hour off. These numbers are slightly wrong: the solar day becomes a bit under 2 ms longer every hundred years, so we'd need leap seconds more often later on.
In the short term, they introduce a lot of disruption (see the air traffic control problems for a good reason why we shouldn't have them: safety-critical systems that depend on time synchronisation and don't reliably work with leap seconds. Great.) They don't provide any noticeable benefit. Maybe after a thousand or so years, UTC will be offset enough from the solar day that it will be irritating, but if people still care about the relationship between clock noon and solar noon then they can add a leap minute or two to compensate. Or they can just let it drift. I'd like to think that a significant proportion of the human population will not be on the Earth by that point, and so purely local inconsistencies with the time won't matter to them.
Re:Oracle (Score:3, Interesting)
Actually, Digital should have known better than to bet their long-term strategy on a competitor. They had the fastest (by a significant margin) CPU architecture in the form of the Alpha. They had an amazing OS in the form of VMS, and a decent UNIX (some horrible bits, some really impressive ones) in Tru64. Then Intel came along and said that they, and the other 64-bit CPU vendors, could reduce costs by consolidating their platforms on Intel's new Itanium chips.
Unfortunately, Itanium sucked. It wasn't cheaper than the Alpha, and its performance only became better eventually, after the Alpha had gone without active development for several years. VMS lingered on VAX, Alpha, and Itanium, reduced to a niche role. Tru64 went the same way as most other commercial UNIX systems.
If they'd kept developing Alpha and ported VMS to x86, they'd probably still be around. They'd probably also have been in a good position to partner with ARM to produce high performance, low power, chips.
Re:Poor computer clocks (Score:4, Interesting)
No, they don't. For many low-end computers, the clock chip is quite cheap, and it runs under pretty harsh thermal conditions (dependent on layout, airflow, and heat from the CPU or other energy-hungry components). The quartz crystal on your wrist doesn't experience anything like those thermal variations; this is why most computers are expected to synchronize with a master clock, such as an NTP service.
Re:Stupid (Score:3, Interesting)
No, it doesn't. Simply use UTC as an abstract "seconds after a certain point" and use time zone data to adjust for local solar time. It's simpler, less likely to result in weird bugs, and lets anyone who wants to adjust their local time every day do so.
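That scheme in miniature, as a Python sketch using the standard zoneinfo module (3.9+): store the absolute instant, and apply zone data only at display time.

from datetime import datetime, timezone
from zoneinfo import ZoneInfo

stored = datetime(2008, 12, 31, 23, 59, 59, tzinfo=timezone.utc)

# Only the display layer knows about zones (and it could learn about
# leap seconds the same way):
print(stored.astimezone(ZoneInfo("America/New_York")))  # 18:59:59-05:00
print(stored.astimezone(ZoneInfo("Europe/London")))     # 23:59:59+00:00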
Re:Let's see if I've got this right (Score:3, Interesting)
...and the noonday sun is probably not overhead anyway
I live in the UK, 2 degrees off the meridian, and it is summertime so we are on BST. So at noon the sun is 1:08:00 away from being overhead... a few seconds is largely irrelevant at this point.
Re:Let's see if I've got this right (Score:5, Interesting)
It wasn't just Oracle. The Linux kernel would deadlock if the system was under load when the leap second happened. I only had one server hang, but a customer with a rack of busy servers had about half of them freeze. Lots of "fun" on New Year's Eve. Even more annoying was that the problem wasn't in handling the leap second, it was in printing a message that the leap second had been handled.
Re:Let's see if I've got this right (Score:2, Interesting)
It sounds like the problem is that the leap second is implemented as a step function and not a slew.
Couldn't the problem be fixed by the affected systems slewing over a long period, say, one whose individual updates are much smaller than the system's maximum clock synchronization tolerance?
Seems to me it would make much more sense for these systems to count:

58 (ticking .01, .02, ... .98, .99)
59
00+
01+
...
xx+
yy+
zz

(each "+" second running fractionally long until the extra second has been absorbed by second zz), instead of:

58
59
60
00
01
02
Re:Let's see if I've got this right (Score:3, Interesting)
We don't rely on the sun anymore. There's no good reason to go through all this trouble trying to keep it in the sky between 7am and 7pm, with a high point at 12; those are just arbitrary numbers. Let's fix time so it's the same world-wide. Then we can get up when it's light, go to bed when it's dark, and stop screwing around with the numbers. Really, there are a lot of better things we can do with our lives.
Re:Let's see if I've got this right (Score:2, Interesting)
The usual objection I see to this is that you'd then introduce a very special event that no one will have programmed for at all. It's far better to have more regular events of this kind, so programmers know they have to handle them...
... not that it's worked so far....
Re:Stupid (Score:3, Interesting)
Beats me. For computers I would recommend using TAI, with the Unix epoch defined as 1970-01-01T00:00:00 (TAI), and the stored value being the number of seconds elapsed since that time. Time could be displayed for human consumption in UTC, using a timezone-style file to determine the offset, much as is currently done for local time and the inane Daylight Saving Time rules. Displaying future values for human consumption would be a bit problematic if the exact seconds are important, since leap seconds (and changes to timezones or DST rules) cannot be accurately predicted ahead of time. However, for most human purposes, giving a time of day accurate to the second for some future event is neither necessary nor useful.
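A Python sketch of what that would look like (the offsets 10 s and 34 s are the real historical TAI-UTC values for 1972 and post-2008, but the placement of the table entries as epoch counts is simplified and the table is far from complete):

from datetime import datetime, timedelta

EPOCH = datetime(1970, 1, 1)    # nominally 1970-01-01T00:00:00 TAI

TAI_UTC = [                     # (approx. seconds since epoch, TAI-UTC then)
    (63072000, 10),             # 1972-01-01: TAI-UTC fixed at 10 s
    (1230768034, 34),           # just after the 2008-12-31 leap second
]

def render_utc(tai_count):
    """Stored TAI second count -> UTC datetime, for display only."""
    offset = 0
    for since, delta in TAI_UTC:
        if tai_count >= since:
            offset = delta
    return EPOCH + timedelta(seconds=tai_count - offset)

print(render_utc(1300000034))   # an instant in 2011, rendered as UTC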
Of course, time in general is a very tricky thing. Special and General relativity do indicate that the concept of keeping a universally synchronized timebase is never going to work the way we would hope. We will continue to have more and more time standards created in the future, as we need them. It is just the way of things.
Re:Let's see if I've got this right (Score:2, Interesting)
The length of the second in GMT is not fixed, which makes it really hard to use for anything that requires precision. A second in GMT is defined by the length of the day at Greenwich, so if the Earth's spin rate increases (see the Chile earthquake), a GMT second gets shorter.
NASA and others would be much better off using TAI than UTC.