Getting around the problem
Clocks are not normally designed to count a second numbered 60, so systems must work around it somehow. Some possibilities:
- Some Linux kernels step the clock back 1 second, repeating second 59. For more information: Resolve Leap Second Issues in Red Hat Enterprise Linux;
- Windows servers ignore second 60 and resynchronize with the atomic clocks shortly after it passes. This means they count second 0 of July 1st twice. For more information: How the Windows Time service Treats a Leap Second;
- Some organizations, including Amazon Web Services, plan to spread the extra second over several hours by making each second slightly longer (a technique known as "leap smear");
- If the clock is not connected to a synchronization system, it simply does not apply any adjustment at all.
Source: Look Before You Leap - The Coming Leap Second and AWS
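The "leap smear" idea is easy to sketch numerically. The function below is a hypothetical linear smear over a 24-hour window; the function name, window length, and linear shape are illustrative assumptions, not AWS's actual implementation:

```python
def smeared_offset(elapsed_s, window_s=86400.0, leap_s=1.0):
    """Fraction of the extra leap second already absorbed after
    `elapsed_s` seconds into a smear window of `window_s` seconds.

    Each smeared second is slightly longer than a real second, so the
    clock drifts gradually until the full leap second is absorbed.
    """
    if elapsed_s <= 0:
        return 0.0          # smear has not started yet
    if elapsed_s >= window_s:
        return leap_s       # the whole leap second has been absorbed
    return leap_s * elapsed_s / window_s

# Halfway through the window, half the leap second has been absorbed:
print(smeared_offset(43200.0))  # 0.5
```

The key property is that the clock never shows a 60th second and never jumps backward; it just runs imperceptibly slow for the duration of the window.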
Possible complications
Many technological devices synchronize their clocks with an atomic clock. However, many of them were not programmed to handle the possibility of an extra second, so when the system encounters one the result is unpredictable, which may crash servers and consequently take their services down.
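To make this concrete: many date/time APIs simply reject a 60th second. Python's `datetime` is one example I can verify; it refuses to represent 23:59:60 at all, so any code that receives such a timestamp must handle the failure itself:

```python
from datetime import datetime

# datetime only accepts seconds in the range 0..59, so the leap
# second 2015-06-30 23:59:60 UTC cannot even be constructed:
try:
    leap = datetime(2015, 6, 30, 23, 59, 60)
except ValueError as exc:
    print("rejected:", exc)
```

Systems that never anticipated receiving second 60 from an upstream time source are exactly the ones at risk of crashing when it arrives.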
In 2012, Mozilla, Reddit, Foursquare, Yelp, LinkedIn and StumbleUpon suffered system crashes when the leap second was added. Google, on the other hand, which used the "leap smear" tactic, escaped unscathed.
This year, some servers are expected to run into the same problem again.
Source: Daily News - 'Leap Second' Coming up June 30 may cause computer system problems
Update: what damage did the 2015 leap second cause?
Although AWS said the leap second was not to blame, their services were down for just over 40 minutes; not at 00:00 UTC, but from 00:25 to 01:07 UTC, taking down services like Slack, Netflix, Pinterest and thousands of other websites and services.
The news:
Between 5:25 PM and 6:07 PM PDT we experienced an Internet connectivity issue with a provider outside of our network which affected traffic from some end-user networks. The issue has been resolved and the service is operating normally.
The root cause of this issue was an external Internet service provider incorrectly accepting a set of routes for some AWS addresses from a third party who inadvertently advertised these routes. Providers should normally reject these routes by policy, but in this case the routes were accepted and propagated to other ISPs, affecting some end users' ability to access AWS resources. Once we identified the provider and third-party network, we took action to route traffic around this incorrect routing configuration. We have worked with this external Internet service provider to ensure that this does not reoccur.
Source: AWS Service Health Dashboard
According to them, the fault lay with external providers that incorrectly accepted a set of routes to some AWS addresses, inadvertently announced by a third party... Not clear to you? Not to me either. The fact is that many people suspect the problem really was the leap second, even though AWS denies it.
Source: Mashable - Slack, Netflix, Pinterest crash and you can’t Blame the Leap Second
Thanks for the update with today's news. I asked the question precisely because I believed we would see "strange" behavior in the following days.
– Guilherme de Jesus Santos
You're welcome. I took a look at the news but I haven't seen anything more serious; I think we're learning to deal with this situation :)
– Math