<b>Pinkston's Law</b><br />
"Most outages begin as upgrades"<br />
<br />
<b>NYSE says 3.5 hour outage caused by software update (July 10, 2015)</b><br />
On a day filled with stories of hacks and outages, this one seemed to get the most attention.<br />
<blockquote class="tr_bq">
<span style="background-color: white; color: #636868; font-family: Roboto, Helvetica, Arial, sans-serif; font-size: 14px; line-height: 20px;">On Tuesday evening, the NYSE began the rollout of a software release in preparation for the July 11 industry test of the upcoming SIP timestamp requirement. As is standard NYSE practice, the initial release was deployed on one trading unit. As customers began connecting after 7am on Wednesday morning, there were communication issues between customer gateways and the trading unit with the new release. It was determined that the NYSE and NYSE MKT customer gateways were not loaded with the proper configuration compatible with the new release.</span></blockquote>
The "SIP timestamp requirement" mentioned in the statement is an interesting topic in itself. Bloomberg has a bit more detail about this bit of esoterica here: <a href="http://www.bloombergview.com/articles/2015-07-09/market-complexity-broke-the-nyse-before-saving-it" target="_blank">http://www.bloombergview.com/articles/2015-07-09/market-complexity-broke-the-nyse-before-saving-it</a>.<br />
<br />
Source: <a href="https://www.nyse.com/market-status/history" target="_blank">https://www.nyse.com/market-status/history</a>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5018869173202937575.post-75461533793250337092014-06-21T07:37:00.000-07:002015-07-10T07:44:25.873-07:00Facebook worldwide outageFacebook was down worldwide for a half hour, and it was widely suspected to be due to a DDoS attack on the popular social media site. But - as usual - it was Pinkston's Law at work. Here's an official spokes-droid's statement:<br />
<br />
<blockquote class="tr_bq">
We ran into an issue while updating the configuration of one of our software systems. Not long after we made the change, some people started to have trouble accessing Facebook. We quickly spotted and fixed the problem, and in less than 30 minutes Facebook was back to 100% for everyone. This doesn't happen often, but when it does we make sure we learn from the experience so we can make Facebook that much more reliable. Nothing is more important to us than making sure Facebook is there when people need it, and we apologize to anyone who may have had trouble connecting.</blockquote>
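Configuration changes are the sneakiest kind of upgrade, because they rarely get the testing that a code release does. The standard defense is a staged ("canary") rollout with automatic rollback, sketched below in Python. The helpers -- apply_config, error_rate, the stage fractions -- are stand-ins I invented for illustration, not Facebook's internal tooling.<br />
<pre>
import time

# Hypothetical staged config rollout with automatic rollback. The helpers
# are stand-ins for real deployment and monitoring hooks.

STAGES = [0.01, 0.05, 0.25, 1.0]   # fraction of the fleet getting the change
ERROR_BUDGET = 0.02                # abort if the error rate exceeds 2%

def apply_config(new_config, fraction):
    print(f"applying config to {fraction:.0%} of the fleet")

def rollback(old_config):
    print("error budget exceeded; rolling back")

def error_rate():
    return 0.001                   # stand-in for a real monitoring query

def staged_rollout(new_config, old_config):
    for fraction in STAGES:
        apply_config(new_config, fraction)
        time.sleep(1)              # let the metrics settle (toy interval)
        if error_rate() > ERROR_BUDGET:
            rollback(old_config)
            return False
    return True

if staged_rollout({"feature_x": True}, {"feature_x": False}):
    print("rollout complete")
</pre>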
<br />
<b>Super Bowl blackout - not caused by Beyonce, but possibly by Pinkston's Law? (February 7, 2013)</b><br />
There is a lot of finger-pointing down in NOLA-land with regard to the 35-minute power failure during the Super Bowl. So far, it does not look like we can blame it on Beyonce's <strike>lip-syncing</strike> awesome performance at half-time or <a href="http://www.beyonce-illuminati.com/" target="_blank">her membership in the Illuminati</a>.<br />
<br />
Instead, it looks like Pinkston's Law may have had a hand:<br />
<blockquote class="tr_bq">
A recent electrical system upgrade at the Superdome may have contributed to the blackout during the Super Bowl Sunday, officials say.</blockquote>
Full article is at <a href="http://www.upi.com/Science_News/Technology/2013/02/05/Upgrades-linked-to-Super-Bowl-power-outage/UPI-93711360096170/" target="_blank">the UPI website</a>.<br />
<br />
<b>Amazon's Elastic Compute Cloud goes down hard (April 30, 2011)</b><br />
The bigger they are...<br />
<br />
On April 21, 2011, Amazon's Elastic Compute Cloud went down when a planned upgrade was "executed incorrectly":<br />
<blockquote class="tr_bq">
The goal "was to upgrade the capacity of the primary network," Amazon says. "During the change one of the standard steps is to shift traffic off of one of the redundant routers in the primary EBS [Elastic Block Store] network to allow the upgrade to happen. The traffic shift was executed incorrectly and rather than routing the traffic to the other router on the primary network, the traffic was routed onto the lower capacity redundant EBS network."<br />
<br />
Ultimately, this meant a portion of the storage cluster "did not have a functioning primary or secondary network because traffic was purposely shifted away from the primary network and the secondary network couldn't handle the traffic level it was receiving." </blockquote>
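The failure mode here -- draining traffic onto a path that cannot absorb it -- is one that a capacity-aware guard in the change tooling can catch. Here is a minimal sketch in Python; the capacities and traffic figures are invented, and this is emphatically not Amazon's actual change system.<br />
<pre>
# Hypothetical guard for a traffic shift: refuse to drain a router unless
# the target network can absorb the displaced load.

def safe_to_shift(traffic_gbps, target_capacity_gbps,
                  target_current_gbps, headroom=0.8):
    """Allow the shift only if the target stays under `headroom` utilization."""
    projected = target_current_gbps + traffic_gbps
    return projected <= headroom * target_capacity_gbps

# Intended target: the other router on the high-capacity primary network.
print(safe_to_shift(40, target_capacity_gbps=100, target_current_gbps=35))  # True

# Actual (mistaken) target: the lower-capacity secondary EBS network.
print(safe_to_shift(40, target_capacity_gbps=30, target_current_gbps=5))    # False
</pre>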
Read the full article at <a href="http://www.networkworld.com/news/2011/042911-amazon-explanation.html" target="_blank">Network World</a>.<br />
<br />
<b>Oregon Employment Division Servers and Phones Crash: 10/04/2009</b><br />
<ul><li><p><b>Length of outage: Officially, 10 hours. In reality, 24+ hours</b></p></li>
<li><b>Number of people affected: 165,000 unemployment recipients, plus OED staffers</b></li></ul>
<p>I happened to be one of those affected by this outage, because at the time, I was drawing unemployment!</p>
<p>The original article in the Oregonian the following Monday spun the story to make it sound as if the extra load of new people applying for benefits had crashed the system. Even in this later, edited version, you don't find the truth until well down the page:</p>
<p>Original news story <a href="http://www.oregonlive.com/education/index.ssf/2009/10/employment_department_phones_a.html"><b>HERE</b></a>.</p>
<p>Here's where the truth comes out:</p>
<blockquote>Problems started Sunday when a computer server crashed while state workers were doing maintenance on the state's computer network. The 60 percent of unemployed who usually file online for their weekly checks turned to the telephone to file their claims on the state's interactive voice response system. At the same time, the group looking for emergency extensions also were swamping the phone lines.</blockquote>
<p>So, they don't <i>explicitly</i> say it was an upgrade, but the system was down when I tried to use it early on Sunday morning, indicating that they had been working on it during the overnight shift. This smells suspiciously like an upgrade was being applied. <b>Pinkston's Law!</b></p>
<p>It is also an interesting example of the cascading failure effect: when people could not file online, they moved to the phones to file on Monday (so much for the 10-hour outage -- the system was still down Monday morning). The phone system is not sized to handle all of the traffic that the online system handles, so it crashed, too.</p>
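The arithmetic of that cascade is worth spelling out. Using the article's figure that about 60 percent of claimants normally file online (every other number below is my own illustrative guess), a quick Python sketch shows why the phones never stood a chance:<br />
<pre>
# Back-of-the-envelope illustration of the cascade: when the online channel
# fails, its users fall over to the phone system, which was sized for the
# normal phone share only. Numbers other than the 60% online share are
# invented for illustration.

claimants      = 165_000
online_share   = 0.60
phone_capacity = int(claimants * (1 - online_share) * 1.2)  # ~20% slack

normal_phone_load   = int(claimants * (1 - online_share))
failover_phone_load = claimants            # everyone tries the phones

print(f"phone capacity:        {phone_capacity:,}")
print(f"normal phone load:     {normal_phone_load:,} (fine)")
print(f"load after web outage: {failover_phone_load:,} "
      f"({failover_phone_load / phone_capacity:.1f}x capacity -- collapse)")
</pre>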
<br />
<b>Google's Gmail Outage Caused by Upgrade Error: 09/01/2009</b><br />
<ul><li><b>Length of outage: Two hours</b></li>
<li><b>Number of people affected: Unknown - certainly millions</b></li>
</ul>Gmail is a very popular free webmail service that many people use daily. What is less well-known is that Gmail is also used extensively in business as a paid, enterprise-grade service.<br />
So, while folks who use the free personal email side of Gmail are annoyed when it goes down, business users are -- understandably -- furious.<br />
News story:<br />
<a href="http://www.eweekeurope.co.uk/news/google-s-gmail-outage-caused-by-upgrade-error-1738">http://www.eweekeurope.co.uk/news/google-s-gmail-outage-caused-by-upgrade-error-1738</a><br />
While Google does have the admirable motto, "Don't be evil," they are sometimes quite tight-lipped about the specific causes of outages. This time they made it clear, in a statement by Ben Treynor, "VP Engineering and Site Reliability Czar":<br />
<a href="http://gmailblog.blogspot.com/2009/09/more-on-todays-gmail-issue.html">http://gmailblog.blogspot.com/2009/09/more-on-todays-gmail-issue.html </a><br />
<blockquote>Here's what happened: This morning (Pacific Time) we took a small fraction of Gmail's servers offline to perform routine upgrades. This isn't in itself a problem — we do this all the time, and Gmail's web interface runs in many locations and just sends traffic to other locations when one is offline.<br />
<br />
However, as we now know, we had slightly underestimated the load which some recent changes (ironically, some designed to improve service availability) placed on the request routers — servers which direct web queries to the appropriate Gmail server for response. At about 12:30 pm Pacific a few of the request routers became overloaded and in effect told the rest of the system "stop sending us traffic, we're too slow!". This transferred the load onto the remaining request routers, causing a few more of them to also become overloaded, and within minutes nearly all of the request routers were overloaded. As a result, people couldn't access Gmail via the web interface because their requests couldn't be routed to a Gmail server. IMAP/POP access and mail processing continued to work normally because these requests don't use the same routers.<br />
<br />
The Gmail engineering team was alerted to the failures within seconds (we take monitoring very seriously). After establishing that the core problem was insufficient available capacity, the team brought a LOT of additional request routers online (flexible capacity is one of the advantages of Google's architecture), distributed the traffic across the request routers, and the Gmail web interface came back online.<br />
</blockquote>I have to commend Google -- and Mr. Treynor in particular -- for being forthright about the outage, and providing a textbook case of Pinkston's Law. This case also illustrates the tendency for failures in one part of a network to cascade to other parts, often in an unexpected fashion.<br />
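The dynamic Google describes -- each overloaded router sheds its load onto the survivors, overloading them in turn -- is easy to reproduce in a toy model. The Python sketch below uses invented numbers; the point is the shape of the failure, not the scale:<br />
<pre>
# Toy model of the overload cascade: each router that exceeds its capacity
# stops accepting traffic, pushing its share onto the remaining routers.

ROUTERS  = 10
CAPACITY = 100.0          # requests/sec each router can sustain
TOTAL    = 10 * 101.0     # the "slightly underestimated" load: 1% too much

healthy = ROUTERS
while healthy > 0:
    per_router = TOTAL / healthy
    print(f"{healthy:2d} routers up, {per_router:7.1f} req/s each")
    if per_router <= CAPACITY:
        print("load is sustainable; cascade stops")
        break
    healthy -= 1          # one more router sheds all of its traffic
else:
    print("all routers overloaded: web interface down")
</pre>
Run it and all ten routers collapse one after another; raise CAPACITY or ROUTERS even slightly (Google's fix was to bring "a LOT of additional request routers online") and the cascade never starts.<br />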
<br />
<b>Tesco IT upgrade causes till outage: May 11, 2009</b><br />
<ul><li><b>Length of outage: </b>4-24 hours</li>
<li><b>Number of people affected: </b>100 retail stores forced to close</li>
</ul><a href="http://www.tesco.com/">Tesco</a> is a major grocery and general merchandise retailer in the UK. North American readers might compare it to Wal-mart or Costco. Tesco launched a big "loyalty scheme" promotion in UK newspapers to its millions of Clubcard holders, which required an upgrade of their software, which caused their tills (cash registers) to malfunction just as the stores opened at 8:00 AM.<br />
Original news story is <a href="http://www.computerworlduk.com/management/it-business/it-department/news/index.cfm?newsid=14708"><b>HERE</b></a>.<br />
The official statement from Tesco was terse but candid:<br />
<blockquote>"A number of stores were affected by a routine IT upgrade this morning at various locations in the country,” said a Tesco spokesperson. <br />
</blockquote>She might just as well have said, "Blimey! We were struck down by Pinkston's Law!"<br />
<br />
<b>Florida Keys Electric Cooperative Power Outage: Oct. 11, 2004</b><br />
<ul><li><b>Length of outage: </b>Approx. 1 hour</li>
<li><b>Number of people affected: </b>Unknown -- most residents of the Florida Keys</li>
</ul>Here is a relatively rare instance of a <i>hardware</i> upgrade causing an outage. The unique geography and climate of the Florida Keys was clearly a factor.<br />
Read the original article <b><a href="http:///">HERE</a></b>.<br />
Here, I think it best to quote directly from the article to give you a sense of what happened:<br />
<blockquote>One strand of a corroded shield wire unraveled during its removal from service today, causing a power outage from Islamorada to Key West. Florida Keys Electric Cooperative was pulling the wire for replacement when one of its seven twisted strands failed.<br />
<br />
The broken strand swung into the energized transmission lines below it, causing a short in the transmission line. The shorted line caused a power outage beginning at 12:40 p.m. The outage began south of Snake Creek Bridge at mile marker 86.<br />
<br />
The strand of shield wire failed over water while being pulled along Long Key Channel, complicating correction of the problem. <br />
</blockquote>As a little background, the "shield wire" is the uninsulated wire that runs from pole to pole above the wires that carry the actual current. It is intended to reduce service interruptions and equipment damage by intercepting lightning strikes. In a salt-air environment such as one finds along coastlines, these conductors tend to corrode fairly quickly.<br />
<br />
<b>PayPal Upgrade Causes Major Outage, Affects Debit-Card Users: Oct 8, 2004</b><br />
<ul><li><b>Length of outage: </b>At least 4 days</li>
<li><b>Number of people affected: </b>Unknown, but clearly many hundreds of thousands.</li>
</ul>PayPal has become such a major part of our lives for online commerce that we often think of it as something that is just "always there to use" like ATMs. But, of course, it runs on a complex network of servers and other equipment, and with those come upgrades.<br />
Read the original article <b><a href="http://www.auctionbytes.com/cab/abn/y04/m10/i12/s01">HERE</a></b>.<br />
Here's the official word from PayPal (I find it interesting that companies in this situation invariably send out a <i>female</i> staffer to read the official statement to the press. Perhaps they reason that it puts a sweeter face on their <strike>weasel</strike> carefully-chosen words?):<br />
<blockquote>PayPal spokesperson Amanda Pires said in addition to the new home page, PayPal "added some features on the backend" on Friday that were the cause of the problem. Pires said, "Everyone is working fast and furiously to get it all fixed." The problems are intermittent, she said, but declined to describe their nature or reveal the features that were added on Friday. <br />
</blockquote>PayPal is owned by eBay, and they now have little in the way of competition to keep them on their toes. As something of an "insider" in one of my jobs, I witnessed some PayPal outages and service degradations that were never publicly acknowledged, so I will not cover them here.<br />
<br />
<b>Newly Installed Software Causes Outages in MIT's 411 Directory Services: Feb, 1998</b><br />
<ul><li><b>Length of outage: </b>Several outages, up to four weeks</li>
<li><b>Number of people affected: </b>Unknown. All MIT campus phone services affected</li>
</ul>This older article chronicles the problems MIT was having with Bell Atlantic's 411 (directory assistance) services in 1997-1998. Apparently there had been a number of failures leading up to the major one in February, 1998.<br />
Read the original article <b><a href="http://tech.mit.edu/V118/N24/bsoftware.24n.html">HERE</a></b>.<br />
Here's a statement from MIT's point of view:<br />
<blockquote>"This was caused by a software change. Since the new software did not interface with ours, we had to reroute traffic," said Valerie L. Hartt, Supervisor of Operator Services in Information Systems. <br />
</blockquote>It seems that Bell Atlantic would periodically perform upgrades on its own equipment that rendered it incompatible with the calls it was receiving from MIT's system.<br />
<blockquote>"Part of the problem with this was that Bell Atlantic never informed MIT's 5ESS service team that it would be performing this [upgrade] service ... Therefore, we could not inform the community, nor be available during the upgrade to perform our own testing." <br />
</blockquote>For me, the funniest part of this outage is the fact that both Bell Atlantic and MIT were using <i>identical</i> telephone switches: the <a href="http://en.wikipedia.org/wiki/5ESS_switch">AT&T 5ESS</a>, which is still in widespread use.<br />
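The underlying problem -- one side of an interconnect upgrading its interface without telling the other -- is exactly what version negotiation is meant to catch. Here is a generic sketch of the idea in Python; it is an illustration only, not the 5ESS signaling protocol, and the version numbers are made up:<br />
<pre>
# Generic version handshake: interconnected systems advertise the interface
# versions they speak before exchanging traffic, so a unilateral upgrade is
# detected immediately instead of surfacing as failed calls.

def negotiate(ours, theirs):
    """Return the best mutually supported version, or None on mismatch."""
    common = ours & theirs
    return max(common) if common else None

mit_switch    = {1, 2}    # versions MIT's switch speaks (invented)
bell_atlantic = {1, 2}
print(negotiate(mit_switch, bell_atlantic))  # 2: traffic flows normally

bell_atlantic = {3}       # unannounced upgrade drops the old versions
if negotiate(mit_switch, bell_atlantic) is None:
    print("incompatible peer: reroute traffic and alert the service team")
</pre>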
<br />
<b>Batelco (Bahrain) Cellular Network Outage: May 19-20, 2007</b><br />
<ul><li><b>Length of outage: Unknown</b></li>
<li><b>Number of people affected: Unknown, up to 600,000 possible</b></li>
</ul>Link to the original news story, which quotes the <i>Gulf Daily News</i>:<br />
<a href="http://www.cellular-news.com/story/23887.php">http://www.cellular-news.com/story/23887.php</a><br />
The outage caused a bit of out<i>rage</i>:<br />
<blockquote>An influential business source told the newspaper that "a company with nearly BD100 million net profit should have a back-up service because what happened affected the communications of thousands of mobile owners. This is not acceptable nowadays," he said<br />
</blockquote>The outage was blamed on "migration to a New Generation Network (NGN)." I wonder how one says <i>Pinkston's Law</i> in the local language...<br />
<br />
<b>Blackberry E-mail Outage: 02/11/2008</b><br />
<ul><li><b>Length of outage: 3 hours</b></li>
<li><b>Number of people affected: Unknown; North American users of Blackberry's email service</b></li>
</ul>Original Story: <a href="http://www.cnbc.com/id/23134603">http://www.cnbc.com/id/23134603</a><br />
This outage should probably count as at least two examples of Pinkston's Law, based on this quote:<br />
<blockquote>It was the second major outage for the service in less than a year. In April, a minor software upgrade crashed the system for all users. A smaller disruption in September also was caused by a software glitch. <br />
</blockquote>I find it interesting that at least one analyst zeroed in on the existence of a Network Operations Center (NOC) as a contributing factor in the outage:<br />
<blockquote>Any time you got a system that's got a NOC, a Network Operations Center, you have the potential for a single point of failure. What's a bit surprising to me is that with all the work they've been doing over time ... that they haven't been able to have enough redundancy in the NOC so that there isn't a single point of failure. <br />
</blockquote>
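The analyst's point about single points of failure is easy to quantify with the standard availability arithmetic: independent redundant instances all have to fail at once to take the service down. Here is a quick Python sketch (the 99.9% figure is illustrative, and the big caveat is that real NOC failures are rarely independent):<br />
<pre>
# Standard availability arithmetic: n independent redundant instances fail
# only if all n fail simultaneously. The per-NOC figure is illustrative.

def redundant_availability(single, n):
    """Availability of n independent redundant instances."""
    return 1 - (1 - single) ** n

single_noc = 0.999                  # ~8.8 hours of downtime per year
for n in (1, 2, 3):
    a = redundant_availability(single_noc, n)
    downtime_hours = (1 - a) * 365 * 24
    print(f"{n} NOC(s): availability {a:.9f}, "
          f"~{downtime_hours:.4f} hours down per year")
</pre>
In practice the hard part is the independence assumption: a software upgrade pushed to every NOC at once -- Pinkston's Law again -- defeats redundancy completely.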