Trading Suspended on the NYSE for More Than 3 Hours

By ThinkReliability Staff

On July 8, 2015, trading was suspended on the New York Stock Exchange (NYSE) at 11:32 AM. According to NYSE President Tom Farley, “the root cause was determined to be a configuration issue.” This statement still leaves many questions unanswered. The issue can be examined in a Cause Map, a visual form of root cause analysis.

There are three steps to the Cause Mapping problem-solving method. First, the problem is defined with respect to the impact to the goals. The basic problem information is captured – the what, when, and where. In a case such as this, where the problem unfolded over hours, a timeline can be useful to provide an overview of the incident. Problems at the NYSE began with a system upgrade, rolled out on the evening of July 7 to meet new timestamp requirements. As traders attempted to connect to the system early the next morning, communication issues appeared and worsened until the NYSE suspended trading. The system was restarted and full trading resumed at 3:10 PM.
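For readers who like to see the pieces laid out, here is a minimal sketch of the kind of timeline captured in this first step. It is an illustration only (Python is used just for convenience), and the entries are paraphrased from this article; times the article does not give are not invented.

```python
# A minimal sketch of the incident timeline captured in step one of the
# Cause Mapping method. Entries are paraphrased from this article.
timeline = [
    ("July 7, evening", "System upgrade begins to meet new timestamp requirements"),
    ("July 8, early morning", "Traders report communication issues while connecting"),
    ("July 8, 11:32 AM", "NYSE suspends trading"),
    ("July 8, 3:10 PM", "System restarted and full trading resumes"),
]

# Print the timeline as a simple two-column overview of the incident.
for when, what in timeline:
    print(f"{when:>22}  {what}")
```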

The impacts to the goals are also documented as part of the basic problem information. In this case, there were no impacts to safety or the environment as a result of this issue. Additionally, there was no impact to customers, whose trades automatically shifted to other exchanges. However, an investigation by the Securities and Exchange Commission (SEC) and political hearings are expected as a result of the outage, impacting the regulatory goal. The outage itself is an impact to the production goal, and the time spent on response and repairs is an impact to the labor/time goal.
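The impacts-to-goals portion of the problem outline can be sketched the same way. The entries below come directly from the paragraph above; the structure is an illustration, not the official Cause Mapping template.

```python
# Impacts to the goals as described above, organized by goal. "No impact"
# entries are included because documenting non-impacts is part of the outline.
impacts_to_goals = {
    "safety": "No impact",
    "environment": "No impact",
    "customer service": "No adverse impact; trades shifted automatically to other exchanges",
    "regulatory": "SEC investigation and political hearings expected",
    "production": "Trading suspended on the NYSE for more than 3 hours",
    "labor/time": "Time spent on response and repairs",
}

for goal, impact in impacts_to_goals.items():
    print(f"{goal:>16}: {impact}")
```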

The cause-and-effect relationships that led to these impacts to the goals can be developed by asking “why” questions. This can be done even for positive impacts to the goals. For example, in this case customer service was NOT impacted adversely because customers were able to continue making trades even during the NYSE outage. This was possible because there are 13 exchanges, and current technology automatically transfers trades to the other exchanges. As a result, the outage was nearly transparent to the general public.
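As a purely illustrative sketch of how these “why” questions chain together, the relationships above can be written as a small effect-to-causes map and walked programmatically. The node wording is paraphrased from this article, not taken from the official Cause Map.

```python
# A minimal sketch of the cause-and-effect relationships described above,
# represented as an effect -> causes mapping. Node names are paraphrased
# from this article; they are not the official Cause Map wording.
cause_map = {
    "customer service goal NOT adversely impacted": [
        "customers continued making trades during the outage",
    ],
    "customers continued making trades during the outage": [
        "13 exchanges available",
        "technology automatically transfers trades to other exchanges",
    ],
    "production goal impacted (trading outage)": [
        "NYSE suspended trading",
    ],
    "NYSE suspended trading": [
        "communication issues between traders and the NYSE system",
    ],
    "communication issues between traders and the NYSE system": [
        "gateways not loaded with the proper configuration",
    ],
    "gateways not loaded with the proper configuration": [
        "configuration issue in the July 7 rollout (details not released)",
    ],
}


def ask_why(effect, depth=0):
    """Print an effect, then recursively print its causes (the 'why?' chain)."""
    prefix = "  " * depth + ("Why? " if depth else "")
    print(prefix + effect)
    for cause in cause_map.get(effect, []):
        ask_why(cause, depth + 1)


ask_why("production goal impacted (trading outage)")
ask_why("customer service goal NOT adversely impacted")
```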

In the case of the outage itself, as discussed above, the NYSE has stated it was due to a configuration issue. Specifically, the gateways were not loaded with the proper configuration for the upgrade that was rolled out on July 7. However, information about what exactly the configuration issue was, or what checks failed and allowed the improper configuration to be loaded, is not currently available. (Although some have suggested that this failure occurring on the same date as two other large-scale outages could not be a coincidence, the NYSE and the government have ruled out hacking.) According to NYSE President Tom Farley, “We found what was wrong and we fixed what was wrong and we have no evidence whatsoever to suspect that it was external. Tonight and overnight starts the investigation of what exactly we need to change. Do we need to change those protocols? Absolutely. Exactly what those changes are I’m not prepared to say.”
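Since the NYSE has not said what verification (if any) was missed, any example here is hypothetical. Purely to illustrate the kind of pre-rollout check that could catch a gateway left on the wrong configuration, here is a minimal sketch; the gateway names, version labels, and the way versions are queried are all assumptions, not a description of NYSE systems.

```python
# Hypothetical illustration only: the NYSE has not released details of its
# gateway configuration or deployment checks. This sketch shows one generic
# pre-open verification that every gateway reports the expected configuration
# version, failing loudly if any gateway was left on an old or partial config.
EXPECTED_VERSION = "2015-07-07-timestamp-upgrade"  # assumed version label

# Simulated responses from a fleet of gateways (a stand-in for querying each
# gateway's management interface in a real deployment).
reported_versions = {
    "gateway-01": "2015-07-07-timestamp-upgrade",
    "gateway-02": "2015-07-07-timestamp-upgrade",
    "gateway-03": "2015-06-30-baseline",  # example of a gateway left unupgraded
}

# Collect every gateway whose loaded configuration does not match the rollout.
mismatched = {
    name: version
    for name, version in reported_versions.items()
    if version != EXPECTED_VERSION
}

if mismatched:
    # Block the rollout (or the trading open) until every gateway matches.
    raise SystemExit(f"Configuration mismatch on gateways: {mismatched}")
print("All gateways report the expected configuration version.")
```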

Another concern is the backup plan in place for these types of issues. Says Harvey Pitt, SEC Chairman from 2001 to 2003, “This kind of stuff is inevitable. But if it’s inevitable, that means you can plan for it. What confidence are we going to have that this isn’t going to happen anymore, or that what did happen was handled as good as anyone could have expected?” The backup plan in place appeared to be shifting operations to a disaster recovery center. This was not done because it was felt that requiring traders to reconnect would be disruptive. Other backup plans (if any) were not discussed. This has led some to question the oversight role of the SEC and its ability to prevent issues like this from recurring.

To view the investigation file, including the problem outline, Cause Map, and timeline, click on “Download PDF” above. To view the NYSE statement on the outage, click here.