Avoiding Catastrophic Outages: Lessons for Insurance Executives from Telstra's Triple Zero Outage
As an insurance executive, you understand the importance of being prepared for the unexpected. But what happens when the unexpected happens to your own systems? Telstra, Australia’s largest telecommunications company, recently experienced a 90-minute outage of its triple zero emergency service. More than 100 Australians were affected, and one man tragically died after his family was unable to get through. In this post, we’ll examine the cascading failures that led to this outage and offer actionable steps insurance executives can take to avoid similar catastrophic events.
According to Telstra CEO Vicki Brady, the outage was caused by a combination of factors, including issues with Calling Line Identification (CLI) and subsequent errors in Telstra’s processes and communication. The incident began at 3.30am on March 1, when a CLI issue was discovered affecting calls coming through to triple zero. Telstra’s technical team immediately began investigating the cause and working on a fix, while the triple zero team enacted their backup process: an operator asks the caller for their location and then manually connects them to the relevant emergency service. This worked for 346 of the 494 calls made during the incident.
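The fallback described above follows a common resilience pattern: attempt automatic routing, and degrade gracefully to a manual process when the automated path fails. The sketch below is purely illustrative and assumes nothing about Telstra’s actual systems; all names (`route_emergency_call`, `cli_lookup`, `operator_queue`) are hypothetical.

```python
# Illustrative sketch of an automatic-with-manual-fallback routing pattern,
# loosely modeled on the incident described above. Hypothetical names only;
# this is not Telstra's actual implementation.

def route_emergency_call(call, cli_lookup, operator_queue):
    """Route a call automatically if CLI resolves a location, else hand it
    to an operator who asks the caller where they are and connects them
    manually to the relevant emergency service."""
    location = cli_lookup.get(call["caller_id"])  # CLI supplies the location
    if location is not None:
        return {"routed_via": "automatic", "location": location}
    # Backup process: queue the call for a manual operator transfer.
    operator_queue.append(call)
    return {"routed_via": "manual", "location": None}

calls = [{"caller_id": "0400111222"}, {"caller_id": "0400999888"}]
cli = {"0400111222": "Fitzroy, VIC"}  # second caller has no CLI record
queue = []
results = [route_emergency_call(c, cli, queue) for c in calls]
```

The key design point is that the manual path is slower but always available, which is why keeping its inputs (such as the emergency service contact numbers on file) accurate matters so much.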
However, 127 of the remaining 148 calls had to wait for a manual email transfer and callback process because Telstra had the wrong emergency service numbers on file. This delay caused chaos for paramedics, and unfortunately, a man in his 50s from the Melbourne suburb of Fitzroy suffered a cardiac arrest and later died after his family tried calling triple zero four times before being able to get through.
So, what can insurance executives learn from this catastrophic event? Here are some actionable steps to avoid similar outages:
- Have a backup plan in place: Telstra’s triple zero team enacted their backup process, which connected 346 of the 494 calls made during the incident. As an insurance executive, you should have a backup plan for your own systems in case of unexpected outages.
- Ensure accurate contact information: Telstra had the wrong emergency service numbers on file, causing delays for paramedics. As an insurance executive, you should ensure that your emergency contact information is accurate and up to date.
- Test your systems regularly: Telstra reproduced the CLI issue in a lab environment and issued software fixes to improve their systems. As an insurance executive, you should test your systems regularly to identify and fix potential issues before they become catastrophic.
- Consider parametric insurance: outages like Telstra’s show how quickly a technical fault can cascade into real-world harm. Parametric insurance uses real-time data and dynamic risk modeling to enable insurers to build and operate cover at scale, with payouts triggered automatically by the data rather than by a lengthy claims process. With parametric insurance, you can turn real-time data into insurance and be better prepared for unexpected events.
In conclusion, the Telstra triple zero outage serves as a cautionary tale for insurance executives. By having a backup plan in place, ensuring accurate contact information, testing your systems regularly, and considering parametric insurance, you can avoid catastrophic outages and be better prepared for the unexpected. Don’t leave it to chance: get in touch with Riskwolf to develop parametric insurance for your use case. With Riskwolf, you can turn real-time data into insurance. Using unique real-time data and dynamic risk modeling, we enable insurers to build and operate parametric insurance at scale. Simple. Reliable. Fast.
Source: Telstra lifts lid on ‘unacceptable’ triple zero outage