- Resolved
Following the API incident on March 13, we have taken corrective actions to avoid similar issues with Amazon ElastiCache, as detailed in the previous update. All systems have been stable since then.
- Update
While the investigation of the recent API unavailability incident is still ongoing, we would like to provide an update on our most recent findings. Service stability is our highest priority, which is why we want to be as transparent as possible; a full analysis will be provided as soon as the investigation is complete.

--- Summary ---

We have identified that the downtime was caused by a rare and unfortunate combination of issues in the underlying AWS services (ElastiCache) and AWS infrastructure components (EBS volumes / EC2 AMI), together with an issue in the AppGrid codebase. We are working closely with AWS Support to find the root cause of the issue, and have taken the necessary steps on our side to protect against similar issues in the future. Even though the API has been stable since the issue was resolved, we will keep extra monitoring in place as a precautionary measure until the final root cause has been established and addressed. We have also updated our emergency procedures to deal effectively with this scenario going forward.

--- Event details ---

1. We have identified an issue in AWS ElastiCache connection handling which caused abnormal connection retention. Specifically, Amazon ElastiCache kept all of the connections open even though the machines connecting to it had been terminated. This issue has been escalated to AWS Support, and we are actively working with them to find its root cause.

2. While our systems do not rely on ElastiCache availability (we continuously test the ability of AppGrid services to keep functioning without the ElastiCache mechanism and several other supporting services), this particular ElastiCache issue was not accounted for in our tests or in the AppGrid connection handling code. Because this abnormal behavior did not clearly mark ElastiCache as "unavailable", connections were stuck in a waiting state, which over time led to failed health checks on AppGrid services.

3. As part of the normal self-healing procedure, AppGrid machines were immediately replaced by new ones. Even though the new API instances used the same underlying AMI (Machine Image) that has been in use for a long time, these machines appeared to have corrupted filesystems and failed to launch the AppGrid API services. This caused an endless loop of bringing up new machines and tearing down faulty ones, which in turn prevented the system from recovering. This issue has also been escalated to AWS Support, as the system had been successfully rotating machines with this exact configuration for several months without any issues.

--- Actions taken ---

Moving forward, we are continuing to work with AWS Support around the clock to find the underlying issue(s), as well as ways to ensure we are not affected by similar issues in the future. We are currently updating AppGrid connection handling so that unresponsive ElastiCache connections are detected and closed rather than left open indefinitely, which will protect us from this issue in the future (see the sketch below). Finally, AppGrid is hosted in multiple availability zones in North America, meaning that AppGrid would still operate normally even in the rare event that a complete AWS data center goes down. In the long term, we are evaluating multi-regional load balancing of the AppGrid APIs, which would provide even higher redundancy and globally distributed service availability.
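To illustrate the kind of connection-handling change described above, here is a minimal sketch. It assumes a Redis-compatible ElastiCache cluster and the Python redis-py client; the endpoint, names, and timeout values are illustrative only and do not reflect the actual AppGrid implementation. The idea is that bounded socket timeouts and periodic health checks make a stalled connection surface as an error quickly instead of leaving requests stuck in a waiting state, and the cache is treated as optional so the API keeps serving requests when it is unreachable.

```python
import redis

# Hypothetical cache client with bounded timeouts (values are illustrative).
cache = redis.Redis(
    host="example-cache.abc123.use1.cache.amazonaws.com",  # hypothetical endpoint
    port=6379,
    socket_connect_timeout=2,   # fail fast if the node does not accept new connections
    socket_timeout=2,           # fail fast if an established connection stops responding
    retry_on_timeout=False,     # surface the failure to the caller instead of retrying silently
    health_check_interval=30,   # ping idle connections so dead ones are detected and replaced
)

def get_cached(key):
    """Treat the cache as optional: degrade to a cache miss if it is unresponsive."""
    try:
        return cache.get(key)
    except (redis.exceptions.TimeoutError, redis.exceptions.ConnectionError):
        return None  # fall back to the primary data source
```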
- Update
Yesterday (March 13th) between 14:38:59 and 15:32:34 CET we experienced an unexpected issue with the AppGrid API. During this time interval, the majority of API requests could not be processed. The extent of the impact on client applications varied depending on the client caching strategy in place. In parallel to investigating the root cause, several measures were taken to resume normal operation of our services. Unfortunately, the outage was directly related to an issue with the underlying infrastructure, and therefore we were not able to restore or re-create the affected services using our standard procedures. We are working closely with Amazon support to determine the root cause of the infrastructure problem. In the meantime, we have extra monitoring in place to secure stability. As soon as the root cause has been identified, we will provide a more detailed post-mortem analysis as well as an overview of the steps we will take to avoid similar issues in the future.
- Monitoring
The API has been available again since 15:33 CET. We are still investigating the root cause of the outage and will provide more information shortly.
- Update
We are still investigating the major API outage affecting all regions and will provide continuous updates.
- Investigating
We have been experiencing an elevated level of API errors since 14:38 CET and are currently looking into the issue.