December 7, 2021 won't be a day that will live in infamy, but it is a day that will annoy many Amazon Web Services (AWS) users. It will also vex many more people who didn't realize that Disney+, Venmo, and Robinhood all rely on AWS. No AWS, no Star Wars: The Bad Batch.
The problem? According to the AWS Service Health Dashboard:
We are seeing an impact on multiple AWS APIs in the US-EAST-1 Region. This issue is also affecting some of our monitoring and incident response tooling, which is delaying our ability to provide updates. We have identified the root cause and are actively working towards recovery.
At that time, a little after 1 PM ET, it looked like we'd be back to business soon. It turns out we were wrong.
At 5:04 PM ET, AWS reported that while they "have executed a mitigation which is showing significant recovery in the US-EAST-1 Region," they "still do not have an ETA for full recovery at this time."
At 7:45 PM ET, AWS reported: "With the network device issues resolved, we are now working towards recovery of any impaired services. We will provide additional updates for impaired services within the appropriate entry in the Service Health Dashboard."
As of 7:30 AM this morning, AWS's Service Health Dashboard showed all services operating normally.
The problem first manifested at about 10:45 AM ET in AWS's major US-East-1 region, hosted in Virginia.
It may have been sparked there, but problems showed up across AWS. Internet administrators reported issues with AWS Identity and Access Management (IAM), the web service that securely controls access to AWS resources globally. Adding insult to injury, AWS customer service was also down. So, even if your service or site wasn't hosted in US-East-1, you could still feel the problem's effects.
Some companies are having fits about this failure. They'd put all their eggs into one cloud basket, and now that basket's broken. Some could have mitigated the risk by spreading their cloud resources across more than one AWS region or availability zone (see the sketch below), but many did not, and that has left them dead in the water for now.
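For readers who want a concrete picture of what "more than one region" means in practice, here is a minimal, hypothetical sketch in Python of client-side regional failover: try a primary endpoint, and if it doesn't answer, fall back to a standby in another region. The endpoint URLs and the health-check path are placeholders, not real AWS endpoints, and production setups would more likely lean on DNS-level failover (for example, Route 53 health checks) or active-active replication rather than application code like this.

```python
import urllib.request

# Hypothetical regional endpoints -- placeholders for illustration only.
REGIONAL_ENDPOINTS = [
    "https://api.us-east-1.example.com/health",   # assumed primary region
    "https://api.us-west-2.example.com/health",   # assumed standby region
]

def first_healthy_endpoint(endpoints, timeout=2):
    """Return the first endpoint that answers its health check, skipping dead regions."""
    for url in endpoints:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                if resp.status == 200:
                    return url
        except OSError:
            # Region unreachable, timed out, or returned an error; try the next one.
            continue
    raise RuntimeError("No healthy regional endpoint available")

if __name__ == "__main__":
    # In an outage like December 7th's, the us-east-1 check would fail and
    # traffic would be pointed at the standby region instead.
    print("Routing traffic to:", first_healthy_endpoint(REGIONAL_ENDPOINTS))
```

The point of the pattern, however it is implemented, is that no single region's failure takes the whole service down with it.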
Other unforeseen side effects have also appeared, including the fact that newer Roomba robot vacuum cleaners aren't working.
According to DownDetector, AWS's troubles began easing on the evening of December 7th. By the morning of December 8th, everything appeared to be back to normal.