What actually caused the Amazon outage from a technical perspective
 
 
 Date:  October 21st, 2025 5:50 PM
 
 Author: https://imgur.com/a/o2g8xYK 
Amazon's DNS in its US-EAST-1 region started failing intermittently around 12-2am PST. Specifically, the DNS record for the endpoint of some shit called DynamoDB went bad, and a huge chunk of AWS services route their calls through DynamoDB, so anything that needed it couldn't find it. Even a brief outage like that causes a backlog of requests while the record gets fixed. In this case they got the shit fixed fast as fuck, but the intermittent failures created a sufficient backlog that it took several hours for DynamoDB and everything sitting on top of it to chew through the queue and respond to all the requests. That's the best I can explain it.
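
Rough sketch of that backlog effect in Python below. The request rates and the outage window are made-up numbers, not anything from Amazon's writeup; the point is just the queueing arithmetic: while DNS is broken, requests pile up, and once it's fixed the pile only drains at whatever spare capacity the service has left over.

  # Back-of-the-envelope sketch with hypothetical numbers (not AWS's real
  # traffic figures). Shows why a short outage can take hours to fully clear.
  ARRIVAL_RATE = 100_000      # hypothetical requests/sec aimed at the DynamoDB endpoint
  SERVICE_RATE = 110_000      # hypothetical max requests/sec it can absorb
  OUTAGE_SECONDS = 30 * 60    # assume roughly 30 minutes of intermittent DNS failures

  backlog = ARRIVAL_RATE * OUTAGE_SECONDS        # requests queued or retrying during the outage
  spare_capacity = SERVICE_RATE - ARRIVAL_RATE   # throughput left over for catching up
  drain_hours = backlog / spare_capacity / 3600

  print(f"backlog built up during the outage: {backlog:,} requests")
  print(f"time to drain it once DNS is fixed: {drain_hours:.1f} hours")
  # With these made-up numbers a 30-minute outage takes about 5 hours to
  # work off, which is the "several hours" effect described above.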
 
 (http://www.autoadmit.com/thread.php?thread_id=5788430&forum_id=2/#49364761)
 
 Date:  October 21st, 2025 5:55 PM
 
 Author: ,.,.,.,.,..,.,.,.,..,..,..,..,.,.,,..,.,,. 
All it did was remind everyone how large Amazon’s market share is. The stock is up quite a bit since the outage.
 
 (http://www.autoadmit.com/thread.php?thread_id=5788430&forum_id=2/#49364773)