Previous incidents
Slower dequeuing and API responses
Resolved Sep 15 at 06:40pm BST
The database load is back to normal.
We're still investigating why this happened; the top theory at the moment is an auto-vacuuming issue, possibly related to transaction wraparound.
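For context, transaction ID wraparound pressure in Postgres can be checked with a standard query against pg_database. The TypeScript sketch below is illustrative only, it is not part of our tooling, and it assumes direct database access via the pg client with a DATABASE_URL connection string.

import { Client } from "pg";

// Illustrative sketch: report how far each database is from a forced
// anti-wraparound vacuum. Databases with a very high age(datfrozenxid)
// trigger aggressive autovacuum work that can add significant load.
// Connection details here are assumptions, not taken from the incident.
async function checkWraparoundPressure(connectionString: string) {
  const client = new Client({ connectionString });
  await client.connect();
  const { rows } = await client.query(
    `SELECT datname, age(datfrozenxid) AS xid_age
       FROM pg_database
      ORDER BY xid_age DESC`
  );
  for (const row of rows) {
    console.log(`${row.datname}: ${row.xid_age} transactions since last freeze`);
  }
  await client.end();
}

checkWraparoundPressure(process.env.DATABASE_URL ?? "");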
V4 runs are slow to dequeue in the us-nyc-3 region
Resolved Sep 15 at 01:45pm BST
This issue was caused by our us-nyc-3 cloud provider taking an abnormally long time to spin up new servers, combined with capacity issues. In addition, some runs were getting stuck after restoring from a snapshot and not completing within 2 minutes; this also mostly happened in us-nyc-3.
eu-central-1 runs are slow to dequeue
Resolved Sep 10 at 06:18pm BST
We were seeing crashes on multiple servers in the EU only. It was related to an out-of-memory issue in our “supervisor”, which meant some servers weren’t dequeuing consistently. We’ve changed some settings to allow for more memory. We're still investigating why these supervisor processes were using too much memory and crashing, and we're monitoring the situation. us-east-1 runs have not been impacted.
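As a general illustration (not our actual supervisor code), a long-running Node.js process can log its own memory footprint so that growth toward an out-of-memory kill is visible before it happens. The threshold and interval below are placeholder values, not our real settings.

// Illustrative sketch only: periodically sample process memory so a slow leak
// in a long-running worker shows up in logs before the process is OOM-killed.
// The 1 GiB threshold and 30-second interval are assumptions for this example.
const RSS_WARN_BYTES = 1024 * 1024 * 1024;

setInterval(() => {
  const { rss, heapUsed, heapTotal } = process.memoryUsage();
  const line =
    `rss=${(rss / 1e6).toFixed(0)}MB ` +
    `heapUsed=${(heapUsed / 1e6).toFixed(0)}MB ` +
    `heapTotal=${(heapTotal / 1e6).toFixed(0)}MB`;
  if (rss > RSS_WARN_BYTES) {
    console.warn(`memory pressure: ${line}`);
  } else {
    console.log(line);
  }
}, 30_000);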