Previous incidents
v4 dequeue performance degradation
Resolved May 26 at 02:44pm BST
v4 dequeue performance has now improved again, and we're working on two things:
A short-term fix to prevent this from happening again, to be deployed today.
A long-term fix that will vastly improve dequeue performance and scaling, which we hope to ship this week.
v3 runs dequeuing slower than normal
Resolved May 06 at 03:28pm BST
Queues are back to nominal length and have been for some time.
This issue was caused by a huge influx of queues, which meant we weren't considering them all when selecting queues for dequeuing.
We have increased the relevant settings to mitigate this, and we're looking at how to make queue selection scale for the next 10–100x growth in the number of queues.
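For illustration, here's a minimal sketch of the failure mode (hypothetical names and limits, not our actual code): when the selector only examines a bounded sample of queues per pass, any queues beyond that limit are never picked for dequeuing, and raising the limit only buys headroom until the next order-of-magnitude influx.

```ts
// Hypothetical sketch only — not our actual dequeue implementation.
// It illustrates the failure mode above: if the selector only samples a
// bounded number of candidate queues per pass, queues beyond that limit
// are never considered, and raising the limit is the kind of setting change
// described above.
type Queue = { id: string; oldestEnqueuedAt: number };

const MAX_QUEUES_CONSIDERED = 1_000; // assumed, configurable limit

function selectQueueToDequeue(allQueues: Queue[]): Queue | undefined {
  // Only the first MAX_QUEUES_CONSIDERED queues are examined on each pass.
  const candidates = allQueues.slice(0, MAX_QUEUES_CONSIDERED);

  // Prefer the candidate whose oldest run has been waiting the longest.
  return candidates.reduce<Queue | undefined>(
    (oldest, q) =>
      oldest === undefined || q.oldestEnqueuedAt < oldest.oldestEnqueuedAt ? q : oldest,
    undefined
  );
}
```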
v3 runs dequeuing slower than normal
Resolved Apr 24 at 05:05pm BST
Queues have been operating at full speed since 16:05 UTC.
We have found an edge case in the dequeue algorithm that can cause slower dequeue times. We're looking into a fix.
Runs are dequeuing slower than normal
Resolved Apr 07 at 08:09pm BST
Runs have been dequeuing quickly for some time now, so marking this as resolved. We're continuing to monitor it closely.
Runs dequeued for the entire period, but queue times were longer than normal across all customers.
The vast majority of queues have reduced back to normal length already or will soon.
We suspect this was caused by an underlying Digital Ocean networking issue, which meant our Kubernetes control plane nodes were slow to create and delete pods. We are trying to figure out i...
Tasks with large payloads or outputs are sometimes failing
Resolved Mar 21 at 10:48pm GMT
Cloudflare R2 is back online and uploads of large payloads and outputs have resumed. We'll continue to monitor the situation.
Significant disruption to run starts (runs stuck in queueing)
Resolved Mar 07 at 09:15pm GMT
We are confident that most queues have caught up again but are still monitoring the situation.
If you are experiencing unexpected queue times, this is most likely due to plan or custom queue limits. Should this persist, please get in touch.
Uncached deploys are causing runs to be queued
Resolved Mar 04 at 12:40pm GMT
We tracked this down to a broken deploy pipeline which reverted one of our internal components to a previous version. This caused a required environment variable to be ignored.
We have applied a hotfix and will be making more permanent changes to prevent this from happening again.
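As an illustration of the kind of hardening this implies (the variable name below is hypothetical, not our actual configuration), validating required environment variables at startup makes a regression like this fail loudly instead of being silently ignored:

```ts
// Illustrative sketch only — the variable name below is hypothetical.
// Fail fast at boot if a required environment variable is missing,
// rather than silently falling back to default behaviour.
const REQUIRED_ENV_VARS = ["DEPLOY_CACHE_ENDPOINT"] as const;

for (const name of REQUIRED_ENV_VARS) {
  if (!process.env[name]) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
}
```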