Previous incidents
v3 runs dequeuing slower than normal
Resolved Apr 24 at 05:05pm BST
Queues have been operating at full speed since 16:05 UTC.
We have found an edge case in the dequeue algorithm that can cause slower dequeue times. We're looking into a fix.
Runs are dequeuing slower than normal
Resolved Apr 07 at 08:09pm BST
Runs have been dequeuing quickly for some time now, so marking this as resolved. We're continuing to monitor it closely.
Runs dequeued for the entire period but queue times were longer than normal, across all customers.
The vast majority of queues have already returned to normal length, or will soon.
We suspect this was caused by an underlying Digital Ocean networking issue that meant our Kubernetes control plane nodes were slow to create and delete pods. We are trying to figure out i...
Tasks with large payloads or outputs are sometimes failing
Resolved Mar 21 at 10:48pm GMT
Cloudflare R2 is back online and uploads of large payloads and outputs have resumed. We'll continue to monitor the situation.
Significant disruption to run starts (runs stuck in queueing)
Resolved Mar 07 at 09:15pm GMT
We are confident that most queues have caught up again but are still monitoring the situation.
If you are experiencing unexpected queue times, this is most likely due to plan or custom queue limits. Should this persist, please get in touch.
Uncached deploys are causing runs to be queued
Resolved Mar 04 at 12:40pm GMT
We tracked this down to a broken deploy pipeline which reverted one of our internal components to a previous version. This caused a required environment variable to be ignored.
We have applied a hotfix and will be making more permanent changes to prevent this from happening again.