Previous incidents
Tasks with large payloads or outputs are sometimes failing
Resolved Mar 21 at 10:48pm GMT
Cloudflare R2 is back online and uploads of large payloads and outputs have resumed. We'll continue to monitor the situation.
Significant disruption to run starts (runs stuck in queueing)
Resolved Mar 07 at 09:15pm GMT
We are confident that most queues have caught up again but are still monitoring the situation.
If you are experiencing unexpected queue times, this is most likely due to plan or custom queue limits. Should this persist, please get in touch.
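For context, each plan caps total run concurrency, and you can lower the limit further for an individual queue. A minimal sketch of a task with a custom queue limit using the v3 SDK (the task id, payload shape, and limit value are illustrative):

  import { task } from "@trigger.dev/sdk/v3";

  // Illustrative task: at most 5 runs execute at once; additional
  // triggers wait in the queue, which can look like slow queue times.
  export const processOrder = task({
    id: "process-order",
    queue: { concurrencyLimit: 5 },
    run: async (payload: { orderId: string }) => {
      // ... do the work ...
      return { orderId: payload.orderId };
    },
  });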
Uncached deploys are causing runs to be queued
Resolved Mar 04 at 12:40pm GMT
We tracked this down to a broken deploy pipeline that reverted one of our internal components to a previous version. This caused a required environment variable to be ignored.
We have applied a hotfix and will be making more permanent changes to prevent this from happening again.
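As a general illustration of the failure mode (not our internal code), a component can fail fast at startup when a required environment variable is missing, instead of silently falling back to old behaviour:

  // Illustrative guard: throw at boot if a required variable is absent,
  // so a bad deploy fails loudly instead of running misconfigured.
  function requireEnv(name: string): string {
    const value = process.env[name];
    if (!value) {
      throw new Error(`Missing required environment variable: ${name}`);
    }
    return value;
  }

  // Hypothetical variable name, for illustration only.
  const queueBrokerUrl = requireEnv("QUEUE_BROKER_URL");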
Slow queue times
Resolved Jan 30 at 10:10am GMT
Queue processing performance is back to normal because there's been a reduction in demand.
We have identified the underlying bottleneck and are working on a permanent fix; it shouldn't be a major change and should be live soon. The bottleneck is a high degree of contention on an update that occurs when a single queue's concurrencyLimit is different on every call to trigger a task. This is an edge case we hadn't seen anyone hit before.
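To illustrate the edge case (the task and queue names are made up, and this assumes an SDK version that accepts queue options at trigger time):

  import { myTask } from "./trigger/myTask"; // hypothetical task

  // Problematic pattern: the same queue is given a different
  // concurrencyLimit on every trigger call, so every call contends
  // on the same queue-settings update.
  for (let i = 0; i < 100; i++) {
    await myTask.trigger(
      { index: i },
      { queue: { name: "shared-queue", concurrencyLimit: i + 1 } },
    );
  }

  // Safer pattern: set the limit once on the queue or task definition
  // and keep trigger calls payload-only.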
Deploys are failing with a 520 status code
Resolved Jan 24 at 07:28pm GMT
Important: Upgrade to 3.3.12+ in order to deploy again
If you use npx, you can upgrade the CLI and all of the packages by running:
npx trigger.dev@latest update
This should download version 3.3.12 (or newer) of the CLI and then prompt you to update the other packages too.
If you have pinned a specific version (e.g. in GitHub Actions), you may need to manually update your package.json file or a workflow file.
Read our full package upgrading guide here: https://trigger.dev/docs/upgrading-packages
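For example, if your project pins exact versions, the updated entries in package.json would look something like this (the packages shown are the common ones; your project may use more):

  {
    "devDependencies": {
      "trigger.dev": "3.3.12"
    },
    "dependencies": {
      "@trigger.dev/sdk": "3.3.12"
    }
  }

A pinned GitHub Actions workflow would need the same version bump wherever it references the CLI.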