Incident History

Service degradation on Logs Read path in AWS US West (us-west-0)

We have observed a continued period of stability since 19:40 UTC. At this time, we are considering this issue resolved.

1772979452 - 1773001873 Resolved

Outage for prod-eu-central-0 due to AWS S3 outage.

This incident has been resolved.

1772914057 - 1773046795 Resolved

Some Grafana Instances Unavailable

This incident has been resolved.

1772809421 - 1772814671 Resolved

Write failures in prod-eu-west-0

We have observed a continued period of recovery. At this time, we are considering this issue resolved. No further updates.

1772749639 - 1772753770 Resolved

Elevated rate of errors for Fleet Management in prod-us-central-0

This incident has been resolved.

1772610458 - 1772616551 Resolved

Test Run Browser Screenshot Upload Failing

Test run browser screenshot upload experienced failures from 13:12 to 14:51 UTC.

The issue has been resolved.

1772562955 - 1772562955 Resolved

Grafana Cloud Logs - Write degradation in Azure Netherlands (eu-west-3)

This incident has been resolved.

1772539643 - 1772735479 Resolved

Write outage for logs in prod-eu-west-3

This incident has been resolved.

1772437055 - 1772466536 Resolved

Complete outage in prod-me-central-1

We are continuing to investigate this issue.

1772433809 Ongoing

Increased Latency for Small Subset of Customers

A recent rollout caused the AuthZ (RBAC) service to perform many redundant folder-tree fetches for each authorization check. This affected a small number of tenants with very large folder trees in the prod-us-east-0 and prod-eu-west-2 regions. The redundant fetches added a few milliseconds to every check, which increased request latency.

The approximate timeframe of the impact is:

2026-02-26 17:24:43 UTC to 2026-02-27 14:33:53 UTC.

This has now been resolved.

1772209516 - 1772209516 Resolved
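As an illustration of the failure mode described in the incident above, the sketch below contrasts re-fetching a folder tree for every authorization check with fetching it once and reusing it. All names and structures here are hypothetical, not Grafana's actual code; this is only a minimal model of how per-check fetches multiply cost.

```python
# Hypothetical sketch of the AuthZ failure mode: the buggy pattern
# fetches the folder tree once per permission check; the fixed pattern
# fetches it once per request and reuses it. Names are illustrative.

FETCHES = {"count": 0}  # counts how many tree fetches occur

def fetch_folder_tree(tenant):
    """Stand-in for the expensive folder-tree lookup."""
    FETCHES["count"] += 1
    return {"root": ["folder-%d" % i for i in range(3)]}

def check_naive(tenant, resources):
    # Buggy pattern: one tree fetch per authorization check.
    return [bool(fetch_folder_tree(tenant)) for _ in resources]

def check_cached(tenant, resources):
    # Fixed pattern: fetch the tree once, reuse it for every check.
    tree = fetch_folder_tree(tenant)
    return [bool(tree) for _ in resources]

resources = ["dash-a", "dash-b", "dash-c"]

FETCHES["count"] = 0
check_naive("tenant-1", resources)
naive_fetches = FETCHES["count"]      # one fetch per resource checked

FETCHES["count"] = 0
check_cached("tenant-1", resources)
cached_fetches = FETCHES["count"]     # a single fetch for the request

print(naive_fetches, cached_fetches)  # 3 1
```

For a tenant with a very large folder tree, each extra fetch in the naive pattern costs milliseconds, so the per-check overhead scales with the number of checks in a request.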