Grafana OnCall degraded performance
This incident has been resolved.
Engineering has released a fix, and as of Thu Oct 3rd, around 23:00 UTC, customers should no longer experience delays while accessing the OnCall UI. At this time, we are considering this issue resolved. No further updates.
This incident has been resolved.
No further instances of this issue have been detected following our mitigation. Customers should no longer experience any downtime. At this time, we are considering this issue resolved. No further updates.
A transient stability issue in our infrastructure caused public probes to report MultiHTTP and Scripted checks as failures in:
The error has been addressed and all probes should now be operating normally.
Between 00:16 and 01:20 UTC, and again between 04:33 and 05:30 UTC, a cloud networking component facilitating cross-region communication to and from this region experienced an outage. Users experienced errors when modifying access policies, as well as elevated error rates if they had recently modified an access policy. The outage also resulted in false-positive synthetic monitoring alerts for probes in this region.
We're continuing to work with our cloud service provider to determine the root cause of this component's outage.
Engineering deployed a change that caused a failure in a component managed by one of our cloud service providers. The change has been rolled back, and the component is now operating normally. Customers should no longer experience nodes failing to fully boot. The cloud service provider is continuing to investigate the root cause of the component failure. At this time, we are considering this issue resolved. No further updates.
From 15:15 to 15:31 UTC, the Grafana Cloud k6 App could not load. New tests could not be started from either the App or the k6 CLI, and tests that were already running were aborted or timed out.
The immediate issue has been resolved. We are investigating the root cause.
From 18:46:16 to 18:46:26 UTC, we were alerted to an issue that caused a restart of Tempo ingesters in the US-East region.
During this time, users may have noticed 404 or 500 errors in their agent logs, potentially resulting in a small number of discarded Tempo traces while the ingesters were unavailable.
Our engineers identified the cause and implemented a solution to resolve the issue. Please contact our support team if you notice any discrepancies or have questions.