Tempo Query Failures in AWS Clusters
The fix has been rolled out and the issue has been resolved.
At approximately 11:30 UTC, some customers in the prod-us-east-0 region experienced either extremely slow Loki queries or Loki queries that did not return at all. This lasted until approximately 15:30 UTC, when the fix was rolled out.
This incident has been resolved.
Engineering has released a fix, and as of Thu Oct 3rd, around 23:00 UTC, customers should no longer experience delays while accessing the OnCall UI. At this time, we are considering this issue resolved. No further updates.
This incident has been resolved.
No further instances of this issue have been detected following our mitigation. Customers should no longer experience any downtime. At this time, we are considering this issue resolved. No further updates.
A transient stability issue in our infrastructure caused public probes to report MultiHTTP and Scripted checks as failures in:
The error has been addressed and all probes should now be operating normally.
Between 00:16 and 01:20 UTC, and again between 04:33 and 05:30 UTC, a cloud networking component facilitating cross-region communication to and from this region experienced an outage. Users experienced errors when modifying access policies, as well as elevated error rates if they had recently modified an access policy. This also resulted in false-positive synthetic monitoring alerts for probes in this region.
We're continuing to work with our cloud service provider to determine a root cause for the outage of this component.
Engineering deployed a change that broke a component managed by one of our cloud service providers. We rolled back the change and the component is now working; customers should no longer experience nodes failing to fully boot. The cloud service provider is still investigating why the component broke. At this time, we are considering this issue resolved. No further updates.