Cloud Metrics - High Write Latency and Errors in prod-us-central-7
We have continued to observe stability.
This incident is now considered resolved. Thank you for your patience.
We observed an underlying hardware failure at our CSP, which triggered an automatic live VM migration. This caused degraded write performance for Grafana Cloud Metrics on prod-us-west-0 between 05:26 UTC and 05:43 UTC.
At this time, we have confirmed that the query errors have ceased, and we consider this issue resolved.
We’re currently investigating an issue affecting Datasource query performance in prod-us-east-4. Our team is actively working to identify the cause. Thank you for your patience.
This incident has been resolved. Thank you for your patience.
This incident has been resolved.
We experienced degraded performance affecting Performance Testing from 13:10 UTC to 13:20 UTC. During this time, users may not have been able to start new test runs.
The issue has been resolved, and the service is now operating normally.
We apologize for any disruption this may have caused and appreciate your patience.
We experienced an incident with the AWS Metrics Streaming integration in the us-east-3 region, manifesting as elevated ingestion latency. The incident started at around 10:45 UTC and was resolved at around 12:30 UTC. Some tenants may have seen elevated write latency, but all requests were processed and we do not expect any data loss during the incident.
The incident is now resolved, but we will continue to monitor the system's health.
This incident has been resolved. Thank you for your patience.