Confluent integration issues in all regions
This incident has been resolved.
This incident is now fully resolved; the root cause was determined and mitigated. The affected cell had a legacy configuration present, which caused a panic during a regular rollout. The cell was rolled back as a precaution, and the configuration has since been corrected.
Between 10:45 UTC and 11:05 UTC we experienced an outage affecting scripted and multi-http test executions across all public probe locations. These two check types were not executing tests, so you may see gaps in synthetics metrics and logs in that window. Protocol-level test executions (ping, dns, tcp, http, traceroute) were unaffected. As of 11:05 UTC the incident is resolved and all check types are executing normally.
This incident has been resolved.
We have observed a sustained period of recovery. At this time, we consider this issue resolved.
This incident has been resolved.
We have observed a sustained period of recovery. At this time, we consider this issue resolved. No further updates.
Today between 07:45 UTC and 08:50 UTC, the Singapore probe experienced a regional networking issue and lost connectivity with the AWS US West region. During this time Singapore was unable to publish metric and log results from synthetic test executions to AWS US West (us-west-0). Synthetic tests targeting that region also saw elevated error rates. We have not observed any connectivity issues since 08:50 UTC and consider this incident resolved. If you continue to experience problems, please open a Support ticket for assistance.