Delayed cluster modifications


Incident resolved in 7h25m41s

Update

Incident Summary

Between June 3, 2024, 18:24 UTC and June 4, 2024, 01:21 UTC, MongoDB Atlas customers across all commercial regions experienced degradation of the following functionalities:

- Changes to cluster configuration, whether requested through API calls or initiated by automated triggers (e.g., auto-scaling or node replacement), were executed with delays ranging from a few minutes to several hours.
- Alert processing was similarly delayed, affecting both the built-in Atlas Alert functionality and integrations such as Datadog and PagerDuty.
- Part of the daily cost tabulation for compute, storage, and data transfer on June 3rd will be delayed until June 6th or June 7th.
- Lower granularities of cluster metrics were unavailable, in some cases until 01:56 UTC.
- The Data Explorer and Real-Time Performance Panel UIs were impaired for several hours.

Root Cause

The root cause of this incident was an unexpected failure during an internal Atlas maintenance activity. A planned migration of metadata for the internal workflow management system that backs the Atlas control plane caused unexpected resource congestion on the target database cluster as traffic was redirected to it. This issue had not occurred during pre-production testing. The rollback process required complex reconciliation of data between the source and target databases, which delayed recovery.

MongoDB Actions

The MongoDB Atlas team is designing a new metadata migration strategy for future maintenance activities. This process will allow rollback within 15 minutes of detecting a potential issue. We will not execute similar maintenance activities until this procedure is implemented and thoroughly tested.

Recommended Customer Actions

This issue was fully resolved by the MongoDB Atlas team and requires no customer action.
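
For reference, customers who want to confirm that a previously submitted configuration change has finished applying can check the cluster's state through the Atlas Administration API. The snippet below is a minimal sketch, not an official procedure: it assumes v2 of the Administration API with digest authentication via a programmatic API key, and the credentials, project ID, cluster name, and resource version date are all placeholders to adjust for your environment.

# Minimal sketch (not an official procedure): poll an Atlas cluster's stateName
# until it reports IDLE, i.e. no configuration change is currently being applied.
# Assumes Atlas Administration API v2 with digest auth; all credential, project,
# cluster, and version values below are placeholders.
import time

import requests
from requests.auth import HTTPDigestAuth

PUBLIC_KEY = "example-public-key"    # placeholder API key pair
PRIVATE_KEY = "example-private-key"  # placeholder
PROJECT_ID = "example-project-id"    # placeholder Atlas project (group) ID
CLUSTER_NAME = "Cluster0"            # placeholder cluster name

URL = (
    "https://cloud.mongodb.com/api/atlas/v2/"
    f"groups/{PROJECT_ID}/clusters/{CLUSTER_NAME}"
)
# The resource version date is an assumption; use a version supported in your project.
HEADERS = {"Accept": "application/vnd.atlas.2023-01-01+json"}


def wait_until_idle(poll_seconds: int = 60, timeout_seconds: int = 3600) -> str:
    """Poll the cluster until stateName is IDLE or the timeout elapses."""
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        resp = requests.get(
            URL,
            auth=HTTPDigestAuth(PUBLIC_KEY, PRIVATE_KEY),
            headers=HEADERS,
            timeout=30,
        )
        resp.raise_for_status()
        state = resp.json().get("stateName", "UNKNOWN")
        if state == "IDLE":
            return state
        time.sleep(poll_seconds)
    raise TimeoutError("Cluster did not return to IDLE within the timeout")


if __name__ == "__main__":
    print("Cluster state:", wait_until_idle())

A cluster whose stateName reports IDLE has no configuration change in progress, while UPDATING indicates a change is still being applied.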

June 11, 2024, 18:39:00 UTC

Resolved

This incident has been resolved.

June 4, 2024, 02:11:07 UTC

Update

Recovery procedures are complete, and we are continuing to monitor closely.

June 4, 2024, 00:09:58 UTC

Update

We are continuing to recover. Cluster operations submitted between 18:45 UTC and 21:55 UTC may still be delayed or may delay subsequent changes. All other cluster operations submitted from this point forward should execute normally. Normal cluster operation is expected to resume within the next hour (by 00:00 UTC). Lower granularities of metrics for recent time periods may continue to be unavailable for the next five hours (until 04:00 UTC).

June 3, 2024, 23:07:52 UTC

Update

We are continuing to execute recovery procedures. Data Explorer and the Real-Time Performance Panel are fully operational. Cluster operations are recovering but are still delayed. Lower granularities of metrics for recent time periods may continue to be unavailable. We will post another update in 30 minutes.

June 3, 2024, 22:11:30 UTC

Investigating

We are continuing to investigate this issue.

June 3, 2024, 21:44:42 UTC

Investigating

We have identified the root cause and have begun the recovery procedure. We will post another update within the next 30 minutes.

June 3, 2024, 21:31:40 UTC

Investigating

We are continuing to investigate and will post another update in 30 minutes.

June 3, 2024, 20:55:04 UTC

Investigating

We are continuing to investigate and will post another update in 30 minutes.

June 3, 2024, 20:21:32 UTC

Investigating

We have identified that Data Explorer may be delayed or unresponsive during this time. Additionally, certain operations in the Real-Time Performance Panel may not work. Lower granularities of metrics for recent time periods may also be unavailable.

June 3, 2024, 19:22:37 UTC

Investigating

We are currently investigating an issue that is causing delays in all cluster updates across all cloud providers.

June 3, 2024, 18:45:26 UTC