Incident History

Issue with Azure clusters with KeyVault enabled

Executive Summary

What Happened

On August 11, 2025, at 14:05 UTC, we implemented a planned infrastructure change to add additional IP addresses to our control plane NAT gateways. Although we communicated this change in advance on June 30, 2025, we acknowledge that our communication and the resulting customer preparations were not sufficient to prevent service disruptions for customers with IP access restrictions. Upon detecting the customer impact, we initiated a rollback of the infrastructure change to restore service and prevent further disruption. The rollback was completed by 16:19 UTC, returning all affected services to their previous operational state.

Impact Assessment

Root Cause Analysis

The addition of new IP addresses to our control plane caused network access failures for customers who had configured IP allowlists that did not include the new addresses. (Note that this is not the same as the cluster IP access list, which you use to control which clients can connect to your cluster.)
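To make this failure mode concrete, the sketch below checks whether each control-plane egress IP is covered by a customer-side allowlist, the kind of verification a customer (or an automated pre-change audit) could run before a change like this rolls out. The function name, sample IPs, and CIDR ranges are invented for illustration and are not actual MongoDB or customer addresses.

    import ipaddress

    # Hypothetical pre-change check: verify that every published control-plane
    # egress IP is covered by a customer-side allowlist. All names and
    # addresses here are invented for the example.

    def uncovered_ips(control_plane_ips, allowlist_cidrs):
        """Return the control-plane IPs not matched by any allowlisted CIDR."""
        networks = [ipaddress.ip_network(cidr) for cidr in allowlist_cidrs]
        return [
            ip for ip in control_plane_ips
            if not any(ipaddress.ip_address(ip) in net for net in networks)
        ]

    missing = uncovered_ips(
        control_plane_ips=["198.51.100.7", "203.0.113.24"],  # new NAT gateway IPs (example)
        allowlist_cidrs=["198.51.100.0/24"],                 # customer firewall rules (example)
    )
    print(missing)  # ['203.0.113.24'] -> allowlist must be updated before the change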

The primary issues were:

  1. BYOK Encryption Validation: Our key validation process, which runs every 15 minutes, failed when its requests originated from the new IPs. Due to a flaw in our error-handling logic, the system interpreted these network failures as an intentional revocation of access to the encryption key and automatically shut down the affected clusters (see the sketch after this list for a more defensive approach).
  2. Identity Provider: The new IP addresses were not on our identity provider's allowlist, which degraded registrations and logins until the new IPs were added.
  3. Service Authentication: App Services experienced partial service failures because requests to the Atlas API originating from Triggers' new IP addresses were blocked by an outdated internal IP allowlist.
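The failure in item 1 came from collapsing two very different outcomes, a vault that explicitly says "no" and a vault that cannot be reached at all, into one signal. Below is a minimal sketch of error handling that keeps them distinct, assuming a hypothetical key-status endpoint; the names KeyCheckResult and validate_key_access and the bare HTTP probe are invented for illustration and are not MongoDB's or Azure Key Vault's actual API.

    from enum import Enum
    import urllib.error
    import urllib.request

    class KeyCheckResult(Enum):
        ACCESSIBLE = "accessible"        # vault confirmed the key is usable
        REVOKED = "revoked"              # vault explicitly denied access
        INDETERMINATE = "indeterminate"  # no authoritative answer was received

    def validate_key_access(key_status_url: str, timeout: float = 10.0) -> KeyCheckResult:
        """Probe a (hypothetical) key-status endpoint and classify the outcome."""
        try:
            with urllib.request.urlopen(key_status_url, timeout=timeout):
                return KeyCheckResult.ACCESSIBLE
        except urllib.error.HTTPError as exc:
            # The vault was reachable and answered: 401/403 is a deliberate
            # denial and the only case that may indicate revocation.
            if exc.code in (401, 403):
                return KeyCheckResult.REVOKED
            return KeyCheckResult.INDETERMINATE
        except (urllib.error.URLError, TimeoutError):
            # DNS failure, connection refused, or timeout: an allowlist may be
            # blocking the request, so the key's true status is unknown. This
            # must never be treated as a revocation.
            return KeyCheckResult.INDETERMINATE

A caller would typically also require several consecutive REVOKED results before taking a destructive action such as shutting down a cluster.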

Prevention

Immediate fixes (already implemented):

Next steps

Conclusion

We acknowledge the significant disruption this incident caused and its impact on your applications and business operations. We are committed to preventing similar issues in the future. Although we communicated the upcoming IP changes in advance, we take full responsibility for the conditions that led to these failures. We are implementing the improvements outlined in this postmortem and will continue to invest in more resilient infrastructure change processes to ensure the reliability and stability of our services.

August 11, 2025, 15:53 - 17:22 UTC · Resolved

MongoDB Charts: Creation of new charts and viewing of existing charts failing

This incident has been resolved.

August 11, 2025, 15:30 - 16:28 UTC · Resolved

Elevated LetsEncrypt Errors causing Delayed Cluster Modifications

This incident has been resolved.

August 5, 2025, 20:04 - 21:29 UTC · Resolved

Delayed operations for Atlas clusters in Azure

The issues have been resolved.

July 26, 2025, 01:40 - 04:45 UTC · Resolved

LetsEncrypt API Outage causing Delayed Cluster Modifications

This incident has been resolved.

July 21, 2025, 19:08 UTC - July 22, 2025, 12:56 UTC · Resolved

Delayed operations for Atlas clusters in Azure West US 2

Azure has resolved the issues with West US 2. Cluster operations have returned to normal.

July 18, 2025, 02:51 UTC - July 22, 2025, 14:06 UTC · Resolved

Atlas User Interface is Unavailable

We have increased capacity for the Atlas User Interface and have been monitoring for the past hour. At this stage, we are stable and have recovered. We are continuing to investigate root cause to ensure this specific failure doesn’t happen again.

July 15, 2025, 11:32 - 13:35 UTC · Resolved

Serverless and Flex Upgrades Failing

This incident has been resolved.

July 12, 2025, 00:16 - 01:08 UTC · Resolved

MongoDB Atlas: Data explorer operations fail with a timeout error

This incident has been resolved.

July 8, 2025, 18:03 - 18:42 UTC · Resolved

MongoDB Cloud: Cloud web UI results in error page

This incident has been resolved.

June 18, 2025, 17:39 - 18:16 UTC · Resolved