Metrics ingestion delays and delayed cluster operations


Incident resolved in 6h20m34s

Update

Executive Summary

Incident Date/Time: October 6–7, 2025
Duration: 6h20m34s from the first public status update to resolution (partial impact observed intermittently over October 6–7)
Impact: Delayed or missing metrics and Host Down alerts for Atlas and Cloud Manager clusters, and delays to some Atlas cluster operations; the availability and functionality of customer clusters were not affected.

Root Cause: Increased load on an internal backing database servicing Atlas and Cloud Manager monitoring systems due to a combination of unsharded high-traffic collections concentrated on a single shard, inefficient query patterns, and spikes in resource consumption coinciding with a software rollout.

Status: Resolved

What Happened

On October 6 and 7, MongoDB Atlas and Cloud Manager experienced delays in metrics ingestion and backend disruptions that slowed certain operational workflows, including Atlas cluster operations and Host Down alerting. The primary contributors were elevated resource consumption and uneven data distribution in an internal database cluster that supports critical monitoring and operational systems.

Initial investigations pointed to high resource usage in one shard of the backing database cluster. However, further review revealed systemic inefficiencies, including:

  * High-traffic collections that were unsharded and therefore concentrated on a single shard
  * Query patterns that consumed disproportionate resources on that shard
  * Spikes in resource consumption coinciding with a software rollout

Though the functionality and availability of customer clusters remained unaffected, customers experienced degraded monitoring performance and delayed Atlas dashboard updates. MongoDB implemented mitigation measures to stabilize the system and then addressed the underlying root causes to restore normal operational workflows.

Impact Assessment

Affected Services:

  * MongoDB Atlas metrics ingestion and monitoring dashboards
  * Cloud Manager metrics ingestion and monitoring
  * Host Down alerting
  * Atlas cluster operations

Customer Impact:

  * Metrics delayed or missing for Atlas and Cloud Manager clusters
  * Host Down alerts not firing for part of the incident window
  * Some Atlas cluster operations delayed
  * No impact to the availability or functionality of customer clusters

Root Cause Analysis

The incident resulted from several contributing factors:

  1. High-traffic collections in the internal backing database were unsharded, concentrating their read and write load on a single shard.
  2. Inefficient query patterns consumed disproportionate resources on that shard.
  3. A software rollout coincided with spikes in resource consumption, adding further pressure to the already loaded shard.

Combined, these factors overwhelmed the affected shard and caused the backend delays seen in metrics ingestion and operational requests.
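
For illustration, a check along the following lines can show whether a busy collection is unsharded and therefore concentrated on a single shard. This is a minimal PyMongo sketch against hypothetical database and collection names, not the internal systems involved in this incident, and the exact fields returned by collStats vary by server version.

    from pymongo import MongoClient

    # Connect to the mongos router of a (hypothetical) sharded cluster.
    client = MongoClient("mongodb://mongos.example.internal:27017")
    db = client["metrics"]  # placeholder database name

    # collStats reports whether the collection is sharded and, when it is,
    # includes a per-shard breakdown of document counts and sizes.
    stats = db.command("collStats", "ingest_events")  # placeholder collection name
    print("sharded:", stats.get("sharded", False))

    # If most documents sit on one shard (or the collection is unsharded),
    # that shard absorbs nearly all of the collection's read and write load.
    for shard_name, shard_stats in stats.get("shards", {}).items():
        print(shard_name, "docs:", shard_stats.get("count"), "bytes:", shard_stats.get("size"))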

Prevention

MongoDB has identified several lasting improvements and implemented strategic fixes to prevent recurrence:

  1. Collections experiencing concentrated load will be sharded to distribute traffic more evenly across multiple nodes, alleviating pressure on single shards (a minimal sketch of this follows the list).
  2. Inefficient queries are being optimized to improve resource utilization and reduce latency during routine operations (see the second sketch after the list).
  3. Additional infrastructure capacity has been provisioned to better handle elevated traffic volumes. Capacity planning processes are also being refined to anticipate future spikes in load.
  4. Software deployment processes are being redesigned to account for the predictable increases in resource demand that occur during rollouts, ensuring smoother deployments.
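
To make item 1 concrete, here is a minimal sketch, assuming a hypothetical metrics.ingest_events collection and a cluster_id field, of sharding a hot collection on a hashed key with PyMongo so that writes spread across shards instead of landing on one. The names are placeholders only; a real shard-key choice would depend on the collection's actual query patterns.

    from pymongo import MongoClient

    # Administer a (hypothetical) sharded cluster through its mongos router.
    client = MongoClient("mongodb://mongos.example.internal:27017")

    # Enable sharding for the placeholder database, then shard the hot
    # collection on a hashed key so inserts hash-distribute across shards.
    client.admin.command("enableSharding", "metrics")
    client.admin.command(
        "shardCollection",
        "metrics.ingest_events",
        key={"cluster_id": "hashed"},  # assumed key, for illustration only
    )

A hashed key gives even write distribution at the cost of targeted range queries; for append-heavy monitoring data that trade-off is often acceptable, but it is a design decision rather than a default.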
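
For item 2, a typical first step is to confirm with an explain plan that a hot query is scanning the entire collection, then add an index matching its shape. This is again a hedged sketch with placeholder names and a made-up query; it shows the general technique, not the specific queries involved in this incident.

    from pymongo import ASCENDING, DESCENDING, MongoClient

    client = MongoClient("mongodb://mongos.example.internal:27017")
    coll = client["metrics"]["ingest_events"]  # placeholder names

    # Explain a representative hot query; a COLLSCAN stage in the winning
    # plan means every document is examined on each execution.
    plan = coll.find({"cluster_id": "abc123", "ts": {"$gte": 1700000000}}).explain()
    print(plan["queryPlanner"]["winningPlan"])

    # A compound index matching the query's equality-then-range shape lets
    # the same query run as an IXSCAN over only the relevant documents.
    coll.create_index([("cluster_id", ASCENDING), ("ts", DESCENDING)])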

Next Steps

We are continuing to monitor the results of the fixes described above and are tracking the remaining items listed under Prevention through to completion.

Conclusion

We apologize for the impact of this event on our customers and recognize that it affected their operations. MongoDB’s highest priorities are security, durability, availability, and performance. We are committed to learning from this event and to updating our internal processes to prevent similar scenarios in the future.

Oct 13, 2025 - 15:04 UTC

Resolved

This incident has been resolved.

Oct 07, 2025 - 20:03 UTC

Update

A fix has been implemented and we are monitoring the results.

Oct 07, 2025 - 19:32 UTC

Update

We have identified the cause of the issue and have taken corrective actions. We are monitoring the impact of those mitigations. During this time Host Down alerts, metrics, and some Atlas Cluster Operations may still be delayed or missing.

Oct 07, 2025 - 17:44 UTC

Investigating

We are continuing to investigate the issue. During this time Host Down alerts will not fire and metrics may be missing or delayed for Atlas and Cloud Manager Clusters. Cluster operations may also be delayed.

Oct 07, 2025 - 15:45 UTC

Investigating

We are currently investigating a delay in our metric ingestion pipeline. Customers may see delays in cluster operations.

Oct 07, 2025 - 13:42 UTC