Our Engineering team has implemented a fix that has significantly reduced the impact of the issue and largely mitigated the outage. Although the situation has improved, some users may still experience intermittent latency. We are closely monitoring the results and will continue working towards full resolution. We will provide another update once we have confirmed the issue is fully resolved.
From 18:55 to 19:32 UTC, our Engineering team observed an issue with all control plane operations (Create, Scale, Fork, etc.) for all non-Mongo Managed Database clusters. During this time, users encountered long-running requests made through either the Cloud Console UI or the API, including provisioning new clusters, listing clusters, etc. Our team fully resolved the issue as of 19:32 UTC, and all services for non-Mongo Managed Databases should now be operating normally. If you continue to experience problems, please open a ticket with our support team from within your Cloud Control Panel. We apologize for any inconvenience caused.
Our Engineering team has confirmed that the external network incident affecting multiple DigitalOcean services has been fully mitigated. The impacted services, including Gen AI tools, App Platform, Load Balancers, Spaces, and provisioning or management actions for new clusters, have recovered and are now operating normally. All requests are completing successfully.
Thank you for your patience. If you continue to experience any issues, please open a support ticket from within your account.
From 16:59 to 17:34 UTC, our Engineering team observed an issue with block storage volumes in the NYC1 and AMS3 regions. During this time, users may have experienced failures when attempting to create, snapshot, attach, detach, or resize volumes. There was no impact to the performance or availability of existing volumes. Our team fully resolved the issue as of 17:34 UTC, and volumes should be operating normally. If you continue to experience problems, please open a ticket with our support team from within your Cloud Control Panel. We apologize for any inconvenience caused.
Our Engineering team has confirmed the full resolution of the networking connectivity issue affecting the BLR1 region. Users should experience expected performance when accessing Droplets and other services.
If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience.
From 13:50 to 14:15 UTC, our Engineering team observed an issue with an upstream provider impacting network connectivity in the BLR1 region.
During this time, users may have experienced an increase in latency or packet loss when accessing Droplets and Droplet-based services, such as Managed Kubernetes and Database Clusters, in the BLR1 region.
The impact has now subsided, and as of 14:15 UTC users should be seeing improved performance when accessing Droplets and other services.
We apologize for the inconvenience. If you are still experiencing any problems or have additional questions, please open a support ticket from within your account.
Between 14:28 and 14:35 UTC today, our Engineering team identified an issue impacting the availability of multiple services, including Droplets, Volumes, Spaces, Kubernetes, Load Balancers, Managed Databases, etc. During this period, users may have encountered 500 Internal Server Error or 503 Service Unavailable responses when accessing these and other dependent services.
Our team has taken appropriate measures to address the issue. We can confirm that all services have been restored and are now functioning normally.
We regret the inconvenience caused. If you continue to experience any issues, please open a support ticket for further analysis.