Our Engineering team has confirmed that the external network incident affecting multiple DigitalOcean services has been fully mitigated. The impacted services, including Gen AI tools, App Platform, Load Balancers, Spaces, and provisioning or management actions for new clusters, have recovered and are now operating normally. All requests are completing successfully.
Thank you for your patience. If you continue to experience any issues, please open a support ticket from within your account.
From 16:59 to 17:34 UTC, our Engineering team observed an issue with block storage volumes in the NYC1 and AMS3 regions. During this time, users may have experienced failures when attempting to create, snapshot, attach, detach, or resize volumes. There was no impact to the performance or availability of existing volumes. Our team fully resolved the issue as of 17:34 UTC, and volumes should be operating normally. If you continue to experience problems, please open a ticket with our support team from within your Cloud Control Panel. We apologize for any inconvenience caused.
Our Engineering team has confirmed the full resolution of the network connectivity issue affecting the BLR1 region. Users should now see expected performance when accessing Droplets and other services.
If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience.
From 13:50 to 14:15 UTC, our Engineering team observed an issue with an upstream provider impacting network connectivity in the BLR1 region.
During this time, users may have experienced an increase in latency or packet loss when accessing Droplets and Droplet-based services, like Managed Kubernetes and Database Clusters, in the BLR1 region.
The impact has subsided, and as of 14:15 UTC users should see improved performance when accessing Droplets and other services.
We apologize for the inconvenience. If you are still experiencing any problems or have additional questions, please open a support ticket within your account.
Between 14:28 and 14:35 UTC today, our Engineering team identified an issue impacting the availability of multiple services, including Droplets, Volumes, Spaces, Kubernetes, Load Balancers, and Managed Databases. During this period, users might have encountered 500 Internal Server Error or 503 Service Unavailable responses when accessing these services and other dependent services.
Our team has taken appropriate measures to address the issue. We can confirm that all services have been restored and are now functioning normally.
We apologize for the inconvenience caused. If you continue to experience any issues, please create a support ticket for further analysis.
Our Engineering team has resolved the performance issue affecting Spaces. From approximately 15:42 to 16:14 UTC, customers may have experienced slow performance or limited availability when accessing Spaces or its objects via the Control Panel or API. Container Registry access and creation, along with App builds, were also affected during this time. All services should now be functioning normally.
If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience.
The issue impacting App Platform deployments has been successfully resolved. Users should no longer encounter delays during the build phase or deployments getting stuck. All services are now confirmed to be stable and operating normally.
We appreciate your patience throughout this. If you continue to experience any issues, please create a support ticket for further analysis.