Our Engineering team has confirmed the full resolution of the network connectivity issue affecting the BLR1 region. Users should experience expected performance when accessing Droplets and other services.
If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience.
From 13:50 to 14:15 UTC, our Engineering team observed an issue with an upstream provider impacting network connectivity in the BLR1 region.
During this time, users may have experienced increased latency or packet loss when accessing Droplets and Droplet-based services, such as Managed Kubernetes and Database Clusters, in the BLR1 region.
The impact has now subsided, and as of 14:15 UTC, users should see normal performance when accessing Droplets and other services.
We apologize for the inconvenience. If you are still experiencing any problems or have additional questions, please open a support ticket from within your account.
Between 14:28 PM & 14:35 PM UTC today, our Engineering team has identified an issue impacting the availability of multiple services, including Droplets, Volumes, Spaces, Kubernetes, Load Balancers, Managed Databases, etc. During this period, users might have noticed 500 Internal Server Errors or 503 Service Unavailable Errors when accessing these services and other dependent services.
Our team has taken appropriate measures to address the issue. We can confirm that all services have been restored and are now functioning normally.
We regret the inconvenience caused. If you continue to experience any issues, please create a support ticket for further analysis.
Our Engineering team has resolved the performance issue affecting Spaces. From approximately 15:42 to 16:14 UTC, customers may have experienced slow performance or limited availability when accessing Spaces and its objects via the Control Panel or API. Accessing and creating Container Registries, as well as App Platform builds, were also affected during this time. All services should now be functioning normally.
If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience.
The issue impacting App Platform deployments has been successfully resolved. Users should no longer encounter delays during the build phase or see deployments getting stuck. All services are now confirmed to be stable and operating normally.
We appreciate your patience throughout this. If you continue to experience any issues, please create a support ticket for further analysis.
Between 08:00 UTC and 22:53 UTC, an issue with an upstream provider affected multiple DigitalOcean services, including SMS-based 2FA (causing code delivery failures), Docker Hub-hosted images (impacting Managed Kubernetes and App Platform deployments), and SnapShooter backup jobs. The issue has been resolved, and all services are now stable and operating normally.
We appreciate your patience throughout this. If you continue to experience any issues, please create a support ticket from within the Cloud Control Panel for further analysis.
Our Engineering team has confirmed that the IPv6 connectivity issue has been fully resolved across all regions. Users should no longer encounter IPv6 connectivity problems. If you continue to experience problems, please open a ticket with our support team. Thank you for your patience, and we apologize for any inconvenience.
Our Engineering team has resolved the issue affecting the creation of NVIDIA GPU nodes on new Kubernetes clusters. Users should no longer encounter issues when creating NVIDIA GPU nodes on new Kubernetes clusters.
If you continue to experience any issues, please open a support ticket. We sincerely apologize for the inconvenience caused and appreciate your understanding.