From 15:20 to 16:10 UTC, our Engineering team observed an issue with network connectivity in the BLR1 region. During this time, users may have experienced varying degrees of packet loss and elevated latency on network connections to and from the BLR1 region, resulting in issues connecting to DigitalOcean services.
As of 16:10 UTC, the impact has subsided and users should no longer be facing network connectivity issues. We apologize for the inconvenience. If you are still experiencing issues or have any additional questions, please open a support ticket from within your account.
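If you would like to verify connectivity yourself before opening a ticket, one option is to time repeated TCP connections to an affected endpoint. The sketch below is illustrative only; the host and port are placeholders for one of your own Droplets or services, not values taken from this incident.

```python
import socket
import time

# Placeholder target: substitute the public IP and port of one of
# your own Droplets or services in the affected region.
HOST = "203.0.113.10"
PORT = 22
ATTEMPTS = 20

failures = 0
latencies = []
for _ in range(ATTEMPTS):
    start = time.monotonic()
    try:
        with socket.create_connection((HOST, PORT), timeout=3):
            latencies.append((time.monotonic() - start) * 1000)
    except OSError:
        failures += 1  # timeouts or refusals suggest loss or unreachability
    time.sleep(1)

if latencies:
    print(f"avg connect latency: {sum(latencies) / len(latencies):.1f} ms")
print(f"failed connections: {failures}/{ATTEMPTS}")
```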
From 18:23 to 19:21 UTC, users were unable to complete registration of new cloud.digitalocean.com accounts due to an upstream outage.
This issue is now fully resolved and users should be able to sign up normally.
We apologize for the inconvenience and invite users to retry any failed attempts. If you continue to experience issues or have any questions, please reach out to Support from within your account.
Our Engineering team identified a networking issue with our upstream provider in the SGP1 region. From 00:25 to 04:00 UTC, users may have experienced connectivity issues to DockerHub from Droplets, App Platform and Managed Kubernetes in the SGP1 region.
Our Engineering team has confirmed there is no longer any impact and connectivity should be restored for all customers.
We apologize for the inconvenience. If you continue to experience any issues, please open a support ticket from within your account.
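As a quick self-check, you can confirm from a Droplet that DockerHub's registry endpoint is reachable again. The sketch below is illustrative; note that an HTTP 401 from the unauthenticated v2 endpoint is the normal response and still confirms network reachability.

```python
import urllib.request
import urllib.error

# Docker Hub's registry API endpoint; an unauthenticated request
# normally returns HTTP 401, which still proves the host is reachable.
URL = "https://registry-1.docker.io/v2/"

try:
    urllib.request.urlopen(URL, timeout=10)
    print("reachable (200)")
except urllib.error.HTTPError as e:
    print(f"reachable (HTTP {e.code})")  # 401 expected without credentials
except urllib.error.URLError as e:
    print(f"unreachable: {e.reason}")  # timeouts/DNS failures indicate impact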
From 17:04 UTC to 21:43 UTC, users may not have seen the option to resize an attached Block Storage Volume via the Cloud Control Panel, or may have encountered errors when attempting to resize attached Volumes via the API.
Our Engineering team has confirmed the resolution of the issue and Volume resizes should now be functioning normally in all regions.
If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience.
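For users retrying a resize via the API, the sketch below shows the documented volume-actions request. The volume ID, region, and target size are placeholders to substitute with your own values, and DIGITALOCEAN_TOKEN is assumed to hold a valid API token.

```python
import json
import os
import urllib.request

# Placeholders: substitute your own volume ID, region, and target size.
VOLUME_ID = "your-volume-id"
TOKEN = os.environ["DIGITALOCEAN_TOKEN"]  # assumed to hold a valid API token

req = urllib.request.Request(
    f"https://api.digitalocean.com/v2/volumes/{VOLUME_ID}/actions",
    data=json.dumps(
        {"type": "resize", "size_gigabytes": 200, "region": "nyc1"}
    ).encode(),
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/json",
    },
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["action"]["status"])  # e.g. "in-progress"
```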
Our Engineering team has resolved the issue with Managed Kubernetes clusters in the SFO3 region. All services should now be operating normally. If you continue to experience any problems, please open a ticket with our support team. We apologize for any inconvenience this may have caused.
Our Engineering team identified and resolved an issue affecting networking in our SFO3 region. There were three brief periods of impact: 08:48 - 08:52 UTC, 09:08 - 09:12 UTC, and 10:51 - 10:55 UTC. During these periods, users may have experienced timeouts with network connections to and from the SFO3 region.
Our Engineering team was able to take quick action to mitigate the impact and resolve the issue. All services are now functioning as expected. Thank you for your patience, and we apologize for any inconvenience. If you continue to experience any issues, please open a Support ticket for further analysis.
Our Engineering team has identified the root cause of the issue with creating and resizing Droplets and Droplet-based resources in our SGP1 and SYD1 regions.
No further user impact has occurred since our last update.
In order to fully remediate this issue, our Engineering team has scheduled emergency maintenance, which will take place from 14:00 - 22:00 UTC on August 29th.
For more details, please visit the maintenance notice: https://status.digitalocean.com/incidents/np0zw6m04jm1
If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience.
Our Engineering team has confirmed that the issue impacting our Droplet-based services in the NYC1 region has been completely mitigated. Users should no longer see issues with their Droplets and Droplet-related services.
If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience.
From 10:30 to 11:35 UTC, our Engineering team observed an issue with Container Registry and App Platform builds in the AMS3 and FRA1 regions.
During this time, users may have experienced delays while building their Apps, and some builds may have timed out as a result. Additionally, a subset of customers may have experienced latency when interacting with their Container Registries.
Our Engineering team found that a backing component of the Container Registry was experiencing high memory usage. They were able to remediate that component at 11:35 UTC, which resolved the issue.
We apologize for the inconvenience. If you have any questions or continue to experience issues, please reach out via a Support ticket on your account.