Our Engineering team identified and resolved an issue with IPv6 networking in the BLR1 region.
From 12:15 to 15:32 UTC, users may have experienced issues connecting to Droplets and Droplet-based services in the BLR1 region via IPv6 addresses.
Our Engineering team quickly identified the root cause of the incident as a recent maintenance activity in that region and implemented a fix.
We apologize for the disruption. If you continue to experience any issues, please open a Support ticket from within your account.
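For users who want to verify IPv6 reachability to a Droplet themselves before opening a ticket, a short client-side check is below. This is a minimal sketch; the address and port are placeholders for your own Droplet's IPv6 address and a service you expect to be listening.

```python
# Minimal IPv6 reachability check. The address below is from the
# reserved documentation range (2001:db8::/32); substitute your own
# Droplet's IPv6 address and a port you expect to be listening.
import socket

DROPLET_IPV6 = "2001:db8::1"  # placeholder, not a real Droplet address
PORT = 22                     # e.g. SSH

try:
    # Connecting to an IPv6 literal forces the attempt over IPv6 only.
    with socket.create_connection((DROPLET_IPV6, PORT), timeout=5):
        print("IPv6 connection succeeded")
except OSError as exc:
    print(f"IPv6 connection failed: {exc}")
```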
Our Engineering team identified and resolved an issue with creation of Snapshots and Backups in the NYC3 region.
From 20:11 to 21:04 UTC, users may have experienced errors while taking Snapshots of Droplets in NYC3. Backup creation was also failing; however, failed Backups will be retried automatically.
Our Engineering team quickly identified the root cause of the incident as a capacity constraint on internal storage clusters and rebalanced capacity across them, allowing creations to succeed.
We apologize for the disruption. If you continue to experience any issues, please open a Support ticket from within your account.
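For anyone scripting Snapshot creation, the affected operation corresponds to the documented Droplet actions endpoint in the DigitalOcean API v2. A minimal sketch is below; the token and Droplet ID are placeholders.

```python
# Sketch: trigger a Droplet Snapshot via the DigitalOcean API v2
# Droplet actions endpoint. DO_TOKEN and DROPLET_ID are placeholders.
import os
import requests

DO_TOKEN = os.environ["DO_TOKEN"]  # personal access token
DROPLET_ID = 123456789             # placeholder Droplet ID

resp = requests.post(
    f"https://api.digitalocean.com/v2/droplets/{DROPLET_ID}/actions",
    headers={"Authorization": f"Bearer {DO_TOKEN}"},
    json={"type": "snapshot", "name": "pre-maintenance-snapshot"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["action"]["status"])  # "in-progress" on success
```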
Beginning July 2nd at 20:55 UTC, team account owners may have seen an issue with removing other users from their team accounts. As of July 3rd at 20:47 UTC, a fix was deployed, and our Engineering team has confirmed full resolution. Team owners should once again be able to remove other users from their teams.
Thank you for your patience. If you continue to experience any problems, please open a Support ticket from within your account.
Our Engineering team identified and resolved an issue impacting multiple services in multiple regions.
From 19:53 to 20:22 UTC, users may have experienced errors with the following operations in all regions:
Droplet creates with a root password
Droplet root password resets
Droplet Snapshots
MongoDB cluster creates
Let's Encrypt certificate creation (for Load Balancers and Spaces)
Load Balancer creation with certificates
Spaces bucket and key creations
Fetching CSV invoices
Bulk actions in the Spaces UI
Additionally, users may have experienced latency while creating and updating Apps.
Our Engineering team quickly identified the root cause of the incident to be an incorrect firewall policy applied to core infrastructure and rolled back the change.
We apologize for the disruption. If you continue to experience any issues, please open a Support ticket from within your account.
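Incidents like this surface as transient errors on otherwise valid requests, so automation that calls the API may benefit from retries with exponential backoff. The wrapper below is an illustrative sketch, not DigitalOcean-specific, and retries only on server-side (5xx) errors.

```python
# Generic retry-with-backoff wrapper for transient API failures, such as
# 5xx responses during an incident window. Illustrative only.
import time
import requests

def request_with_retries(method, url, retries=4, backoff=2.0, **kwargs):
    resp = None
    for attempt in range(retries):
        resp = requests.request(method, url, timeout=30, **kwargs)
        # Client errors (4xx) won't recover on retry; return them as-is.
        if resp.status_code < 500:
            return resp
        time.sleep(backoff * (2 ** attempt))  # exponential backoff
    return resp  # caller inspects the final failed response
```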
Our Engineering team has confirmed that this incident has been fully resolved.
We appreciate your patience throughout this process, and if you continue to experience problems, please open a ticket with our support team for further review.
Our Engineering team has confirmed the full resolution of the issue impacting Managed Kubernetes cluster creation. As of 12:18 UTC, the functionality has been completely restored, and users should be able to deploy new clusters via the Cloud Control Panel and API.
If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience.
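For users who deploy clusters via the API, creation goes through the documented /v2/kubernetes/clusters endpoint. The sketch below is illustrative; the name, region, version, and node-size slugs are placeholders, and current slugs can be listed via the /v2/kubernetes/options endpoint.

```python
# Sketch: create a Managed Kubernetes cluster via the DigitalOcean API v2.
# All slugs below are placeholders; fetch valid values from
# /v2/kubernetes/options before use.
import os
import requests

DO_TOKEN = os.environ["DO_TOKEN"]  # personal access token

resp = requests.post(
    "https://api.digitalocean.com/v2/kubernetes/clusters",
    headers={"Authorization": f"Bearer {DO_TOKEN}"},
    json={
        "name": "example-cluster",  # placeholder cluster name
        "region": "nyc1",           # placeholder region slug
        "version": "1.28.2-do.0",   # placeholder version slug
        "node_pools": [
            {"size": "s-2vcpu-4gb", "count": 3, "name": "default-pool"},
        ],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["kubernetes_cluster"]["id"])
```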
As of 13:10 UTC, our Engineering team has confirmed full resolution of the networking issue in our SFO region.
If you continue to experience problems, please open a ticket with our support team from your Cloud Control Panel. Thank you for your patience and we apologize for any inconvenience.
From 17:02 to 18:44 UTC, the Spaces API experienced availability dips, causing a subset of users to see latency or 503 errors on Spaces bucket operations in SGP1.
The fix implemented by our Engineering team has been monitored, and availability for the Spaces API is stable. Users should be able to interact with their Spaces buckets successfully.
If you continue to experience any issues, please reach out to the Support team from within your account.
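Because Spaces exposes an S3-compatible API, a quick way to confirm bucket operations are healthy again is a list call through boto3 pointed at the regional endpoint. The keys and bucket name below are placeholders.

```python
# Sketch: verify Spaces bucket access in SGP1 via boto3, using the
# S3-compatible Spaces endpoint. Keys and bucket name are placeholders.
import boto3

session = boto3.session.Session()
client = session.client(
    "s3",
    region_name="sgp1",
    endpoint_url="https://sgp1.digitaloceanspaces.com",
    aws_access_key_id="SPACES_KEY",         # placeholder Spaces access key
    aws_secret_access_key="SPACES_SECRET",  # placeholder Spaces secret key
)

# A successful listing here confirms the earlier 503s are resolved.
for obj in client.list_objects_v2(Bucket="example-bucket").get("Contents", []):
    print(obj["Key"])
```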