From 17:22 UTC to 17:27 UTC, we experienced an issue with requests to the Cloud Control Panel and API.
During that timeframe, users may have experienced an increase in 5xx errors for Cloud/API requests. The issue self-resolved quickly, and our Engineering team is continuing to investigate the root cause to ensure it does not recur.
Thank you for your patience, and we apologize for any inconvenience. If you continue to experience any issues, please open a Support ticket for further analysis.
From 19:33 UTC to 21:02 UTC on July 18th, App Platform users may have experienced delays when deploying new Apps or when deploying updates to existing Apps in SFO3.
Our Engineering team has deployed a fix for this issue. The impact has been resolved, and users should no longer see any issues with the affected services.
If you continue to experience problems, please open a ticket with our support team from your Cloud Control Panel. Thank you for your patience, and we apologize for any inconvenience.
Our Engineering team has confirmed the full resolution of the issue impacting the ability to create and manage Functions through the Cloud Control Panel and API in our TOR1 region. We appreciate your patience throughout the process.
If you continue to see errors, please open a ticket with our Support team, and we will be glad to assist you further.
Our Engineering team has resolved the issue with processing payments via PayPal on our platform. Services should now be operating normally.
If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience.
Our Engineering team has confirmed that the issue causing delayed SMS delivery reports when sending messages has been fully resolved.
We appreciate your patience throughout this process. If you continue to experience problems, please open a ticket with our support team for further review.
Our Engineering team identified and resolved an issue with IPv6 networking in the BLR1 region.
From 12:15 UTC to 15:32 UTC, users may have experienced issues connecting to Droplets and Droplet-based services in the BLR1 region using IPv6 addresses.
Our Engineering team quickly identified the root cause of the incident to be related to recent maintenance in that region and implemented a fix.
We apologize for the disruption. If you continue to experience any issues, please open a Support ticket from within your account.
Our Engineering team identified and resolved an issue with the creation of Snapshots and Backups in the NYC3 region.
From 20:11 UTC to 21:04 UTC, users may have experienced errors while taking Snapshots of Droplets in NYC3. Backup creation was also failing; however, failed Backups will be retried automatically.
Our Engineering team quickly identified the root cause of the incident to be capacity constraints on internal storage clusters and was able to rebalance capacity, allowing creations to succeed.
We apologize for the disruption. If you continue to experience any issues, please open a Support ticket from within your account.
Beginning July 2nd at 20:55 UTC, team account owners may have experienced an issue with removing other users from their team accounts. As of July 3rd at 20:47 UTC, a fix was deployed, and our Engineering team has confirmed full resolution of the issue. Team owners should now be able to remove other users from their teams without issue.
Thank you for your patience. If you continue to experience any problems, please open a support ticket from within your account.
Our Engineering team identified and resolved an issue impacting multiple services in multiple regions.
From 19:53 UTC to 20:22 UTC, users may have experienced errors with the following operations in all regions:
Droplet creation with a root password
Droplet root password resets
Droplet Snapshots
MongoDB cluster creation
Let's Encrypt certificate creation (for Load Balancers and Spaces)
Load Balancer creation with certificates
Spaces bucket and key creation
Fetching CSV invoices
Bulk actions in the Spaces UI
Additionally, users may have experienced increased latency while creating and updating Apps.
Our Engineering team quickly identified the root cause of the incident as an incorrect firewall policy applied to core infrastructure and rolled back the change.
We apologize for the disruption. If you continue to experience any issues, please open a Support ticket from within your account.