Our Engineering team has confirmed the full resolution of the issue with the upstream provider. Users should now be able to deploy to App Platform and manage their Spaces Buckets as normal.
If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience.
Our Engineering team investigated reports of Droplets becoming unresponsive. This issue was found to be caused by guest-level kernel hangs, and affected Droplets required a power cycle to restore functionality.
Upon further investigation, this was identified as an upstream kernel issue affecting Ubuntu 20.04 running kernel version 5.4.0-122-generic. This kernel version has exhibited stability problems that can cause Droplets to become unresponsive intermittently. Customers running Ubuntu 20.04 with kernel 5.4.0-122-generic are advised to upgrade to a newer kernel version using the commands below:
sudo apt update
sudo apt upgrade linux-virtual
This problem is fixed in the newer kernel 5.4.0-123.139. The recommended long-term option is to migrate to a more recent, fully supported OS version, such as Ubuntu 24.04, to ensure continued system stability and support. Affected customers can take the steps above to avoid further impact.
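After the kernel upgrade completes, a reboot is required for the new kernel to take effect. As a general sketch (standard Linux commands, not specific to any one environment), the running kernel version can then be confirmed with:
sudo reboot
uname -r
If uname -r reports a version newer than 5.4.0-122-generic, the affected kernel is no longer in use.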
We appreciate your patience and apologize for any inconvenience caused.
Significant submarine cable outages continue to impact multiple network carriers on the Indian subcontinent, with increased latency and packet loss to our BLR1 region occurring sporadically throughout the day.
On a positive note, our upstream providers have worked diligently over the past couple of days to minimize the impact of the loss of submarine capacity. As a result, service degradation is now mainly limited to the busier evening peak hours.
For the moment, we have no updates to provide regarding when the situation might improve. Repair times for submarine cables are typically on the order of weeks or months. However, further short-term improvements may still be possible as upstream carriers work to re-balance traffic where feasible.
Separately, we continue to look into potential avenues for additional mitigations and will keep our customers apprised as we make progress in this area.
We apologize for the ongoing inconvenience created by this extraordinary situation and thank our customers for their patience.
Our Engineering team identified an issue with the public Droplet API that, from 18:53 to 19:15 UTC, caused difficulties creating, accessing, and listing Droplets via the Cloud Control Panel or API. The Autoscaler and LBaaS APIs were also affected. Our team has fully resolved the issue, and as of 19:15 UTC, all services are operating normally.
We apologize for the inconvenience. If you are still experiencing any problems or have additional questions, please open a support ticket within your account.
From 14:24 to 14:41 UTC, our Engineering team identified issues affecting Block Storage Volumes in the NYC1 region. During this time, users may have experienced difficulties processing requests on Volumes, and Kubernetes clusters in the NYC1 region may have been impacted as well. Our team has fully resolved the issues, and as of 14:42 UTC, all services are operating normally.
We apologize for the inconvenience. If you are still experiencing any problems or have additional questions, please open a support ticket within your account.
Our Engineering team has confirmed full resolution of the issue impacting Droplet creation in all regions. Users can now create Droplets without any issues.
We appreciate your patience while we worked to fix this issue. If you continue to experience any problems, please open a ticket from your account for further investigation.
From 14:40 UTC to 14:52 UTC, users may have experienced errors when trying to use the API or while accessing our Cloud Control Panel at https://cloud.digitalocean.com.
Our Engineering team has implemented a fix and confirmed full resolution of the issue. Users should now be able to access the API and Cloud Control Panel normally.
If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience.
As of 18:45 UTC, our Engineering team has resolved the issues with OpenAI rate limits that were affecting certain DigitalOcean services. The problem, which began at approximately 09:30 UTC, caused rate-limit errors (HTTP 429) for Agents using OpenAI models for serverless inference.
Systems Affected:
Serverless inference
Evaluations
Technical Details: The issue was caused by exceeding OpenAI rate limits, which resulted in 429 errors for OpenAI models.
Duration: The incident lasted for approximately 9 hours, from 09:30 UTC to 18:45 UTC.
We apologize for any inconvenience this may have caused and appreciate your patience as we worked to resolve the issue. If you continue to experience problems, please open a ticket with our Support team.
Our Engineering team identified and resolved an issue with doctl v1.136.0 that was affecting automations and integrations, including App Platform and GitHub Actions.
Users with the affected version of doctl may have seen 404 errors blocking their pipelines. The issue was caused by missing binaries in the release. The broken release has been deleted, and automations that depend on doctl should now function as expected.
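For reference, users who are unsure which doctl version is installed locally or in CI can check with the standard version subcommand (a minimal check, assuming doctl is available on the PATH):
doctl version
If the output reports v1.136.0, updating to the forthcoming release or pinning a previous working version should avoid the missing-binary errors.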
We apologize for the inconvenience. A new release will be published soon.
Our Engineering team has implemented a fix for the issue that was preventing users from submitting or viewing tickets through our Support Ticket Portal. At this time, all users should be able to access the portal and manage tickets normally via https://cloudsupport.digitalocean.com/s/.
Additionally, any previously submitted tickets that were affected by this issue have now been successfully routed to our Support Team, who will be addressing them as quickly as possible.
We sincerely apologize for the inconvenience and appreciate your patience while we worked to resolve this issue.