As of 18:15 UTC, our Engineering team has confirmed that the issue impacting accessibility of App Platform static websites has been resolved. Service has been restored and is now functioning normally.
We appreciate your patience and regret the inconvenience caused. If you continue to experience any issues, feel free to open a Support ticket for further investigation.
From 18:02 UTC to 21:10 UTC, customers in the BLR1 region who had not previously created an app may have experienced DOCR (DigitalOcean Container Registry) access errors when attempting to create new apps.
Our Engineering team has confirmed that the issue is fully resolved, and all systems are now operating normally.
If you continue to experience any problems, please open a ticket with our Support team. We apologize for the inconvenience caused.
Our Engineering team has confirmed the full resolution of this issue. From approximately 08:51 UTC – 09:12 UTC, users may have experienced difficulties signing in or accessing resources through the Control Panel and API due to an upstream provider issue. The upstream provider has fixed the issue, and all services are now functioning normally. If you continue to experience problems, please open a ticket with our support team.
Thank you for your patience and we apologize for any inconvenience.
From 16:34 to 19:47 UTC, users may have encountered errors listing backup operations for their PostgreSQL, MySQL, OpenSearch, Redis, and Kafka clusters through the API and UI. Our Engineering team has confirmed full resolution of the issue, and users should no longer experience issues with listing backup operations.
Thank you for your patience, and we apologize for the inconvenience. If you continue to experience problems, please open a ticket with our support team from within your Cloud Control Panel.
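For customers who want to double-check backup listing from their own environment, below is a minimal, read-only sketch that calls the public API's database backups endpoint. It is not an official diagnostic; the DIGITALOCEAN_TOKEN and CLUSTER_UUID environment variables are placeholders you would supply yourself.

```python
# Minimal sketch (assumption: a personal access token and the cluster's UUID
# are available as environment variables). Lists backups for one managed
# database cluster via the public API to confirm the listing call succeeds.
import os

import requests

API_TOKEN = os.environ["DIGITALOCEAN_TOKEN"]   # placeholder: your API token
CLUSTER_UUID = os.environ["CLUSTER_UUID"]      # placeholder: your cluster's UUID

resp = requests.get(
    f"https://api.digitalocean.com/v2/databases/{CLUSTER_UUID}/backups",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()  # a 4xx/5xx here means the listing call is still failing

for backup in resp.json().get("backups", []):
    print(backup.get("created_at"), backup.get("size_gigabytes"))
```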
From 06:24 to 13:45 UTC, the Gradient AI platform experienced a period of degraded functionality affecting a limited set of features. While the platform remained accessible, the components listed below did not perform as expected.
Impacted Areas:
Gradient Agent Evaluations
Agent trace visibility
Access management for traces
Agent deletion for agents with traces enabled
Our Engineering team identified the underlying cause and restored full functionality across all affected components. The platform has remained stable since resolution, and monitoring confirms normal performance.
Our Engineering team has fully implemented the fix for the increased guardrail latency and confirmed its effectiveness. System performance has returned to normal, and all affected services are operating as expected.
We have observed stable metrics during extended monitoring and have not detected any further latency or interruptions.
If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience.
From 18:55 to 19:32 UTC, our Engineering team observed an issue with all control plane operations (Create, Scale, Fork, etc.) for all non-Mongo Managed Database clusters. During this time, users encountered long-running requests made through either the Cloud Console UI or the API, including provisioning new clusters, listing clusters, and similar operations. Our team fully resolved the issue as of 19:32 UTC, and all services for non-Mongo Managed Databases should now be operating normally. If you continue to experience problems, please open a ticket with our support team from within your Cloud Control Panel. We apologize for any inconvenience caused.
Our Engineering team has confirmed that the external network incident affecting multiple DigitalOcean services has been fully mitigated. The impacted services, including Gen AI tools, App Platform, Load Balancers, Spaces, and provisioning or management actions for new clusters, have recovered and are now operating normally. All requests are completing successfully.
Thank you for your patience. If you continue to experience any issues, please open a support ticket from within your account.
From 16:59 to 17:34 UTC, our Engineering team observed an issue with block storage volumes in the NYC1 and AMS3 regions. During this time, users may have experienced failures when attempting to create, snapshot, attach, detach, or resize volumes. There was no impact to the performance or availability of existing volumes. Our team fully resolved the issue as of 17:34 UTC, and volumes should be operating normally. If you continue to experience problems, please open a ticket with our support team from within your Cloud Control Panel. We apologize for any inconvenience caused.
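If you would like a quick spot check from your side that the volumes endpoint is responding, here is a minimal read-only sketch against the public API. It assumes a personal access token in a DIGITALOCEAN_TOKEN environment variable (a placeholder you would set yourself) and is not an official recovery test.

```python
# Minimal read-only sketch: lists block storage volumes through the public API
# to confirm the volumes endpoint responds. DIGITALOCEAN_TOKEN is a placeholder
# environment variable holding a personal access token.
import os

import requests

resp = requests.get(
    "https://api.digitalocean.com/v2/volumes",
    headers={"Authorization": f"Bearer {os.environ['DIGITALOCEAN_TOKEN']}"},
    timeout=30,
)
resp.raise_for_status()  # raises if the API is still returning errors

for volume in resp.json().get("volumes", []):
    # name, region slug, and size are standard fields in the volumes response
    print(volume.get("name"), volume.get("region", {}).get("slug"), volume.get("size_gigabytes"))
```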