Our Engineering team has confirmed resolution of the issue. Users should no longer experience errors when attempting to deploy new static sites in NYC3 on App Platform.
If you experience any further problems or have any questions, please open a support ticket within your account.
Our Engineering team has confirmed full resolution of the issue. Users should no longer experience errors when making requests to the Cloud Control Panel or API.
If you experience any further problems or have any questions, please open a support ticket within your account.
From 08:28 to 13:03 UTC, our Engineering team observed an issue with Spaces access keys for DOCR in the AMS3 region. During this time, users encountered a "403 (InvalidAccessKeyId): The access key ID you provided does not exist in our records" error when accessing Spaces keys. Our team fully resolved the issue as of 13:03 UTC. If you continue to experience problems, please open a ticket with our support team from within your Cloud Control Panel. We apologize for any inconvenience caused.
From 18:57 UTC to 22:05 UTC, customers may have experienced issues accessing the Recovery Console due to a service interruption. During this time, Droplet functionality remained unaffected, and customers were still able to use the Recovery ISO option via SSH.
Our Engineering team has confirmed that the issue is now resolved, and Recovery Console access has been fully restored and is operating normally.
If you continue to experience any difficulties, please open a ticket with our Support team. We apologize for the inconvenience caused.
As of 18:15 UTC, our Engineering team has confirmed that the issue impacting accessibility of App Platform static websites has been resolved. Services have been restored and are now functioning normally.
We appreciate your patience and regret the inconvenience caused. If you continue to experience any issues, feel free to open a Support ticket for further investigation.
From 18:02 UTC to 21:10 UTC, customers in the BLR1 region who had not previously created an app may have experienced DOCR (DigitalOcean Container Registry) access errors when attempting to create new apps.
Our Engineering team has confirmed that the issue is fully resolved, and all systems are now operating normally.
If you continue to experience any problems, please open a ticket with our Support team. We apologize for the inconvenience caused.
Our Engineering team has confirmed the full resolution of this issue. From approximately 08:51 UTC – 09:12 UTC, users may have experienced difficulties signing in or accessing resources through the Control Panel and API due to an upstream provider issue. The upstream provider has fixed the issue, and all services are now functioning normally. If you continue to experience problems, please open a ticket with our support team.
Thank you for your patience and we apologize for any inconvenience.
From 16:34 to 19:47 UTC, users may have encountered errors listing backup operations for their PostgreSQL, MySQL, OpenSearch, Redis, and Kafka clusters through the API and UI. Our Engineering team has confirmed full resolution of the issue, and users should no longer experience issues with listing backup operations.
Thank you for your patience, and we apologize for the inconvenience. If you continue to experience problems, please open a ticket with our support team from within your Cloud Control Panel.
From 06:24 UTC to 13:45 UTC, the Gradient AI platform experienced a period of degraded functionality affecting a limited set of features. While the platform remained accessible, the components below did not perform as expected.
Impacted Areas:
Gradient Agent Evaluations
Agent trace visibility
Access management for traces
Agent deletion for agents with traces enabled
Our Engineering team identified the underlying cause and restored full functionality across all affected components. The platform has remained stable since resolution, and monitoring confirms normal performance.
Our Engineering team has fully implemented and confirmed the effectiveness of the fix for the increased guardrail latency. System performance has returned to normal, and all affected services are operating as expected.
We have observed stable metrics during extended monitoring and have not detected any further latency or interruptions.
If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience.