Incident History

Intermittent errors on select applications in us-east-2 (Ohio)

Between 08:30 UTC and 14:30 UTC, some customers in a specific environment in us-east-2 experienced sporadic 502 errors on HTTP requests. The issue was traced to an internal DNS disruption that intermittently prevented resolution for certain application endpoints, affecting only a portion of traffic to those applications. The issue is now resolved, and we are implementing measures to prevent similar occurrences in the future.
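Client applications caught in this class of failure can soften intermittent resolution errors with a short retry loop. The sketch below is a generic client-side mitigation, not part of the platform remediation; the function name is hypothetical and `"localhost"` merely stands in for an affected application endpoint:

```python
import socket
import time

def resolve_with_retry(hostname, port=443, attempts=3, base_delay=0.2):
    """Retry DNS resolution with exponential backoff before surfacing the error."""
    for attempt in range(attempts):
        try:
            return socket.getaddrinfo(hostname, port)
        except socket.gaierror:
            if attempt == attempts - 1:
                raise  # still failing after the final attempt
            time.sleep(base_delay * (2 ** attempt))

# "localhost" stands in for an affected application endpoint.
addrs = resolve_with_retry("localhost")
```

A transient resolution failure is retried a couple of times before the caller sees an error, which is usually enough to ride out sporadic DNS disruption without masking a sustained outage.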

Aug 12, 2025 20:40 UTC - Aug 12, 2025 20:40 UTC Resolved

GitHub issues may impact deployments

GitHub has reported that this issue is resolved.

Aug 12, 2025 14:41 UTC - Aug 12, 2025 18:06 UTC Resolved

Log Service Degradation

This incident has been resolved.

Aug 06, 2025 18:32 UTC - Aug 06, 2025 19:29 UTC Resolved

Laravel Serverless Postgres service degradation

This incident has been resolved.

Aug 06, 2025 15:32 UTC - Aug 07, 2025 07:09 UTC Resolved

GitHub Webhook Disruption Impacting Deployments

We observed a temporary interruption in deployments lasting approximately 45 minutes due to an upstream issue with GitHub webhooks. The issue has been identified and resolved by GitHub.

More details are available in GitHub’s incident report: https://www.githubstatus.com/incidents/6swp0zf7lk8h

Aug 05, 2025 17:49 UTC - Aug 05, 2025 17:49 UTC Resolved

Edge Network Performance Issues in Europe

This incident was resolved on Aug 04, 2025 at 19:35 UTC.

https://www.cloudflarestatus.com/incidents/ffwzxb94kxk9

Aug 04, 2025 10:40 UTC - Aug 04, 2025 22:22 UTC Resolved

Laravel Cloud Dashboard Downtime

What happened
Starting at 09:21 UTC on July 24, 2025, the Laravel Cloud dashboard became unavailable, including the ability to deploy applications. All hosted applications continued running without interruption; the impact was limited to the part of the platform you use to manage and deploy your applications. Full service was restored at 09:37 UTC, for a total downtime of 16 minutes.

What caused it
A scheduled database migration that added a new column to our deployments table was blocked by an existing long-running query against that same table. Because the migration couldn’t acquire the necessary lock, the deployment pipeline stalled. Once we identified and terminated the blocking query, the migration completed successfully.
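The failure mode above can be reproduced in miniature. The sketch below uses SQLite (standing in for the production Postgres setup) with a hypothetical `deployments` table to show a DDL statement blocked by an open write transaction, then succeeding once that transaction finishes:

```python
import os
import sqlite3
import tempfile

# A shared on-disk database, so two connections contend for the same locks.
path = os.path.join(tempfile.mkdtemp(), "demo.db")

writer = sqlite3.connect(path)
writer.execute("CREATE TABLE deployments (id INTEGER PRIMARY KEY)")
writer.commit()

# Simulate the long-running query: an open write transaction holding the lock.
writer.execute("BEGIN IMMEDIATE")
writer.execute("INSERT INTO deployments (id) VALUES (1)")

# The "migration" connection waits at most 0.5 s instead of stalling forever.
migrator = sqlite3.connect(path, timeout=0.5)
try:
    migrator.execute("ALTER TABLE deployments ADD COLUMN status TEXT")
    blocked = False
except sqlite3.OperationalError:  # "database is locked"
    blocked = True

# Once the blocking transaction ends, the same migration completes.
writer.commit()
migrator.execute("ALTER TABLE deployments ADD COLUMN status TEXT")
```

The `timeout` on the migration connection plays the role Postgres's `lock_timeout` would in production: the DDL fails fast and visibly instead of queuing indefinitely behind the blocking query.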

How we fixed it

  1. We identified the long-running query that was holding the lock and causing the deployment to fail.
  2. We deployed a hotfix for that query and updated our migration to handle any partial state changes.
  3. We retriggered the deployment pipeline, which completed successfully and restored all services.
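Handling partial state in a migration generally means making it idempotent, so a rerun after an interrupted attempt is a safe no-op. A minimal sketch of the idea, again using SQLite and hypothetical names rather than the actual migration code:

```python
import sqlite3

def add_column_if_missing(conn, table, column, ddl_type):
    # Re-runnable column add: a no-op when a prior partial run already added it.
    existing = [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]
    if column not in existing:
        conn.execute(f"ALTER TABLE {table} ADD COLUMN {column} {ddl_type}")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE deployments (id INTEGER PRIMARY KEY)")
add_column_if_missing(conn, "deployments", "status", "TEXT")
add_column_if_missing(conn, "deployments", "status", "TEXT")  # safe to repeat

columns = [row[1] for row in conn.execute("PRAGMA table_info(deployments)")]
print(columns)  # → ['id', 'status']
```

Because the column check runs before the `ALTER TABLE`, retriggering the pipeline after a partial failure cannot trip over a column that already exists.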

What we’re doing next

Jul 24, 2025 23:18 UTC - Jul 24, 2025 23:18 UTC Resolved

Laravel MySQL connection issues

This incident has been resolved.

Jun 25, 2025 08:54 UTC - Jun 25, 2025 09:53 UTC Resolved

Laravel Serverless Postgres Connectivity

The upstream issue appears to be resolved.

Jun 20, 2025 18:01 UTC - Jun 20, 2025 20:39 UTC Resolved

Laravel Cloud Deployments

This incident has been resolved.

Jun 16, 2025 09:54 UTC - Jun 16, 2025 11:37 UTC Resolved