Incident History

Incident with Codespaces

On February 2, 2026, GitHub Codespaces was unavailable between 18:55 and 22:20 UTC and remained degraded until the service fully recovered on February 3, 2026 at 00:15 UTC. During this time, Codespaces creation and resume operations failed in all regions. The outage was caused by a backend storage access policy change at our underlying compute provider that blocked access to critical VM metadata, causing all VM create, delete, reimage, and other operations to fail. More information is available at https://azure.status.microsoft/en-us/status/history/?trackingId=FNJ8-VQZ. The incident was mitigated by rolling back the policy change, a rollback that began at 22:15 UTC. As VMs came back online, our systems worked through the backlog of requests that had not yet timed out. We are working with our compute provider to improve incident response and engagement time, to detect issues like this before they impact our customers, and to ensure safe rollout should similar changes occur in the future. We recognize this was a significant outage for users who rely on GitHub workloads, and we apologize for the impact.

Feb 2, 2026 20:17 UTC - Feb 3, 2026 00:54 UTC (Resolved)

Incident with Actions

On February 2, 2026, between 18:35 UTC and 22:15 UTC, GitHub Actions hosted runners were unavailable, and service remained degraded until full recovery at 23:10 UTC for standard runners and on February 3, 2026 at 00:30 UTC for larger runners. During this time, Actions jobs queued and timed out while waiting to acquire a hosted runner. Other GitHub features that rely on this compute infrastructure were similarly impacted, including Copilot Coding Agent, Copilot Code Review, CodeQL, Dependabot, GitHub Enterprise Importer, and Pages. All regions and runner types were impacted; self-hosted runners on other providers were not. The outage was caused by a backend storage access policy change at our underlying compute provider that blocked access to critical VM metadata, causing all VM create, delete, reimage, and other operations to fail. More information is available at https://azure.status.microsoft/en-us/status/history/?trackingId=FNJ8-VQZ. The incident was mitigated by rolling back the policy change, a rollback that began at 22:15 UTC. As VMs came back online, our runners worked through the backlog of requests that had not yet timed out. We are working with our compute provider to improve incident response and engagement time, to detect issues like this before they impact our customers, and to ensure safe rollout should similar changes occur in the future. We recognize this was a significant outage for users who rely on GitHub workloads, and we apologize for the impact.

Feb 2, 2026 19:03 UTC - Feb 3, 2026 00:56 UTC (Resolved)

Disruption with some GitHub services

From Jan 31, 2026 00:30 UTC to Feb 2, 2026 18:00 UTC, the Dependabot service was degraded and failed to create 10% of automated pull requests. This was due to a cluster failover that left Dependabot connected to a read-only database. We mitigated the incident by pausing Dependabot queues until traffic was properly routed to healthy clusters. We are identifying and rerunning all jobs that failed during this window, and we are adding new monitors and alerts to reduce our time to detection and prevent a recurrence.
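
For illustration only (GitHub has not published the mechanism involved, and the connection details and pause hook below are hypothetical), a worker can guard against this failure mode by checking whether the database node it is connected to reports itself as read-only, for example after a failover, before starting write work:

```python
# Illustrative sketch, not Dependabot's implementation: refuse to start write
# work when the connected MySQL-style node reports itself as read-only
# (e.g. mid-failover). Connection parameters and pause_queue() are hypothetical.
import mysql.connector


def node_is_read_only(conn) -> bool:
    cur = conn.cursor()
    cur.execute("SELECT @@global.read_only, @@global.super_read_only")
    read_only, super_read_only = cur.fetchone()
    cur.close()
    return bool(read_only) or bool(super_read_only)


def pause_queue(reason: str) -> None:
    # Placeholder for "pause the queues": stop pulling jobs and alert on-call.
    print(f"pausing job queue: {reason}")


if __name__ == "__main__":
    conn = mysql.connector.connect(
        host="db-writer.example.internal",  # hypothetical writer endpoint
        user="worker", password="***", database="jobs",
    )
    if node_is_read_only(conn):
        pause_queue("connected database node is read-only; likely mid-failover")
    # otherwise it is safe to run write workloads (e.g. creating pull requests)
```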

Feb 2, 2026 17:41 UTC - Feb 2, 2026 18:46 UTC (Resolved)

Disruption with some GitHub services

From Feb 2, 2026 17:13 UTC to Feb 2, 2026 17:36 UTC, we experienced failures on approximately 0.02% of Git operations. While deploying an internal service, a misconfiguration caused a small subset of traffic to be routed to a service instance that was not yet ready to serve requests. During the incident we observed the degradation and posted a public status update. To mitigate the issue, we redirected traffic to healthy instances and resumed normal operation. We are improving our monitoring and deployment processes in this area to avoid future routing issues.
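
As a generic sketch of the pattern implied here (not GitHub's deployment tooling; the probe path and host names are hypothetical), traffic can be gated on an explicit readiness check so that instances that are still starting up never enter the routing pool:

```python
# Illustrative readiness gating: only include instances that answer a readiness
# probe in the routing pool. The /readyz path and host names are hypothetical.
from urllib.error import URLError
from urllib.request import urlopen


def is_ready(host: str, timeout: float = 2.0) -> bool:
    try:
        with urlopen(f"http://{host}/readyz", timeout=timeout) as resp:
            return resp.status == 200
    except (URLError, OSError):
        return False


def routable(candidates: list[str]) -> list[str]:
    # An empty result should block the rollout rather than fall back to
    # "route anywhere", which is how unready instances end up taking traffic.
    return [h for h in candidates if is_ready(h)]


if __name__ == "__main__":
    print(routable(["git-proxy-1.internal:8080", "git-proxy-2.internal:8080"]))
```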

Feb 2, 2026 17:34 UTC - Feb 2, 2026 17:43 UTC (Resolved)

Degraded Experience - Failing to finalize some CCA Jobs

Between 2026-01-30 19:06 UTC and 2026-01-30 20:04 UTC, Copilot Coding Agent sessions became stuck, with a mismatch between the session status reported in the UI and the underlying Actions job execution state. Impacted users could see an Actions run finish successfully while the session UI continued to show it as in progress, or sessions that remained queued. The issue was caused by a feature flag that resulted in events being published to a new Kafka topic. Failures publishing to that topic led to buffer/queue overflows in the shared event-publishing client, preventing other critical events from being emitted. We mitigated the incident by disabling the feature flag and redeploying production pods, which resumed normal event delivery. We are working to improve safeguards and detection around event-publishing failures to reduce time to mitigation for similar issues in the future.
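
The failure mode described, one misbehaving topic starving a shared publishing client, can be sketched with a standard Kafka producer; the configuration, broker address, and client library below are illustrative assumptions, not GitHub's event pipeline:

```python
# Illustrative, using confluent-kafka: a single shared Producer buffers messages
# locally. If deliveries to one topic keep failing (e.g. a misconfigured new
# topic), undelivered messages sit in the shared queue until message.timeout.ms,
# and produce() for other, critical topics starts raising BufferError.
from confluent_kafka import Producer

producer = Producer({
    "bootstrap.servers": "kafka.example.internal:9092",  # hypothetical brokers
    "queue.buffering.max.messages": 10000,               # bounded shared buffer
    "message.timeout.ms": 30000,
})


def on_delivery(err, msg):
    if err is not None:
        # Failed deliveries only surface here; until they time out they occupy
        # space in the shared local buffer.
        print(f"delivery failed for topic {msg.topic()}: {err}")


def publish(topic: str, payload: bytes) -> bool:
    try:
        producer.produce(topic, payload, callback=on_delivery)
    except BufferError:
        # The shared queue is full, so events for healthy topics are now being
        # rejected too. Alerting on this condition (or isolating producers per
        # topic) limits the blast radius of one bad topic.
        return False
    producer.poll(0)  # serve delivery callbacks
    return True
```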

Jan 30, 2026 20:59 UTC - Jan 30, 2026 21:22 UTC (Resolved)

Actions Workflows Run Start Delays

On Jan 28, 2026, between 14:56 UTC and 15:44 UTC, GitHub Actions experienced degraded performance. During this time, workflow runs started an average of 49 seconds late, and 4.7% of workflow runs failed to start within 5 minutes. The root cause was an atypical load pattern that overwhelmed system capacity and caused resource contention. Recovery began once additional resources came online at 15:25 UTC, with full recovery at 15:44 UTC. We are implementing safeguards to prevent this failure mode and enhancing our monitoring to detect and address similar patterns more quickly in the future.

Jan 28, 2026 15:12 UTC - Jan 28, 2026 15:54 UTC (Resolved)

Regression in Windows runners for public repositories

On Jan 26, 2026, from approximately 14:03 UTC to 23:42 UTC, GitHub Actions experienced job failures on some Windows standard hosted runners. This was caused by a configuration difference in a new Windows runner type that left the expected D: drive missing. About 2.5% of all Windows standard runner jobs were impacted. Re-running failed workflows had a high chance of succeeding, given the limited rollout of the change. The job failures were mitigated by rolling back the affected configuration and removing the provisioned runners that had it. To reduce the chance of recurrence, we are expanding runner telemetry and improving validation of runner configuration changes. We are also evaluating options to shorten mitigation time for any similar future events.
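
As one hedged example of the kind of runner-configuration validation mentioned above (not GitHub's provisioning code; the drive letter comes from this report and the free-space threshold is invented), a pre-flight check can fail fast when an expected scratch drive is absent:

```python
# Illustrative pre-flight check for a Windows runner image: verify the expected
# scratch drive exists and has free space before any job work starts.
# The free-space threshold is an arbitrary example value.
import os
import shutil
import sys

EXPECTED_DRIVE = "D:\\"
MIN_FREE_BYTES = 10 * 1024 ** 3  # 10 GiB, illustrative


def validate_scratch_drive(path: str = EXPECTED_DRIVE) -> None:
    if not os.path.exists(path):
        sys.exit(f"expected drive {path} is missing; failing fast before the job runs")
    free = shutil.disk_usage(path).free
    if free < MIN_FREE_BYTES:
        sys.exit(f"{path} has only {free} bytes free, below the {MIN_FREE_BYTES} minimum")


if __name__ == "__main__":
    validate_scratch_drive()
    print("runner scratch drive looks healthy")
```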

Jan 26, 2026 19:25 UTC - Jan 26, 2026 23:51 UTC (Resolved)

Disruption with repo creation

Between January 24, 2026 19:56 UTC and January 25, 2026 02:50 UTC, repository creation and cloning were degraded. On average, the error rate for repository creation was 25%, peaking at 55% of requests. This was due to increased latency on the repositories database, which exposed a read-after-write problem during repository creation. We mitigated the incident by stopping an operation that was generating load on the database in order to restore throughput. We have identified the repository creation problem and are working to address it, and we are improving our observability to reduce our time to detection and mitigation of issues like this in the future.
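
A read-after-write problem of this shape typically means a record written to the primary is immediately read back from a source that has not yet caught up. The sketch below shows one common mitigation pattern, assuming psycopg2-style connections (`primary`, `replica`) and a simplified schema; it is not GitHub's implementation:

```python
# Illustrative read-after-write handling: after inserting on the primary, retry
# the replica read briefly and fall back to the primary instead of reporting the
# just-created record as missing. Connections and schema are hypothetical.
import time


def create_repository(primary, replica, name: str) -> dict:
    with primary.cursor() as cur:
        cur.execute(
            "INSERT INTO repositories (name) VALUES (%s) RETURNING id", (name,)
        )
        repo_id = cur.fetchone()[0]
    primary.commit()

    # Under replication lag the replica may not have the row yet.
    for attempt in range(3):
        with replica.cursor() as cur:
            cur.execute("SELECT id, name FROM repositories WHERE id = %s", (repo_id,))
            row = cur.fetchone()
        if row:
            return {"id": row[0], "name": row[1]}
        time.sleep(0.05 * (attempt + 1))

    # Last resort: read our own write from the primary.
    with primary.cursor() as cur:
        cur.execute("SELECT id, name FROM repositories WHERE id = %s", (repo_id,))
        row = cur.fetchone()
    return {"id": row[0], "name": row[1]}
```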

Jan 25, 2026 02:43 UTC - Jan 25, 2026 03:08 UTC (Resolved)

Disruption with some GitHub services

On January 22, 2026, our authentication service experienced an issue between 14:00 UTC and 14:50 UTC, resulting in downstream disruptions for users. From 14:00 UTC to 14:23 UTC, authenticated API requests experienced higher-than-normal error rates, averaging 16.9% and occasionally peaking at 22.2%, with affected requests receiving HTTP 401 responses. From 14:00 UTC to 14:50 UTC, Git operations over HTTP were impacted, with error rates averaging 3.8% and peaking at 10.8%; as a result, some users may have been unable to run Git commands as expected. The root cause was the authentication service reaching its maximum allowed number of database connections. We mitigated the incident by increasing the maximum number of database connections in the authentication service. We are adding monitoring around database connection pool usage and improving our traffic projections to reduce our time to detection and mitigation of issues like this in the future.
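
The follow-up mentions monitoring of database connection pool usage. A minimal sketch of such a check, assuming a SQLAlchemy-style QueuePool (the DSN, pool limits, and alert hook are hypothetical and do not describe GitHub's stack):

```python
# Illustrative connection-pool utilization check against a SQLAlchemy QueuePool.
# The DSN, pool limits, and alert threshold are hypothetical example values.
from sqlalchemy import create_engine

POOL_SIZE, MAX_OVERFLOW = 50, 20  # hard ceiling is POOL_SIZE + MAX_OVERFLOW

engine = create_engine(
    "mysql+pymysql://auth:***@db.example.internal/auth",
    pool_size=POOL_SIZE,
    max_overflow=MAX_OVERFLOW,
    pool_timeout=5,  # seconds callers wait for a connection before erroring
)


def pool_utilization() -> float:
    # checkedout() counts connections currently handed out to callers.
    return engine.pool.checkedout() / (POOL_SIZE + MAX_OVERFLOW)


def check_and_alert(threshold: float = 0.8) -> None:
    utilization = pool_utilization()
    if utilization >= threshold:
        # Placeholder alert hook: page before the ceiling is hit and requests
        # start failing, rather than after.
        print(f"WARN: auth DB pool at {utilization:.0%} of its connection ceiling")
```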

Jan 22, 2026 14:12 UTC - Jan 22, 2026 15:22 UTC (Resolved)

Policy pages for Copilot are timing out

On January 21, 2026, between 17:50 and 20:53 UTC, around 350 enterprises and organizations experienced slower load times or timeouts when viewing Copilot policy pages. The issue was traced to performance degradation under load caused by a problem in an upstream database caching capability within our billing infrastructure, which increased the latency of queries retrieving billing and policy information from approximately 300 ms to as much as 1.5 s. To restore service, we disabled the affected caching feature, which immediately returned performance to normal. We then fixed the issue in the caching capability, re-enabled our use of the database cache, and observed continued recovery. Moving forward, we are tightening our procedures for deploying performance optimizations, adding test coverage, and improving cross-service visibility and alerting so we can detect upstream degradations earlier and reduce impact to customers.
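
The mitigation here was turning off a cache whose own slowness exceeded the uncached query path. As a generic hedged sketch (the flag, clients, and thresholds are invented for illustration and are not GitHub's billing code), a flag-guarded cache read with a simple slow-read circuit breaker captures the same idea:

```python
# Illustrative: a feature-flag-guarded cache read with a crude circuit breaker.
# If cache lookups themselves become slow, stop consulting the cache and serve
# from the database directly. The flag, clients, and numbers are hypothetical.
import time

CACHE_ENABLED = True    # stand-in for a dynamically controlled feature flag
SLOW_THRESHOLD_S = 0.5  # a cached read slower than this defeats its purpose
MAX_SLOW_READS = 5      # trip the breaker after this many consecutive slow reads
_consecutive_slow_reads = 0


def fetch_policy(org_id: int, cache, db) -> dict:
    global _consecutive_slow_reads
    if CACHE_ENABLED and _consecutive_slow_reads < MAX_SLOW_READS:
        start = time.monotonic()
        try:
            cached = cache.get(f"copilot-policy:{org_id}")
        except Exception:
            cached = None  # treat cache errors as a miss, never as a page error
        elapsed = time.monotonic() - start
        _consecutive_slow_reads = (
            _consecutive_slow_reads + 1 if elapsed > SLOW_THRESHOLD_S else 0
        )
        if cached is not None:
            return cached
    # Uncached path: the direct billing/policy query (~300 ms in this report).
    return db.query_policy(org_id)
```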

Jan 21, 2026 19:31 UTC - Jan 21, 2026 20:53 UTC (Resolved)