Degraded performance for Copilot Coding Agent
This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
On February 3, 2026, between 14:00 UTC and 17:40 UTC, customers experienced delays in Webhook delivery for push events and delayed GitHub Actions workflow runs. During this window, Webhook deliveries for push events were delayed by up to 40 minutes, with an average delay of 10 minutes. GitHub Actions workflows triggered by push events experienced similar job start delays. Additionally, between 15:25 UTC and 16:05 UTC, all GitHub Actions workflow runs experienced status update delays of up to 11 minutes, with a median delay of 6 minutes.

The issue stemmed from connection churn in our eventing service, which saturated CPU and slowed reads and writes, delaying downstream delivery for Actions and Webhooks. We have added observability tooling and metrics to accelerate detection, and we are correcting our stream processing client configuration to prevent recurrence.
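As an illustration of the kind of stream processing client configuration change involved, here is a minimal sketch assuming a Kafka-style consumer (confluent-kafka in Python); the summary does not name the actual technology, and the broker, group, topic, and handler names are hypothetical. The idea is to bound reconnect behavior so a flapping client cannot churn connections fast enough to saturate broker CPU:

```python
# Minimal sketch only: assumes a Kafka-style consumer (confluent-kafka).
# Broker, group, topic, and handler names are hypothetical, not GitHub's.
from confluent_kafka import Consumer

def handle_push_event(payload: bytes) -> None:
    """Hypothetical downstream handler (webhook dispatch, Actions trigger)."""
    print(f"dispatching event of {len(payload)} bytes")

consumer = Consumer({
    "bootstrap.servers": "events.internal:9092",  # hypothetical broker list
    "group.id": "webhook-dispatch",               # hypothetical consumer group
    "socket.keepalive.enable": True,              # keep healthy connections alive
    "reconnect.backoff.ms": 500,                  # start reconnect backoff at 500 ms...
    "reconnect.backoff.max.ms": 30000,            # ...and cap it at 30 s to limit churn
    "session.timeout.ms": 45000,                  # tolerate brief stalls without rebalancing
    "max.poll.interval.ms": 300000,               # allow slow batches without a rebalance
})
consumer.subscribe(["push-events"])               # hypothetical topic

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue
        if msg.error():
            # Let the client's bounded backoff handle broker reconnects instead
            # of tearing connections down in a tight retry loop.
            print(f"consumer error: {msg.error()}")
            continue
        handle_push_event(msg.value())
finally:
    consumer.close()
```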
On February 3, 2026, between 09:35 UTC and 10:15 UTC, GitHub Copilot experienced elevated error rates, with an average of 4% of requests failing.

This was caused by a capacity imbalance that led to resource exhaustion on backend services. The incident was resolved by infrastructure rebalancing, and we subsequently deployed additional capacity.

We are improving observability to detect capacity imbalances earlier and enhancing our infrastructure to better handle traffic spikes.
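As a rough illustration of what detecting a capacity imbalance earlier could look like, the sketch below compares each backend pool's share of traffic to its share of capacity and flags pools at risk of exhaustion; the pool names, thresholds, and metrics are hypothetical and do not reflect Copilot's actual monitoring:

```python
# Minimal sketch of a capacity-imbalance check; pools, thresholds, and
# metrics are hypothetical examples, not Copilot's actual monitoring.
from dataclasses import dataclass

@dataclass
class Backend:
    name: str
    capacity_rps: float   # provisioned requests per second
    observed_rps: float   # current requests per second from metrics

def imbalance_alerts(backends: list[Backend], max_utilization: float = 0.8) -> list[str]:
    """Flag any backend whose load share exceeds its capacity share or whose
    utilization is high enough that a traffic spike would exhaust it."""
    total_cap = sum(b.capacity_rps for b in backends)
    total_load = sum(b.observed_rps for b in backends)
    alerts = []
    for b in backends:
        cap_share = b.capacity_rps / total_cap
        load_share = b.observed_rps / total_load if total_load else 0.0
        utilization = b.observed_rps / b.capacity_rps
        if utilization > max_utilization or load_share > cap_share * 1.5:
            alerts.append(
                f"{b.name}: util={utilization:.0%}, "
                f"load share={load_share:.0%} vs capacity share={cap_share:.0%}"
            )
    return alerts

# Example: one pool receiving far more than its share of traffic.
pools = [
    Backend("pool-a", capacity_rps=1000, observed_rps=950),
    Backend("pool-b", capacity_rps=1000, observed_rps=300),
]
print(imbalance_alerts(pools))  # pool-a is near exhaustion
```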
On February 2, 2026, GitHub Codespaces was unavailable between 18:55 UTC and 22:20 UTC and degraded until the service fully recovered at 00:15 UTC on February 3, 2026. During this time, Codespaces creation and resume operations failed in all regions.

This outage was caused by a backend storage access policy change in our underlying compute provider that blocked access to critical VM metadata, causing all VM create, delete, reimage, and other operations to fail. More information is available at https://azure.status.microsoft/en-us/status/history/?trackingId=FNJ8-VQZ. The issue was mitigated by rolling back the policy change, which started at 22:15 UTC. As VMs came back online, our systems worked through the backlog of requests that had not timed out.

We are working with our compute provider to improve our incident response and engagement time, to detect issues like this before they impact our customers, and to ensure safe rollout should similar changes occur in the future. We recognize this was a significant outage for users who rely on GitHub for their workloads, and we apologize for the impact it had.
On February 2, 2026, between 18:35 UTC and 22:15 UTC, GitHub Actions hosted runners were unavailable, with service degraded until full recovery at 23:10 UTC for standard runners and at 00:30 UTC on February 3, 2026 for larger runners. During this time, Actions jobs queued and timed out while waiting to acquire a hosted runner. Other GitHub features that leverage this compute infrastructure were similarly impacted, including Copilot Coding Agent, Copilot Code Review, CodeQL, Dependabot, GitHub Enterprise Importer, and Pages. All regions and runner types were impacted; self-hosted runners on other providers were not.

This outage was caused by a backend storage access policy change in our underlying compute provider that blocked access to critical VM metadata, causing all VM create, delete, reimage, and other operations to fail. More information is available at https://azure.status.microsoft/en-us/status/history/?trackingId=FNJ8-VQZ. The issue was mitigated by rolling back the policy change, which started at 22:15 UTC. As VMs came back online, our runners worked through the backlog of requests that had not timed out.

We are working with our compute provider to improve our incident response and engagement time, to detect issues like this before they impact our customers, and to ensure safe rollout should similar changes occur in the future. We recognize this was a significant outage for users who rely on GitHub for their workloads, and we apologize for the impact it had.
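One common way to get the earlier detection described above is a synthetic canary against the VM control plane; the sketch below is hypothetical, with the probe, thresholds, and alerting hook standing in for provider-specific APIs, and is not GitHub's or the compute provider's tooling:

```python
# Minimal sketch of an early-detection canary for a VM control plane.
# The probe, thresholds, and alerting hook are hypothetical placeholders.
import time

CONSECUTIVE_FAILURES_TO_ALERT = 3

def probe_vm_control_plane() -> bool:
    """Hypothetical probe: attempt a cheap control-plane read (e.g. fetch VM
    metadata for a dedicated canary instance) and report success."""
    # A real implementation would call the compute provider's API here.
    return True

def run_canary(interval_s: float = 60.0) -> None:
    failures = 0
    while True:
        ok = probe_vm_control_plane()
        failures = 0 if ok else failures + 1
        if failures >= CONSECUTIVE_FAILURES_TO_ALERT:
            # Page before customer-visible job queueing builds up; in the
            # incident above, VM create/delete/reimage calls were failing.
            print("ALERT: VM control plane failing; escalate to compute provider")
        time.sleep(interval_s)
```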
From January 31, 2026 at 00:30 UTC to February 2, 2026 at 18:00 UTC, the Dependabot service was degraded and failed to create 10% of automated pull requests. This was due to a cluster failover that left Dependabot connected to a read-only database.

We mitigated the incident by pausing Dependabot queues until traffic was properly routed to healthy clusters. We are identifying and rerunning all jobs that failed during this time.

We are adding new monitors and alerts to reduce our time to detection and prevent this from recurring.
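As an illustration of the kind of guard and monitor this points to, the sketch below checks whether the current database connection is writable before processing update jobs and pauses the queue otherwise; the read-only query assumes a MySQL-style server, and the queue and job helpers are hypothetical rather than Dependabot's actual implementation:

```python
# Minimal sketch of a read-only guard. The query assumes a MySQL-style
# server; the queue name and helpers are hypothetical, not Dependabot's.

def connection_is_writable(conn) -> bool:
    """Given a DB-API connection, return False when the server is in
    read-only mode, e.g. after a failover routed us to a replica."""
    cur = conn.cursor()
    try:
        cur.execute("SELECT @@global.read_only")   # MySQL syntax; assumption
        (read_only,) = cur.fetchone()
        return not bool(read_only)
    finally:
        cur.close()

def pause_queue(name: str) -> None:
    """Hypothetical queue control: stop pulling jobs until resumed."""
    print(f"pausing queue {name} until a writable primary is available")

def process_update_jobs(conn, jobs) -> None:
    # Check writability before creating pull requests so a failover to a
    # read-only replica pauses work instead of silently failing a slice of it.
    if not connection_is_writable(conn):
        pause_queue("dependabot-updates")
        return
    for job in jobs:
        job.create_pull_request()   # hypothetical job interface
```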
From February 2, 2026 17:13 UTC to 17:36 UTC, we experienced failures on approximately 0.02% of Git operations. While deploying an internal service, a misconfiguration caused a small subset of traffic to route to a service instance that was not ready to serve requests. During the incident we observed the degradation and posted a public status update.

To mitigate the issue, we redirected traffic to healthy instances and resumed normal operation. We are improving our monitoring and deployment processes in this area to avoid future routing issues.
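A deploy-time readiness gate is one common way to avoid routing traffic to an instance that is not ready; the sketch below is a generic illustration, with a hypothetical /healthz endpoint and routing hook rather than GitHub's actual deployment tooling:

```python
# Minimal sketch of a deploy-time readiness gate. The /healthz path,
# timeouts, and routing hook are hypothetical, not GitHub's tooling.
import time
import urllib.error
import urllib.request

def wait_until_ready(base_url: str, timeout_s: float = 120.0, interval_s: float = 2.0) -> bool:
    """Poll the instance's readiness endpoint; report True only once it
    answers 200, so traffic is never routed to a not-ready instance."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(f"{base_url}/healthz", timeout=5) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # not listening yet; keep waiting
        time.sleep(interval_s)
    return False

def add_to_pool(instance_url: str) -> None:
    """Hypothetical routing hook: called only after the readiness gate passes."""
    print(f"routing traffic to {instance_url}")

def deploy_instance(instance_url: str) -> None:
    if wait_until_ready(instance_url):
        add_to_pool(instance_url)
    else:
        # Fail the deploy instead of sending even a small subset of Git
        # operations to an instance that cannot serve them.
        raise RuntimeError(f"{instance_url} never became ready; aborting rollout")
```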