Disruption with some GitHub services
This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
On November 4, 2025, between 18:04 UTC and 23:36 UTC, GitHub Enterprise Importer experienced degraded migration performance and elevated error rates. During this interval, customers queueing and running migrations saw prolonged queue times and slower processing.
The degradation was linked to higher-than-normal system load; once load was reduced, error rates returned to normal. The investigation is ongoing to pinpoint the precise root cause and prevent recurrence.
Longer-term work is planned to strengthen system resilience under high load and improve visibility into migration status for customers.
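As an illustration of that kind of resilience work, the sketch below bounds a migration backlog by deferring new work once queue depth crosses a threshold. The queue, threshold, and function names are hypothetical stand-ins, not the Importer's actual design.

```python
import queue

MAX_QUEUE_DEPTH = 500  # hypothetical threshold, not a real Importer limit

migration_queue: queue.Queue[str] = queue.Queue()

def enqueue_migration(migration_id: str) -> bool:
    """Accept a new migration only while the backlog is below the threshold.

    Deferring work at the edge keeps queue times bounded when the system
    is already under high load.
    """
    if migration_queue.qsize() >= MAX_QUEUE_DEPTH:
        return False  # ask the caller to retry later instead of growing the backlog
    migration_queue.put(migration_id)
    return True

print(enqueue_migration("migration-1"))  # True while the queue has headroom
```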
On November 11, 2025, between 16:28 UTC and 20:54 UTC, GitHub Actions larger hosted runners experienced degraded performance, with 0.4% of overall workflow runs and 8.8% of larger hosted runner jobs failing to start within 5 minutes. The majority of the impact was mitigated by 18:44 UTC, with a small tail of organizations taking longer to recover.
The impact was caused by the same database infrastructure issue behind the similar larger hosted runner degradation on October 23, 2025. In this case it was triggered by a brief infrastructure event rather than a database change.
Through this incident we identified and implemented a better solution for both prevention and faster mitigation. In addition, a durable fix for the underlying database issue is rolling out soon.
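For context on the "failing to start within 5 minutes" measurement, a start-latency check of the following shape could compute that figure. The data layout and names below are illustrative assumptions, not GitHub's internal monitoring code.

```python
from datetime import datetime, timedelta

START_THRESHOLD = timedelta(minutes=5)  # the 5-minute start-latency threshold

def slow_start_rate(jobs: list[dict]) -> float:
    """Fraction of jobs that did not start within the threshold."""
    if not jobs:
        return 0.0
    slow = sum(
        1
        for job in jobs
        if job["started_at"] - job["queued_at"] > START_THRESHOLD
    )
    return slow / len(jobs)

jobs = [
    {"queued_at": datetime(2025, 11, 11, 16, 30), "started_at": datetime(2025, 11, 11, 16, 32)},
    {"queued_at": datetime(2025, 11, 11, 16, 30), "started_at": datetime(2025, 11, 11, 16, 40)},
]
print(f"{slow_start_rate(jobs):.1%} of jobs exceeded the start threshold")
```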
Between November 5, 2025 23:27 UTC and November 6, 2025 00:06 UTC, ghost text requests experienced errors from upstream model providers. This was a continuation of the service disruption for which we statused Copilot earlier that day, although more limited in scope.
During the disruption, users were again automatically re-routed to healthy model hosts, minimizing impact. We are updating our monitors and failover mechanism to mitigate similar issues in the future.
On November 5, 2025, between 21:46 and 23:36 UTC, ghost text requests experienced errors from upstream model providers that resulted in 0.9% of users seeing elevated error rates.
During the disruption, users were automatically re-routed to healthy model hosts but may have experienced increased latency in response times as a result of re-routing.
We are updating our monitors and tuning our failover mechanism to more quickly mitigate issues like this in the future.
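The automatic re-routing described in these two updates can be sketched as a simple failover loop across model hosts. The host names and the complete_ghost_text call below are hypothetical stand-ins, not Copilot's actual implementation.

```python
# Hypothetical model hosts; names are illustrative only.
MODEL_HOSTS = ["model-host-a", "model-host-b", "model-host-c"]

class UpstreamError(Exception):
    """Raised when a model host returns an error."""

def complete_ghost_text(host: str, prompt: str) -> str:
    """Stand-in for a completion call against a single model host."""
    if host == "model-host-a":  # simulate the degraded upstream provider
        raise UpstreamError(f"{host} returned 5xx")
    return f"completion from {host}"

def complete_with_failover(prompt: str) -> str:
    """Try each host in turn, re-routing away from unhealthy ones."""
    last_error = None
    for host in MODEL_HOSTS:
        try:
            return complete_ghost_text(host, prompt)
        except UpstreamError as err:
            last_error = err  # record the failure and fall through to the next host
    raise RuntimeError("all model hosts unhealthy") from last_error

if __name__ == "__main__":
    print(complete_with_failover("def add(a, b):"))
```

Re-routing trades a small amount of added latency for availability, which matches the increased response times noted above.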
On November 3, 2025, between 14:10 UTC and 19:20 UTC, GitHub Packages experienced degraded performance, resulting in failures for 0.5% of NuGet package download requests. The incident resulted from an unexpected change in usage patterns that affected rate limiting infrastructure in the Packages service.
We mitigated the issue by scaling up services and refining our rate limiting implementation to ensure more consistent and reliable service for all users. To prevent similar problems, we are improving our resilience to shifts in usage patterns, strengthening capacity planning, and adding better monitoring to accelerate detection and mitigation in the future.
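The rate limiting mentioned above can be illustrated with a minimal sliding-window limiter. The window size, limit, and client handling are assumptions for illustration rather than the Packages service's real implementation.

```python
import time
from collections import deque

class SlidingWindowLimiter:
    """Allow at most `limit` requests per `window_seconds` per client."""

    def __init__(self, limit: int, window_seconds: float):
        self.limit = limit
        self.window = window_seconds
        self.requests: dict[str, deque[float]] = {}

    def allow(self, client_id: str, now: float | None = None) -> bool:
        now = time.monotonic() if now is None else now
        timestamps = self.requests.setdefault(client_id, deque())
        # Drop timestamps that have aged out of the window.
        while timestamps and now - timestamps[0] > self.window:
            timestamps.popleft()
        if len(timestamps) >= self.limit:
            return False  # over the limit: reject the download request
        timestamps.append(now)
        return True

limiter = SlidingWindowLimiter(limit=100, window_seconds=60.0)
print(limiter.allow("client-123"))  # True until the client exceeds 100 requests/minute
```

A limiter keyed per client like this is sensitive to sudden shifts in traffic mix, which is why resilience to usage-pattern changes and better capacity planning are part of the follow-up work.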