Starting at 19:13:50 UTC, the service responsible for importing Git repositories began experiencing errors that impacted both GitHub Enterprise Importer migrations and the GitHub Importer; service was restored at 22:11:00 UTC. During that time, 837 migrations across 57 organizations were affected. Impacted migrations would have shown the error message "Git source migration failed. Error message: An error occurred. Please contact support for further assistance." in the migration logs and required a retry. The root cause was a recent configuration change that caused the workers responsible for syncing Git repositories to lose the access required for migrations. We restored the needed access for the workers, and all dependent services resumed normal operation. We have identified and implemented additional safeguards to help prevent similar disruptions in the future.
On April 23, 2025, between 07:00 UTC and 07:20 UTC, multiple GitHub services were degraded by resource contention on database hosts. Error rates ranged from 2–5% of total requests, causing intermittent service disruption for users. The issue was triggered by heavy workloads on the database that saturated its connections. The incident was mitigated when database throttling activated, allowing the system to rebalance connections; this restored traffic flow to the database and returned services to normal operation. To prevent similar issues in the future, we are reviewing database capacity, improving monitoring and alerting, and implementing safeguards to reduce time to detection and mitigation.
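For illustration only, the sketch below shows one way a bounded connection pool can shed load quickly once connections are saturated, rather than letting waiting requests pile up until workers are exhausted. The class, names, and limits are hypothetical and do not describe GitHub's actual database configuration.

```python
# Hypothetical sketch: a bounded connection pool that fails fast when
# connections are saturated, so callers can back off and retry instead of
# piling up and exhausting worker threads. Names and limits are illustrative.
import contextlib
import queue


class BoundedPool:
    def __init__(self, connections, wait_timeout=0.5):
        self._available = queue.Queue()
        for conn in connections:
            self._available.put(conn)
        self._wait_timeout = wait_timeout  # seconds to wait before giving up

    @contextlib.contextmanager
    def checkout(self):
        try:
            conn = self._available.get(timeout=self._wait_timeout)
        except queue.Empty:
            # Throttle: surface an error quickly rather than holding the
            # request open while the pool stays saturated.
            raise RuntimeError("connection pool exhausted; backing off")
        try:
            yield conn
        finally:
            self._available.put(conn)
```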
On April 16, 2025, between 3:22:36 PM UTC and 5:26:55 PM UTC, the Pull Request service was degraded. On average, 0.7% of page views were affected. This primarily affected logged-out users, though some logged-in users were affected as well. The degradation was caused by an error in how certain Pull Request timeline events were rendered, and we resolved the incident by updating the timeline event code. We are enhancing test coverage to include additional scenarios and piloting new tools to prevent similar incidents in the future.
On April 15, during regular testing, we found a bug in our Copilot Metrics Pipeline infrastructure that prevented some of the data used to aggregate Copilot usage for the Copilot Metrics API from being ingested. As a result of the bug, customer metrics in the Copilot Metrics API would have indicated lower-than-expected Copilot usage for the previous 28 days. To mitigate the incident, we fixed the bug so that all data from April 14 onwards would be calculated accurately, and we immediately began backfilling the previous 28 days with the correct data. All data was corrected as of April 17, 2025 at 5:34 PM UTC. We have added monitoring to catch similar pipeline failures earlier, and we are enhancing our data validation to ensure that all metrics we provide are accurate.
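As a rough illustration of the kind of validation this involves, the sketch below spot-checks the daily usage returned by the public Copilot Metrics API (`GET /orgs/{org}/copilot/metrics`) against a minimum expected baseline. The helper name, threshold, and exact response fields should be treated as assumptions for the example rather than a reference implementation.

```python
# Hypothetical sketch: flag days whose Copilot usage counts look lower than
# an expected baseline after a backfill. The threshold is illustrative and
# the response field names are assumed from the public API reference.
import requests


def find_suspicious_days(org, token, min_engaged_users=1):
    resp = requests.get(
        f"https://api.github.com/orgs/{org}/copilot/metrics",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
        timeout=30,
    )
    resp.raise_for_status()
    return [
        day.get("date")
        for day in resp.json()
        if day.get("total_engaged_users", 0) < min_engaged_users
    ]  # dates whose counts look lower than expected
```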
On April 15, 2025, from 12:45 UTC to 13:56 UTC, access to GitHub.com was restricted for logged-out users on WebKit-based browsers, such as Safari and various mobile browsers. During this period, roughly 6.6 million requests were unsuccessful. The issue was caused by a configuration change intended to improve our handling of large traffic spikes, but the change was improperly targeted at too large a set of requests. To prevent future incidents like this, we are improving how we operationalize these types of changes, adding tools to validate what a change will impact before it ships, and reducing the likelihood of manual mistakes through automated detection and handling of such spikes.
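The sketch below illustrates, in hypothetical form, the kind of pre-rollout validation this refers to: dry-running a traffic-management rule against a sample of recent requests and rejecting it if it would match an unexpectedly large share of traffic. The rule format, function name, and threshold are invented for the example.

```python
# Hypothetical sketch: estimate how much traffic a user-agent rule would
# match before shipping it, and refuse rules that are too broad.
import re


def validate_rule_scope(user_agent_pattern, sampled_user_agents, max_match_ratio=0.05):
    pattern = re.compile(user_agent_pattern)
    matched = sum(1 for ua in sampled_user_agents if pattern.search(ua))
    ratio = matched / max(len(sampled_user_agents), 1)
    if ratio > max_match_ratio:
        raise ValueError(
            f"rule matches {ratio:.1%} of sampled traffic, "
            f"above the {max_match_ratio:.0%} limit"
        )
    return ratio
```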
Due to a configuration change with unintended impact, some logged-out users who tried to visit GitHub.com from China were temporarily unable to access the site. Users who were already logged in could continue to access the site successfully. Impact started on April 12, 2025 at 20:01 UTC and was mitigated on April 13, 2025 at 14:55 UTC. During this time, up to 4% of all anonymous requests originating from China were unsuccessful.
The configuration changes that caused this impact have been reversed, and users should no longer see problems when trying to access GitHub.com.
On April 11, from 3:05 AM UTC to 3:44 AM UTC, approximately 75% of Codespaces users faced create and start failures. These were caused by manual configuration changes to an internal dependency. We reverted the changes and immediately restored service health. We are working on safer mechanisms for testing and rolling out such configuration changes, and we expect no further disruptions.
On April 9, 2025, between 11:27 UTC and 12:39 UTC, the Pull Requests service was degraded and experienced delays in processing updates. At peak, approximately 1–1.5% of users were affected by delays in synchronizing pull requests. During this period, users may have seen a "Processing updates" message in their pull requests after pushing new commits, and the new commits did not appear in the pull request view as expected. The pull request synchronization process has automatic retries, and most delays resolved automatically. Any pull requests that were not resynchronized during this window were manually synchronized on Friday, April 11 at 14:23 UTC. The incident was caused by a misconfigured GeoIP lookup file that routine GitHub operations depend on, which caused background job processing to fail. We mitigated the incident by reverting to a known good version of the GeoIP lookup file on the affected hosts. We are working to enhance our CI testing and automation by validating GeoIP metadata, to reduce our time to detection and mitigation of issues like this one in the future.
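As a hypothetical example of that kind of CI validation, the sketch below checks a MaxMind-format GeoIP database for staleness and basic lookup health before it reaches production hosts. The `maxminddb` package, freshness threshold, and probe address are assumptions made for the example; this is not our internal tooling.

```python
# Hypothetical CI check: reject a GeoIP lookup file that is stale or that
# fails a basic lookup, before it is deployed to hosts that depend on it.
import time

import maxminddb

MAX_AGE_SECONDS = 60 * 60 * 24 * 45  # illustrative: reject databases older than ~45 days


def validate_geoip_file(path):
    reader = maxminddb.open_database(path)
    try:
        age = time.time() - reader.metadata().build_epoch
        if age > MAX_AGE_SECONDS:
            raise ValueError(f"GeoIP database is stale ({age / 86400:.0f} days old)")
        # A lookup that returns nothing for a well-known address suggests a
        # corrupt or empty database file.
        if reader.get("8.8.8.8") is None:
            raise ValueError("GeoIP lookup returned no record for a known address")
    finally:
        reader.close()
```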
On April 9, 2025, between 7:01 UTC and 9:31 UTC, the Pull Requests service was degraded and failed to update refs for repositories with higher traffic. This was due to a repository migration creating a larger-than-usual number of enqueued jobs, which resulted in an increase in job failures and delays for non-migration-sourced jobs. We declared an incident once we confirmed that the issue was not isolated to the migrating repository and that other repositories were also failing to process ref updates. We mitigated the incident by shifting the migration jobs to a different job queue. To avoid problems like this in the future, we are revisiting our repository migration process and working to isolate potentially problematic migration workloads from non-migration workloads.
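To illustrate the isolation approach in general terms, the sketch below routes migration-sourced jobs onto a dedicated queue so a bulk migration cannot delay interactive ref updates. The job shape and queue names are hypothetical and do not reflect GitHub's internal job system.

```python
# Hypothetical sketch: keep bulk migration work off the queue that serves
# interactive ref updates, so a large migration cannot starve it.
from dataclasses import dataclass


@dataclass
class RefUpdateJob:
    repository: str
    source: str  # e.g. "push" or "migration"


def queue_for(job: RefUpdateJob) -> str:
    # Migration-sourced jobs get their own queue; everything else keeps the fast path.
    return "ref_updates_migration" if job.source == "migration" else "ref_updates"


assert queue_for(RefUpdateJob("octo/repo", "migration")) == "ref_updates_migration"
assert queue_for(RefUpdateJob("octo/repo", "push")) == "ref_updates"
```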
On April 8, 2025, between 00:42 and 18:05 UTC, as we rolled out an updated version of our GPT-4o model, vision capabilities for GPT-4o in Copilot Chat in GitHub were intermittently unavailable. During this period, customers may have been unable to upload image attachments to Copilot Chat in GitHub. In response, we paused the rollout at 18:05 UTC. Recovery began immediately, and telemetry indicates that the issue was fully resolved by 18:21 UTC. Following this incident, we have identified areas of improvement in our model rollout process, including enhanced monitoring and expanded automated and manual testing of our end-to-end capabilities.