On April 11 from 3:05 AM UTC to 3:44 AM UTC, approximately 75% of Codespaces users experienced create and start failures. These were caused by manual configuration changes to an internal dependency. We reverted the changes and immediately restored service health. We are working on safer mechanisms for testing and rolling out such configuration changes, and we expect no further disruptions.
On April 9, 2025, between 11:27 UTC and 12:39 UTC, the Pull Requests service was degraded and experienced delays in processing updates. At peak, approximately 1–1.5% of users were affected by delays in synchronizing pull requests. During this period, users may have seen a "Processing updates" message in their pull requests after pushing new commits, and the new commits did not appear in the pull request view as expected. The pull request synchronization process has automatic retries, and most delays were resolved automatically. Any pull requests that were not resynchronized during this window were manually synchronized on Friday, April 11 at 14:23 UTC. This was due to a misconfigured GeoIP lookup file that routine GitHub operations depend on, which caused background job processing to fail. We mitigated the incident by reverting to a known good version of the GeoIP lookup file on affected hosts. We are working to enhance our CI testing and automation by validating GeoIP metadata, reducing our time to detection and mitigation of issues like this one in the future.
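As a rough illustration of the kind of CI validation described above, the following sketch checks that a GeoIP lookup file opens, is reasonably fresh, and resolves a few well-known addresses before it ships to hosts. It assumes a MaxMind-format database and the `geoip2` Python package; the file path, freshness threshold, and test IPs are hypothetical, not details of GitHub's actual pipeline.

```python
# Hypothetical CI smoke test: validate a GeoIP database file before rollout.
# Assumes a MaxMind-format file readable by the geoip2 package.
import sys
import geoip2.database

DB_PATH = "build/GeoLite2-City.mmdb"   # illustrative artifact path
KNOWN_IPS = ["8.8.8.8", "1.1.1.1"]     # well-known public resolvers
MIN_BUILD_EPOCH = 1_700_000_000        # assumed freshness threshold

def main() -> int:
    try:
        with geoip2.database.Reader(DB_PATH) as reader:
            meta = reader.metadata()
            # Reject stale database builds outright.
            if meta.build_epoch < MIN_BUILD_EPOCH:
                print(f"GeoIP build too old: {meta.build_epoch}")
                return 1
            # Every known IP must resolve to some country.
            for ip in KNOWN_IPS:
                resp = reader.city(ip)
                if not resp.country.iso_code:
                    print(f"No country resolved for {ip}")
                    return 1
    except Exception as exc:
        # Any failure opening or querying the file should fail the build.
        print(f"GeoIP validation failed: {exc}")
        return 1
    print("GeoIP database validated")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```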
On April 9, 2025, between 7:01 UTC and 9:31 UTC, the Pull Requests service was degraded and failed to update refs for repositories with higher traffic. This was due to a repository migration creating a larger than usual number of enqueued jobs, which resulted in an increase in job failures and delays for non-migration jobs. We declared an incident once we confirmed that this issue was not isolated to the migrating repository and that other repositories were also failing to process ref updates. We mitigated the incident by shifting the migration jobs to a different job queue. To avoid problems like this in the future, we are revisiting our repository migration process and working to isolate potentially problematic migration workloads from non-migration workloads.
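The isolation approach described above can be sketched with any job system that supports per-queue routing. The example below uses Celery task routing to keep migration jobs on a dedicated queue so a burst of migration work cannot starve latency-sensitive ref-update jobs; the app, task, and queue names are purely illustrative and do not reflect GitHub's actual job infrastructure.

```python
# Illustrative sketch: route migration jobs to their own queue so they cannot
# delay regular, latency-sensitive background jobs.
from celery import Celery

app = Celery("jobs", broker="redis://localhost:6379/0")

# Anything under the (hypothetical) migrations namespace goes to a dedicated
# queue; everything else stays on the default queue.
app.conf.task_routes = {
    "migrations.*": {"queue": "repo_migrations"},
}
app.conf.task_default_queue = "default"

@app.task(name="migrations.import_repository")
def import_repository(repo_id: int) -> None:
    ...  # long-running migration work

@app.task(name="refs.update_tracking_ref")
def update_tracking_ref(pr_id: int) -> None:
    ...  # latency-sensitive pull request work

# Workers are then started per queue, e.g.:
#   celery -A jobs worker -Q repo_migrations
#   celery -A jobs worker -Q default
```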
On 2025-04-08, between 00:42 and 18:05 UTC, as we rolled out an updated version of our GPT-4o model, vision capabilities for GPT-4o in Copilot Chat in GitHub were intermittently unavailable. During this period, customers may have been unable to upload image attachments to Copilot Chat in GitHub. In response, we paused the rollout at 18:05 UTC. Recovery began immediately, and telemetry indicates that the issue was fully resolved by 18:21 UTC. Following this incident, we have identified areas of improvement in our model rollout process, including enhanced monitoring and expanded automated and manual testing of our end-to-end capabilities.
On April 7, 2025, between 2:15:37 AM UTC and 2:31:14 AM UTC, multiple GitHub services were degraded. Requests to these services returned 5xx errors at a high rate due to an internal database being exhausted by load from our Codespaces service. The incident resolved on its own. We have since addressed the problematic queries from the Codespaces service, minimizing the risk of future recurrences.
On 2025-04-03, between 6:13:27 PM UTC and 7:12:00 PM UTC, the docs.github.com service was degraded and returned errors. On average, the error rate was 8% of requests to the service, peaking at 20%. This was due to a misconfiguration combined with elevated request volume. We mitigated the incident by correcting the misconfiguration. We are working to reduce our time to detection and mitigation of issues like this one in the future.
Between 2025-03-27 12:00 UTC and 2025-04-03 16:00 UTC, the GitHub Enterprise Cloud Dormant Users report was degraded and falsely indicated that dormant users were active within their business. This was due to increased load on a database from a non-performant query. We mitigated the incident by increasing the capacity of the database and installing monitors for this specific report to improve observability in the future. As a long-term solution, we are rewriting the Dormant Users report to optimize how it queries for user activity, which will result in significantly faster and more accurate report generation.
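As a loose illustration of the kind of query optimization mentioned above, the sketch below computes dormancy with a single set-based query over an activity table rather than a per-user lookup. The schema, dormancy threshold, and use of SQLite are assumptions made for the example only and are not GitHub's actual implementation.

```python
# Illustrative sketch: find dormant users with one set-based query instead of
# checking each user's activity individually.
import datetime
import sqlite3

DORMANCY_DAYS = 90  # assumed dormancy threshold

def dormant_users(conn: sqlite3.Connection) -> list[str]:
    cutoff = (datetime.datetime.utcnow()
              - datetime.timedelta(days=DORMANCY_DAYS)).isoformat()
    rows = conn.execute(
        """
        SELECT u.login
        FROM users AS u
        LEFT JOIN (
            -- Most recent activity per user, computed once in bulk.
            SELECT user_id, MAX(created_at) AS last_activity
            FROM activity_events
            GROUP BY user_id
        ) AS a ON a.user_id = u.id
        WHERE a.last_activity IS NULL OR a.last_activity < ?
        """,
        (cutoff,),
    )
    return [login for (login,) in rows]
```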
On April 1, 2025, between 08:17 UTC and 09:29 UTC, the data store powering the Audit Log service experienced elevated errors, resulting in an approximately 45-minute delay of Audit Log events. Our systems maintained data continuity and we experienced no data loss. The delay only affected the Audit Log API and the Audit Log user interface; any configured Audit Log streaming endpoints received all relevant Audit Log events. The data store team deployed mitigating actions, which resulted in a full recovery of the data store's availability.
Between March 29 at 7:00 UTC and March 31 at 17:00 UTC, users were unable to unsubscribe from GitHub marketing email subscriptions due to a service outage. Additionally, on March 31, 2025, from 7:00 UTC to 16:40 UTC, users were unable to submit eBook and event registration forms on resources.github.com, also due to a service outage. The incident occurred because credentials used by an internal service had expired. We mitigated it by renewing the credentials and redeploying the affected services. To improve future response times and prevent similar issues, we are enhancing our credential expiry detection, rotation processes, and on-call observability and alerting.
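A minimal sketch of the proactive credential-expiry detection described above is shown below; the credential names, warning window, and alerting mechanism are hypothetical, since a real system would read expiry metadata from a secret store and page on-call rather than print.

```python
# Minimal sketch of credential-expiry detection, assuming expiry metadata is
# available for each credential. Names and thresholds are illustrative only.
import datetime
from dataclasses import dataclass

WARN_BEFORE = datetime.timedelta(days=14)  # assumed alerting window

@dataclass
class Credential:
    name: str
    expires_at: datetime.datetime

def expiring_soon(creds: list[Credential],
                  now: datetime.datetime | None = None) -> list[Credential]:
    """Return credentials that are expired or expire within WARN_BEFORE."""
    now = now or datetime.datetime.now(datetime.timezone.utc)
    return [c for c in creds if c.expires_at - now <= WARN_BEFORE]

if __name__ == "__main__":
    creds = [
        Credential("marketing-email-api-key",  # hypothetical credential
                   datetime.datetime(2025, 4, 2, tzinfo=datetime.timezone.utc)),
    ]
    for c in expiring_soon(creds):
        # In practice this would emit an alert instead of printing.
        print(f"ALERT: {c.name} expires at {c.expires_at.isoformat()}")
```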
Beginning at 21:24 UTC on March 28 and lasting until 21:50 UTC, some github.com customers experienced issues with pull request tracking refs not being updated due to processing delays and increased failure rates. We did not post a public status update before we completed the rollback; the incident is now resolved. We are sorry for the delayed post on githubstatus.com.