GitHub Status
All Systems Operational
Git Operations Operational
Webhooks Operational
API Requests Operational
Issues Operational
Pull Requests Operational
Actions Operational
Packages Operational
Pages Operational
Codespaces Operational
Copilot Operational
Status key: Operational · Degraded Performance · Partial Outage · Major Outage · Maintenance
Past Incidents
Oct 3, 2024

No incidents reported today.

Oct 2, 2024

No incidents reported.

Oct 1, 2024

No incidents reported.

Sep 30, 2024
Resolved - This incident has been resolved.
Sep 30, 11:26 UTC
Update - Codespaces is operating normally.
Sep 30, 11:26 UTC
Update - We are seeing signs of recovery in Codespaces creation and start operations. We are continuing to monitor for full recovery.
Sep 30, 11:25 UTC
Update - Codespaces is experiencing degraded performance. We are continuing to investigate.
Sep 30, 11:24 UTC
Update - We are investigating a high number of errors in Codespaces creation and start operations.
Sep 30, 11:09 UTC
Investigating - We are investigating reports of degraded availability for Codespaces.
Sep 30, 11:08 UTC
Sep 29, 2024

No incidents reported.

Sep 28, 2024

No incidents reported.

Sep 27, 2024
Resolved - On September 27th, 2024 from 10:26 to 10:34 UTC, some users listing project releases may have encountered a 500 error.
Sep 27, 15:30 UTC
Sep 26, 2024
Resolved - Between September 25, 2024, 22:20 UTC and September 26, 2024, 05:00 UTC, the Copilot service was degraded. During this time, Copilot chat requests failed at an average rate of 15%.

This was due to a faulty deployment in a service provider that caused server errors from multiple regions. Traffic was routed away from those regions at 22:28 UTC and 23:39 UTC, which partially restored functionality, while the upstream service provider rolled back their change. The rollback was completed at 04:41 UTC.

We are continuing to improve our ability to respond more quickly to similar issues through faster regional redirection, and we are working with our upstream provider on improved monitoring.

Sep 26, 05:08 UTC
Update - Monitors continue to see improvements. We are declaring full recovery.
Sep 26, 05:08 UTC
Update - Copilot is operating normally.
Sep 26, 05:03 UTC
Update - We've applied a mitigation to fix the issues and are seeing improvements in telemetry. We are monitoring for full recovery.
Sep 26, 03:51 UTC
Update - We believe we have identified the root cause of the issue and are monitoring to ensure the problem does not recur.
Sep 26, 02:34 UTC
Update - We are continuing to investigate the root cause of the latency previously observed, to ensure it does not recur and that stability improves going forward.
Sep 26, 01:46 UTC
Update - We are continuing to investigate the root cause of the latency previously observed, to ensure it does not recur and that stability improves going forward.
Sep 26, 01:03 UTC
Update - Copilot users should no longer see request failures. We are still investigating the root cause of the issue to ensure that the experience will remain uninterrupted.
Sep 26, 00:29 UTC
Update - We are seeing recovery for requests to Copilot API in affected regions, and are continuing to investigate to ensure the experience remains stable.
Sep 25, 23:55 UTC
Update - We have noticed degraded performance of the Copilot API in some regions. This may result in latency or failed responses to requests to Copilot. We are investigating mitigation options.
Sep 25, 23:40 UTC
Investigating - We are investigating reports of degraded performance for Copilot.
Sep 25, 23:39 UTC
Sep 25, 2024
Resolved - On September 25th, 2024 from 18:32 UTC to 19:13 UTC, the Actions service experienced a degradation during a production deployment, leading to actions failing to download at the start of a job. On average, 21% of Actions workflow runs failed to start during the incident. The issue was traced to a bug in an internal service responsible for generating the URLs the Actions runner uses to download actions.

To mitigate the impact, we rolled back the offending deployment. We are implementing new monitors to improve our detection and response time for this class of issues in the future.

Sep 25, 19:19 UTC
Update - We're seeing issues related to Actions runs failing to download actions at the start of a job. We're investigating the cause and working on mitigations for customers impacted by this issue.
Sep 25, 19:14 UTC
Investigating - We are investigating reports of degraded performance for Actions and Pages.
Sep 25, 19:11 UTC
Resolved - On September 25, 2024 from 14:31 UTC to 15:06 UTC, the Git Operations service experienced a degradation, leading to 1,381,993 failed git operations. The overall error rate during this period was 4.2%, with a peak error rate of 12.5%.

The root cause was traced to a bug in a build script for a component that runs on the file servers hosting git repository data. The build script encountered an error that did not cause the overall build process to fail, resulting in a faulty set of artifacts being deployed to production.
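
For illustration only (this is not GitHub's actual build tooling, and the step names below are hypothetical), the failure mode described above is a build step whose non-zero exit status is never checked, so packaging and deployment proceed on top of a broken build. A minimal Python sketch of that pattern, alongside the checked variant that would have failed the build instead:

```python
import subprocess

def build_unchecked() -> None:
    # Failure mode: the compile step's exit status is ignored, so a broken
    # build still flows into packaging and, eventually, deployment.
    subprocess.run(["make", "component"])            # return code never inspected
    subprocess.run(["tar", "-czf", "artifact.tgz", "build/"])

def build_checked() -> None:
    # check=True raises subprocess.CalledProcessError on a non-zero exit
    # status, so the overall build fails instead of shipping faulty artifacts.
    subprocess.run(["make", "component"], check=True)
    subprocess.run(["tar", "-czf", "artifact.tgz", "build/"], check=True)
```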

To mitigate the impact, we rolled back the affecting deployment.

To prevent recurrences, we will address the underlying cause of the ignored build failure and improve metrics and alerting for the resulting production failure scenarios.

Sep 25, 16:03 UTC
Update - We are investigating reports of issues with both Actions and Packages, related to a brief period during which specific Git Operations were failing. We will continue to keep users updated on progress towards mitigation.
Sep 25, 15:34 UTC
Investigating - We are investigating reports of degraded performance for Git Operations.
Sep 25, 15:25 UTC
Sep 24, 2024
Resolved - On September 24th, 2024 from 08:20 UTC to 09:04 UTC, the Codespaces service experienced an interruption in network connectivity that left 175 codespaces unable to be created or resumed. The overall error rate during this period was 25%.

The cause was traced to SNAT port exhaustion following a deployment, which interrupted network connectivity and caused individual codespaces to lose their connection to the service.
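
As a rough, purely illustrative sketch of that mechanism (the numbers below are hypothetical, not GitHub's or its provider's actual allocations), SNAT port exhaustion occurs when the outbound connections a host needs, including the reconnect surge right after a deployment, exceed its fixed pool of translated source ports:

```python
# Hypothetical figures for illustration only.
snat_ports_per_host = 1024      # fixed outbound SNAT port allocation per host
steady_state_connections = 600  # long-lived outbound connections per host
post_deploy_reconnects = 700    # extra connections opened right after a deployment

needed = steady_state_connections + post_deploy_reconnects
print(f"ports needed during the surge: {needed} of {snat_ports_per_host} available")
if needed > snat_ports_per_host:
    # Once the pool is exhausted, new outbound connections fail until ports are
    # released, which surfaces as codespaces losing their connection to the service.
    print("SNAT port pool exhausted: new outbound connections will fail")
```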

To mitigate the impact, we increased port allocations to provide enough buffer for the surge in outbound connections shortly after deployments. We will be scaling up our outbound connectivity in the near future and adding improved monitoring of network capacity to prevent future regressions.

Sep 24, 21:04 UTC
Update - Codespaces is operating normally.
Sep 24, 21:04 UTC
Update - We have successfully mitigated the issue affecting create and resume requests for Codespaces. Early signs of recovery are being observed in the impacted region.
Sep 24, 21:01 UTC
Update - Codespaces is experiencing degraded performance. We are continuing to investigate.
Sep 24, 21:00 UTC
Update - We are investigating issues with Codespaces in the US East geographic area. Some users may not be able to create or start their Codespaces at this time. We will update you on mitigation progress.
Sep 24, 20:56 UTC
Investigating - We are investigating reports of degraded availability for Codespaces.
Sep 24, 20:54 UTC
Sep 23, 2024

No incidents reported.

Sep 22, 2024

No incidents reported.

Sep 21, 2024

No incidents reported.

Sep 20, 2024

No incidents reported.

Sep 19, 2024

No incidents reported.