
Posts by githubstatus.com


Partial degradation for code scanning default setup and for code quality
Apr 21, 05:04 UTC **Resolved** - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Apr 21, 04:18 UTC **Update** - The degradation has been mitigated. We are monitoring to ensure stability.
Apr 21, 03:10 UTC **Update** - The issue remains mitigated. Issues that were linked to projects during the incident may take approximately three more hours to render correctly while we complete a re-index.
Apr 20, 21:36 UTC **Update** - The degradation has been mitigated. We are monitoring to ensure stability.
Apr 20, 18:21 UTC **Update** - The degradation has been mitigated. We are monitoring to ensure stability.
Apr 20, 18:20 UTC **Monitoring** - The degradation has been mitigated. We are monitoring to ensure stability.
Apr 20, 18:20 UTC **Update** - The issue has been mitigated. Newly created issues linked to projects should now function as expected. Issues that were linked to projects during the incident may take approximately five hours to render correctly while we complete a re-index.
Apr 20, 18:08 UTC **Update** - A deployment to fix the issue of new issues not showing up in projects is underway.
Apr 20, 17:32 UTC **Update** - We continue to work on mitigation regarding new issues not showing on project boards.
Apr 20, 16:48 UTC **Update** - We continue to work on mitigation regarding new issues not showing on project boards.
Apr 20, 16:16 UTC **Update** - Code scanning default setup and Code Quality triggers are back up and running. PRs not processed before or during this incident will require a new push to trigger code scanning or code quality analysis. We are seeing problems with new issues not showing on project boards and are working on mitigation.
Apr 20, 15:20 UTC **Update** - We are continuing to work on a mitigation to unblock code scanning default setup and code quality features on pull requests.
Apr 20, 14:38 UTC **Update** - We are currently deploying mitigations that should unblock code scanning default setup and code quality features on pull requests.
Apr 20, 13:57 UTC **Update** - We are actively working to mitigate an issue affecting code scanning default setup and code quality features on pull requests. Users may experience pull request code scanning and code quality analyses not being triggered on new pull requests. Our engineering team has identified the root cause and is working on mitigating the issue.
Apr 20, 13:28 UTC **Investigating** - We are investigating reports of impacted performance for some GitHub services.
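The workaround in the Apr 20, 16:16 update (a new push to re-trigger code scanning or code quality analysis) can be done without changing any files by pushing an empty commit. A minimal sketch via `subprocess`; the repository path and commit message are placeholders:

```python
import subprocess

def add_empty_commit(repo_dir: str, message: str = "Re-trigger code scanning") -> None:
    """Create an empty commit; pushing it produces a new push event that re-runs PR checks."""
    subprocess.run(
        ["git", "-C", repo_dir, "commit", "--allow-empty", "-m", message],
        check=True,
    )
    # Then push as usual (e.g. `git push origin <branch>`) to trigger the re-run.
```

The same effect is available directly from the shell with `git commit --allow-empty -m "..." && git push`.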

Disruption with some GitHub services
Apr 17, 15:18 UTC **Resolved** - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Apr 17, 15:18 UTC **Monitoring** - The degradation affecting Issues has been mitigated. We are monitoring to ensure stability.
Apr 17, 15:08 UTC **Update** - We have isolated a problematic component in our infrastructure and are working to mitigate. We will continue to post updates as we work toward resolution.
Apr 17, 14:57 UTC **Update** - We are experiencing an issue that impacts approximately 10% of traffic to the web, resulting in slow and failed calls. We are investigating and will continue to post updates as we work toward mitigation.
Apr 17, 14:56 UTC **Update** - Issues is experiencing degraded performance. We are continuing to investigate.
Apr 17, 14:56 UTC **Investigating** - We are investigating reports of impacted performance for some GitHub services.

Incident with Codespaces
Apr 16, 18:28 UTC **Resolved** - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Apr 16, 18:22 UTC **Monitoring** - The degradation affecting Codespaces has been mitigated. We are monitoring to ensure stability.
Apr 16, 16:37 UTC **Update** - Our provider is implementing a mitigation and we are seeing signs of recovery.
Apr 16, 15:49 UTC **Update** - We found an issue that impacts 70% of Codespaces. We are engaged with the provider and working towards mitigation.
Apr 16, 15:41 UTC **Update** - Codespaces is experiencing degraded availability. We are continuing to investigate.
Apr 16, 15:08 UTC **Update** - We are experiencing degraded performance in Codespaces related to creating a new Codespace or starting an existing Codespace from the VS Code editor. SSH connections to Codespaces are not impacted. We are working toward mitigation and will continue to keep you updated on progress.
Apr 16, 15:06 UTC **Investigating** - We are investigating reports of degraded performance for Codespaces.

Disruption with some GitHub services
Apr 14, 06:08 UTC **Resolved** - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Apr 14, 06:07 UTC **Update** - This incident has been resolved. We will continue to monitor to ensure stability. Thank you for your patience and understanding as we addressed this issue.
Apr 14, 06:07 UTC **Monitoring** - The degradation has been mitigated. We are monitoring to ensure stability.
Apr 14, 04:40 UTC **Update** - We identified an issue that impacts the Copilot Dashboard on the Insights tab and are working on mitigation. We will continue to keep you updated on progress.
Apr 14, 03:47 UTC **Update** - The team continues to investigate issues accessing the Copilot Dashboard on the Insights tab. We will continue providing updates on the progress towards mitigation.
Apr 14, 02:40 UTC **Update** - The Copilot Dashboard on the Insights tab is not accessible and we are continuing to investigate.
Apr 14, 02:37 UTC **Update** - Degradation of Service - Insights Page
Apr 14, 01:57 UTC **Investigating** - We are investigating reports of impacted performance for some GitHub services.

Incident with Pages
Apr 13, 20:35 UTC **Resolved** - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Apr 13, 20:32 UTC **Update** - We have mitigated the issue with Pages.
Apr 13, 20:30 UTC **Monitoring** - The degradation affecting Pages has been mitigated. We are monitoring to ensure stability.
Apr 13, 19:57 UTC **Update** - We are investigating reports of issues with Pages. We will continue to keep users updated on progress towards mitigation.
Apr 13, 19:56 UTC **Investigating** - We are investigating reports of degraded availability for Pages.

Disruption with some GitHub services
Apr 13, 17:40 UTC **Resolved** - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Apr 13, 16:59 UTC **Update** - We have identified the root cause and are rolling out a fix for Copilot. The services should now be in recovery, with expected full recovery in 5 to 10 minutes.
Apr 13, 16:41 UTC **Investigating** - We are investigating reports of impacted performance for some GitHub services.

Problems with third-party Claude and Codex Agent sessions not being listed in the agents tab dashboard
Apr 10, 13:28 UTC **Resolved** - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Apr 10, 13:08 UTC **Update** - We are investigating third-party Claude and Codex Cloud Agent sessions not being listed in the agents tab dashboard.
Apr 10, 13:07 UTC **Investigating** - We are investigating reports of impacted performance for some GitHub services.

Disruption with some GitHub services
Apr 9, 20:36 UTC **Resolved** - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Apr 9, 19:52 UTC **Update** - We continue to investigate periodic delays in Copilot Cloud Agent job processing.
Apr 9, 18:57 UTC **Update** - We are continuing to investigate Copilot Cloud Agent job delays.
Apr 9, 17:48 UTC **Update** - Copilot Cloud Agent jobs are being processed and we are monitoring recovery.
Apr 9, 16:57 UTC **Update** - We are investigating delays processing Copilot Cloud Agent jobs.
Apr 9, 16:20 UTC **Update** - We are experiencing issues where jobs are delayed in starting for the Copilot coding agent.
Apr 9, 16:20 UTC **Investigating** - We are investigating reports of impacted performance for some GitHub services.

Disruption with some GitHub services
Apr 9, 10:15 UTC **Resolved** - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Apr 9, 10:15 UTC **Monitoring** - The degradation has been mitigated. We are monitoring to ensure stability.
Apr 9, 09:57 UTC **Update** - We are investigating an issue affecting GitHub Copilot coding agent. Users may experience significant delays when starting new agent sessions, with jobs remaining queued longer than expected. Our team has identified increased load as a contributing factor and is actively working to restore normal performance.
Apr 9, 09:50 UTC **Investigating** - We are investigating reports of impacted performance for some GitHub services.

Disruption with GitHub notifications
Apr 9, 04:57 UTC **Resolved** - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Apr 9, 04:57 UTC **Monitoring** - The degradation has been mitigated. We are monitoring to ensure stability.
Apr 9, 04:42 UTC **Investigating** - We are investigating reports of impacted performance for some GitHub services.

Disruption with some GitHub services
Apr 2, 21:48 UTC **Resolved** - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Apr 2, 21:48 UTC **Monitoring** - The degradation has been mitigated. We are monitoring to ensure stability.
Apr 2, 20:35 UTC **Update** - Although we are observing recovery once again, we expect continued periods of degradation. Work that is queued during times of degradation does eventually get processed. We continue to investigate and work toward a mitigation, and will update again within 2 hours.
Apr 2, 19:28 UTC **Update** - This issue has recurred. Customers will once again experience false job starts when assigning tasks to Copilot Cloud Agent. We are still investigating and trying to understand the pattern of degradation.
Apr 2, 18:25 UTC **Update** - We are once again seeing recovery with Copilot Cloud Agent job starts. We are keeping this open while we verify this won't recur.
Apr 2, 17:59 UTC **Update** - When assigning tasks to Copilot Cloud Agent, the task will appear to be working, but may not actually be running. We are investigating.
Apr 2, 17:49 UTC **Investigating** - We are investigating reports of impacted performance for some GitHub services.

Copilot Coding Agent failing to start some jobs
Apr 2, 16:30 UTC **Resolved** - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Apr 2, 16:28 UTC **Update** - When assigning tasks to Copilot Cloud Agent, the task will appear to be working, but may not actually be running. We are investigating.
Apr 2, 16:18 UTC **Investigating** - We are investigating reports of impacted performance for some GitHub services.

Disruption with GitHub's code search
Apr 1, 23:45 UTC **Resolved** - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Apr 1, 23:45 UTC **Update** - Code search has recovered and is serving production traffic.
Apr 1, 22:00 UTC **Update** - We have stabilized Code Search infrastructure, and are in the final stages of validation before slowly reintroducing production traffic.
Apr 1, 19:37 UTC **Update** - We are still working on recovering back to a serviceable state and expect to have a more substantial update within another two hours.
Apr 1, 17:48 UTC **Update** - We are observing some recovery for Code Search queries, but customers should be aware that the data being served may be stale, especially for changes that took place after 07:00 UTC today (1 April 2026). We are still working on recovering our ingestion pipeline, and synchronizing the indexed data. We will update again within 2 hours.
Apr 1, 16:00 UTC **Update** - We identified an issue in our ingestion pipeline that degraded the freshness of Code Search results. While fixing the issue with the ingestion pipeline, a deployment caused a loss of dynamic configuration which is causing most requests for Code Search results to fail. We are working to restore the service and to re-ingest the misaligned data.
Apr 1, 15:02 UTC **Investigating** - We are investigating reports of impacted performance for some GitHub services.

GitHub audit logs are unavailable
Apr 1, 16:10 UTC **Resolved** - On April 1, 2026, between 15:34 UTC and 16:02 UTC, our audit log service lost connectivity to its backing data store due to a failed credential rotation. During this 28-minute window, audit log history was unavailable via both the API and web UI. This resulted in 5xx errors for 4,297 API actors and 127 github.com users. Additionally, events created during this window were delayed by up to 29 minutes in github.com and event streaming. No audit log events were lost; all audit log events were ultimately written and streamed successfully. Customers using GitHub Enterprise Cloud with data residency were not impacted by this incident. We were alerted to the infrastructure failure at 15:40 UTC — six minutes after onset — and resolved the issue by recycling the affected environment, restoring full service by 16:02 UTC. We are conducting a thorough review of our credential rotation process to strengthen its resiliency and prevent recurrence. In parallel, we are strengthening our monitoring capabilities to ensure faster detection and earlier visibility into similar issues going forward.
Apr 1, 16:07 UTC **Update** - A routine credential rotation has failed for our audit logs service; we have re-deployed our service and are waiting for recovery.
Apr 1, 16:06 UTC **Investigating** - We are investigating reports of impacted performance for some GitHub services.
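Since the resolution notes that delayed events were ultimately written, an organization admin could confirm this by querying the audit log REST API (`GET /orgs/{org}/audit-log`) filtered to the incident window. A minimal request-building sketch; the org name, time range, and token are placeholders:

```python
from typing import Dict, Tuple
from urllib.parse import urlencode

def audit_log_request(org: str, created_range: str, per_page: int = 100) -> Tuple[str, Dict[str, str]]:
    """Build a GET request for an organization's audit log, filtered by event creation time.

    `created_range` uses the audit-log search syntax for date ranges, e.g.
    "2026-04-01T15:30:00..2026-04-01T16:05:00" (assumed format for illustration).
    """
    query = urlencode({"phrase": f"created:{created_range}", "per_page": per_page})
    url = f"https://api.github.com/orgs/{org}/audit-log?{query}"
    headers = {
        "Accept": "application/vnd.github+json",
        "Authorization": "Bearer <TOKEN>",  # placeholder: a token with audit-log read access
    }
    return url, headers
```

Sending the request (e.g. with `urllib.request`) and paging through the results would list every event recorded during the window.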

Incident with Copilot
Apr 1, 12:41 UTC **Resolved** - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Apr 1, 12:10 UTC **Update** - The success rate and latency for creating and viewing agent sessions have stabilized at baseline levels, and we are continuing to monitor recovery.
Apr 1, 12:02 UTC **Update** - The degradation has been mitigated. We are monitoring to ensure stability.
Apr 1, 11:37 UTC **Update** - The success rate for creating and viewing agent sessions has stabilized, and we're continuing to monitor latency, which is trending toward baseline levels.
Apr 1, 11:24 UTC **Update** - The degradation has been mitigated. We are monitoring to ensure stability.
Apr 1, 10:56 UTC **Monitoring** - The degradation affecting Copilot has been mitigated. We are monitoring to ensure stability.
Apr 1, 10:31 UTC **Update** - Users may see increased latency and intermittent errors when viewing or creating agent sessions. We are working on mitigations to return to baseline performance and success rate.
Apr 1, 10:00 UTC **Update** - We are investigating reports of issues with service(s): Copilot Dotcom Agents. We will continue to keep users updated on progress towards mitigation.
Apr 1, 09:58 UTC **Investigating** - We are investigating reports of degraded performance for Copilot.

Incident with Pull Requests: High percentage of 500s
Mar 31, 21:23 UTC **Resolved** - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Mar 31, 21:16 UTC **Monitoring** - The degradation affecting Pull Requests has been mitigated. We are monitoring to ensure stability.
Mar 31, 21:12 UTC **Update** - We continue to see a small subset of repositories experiencing timeouts and elevated latency in Pull Requests, affecting under 1% of requests.
Mar 31, 19:28 UTC **Update** - Error rates remain elevated across multiple pull request endpoints. We are pursuing multiple potential mitigations.
Mar 31, 18:42 UTC **Update** - We continue to experience elevated error rates affecting Pull Requests. An earlier fix resolved one component of the issue, but some users may still encounter intermittent timeouts when viewing or interacting with pull requests. Our teams are actively investigating the remaining causes.
Mar 31, 17:16 UTC **Update** - We identified an issue causing increased errors when accessing Pull Requests. The mitigation is being applied across our infrastructure and we will continue to provide updates as the mitigation rolls out.
Mar 31, 16:35 UTC **Update** - We are seeing recovery in latency and timeouts of requests related to pull requests, even though 500s are still elevated. While we are continuing to investigate, we are applying a mitigation and expect further recovery after it is applied.
Mar 31, 16:15 UTC **Update** - We are continuing to investigate increased 500 errors affecting GitHub services. You may experience intermittent failures when using Pull Requests and other features. We are actively working to identify and resolve the underlying cause.
Mar 31, 15:39 UTC **Update** - We are investigating increased 500 errors affecting GitHub services. You may experience intermittent failures when using Pull Requests and other features. We are actively working to identify and resolve the underlying cause.
Mar 31, 15:06 UTC **Update** - We are seeing a higher than average number of 500s due to timeouts across GitHub services. We have a potential mitigation in flight and are continuing to investigate.
Mar 31, 15:05 UTC **Investigating** - We are investigating reports of degraded performance for Pull Requests.

Issues with metered billing report generation
Mar 31, 15:10 UTC **Resolved** - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Mar 31, 15:01 UTC **Monitoring** - The degradation has been mitigated. We are monitoring to ensure stability.
Mar 31, 14:59 UTC **Update** - We have applied mitigations to a data store related to billing reports, and are seeing partial recovery to billing report generation. We continue to monitor for full recovery.
Mar 31, 14:56 UTC **Update** - We are seeing a high number of 500s due to timeouts across GitHub services. We are redeploying some of our core services and we expect this will allow us to recover.
Mar 31, 14:39 UTC **Update** - We're continuing to see high failure rates on billing report generation, and are working on mitigations for a data store related to billing reports.
Mar 31, 13:56 UTC **Update** - We're seeing issues related to metered billing reports, intermittently affecting metered usage graphs and reports on the billing page. We have identified an issue with a data store, and are working on mitigations.
Mar 31, 13:47 UTC **Investigating** - We are investigating reports of impacted performance for some GitHub services.

Elevated delays in Actions workflow runs and Pull Request status updates
Mar 30, 13:25 UTC **Resolved** - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Mar 30, 13:25 UTC **Update** - The degradation has been mitigated. We are monitoring to ensure stability.
Mar 30, 13:20 UTC **Monitoring** - The degradation affecting Actions and Pull Requests has been mitigated. We are monitoring to ensure stability.
Mar 30, 13:02 UTC **Investigating** - We are investigating reports of degraded performance for Actions and Pull Requests.

Incident with Copilot
Mar 27, 05:00 UTC **Resolved** - On March 27, 2026, from 02:30 to 04:56 UTC, a misconfiguration in our rate limiting system caused users on Copilot Free, Student, Pro, and Pro+ plans to experience unexpected rate limit errors. The configuration that was incorrectly applied was intended solely for internal staff testing of rate-limiting experiences. Copilot Business and Copilot Enterprise accounts were not affected. During this period, affected users received error messages instructing them to retry after a certain time. Approximately 32% of active Free users, 35% of active Student users, 46% of active Pro users, and 66% of active Pro+ users were affected. After identifying the root cause, we reverted the change and restored the expected rate limits. We are reviewing our deployment and validation processes to help ensure configurations used for internal testing cannot be inadvertently applied to production environments.
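The error messages here told users to retry after a certain time; client code can honor that automatically instead of failing outright. A minimal, generic retry sketch — the `RateLimited` exception and `request` callable are illustrative, not part of any Copilot API:

```python
import time
from typing import Callable

class RateLimited(Exception):
    """Raised when the server rejects a call and suggests a wait (hypothetical wrapper)."""
    def __init__(self, retry_after: float):
        super().__init__(f"rate limited; retry after {retry_after}s")
        self.retry_after = retry_after  # seconds, e.g. parsed from a Retry-After header

def call_with_retry(request: Callable[[], str], max_attempts: int = 3,
                    sleep: Callable[[float], None] = time.sleep) -> str:
    """Retry a rate-limited call, waiting the server-suggested interval between attempts."""
    for attempt in range(max_attempts):
        try:
            return request()
        except RateLimited as err:
            if attempt == max_attempts - 1:
                raise  # budget exhausted; surface the error to the caller
            sleep(err.retry_after)
    raise AssertionError("unreachable")
```

Injecting `sleep` keeps the wait strategy testable and lets callers cap or jitter the delay.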

Teams GitHub Notifications App is down
Mar 24, 17:09 UTC **Update** - We found an issue impacting notifications from GitHub to Microsoft Teams. We are working on mitigation and will keep users updated on progress towards mitigation.
Mar 24, 16:59 UTC **Investigating** - We are investigating reports of impacted performance for some GitHub services.

Disruption with some GitHub services
Mar 22, 10:02 UTC **Resolved** - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Mar 22, 09:27 UTC **Update** - We are investigating intermittently high latency and errors from Git operations.
Mar 22, 09:08 UTC **Investigating** - We are investigating reports of impacted performance for some GitHub services.

Disruption with Copilot Coding Agent Sessions
Mar 20, 01:58 UTC **Resolved** - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Mar 20, 01:26 UTC **Update** - We are rolling out our mitigation and are seeing recovery.
Mar 20, 01:00 UTC **Update** - We are seeing widespread issues starting and viewing Copilot Agent sessions. We understand the cause and are working on remediation.
Mar 20, 00:58 UTC **Investigating** - We are investigating reports of impacted performance for some GitHub services.

Git operations for users on the west coast are experiencing an increase in latency
Mar 20, 00:05 UTC **Resolved** - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Mar 20, 00:05 UTC **Update** - We have reached stability with git operations through our changes deployed today.
Mar 19, 23:52 UTC **Update** - We are seeing early signs of improvement. We are working on one more small change to further improve traffic routing on the west coast.
Mar 19, 22:57 UTC **Update** - We have completed the rollout of our new network path and are monitoring its impact.
Mar 19, 21:59 UTC **Update** - We are beginning the rollout of our new network path. During this change, users will continue to see higher latency from the west coast. We will provide another update when the rollout is complete.
Mar 19, 18:27 UTC **Update** - We are working to enable a new network path on the west coast to reduce load and will monitor the impact on latency for Git Operations.
Mar 19, 17:49 UTC **Update** - We are still seeing elevated latency for Git operations on the west coast and are continuing to investigate.
Mar 19, 17:01 UTC **Update** - We are redirecting traffic back to our Seattle region and customers should see a decrease in latency for Git operations.
Mar 19, 16:25 UTC **Investigating** - We are investigating reports of degraded performance for Git Operations.

Issues with Copilot Coding Agent
Mar 19, 14:32 UTC **Resolved** - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Mar 19, 14:06 UTC **Update** - Copilot is operating normally.
Mar 19, 14:02 UTC **Update** - We are investigating reports that Copilot Coding Agent session logs are not available in the UI.
Mar 19, 13:45 UTC **Update** - Copilot is experiencing degraded performance. We are continuing to investigate.
Mar 19, 13:44 UTC **Investigating** - We are investigating reports of impacted performance for some GitHub services.

Disruption with Copilot Coding Agent sessions
Mar 19, 02:52 UTC **Resolved** - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Mar 19, 02:46 UTC **Update** - We have rolled out our mitigation and are seeing recovery for Copilot Coding Agent sessions.
Mar 19, 02:25 UTC **Update** - We are seeing widespread issues starting and viewing Copilot Agent sessions. We have a hypothesis for the cause and are working on remediation.
Mar 19, 02:05 UTC **Investigating** - We are investigating reports of impacted performance for some GitHub services.

Disruption with some GitHub services
Mar 19, 01:44 UTC **Resolved** - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Mar 19, 01:43 UTC **Update** - We are seeing recovery in git operations for customers on the West Coast of the US.
Mar 19, 00:56 UTC **Update** - We continue to investigate the slow performance of Git Operations affecting the US West Coast.
Mar 19, 00:10 UTC **Update** - We continue to investigate degraded performance for git operations from the US West Coast.
Mar 18, 23:33 UTC **Update** - We are continuing to investigate degraded performance for git operations from the US West Coast.
Mar 18, 22:48 UTC **Update** - We are experiencing increased latency when performing git operations, especially large pushes and pulls from customers on the west coast of the US. We are not seeing an increase in failures. We are continuing to investigate.
Mar 18, 22:36 UTC **Update** - Git Operations is experiencing degraded performance. We are continuing to investigate.
Mar 18, 22:36 UTC **Investigating** - We are investigating reports of impacted performance for some GitHub services.

Webhook delivery is delayed
Mar 18, 19:46 UTC **Resolved** - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Mar 18, 19:25 UTC **Update** - We are seeing recovery and are continuing to monitor the latency for webhook deliveries.
Mar 18, 18:51 UTC **Investigating** - We are investigating reports of degraded performance for Webhooks.
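During delivery delays like this one, integrators can inspect recent deliveries and request a redelivery through the repository webhooks REST API (`GET /repos/{owner}/{repo}/hooks/{hook_id}/deliveries` and `POST .../deliveries/{delivery_id}/attempts`). A small path-building sketch; the owner, repository, and IDs are placeholders:

```python
from typing import Optional

def webhook_delivery_path(owner: str, repo: str, hook_id: int,
                          delivery_id: Optional[int] = None) -> str:
    """Build the REST API path for repository webhook deliveries.

    Without `delivery_id`: the path to GET recent deliveries (status, latency, duplicates).
    With `delivery_id`: the path to POST a redelivery attempt for one delivery.
    """
    base = f"/repos/{owner}/{repo}/hooks/{hook_id}/deliveries"
    if delivery_id is None:
        return base
    return f"{base}/{delivery_id}/attempts"
```

These paths can be called with any authenticated client, e.g. `gh api` from the GitHub CLI.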

Errors starting and connecting to Codespaces
Mar 16, 15:28 UTC **Resolved** - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Mar 16, 15:27 UTC **Update** - Errors starting or resuming Codespaces have resolved.
Mar 16, 15:06 UTC **Update** - We are investigating reports of users experiencing errors when starting or connecting to Codespaces. Some users may be unable to access their development environments during this time. We are working to identify the root cause and will implement a fix as soon as possible.
Mar 16, 15:01 UTC **Investigating** - We are investigating reports of impacted performance for some GitHub services.

Degraded performance for various services
Mar 13, 16:15 UTC **Resolved** - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Mar 13, 16:02 UTC **Update** - We have deployed mitigations and are actively monitoring for recovery. We'll post another update by 17:00 UTC.
Mar 13, 15:47 UTC **Update** - We are investigating intermittent performance degradation affecting Actions, Feeds, Issues, Package Registry, Profiles, Registry Metadata, Star, and User Dashboard. Users may experience elevated error rates and slower response times when accessing these services. We have identified a potential cause and are implementing mitigations to restore normal service. We'll post another update by 16:15 UTC.
Mar 13, 15:20 UTC **Update** - Packages is experiencing degraded performance. We are continuing to investigate.
Mar 13, 15:14 UTC **Update** - We are investigating reports of issues with service(s): Actions, Feeds, Issues, Profiles, Registry Metadata, Star, User Dashboard. We will continue to keep users updated on progress towards mitigation.
Mar 13, 15:12 UTC **Investigating** - We are investigating reports of degraded performance for Actions and Issues.

Degraded Codespaces experience
Mar 12, 18:53 UTC **Resolved** - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Mar 12, 17:59 UTC **Update** - Codespaces IPs are no longer being blocked from Visual Studio Marketplace operations and we are monitoring for full recovery.
Mar 12, 17:20 UTC **Update** - We're seeing intermittent failures downloading from the extension marketplace from codespaces, caused by IP blocks for some codespaces. We're working to remove those blocks.
Mar 12, 16:09 UTC **Update** - We're seeing intermittent failures downloading from the extension marketplace from codespaces and are investigating.
Mar 12, 15:08 UTC **Update** - We're seeing partial recovery for the issue affecting extension installation in newly created Codespaces. Some users may still experience degraded functionality where extensions hit errors. The team continues to investigate the root cause while monitoring the recovery.
Mar 12, 14:29 UTC **Update** - We have deployed a fix for the issue affecting extension installation in newly created Codespaces. New Codespaces are now being created with working extensions. We'll post another update by 15:30 UTC.
Mar 12, 13:50 UTC **Update** - We are continuing to investigate an issue where extensions fail to install in newly created Codespaces. Users can create and access Codespaces, but extensions will not be operational, resulting in a degraded experience. The team is working on a fix. All newly created Codespaces are affected. We'll post another update by 15:00 UTC.
Mar 12, 13:07 UTC **Update** - We're investigating an issue where extensions fail to install in newly created Codespaces. Users can still create and access Codespaces, but extensions will not be operational, resulting in a degraded development experience. Our team is actively working to identify and resolve the root cause. We'll post another update by 14:00 UTC.
Mar 12, 13:06 UTC **Investigating** - We are investigating reports of degraded performance for Codespaces.