GitHub Status - Incident History
(Atom feed from www.githubstatus.com, last updated 2024-04-27 09:57 UTC)

We are investigating reports of degraded performance. (resolved Apr 26, 2024)

Apr 26, 16:49 UTC - Resolved: This incident has been resolved.
Apr 26, 16:49 UTC - Update: The issue appears to be limited in scope to a few internal users, without any reports of issues from outside GitHub. We are adding additional logging to our WebAuthn flow to detect this in the future. If you cannot use your mobile passkey to sign in, please contact support or reach out to us in https://github.com/orgs/community/discussions/67791
Apr 26, 15:10 UTC - Update: Sign-in to GitHub.com using a passkey from a mobile device is currently failing. Users may see an error message saying that passkey sign-in failed, or may not see any passkeys available after signing in with their password. This impacts both GitHub.com on mobile devices and cross-device authentication where the phone's passkey is used to authenticate on a desktop browser. To work around this issue, use your password and the 2FA method you set up prior to setting up your passkey, either TOTP or SMS.
Apr 26, 15:10 UTC - Investigating: We are currently investigating this issue.

Incident with Pull Requests (resolved Apr 24, 2024)

Apr 24, 18:20 UTC - Resolved: This incident has been resolved.
Apr 24, 18:01 UTC - Update: The previous mitigation has been rolled back and updates to the pull request merge button should be working again. If you are still seeing issues, please try refreshing the pull request page.
Apr 24, 17:45 UTC - Update: One of our mitigations from the previous incident caused live updates to the pull request merge button to be disabled for some customers. Refreshing the page will update the mergeability status.
Apr 24, 17:40 UTC - Investigating: We are investigating reports of degraded performance for Pull Requests.

Incident with Pull Requests, Git Operations, Actions, API Requests, Issues and Webhooks (resolved Apr 24, 2024)

Apr 24, 16:16 UTC - Resolved: This incident has been resolved.
Apr 24, 16:12-16:13 UTC - Updates: API Requests, Git Operations, Webhooks, Pull Requests, Actions, and Issues are operating normally.
Apr 24, 15:50 UTC - Update: We are seeing site-wide recovery but continue to closely monitor our systems and are putting additional mitigations in place to ensure we are back to full health.
Apr 24, 14:08 UTC - Update: We are continuing to see consistent impact, and we are continuing to work on multiple mitigations to reduce load on our systems.
Apr 24, 12:47 UTC - Update: We have found an issue that may be contributing additional load to the website and are working on mitigations. We don't see any additional impact at this time and will provide another update within an hour if we see improvements or fully mitigate the issue based on this investigation.
Apr 24, 12:00 UTC - Update: We have applied some mitigations and see less than 0.3 percent of requests failing site-wide, but we still see elevated 500 errors and will continue to stay statused and investigate until we are confident we have restored our error rate to baseline.
Apr 24, 11:13 UTC - Update: We are seeing increased 500 errors for various GraphQL and REST APIs related to database issues. Some users may see periodic 500 errors. The team is looking into the problematic queries and mitigations now.
Apr 24, 10:51-11:09 UTC - Updates: Issues, Webhooks, Pull Requests, Git Operations, and Actions were successively marked as experiencing degraded performance while we continued to investigate.
Apr 24, 10:45 UTC - Investigating: We are investigating reports of degraded performance for API Requests.

Incident with Git Operations (resolved Apr 24, 2024)

Apr 24, 11:01 UTC - Resolved: This incident has been resolved.
Apr 24, 10:56 UTC - Investigating: We are investigating reports of degraded performance for Git Operations.

Incident with Codespaces (resolved Apr 18, 2024)

Apr 18, 18:47 UTC - Resolved: This incident has been resolved.
Apr 18, 18:41 UTC - Update: Codespaces customers using our 16-core machines in the West US 2 and West US 3 regions may experience issues creating new Codespaces and resuming existing ones. We suggest any customers experiencing issues switch to the East US region.
Apr 18, 18:25 UTC - Investigating: We are investigating reports of degraded performance for Codespaces.

Incident with Copilot (resolved Apr 17, 2024)

Apr 17, 00:48 UTC - Resolved: On April 16th, 2024, between 22:31 UTC and 00:11 UTC, Copilot chat users experienced elevated request errors. On average, the error rate was 1.2%, peaking at 5.2%. This was due to a rolling application upgrade applied to a backend system during a maintenance event. The incident was resolved once the rolling upgrade was completed. We are working to improve monitoring and alerting of our services, to be more resilient to failures, and to coordinate maintenance events so as to reduce our time to detection and mitigation of issues like this in the future.
Apr 17, 00:30 UTC - Update: We're continuing to investigate issues with Copilot.
Apr 16, 23:59 UTC - Update: Copilot is experiencing degraded performance. We are continuing to investigate.
Apr 16, 23:57 UTC - Update: We're investigating issues with Copilot availability.
Apr 16, 23:51 UTC - Investigating: We are currently investigating this issue.

Incident with Copilot (resolved Apr 15, 2024)

Apr 15, 14:53 UTC - Resolved: Between April 15th, 2024 (09:45 UTC) and April 18th, 2024 (19:10 UTC), Copilot completions experienced intermittent periods of degraded service availability affecting portions of Europe and North America, impacting 3.5% of users globally at its peak. This was due to rolling operating-system-level maintenance updates to Copilot infrastructure within those regions, which failed to restart gracefully as intended. The incident was mitigated by routing traffic to other regions, and was resolved once the update was completed and normal traffic routing was restored. We are working to resolve the root issue that prevented systems from restarting gracefully, as well as improving our coordination and monitoring around backend maintenance operations to reduce time to recovery from such issues in the future.
Apr 15, 14:13 UTC - Update: We have applied a mitigation for Copilot in the EU region and are working towards full recovery of the service.
Apr 15, 13:35 UTC - Update: Due to an outage in one Copilot region, traffic is currently being served from other regions. European users may experience higher response times.
Apr 15, 12:58 UTC - Investigating: We are investigating reports of degraded performance for Copilot.

We are investigating reports of degraded performance. (resolved Apr 14, 2024)

Apr 14, 21:53 UTC - Resolved: Beginning at 17:30 UTC on April 11th and lasting until 20:30 UTC on April 14th, github.com saw significant (up to 2 hours) delays in delivering emails. At 14:21 UTC on April 14th, community reports were confirmed and an incident was declared. The emails most impacted by the delay were password reset and unrecognized device verification, which contain time-sensitive links or verification codes that must be acted on for password resets or unrecognized logins to proceed. Users attempting to reset their password during the incident were unable to complete the reset. Users without two-factor authentication (2FA) signing in on an unrecognized device were unable to complete device verification. Enterprise Managed Users, users with 2FA, and users on recognized devices or IP addresses were still able to sign in. This impacted 800-1000 user device verifications and 300-400 password resets. The mailer delays were caused by increased usage of a shared resource pool; a separate internal job queue became unhealthy and prevented the mailer queue from being worked. We have made immediate improvements to better detect and react to this type of situation. As a short-term mitigation, we have added a queue-bypass ability for time-sensitive emails such as password reset and unrecognized device verification; we can enable this setting if we observe email delays recurring, which will ensure that future incidents do not affect users' ability to complete critical login flows. We have paused the unhealthy job queue to prevent impact to other queues using shared resources, and we have updated our methods of detecting anomalous email delivery so that we identify this issue sooner.
Apr 14, 21:52 UTC - Update: We are seeing a full recovery. Device verification and password reset emails are being delivered on time.
Apr 14, 21:34 UTC - Update: We are deploying a possible mitigation for the delayed device verification and password change emails.
Apr 14, 19:54 UTC - Update: We continue to investigate delays in email delivery, which are preventing users without 2FA enabled from verifying new devices. We will provide more information as it becomes available.
Apr 14, 15:01-15:50 UTC - Updates: We are continuing to investigate issues with the delivery of device verification emails for users without 2FA on new devices.
Apr 14, 14:27 UTC - Update: Device verification emails for sign-ins by users without 2FA on new devices are being sent late or not at all. This is blocking successful sign-ins for these users. We are investigating.
Apr 14, 14:21 UTC - Investigating: We are currently investigating this issue.
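
The queue-bypass mitigation described above amounts to routing time-sensitive mail around the shared background queue when delays are observed. A minimal sketch of that pattern, assuming hypothetical flag names and delivery helpers (the feed does not describe GitHub's actual mailer internals):

```python
# Hypothetical sketch of a queue bypass for time-sensitive email.
# TIME_SENSITIVE, BYPASS_ENABLED, and both delivery helpers are
# illustrative names, not GitHub's implementation.
TIME_SENSITIVE = {"password_reset", "device_verification"}
BYPASS_ENABLED = True  # operators enable this when queue delays are observed

def deliver_now(payload: dict) -> None:
    print(f"delivering immediately: {payload}")

def enqueue(queue_name: str, payload: dict) -> None:
    print(f"queued on {queue_name}: {payload}")

def send_email(kind: str, payload: dict) -> None:
    if BYPASS_ENABLED and kind in TIME_SENSITIVE:
        deliver_now(payload)        # synchronous path: skips the shared queue
    else:
        enqueue("mailer", payload)  # normal path: shared background queue

send_email("password_reset", {"to": "user@example.com"})
send_email("newsletter", {"to": "user@example.com"})
```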

Incident with Git Operations, API Requests, Actions, Pages, Issues and Copilot (resolved Apr 10, 2024)

Apr 10, 19:03 UTC - Resolved: On April 10, 2024, between 18:33 UTC and 19:03 UTC, several services were degraded due to the release of a compute-intensive database query that prevented a key database cluster from serving other queries. GitHub Actions saw delays and failures across the entire run life cycle and a significant increase in the number of timeouts in API requests. All Pages deployments failed for the duration of the incident. Git Systems saw approximately 12% of raw file download requests and 16% of repository archive download requests return HTTP 50x error codes for the duration of the incident. Issues experienced increased latency for issue creation and updates. Codespaces saw roughly 500 requests to create or resume a Codespace time out during the incident. We mitigated the incident by rolling back the offending query, and we are working to introduce measures that automatically detect compute-intensive queries in test runs during CI to prevent an issue like this one from recurring.
Apr 10, 19:03 UTC - Update: Git Operations, API Requests, Actions, Pages, Issues and Copilot are operating normally.
Apr 10, 19:01 UTC - Update: Copilot is experiencing degraded performance. We are continuing to investigate.
Apr 10, 18:55 UTC - Update: We're aware of issues impacting multiple services and have rolled back the deployment. Systems appear to be recovering and we will continue to monitor.
Apr 10, 18:53 UTC - Update: API Requests is experiencing degraded performance. We are continuing to investigate.
Apr 10, 18:45 UTC - Update: Copilot is experiencing degraded availability. We are continuing to investigate.
Apr 10, 18:42 UTC - Updates: Issues is experiencing degraded performance; API Requests is experiencing degraded availability. We are continuing to investigate.
Apr 10, 18:41 UTC - Investigating: We are investigating reports of degraded performance for Git Operations, API Requests, Actions and Pages.
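
The summary above mentions automatically detecting compute-intensive queries in CI test runs. One plausible approach is to run each query under EXPLAIN and fail the build when the optimizer's cost estimate exceeds a budget. A sketch using mysql-connector-python; the connection settings, the cost budget, and the assumption that a cost threshold is the right detector are all illustrative, not GitHub's tooling:

```python
# Hypothetical CI guard: fail fast when MySQL's optimizer estimates a query
# is too expensive. Budget and connection details are illustrative.
import json
import mysql.connector

MAX_QUERY_COST = 10_000.0  # illustrative budget, tuned per schema

def assert_query_is_cheap(cursor, sql: str, params: tuple = ()) -> None:
    cursor.execute("EXPLAIN FORMAT=JSON " + sql, params)
    (plan_json,) = cursor.fetchone()
    plan = json.loads(plan_json)
    # MySQL 8.0 reports the optimizer's estimate under query_block.cost_info.
    cost = float(plan["query_block"].get("cost_info", {}).get("query_cost", 0.0))
    if cost > MAX_QUERY_COST:
        raise AssertionError(f"estimated cost {cost} exceeds budget: {sql}")

conn = mysql.connector.connect(
    host="127.0.0.1", user="ci", password="ci", database="app_test"  # placeholders
)
cur = conn.cursor()
assert_query_is_cheap(cur, "SELECT * FROM repositories WHERE owner_id = %s", (42,))
```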

Incident with Codespaces (resolved Apr 10, 2024)

Apr 10, 18:07 UTC - Resolved: Between 2024-04-09 21:35 UTC and 2024-04-10 19:03 UTC, creation of new Codespaces was degraded by an image upgrade to the virtual machines backing new Codespaces. During the incident, approximately 7% of new Codespaces were created but never became available to their owning end users. We mitigated the incident by reverting to the previous image version, and we are working to improve deployment confidence around image upgrades to reduce the likelihood of recurrence.
Apr 10, 17:31 UTC - Update: We have applied a fix and are continuing to monitor. This incident will remain open until we have confirmed that the service is fully restored.
Apr 10, 16:56 UTC - Update: We believe we have identified the root cause of the issue and are working to fully restore the Codespaces service. We will provide another update within the next 30 minutes.
Apr 10, 16:20 UTC - Update: We're seeing issues related to connecting to Codespaces impacting a subset of users. We are actively investigating and will provide another update shortly.
Apr 10, 16:12 UTC - Investigating: We are investigating reports of degraded performance for Codespaces.

Incident with Issues and Pull Requests (resolved Apr 10, 2024)

Apr 10, 09:38 UTC - Resolved: Between 8:18 and 9:38 UTC on Wednesday, April 10th, customers experienced increased error rates across several services due to an overloaded primary database instance, ultimately caused by an unbounded query. We mitigated the impact by failing the instance over to more capable hardware and shipping an improved version of the query that runs against read replicas. In response to this incident, we are also working on performance improvements to the class of queries that most frequently resulted in failed requests during this timeframe. Web-based repository file editing saw a 17% failure rate during the incident, with other repository management operations (e.g. rule updates, web-based branch creation, repository renames) seeing failure rates between 1.5% and 8%; API failure rates for these operations were higher. Issue and Pull Request authoring was heavily impacted due to reliance on the affected database primary, and we are continuing work to remove that dependence from the authoring workflows for these services. GitHub search saw a 5% failure rate throughout the incident due to reliance on the impacted primary when authorizing repository access; the majority of failing requests were for search-bar autocomplete, with a limited number of search result failures as well.
Apr 10, 09:38 UTC - Update: Issues and Pull Requests are operating normally. The mitigation rolled out has successfully resolved the issue; failure rates have dropped and normal service has returned across all affected features.
Apr 10, 09:30 UTC - Update: We are aware of impact across a number of GitHub features, primarily write actions for Issues, Repositories and Pull Requests. We are also seeing increased failure rates for search queries. Our team has rolled out a mitigation and is monitoring for recovery.
Apr 10, 09:22 UTC - Investigating: We are investigating reports of degraded availability for Issues and Pull Requests.

Incident with Actions (resolved Apr 9, 2024)

Apr 9, 20:17 UTC - Resolved: On April 9, 2024, between 18:00 and 20:17 UTC, Actions was degraded and had failures for new and existing customers. During this time, Actions failed to start for 5,426 new repositories, and 1% of runs for existing customers were delayed, with half of those failing due to an infrastructure error. The root cause was an expired certificate, which caused authentication to fail between internal services. The incident was mitigated once the certificate was rotated. We are working to improve our automation to ensure certificates are rotated before expiration.
Apr 9, 19:43 UTC - Update: We continue to work on resolving issues with repositories not being able to enable Actions and with Actions network configuration setup not working properly. We have confirmed a fix and are in the process of deploying it to production. Another update will be shared within the next 30 minutes.
Apr 9, 19:06 UTC - Update: We continue to work on resolving issues with repositories not being able to enable Actions and with Actions network configuration setup not working properly. We will provide additional information shortly.
Apr 9, 18:36 UTC - Update: We are aware of issues with repositories not being able to enable Actions. We are in the process of restoring full functionality and will provide additional information shortly.
Apr 9, 18:36 UTC - Investigating: We are investigating reports of degraded performance for Actions.

We are investigating reports of degraded performance. (resolved Apr 9, 2024)

Apr 9, 05:10 UTC - Resolved: On April 9, 2024, between 04:32 UTC and 05:10 UTC, an outage occurred in GitHub Packages, specifically impacting downloads of NPM packages; all attempts to download NPM packages failed during this period. Upon investigation, we found a recent code change in the NPM Registry to be the root cause. Customer impact was limited to users of the NPM Registry, with no effect on other registries. We mitigated the incident by rolling back the problematic change. We are following up with repair items to cover gaps in our observability and implementing measures in our CI process to detect such failures before they can impact customers.
Apr 9, 04:51 UTC - Update: We are investigating reports of issues with downloading NPM packages. We will continue to keep users updated on progress towards mitigation.
Apr 9, 04:32 UTC - Investigating: We are currently investigating this issue.

Incident with Pages (resolved Apr 6, 2024)

Apr 6, 02:22 UTC - Resolved: On April 6, 2024, between 00:00:00 UTC and 02:20:05 UTC, access to Private Pages on the *.pages.github.io domain was degraded because the deployed TLS certificate had expired. Service was restored by uploading the renewed certificate to our CDN. The expiry was due to a process error and a gap in our alerting: the certificate had been renewed and updated in our internal vault, but it was not deployed to the CDN. We are working to reduce the potential for errors in our certificate renewal process, as well as adding the *.pages.github.io domain to our existing TLS alerting system.
Apr 6, 01:52 UTC - Update: We are investigating issues with Private Pages due to an expired certificate.
Apr 6, 01:52 UTC - Investigating: We are investigating reports of degraded performance for Pages.
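
A TLS alerting system of the kind mentioned above ultimately rests on a probe that measures how long a deployed certificate remains valid. A minimal sketch using only the Python standard library; the probed host and the 14-day threshold are illustrative choices (a real check for this incident would probe a *.pages.github.io host well before expiry):

```python
# Probe a live endpoint's TLS certificate and warn before it expires.
# Host and threshold are illustrative, not GitHub's actual alerting config.
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(host: str, port: int = 443) -> float:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # 'notAfter' looks like 'Jun  1 12:00:00 2024 GMT'
    not_after = datetime.strptime(
        cert["notAfter"], "%b %d %H:%M:%S %Y %Z"
    ).replace(tzinfo=timezone.utc)
    return (not_after - datetime.now(timezone.utc)).total_seconds() / 86400

if __name__ == "__main__":
    remaining = days_until_expiry("www.githubstatus.com")  # placeholder host
    if remaining < 14:  # illustrative alerting threshold
        print(f"WARNING: certificate expires in {remaining:.1f} days")
    else:
        print(f"OK: {remaining:.1f} days of validity left")
```

Note that the probe must run on a schedule before expiry: once the certificate has lapsed, the verifying handshake itself fails, which is the outage rather than the warning.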

Incident with Pages, Actions, Codespaces, API Requests, Issues and Pull Requests (resolved Apr 5, 2024)

Apr 5, 09:18 UTC - Resolved: On April 5, 2024, between 8:11 and 8:58 UTC, a number of GitHub services were degraded and returned error responses. The web request error rate peaked at 6% and the API request error rate peaked at 10%; 103,660 Actions workflow runs failed to start. A database load balancer change caused connection failures in one of our three data centers to various critical database clusters. The incident was mitigated once that change was rolled back. We have updated our deployment pipeline to better detect this problem in earlier stages of rollout to reduce impact on end users.
Apr 5, 09:17 UTC - Updates: Pull Requests, Issues, API Requests, Codespaces, Actions, and Pages are operating normally.
Apr 5, 09:17 UTC - Update: Actions is experiencing degraded performance. We are continuing to investigate.
Apr 5, 09:00 UTC - Update: We've reverted a change we believe caused this, are seeing initial indications of reduced errors, and are monitoring for full recovery.
Apr 5, 08:59 UTC - Update: Pages is experiencing degraded performance. We are continuing to investigate.
Apr 5, 08:51 UTC - Update: We're seeing connection failures to some databases in two of three sites and are investigating.
Apr 5, 08:49-08:50 UTC - Updates: Pull Requests, Issues, API Requests, and Codespaces are experiencing degraded performance. We are continuing to investigate.
Apr 5, 08:33 UTC - Investigating: We are investigating reports of degraded availability for Actions.

We are investigating reports of degraded performance. (resolved Apr 5, 2024)

Apr 5, 08:53 UTC - Resolved: This incident has been resolved.
Apr 5, 08:31 UTC - Investigating: We are currently investigating this issue.

Incident with Issues, API Requests, Pull Requests and Codespaces (resolved Apr 5, 2024)

Apr 5, 08:48 UTC - Resolved: This incident has been resolved. Issues, API Requests, Pull Requests and Codespaces are operating normally.
Apr 5, 08:36 UTC - Update: Codespaces is experiencing degraded performance. We are continuing to investigate.
Apr 5, 08:34 UTC - Update: Pull Requests is experiencing degraded performance. We are continuing to investigate.
Apr 5, 08:32 UTC - Update: API Requests is experiencing degraded performance. We are continuing to investigate.
Apr 5, 08:28 UTC - Investigating: We are investigating reports of degraded performance for Issues.

Incident with Actions, API Requests and Webhooks (resolved Apr 4, 2024)

Apr 4, 01:10 UTC - Resolved: Between April 3rd, 2024 23:15 UTC and April 4th, 2024 01:10 UTC, GitHub Actions experienced a partial infrastructure outage that led to degraded workflows (failed or delayed starts). Additionally, 0.15% of webhook deliveries were degraded due to an unrelated spike in database latency in a single availability zone. SLOs for Actions were at 90% during the incident, though impact was not evenly distributed across customers. We statused green after a long stretch of recovered SLOs, starting at 00:35 UTC on April 4th. During this incident we also had issues with our incident tooling (https://www.githubstatus.com/), which failed to update the public status page and occasionally did not load. The incident was resolved after the infrastructure issue was mitigated at 04:27 UTC. We are working to improve monitoring and processes in response: investigating how to improve resilience and communication with our infrastructure provider, how to better handle ongoing incidents that are no longer impacting SLOs, and improving our incident tooling to ensure the public status page is updated in a timely manner.
Apr 4, 01:07-01:09 UTC - Updates: Actions and API Requests are operating normally.
Apr 4, 00:46 UTC - Update: We are seeing recovery in Actions workflow creation and in accessing Actions statuses via the API.
Apr 4, 00:25 UTC - Update: Webhooks is experiencing degraded performance. We are continuing to investigate.
Apr 4, 00:12 UTC - Update: We are investigating Actions workflow failures and delays.
Apr 4, 00:06 UTC - Update: API Requests is experiencing degraded performance. We are continuing to investigate.
Apr 3, 23:59 UTC - Investigating: We are investigating reports of degraded performance for Actions.

Incident with Actions and Pages (resolved Mar 15, 2024)

Mar 15, 20:28 UTC - Resolved: This incident has the same root cause as the incident at https://www.githubstatus.com/incidents/9ym5p2sg6w5v; please follow the link to view the incident summary.
Mar 15, 20:27 UTC - Update: Actions is operating normally.
Mar 15, 20:09 UTC - Update: Pages is experiencing degraded performance. We are continuing to investigate.
Mar 15, 20:07 UTC - Investigating: We are investigating reports of degraded performance for Actions.

Incident with Codespaces and API Requests (resolved Mar 15, 2024)

Mar 15, 20:24 UTC - Resolved: On March 15, 2024, between 19:42 UTC and 20:24 UTC, several services were degraded due to a regression in calling the permissions system. New GitHub Codespaces could not be created, and neither could Codespaces sessions that required minting a new auth token. Actions saw delays and infrastructure failures due to its upstream dependency on fetching repository tokens for runs to execute successfully. GitHub Pages was affected through its dependency on Actions, with 1,266 page builds failing, which at the low point represented 33% of page builds; page edits were therefore not reflected on the impacted sites. We had deployed an application update that included a newer version of our database query builder. The new version uses newer MySQL syntax for upsert queries that is not supported by the database proxy service we use for some of our production-environment database clusters. This incompatibility affected the permissions cluster specifically, causing requests that attempted such queries to fail. We responded by rolling back the deployment, restoring the previous query form and mitigating the incident. We have identified and corrected a misconfiguration of the permissions cluster in our development and CI environments that will ensure queries there go through the proxy service, preventing future syntax additions from causing issues in production.
Mar 15, 20:20-20:21 UTC - Updates: Codespaces and API Requests are operating normally.
Mar 15, 20:17 UTC - Update: We rolled back the most recent deployment, are seeing improvements across all services, and will continue to monitor for additional impact.
Mar 15, 20:11 UTC - Update: API Requests is experiencing degraded performance. We are continuing to investigate.
Mar 15, 20:03 UTC - Updates: Codespaces and API Requests are experiencing degraded availability. We are continuing to investigate.
Mar 15, 20:00 UTC - Update: API Requests is experiencing degraded performance. We are continuing to investigate.
Mar 15, 19:55 UTC - Investigating: We are investigating reports of degraded performance for Codespaces.

Incident with Pull Requests (resolved Mar 13, 2024)

Mar 13, 01:58 UTC - Resolved: From March 12, 2024 23:39 UTC to March 13, 2024 01:58 UTC, some pull request updates were delayed and did not reflect the latest code that had been pushed. On average, 20% of pull request page loads were out of sync, and up to 30% of pull requests were impacted at peak. An internal component of our job queueing system was handling invalid messages incorrectly, resulting in stalled processing. We mitigated the incident by shipping a fix to handle the edge case gracefully and allow processing to continue; once the fix was deployed at 1:47 UTC, our systems fully caught up with pending background jobs by 1:58 UTC. We are working to improve resiliency to invalid messages in our system to prevent future delays to these pull request updates, and we are reviewing our monitoring and observability to identify and remediate these types of failures faster.
Mar 13, 01:58 UTC - Update: Pull Requests is operating normally.
Mar 13, 01:53 UTC - Update: We believe we've found a mitigation and are currently monitoring systems for recovery.
Mar 13, 01:18 UTC - Update: We're continuing to investigate delays in PR updates. Next update in 30 minutes.
Mar 13, 00:12-00:47 UTC - Updates: We're continuing to investigate an elevated number of pull requests that are out of sync on page load.
Mar 12, 23:39 UTC - Update: We're seeing an elevated number of pull requests that are out of sync on page load.
Mar 12, 23:39 UTC - Investigating: We are investigating reports of degraded performance for Pull Requests.
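
The failure mode in this incident, a single malformed message stalling an entire queue, is commonly avoided by catching per-message errors and parking bad messages in a dead-letter queue so the consumer keeps draining. A generic sketch of that pattern; the message shapes and queues are illustrative, not GitHub's job system:

```python
# Illustrative consumer loop: a malformed message is dead-lettered
# instead of stalling the queue. Not GitHub's actual implementation.
import json
from queue import Queue

def process(job: dict) -> None:
    print(f"synced pull request {job['pr']}")

def consume(work: Queue, dead_letters: Queue) -> None:
    while not work.empty():
        raw = work.get()
        try:
            job = json.loads(raw)              # may raise on invalid payloads
            process(job)                       # may raise on missing fields
        except (json.JSONDecodeError, KeyError) as err:
            dead_letters.put((raw, str(err)))  # park it; keep the queue moving
        finally:
            work.task_done()

work, dlq = Queue(), Queue()
for msg in ['{"pr": 101}', "not json at all", '{"pr": 102}']:
    work.put(msg)
consume(work, dlq)
print(f"dead-lettered: {dlq.qsize()} message(s)")
```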

Incident with API Requests, Git Operations, Webhooks and Copilot (resolved Mar 12, 2024)

Mar 12, 01:00 UTC - Resolved: On March 11, 2024, starting at 22:45 UTC and ending on March 12, 2024 at 00:48 UTC, various GitHub services were degraded and returned intermittent errors for users. During this incident, API error rates ran as high as 1%, Copilot error rates as high as 17%, and error rates for Secret Scanning and for 2FA using GitHub Mobile as high as 100%, dropping to 30% starting at 22:55 UTC. The elevated error rates were due to a degradation of our centralized authentication service, upon which many other services depend. The issue was caused by a deployment of network-related configuration that was inadvertently applied to the incorrect environment. The error was detected within 4 minutes and a rollback was initiated. While error rates began dropping quickly at 22:55 UTC, the rollback failed in one of our data centers, leading to a longer recovery time; at this point, many failed requests succeeded upon retrying. The rollback failure was due to an unrelated issue earlier in the day, in which the datastore for our configuration service was polluted in a way that required manual intervention; the bad data caused the rollback in that one data center to fail. Manually removing the incorrect data allowed the full rollback to complete at 00:48 UTC, restoring full access to services. We understand how the corrupt data was deployed and continue to investigate why that specific data caused the subsequent deployments to fail. We are working on measures to ensure the safety of this kind of configuration change, on faster detection via better monitoring of the related subsystems, and on improvements to the robustness of our underlying configuration system, including prevention and automatic cleanup of polluted records, so that we can recover from this kind of data issue automatically in the future.
Mar 12, 01:00 UTC - Update: We believe we've resolved the root cause and are waiting for services to recover.
Mar 12, 00:54-00:56 UTC - Updates: API Requests, Git Operations, Webhooks, and Copilot are operating normally.
Mar 12, 00:14 UTC - Update: We're continuing to investigate issues with our authentication service, impacting multiple services.
Mar 11, 23:55 UTC - Update: Webhooks is experiencing degraded performance. We are continuing to investigate.
Mar 11, 23:31 UTC - Update: Webhooks is operating normally.
Mar 11, 23:21 UTC - Update: Copilot is experiencing degraded performance. We are continuing to investigate.
Mar 11, 23:20 UTC - Update: Git Operations is experiencing degraded performance. We are continuing to investigate.
Mar 11, 23:09 UTC - Update: Webhooks is experiencing degraded performance. We are continuing to investigate.
Mar 11, 23:01 UTC - Investigating: We are investigating reports of degraded availability for API Requests, Git Operations and Webhooks.
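
The observation above that many failed requests succeeded upon retrying is the usual argument for wrapping API calls in retries with exponential backoff and jitter for transient 5xx errors. A generic sketch; the endpoint, retry budget, and the set of status codes treated as retryable are illustrative choices, not a GitHub-recommended policy:

```python
# Generic retry with exponential backoff and full jitter for transient
# HTTP errors. Endpoint and policy parameters are illustrative.
import random
import time
import urllib.error
import urllib.request

RETRYABLE = {500, 502, 503, 504}

def get_with_retries(url: str, attempts: int = 5) -> bytes:
    for attempt in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.read()
        except urllib.error.HTTPError as err:
            if err.code not in RETRYABLE or attempt == attempts - 1:
                raise                      # permanent error or budget exhausted
        except urllib.error.URLError:
            if attempt == attempts - 1:
                raise                      # network-level failure, budget exhausted
        # backoff: up to 1s, 2s, 4s, ... capped at 30s, with full jitter
        time.sleep(random.uniform(0, min(30, 2 ** attempt)))
    raise RuntimeError("unreachable")

print(get_with_retries("https://api.github.com/zen").decode())
```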

Incident with Actions (resolved Mar 11, 2024)

Mar 11, 19:22 UTC - Resolved: On March 11, 2024, between 18:44 UTC and 19:10 UTC, GitHub Actions performance was degraded and some users experienced errors when trying to queue workflows; approximately 3.7% of runs queued during this time were unable to start. The issue was partially caused by a deployment of an internal system Actions relies on to process workflow run events. Queue processing paused for about 3 minutes during this deployment, causing a spike in queued workflow runs. When the queue began to be processed, the high number of queued workflows overwhelmed a secret-initialization component of the workflow invocation system, and the errors generated by this overwhelmed component ultimately delayed workflow invocation. Our alerting system gave initial indications of an issue at approximately 18:44 UTC, but we did not see impact on our run start delay and run queuing availability metrics until approximately 18:52 UTC. As the large queue of workflow run events burned down, we saw recovery in our key customer impact measures by 19:11 UTC, but waited to declare the incident resolved until 19:22 UTC while verifying there was no further customer impact. We are working on measures to reduce spikes in queue build-up during deployments of our queueing system, and we have scaled up the workers that handle secret generation and storage during the workflow invocation process.
Mar 11, 19:21 UTC - Update: Actions experienced a period of decreased workflow run throughput, and we are seeing recovery now. We are investigating the cause.
Mar 11, 19:02 UTC - Investigating: We are investigating reports of degraded performance for Actions.

Incident with Copilot (resolved Mar 11, 2024)

Mar 11, 10:20 UTC - Resolved: On March 11, 2024, between 06:30 UTC and 11:45 UTC, the Copilot Chat service was degraded and customers may have encountered errors or timed-out requests for chat interactions. On average, the error rate was 10%, peaking at 45% of requests to the service for short periods. This was due to a gap in handling an edge case in messages returned from the underlying language models. We mitigated the incident by applying a fix to the handling of the streaming response. We are working to update monitoring to reduce time to detection and to increase resiliency to message format changes.
Mar 11, 10:02 UTC - Update: We are deploying mitigations for the failures we have been observing in some chat requests for Copilot. We will continue to monitor and update.
Mar 11, 09:03 UTC - Update: We are seeing an elevated failure rate for chat requests for Copilot. We are investigating and will continue to keep users updated on progress towards mitigation.
Mar 11, 08:14 UTC - Investigating: We are investigating reports of degraded performance for Copilot.

Incident with API Requests, Copilot, Git Operations, Actions and Pages (resolved Mar 1, 2024)

Mar 1, 17:42 UTC - Resolved: On March 1, 2024, between 17:00 UTC and 17:42 UTC, we saw elevated failure rates (from 1% to 10%) for Copilot, Actions, Pages, and Git across various APIs. The incident was triggered by a newly discovered failure mode in the deployment pipeline for one of our compute clusters when it could not write a specific configuration file. This caused a drop in the resources available in that cluster, which was mitigated by a redeployment. We have addressed the specific scenario to ensure resources are properly written and retrieved, and we have added safeguards so the deployment does not proceed when an issue of this type occurs. We are also reviewing our systems to route traffic toward healthy clusters more effectively during an outage, and adding more safeguards on cluster resource adjustments.
Mar 1, 17:42 UTC - Update: Git Operations is operating normally.
Mar 1, 17:41 UTC - Update: Actions and Pages are operating normally.
Mar 1, 17:36 UTC - Update: Copilot is operating normally.
Mar 1, 17:34 UTC - Updates: Pages is experiencing degraded performance; one of our clusters is experiencing problems, and we are working on restoring it.
Mar 1, 17:30 UTC - Investigating: We are investigating reports of degraded performance for API Requests, Copilot, Git Operations and Actions.
- "漢字路" 한글한자자동변환 서비스는 교육부 고전문헌국역지원사업의 지원으로 구축되었습니다.
- "漢字路" 한글한자자동변환 서비스는 전통문화연구회 "울산대학교한국어처리연구실 옥철영(IT융합전공)교수팀"에서 개발한 한글한자자동변환기를 바탕하여 지속적으로 공동 연구 개발하고 있는 서비스입니다.
- 현재 고유명사(인명, 지명등)을 비롯한 여러 변환오류가 있으며 이를 해결하고자 많은 연구 개발을 진행하고자 하고 있습니다. 이를 인지하시고 다른 곳에서 인용시 한자 변환 결과를 한번 더 검토하시고 사용해 주시기 바랍니다.
- 변환오류 및 건의,문의사항은 juntong@juntong.or.kr로 메일로 보내주시면 감사하겠습니다. .
Copyright ⓒ 2020 By '전통문화연구회(傳統文化硏究會)' All Rights reserved.
 한국   대만   중국   일본