Introduction
Datacenter switchovers are a standard response to certain types of situations (web search), where traffic is shifted from one site to another. Technology organisations regularly practice them to ensure that tooling and hardware will respond appropriately in case of an emergency. Moreover, switching between datacenters makes room for potentially disruptive maintenance work on inactive servers, such as database upgrades/changes, hardware replacement, etc. In other words, while we serve traffic from the active datacentre, we do our regular upkeep work on the inactive one to maintain its efficiency and reliability.
What?
At Wikimedia, a datacentre switchover means switching over different components between our two main datacentres: eqiad and codfw.
When?
We perform two datacenter switchovers annually, during the week of the solar equinox:
- Northward: ~21st March
- Southward: ~21st September
Our switchover process is broken down into stages: some can progress independently, while others must proceed in lockstep. This page documents all the steps needed for this work, broken down by component. SRE/Service_Operations drives the process and maintains the software necessary to run the switchover, with a little help from their friends.
Impact
The impact of a switchover is expected to be 2-3 minutes of read-only time for MediaWiki, including extensions. Any other services/features/infrastructure not participating directly in the Switchover will continue to work as normal. However, anything relying on MediaWiki indirectly (e.g. via some data pipeline) may experience minor impact, for example a delay in receiving events. This is expected.
What does read-only mean?
Read-only is a two-step process: we first set MediaWiki itself read-only, and then the MediaWiki databases. We allow some time between the two steps so that the last in-flight edits can land safely. All read-only functionality will continue to function as usual.
During read-only, any writes reaching our MediaWiki databases (UPDATE, DELETE, INSERT in SQL terms) will be denied. Additionally, any features that ignore the global MediaWiki read-only configuration will not function during this time window. This scheduled read-only period adds about 0.001% MediaWiki edit unavailability per year.
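As a rough sanity check on that figure, assuming two yearly switchovers with a read-only window of about 3 minutes each: 2 × 3 min = 6 min of edit unavailability out of 525,600 minutes in a year, i.e. 6 / 525,600 ≈ 0.001%.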
Notes:
Non-MediaWiki databases are not part of the switchover.
High Level switchover flow
Scheduling details
Datacenter Switchovers take place during the work week of a Solar Equinox, where we assume that the Northward Solar Equinox happens on March 21st and the Southward Solar Equinox on September 21st. This intentionally does not match the exact date of the astronomical event.
A controlled switchover occurs in a span of 8 days:
Day 1 - Tuesday: Traffic+Services
Non read-only parts of the Switchover always take place on Tuesday. This process is non-disruptive and lower risk; it may be scheduled at 14:00 UTC, but that is not necessary.
- Traffic: Disable caching in the origin datacenter - Switch_Datacenter#Traffic
- ~20 minutes to disable caching completely in the origin (dc_from) datacentre
- Services: Depool services in the origin datacenter in favour of the destination - Switch_Datacenter#Services
- ~15-40 minutes to switch services over to the destination (dc_to)
- Leave active/active services pooled only in the destination (dc_to)
- Switch active/passive services over from the origin (dc_from) to the destination (dc_to)
Day 2 - Wednesday: MediaWiki
The MediaWiki Switchover (read-only) will always take place on the Wednesday of the above-mentioned week. During read-only (2-3 minutes), no wikis will be editable and editors will see a warning message asking them to try again later.
Read-only starts @ 14:00 UTC. Readers should experience no changes for the entirety of the event.
- Switch MediaWiki itself to the destination datacentre - Switch_Datacenter#MediaWiki
- ~35 minutes for a complete run of the cookbook, from disabling puppet to re-enabling it, if timed so that the read-only part of the cookbook falls at the start of the announced window. In an emergency it can be done faster, since there is no need to wait for a set time.
Note: For the next 7 calendar days after the MW read-only phase, traffic will flow solely to one datacentre (the destination), rendering the other datacenter effectively inactive.
Day 3 - Thursday: Deployment Server + Special cases
At your convenience, after coordinating with deployers, you may switch the deployment server and the special cases listed below.
Day 8 - Wednesday: Pool back inactive DC
A week later, we activate caching and services in the inactive/secondary datacenter again. With traffic flowing to both DCs, we are back in normal Multi-DC mode.
This period may be extended, depending on how maintenance work progresses in the inactive DC.
Note:
As of September 2023, we run each datacenter as primary for half of the year. The two datacentres are considered coequal, alternating roles every 6 months.
Weeks in advance: communication, testing, and preparation
Communication - 10 weeks before
See Switch_Datacenter/Coordination; coordinate dates and the communication plan with the involved groups.
Testing - 3 weeks before
Run a "live test" of the MediaWiki cookbook, and a dry-run for everything.
Depending on what changes have occurred to our infrastructure/production from the previous switchover, code changes in cookbooks are expected. The purpose of the live-test and the dry-run is to test most of the existing and updated codepaths, and identify potential issues there.
Note: Always use the --dry-run flag when running cookbooks for testing purposes.
Live Test
The live test flag (--live-test) skips actions that could harm the primary DC, or performs them on the secondary DC instead, and is available only for the sre.switchdc.mediawiki cookbook. What we should be careful about is that we "switch" from the currently secondary DC to the currently primary DC. While the live-test process will log your actions to SAL, please remember to also announce in #wikimedia-sre and #wikimedia-operations that you will be running this test. Unless something goes really badly, this is a non-disruptive test.
For example, if our current primary DC is codfw and for the upcoming switchover we will be switching to eqiad, the direction for a live test is eqiad→codfw:
cumin1002:~# cookbook sre.switchdc.mediawiki --live-test eqiad codfw
<entering cookbook menu>
> 00-disable-puppet
> 00-reduce-ttl
Limitations:
The 03-set-db-readonly step will fail if circular replication is not already enabled everywhere. It can be skipped if the live test is run before circular replication is enabled. Please check with Data Persistence whether you need to run this test or not.
Dry Run
A dry-run is available for both cookbooks we use during a switchover: sre.switchdc.mediawiki and sre.discovery.datacenter. During a dry-run, the direction is the one we have announced.
For example, if we are currently on codfw, switching over to eqiad, the dry-run's direction would be codfw→eqiad, as follows:
cumin1002:~# cookbook --dry-run sre.switchdc.mediawiki codfw eqiad
<entering cookbook menu>
> 00-disable-puppet
> 00-reduce-ttl
cumin1002:~# cookbook --dry-run sre.discovery.datacenter depool codfw \
--all --reason "Datacenter services switchover dry-run" \
--task-id T357547
Preparation - a few days before
Data Persistence checklist:
- There is no ongoing long-running maintenance that affects database availability or lag (schema changes, upgrades, hardware issues, etc.)
- Replication is flowing from eqiad -> codfw and from codfw -> eqiad
- All database servers have their buffer pools filled. This is taken care of automatically by the automatic buffer pool warmup functionality. As a sanity check, some sample load can be sent to the MediaWiki application servers in the inactive datacenter to verify that requests complete as quickly as in the active datacenter.
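For that sanity check, a minimal sketch of the kind of probe that can be used, assuming mw1414.eqiad.wmnet is an appserver in the inactive datacenter (the hostname here is purely illustrative):
# Time a few identical requests against an appserver in the inactive DC;
# once buffer pools are warm, times should be comparable to the active DC.
# (-k skips certificate verification, since the internal cert will not match the overridden Host header.)
$ for i in 1 2 3; do curl -sk -o /dev/null -w "%{time_total}s\n" -H "Host: en.wikipedia.org" https://mw1414.eqiad.wmnet/wiki/Special:BlankPage; done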
Service Operations checklist:
- Check capacity in the destination datacentre. More specifically, ensure that MediaWiki deployments run at least the same number of pods in both datacentres.
- Prepare all patches
Per-service switchover instructions
Traffic
General procedure
See: Global traffic routing.
Day 1: Depool source datacentre
GeoDNS (User-facing) Routing:
- #1 Depool traffic from the source DC (DNS): C+2 and Submit
dns1004:~# authdns-update
- This will propagate the change to all nameservers
- Log to SAL:
!log Traffic: depool eqiad from user traffic
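For illustration, the DNS change itself is a small edit in the operations/dns repository; a sketch, assuming the usual gdnsd admin_state syntax for the user-facing GeoDNS map:
# admin_state: mark eqiad as DOWN in the generic map so GeoDNS stops routing users there
geoip/generic-map/eqiad => DOWN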
Day 8: Switch to Multi-DC again
Same procedure as above, after reverting the relevant commit.
Dashboards
Services
General procedure
For a global switchover we use the sre.discovery.datacenter cookbook to depool all services from a DC:
- active/active services in DNS discovery will be depooled from said DC
- active/passive ones will be switched over to the alternative DC, per user input
However, there are a few services we completely exclude from this process. These are hardcoded in the sre.discovery.datacenter cookbook.
What the cookbook does is:
- Reduce the TTL of the DNS discovery records to 10 seconds
- Depool the datacenter we're moving away from in confctl / discovery
- Restore the original TTL
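For reference, a rough per-service sketch of what those steps amount to when done by hand (the service name parsoid is only an example; the cookbook remains the supported path):
# 1. Check where a discovery record is currently pooled:
$ confctl --object-type discovery select 'dnsdisc=parsoid' get
# 2. Depool one datacenter for that record:
$ confctl --object-type discovery select 'dnsdisc=parsoid,name=eqiad' set/pooled=false
# 3. Verify what the record resolves to now:
$ dig +short parsoid.discovery.wmnet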
Day 1: Depooling source DC
Before depooling any service, do not forget to review (and copy/paste) the current status of all services by running:
cookbook sre.discovery.datacenter status all
The following command will depool all active/active services from a DC, and will prompt to move or skip the active/passive ones.
# Switch all services to eqiad
$ sudo cookbook sre.discovery.datacenter depool codfw --all --reason "Datacenter Switchover" --task-id T12345
Day 8: Switch to Multi-DC again
The following command will repool all active/active services to a DC, and will prompt to move or skip the active/passive ones.
# Repool codfw
$ sudo cookbook sre.discovery.datacenter pool codfw --reason "Datacenter switch to Multi-DC" --task-id T12345
MediaWiki
We divide the process into logical phases that should be executed sequentially. Within any phase, top-level tasks can be executed in parallel with each other, while subtasks are to be executed sequentially. The phase number is referenced in the names of the tasks in the operations/cookbooks repository, under the cookbooks/sre/switchdc/mediawiki/ path.
Day 2: MediaWiki Switchover
Audible indicator: Put Listen to Wikipedia on in the background during the switchover. Silence indicates read-only; when it starts making sounds again, edits are back up.
Execution tip: The best way to run this multi-step cookbook is to start it in interactive mode from the cookbook root:
sudo cookbook sre.switchdc.mediawiki --ro-reason 'DC switchover (TXXXXXX)' codfw eqiad
and proceed through the steps. Start the following steps about 30-60 minutes before the scheduled switchover time, in a tmux or a screen.
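For example, a minimal way to set that up on the cumin host (the session name is arbitrary):
# Start a named tmux session so the cookbook survives a dropped SSH connection;
# reattach later with: tmux attach -t switchdc
$ tmux new-session -s switchdc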
Phase 0 - preparation
- Manual StatusPage: Add a scheduled maintenance (Maintenances -> Schedule Maintenance)
- Manual scap lock: Add a scap lock in a separate tmux/screen on the deployment server. This will block any scap deployments, and it will stay there waiting for your input to unlock it.
scap lock --all "Datacenter Switchover - T12345"
00-disable-puppet: Disables puppet on maintenance hosts in both eqiad and codfw
00-reduce-ttl: Reduces the TTL of various DNS discovery records. Make sure that at least 5 minutes (the old TTL) have passed before moving to Phase 1. The cookbook should force you to wait anyway.
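One way to confirm the lowered TTL from any host is to query a discovery record directly; the record name below is just an example:
# The second field of the answer is the remaining TTL; it should be at most 10 once the reduction is live.
$ dig +nocmd +noall +answer appservers-rw.discovery.wmnet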
- (Optional-Skip) 00-warmup-caches: Warms up APC by running the mediawiki-cache-warmup against the new site's clusters. The warmup queries repeat automatically until response times stabilize:
- The global "urls-cluster" warmup against the appservers cluster
- The "urls-server" warmup against all hosts in the appservers cluster
- The "urls-server" warmup against all hosts in the api-appservers cluster
00-downtime-db-readonly-checks: Sets downtime for the read-only checks on the mariadb masters changed in Phase 3, so that they don't page.
Stop for GO/NOGO: Ask your peers for a Go or NoGo.
Phase 1 - stop maintenance
01-stop-maintenance: Stops maintenance jobs and kills all the periodic jobs (systemd timers) on the maintenance hosts in both datacenters. Keep in mind there is a chance that a manual job is still running. Check again with your peers; usually the way forward is to kill the job by force.
Final GO/NOGO before read-only: Check what time it is. This is the point of no return.
The following steps, up to Phase 7, need to be executed in quick succession to minimise read-only time.
Phase 2 - read-only mode
02-set-readonly: Sets read-only mode by changing the ReadOnly conftool value.
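To observe the flip, the value can be read back with conftool; a sketch, assuming the mwconfig object type is where this setting lives:
# Inspect the current MediaWiki ReadOnly conftool value:
$ confctl --object-type mwconfig select 'name=ReadOnly' get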
Phase 3 - lock down database masters
03-set-db-readonly: Puts the origin DC's (DC_FROM) core DB masters (shards: s1-s8, x1, es4-es5) in read-only mode and waits for the destination DC's (DC_TO) databases to catch up with replication.
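Replication catch-up can also be eyeballed directly on a destination-DC master; a generic MariaDB sketch (host selection and access method left out):
# Seconds_Behind_Master should reach 0 before the switch proceeds.
$ sudo mysql -e "SHOW SLAVE STATUS\G" | grep -E 'Seconds_Behind_Master|Slave_(IO|SQL)_Running'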
Phase 4 - switch active datacenter configuration
After this step, DNS will be changed for the source DC and internal applications (except MediaWiki) will start hitting the new DC.
Phase 5 - DEPRECATED - Invert Redis replication for MediaWiki sessions
Phase 6 - Set new site's databases to read-write
06-set-db-readwrite: Sets the destination DC's core DB masters (shards: s1-s8, x1, es4-es5) to read-write mode.
Phase 7 - Set MediaWiki to read-write
07-set-readwrite: Goes back to read-write mode by changing the ReadOnly conftool value.
You are now out of read-only mode.
Phase 8 - Restore rest of MediaWiki
08-restart-envoy-on-jobrunners: Restarts pods on the (now) inactive jobrunners, triggering changeprop to re-resolve the DNS name and connect to the destination DC
- A steady rate of 500s is expected until this step is completed, as changeprop may still be sending edits to the source DC, whose database master will reject them.
- 08-start-maintenance: Starts maintenance in the destination DC
- Runs puppet on the maintenance hosts, which reactivates the systemd timers in the destination DC (a quick verification sketch follows this list)
- Most Wikidata-editing bots will restart once this is done and the "dispatch lag" has recovered. This should bring us back to 100% of editing traffic.
- Manual StatusPage: End the planned maintenance
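As referenced above, a quick way to verify that the periodic jobs actually came back, assuming the timers follow the usual mediawiki_job_* naming (run on the destination-DC maintenance host):
# Timers should be listed with upcoming activation times.
$ systemctl list-timers 'mediawiki_job_*' | head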
Phase 9 - Post read-only
- 09-restore-ttl: Sets the TTL of the DNS records back to 300 seconds
- Manual #2 Update DNS records for master DBs: merge and run authdns-update
- Use the following for the SAL log:
!log Phase 9: Update DNS records for new database masters
- 09-run-puppet-on-db-masters: Runs puppet on the database masters in both DCs, to update the expected read-only state
- This also removes the downtimes set in Phase 0
- Manual CentralNotice banner: Ensure the banner informing users of read-only is removed. There is some minor HTTP caching (~5 min) involved here too.
- Manual Scap lock: Go back to the terminal where you added the lock and press Enter
- Manual #3 Update DNS records for maintenance host: merge and run authdns-update
- Manual #4 geo-maps: set default datacentre: merge and run authdns-update
- This default only affects a small portion of traffic, so this is mostly about logical consistency (when we have no idea where to route a request, we prefer the primary DC).
- Manual #5 debug.json: List primary DC servers first: Re-order noc.wm.o's debug.json to have the primary DC's servers listed first, see T289745. Run scap backport to deploy.
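To verify the new ordering after the deploy, assuming debug.json is served from noc at the path below:
# The first servers listed should now be in the primary DC.
$ curl -s https://noc.wikimedia.org/conf/debug.json | head -n 20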
Phase 10 - verification and troubleshooting
- Manual Reading and Editing: Ensure they work! :)
- Manual Recent Changes: Ensure recent changes are flowing
- Manual Email: Ensure email works via a test email. The queue ages reported by the following command should fluctuate between 0m and a few minutes:
mx1001:~$ sudo exim4 -bp | exiqsumm | tail -n 5
Dashboards
ElasticSearch
General context on how to switch over
CirrusSearch talks by default to the local datacenter ($wmgDatacenter). No special actions are required when disabling a datacenter.
Manually switching CirrusSearch to a specific datacenter is always possible. Point CirrusSearch to codfw by editing wgCirrusSearchDefaultCluster in ext-CirrusSearch.php.
To ensure coherence in case of lost updates, a reindex of the pages modified during the switch can be done by following Recovering from an Elasticsearch outage / interruption in updates.
Dashboards
Special cases
Exclusions
Exclusions have been implemented in the Switchover cookbook. The next section is kept for historical and informational purposes; while it will probably not be needed, it is still useful information to have around.
Before exclusion support was implemented, excluding services required the old sre.switchdc.services cookbook:
# Switch all services to codfw, excluding parsoid and cxserver
$ sudo cookbook sre.switchdc.services --exclude parsoid cxserver -- eqiad codfw
Single service
If you are switching only one service, using the old sre.switchdc.services cookbook is still necessary:
# Switch the service "parsoid" to codfw-only
$
sudo
cookbook
sre.switchdc.services
--services
parsoid
--
eqiad
codfw
apt
During the March 2023 Switchover, we identified issues with switching over apt.wikimedia.org. As of the September 2023 Switchover, those have not been solved yet and apt.wikimedia.org will not participate in the Switchover.
apt.wikimedia.org needs a puppet change.
restbase-async
As of September 2023, this is no longer needed. We leave restbase-async pooled in both DCs from now on. This is kept in the doc for historical purposes.
Restbase-async is a bit of a special case, being pooled active/passive with the active side in the secondary datacenter. As such, it needs an additional step if we're just switching active traffic over and not simulating a complete failover:
- Pool restbase-async everywhere:
sudo cookbook sre.discovery.service-route --reason T123456 pool --wipe-cache $dc_from restbase-async
sudo cookbook sre.discovery.service-route --reason T123456 pool --wipe-cache $dc_to restbase-async
- Depool restbase-async in the newly active DC, so that async traffic is separated from real-user traffic as much as possible:
sudo cookbook sre.discovery.service-route --reason T123456 depool --wipe-cache $dc_to restbase-async
When simulating a complete failover, keep restbase pooled in $dc_to for as long as possible to test capacity, then switch it to $dc_from by using the above procedure.
As it is async, we trade the added latency from running it in the secondary datacenter for the lightened load on the primary datacenter's appservers.
Manual switch
These services require manual changes to be switched over and have not yet been included in service::catalog:
- planet.wikimedia.org
- The DNS discovery name planet.discovery.wmnet needs to be switched from one backend to another as in example change gerrit:891369. No other change is needed.
- people.wikimedia.org
- In puppet hieradata the rsync_src and rsync_dst hosts need to be flipped as in example change gerrit:891382.
- FIXME: a manual rsync command has to be run
- The DNS discovery name peopleweb.discovery.wmnet needs to be switched from one backend to another as in example change gerrit:891381.
noc.wikimedia.org
This is no longer applicable as of September 2023; noc.wikimedia.org is now active/active on mw-on-k8s.
The noc.wikimedia.org DNS name points to the DNS discovery name mwmaint.discovery.wmnet, which needs to be switched from one backend to another as in example change gerrit:896118. No other change is needed.
Dashboards
Databases
Main document:
MariaDB/Switch Datacenter
Other miscellaneous
Predictable, Recurring Switchovers
A few months after the Switchback of 2023, and following a feedback-gathering process, a proposal to move to a predictable set of dates while also increasing the Switchover duration to 6 months was adopted and turned into a process. The document can be found at the link below:
Recurring, Equinox-based, Data Center Switchovers
Upcoming Switches
See Switch Datacenter/Switchover Dates for a pre-calculated list up to 2050.
Past Switches
2024 switches
March
- Services + Traffic: Tuesday, March 19th, 2024 14:00 UTC
- MediaWiki: Wednesday, March 20th, 2024 14:00 UTC
- Read only: 3 minutes 8 seconds
2023 switches
- September
- February
Reports
- Recap
- Read only: 1 minute 59 seconds
Switching back:
Schedule
Reports
- Read only: 3 minutes 1 second
2021 switches
- Schedule
- Reports
Switching back:
- Reports
2020 switches
- Schedule
- Services: Monday, August 31st, 2020 14:00 UTC
- Traffic: Monday, August 31st, 2020 15:00 UTC
- MediaWiki: Tuesday, September 1st, 2020 14:00 UTC
- Reports
Switching back:
- Traffic: Thursday, September 17th, 2020 17:00 UTC
- MediaWiki: Tuesday, October 27th, 2020 14:00 UTC
- Services: Wednesday, October 28th, 2020 14:00 UTC
2018 switches
- Schedule
- Services: Tuesday, September 11th 2018 14:30 UTC
- Media storage/Swift: Tuesday, September 11th 2018 15:00 UTC
- Traffic: Tuesday, September 11th 2018 19:00 UTC
- MediaWiki: Wednesday, September 12th 2018: 14:00 UTC
- Reports
Switching back:
- Schedule
- Traffic: Wednesday, October 10th 2018 09:00 UTC
- MediaWiki: Wednesday, October 10th 2018: 14:00 UTC
- Services: Thursday, October 11th 2018 14:30 UTC
- Media storage/Swift: Thursday, October 11th 2018 15:00 UTC
- Reports
2017 switches
- Schedule
- Elasticsearch: elasticsearch is automatically following mediawiki switch
- Services: Tuesday, April 18th 2017 14:30 UTC
- Media storage/Swift: Tuesday, April 18th 2017 15:00 UTC
- Traffic: Tuesday, April 18th 2017 19:00 UTC
- MediaWiki: Wednesday, April 19th 2017
14:00 UTC
(user visible, requires read-only mode)
- Deployment server: Wednesday, April 19th 2017 16:00 UTC
- Reports
Switching back:
- Schedule
- Traffic: Pre-switchback in two phases: Mon May 1 and Tue May 2 (to avoid cold-cache issues Weds)
- MediaWiki: Wednesday, May 3rd 2017
14:00 UTC
(user visible, requires read-only mode)
- Elasticsearch: elasticsearch is automatically following mediawiki switch
- Services: Thursday, May 4th 2017 14:30 UTC
- Swift: Thursday, May 4th 2017 15:30 UTC
- Deployment server: Thursday, May 4th 2017 16:00 UTC
- Reports
2016 switches
- Schedule
- Deployment server: Wednesday, January 20th 2016
- Traffic: Thursday, March 10th 2016
- MediaWiki 5-minute read-only test: Tuesday, March 15th 2016, 07:00 UTC
- Elasticsearch: Thursday, April 7th 2016, 12:00 UTC
- Media storage/Swift: Thursday, April 14th 2016, 17:00 UTC
- Services: Monday, April 18th 2016, 10:00 UTC
- MediaWiki: Tuesday, April 19th 2016, 14:00 UTC / 07:00 PDT / 16:00 CEST (requires read-only mode)
- Reports
Switching back:
- MediaWiki: Thursday, April 21st 2016, 14:00 UTC / 07:00 PDT / 16:00 CEST (requires read-only mode)
- Services, Elasticsearch, Traffic, Swift, Deployment server: Thursday, April 21st 2016, after the above is done
Monitoring Dashboards
Aggregated list of interesting dashboards