Multi-datacenter Replication


In a previous lesson, we learned how to deal with node failures and recovery from downtime. Now, at Division 3, we must prepare for disasters by expanding our ScyllaDB cluster across geographic regions in a multi-datacenter configuration.

In this lesson, we will set up a new ScyllaDB cluster in another datacenter and learn how to convert existing keyspaces to be stored in both datacenters. Before we get started, we will first need to understand what a multi-datacenter ScyllaDB cluster actually is.

In a multi-datacenter cluster, the nodes in each datacenter communicate with one another over a public or private network, and data is replicated asynchronously between the datacenters according to how each keyspace is defined. For example, say we have a keyspace with a replication setting of 'DC1': 3. This means the data is stored only in the first datacenter, on three different nodes. If we change the replication setting to 'DC1': 3, 'DC2': 3, ScyllaDB will store three copies of the data in each datacenter, as shown in the example below. Now that we understand the basics of multi-datacenter replication, let's spin up our ScyllaDB cluster and get started. More details about this procedure can be found here. In our case, we can skip some of the steps, as they are already pre-configured.
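To make the difference concrete, here is what the two replication settings look like in CQL. The keyspace name example_single_dc is purely illustrative; the actual keyspaces for this lesson are created in the next section.

CREATE KEYSPACE example_single_dc WITH REPLICATION = { 'class' : 'NetworkTopologyStrategy', 'DC1' : 3 };  -- three replicas, stored in DC1 only
ALTER KEYSPACE example_single_dc WITH REPLICATION = { 'class' : 'NetworkTopologyStrategy', 'DC1' : 3, 'DC2' : 3 };  -- three replicas in each datacenter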

System Setup

In this setup, we will create a ScyllaDB cluster with two keyspaces, create a table in each, and populate them with some data. Follow this procedure to remove previous clusters and set up a new ScyllaDB cluster. Once the cluster is up, we'll create the keyspaces and populate them with data.
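If you are starting from a clean environment, bringing up the first datacenter typically looks like the following. This is a sketch that assumes the default docker-compose.yml in the scylla-code-samples/mms directory, which is also used later in this lesson:

cd scylla-code-samples/mms
docker-compose up -d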

The first task is to create the keyspace for the catalog.

docker exec -it scylla-node1 cqlsh
CREATE KEYSPACE catalog WITH REPLICATION = { 'class' : 'NetworkTopologyStrategy','DC1' : 3};

Now that the keyspace is created, it is time to create the table.

use catalog;

CREATE TABLE mutant_data (
   first_name text,
   last_name text,
   address text,
   picture_location text,
   PRIMARY KEY((first_name, last_name)));

Now, let’s add a few mutants to the catalog with the following statements:

INSERT INTO mutant_data ("first_name","last_name","address","picture_location") VALUES ('Bob','Loblaw','1313 Mockingbird Lane', 'http://www.facebook.com/bobloblaw');
INSERT INTO mutant_data ("first_name","last_name","address","picture_location") VALUES ('Bob','Zemuda','1202 Coffman Lane', 'http://www.facebook.com/bzemuda');
INSERT INTO mutant_data ("first_name","last_name","address","picture_location") VALUES ('Jim','Jeffries','1211 Hollywood Lane', 'http://www.facebook.com/jeffries');
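To confirm the rows were written, you can optionally query the table. Because the partition key is (first_name, last_name), a lookup for a single mutant must supply both columns; these checks are not part of the original procedure:

select * from mutant_data;
select * from mutant_data where first_name = 'Bob' and last_name = 'Loblaw';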

Next, create the Tracking keyspace:

CREATE KEYSPACE tracking WITH REPLICATION = { 'class' : 'NetworkTopologyStrategy','DC1' : 3};

Now, we can create the tracking table:

use tracking;
CREATE TABLE tracking_data (
       first_name text,
       last_name text,
       timestamp timestamp,
       location varchar,
       speed double,
       heat double,
       telepathy_powers int,
       PRIMARY KEY((first_name, last_name), timestamp))
       WITH CLUSTERING ORDER BY (timestamp DESC)
       AND default_time_to_live = 72000 
       AND gc_grace_seconds = 0
       AND COMPACTION = {'class': 'TimeWindowCompactionStrategy',
                                  'compaction_window_unit' : 'HOURS',
                                  'compaction_window_size' : 1}; 

Now that the table is created, let’s insert data into the tracking_data table:

INSERT INTO tracking.tracking_data ("first_name","last_name","timestamp","location","speed","heat","telepathy_powers") VALUES ('Jim','Jeffries','2017-11-11 08:05+0000','New York',1.0,3.0,17) ;
INSERT INTO tracking.tracking_data ("first_name","last_name","timestamp","location","speed","heat","telepathy_powers") VALUES ('Jim','Jeffries','2017-11-11 09:05+0000','New York',2.0,4.0,27) ;
INSERT INTO tracking.tracking_data ("first_name","last_name","timestamp","location","speed","heat","telepathy_powers") VALUES ('Jim','Jeffries','2017-11-11 10:05+0000','New York',3.0,5.0,37) ;
INSERT INTO tracking.tracking_data ("first_name","last_name","timestamp","location","speed","heat","telepathy_powers") VALUES ('Jim','Jeffries','2017-11-11 10:22+0000','New York',4.0,12.0,47) ;
INSERT INTO tracking.tracking_data ("first_name","last_name","timestamp","location","speed","heat","telepathy_powers") VALUES ('Jim','Jeffries','2017-11-11 11:05+0000','New York',4.0,9.0,87) ;
INSERT INTO tracking.tracking_data ("first_name","last_name","timestamp","location","speed","heat","telepathy_powers") VALUES ('Jim','Jeffries','2017-11-11 12:05+0000','New York',4.0,24.0,57) ;
INSERT INTO tracking.tracking_data ("first_name","last_name","timestamp","location","speed","heat","telepathy_powers") VALUES ('Bob','Loblaw','2017-11-11 08:05+0000','Cincinnati',2.0,6.0,5) ;
INSERT INTO tracking.tracking_data ("first_name","last_name","timestamp","location","speed","heat","telepathy_powers") VALUES ('Bob','Loblaw','2017-11-11 09:05+0000','Cincinnati',4.0,1.0,10) ;
INSERT INTO tracking.tracking_data ("first_name","last_name","timestamp","location","speed","heat","telepathy_powers") VALUES ('Bob','Loblaw','2017-11-11 10:05+0000','Cincinnati',6.0,1.0,15) ;
INSERT INTO tracking.tracking_data ("first_name","last_name","timestamp","location","speed","heat","telepathy_powers") VALUES ('Bob','Loblaw','2017-11-11 10:22+0000','Cincinnati',8.0,3.0,6) ;
INSERT INTO tracking.tracking_data ("first_name","last_name","timestamp","location","speed","heat","telepathy_powers") VALUES ('Bob','Loblaw','2017-11-11 11:05+0000','Cincinnati',10.0,2.0,3) ;
INSERT INTO tracking.tracking_data ("first_name","last_name","timestamp","location","speed","heat","telepathy_powers") VALUES ('Bob','Loblaw','2017-11-11 12:05+0000','Cincinnati',12.0,10.0,60) ;
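Because the table is clustered by timestamp in descending order, the most recent reading for a mutant comes back first. You can also check the remaining time-to-live on a value with the TTL() function. These queries are optional checks and not part of the original procedure:

select * from tracking.tracking_data where first_name = 'Jim' and last_name = 'Jeffries' limit 1;
select "timestamp", heat, TTL(heat) from tracking.tracking_data where first_name = 'Bob' and last_name = 'Loblaw';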

Bringing up the Second Datacenter

The second datacenter will be referred to as DC2 throughout this lesson.

If you are still in the cqlsh, exit:

exit

Change to the mms directory (if it’s not your working dir already):

cd scylla-code-samples/mms

To bring up the second datacenter, run the docker-compose utility and reference the docker-compose-dc2.yml file:

docker-compose -f docker-compose-dc2.yml up -d
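The new nodes take some time to bootstrap and join the cluster. If you want to watch their progress, you can optionally follow one of the new containers' logs (scylla-node4 is one of the DC2 container names used below):

docker logs -f scylla-node4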

After about 60 seconds, you should be able to see DC1 and DC2 when running the “nodetool status” command:

docker exec -it scylla-node1 nodetool status

Configuring the keyspaces for Multi-DC

Now that both datacenters are up and running, we can convert the existing keyspaces to be replicated to both of them, so that if a mutant attacks the first datacenter at Division 3, we will not lose data. The cqlsh utility allows us to make these changes with the "ALTER KEYSPACE" statement. Let's get started by logging in to the first node and converting our existing keyspaces, catalog and tracking.

docker exec -it scylla-node1 cqlsh
ALTER KEYSPACE catalog WITH REPLICATION = {'class': 'NetworkTopologyStrategy', 'DC1':3, 'DC2':3};
ALTER KEYSPACE tracking WITH REPLICATION = {'class': 'NetworkTopologyStrategy', 'DC1':3, 'DC2':3};

Additionally, we need to convert the system keyspaces:

ALTER KEYSPACE system_auth WITH replication = { 'class' : 'NetworkTopologyStrategy', 'DC1' : 3, 'DC2' : 3};
ALTER KEYSPACE system_distributed WITH replication = { 'class' : 'NetworkTopologyStrategy', 'DC1' : 3, 'DC2' : 3};
ALTER KEYSPACE system_traces WITH replication = { 'class' : 'NetworkTopologyStrategy', 'DC1' : 3, 'DC2' : 3};
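You can confirm that the new settings took effect by describing one of the keyspaces; the replication map in the output should now list both DC1 and DC2:

describe keyspace catalog;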

The keyspaces are now ready for multi-datacenter use, but we are not done yet: we still need to make sure the existing data is consistent and in sync across both datacenters. We can accomplish this by running the "nodetool rebuild" command on each node in the second datacenter, as shown below. This command rebuilds a node's data by streaming it from the specified source datacenter (DC1 in this case).

exit
docker exec -it scylla-node4 nodetool rebuild -- DC1
docker exec -it scylla-node5 nodetool rebuild -- DC1
docker exec -it scylla-node6 nodetool rebuild -- DC1
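A rebuild can take a while on a larger dataset. While it runs, you can optionally check streaming progress on one of the DC2 nodes with nodetool netstats; this step is not required:

docker exec -it scylla-node4 nodetool netstats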

After the rebuild commands complete, the data from the first datacenter will have been streamed to the second; the commands themselves produce no output. Let's make sure that our keyspaces are accessible from the second datacenter with the following commands:

docker exec -it scylla-node4 cqlsh
describe catalog;

describe tracking;

Great, we can see both keyspaces are present. We should also query the data to make sure that it is there:

select * from catalog.mutant_data;

select * from tracking.tracking_data;

Now that the data exists in both datacenters, we can begin to do failure testing.

Disaster time

In this scenario, we will simulate the loss of our first datacenter and pretend that rogue mutants attacked it. Learn more about repair in this lesson and in the documentation.

This can be achieved by using the docker-compose utility to pause the first datacenter. First, exit cqlsh:

exit

Then use the following command:

docker-compose pause

The above command pauses scylla-node1, scylla-node2, and scylla-node3.

After about 30 seconds, we can verify that the first datacenter is down by connecting to the second datacenter and running “nodetool status”:

docker exec -it scylla-node4 nodetool status

In the output above, we can see that the first datacenter is in a Down/Normal (DN) status, and the second datacenter is in an Up/Normal (UN) status. To ensure that the Mutant Monitoring System data still exists, we should verify that the data can be accessed:

docker exec -it scylla-node4 cqlsh
select * from tracking.tracking_data;

select * from catalog.mutant_data;

We can still see the data and feel safe. Now let's bring the first datacenter back up and ensure that we can still access the data from it:

exit
docker-compose unpause
docker exec -it scylla-node1 nodetool status

We can see in the output above that all of the nodes are online. Let’s verify the data exists with the following commands:

docker exec -it scylla-node1 cqlsh
select * from tracking.tracking_data;

select * from catalog.mutant_data;

It looks like we are back to normal! An important concept to consider when using ScyllaDB across multiple datacenters is consistency levels. With a consistency level of EACH_QUORUM, a quorum of replicas in each datacenter must be available for a request to succeed. With a consistency level of ALL, every replica in every datacenter must be available.

If either datacenter failed while we were using a consistency level of ALL, requests would fail and we would not have access to the data. In an entire-site failure scenario, consistency levels of LOCAL_ONE and LOCAL_QUORUM still work because they only require replicas in the local datacenter to respond.
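In cqlsh, you can experiment with this by changing the session consistency level before running a query. For example, LOCAL_QUORUM should keep working even while one datacenter is paused; this quick check is not part of the original procedure:

docker exec -it scylla-node1 cqlsh
CONSISTENCY LOCAL_QUORUM;
select * from catalog.mutant_data;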

Conclusion

In this lesson, we learned what a multi-datacenter configuration is and expanded the initial Mutant Monitoring ScyllaDB cluster from one to two datacenters. By having multiple datacenters, Division 3 was able to protect its data from a complete physical site failure. In future lessons, we will learn more about this topic when we discuss the different consistency levels available for multi-datacenter ScyllaDB clusters.
