See a hands-on demo of a migration to ScyllaDB.
So let’s run through a quick demo, and very quickly you’re going to see the migrator in action.
Let me switch to my terminal. The migration essentially goes like this:
we have a ScyllaDB Cloud cluster already provisioned,
and we will first migrate our schema and then fire up the migrator
to write data to our target cluster.
Let’s start with the schema.
I have already defined the target schema I want.
And since this is a Cassandra to ScyllaDB migration,
it is going to be exactly the same as the source database’s.
So as you can see, here we create a keyspace and we also create a table,
and this is essentially the very same definition you would use on a Cassandra cluster.
Okay.
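For illustration, the schema file might look something like this. The keyspace, table, and column names below are placeholders for this sketch, not the ones from the actual demo:

```sh
# Hypothetical schema file; all names are placeholders.
cat > schema.cql <<'EOF'
CREATE KEYSPACE IF NOT EXISTS demo
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};

CREATE TABLE IF NOT EXISTS demo.users (
  user_id uuid,
  name    text,
  email   text,
  PRIMARY KEY (user_id)
);
EOF
```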
And yeah, this is the schema we are going to migrate.
After we have defined our schema,
the next step is to actually write it to our destination cluster,
and to do that we call the “cqlsh” command.
We specify the user and the password,
which for demonstration purposes is fine,
but of course in a real production environment
you don’t want to expose your passwords like this.
Then we point to the node we want to communicate with
and to the file containing our schema.
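That invocation looks roughly like this; the host and credentials below are placeholders:

```sh
# Hypothetical invocation; host and credentials are placeholders.
# As noted above, avoid passing real passwords on the command line.
cqlsh -u scylla -p 'demo-password' \
      -f schema.cql \
      node-0.my-cluster.example.com
```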
In a DynamoDB migration, you would typically just
run the DynamoDB API calls to create the table, and so forth.
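For example, a hypothetical table creation through the AWS CLI might look like this; the table name and key schema are placeholders:

```sh
# Hypothetical DynamoDB-style equivalent via the AWS CLI;
# the table name and key schema are placeholders.
aws dynamodb create-table \
  --table-name users \
  --attribute-definitions AttributeName=user_id,AttributeType=S \
  --key-schema AttributeName=user_id,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST
```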
All right.
The next step for us
is to configure the migrator, which I have already done.
And here you can pretty much see how simple it is.
I specify the source database and how to connect to it,
the consistency level we want to use,
which table to migrate, and so on.
Then I specify what my target is going to be,
which is essentially here,
and I can also set some other parameters
which we are not going to focus on today.
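As a sketch, the configuration file looks something like this, loosely following the migrator’s config.yaml.example; the hosts, credentials, and table names are placeholders, and you should check your migrator version for the exact keys:

```sh
# Sketch of a migrator config; verify the exact keys against the
# config.yaml.example shipped with your version of the migrator.
cat > config.yaml <<'EOF'
source:
  type: cassandra
  host: cassandra.example.com
  port: 9042
  credentials:
    username: cassandra
    password: demo-password
  keyspace: demo
  table: users
  consistencyLevel: LOCAL_QUORUM

target:
  type: scylla
  host: node-0.my-cluster.example.com
  port: 9042
  credentials:
    username: scylla
    password: demo-password
  keyspace: demo
  table: users
  consistencyLevel: LOCAL_QUORUM
EOF
```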
All right. After we have migrated the schema
and configured the migrator, the final step is
to actually fire up the migration.
And what I really want you to pay attention to here
is that it’s essentially a Spark job.
Okay.
So as I told you,
the migrator is built and runs on top of Apache Spark.
So if you are already using Spark, you should have no trouble
getting started with it.
Here we specify the configuration file,
how many executors we want, and so on.
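The launch command is roughly the following; the master URL, executor count, and jar path are placeholders, and you should double-check the main class and config property against your migrator version’s documentation:

```sh
# Roughly how the migration is launched; master URL, executor count,
# and jar path are placeholders for this sketch.
spark-submit --class com.scylladb.migrator.Migrator \
  --master spark://spark-master.example.com:7077 \
  --conf spark.scylla.config=config.yaml \
  --num-executors 4 \
  scylla-migrator-assembly.jar
```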
Then, as it gets started, it will automatically
infer the schema of the source tables,
connect to your target cluster,
and infer the schema there as well.
And that’s pretty much it.
To quickly wrap up,
if we go to our Spark UI, let’s see if it’s up.
Yes, it is.
We can see the migrator’s progress over here, and it’s pretty much
standard Apache Spark stuff.
Okay. So here we can see the jobs.
We have four workers; one worker has already finished,
and the others will finish shortly.