12 min to complete
Alternator is an open-source project that gives Scylla compatibility with Amazon DynamoDB.
Note: In addition to the instructions provided in this document, which allow you to run the lab on a machine with Docker, you can find this lab in the Katacoda learning environment here. The Katacoda environment provides an interactive virtual machine where you can execute all the commands directly from your browser without the need to configure anything.
In this lesson, we’ll start by introducing the project. Afterward, we’ll see a hands-on example of creating a one-node Scylla cluster and performing some basic operations on it.
The goal of this project is to deliver an open-source alternative to DynamoDB that is deployable wherever a user wants: on-premises, on other public clouds like Microsoft Azure or Google Cloud Platform, or even on AWS itself (for users who wish to take advantage of other aspects of Amazon’s market-leading cloud ecosystem, such as the high-density i3en instances). DynamoDB users can keep their client code unchanged. Alternator is written in C++ and is part of Scylla.
The three main benefits Scylla Alternator provides to DynamoDB users are:
- Cost: DynamoDB charges for read and write transactions (RCUs and WCUs). A free, open-source solution doesn’t.
- Performance: Scylla was implemented in modern C++. It supports advanced features that enable it to improve latency and throughput significantly.
- Openness: Scylla is open-source. It can run on any suitable server cluster regardless of location or deployment method.
Setting up a Scylla Cluster
If you haven’t done so yet, download the example from git:
git clone https://github.com/scylladb/scylla-code-samples.git
Go to the directory of the alternator example (the path may differ if the repository layout has changed):
cd scylla-code-samples/alternator
Next, we’ll start a one-node cluster with Alternator enabled.
By default, Scylla does not listen for DynamoDB API requests. To enable them, we set the alternator-port configuration option to the port on which Scylla should listen for DynamoDB API requests (8000 in this example).
docker run --name some-scylla --hostname some-scylla -p 8000:8000 -d scylladb/scylla:4.4.0 --smp 1 --memory=750M --overprovisioned 1 --alternator-port=8000 --alternator-write-isolation=always
Wait a few seconds and make sure the cluster is up and running:
docker exec -it some-scylla nodetool status
Next, if you don’t already have it set up, install the boto3 Python library, which also contains the driver for DynamoDB:
sudo pip install --upgrade boto3
In the three scripts create.py, read.py, and write.py, change the value of “endpoint_url” to the node’s IP address.
Create a Table
We’ll use the create.py script to create a table in our newly created cluster, using Alternator.
Authorization is not in the scope of this lesson, so we’ll use ‘None’ for the credentials and revisit this topic in a future lesson.
We define a table called ‘mutant_data’ with the required properties, such as the primary key “last_name” of the String data type. You can read about Boto 3 data types here.
The DynamoDB data model is similar to Scylla’s. Both databases have a partition key (also called “hash key” in DynamoDB) and an optional clustering key (called “sort key” or “range key” in DynamoDB), and the same notions of rows (which DynamoDB calls “items”) inside partitions. There are some differences in the data model. One of them is that in DynamoDB, columns (called “attributes”), other than the hash key and sort key, can be of any type and can be different in each row. That means they don’t have to be defined in advance. You can learn more about data modeling in Alternator in more advanced lessons.
In this simple example, we use a one-node Scylla cluster. In a production environment, it’s recommended to run a cluster of at least three nodes.
Also, in this example, we’ll send the queries directly to our single node. In a production environment, you should use a mechanism to distribute different DynamoDB requests to different Scylla nodes, to balance the load. More about that in future lessons.
Run the script:
Each Alternator table is stored in its own keyspace, which Scylla creates automatically: table xyz lives in keyspace alternator_xyz. The keyspace is initialized when the Alternator table is first created (with a CreateTable request). The replication factor (RF) for the keyspace and its table is chosen at that point, depending on the size of the cluster: RF=3 is used on clusters with three or more live nodes, and RF=1 on smaller clusters, as in our case. Using a Scylla cluster of fewer than three nodes is not recommended for production.
Performing Basic Queries
Next, we will write and read some data from the newly created table.
In this script, we use the batch_write_item operation to write data to the table “mutant_data.” This allows us to write multiple items in one operation. Here we write two items using a PutRequest, a request to perform the PutItem operation on an item.
Notice that unlike in Scylla (and Cassandra, for that matter), writes in DynamoDB do not have a configurable consistency level; they always use CL=QUORUM.
Execute the script to write the two items to the table:
Next, we’ll read the data we just wrote, again using a batch operation, batch_get_item.
The response is a dictionary containing the result: the two items we previously wrote.
Execute the read to see the results:
DynamoDB supports two consistency levels for reads, “eventual consistency” and “strong consistency.” You can learn more about Scylla consistency levels here and here. Under the hood, Scylla implements strongly consistent reads with LOCAL_QUORUM, while eventually consistent reads are performed with LOCAL_ONE.
Additional Resources
- Project Alternator Wiki, with examples (Github)
- Scylla Cloud versus Amazon DynamoDB, benchmark
- Alternator Design Documentation
- Alternator uses Scylla’s LWT feature. You can read more about it in the documentation.
In this lesson, we learned the basics of Alternator: the open-source DynamoDB Scylla API. We saw how to create a cluster, connect to it, write data, and read data. Future lessons will cover more advanced topics and more interesting examples, including data modeling, backup and restore, single region vs. multi-region, streams (CDC), encryption at rest, and more.