
Amazon Managed Apache Cassandra Service

We introduced Amazon Managed Apache Cassandra Service (MCS) in preview at re:Invent last year. In the few months that have passed, the service introduced many new features, and it is generally available today with a new name: Amazon Keyspaces (for Apache Cassandra).

Amazon Keyspaces is built on Apache Cassandra, and you can use it as a fully managed, serverless database. Your applications can read and write data from Amazon Keyspaces using your existing Cassandra Query Language (CQL) code, with little or no changes. For each table, you can select the best configuration depending on your use case:

With on-demand, you pay based on the actual reads and writes you perform. This is the best option for unpredictable workloads.
With provisioned capacity, you can reduce your costs for predictable workloads by configuring capacity settings up front. You can also further optimize costs by enabling auto scaling, which updates your provisioned capacity settings automatically as your traffic changes throughout the day.

Using Amazon Keyspaces
One of the first “serious” applications I built as a kid was an archive for my books. I’d like to rebuild it now as a serverless API, using:

Amazon Keyspaces to store data.
AWS Lambda for the business logic.
Amazon API Gateway with the new HTTP API.

With Amazon Keyspaces, your data is stored in keyspaces and tables. A keyspace gives you a way to group related tables together. In the blog post for the preview, I used the console to configure my data model. Now, I can also use AWS CloudFormation to manage my keyspaces and tables as code. For example, I can create a bookstore keyspace and a books table with this CloudFormation template:

AWSTemplateFormatVersion: '2010-09-09'
Description: Amazon Keyspaces for Apache Cassandra example

Resources:

  BookstoreKeyspace:
    Type: AWS::Cassandra::Keyspace
    Properties: 
      KeyspaceName: bookstore

  BooksTable:
    Type: AWS::Cassandra::Table
    Properties: 
      TableName: books
      KeyspaceName: !Ref BookstoreKeyspace
      PartitionKeyColumns: 
        - ColumnName: isbn
          ColumnType: text
      RegularColumns: 
        - ColumnName: title
          ColumnType: text
        - ColumnName: author
          ColumnType: text
        - ColumnName: pages
          ColumnType: int
        - ColumnName: year_of_publication
          ColumnType: int

Outputs:
  BookstoreKeyspaceName:
    Description: "Keyspace name"
    Value: !Ref BookstoreKeyspace # Or !Select [0, !Split ["|", !Ref BooksTable]]
  BooksTableName:
    Description: "Table name"
    Value: !Select [1, !Split ["|", !Ref BooksTable]]

If you don’t specify a name for a keyspace or a table in the template, CloudFormation generates a unique name for you. Note that names generated in this way may contain uppercase characters, which are outside of the standard Cassandra conventions, and you need to put those names between double quotation marks when using them in Cassandra Query Language (CQL) statements.
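
For example, if CloudFormation generates mixed-case names like the hypothetical ones below, a CQL statement built in application code has to double-quote them:

// Hypothetical auto-generated names; mixed-case identifiers must be double-quoted in CQL
String keyspace = "\"BookstoreKeyspace-1AB2C3D4EFGH\"";
String table = "\"BooksTable-5IJ6K7L8MNOP\"";
String cql = "SELECT * FROM " + keyspace + "." + table;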


When the creation of the stack is complete, I see the new bookstore keyspace in the console.
Selecting the books table, I have a summary of its configuration, including the partition key, the clustering columns, and all the columns, as well as the option to change the capacity mode for the table from on-demand to provisioned.

For authentication and authorization, Amazon Keyspaces supports AWS Identity and Access Management (IAM) identity-based policies, which you can use with IAM users, groups, and roles. Here’s a list of actions, resources, and conditions that you can use in IAM policies with Amazon Keyspaces. You can now also manage access to resources based on tags.

You can use IAM roles with the AWS Signature Version 4 process (SigV4) through this open-source authentication plugin for the DataStax Java driver. In this way, you can run your applications inside an Amazon Elastic Compute Cloud (EC2) instance, a container managed by Amazon ECS or Amazon Elastic Kubernetes Service, or a Lambda function, and leverage IAM roles for authentication and authorization to Amazon Keyspaces, without the need to manage credentials. Here’s a sample application that you can test on an EC2 instance with an associated IAM role giving access to Amazon Keyspaces.
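
For example, here is a minimal sketch of how a CQL session could be created with that plugin in an application running against the us-east-1 service endpoint. The SigV4AuthProvider class, its region-string constructor, and the cassandra.us-east-1.amazonaws.com:9142 endpoint reflect my reading of the plugin and service documentation, so check the plugin’s README for the exact usage:

import java.net.InetSocketAddress;
import javax.net.ssl.SSLContext;

import com.datastax.oss.driver.api.core.CqlSession;
import software.aws.mcs.auth.SigV4AuthProvider; // from the open-source SigV4 authentication plugin

public class KeyspacesSession {

    // Connect to Amazon Keyspaces using IAM credentials picked up from the
    // environment (for example, the role attached to an EC2 instance or Lambda function).
    public static CqlSession create() throws Exception {
        return CqlSession.builder()
                .addContactPoint(new InetSocketAddress("cassandra.us-east-1.amazonaws.com", 9142))
                .withLocalDatacenter("us-east-1")
                .withAuthProvider(new SigV4AuthProvider("us-east-1"))
                .withSslContext(SSLContext.getDefault())
                .build();
    }
}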

Going back to my books API, I create all the resources I need, including a keyspace and a table, with the following AWS Serverless Application Model (SAM) template.

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Sample Books API using Cassandra as database

Globals:
  Function:
    Timeout: 30

Resources:

  BookstoreKeyspace:
    Type: AWS::Cassandra::Keyspace

  BooksTable:
    Type: AWS::Cassandra::Table
    Properties: 
      KeyspaceName: !Ref BookstoreKeyspace
      PartitionKeyColumns: 
        - ColumnName: isbn
          ColumnType: text
      RegularColumns: 
        - ColumnName: title
          ColumnType: text
        - ColumnName: author
          ColumnType: text
        - ColumnName: pages
          ColumnType: int
        - ColumnName: year_of_publication
          ColumnType: int

  BooksFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: BooksFunction
      Handler: books.App::handleRequest
      Runtime: java11
      MemorySize: 2048
      Policies:
        - Statement:
          - Effect: Allow
            Action:
            - cassandra:Select
            Resource:
              - !Sub "arn:aws:cassandra:${AWS::Region}:${AWS::AccountId}:/keyspace/system*"
              - !Join
                - ""
                - - !Sub "arn:aws:cassandra:${AWS::Region}:${AWS::AccountId}:/keyspace/${BookstoreKeyspace}/table/"
                  - !Select [1, !Split ["|", !Ref BooksTable]] # !Ref BooksTable returns "Keyspace|Table"
          - Effect: Allow
            Action:
            - cassandra:Modify
            Resource:
              - !Join
                - ""
                - - !Sub "arn:aws:cassandra:${AWS::Region}:${AWS::AccountId}:/keyspace/${BookstoreKeyspace}/table/"
                  - !Select [1, !Split ["|", !Ref BooksTable]] # !Ref BooksTable returns "Keyspace|Table"
      Environment:
        Variables:
          KEYSPACE_TABLE: !Ref BooksTable # !Ref BooksTable returns "Keyspace|Table"
      Events:
        GetAllBooks:
          Type: HttpApi
          Properties:
            Method: GET
            Path: /books
        GetBookByIsbn:
          Type: HttpApi
          Properties:
            Method: GET
            Path: /books/{isbn}
        PostNewBook:
          Type: HttpApi
          Properties:
            Method: POST
            Path: /books

Outputs:
  BookstoreKeyspaceName:
    Description: "Keyspace name"
    Value: !Ref BookstoreKeyspace # Or !Select [0, !Split ["|", !Ref BooksTable]]
  BooksTableName:
    Description: "Table name"
    Value: !Select [1, !Split ["|", !Ref BooksTable]]
  BooksApi:
    Description: "API Gateway HTTP API endpoint URL"
    Value: !Sub "https://${ServerlessHttpApi}.execute-api.${AWS::Region}.amazonaws.com/"
  BooksFunction:
    Description: "Books Lambda Function ARN"
    Value: !GetAtt BooksFunction.Arn
  BooksFunctionIamRole:
    Description: "Implicit IAM Role created for Books function"
    Value: !GetAtt BooksFunctionRole.Arn

In this template, I don’t specify the keyspace and table names, and CloudFormation generates unique names automatically. The function IAM policy gives access to read (cassandra:Select) and write (cassandra:Modify) only on the books table. I’m using the CloudFormation Fn::Select and Fn::Split intrinsic functions to get the table name. The driver also needs read access to the system* keyspaces.

To use the authentication plugin for the DataStax Java driver that supports IAM roles, I write the Lambda function in Java, using the APIGatewayV2ProxyRequestEvent and APIGatewayV2ProxyResponseEvent classes to communicate with the HTTP API created by API Gateway.
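
A minimal sketch of what the books.App handler referenced by the SAM template might look like is below; the getters and setters are assumptions based on the documented fields of these event classes, and the real function would call into the data access code described next:

package books;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.APIGatewayV2ProxyRequestEvent;
import com.amazonaws.services.lambda.runtime.events.APIGatewayV2ProxyResponseEvent;

public class App implements RequestHandler<APIGatewayV2ProxyRequestEvent, APIGatewayV2ProxyResponseEvent> {

    @Override
    public APIGatewayV2ProxyResponseEvent handleRequest(APIGatewayV2ProxyRequestEvent request, Context context) {
        // Route on the HTTP method and path parameters:
        //   GET  /books         -> list all books
        //   GET  /books/{isbn}  -> read one book
        //   POST /books         -> insert a new book (the JSON document is in request.getBody())
        APIGatewayV2ProxyResponseEvent response = new APIGatewayV2ProxyResponseEvent();
        if ("POST".equals(request.getHttpMethod())) {
            response.setStatusCode(201);
            response.setBody("{}");
        } else if (request.getPathParameters() != null && request.getPathParameters().containsKey("isbn")) {
            response.setStatusCode(200);
            response.setBody("{}"); // would return the book with this ISBN
        } else {
            response.setStatusCode(200);
            response.setBody("[]"); // would return all books as a JSON list
        }
        return response;
    }
}
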
To connect to Amazon Keyspaces with TLS/SSL using the Java driver, I need to include a trust store in the JVM arguments. When using the Cassandra Java client driver in a Lambda function, I can’t pass parameters to the JVM, so I pass the same options as system properties and specify the SSL context when creating the CQL session with the withSslContext(SSLContext.getDefault()) parameter. Note that I have to configure the pom.xml file, used by Apache Maven, to include the trust store file as a dependency.

Now, I can use a tool like curl or Postman to test my books API. First, I take the endpoint of the API from the output of the CloudFormation stack. At the start, there are no books stored in the books table, and if I do an HTTP GET on the resource, I get an empty JSON list. For readability, I’m removing all HTTP headers from the output.

In the code, I’m using a PreparedStatement to run a CQL statement that selects all rows from the books table. The names of the keyspace and of the table are passed to the Lambda function in an environment variable, as described in the SAM template above.
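
Here is a minimal sketch of how that part of the data access code might look, assuming the session is created as shown earlier; the class and method names are mine, not the blog’s sample code:

import java.util.List;
import java.util.stream.Collectors;

import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.cql.PreparedStatement;
import com.datastax.oss.driver.api.core.cql.ResultSet;

public class BookRepository {

    private final CqlSession session;
    private final PreparedStatement selectAll;

    public BookRepository(CqlSession session) {
        this.session = session;
        // The SAM template sets KEYSPACE_TABLE to "Keyspace|Table"
        String[] keyspaceTable = System.getenv("KEYSPACE_TABLE").split("\\|");
        // Generated names can contain uppercase characters, so quote them in CQL
        String keyspace = "\"" + keyspaceTable[0] + "\"";
        String table = "\"" + keyspaceTable[1] + "\"";
        this.selectAll = session.prepare(
                "SELECT isbn, title, author, pages, year_of_publication FROM " + keyspace + "." + table);
    }

    // Return all book titles (a real handler would map the rows to JSON)
    public List<String> allTitles() {
        ResultSet rows = session.execute(selectAll.bind());
        return rows.all().stream()
                .map(row -> row.getString("title"))
                .collect(Collectors.toList());
    }
}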

Let’s use the API to add a book, by doing an HTTP POST on the resource.
I can verify that the data has been inserted in the table using the CQL editor in the console, where I select all the rows in the table.

It works! We just built a fully serverless API that uses CQL to read and write data with temporary security credentials, managing the whole infrastructure, including the database table, as code.

Available Now
Amazon Keyspaces (for Apache Cassandra) is ready for your applications; please see this table for regional availability. You can find more information on how to use Keyspaces in the documentation. In this post, I built a new application, but you can get many benefits by migrating your current tables to a fully managed environment. For migrating data, you can now use cqlsh as described in this post.
