
DBaaS

Database as a Service

1 - Announcements

Announcement for the DBaaS service

2024-10-14 MSSQL DBaaS now available

We are excited to announce that we now have Microsoft SQL Server available in Elastx Database as a Service.

Elastx DBaaS automatically ensures your databases are reliable, secure, and scalable so that your business continues to run without disruption. You can achieve high availability and disaster protection by configuring replication and backups to protect your data. Backups and multi-node datastores are disaster protected because they run across multiple availability zones, which in our case are geographically separated data centers. Automatic failover makes your database highly available.

The following services are included as standard in our prices: 24x7 support, Threat Intelligence, DDoS protection, encrypted traffic between our availability zones and data encryption at rest.

You can find detailed information, specifications and pricing at https://elastx.se/en/mssql-dbaas.

If you have any general questions or would like to sign up, please contact us at hello@elastx.se. For technical questions, please register a support ticket at https://support.elastx.se.

2024-04-12 ECP DBaaS Generally Available

We are happy to announce that the following services are now generally available (GA) in Elastx Cloud Platform (ECP).

  • MySQL DBaaS
  • MariaDB DBaaS
  • PostgreSQL DBaaS
  • Redis DBaaS

The ECP Database as a Service gives you a fully managed database with the option to run it in a high-availability, disaster-protected environment.

It has been available as a public tech preview since November 2023. A number of updates and features have been added during this period, and the service has now reached a maturity level where we can offer it to all customers.

You can find information, specifications and pricing at https://elastx.se/en/database. Service documentation is available at https://docs.elastx.cloud/docs/dbaas/.

If you have any general questions, please contact us at hello@elastx.se; you can sign up for the service at https://elastx.se/en/signup. For technical questions, please register a support ticket at https://support.elastx.se.

2 - DBaaS Getting Started

CCX is a comprehensive data management and storage solution that offers a range of features including flexible node configurations, scalable storage options, secure networking, and robust monitoring tools. It supports various deployment types to cater to different scalability and redundancy needs, alongside comprehensive management functions for users, databases, nodes, and firewalls. The CCX project provides a versatile platform for efficient data handling, security, and operational management, making it suitable for a wide array of applications and workloads.

Deployment Solutions

Our deployment solutions offer customizable configurations for various node types, designed to support both dynamic and ephemeral storage requirements across multiple cloud environments. This includes comprehensive support for a wide range of cloud regions and instances, ensuring flexibility and scalability.

Database Support

Our platform is compatible with a diverse array of database types, including:

  • MariaDB
  • MySQL
  • PostgreSQL
  • Cache 22 (deprecated and will be removed in a future release; replaced by Valkey)
  • Valkey
  • Microsoft SQL Server

Node Configurations

We provide support for various node configurations to meet your database needs:

  • Replica nodes for MariaDB, MySQL, PostgreSQL, Redis, and Microsoft SQL Server (Single server and Always-On)
  • Galera clusters for MariaDB and MySQL

Monitoring and Management

Our platform features advanced monitoring capabilities, offering detailed performance analysis through extensive charts. It enables efficient management of nodes, including:

  • Datastore scaling
  • Volume scaling
  • Promote replica to primary
  • Node repair mechanisms

User and Database Administration

We offer sophisticated tools for managing database users and their permissions, ensuring secure access control.

Network Security

Our firewall configuration options are designed to enhance network security, providing robust protection for your data.

Event Logging

The event viewer tracks and displays a comprehensive history of operations performed on the datastore, enhancing transparency and accountability.

Backup and Recovery

Our backup solutions include:

  • Incremental and full backup options for comprehensive data protection
  • Point-in-time recovery capabilities
  • Automated cloud backup uploads with customizable retention periods
  • Restoration from separate volumes to optimize datastore space utilization

Customizable Settings

We offer customizable settings for various operational database parameters, allowing for tailored database management.

Account Management

Our platform facilitates user account creation and management, streamlining the login and registration process.

Billing and Payments

Our billing and payment processing tools are designed to simplify financial transactions, including the management of payments and invoices.

Feature Matrix

Each datastore has different features and is suitable for different use cases. Below is a feature matrix showing which operational features are supported on each datastore:

Feature                              MySQL  MariaDB  PostgreSQL  Valkey/Cache22  MSSQL (single server)  MSSQL (Always-On, std license)
Scale nodes                          Yes    Yes      Yes         Yes             No                     No
Scale volume                         Yes    Yes      Yes         Yes             Yes                    Yes
Upgrade                              Yes    Yes      Yes         Yes             Yes                    Yes
Promote replica                      Yes    Yes      Yes         Yes             Yes                    Yes
Configuration management             Yes    Yes      Yes         Yes             No                     No
Backup to S3                         Yes    Yes      Yes         Yes             Yes                    Yes
Restore                              Yes    Yes      Yes         Yes             Yes                    Yes
PITR                                 Yes    Yes      Yes         No              No                     No
User management†                     Yes    Yes      Yes         Yes             Yes                    Yes
Create databases                     Yes    Yes      Yes         Yes             No                     No
Query monitoring                     Yes    Yes      Yes         Yes             No                     Yes
Database growth (capacity planning)  Yes    Yes      Yes         Yes             No                     Yes

† : User management features and scope depends on the underlying datastore. There are datastore specific limitations.

3 - FAQ

Does CCX provide a high-availability feature?

Yes.

Can I change the database configuration?

CCX is a managed service, so only a curated set of database parameters can be changed. Settings that are not suitable for production use are not exposed.

Does CCX support multiple AZs?

CCX supports multiple AZs if the Cloud Provider does.

Can write-only instances and read-only replicas be created?

Read-only replicas can be created. There are no write-only instances; the primary is read-write and allows both reads and writes.

Does CCX provide a proxy or load balancer (for example, to balance reads across two or more read replica instances)?

DNS is used to facilitate this. However, you can create your own load balancer (such as HAProxy or ProxySQL) and connect it to the database service. A load balancer should be placed as close as possible to your application, and we recommend that the end user manages the load balancer.

Are backups automatic?

Yes, backups are created automatically. The user can set the frequency.

Is there an auto-upgrade SQL version feature, for minor and major updates?

Only minor upgrades. Major upgrades are not supported as an online operation. See the product documentation about upgrades (life-cycle management).

Can it be backed up externally, for example dumping data?

Yes. See product documentation.

Can external data be restored, if yes, how?

See product documentation.

4 - Managed Service

CCX is provided as a managed service for your database engine in the cloud(s) you select. This document aims to outline the responsibilities of CCX and what rests with you, as the user.

CCX does:

  • Deploy, secure and configure database engines onto virtual machines (VMs) in your chosen cloud
  • Allow you to:
    • Configure firewall rules for access
    • Create new databases and users
    • Configure replication for the chosen topology
    • Ensure connectivity between nodes
    • View metrics for each VM and the datastore
    • View query statistics for your service
    • Configure and set a backup schedule for your service
    • Configure a maintenance window that allows CCX to perform maintenance and provide security patches
    • Scale your service horizontally (up and down)
    • Scale your storage vertically (up)
    • Manage and monitor the database to ensure connectivity

The primary responsibility of CCX is to ensure that your datastore is running at all times, reacting to scenarios to ensure this is true. CCX does not access your data or control how you use the databases within your datastore.

In order to achieve this, CCX does not:

  • Provide SSH or other ways to access the underlying infrastructure
  • Allow superuser access to the managed services
  • Allow the modification of settings that are not suitable for production use
  • Allow the installation of untrusted extensions or code

If you do have specific requirements, such as:

  1. Temporary modification of configuration
  2. Feature requests for extensions or different versions of the databases

Please reach out to Support and we will work with you to find a solution.

5 - Overview

Overview of DBaaS

Elastx DBaaS automatically ensures your databases are reliable, secure, and scalable so that your business continues to run without disruption. It provides full compatibility with the source database engines while reducing operations costs by automating database provisioning and other time-consuming tasks.

You get easy high availability and disaster protection by configuring replication, clusters and backups to protect your data. Backups and multi-node datastores are disaster protected because they run across multiple availability zones, which in our case are geographically separated data centers. Automatic failover makes your database highly available. Your data is encrypted at rest, and the service is ISO 27001, 27017, 27018 and 14001 compliant.

A Datastore is a database instance with one or more nodes, and within it you can have one or more databases. A datastore can be created with a single node, three nodes in an active/active cluster, or a primary node with one or two read-only replicas.

You can manage the Datastores with the web-UI. Authentication to the web-UI is done with the Elastx Identity Provider where MFA with TOTP or Yubikey is required. All datastores owned by the organization will be visible for all users with access to that organization.

To access the datastores you need database user credentials which you get and manage for each individual database. You also need to configure the Datastore firewall to allow access from specific IP addresses. The Connection assistant will help you to get the connection string for common programming languages.

In the web-UI you get graphs of key performance metrics for the database and the nodes, which help you manage capacity and performance. You can scale the Datastore by adding or removing nodes, or change the size of a node by replacing it with a different flavor. Contact Elastx support if you need to increase storage capacity and we will help you. Please note that ephemeral storage can't be increased unless you change node flavor.

6 - Reference

6.1 - Datastore Statuses

When you deploy a Datastore, you will see a Status reported in the CCX Dashboard. This article outlines the statuses and what they mean.

  • Deploying: Your Datastore is being configured and deployed into the Cloud you specified. No action required.
  • Available: Your Datastore is up and running with no reported issues. No action required.
  • Unreachable: Your Datastore might be running, but CCX is not able to communicate directly with one or more Nodes. Verify that you can access the Datastore and contact Support.
  • Maintenance: Your Datastore is applying critical security updates during the specified maintenance window. No action required.
  • Deleting: You have requested the deletion of your Datastore and it is currently being processed. No action required, unless the deletion was not requested by you or the Datastore has been in this state for more than 2 hours.
  • Deleted: Your Datastore has been deleted. No action required.
  • Failed: Your Datastore has failed; this can be a hardware or software fault. Contact Support.

6.2 - Glossary

  • Datastore: A deployment of a Database on CCX. A Datastore has a unique ID; it is essential to include this when contacting Support with issues or queries. (AKA: Service. Area: Deployment)
  • Node: A Virtual Machine (VM) in a Cloud that makes up a Datastore. A Node consists of CPU (the number of cores), RAM (the amount of memory in GB) and Storage (the amount of persistent storage in GB/TB). (AKA: Virtual Machine (VM), Server, Instance. Area: Compute)
  • Storage: The amount of persistent data for your Datastore. Storage comes in multiple different formats and not all are supported by all Clouds. There are cost and performance considerations when choosing the storage. (Area: Storage)
  • Volumes: The types of Storage available. Typically measured in IOPS; higher IOPS gives increased performance at an increased cost per GB. (Area: Storage)
  • Database: The engine deployed and configured for your Datastore. To see these options, check Supported Databases. (AKA: Database Management System (DBMS). Area: General)
  • Virtual Private Cloud (VPC): A private network that is unique to your account and ensures that any traffic within your Datastore does not go over the public Internet. (AKA: Private Network. Area: Networking)
  • Cloud: An infrastructure provider where Datastores can be deployed. (Area: Deployment)
  • Region: A geographic region with one or more Datacentres owned or operated by a Cloud. A Datastore is deployed into a single Region. (Area: Deployment)
  • Availability Zone (AZ): A Region can have one or more Availability Zones. More than one Availability Zone allows one Datacentre to go down without bringing down all of the Nodes in your Datastore. CCX will automatically attempt to deploy each Node in a Datastore into a different AZ (if the Region supports it). (Area: Deployment)
  • Replication: A method of exchanging data between two Nodes that ensures they stay in sync and allows one Node to fail without bringing your Datastore down. (Area: Operations)
  • Primary / Replica: The recommended deployment for a production Datastore with 2 or more Nodes, one acting as the Primary and the other(s) acting as Replicas. (AKA: Highly Available, High Availability. Area: Operations)
  • Multi-Primary: Multiple Nodes deployed with the same role, all of them acting as the Primary. This topology is not supported by all Databases. (AKA: Clustered. Area: Operations)
  • Status: The last known status of your Datastore. For details of the possible statuses, see here. (AKA: State. Area: Operations)
  • Maintenance: The application of critical security updates to your Datastore. These are applied in your Maintenance Window, which can be configured per Datastore. (Area: Operations)
  • Monitoring: The metrics of the hardware and software for your Datastore. These can be accessed in the CCX Dashboard and can be shown per Node. For details of the metrics available, see here. (Area: Observability)

6.3 - Notifications

CCX notifies users by email in case of certain events. Recipients can be configured on the Datastore Settings page or in the Datastore wizard.

  • Cluster Upgrade: Cluster is being upgraded. No action required.
  • Cluster Storage Resized: Cluster storage has been automatically resized from size to new_size. No action required.
  • HostAutoScaleDiskSpaceReached: The cluster is running out of storage and will be automatically scaled. No action required.

6.4 - Observability

6.4.1 - Metrics

6.4.1.1 - Introduction

CCX uses Prometheus and exporters for monitoring. The monitoring data is exposed through the exporters on each node. Access to the exporters is controlled under the Firewall tab in the CCX UI.

6.4.1.2 - MySQL And MariaDB

  • MySQL / MariaDB
    • Handler Stats
      • Statistics for the handler. Shown as:
        • Read Rnd
          • Count of requests to read a row based on a fixed position
        • Read Rnd Next
          • Count of requests to read a subsequent row in a data file
        • Read Next
          • Count of requests to read the next row in key order
        • Read Last
          • Count of requests to read the last key in an index
        • Read Prev
          • Count of requests to read the previous row in key order
        • Read First
          • Count of requests to read the first entry in an index
        • Read Key
          • Count of requests to read a row based on an index key value
        • Update
          • Count of requests to update a row
        • Write
          • Count of requests to insert to a table
    • Handler Transaction Stats
    • Database Connections
      • Count of connections to the database. Shown as:
        • Thread Connected
          • Count of clients connected to the database
        • Max Connections
          • Count of max connections allowed to the database
        • Max Used Connections
          • Maximum number of connections in use
        • Aborted Clients
          • Number of connections aborted due to client not closing
        • Aborted Connects
          • Number of failed connection attempts
        • Connections
          • Number of connection attempts
    • Queries
      • Count of queries executed
    • Scan Operations
      • Count of scan operations performed by SELECT, UPDATE and DELETE statements
    • Table Locking
      • Count of table locks. Shown as:
        • Table locks immediate
          • Count of table locks that could be granted immediately
        • Table locks waited
          • Count of table locks that had to wait because of existing locks
    • Temporary Tables
      • Count of temporary tables created. Shown as:
        • Temporary tables
          • Count of temporary tables created
        • Temporary tables on Disk
          • Count of temporary tables created on disk rather than in memory
    • Sorting
    • Aborted Connections
      • Count of failed or aborted connections to the database. Shown as:
        • Aborted Clients
          • Number of connections aborted due to client not closing
        • Aborted Connects
          • Number of failed connection attempts
        • Access Denied Errors
          • Count of unsuccessful authentication attempts
    • Memory Utilisation

6.4.1.3 - PostgreSQL

  • PostgreSQL
    • SELECT (fetched)
      • Count of rows fetched by queries to the database
    • SELECT (returned)
      • Count of rows returned by queries to the database
    • INSERT
      • Count of rows inserted to the database
    • UPDATE
      • Count of rows updated in the database
    • DELETE
      • Count of rows deleted in the database
    • Active Sessions
      • Count of currently running queries
    • Idle Sessions
      • Count of connections to the database that are not currently in use
    • Idle Sessions in transaction
      • Count of connections that have begun a transaction but not yet completed while not actively doing work
    • Idle Sessions in transaction (aborted)
      • Count of connections that have begun a transaction but did not complete and were forcefully aborted before they could complete
    • Lock tables
      • Active locks on the database
    • Checkpoints requested and timed
      • Count of checkpoints requested and scheduled
    • Checkpoint sync time
      • Time synchronising checkpoint files to disk
    • Checkpoint write time
      • Time to write checkpoints to disk
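The session metrics above map directly onto PostgreSQL's standard pg_stat_activity view. As a sketch (standard PostgreSQL, nothing CCX-specific), you can reproduce the same breakdown from any SQL client:

```sql
-- Count sessions by state; 'active', 'idle', 'idle in transaction' and
-- 'idle in transaction (aborted)' correspond to the charts above
SELECT state, count(*) AS sessions
FROM pg_stat_activity
WHERE state IS NOT NULL
GROUP BY state
ORDER BY sessions DESC;
```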

6.4.1.4 - Redis

  • Redis
    • Blocked Clients
      • Clients blocked while waiting on a command to execute
    • Memory Used
      • Amount of memory used by Redis (in bytes)
    • Connected Clients
      • Count of clients connected to Redis
    • Redis commands per second
      • Count of commands processed per second
    • Total keys
      • The total count of all keys stored by Redis
    • Replica Lag
      • The lag (in seconds) between the primary and the replica(s)

6.4.1.5 - System

  • System - Hardware level metrics for your Datastore
    • Load Average
      • The overall load on your Datastore within the preset period
    • CPU Usage
      • The breakdown of CPU utilisation for your Datastore, including both System and User processes
    • RAM Usage
      • The amount of RAM (in Gigabytes) used and available within the preset period
    • Network Usage
      • The amount of data (in Kilobits or Megabits per second) received and sent within the preset period
    • Disk Usage
      • The total amount of storage used (in Gigabytes) and what is available within the preset period
    • Disk IO
      • The input and output utilisation for your disk within the preset period
    • Disk IOPS
      • The number of read and write operations within the preset period
    • Disk Throughput
      • The amount of data (in Megabytes per second) that is being read from, or written to, the disk within the preset period

6.4.1.6 - Valkey

  • Valkey
    • Blocked Clients
      • Clients blocked while waiting on a command to execute
    • Memory Used
      • Amount of memory used by Valkey (in bytes)
    • Connected Clients
      • Count of clients connected to Valkey
    • Valkey commands per second
      • Count of commands processed per second
    • Total keys
      • The total count of all keys stored by Valkey
    • Replica Lag
      • The lag (in seconds) between the primary and the replica(s)

6.5 - Products

6.5.1 - MariaDb

6.5.1.1 - Backup

Mariabackup is used to create backups.

CCX backs up the Primary server. In multi-primary setups, the node with the highest wsrep_local_index is elected.

Backups are streamed directly to S3 storage.

Mariabackup blocks DDL operations during the backup using the --lock-ddl flag. Any attempt to CREATE, ALTER, DROP or TRUNCATE a table during backup creation will be blocked with the status Waiting for backup lock (see SHOW FULL PROCESSLIST). In this case, wait for the backup to finish and perform the operation afterwards.
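To see whether a statement is being held back by a running backup, check the process list from any client session; the State column shows the backup lock wait mentioned above:

```sql
-- Sessions blocked by a running backup show
-- State = 'Waiting for backup lock'
SHOW FULL PROCESSLIST;
```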

Also see the section ‘Schedule’.

Schedule

The backup schedule can be tuned and backups can be paused.

6.5.1.2 - Configuration

max_connections

  • 75 connections / GB of RAM.
  • Example: 4GB of RAM yields 300 connections.
  • This setting cannot be changed as it affects system stability.
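You can verify the effective limit and your current usage from a client session with standard MariaDB variable and status queries:

```sql
-- Effective connection limit (75 x GB of RAM, e.g. 300 on a 4 GB node)
SHOW GLOBAL VARIABLES LIKE 'max_connections';

-- Current and peak usage against that limit
SHOW GLOBAL STATUS WHERE Variable_name IN
  ('Threads_connected', 'Max_used_connections');
```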

InnoDB settings

  • These settings cannot be changed as they affect system stability.

innodb_buffer_pool_size

  • 50% of RAM if total RAM is > 4GB
  • 25% of RAM if total RAM is <= 4GB

innodb_log_file_size

  • 1024 MB if innodb_buffer_pool_size >= 8192MB
  • 512 MB if innodb_buffer_pool_size < 8192MB

innodb_buffer_pool_instances

  • 8
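As a worked example of these rules, an 8 GB node would get a 4 GB buffer pool (50% of RAM) and a 512 MB log file (since the buffer pool is below 8192 MB). You can confirm the effective values on your datastore with a standard variables query:

```sql
-- Confirm the effective InnoDB sizing (read-only check)
SHOW GLOBAL VARIABLES WHERE Variable_name IN
  ('innodb_buffer_pool_size', 'innodb_log_file_size',
   'innodb_buffer_pool_instances');
```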

InnoDB options

variable_name                   variable_value
innodb_buffer_pool_size         Depends on instance size
innodb_flush_log_at_trx_commit  2
innodb_file_per_table           1
innodb_data_file_path           Depends on instance
innodb_read_io_threads          4
innodb_write_io_threads         4
innodb_doublewrite              1
innodb_buffer_pool_instances    Depends on instance size
innodb_redo_log_capacity        8G
innodb_thread_concurrency       0
innodb_flush_method             O_DIRECT
innodb_autoinc_lock_mode        2
innodb_stats_on_metadata        0
default_storage_engine          innodb

General options

variable_name               variable_value
tmp_table_size              64M
max_heap_table_size         64M
max_allowed_packet          1G
sort_buffer_size            256K
read_buffer_size            256K
read_rnd_buffer_size        512K
memlock                     0
sysdate_is_now              1
max_connections             Depends on instance size
thread_cache_size           512
table_open_cache            4000
table_open_cache_instances  16
lower_case_table_names      0

Storage

  • We recommend a maximum of 100GB storage per GB of RAM.
  • Example: 4GB of RAM yields 400GB of storage.
  • The recommendation is not enforced by the CCX platform.

6.5.1.3 - Importing Data

This procedure describes how to import data to a MariaDB datastore located in CCX.

  • The MariaDB Datastore on CCX is denoted as the ‘replica’
  • The source of the data is denoted as the ‘source’

note:

If you do not want to set up replication, you can choose to apply only the sections:

  • Create a database dump file
  • Apply the dumpfile on the replica

Limitations of MariaDB

MariaDB does not offer as fine-grained control over privileges as MySQL, nor does it have the same level of replication features.

The following properties must be respected in order to comply with the SLA:

  • There must be no user management happening on the source while the data is imported and the replication link is active. This is to avoid corruption of the mysql database and possibly other system databases.
  • It is recommended to set binlog-ignore-db on the source to mysql, performance_schema and sys during the data import/sync process.

Preparations

Ensure that the source is configured to act as a replication source.

  • Binary logging is enabled.
  • server_id is set to a non-zero value.

Also, prepare the replica with the databases you wish to replicate from the source:

  • Using the CCX UI, go to Databases, and issue a Create Database for each database that will be replicated.

Ensure the CCX Firewall is updated:

  • Add the replication source as a Trusted Source in the Firewall section of the CCX UI.

Create a replication user on the source

Create a replication user with sufficient privileges on the source:

CREATE USER 'repluser'@'%' IDENTIFIED BY '<SECRET>';
GRANT REPLICATION SLAVE ON *.* TO 'repluser'@'%';
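You can confirm the user was created with the expected privilege ('repluser' is the illustrative name from the example above):

```sql
-- Should list: GRANT REPLICATION SLAVE ON *.* TO 'repluser'@'%'
SHOW GRANTS FOR 'repluser'@'%';
```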

Prepare the replica to replicate from the source

The replica must be instructed to replicate from the source. Make sure to change <SOURCE_IP>, <SOURCE_PORT> and <SECRET>. Run the following on the replica:

CHANGE MASTER TO MASTER_HOST='<SOURCE_IP>', MASTER_PORT=<SOURCE_PORT>, MASTER_USER='repluser', MASTER_PASSWORD='<SECRET>', MASTER_SSL=1;

Create a database dump file of the source

The database dump contains the data that you wish to import into the replica. Only partial dumps are possible: the dump must not contain the mysql database or any other system databases.

danger: The dump must not contain the mysql database or any other system databases.

On the source, issue the following command. Change ADMIN, SECRET and DATABASES:

mysqldump -uADMIN -p<SECRET> --master-data --single-transaction --triggers --routines --events --databases DATABASES > dump.sql

If your database dump contains SPROCs, triggers or events, then you must replace DEFINER. This may take a while:

sed 's/\sDEFINER=`[^`]*`@`[^`]*`//g' -i dump.sql

Apply the dumpfile on the replica

cat dump.sql | mysql -uccxadmin -p -h<REPLICA_PRIMARY>

Start the replica

On the replica do:

START SLAVE;

followed by

SHOW SLAVE STATUS;

And verify that:

      Slave_IO_State: Waiting for source to send event
      ..
    Slave_IO_Running: Yes
   Slave_SQL_Running: Yes

When the migration is ready

STOP SLAVE;
RESET SLAVE ALL;

Troubleshooting

If the replication fails to start then verify:

  • All the steps above have been followed.
  • Ensure that the replication source is added as a Trusted Source in the Firewall section of the CCX UI.
  • Ensure that you have the correct IP/FQDN of the replication source.
  • Ensure that users are created correctly and using the correct password.
  • Ensure that the dump is fresh.
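The replica status also reports why the replication threads stopped, which usually points straight at the failing step:

```sql
-- On the replica: Last_IO_Errno / Last_IO_Error indicate connection
-- problems (firewall, wrong host, bad credentials), while
-- Last_SQL_Errno / Last_SQL_Error indicate statements that failed to
-- apply (often caused by a stale dump)
SHOW SLAVE STATUS\G
```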

6.5.1.4 - Limitations

Every product has limitations. Here is a list of MariaDB limitations:

Permissions

The privilege system is not as flexible as in MySQL.

The ‘ccxadmin’ user has the following privileges:

Global / all databases (*.*):

  • CREATE USER, REPLICATION SLAVE, REPLICATION SLAVE ADMIN, SLAVE MONITOR

On databases created from CCX, the admin user can create new users and grant privileges:

  • ALL PRIVILEGES WITH GRANT OPTION

This means that databases can only be created from the CCX UI. Once a database has been created there, the ccxadmin user can create users and grant privileges on that database using the MariaDB CLI.
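As a sketch of that workflow, assuming a database named mydb has already been created from the CCX UI (mydb and appuser are illustrative names, not CCX defaults):

```sql
-- Run as ccxadmin after creating 'mydb' in the CCX UI
CREATE USER 'appuser'@'%' IDENTIFIED BY '<SECRET>';
GRANT ALL PRIVILEGES ON mydb.* TO 'appuser'@'%' WITH GRANT OPTION;
```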

6.5.1.5 - Overview

CCX supports two types of MariaDB clustering:

  • MariaDB Replication (primary-replica configuration)
  • MariaDB Cluster (Multi-primary configuration)

For general purpose applications we recommend using MariaDB Replication, and we only recommend to use MariaDB Cluster if you are migrating from an existing application that uses MariaDB Cluster.

If you are new to MariaDB Cluster, we strongly recommend reading about the MariaDB Cluster 10.x limitations and the MariaDB Cluster overview to understand whether your application can benefit from MariaDB Cluster.

MariaDB Replication uses the standard asynchronous replication based on GTIDs.

Scaling

Storage and nodes can be scaled online.

Nodes (horizontal)

  • The maximum number of database nodes in a datastore is 5.
  • A multi-primary configuration must contain an odd number of nodes (1, 3 or 5).

Nodes (vertical)

A node cannot currently be scaled vertically. To scale to a larger instance type, add a larger instance and then remove the unwanted smaller instances.

Storage

  • Maximum size depends on the service provider and instance size
  • Volume type cannot currently be changed


6.5.1.6 - Restore

There are two options to restore a backup:

  • Restore a backup on the existing datastore
  • Restore a backup on a new datastore

Please note that restoring a backup may be a long-running process.

Restore a backup on the existing datastore

This option allows you to restore a backup with point-in-time recovery (PITR): the logs are replayed up to the desired point in time.

Warning! Running several restores may change the timelines.

Restore a backup on a new datastore

This option restores a backup on a newly created datastore. It does not currently support PITR.

6.5.1.7 - TLS Connection

SSL Modes

CCX currently supports connections to MariaDB in two SSL modes:

  1. REQUIRED: This mode requires an SSL connection. If a client attempts to connect without SSL, the server rejects the connection.
  2. VERIFY_CA: This mode requires an SSL connection and the server must verify the client’s certificate against the CA certificates that it has.

CA Certificate

The Certificate Authority (CA) certificate required for VERIFY_CA mode can be downloaded from your datastore on CCX using an API call, or through the user interface at https://{your_ccx_domain}/projects/default/data-stores/{datastore_id}/settings.

Example Commands

Here are example commands for connecting to the database server using the two supported SSL modes:

  1. REQUIRED mode:

    mysql --ssl-mode=REQUIRED -u username -p -h hostname
    
  2. VERIFY_CA mode:

    mysql --ssl-mode=VERIFY_CA --ssl-ca=ca.pem -u username -p -h hostname
    

require_secure_transport

This MariaDB setting governs whether connections to the datastore are required to use SSL. You can change this setting in CCX under Settings -> DB Parameters.

  • require_secure_transport = OFF (disable SSL enforcement): This is the default, to support legacy applications. If your legacy application doesn't support encrypted connections, you can disable enforcement of encrypted connections by setting require_secure_transport=OFF. However, connections are still encrypted unless SSL is disabled on the client. See the examples below.
  • require_secure_transport = ON (enforce SSL): This is the recommended configuration.

Examples

ssl-mode=DISABLED and require_secure_transport=OFF

mysql -uccxadmin -p -h...  -P3306 ccxdb --ssl-mode=disabled
...
mysql> \s
--------------
...
Connection id:		52
Current database:	ccxdb
Current user:		ccxadmin@...
*SSL:			Not in use*
Current pager:		stdout
...

ssl-mode=PREFERRED and require_secure_transport=OFF

mysql -uccxadmin -p -h...  -P3306 ccxdb --ssl-mode=preferred
...
mysql> \s
--------------
...
Connection id:		52
Current database:	ccxdb
Current user:		ccxadmin@...
SSL:			Cipher in use is TLS_AES_256_GCM_SHA384
Current pager:		stdout
...

ssl-mode=DISABLED and require_secure_transport=ON

mysql -uccxadmin -p -h...  -P3306 ccxdb --ssl-mode=disabled
mysql: [Warning] Using a password on the command line interface can be insecure.
ERROR 3159 (08004): Connections using insecure transport are prohibited while --require_secure_transport=ON.

ssl-mode=PREFERRED|REQUIRED and require_secure_transport=ON

mysql -uccxadmin -p -h...  -P3306 ccxdb --ssl-mode=preferred|required
mysql> \s
--------------
...
Connection id:		52
Current database:	ccxdb
Current user:		ccxadmin@...
SSL:			Cipher in use is TLS_AES_256_GCM_SHA384
Current pager:		stdout
...

tls_version

The tls_version is set to the following by default:

Variable_name Value
tls_version TLSv1.2,TLSv1.3

6.5.2 - MSSQLServer

6.5.2.1 - Configurations

Important default values

max_connections

  • SQL Server has no direct “max connection per GB of RAM” rule. The actual number of user connections allowed depends on the version of SQL Server that you are using, and also the limits of your application(s), and hardware.
  • SQL Server allows a maximum of 32,767 user connections.
  • User connections is a dynamic (self-configuring) option; SQL Server adjusts the maximum number of user connections automatically as needed, up to the maximum allowable value.
  • In most cases, you do not have to change the value for this option. The default is 0, which means that the maximum (32,767) user connections are allowed.
  • To determine the maximum number of user connections that your system allows, you can execute sp_configure or query the sys.configurations catalog view.
  • For more info: https://learn.microsoft.com/en-us/sql/database-engine/configure-windows/configure-the-user-connections-server-configuration-option?view=sql-server-ver16&viewFallbackFrom=sql-server-ver16.
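
The last two points can be checked directly with T-SQL. This is a minimal sketch, assuming you run it against your datastore with a login that has sufficient permissions (for example one created through CCX):

```sql
-- Show the configured maximum number of user connections (0 = dynamic, up to 32,767)
EXEC sp_configure 'user connections';

-- Or read the catalog view directly
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name = 'user connections';
```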

6.5.2.2 - Limitations

Every product has limitations. Below is a list of Microsoft SQL Server limitations:

License

  • The standard license is applied.

Configurations

  • Single node (no High Availability)
  • Always On (2 nodes, asynchronous commit mode, High Availability)

Always On-specific limitations

  • Refer to the Microsoft standard license for a complete list of limitations.
  • Only asynchronous commit mode is currently supported.
  • The ccxdb is currently the only supported Always On enabled database.
  • Scaling is not supported as the standard license does not permit more than two nodes.

User-created databases (not Always On) are not transferred to the replica

  • In the Always On configuration, only the ccxdb is replicated.
  • Data loss may occur for other user-created databases, as they are not transferred to the replica during the add node process. Therefore, they may be lost if a failover, automatic repair, or any other life-cycle management event occurs.

6.5.2.3 - Overview

CCX supports two Microsoft SQL Server 2022 configurations:

  • Single-node (no high availability)
  • Always On, two nodes, asynchronous-commit mode (high availability), in a primary-replica configuration.

The ‘standard’ license is applied.

Scaling

Scaling is not supported for SQL Server, as the standard license does not permit more than two nodes.

Storage

  • Maximum size depends on the service provider and instance size
  • Volume type cannot currently be changed

Further Reading

6.5.2.4 - User Management

CCX supports creating database users from the web interface.

The database user is created as follows:

CREATE LOGIN username WITH PASSWORD = 'SECRET',  DEFAULT_DATABASE=[master], CHECK_EXPIRATION=OFF,  CHECK_POLICY=OFF
ALTER SERVER ROLE [sysadmin] ADD MEMBER [username]

6.5.3 - MySQL

6.5.3.1 - Backup

Percona XtraBackup is used to create backups.

CCX backs up the primary server. In multi-primary setups, the node with the highest wsrep_local_index is elected.

Backups are streamed directly to S3 storage.

Percona XtraBackup blocks DDL operations during the backup using the --lock-ddl flag. Any attempt to CREATE, ALTER, DROP, or TRUNCATE a table while the backup runs will block with the status Waiting for backup lock (see SHOW FULL PROCESSLIST). In that case, wait for the backup to finish and perform the operation afterwards.

Also see the section ‘Schedule’.

Schedule

The backup schedule can be tuned and backups can be paused.

6.5.3.2 - Configuration

max_connections

  • 75 connections / GB of RAM.
  • Example: 4GB of RAM yields 300 connections.
  • This setting cannot be changed as it affects system stability.
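
The rule above can be expressed as a small helper. This is a sketch; the 75-connections-per-GB factor is the only input taken from this page:

```python
def mysql_max_connections(ram_gib: int) -> int:
    """CCX sizing rule: 75 connections per GB of RAM."""
    return 75 * ram_gib

# 4 GB of RAM yields 300 connections, matching the example above
print(mysql_max_connections(4))
```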

InnoDB settings

  • These settings cannot be changed as they affect system stability.

innodb_buffer_pool_size

  • 50% of RAM if total RAM is > 4GB
  • 25% of RAM if total RAM is <= 4GB

innodb_log_file_size

  • 1024 MB if innodb_buffer_pool_size >= 8192MB
  • 512 MB if innodb_buffer_pool_size < 8192MB

innodb_buffer_pool_instances

  • 8
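
The buffer pool and redo log rules above can be sketched together in a small, non-authoritative helper; the thresholds are taken verbatim from this page:

```python
def innodb_sizing(ram_mib: int) -> dict:
    """Derive innodb_buffer_pool_size and innodb_log_file_size (both in MiB)
    from total RAM, following the rules documented above."""
    # 50% of RAM if total RAM is > 4 GiB, otherwise 25%
    buffer_pool = ram_mib // 2 if ram_mib > 4096 else ram_mib // 4
    # 1024 MB if the buffer pool is >= 8192 MB, otherwise 512 MB
    log_file = 1024 if buffer_pool >= 8192 else 512
    return {"innodb_buffer_pool_size": buffer_pool,
            "innodb_log_file_size": log_file}

print(innodb_sizing(16384))  # 16 GiB RAM -> 8192 MiB pool, 1024 MiB log file
```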

InnoDB options

variable_name variable_value
innodb_buffer_pool_size Depends on instance size
innodb_flush_log_at_trx_commit 2
innodb_file_per_table 1
innodb_data_file_path Depends on instance size
innodb_read_io_threads 4
innodb_write_io_threads 4
innodb_doublewrite 1
innodb_buffer_pool_instances Depends on instance size
innodb_redo_log_capacity 8G
innodb_thread_concurrency 0
innodb_flush_method O_DIRECT
innodb_autoinc_lock_mode 2
innodb_stats_on_metadata 0
default_storage_engine innodb

General options

variable_name variable_value
tmp_table_size 64M
max_heap_table_size 64M
max_allowed_packet 1G
sort_buffer_size 256K
read_buffer_size 256K
read_rnd_buffer_size 512K
memlock 0
sysdate_is_now 1
max_connections Depends on instance size
thread_cache_size 512
table_open_cache 4000
table_open_cache_instances 16
lower_case_table_names 0

Storage

  • We recommend a maximum of 100GB storage per GB of RAM.
  • Example: 4GB of RAM yields 400GB of storage.
  • The recommendation is not enforced by the CCX platform.

6.5.3.3 - Importing Data

This procedure describes how to import data to a MySQL datastore located in CCX.

  • The MySQL Datastore on CCX is denoted as the ‘replica’
  • The source of the data is denoted as the ‘source’

note:

If you don’t want to set up replication, you can choose to only apply the following sections:

  • Create a database dump file
  • Apply the dumpfile on the replica

Preparations

Ensure that the source is configured to act as a replication source:

  • Binary logging is enabled.
  • server_id is set to a non-zero value.

Ensure the CCX Firewall is updated:

  • Add the replication source as a Trusted Source in the Firewall section of the CCX UI.

Create a replication user on the source

Create a replication user with sufficient privileges on the source:

CREATE USER 'repluser'@'%' IDENTIFIED BY '<SECRET>';
GRANT REPLICATION SLAVE ON *.* TO  'repluser'@'%';

Prepare the replica to replicate from the source

The replica must be instructed to replicate from the source:

Make sure to change <SOURCE_IP>, <SOURCE_PORT>, and <SECRET>.

CHANGE REPLICATION SOURCE TO SOURCE_HOST=<SOURCE_IP>, SOURCE_PORT=<SOURCE_PORT>, SOURCE_USER='repluser', SOURCE_PASSWORD='<SECRET>', SOURCE_SSL=1;

Create a replication filter on the replica

The replica filter prevents corruption of the datastore.

If the datastore’s system tables are corrupted through replication, the SLA is void and the datastore must be recreated.

CHANGE REPLICATION FILTER REPLICATE_IGNORE_DB=(mysql,sys, performance_schema);

Create a database dump file

The database dump contains the data that you wish to import into the replica. Only partial dumps are supported: the dump must not contain the mysql database or any other system databases.

On the source, issue the following command. Change USER, SECRET and DATABASES:

mysqldump --set-gtid-purged=OFF -uUSER -pSECRET   --master-data --single-transaction --triggers --routines --events  --databases DATABASES > dump.sql

Important! If your database dump contains stored procedures, triggers, or events, then you must replace DEFINER:

sed 's/\sDEFINER=`[^`]*`@`[^`]*`//g' -i dump.sql
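
To see what the sed expression does, here is a self-contained run on a hypothetical dump line (the procedure name is made up for illustration; \s requires GNU sed):

```shell
printf 'CREATE DEFINER=`admin`@`%%` PROCEDURE report_totals()\n' |
  sed 's/\sDEFINER=`[^`]*`@`[^`]*`//g'
# prints: CREATE PROCEDURE report_totals()
```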

Apply the dumpfile on the replica

cat dump.sql | mysql -uccxadmin -p -h<REPLICA_PRIMARY>

Start the replica

On the replica do:

START REPLICA;

followed by

SHOW REPLICA STATUS\G

And verify that:

    Replica_IO_State: Waiting for source to send event
                 ...
  Replica_IO_Running: Yes
 Replica_SQL_Running: Yes

When the migration is ready

STOP REPLICA;
RESET REPLICA ALL;
CHANGE REPLICATION FILTER REPLICATE_IGNORE_DB=();

Troubleshooting

If the replication fails to start then verify:

  • All the steps above have been followed.
  • Ensure that the replication source is added as a Trusted Source in the Firewall section of the CCX UI.
  • Ensure that you have the correct IP/FQDN of the replication source.
  • Ensure that users are created correctly and using the correct password.
  • Ensure that the dump is fresh.

6.5.3.4 - Importing Data From AWS RDS

This procedure describes how to import data from Amazon RDS to a MySQL datastore located in CCX.

  • The MySQL Datastore on CCX is referred to as the ‘CCX Primary’
  • The RDS Source of the data is referred to as the ‘RDS Writer’

Schematically, this is what we will set up:

(diagram: replication from the RDS Writer to the CCX Primary)

warning:

AWS RDS makes it intentionally difficult to migrate away from. Many procedures on the internet, as well as AWS’s own procedures, will not work.

The migration we suggest here (and is the only one we know works) requires that the RDS Writer instance be blocked for writes until a mysqldump has been completed. However, AWS RDS blocks operations such as FLUSH TABLES WITH READ LOCK: mysqldump: Couldn't execute 'FLUSH TABLES WITH READ LOCK': Access denied for user 'admin'@'%' (using password: YES) (1045) Therefore, the actual applications must be blocked from writing.

Also, some procedures on the internet suggest creating a read-replica. This will not work either, as the AWS RDS Read-replica is crippled and lacks GTID support.

note:

If you don’t want to set up replication, you can choose to only apply the following sections:

  • Create a database dump file of the RDS Writer
  • Apply the dump file on the CCX replica

Also, practice this a few times before you actually do the migration.

Preparations

  • Create a datastore on CCX. Note that you can also replicate from MySQL 8.0 to MySQL 8.4.
  • Get the endpoint of the CCX Primary (under the Nodes section). The endpoint in our case is db-9bq15.471ed518-8524-4f37-a3b2-136c68ed3aa6.user-ccx.mydbservice.net.
  • Get the endpoint of the RDS Writer. In this example, the endpoint is database-1.cluster-cqc4xehkpymd.eu-north-1.rds.amazonaws.com
  • Update the Security group on AWS RDS to allow the IP address of the CCX Primary to connect. To get the IP address of the CCX Primary, run:
    dig db-9bq15.471ed518-8524-4f37-a3b2-136c68ed3aa6.user-ccx.mydbservice.net
    
  • Ensure you can connect a MySQL client to both the CCX Primary and the RDS Writer.

Create a Replication User On the RDS Writer Instance

Create a replication user with sufficient privileges on the RDS Writer. In the steps below, we will use repl and replpassword as the credentials when setting up the replica on CCX.

CREATE USER 'repl'@'%' IDENTIFIED BY 'replpassword';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%'; # MySQL 8.0
GRANT REPLICATION_SLAVE_ADMIN ON *.* TO 'repl'@'%';

Block Writes to the RDS Writer Instance

This is the most challenging part. You must ensure your applications cannot write to the Writer instance. Unfortunately, AWS RDS blocks operations like FLUSH TABLES WITH READ LOCK.

Create a Consistent Dump

Assuming that writes are now blocked on the RDS Writer Instance, you must get the binary log file and the position of the RDS Writer instance.

Get the Replication Start Position

The start position (binary log file name and position) is used to tell the replica where to start replicating data from.

MySQL 8.0: SHOW MASTER STATUS\G
MySQL 8.4 and later: SHOW BINARY LOG STATUS\G

It will output:

 *************************** 1. row ***************************
             File: mysql-bin-changelog.000901
         Position: 584
     Binlog_Do_DB:
 Binlog_Ignore_DB:
Executed_Gtid_Set: 796aacf3-24ed-11f0-949d-0605a27ab4b9:1-876
1 row in set (0.02 sec)

Record the File: mysql-bin-changelog.000901 and the Position: 584 as they will be used to set up replication.

Create the mysqldump

Be sure to specify the databases you wish to replicate. You must omit any system databases. In this example, we will dump the databases prod and crm.

mysqldump -uadmin -p -hdatabase-1.cluster-cqc4xehkpymd.eu-north-1.rds.amazonaws.com --databases prod crm --triggers --routines --events --set-gtid-purged=OFF --single-transaction > dump.sql

Wait for it to complete.

Unblock Writes to the RDS Writer Instance

At this stage, it is safe to enable application writes again.

Load the Dump On the Replica

Create a Replication Filter On the Replica

The replica filter prevents corruption of the datastore, and we are not interested in changes logged by AWS RDS to mysql.rds* tables anyway. Also add other databases that you do not wish to replicate to the filter.

note:

If the CCX datastore’s system tables are corrupted using replication, then the datastore must be recreated.

CHANGE REPLICATION FILTER REPLICATE_IGNORE_DB=(mysql, sys, performance_schema);

Important! If your database dump contains stored procedures, triggers, or events, then you must replace DEFINER:

sed 's/\sDEFINER=`[^`]*`@`[^`]*`//g' -i dump.sql

Apply the Dump File On the CCX Primary:

cat dump.sql | mysql -uccxadmin -p -hCCX_PRIMARY

Connect the CCX Primary to the RDS Writer Instance

The CCX Primary must be instructed to replicate from the RDS Writer. We have the binlog file and position from the earlier step:

  • mysql-bin-changelog.000901
  • 584
CHANGE REPLICATION SOURCE TO SOURCE_HOST='database-1.cluster-cqc4xehkpymd.eu-north-1.rds.amazonaws.com', SOURCE_PORT=3306, SOURCE_USER='repl', SOURCE_PASSWORD='replpassword', SOURCE_SSL=1, SOURCE_LOG_FILE='mysql-bin-changelog.000901', SOURCE_LOG_POS=584;

Start the Replica

On the replica, run:

START REPLICA;

followed by:

SHOW REPLICA STATUS\G

And verify that:

    Replica_IO_State: Waiting for source to send event
                 ...
  Replica_IO_Running: Yes
 Replica_SQL_Running: Yes

When the Migration is Ready

At some point, you will need to point your applications to the new datastore. Ensure:

  • Prevent writes to the RDS Writer
  • Make sure the CCX Primary has applied all data (use SHOW REPLICA STATUS)
  • Connect the applications to the new datastore
STOP REPLICA;
RESET REPLICA ALL;
CHANGE REPLICATION FILTER REPLICATE_IGNORE_DB=();

Troubleshooting

If the replication fails to start, verify:

  • All the steps above have been followed
  • Ensure that the IP address of the CCX Primary is added to the security group used by the RDS Writer instance
  • Ensure that you have the correct IP/FQDN of the RDS Writer instance
  • Ensure that users are created correctly and using the correct password
  • Ensure that the dump is fresh

6.5.3.5 - Importing Data From GCP

This procedure describes how to import data from Google Cloud SQL to a MySQL datastore located in CCX.

  • The MySQL Datastore on CCX is referred to as the ‘CCX Primary’
  • The GCP Source of the data is referred to as the ‘GCP Primary’

Schematically, this is what we will set up:

(diagram: replication from the GCP Primary to the CCX Primary)

note:

If you don’t want to set up replication, you can choose to only apply the following sections:

  • Create a database dump file of the GCP Primary
  • Apply the dump file on the CCX replica

Also, practice this a few times before you actually do the migration.

Preparations

  • Create a datastore on CCX. Note that you can also replicate from MySQL 8.0 to MySQL 8.4.
  • Get the endpoint of the CCX Primary (under the Nodes section). The endpoint in our case is db-9bq15.471ed518-8524-4f37-a3b2-136c68ed3aa6.user-ccx.mydbservice.net.
  • The GCP Primary must have a Public IP.
  • Get the endpoint of the GCP Primary. In this example, the endpoint is 34.51.xxx.xxx
  • Update the Security group on GCP to allow the IP address of the CCX Primary to connect. To get the IP address of the CCX Primary, run:
    dig db-9bq15.471ed518-8524-4f37-a3b2-136c68ed3aa6.user-ccx.mydbservice.net
    
  • Ensure you can connect a MySQL client to both the CCX Primary and the GCP Primary.

Create a Replication User on the GCP Primary Instance

Create a replication user with sufficient privileges on the GCP Primary. In the steps below, we will use repl and replpassword as the credentials when setting up the replica on CCX.

CREATE USER 'repl'@'%' IDENTIFIED BY 'replpassword';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%'; # MySQL 8.0
GRANT REPLICATION_SLAVE_ADMIN ON *.* TO 'repl'@'%';

Create the mysqldump

Be sure to specify the databases you wish to replicate. You must omit any system databases. In this example, we will dump the databases prod and crm.

mysqldump -uroot -p -h34.51.xxx.xxx --databases prod crm --triggers --routines --events --set-gtid-purged=OFF --source-data --single-transaction > dump.sql

Wait for it to complete.

Load the Dump on the Replica

Create a Replication Filter on the Replica

The replica filter prevents corruption of the datastore, and we are not interested in changes logged by Cloud SQL to the mysql system tables anyway. Also add any other databases that you do not wish to replicate to the filter.

note:

If the CCX datastore’s system tables are corrupted using replication, then the datastore must be recreated.

CHANGE REPLICATION FILTER REPLICATE_IGNORE_DB=(mysql, sys, performance_schema);

Important! If your database dump contains stored procedures, triggers, or events, then you must replace DEFINER:

sed 's/\sDEFINER=`[^`]*`@`[^`]*`//g' -i dump.sql

Apply the Dump File on the CCX Primary:

cat dump.sql | mysql -uccxadmin -p -hCCX_PRIMARY

Connect the CCX Primary to the GCP Primary

Issue the following commands on the CCX Primary:

CHANGE REPLICATION SOURCE TO SOURCE_HOST='34.51.xxx.xxx', SOURCE_PORT=3306, SOURCE_USER='repl', SOURCE_PASSWORD='replpassword', SOURCE_SSL=1;

Start the Replica

On the CCX Primary, run:

START REPLICA;

followed by:

SHOW REPLICA STATUS\G

And verify that:

    Replica_IO_State: Waiting for source to send event
                 ...
  Replica_IO_Running: Yes
 Replica_SQL_Running: Yes

When the Migration is Ready

At some point, you will need to point your applications to the new datastore. Ensure:

  • There are no application writes to the GCP Primary
  • The CCX Primary has applied all data (use SHOW REPLICA STATUS\G and check Seconds_Behind_Source)
  • Connect the applications to the new datastore

Then you can clean up the replication link on the CCX Primary:

STOP REPLICA;
RESET REPLICA ALL;
CHANGE REPLICATION FILTER REPLICATE_IGNORE_DB=();

Troubleshooting

If the replication fails to start, verify:

  • All the steps above have been followed
  • Ensure that the IP address of the CCX Primary is added to the security group used by the GCP Primary instance
  • Ensure that you have the correct IP/FQDN of the GCP Primary instance
  • Ensure that users are created correctly and using the correct password
  • Ensure that the dump is fresh

6.5.3.6 - Limitations

Every product has limitations. Below is a list of MySQL limitations:

Permissions

The privilege system in MySQL offers more capabilities than MariaDB’s. Hence, the ‘ccxadmin’ user has more privileges in MySQL than in MariaDB.

The ‘ccxadmin’ user has the following privileges:

  • Global / all databases (*.*):
    • SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, PROCESS, REFERENCES, INDEX, ALTER, SHOW DATABASES, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, REPLICATION SLAVE, REPLICATION CLIENT, CREATE VIEW, REPLICATION_SLAVE_ADMIN, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, CREATE USER, EVENT, TRIGGER, GRANT

This means that the ‘ccxadmin’ may assign privileges to users on all databases.

Restrictions:

‘ccxadmin’ is not allowed to modify the following databases

  • mysql.*
  • sys.*

For those databases, the following privileges have been revoked from ‘ccxadmin’:

  • INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, INDEX, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, EVENT, TRIGGER

6.5.3.7 - Overview

CCX supports two types of MySQL clustering:

  • MySQL Replication (Primary-replica configuration)
  • Percona XtraDB Cluster (Multi-primary configuration)

For general-purpose applications we recommend MySQL Replication; we only recommend Percona XtraDB Cluster if you are migrating an existing application that already uses it.

If you are new to Percona XtraDB Cluster, we strongly recommend reading the Percona XtraDB Cluster limitations and Percona XtraDB Cluster Overview to understand whether your application can benefit from it.

MySQL Replication uses the standard asynchronous replication based on GTIDs.

Scaling

Storage and nodes can be scaled online.

Nodes (horizontal)

  • The maximum number of database nodes in a datastore is 5.
  • A multi-primary configuration must contain an odd number of nodes (1, 3, or 5).

Nodes (vertical)

Nodes cannot currently be scaled vertically. To move to a larger instance type, add a node of the larger type and then remove the unwanted smaller nodes.

Storage

  • Maximum size depends on the service provider and instance size
  • Volume type cannot currently be changed

Further Reading

6.5.3.8 - Restore

There are two options to restore a backup:

  • Restore a backup on the existing datastore
  • Restore a backup on a new datastore

Please note that restoring a backup may be a long running process.

Restoring a backup on the existing datastore supports point-in-time recovery (PITR): the binary logs are replayed up to the desired point in time. Warning! Running several restores may change the timelines.

Restoring a backup on a new datastore does not currently support PITR.

6.5.3.9 - TLS Connection

SSL Modes

CCX currently supports connections to MySQL in two SSL modes:

  1. REQUIRED: This mode requires an SSL connection. If a client attempts to connect without SSL, the server rejects the connection.

  2. VERIFY_CA: This mode requires an SSL connection and the server must verify the client’s certificate against the CA certificates that it has.

CA Certificate

The Certificate Authority (CA) certificate required for VERIFY_CA mode can be downloaded from your datastore on CCX via an API call or through the user interface at https://{your_ccx_domain}/projects/default/data-stores/{datastore_id}/settings.

Example Commands

Here are example commands for connecting to the MySQL server using the two supported SSL modes:

  1. REQUIRED mode:

    mysql --ssl-mode=REQUIRED -u username -p -h hostname
    
  2. VERIFY_CA mode:

    mysql --ssl-mode=VERIFY_CA --ssl-ca=ca.pem -u username -p -h hostname
    

require_secure_transport

This is a MySQL setting that governs if connections to the datastore are required to use SSL. You can change this setting in CCX in Settings -> DB Parameters:

Scenario Server Parameter Settings Description
Disable SSL enforcement require_secure_transport = OFF This is the default to support legacy applications. If your legacy application doesn’t support encrypted connections, you can disable enforcement of encrypted connections by setting require_secure_transport=OFF. However, connections are encrypted unless SSL is disabled on the client. See examples
Enforce SSL require_secure_transport = ON This is the recommended configuration.

Examples

ssl-mode=DISABLED and require_secure_transport=OFF

mysql -uccxadmin -p -h...  -P3306 ccxdb --ssl-mode=disabled
...
mysql> \s
--------------
...
Connection id:		52
Current database:	ccxdb
Current user:		ccxadmin@...
SSL:			Not in use
Current pager:		stdout
...

ssl-mode=PREFERRED and require_secure_transport=OFF

mysql -uccxadmin -p -h...  -P3306 ccxdb --ssl-mode=preferred
...
mysql> \s
--------------
...
Connection id:		52
Current database:	ccxdb
Current user:		ccxadmin@...
SSL:			Cipher in use is TLS_AES_256_GCM_SHA384
Current pager:		stdout
...

ssl-mode=DISABLED and require_secure_transport=ON

mysql -uccxadmin -p -h...  -P3306 ccxdb --ssl-mode=disabled
mysql: [Warning] Using a password on the command line interface can be insecure.
ERROR 3159 (08004): Connections using insecure transport are prohibited while --require_secure_transport=ON.

ssl-mode=PREFERRED|REQUIRED and require_secure_transport=ON

mysql -uccxadmin -p -h...  -P3306 ccxdb --ssl-mode=preferred|required
mysql> \s
--------------
...
Connection id:		52
Current database:	ccxdb
Current user:		ccxadmin@...
SSL:			Cipher in use is TLS_AES_256_GCM_SHA384
Current pager:		stdout
...

tls_version

The tls_version is set to the following by default:

Variable_name Value
tls_version TLSv1.2,TLSv1.3

6.5.3.10 - User Management

CCX supports creating database users from the web interface. The database user has the following privileges:

  • Global / all databases (*.*):
    • SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, PROCESS, REFERENCES, INDEX, ALTER, SHOW DATABASES, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, REPLICATION SLAVE, REPLICATION CLIENT, CREATE VIEW, REPLICATION_SLAVE_ADMIN, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, CREATE USER, EVENT, TRIGGER, GRANT

This means that the database user may assign privileges to users on all databases.

Restrictions:

The database user is not allowed to modify the following databases

  • mysql.*
  • sys.*

For those databases, the following privileges have been revoked:

  • INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, INDEX, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, EVENT, TRIGGER

6.5.4 - PostgreSQL

6.5.4.1 - Backup

pg_basebackup is used to create backups. Also see the section ‘Schedule’.

CCX backs up the secondary server.

Backups are streamed directly to S3 storage.

Schedule

The backup schedule can be tuned and backups can be paused.

6.5.4.2 - Configuration

These settings cannot be changed as they affect system stability.

Important default values

Parameter Default value
wal_keep_size 1024 (v.1.50+) / 512
max_wal_senders min 16, max 4 x Db Node count
wal_level replica
hot_standby ON
max_connections see below
shared_buffers instance_memory x 0.25
effective_cache_size instance_memory x 0.75
work_mem instance_memory / max_connections
maintenance_work_mem instance_memory/16

Max connections

The maximum number of connections depends on the instance size. To scale the number of connections, add a new database secondary with a larger instance size; the new replica can then be promoted to primary. See Promoting a replica for more information.

Instance size (GiB RAM) Max connections
< 4 100
8 200
16 400
32 800
64+ 1000
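
The table and the formulas above can be combined into one sketch. The tier boundaries between the listed instance sizes (4, 8, 16, 32, 64 GiB) are an assumption; only those exact sizes are documented here:

```python
def pg_defaults(ram_gib: int) -> dict:
    """Derive the documented defaults from instance RAM (GiB)."""
    # max_connections from the table above
    for limit, conns in ((4, 100), (16, 200), (32, 400), (64, 800)):
        if ram_gib < limit:
            max_connections = conns
            break
    else:
        max_connections = 1000
    ram_mib = ram_gib * 1024
    return {
        "max_connections": max_connections,
        "shared_buffers_mib": ram_mib // 4,            # instance_memory x 0.25
        "effective_cache_size_mib": ram_mib * 3 // 4,  # instance_memory x 0.75
        "work_mem_mib": ram_mib // max_connections,    # instance_memory / max_connections
        "maintenance_work_mem_mib": ram_mib // 16,     # instance_memory / 16
    }

print(pg_defaults(8))
```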

Archive mode

All nodes are configured with archive_mode=always.

Auto-vacuum

Auto-vacuum settings are left at their defaults. Please read more about automatic vacuuming here

6.5.4.3 - Extensions

Supported extensions

Extension Postgres version
vector (pgvector) 15 and later
postgis 15 and later

Creating an extension

Connect to PostgreSQL using an admin account (e.g. ccxadmin).

CREATE EXTENSION vector;
CREATE EXTENSION

See Postgres documentation for more information.
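
As a quick illustration of using an extension once it is created, here is a minimal pgvector example (the items table and its values are hypothetical):

```sql
CREATE TABLE items (id bigserial PRIMARY KEY, embedding vector(3));
INSERT INTO items (embedding) VALUES ('[1,2,3]'), ('[4,5,6]');

-- Nearest neighbour by Euclidean distance
SELECT id FROM items ORDER BY embedding <-> '[3,1,2]' LIMIT 1;
```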

6.5.4.4 - Importing Data

This procedure describes how to import data to a PostgreSQL datastore located in CCX.

  • The PostgreSQL Datastore on CCX is denoted as the ‘replica’
  • The source of the data is denoted as the ‘source’

Create a database dump file

Dump the database schema from the <DATABASE> you wish to replicate:

pg_dump --no-owner -d<DATABASE> > /tmp/DATABASE.sql

Apply the dumpfile on the replica

postgres=# CREATE DATABASE <DATABASE>;

Copy the DSN from Nodes, Connection Information in the CCX UI. Change ‘ccxdb’ to <DATABASE>:

psql postgres://ccxadmin:.../<DATABASE> <  /tmp/DATABASE.sql

6.5.4.5 - Limitations

Every product has limitations. Below is a list of PostgreSQL limitations:

Permissions

PostgreSQL users are created with the following permissions:

  • NOSUPERUSER, CREATEROLE, LOGIN, CREATEDB

6.5.4.6 - Restore

Postgres configures archive_command and archive_mode=always. Moreover, during a restore the restore_command is set.

There are two options to restore a backup:

  • Restore a backup on the existing datastore
  • Restore a backup on a new datastore

Please note that restoring a backup may be a long running process.

Restoring a backup on the existing datastore supports point-in-time recovery (PITR): the WAL logs are replayed up to the desired point in time. Warning! Running several restores may change the timelines.

Restoring a backup on a new datastore does not currently support PITR.

6.5.5 - Redis

6.5.5.1 - Backup

A backup of Redis consists of both RDB and AOF.

Schedule

The backup schedule can be tuned and backups can be paused.

6.5.5.2 - Configuration

Volume size

Since Redis is an in-memory database, the storage size is fixed at twice the amount of RAM. Thus, it is not possible to:

  • specify the storage size in the deployment wizard.
  • scale the storage.

Persistence

Redis is configured to use both AOF and RDB for persistence. The following configuration parameters are set:

  • appendonly yes
  • default values for AOF
  • default values for RDB

6.5.5.3 - User Management

CCX simplifies Redis user management by providing a clear and intuitive user interface for managing privileges and accounts. Below are detailed instructions and explanations for managing Redis users within the CCX environment.

Viewing Existing Users

To view existing Redis users:

  1. Navigate to the Users section in your CCX Redis cluster.
  2. Here you’ll see a list of existing user accounts along with their associated privileges.

(screenshot: list of users)

User Information Displayed:

  • Account: Username of the Redis user.
  • Privileges: Specific privileges granted or filtered out.
  • Actions: Options to manage (modify/delete) the user.

Note: By default, the -@admin and -@dangerous privileges are filtered out for security purposes.

Creating a New Redis Admin User

(screenshot: Create Admin user dialog)

To create a new Redis admin user:

  1. Click on the Create Admin user button.

  2. Fill in the required fields:

    • Username: Enter the desired username.
    • Password: Enter a secure password for the user.
    • Categories: Enter the privilege categories. By default, using +@all will grant all privileges except those explicitly filtered (like -@admin and -@dangerous).
  3. Optionally, you can define more granular restrictions:

    • Commands: Enter commands to explicitly allow (+) or disallow (-). For example:

      • Allow command: +get
      • Disallow command: -get
    • Channels: Specify Redis Pub/Sub channels. You can allow (&channel) or disallow (-&channel).

    • Keys: Specify key access patterns. Use the syntax ~key to allow or ~-key to disallow access to specific keys or patterns.

  4. After completing the form, click on the Create button to save the new user.
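
For reference, the form fields above map onto standard Redis ACL rules. A hypothetical user created with categories +@all, the extra disallowed command -flushall, channel alerts, and key pattern reports:* would correspond to an ACL rule like the following (the -@admin and -@dangerous filters are appended by CCX):

```
ACL SETUSER reporting on >s3cretpass +@all -@admin -@dangerous -flushall ~reports:* &alerts
```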

Default Privilege Filtering

CCX ensures the security of your Redis instance by automatically filtering potentially harmful privileges:

  • -@admin: Restricts administrative commands.
  • -@dangerous: Restricts commands that could compromise the cluster’s stability.

These privileges cannot be granted through CCX’s standard user interface for security reasons.

Firewall and Access Control

User accounts in CCX Redis clusters are protected by built-in firewall rules:

  • Accounts are only allowed to connect from trusted sources defined in the firewall settings.

Ensure your firewall rules are properly configured to maintain secure access control to your Redis users.

6.5.6 - Valkey

6.5.6.1 - Backup

A backup of Valkey consists of both RDB and AOF.

Schedule

The backup schedule can be tuned and backups can be paused.

6.5.6.2 - Configuration

Volume size

Since Valkey is an in-memory database, the storage size is fixed at twice the amount of RAM. Thus, it is not possible to:

  • specify the storage size in the deployment wizard.
  • scale the storage.

Persistence

Valkey is configured to use both AOF and RDB for persistence. The following configuration parameters are set:

  • appendonly yes
  • default values for AOF
  • default values for RDB
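For reference, this corresponds to settings like the following in the server configuration. The AOF and RDB values shown are the upstream defaults, listed here only for illustration:

```
appendonly yes
# AOF default: fsync the append-only file once per second
appendfsync everysec
# RDB defaults: snapshot after 1 change in 1 h, 100 changes in 5 min,
# or 10000 changes in 1 min
save 3600 1 300 100 60 10000
```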

6.5.6.3 - User Management

CCX simplifies Valkey user management by providing a clear and intuitive user interface for managing privileges and accounts. Below are detailed instructions and explanations for managing Valkey users within the CCX environment.

Viewing Existing Users

To view existing Valkey users:

  1. Navigate to the Users section in your CCX Valkey cluster.
  2. Here you’ll see a list of existing user accounts along with their associated privileges.

list users

User Information Displayed:

  • Account: Username of the Valkey user.
  • Privileges: Specific privileges granted or filtered out.
  • Actions: Options to manage (modify/delete) the user.

Note: By default, the -@admin and -@dangerous privileges are filtered out for security purposes.

Creating a New Valkey Admin User

create admin user

To create a new Valkey admin user:

  1. Click on the Create Admin user button.

  2. Fill in the required fields:

    • Username: Enter the desired username.
    • Password: Enter a secure password for the user.
    • Categories: Enter the privilege categories. By default, using +@all will grant all privileges except those explicitly filtered (like -@admin and -@dangerous).
  3. Optionally, you can define more granular restrictions:

    • Commands: Enter commands to explicitly allow (+) or disallow (-). For example:

      • Allow command: +get
      • Disallow command: -get
    • Channels: Specify Valkey Pub/Sub channels. You can allow (&channel) or disallow (-&channel).

    • Keys: Specify key access patterns. Use the syntax ~key to allow or ~-key to disallow access to specific keys or patterns.

  4. After completing the form, click on the Create button to save the new user.
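Taken together, the fields above map onto standard Valkey ACL rules. As a hypothetical sketch (the user name, password, key pattern and channel pattern are placeholders, not defaults), the resulting rule would be equivalent to this ACL-file line:

```
user appuser on >s3cretpass +@all -@admin -@dangerous -get ~app:* &news.*
```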

Default Privilege Filtering

CCX ensures the security of your Valkey instance by automatically filtering potentially harmful privileges:

  • -@admin: Restricts administrative commands.
  • -@dangerous: Restricts commands that could compromise the cluster’s stability.

These privileges cannot be granted through CCX’s standard user interface for security reasons.

Firewall and Access Control

User accounts in CCX Valkey clusters are protected by built-in firewall rules:

  • Accounts are only allowed to connect from trusted sources defined in the firewall settings.

Ensure your firewall rules are properly configured to maintain secure access control to your Valkey users.

6.6 - Supported Databases

Database                        Topology                         CCX Supported   EOL           Notes
MariaDB                         Primary/Replica, Multi-Primary   10.11           16 Feb 2028
MariaDB                         Primary/Replica, Multi-Primary   11.4            29 May 2029
MySQL                           Primary/Replica, Multi-Primary   8.0             April 2026
MySQL                           Primary/Replica, Multi-Primary   8.4             30 Apr 2029
PostgreSQL                      Primary/Replica                  14              12 Nov 2026
PostgreSQL                      Primary/Replica                  15              11 Nov 2027
PostgreSQL                      Primary/Replica                  16              8 Nov 2028
Redis                           Sentinel                         7.2                           deprecated
Valkey                          Sentinel                         8               tbd
Microsoft SQL Server for Linux  Single Instance                  2022            2027?
Microsoft SQL Server for Linux  Primary/Replica (Always On)      2022            2027?

7 - Changelog

Latest changes for ELASTX DBaaS

V 1.55

Overview

Changes have been introduced in the user onboarding flow. Important security fixes have been deployed.

Changes

  • Change in user onboarding flow.

Important fixes

  • Fixed CMON config invalid save.
  • Fixed single-server MSSQL datastore creation failing unless 1 node was manually chosen.
  • UI fix for creating a datastore from backup when changing node configuration.
  • Backend now uses Valkey 8.0.6, resolving security issues (CVE-2025-49844).

V 1.54

Overview

Support for new PostgreSQL and MariaDB versions. Added more Valkey modules.

Changes

  • PostgreSQL 17.
  • MariaDB 11.8 including MariaDB Vector.
  • Improved instance selection.
  • Added valkey-search and valkey-json.

Important fixes

  • Fixed missing add/extend storage option for MSSQL.
  • Fixed issue with PostgreSQL 17 unable to set DB parameters.
  • Corrected disk utilization charts (sum vs. max).

V 1.53

Overview

Valkey now replaces Redis as the option for creating new datastores. The monitoring charts have been improved and refreshed.

Changes

  • Improved backup handling.
  • Customer log UI improvements.
  • Terraform functionality extensions.

Important fixes

  • Fixed problem with DB parameter acceptance and group synchronisation.
  • Corrected a problem with incorrect volume type for expanded volumes.
  • Improved datastore status transitions during upgrades.

V 1.51

Overview

A new way to group custom database parameters is introduced. This allows you to apply the group to multiple datastores in a more structured way. Autoscaling of volumes has been improved; the actual scaling is less intrusive than it was in 1.50. Added a SUPERUSER checkbox when creating a new PostgreSQL user, with caution prompts.

Changes

  • Parameter Groups for Database Management.
  • Database Logs in Events Viewer.
  • Create Datastore from incremental backup from different storage types.
  • Reboot database node.
  • Make Postgres SUPERUSER configurable.

Important fixes

  • Fixed deployments stuck in deploying status forever.
  • Corrected state transitions so a failed deployment eventually marks as “failed” instead of hanging.
  • Reduced unnecessary Service updates, lowering API calls to Kubernetes.
  • Set correct interval to 15/30/60 minutes for incremental backups.
  • Disabled volume-editing for ephemeral storage as it was never intended.

V 1.50

Overview

This release offers upward volume autoscaling, new customer database parameters, and improved monitoring in the form of mail notifications and more metrics. It is now possible to create (m)TLS-based sessions where the client can prefetch server certificates. Backup management has been improved, disallowing concurrent backup race conditions.

Changes

  • Auto-scale volumes, enabled by default.
  • Send email notifications to end user.
  • (m)TLS for MySQL/MariaDB, Postgres and Redis.
  • Do all DNS queries through ExternalDNS.
  • The Terraform provider has been substantially improved.

Important fixes

  • Fixed a problem where multiple concurrent backups were executed.
  • Fixed a problem with removing datastores stuck in creating or modifying state.
  • Fixed Redis and MSSQL backup restore, which was not working properly.
  • Optimized failover time for Always On.

V 1.48

Overview

In this release we introduce MSSQL, and the new OpenStack V2 instance flavors and volume types, giving even better performance and price efficiency.

Changes

  • MSSQL in Standalone and Always On versions.
  • Lifecycle management, database upgrades.
  • Improved automatic datastore failover handling.
  • Change existing datastore volume type and size.
  • Account password management.
  • Choose new V2 node flavors with improved performance.
  • Mobile UI.
  • Datastore UI overview page paging and filtering.
  • Terraform provider upgrade allowing automated datastore and firewall management.
  • Improved documentation with more practical examples for, among other things, external backup/restore and Terraform provider usage, and much more.

Important fixes

  • Fixed a problem where the DNS name for datastore nodes occasionally disappeared.
  • Fixed Postgres creation and restore, which could occasionally fail.
  • Corrected a problem where datastore creation from backup failed.

V 1.47

Overview

This release focuses on datastore failure handling, e.g. when nodes are lost and how the failover is managed. It introduces improved general database life cycle management and initial backend support for MSSQL Server 2022.

Changes

  • Automatic datastore failure handling.
  • Datastore creation from backups.
  • Improved datastore upgrade process.
  • Expose monitoring ports for customer Prometheus clients.
  • Repair and node scaling for MSSQL.
  • UI view filtering and list presentation.
  • Improved UI guidance tool tips.
  • Terraform API for grouped firewall rules.

Important fixes

  • Corrected a problem where promotion of new MSSQL primary led to endless loops.
  • Fixed a problem where DNS records for datastore nodes could sometimes disappear after upgrades.
  • Corrected a problem with inconsistent logging and presentation of changed cluster and node status.

V 1.46

Overview

This release introduces configuration management and simplified service access. Initial support for life cycle management is introduced.

Changes

  • Access to services/failover. This provides the user with a single entrypoint to the datastore.
  • Configuration Management. Ability to let the end-user tune certain configuration values.
  • Lifecycle Management. Ability to upgrade datastores (OS and database software) using a roll-forward upgrade method.
  • Improved customer error interaction for handling nodes.

Important fixes

  • Corrected a bug that caused the control plane process to restart occasionally.

8 - DBaaS Guides

Database as a Service

8.1 - Backup and Restore via CLI

Backup and Restore databases with the help of CLI tools

Overview

This guide will help you get started with creating and restoring your own backups using various database CLI tools.
For the built-in backup functionality, please see here.

PostgreSQL

Backup

The client we are using in this guide is pg_dump which is included in the PostgreSQL client package. It is recommended to use the same client version as your server version.

The basic syntax and an example of dumping a PostgreSQL database with the official tool pg_dump are shown below. To connect and authenticate with a remote host you can specify this information with options, environment variables or a password file.

Usage & example

pg_dump [OPTION]... [DBNAME]
  -h, --host=HOSTNAME      database server host or socket directory (default: "local socket")
  -p, --port=PORT          database server port (default: "5432")
  -U, --username=USERNAME  database user name (default: "$USER")
  -f, --file=FILENAME      output file or directory name
  -d, --dbname=DBNAME      database to dump
pg_dump -h mydatabaseserver -U mydatabaseuser -f dump.sql -d mydatabase

Environment variables

As mentioned, we can also specify the options, connection and authentication information via environment variables; by default the client checks whether the environment variables below are set.

For a full list, check out the documentation under PostgreSQL Documentation.

PGDATABASE
PGHOST
PGOPTIONS
PGPORT
PGUSER

It is not recommended to specify the password via the above methods, which is why it is not listed here. For the password it is better to use a so-called password file. By default the client checks the user’s home directory for a file named .pgpass. Read more about the password file in the official documentation linked under PostgreSQL Documentation.
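A password file entry has the format hostname:port:database:username:password, and the file must be readable only by you (permissions 0600) or the client will ignore it. A sketch matching the example connection above, with a placeholder password:

```
mydatabaseserver:5432:mydatabase:mydatabaseuser:mysecretpassword
```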

Restore

To restore a database we will use the client psql which is also included in the PostgreSQL client package. It is recommended to use the same client version as your server version.

Usage & example

psql [OPTION]... [DBNAME [USERNAME]]
  -h, --host=HOSTNAME      database server host or socket directory (default: "local socket")
  -p, --port=PORT          database server port (default: "5432")
  -U, --username=USERNAME  database user name (default: "$USER")
  -f, --file=FILENAME      execute commands from file, then exit
  -d, --dbname=DBNAME      database name to connect to
psql -h mydatabaseserver -U mydatabaseuser -f dump.sql -d mydatabase

PostgreSQL Documentation

  • PostgreSQL 11/14 - pgdump
  • PostgreSQL 11/14 - The Password file
  • PostgreSQL 11/14 - Environment variables
  • PostgreSQL 11/14 - SQL Dump

MariaDB

Backup

The client we are using in this guide is mariadb-dump which is included in the MariaDB client package.

The basic syntax and an example to dump a MariaDB database with the official tool mariadb-dump is shown below together with some of the options we will use.

Usage & example

mariadb-dump [OPTIONS] database [tables]
OR     mariadb-dump [OPTIONS] --databases DB1 [DB2 DB3...]
-h, --host=name       Connect to host.
-B, --databases       Dump several databases...
-q, --quick           Don't buffer query, dump directly to stdout.
--single-transaction  Creates a consistent snapshot by dumping all tables
                      in a single transaction...
--skip-lock-tables    Disable the default setting to lock tables

For a full list of options, check out the documentation under MariaDB Documentation.

Depending on your specific needs and the scope of the backup you might need to use the pre-created database user. This is because any subsequent users created in the portal are set up with permissions for a specific database, while the pre-existing admin user has more global permissions that are needed for some of the dump options.

mariadb-dump -h mydatabaseserver -B mydatabase --quick --single-transaction --skip-lock-tables > dump.sql

It is not recommended to specify the password via the command line. Consider using an option file instead, by default the client checks the user’s home directory for a file named .my.cnf. You can read more about option files in the official documentation linked under MariaDB Documentation.
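As a sketch, an option file supplying the host, user and password for the examples above could look like this; the values are placeholders:

```ini
# ~/.my.cnf — should not be world-readable (chmod 0600)
[client]
host = mydatabaseserver
user = mydatabaseuser
password = mysecretpassword
```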

Restore

To restore the database from the dump file we will use the tool mariadb that is also included in the MariaDB client package.

Usage & example

mariadb [OPTIONS] [database]
-h, --host=name     Connect to host
mariadb -h mydatabaseserver mydatabase < dump.sql

MariaDB Documentation

MySQL

Backup

The client we are using in this guide is mysqldump which is included in the MySQL client package.

The basic syntax and an example to dump a MySQL database with the official tool mysqldump is shown below together with some of the options we will use.

Usage & example

mysqldump [OPTIONS] database [tables]
OR     mysqldump [OPTIONS] --databases DB1 [DB2 DB3...]
-h, --host=name       Connect to host.
-B, --databases       Dump several databases...
-q, --quick           Don't buffer query, dump directly to stdout.
--single-transaction  Creates a consistent snapshot by dumping all tables
                      in a single transaction...
--skip-lock-tables    Disable the default setting to lock tables
--no-tablespaces      Do not write any CREATE LOGFILE GROUP or 
                      CREATE TABLESPACE statements in output 

For a full list of options, check out the documentation under MySQL Documentation.

Depending on your specific needs and the scope of the backup you might need to use the pre-created database user. This is because any subsequent users created in the portal are set up with permissions for a specific database, while the pre-existing admin user has more global permissions that are needed for some of the dump options.

mysqldump -h mydatabaseserver -B mydatabase --quick --single-transaction --skip-lock-tables --no-tablespaces > dump.sql

It is not recommended to specify the password via the command line. Consider using an option file instead, by default the client checks the user’s home directory for a file named .my.cnf. You can read more about option files in the official documentation linked under MySQL Documentation.

Restore

To restore the database from the dump file we will use the tool mysql that is also included in the MySQL client package.

Usage & example

mysql [OPTIONS] [database]
-h, --host=name     Connect to host
mysql -h mydatabaseserver mydatabase < dump.sql

MySQL Documentation

8.2 - Backup and Restore via DBaaS UI

Overview and examples of Elastx DBaaS built-in backup functionality

Overview

All our supported database types come with built-in backup functionality, which is enabled by default. Backups are stored in our object storage, which is encrypted at rest and also utilizes all of our availability zones for highest availability. You can easily set the number of backups per day, the preferred time of day and the retention period in our DBaaS UI. For MySQL, MariaDB and PostgreSQL we also support creating new datastores from backup, making it easy to create a new database cluster using another cluster as a base.
For backup pricing, you can use our DBaaS price calculator found here: ECP-DBaaS

Good to know

Be aware: Please note that if you delete a datastore, all backups for that datastore will also be deleted. This action cannot be reverted.

  • Backups are taken for the whole datastore.
  • Maximum backup retention period is 90 days. Default value is 7 days.
  • There’s no storage quota for backups.
  • Incremental backups are supported and enabled by default on MySQL and MariaDB.
  • Backups cannot be downloaded locally. To create an offsite backup, you can use one of the CLI-tools. See here for some examples.
  • Creating new datastores from previously taken backups is supported for MySQL, MariaDB and PostgreSQL.

Manage backups

Begin by logging into your Elastx DBaaS account, choose your datastore and go to Backups.
Under this tab you will see all previously taken backups for the chosen datastore; if you just created the datastore, the list might be empty.

Retention Period

To change the retention period, click on Backup settings at the top right corner, set your preferred retention period and click Save.

Backup schedules

For datastores running MySQL and MariaDB you have the ability to set schedules for both full and incremental backups.
To change how often and when your backups should run, click on Backup Schedules in the left corner.
Select the backup type you want to change and choose edit:

  • Incremental backups can be set to run every 15, 30 or 60 minutes.
  • Full backups can be set to run hourly or daily. Set your preferred time in UTC.

Restore backup on your running datastore

Be aware: Please note that this process will completely overwrite your current data, and all changes since your last backup will be lost.

Go to the Backups tab for the datastore you want to restore. Select the preferred backup, click on the three dots under Actions and choose Restore.

Create a new datastore from backup

For MySQL, MariaDB and PostgreSQL you have the ability to use a backup as a base for a new datastore.
Go to backups and click on the three dots under actions for the backup you want to use as a base and select Create Datastore.
A new datastore will be created with the same specification and name (with extension _Copy) as the base datastore.
When it’s finished, you can rename your new datastore by going to Settings > Datastore name.

Disable backups

Be aware: Not recommended. Please note that if you disable full backups, no backups will be taken from this point until you manually enable them again.

Go to the Backups tab for the datastore you want to pause backups for. Select Backup Schedules, click on the three dots for the type of backup you want to disable and choose Pause. To re-enable backups, take the same steps and choose Enable.

8.3 - Config Management

note: Deprecated in v1.51 in favor of parameter groups. Please see Parameter Groups.

In CCX, you have the ability to fine-tune your database performance by adjusting various DB Parameters. These parameters control the behavior of the database server and can impact performance, resource usage, and compatibility.

img

Available DB Parameters

This is an example, and is subject to change and depends on the configuration of CCX.

  1. group_concat_max_len

    • Description: Specifies the maximum allowed result length of the GROUP_CONCAT() function.
    • Max: 104857600 | Min: 1024 | Default: 1024
  2. interactive_timeout

    • Description: Sets the number of seconds the server waits for activity on an interactive connection before closing it.
    • Max: 28800 | Min: 3000 | Default: 28800
  3. max_allowed_packet

    • Description: Specifies the maximum size of a packet or a generated/intermediate string.
    • Max: 1073741824 | Min: 536870912 | Default: 536870912
  4. sql_mode

    • Description: Defines the SQL mode for MySQL, which affects behaviors such as handling of invalid dates and zero values.
    • Default: ONLY_FULL_GROUP_BY, STRICT_TRANS_TABLES, NO_ZERO_IN_DATE, NO_ZERO_DATE, ERROR_FOR_DIVISION_BY_ZERO, NO_ENGINE_SUBSTITUTION
  5. table_open_cache

    • Description: Sets the number of open tables for all threads.
    • Max: 10000 | Min: 4000 | Default: 4000
  6. wait_timeout

    • Description: Defines the number of seconds the server waits for activity on a non-interactive connection before closing it.
    • Max: 28800 | Min: 3000 | Default: 28800

How to Change Parameters

  1. Navigate to the DB Parameters tab within the Settings section.
  2. Review the list of available parameters and their current values.
  3. Click on the Edit Parameters button in the upper-right corner.
  4. Adjust the values as necessary within the defined minimum and maximum limits.
  5. Once you’ve made the required changes, save the new configuration.

note: The latest saved settings are applied when adding a node (either as part of Scaling, during Lifecycle management, or during automatic repair).
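After a change has been applied, you can verify the live values from any client session. A sketch for MySQL, using parameter names from the list above:

```sql
-- Show the current value of a single parameter
SHOW VARIABLES LIKE 'max_allowed_packet';
-- Or read several at once as global/session variables
SELECT @@global.wait_timeout, @@global.table_open_cache, @@sql_mode;
```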

Best Practices

  • Understand the impact: Changing certain parameters can significantly impact the performance and stability of your database. Make sure to test changes in a staging environment if possible.
  • Stay within limits: Ensure that your values respect the maximum and minimum bounds defined for each parameter.
  • Monitor after changes: After adjusting any parameter, monitor your database performance to ensure the changes have the desired effect.

By properly configuring these parameters, you can optimize your database for your specific workload and operational requirements.

8.4 - Create Datastore From Backup

In CCX, it is possible to create a new datastore from a backup. Supported databases: MySQL, MariaDB, PostgreSQL.

Select the backup you wish to restore in the Backup tab and select “Create datastore” from the action menu next to the backup. This process may take some time depending on the size of the backup. The new datastore will have the same name as the parent datastore, suffixed with _Copy.

This allows you to:

  • Create a datastore from a backup for development and testing purposes.
  • Investigate and analyse data without interfering with the production environment.

Limitations

Point-in-time recovery (PITR) is not supported yet.

8.5 - Database Db Management

This guide explains how to create, list, and manage databases within the CCX platform for both PostgreSQL and MySQL systems. Databases are not a concept in Redis, and in Microsoft SQL Server creating databases is not supported.

Listing Existing Databases

Once databases are created, you can view the list of databases in the Databases tab.

  • The Database Name column shows the names of the databases.
  • The Size column displays the size of the database.
  • The Tables column indicates the number of tables within each database.

List Databases

  • For MySQL, the database list will appear similar, with columns for database name, size, and tables.

List MySQL Databases

Creating a New Database

To create a new database in the CCX platform:

note:

  • PostgreSQL Database Owner: When creating a database in PostgreSQL, ensure that a valid user is selected as the owner of the database.
  • MySQL Database Management: MySQL database creation does not require specifying an owner, but all other functions (listing, deleting) remain similar.
  1. Navigate to the Databases Tab:

    • Click on the Databases section from the main dashboard.
  2. Click on Create New Database:

    • A form will appear asking for the following details:
      • Database Name: The name of the new database.
      • DB Owner: The user who will own the database (applicable to PostgreSQL).

    Create Database

  3. Submit the Form:

    • After filling in the necessary information, click Create to create the new database.
  4. MySQL Database Creation:

    • For MySQL, the owner field is not required. You only need to specify the database name.

    MySQL Create Database

Dropping a Database

note:

  • MySQL/MariaDB database locks / metadata locks: The DROP DATABASE will hang if there is a metadata lock on the database or a table/resource in the database. Use SHOW PROCESSLIST in the mysql client to identify the lock. Either release the lock, KILL the connection, or wait for the lock to be released.

To delete or drop a database:

  1. Locate the Database:

    • In the Databases tab, find the database you want to delete.
  2. Click the Delete Icon:

    • Click on the red delete icon next to the database entry.
    • A confirmation dialog will appear asking if you are sure about dropping the database.

    Drop Database

  3. Confirm Deletion:

    • Click OK to proceed. WARNING: All data in the database will be lost.

Troubleshooting

Drop database hangs, the icon is spinning in the frontend.

Check if there are locks preventing the database from being deleted.

  • In MySQL, the DROP DATABASE will hang if there is a metadata lock on the database or a table/resource in the database. Use SHOW PROCESSLIST in the mysql/mariadb client to identify the lock. Either release the lock, KILL the connection, or wait for the lock to be released.
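A sketch of that workflow in the mysql/mariadb client; the connection id 1234 is a placeholder taken from the SHOW PROCESSLIST output:

```sql
SHOW PROCESSLIST;          -- find the connection holding the metadata lock (Id column)
KILL 1234;                 -- terminate that connection
DROP DATABASE mydatabase;  -- the drop should now complete
```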

8.6 - Database User Management

CCX allows you to create admin users. These users can in turn be used to create database users with lesser privileges. Privileges and implementation are specific to the type of database. Admin users can be created for the following databases:

  • PostgreSQL
  • MySQL
  • MariaDB
  • Valkey
  • Cache 22
  • Microsoft SQL Server

List database users

To list database users, navigate to the Users tab:

List Database User

Creating an Admin User

To create a new admin user, follow these steps:

  1. Navigate to Users Tab:

    • Go to the Users section from the main dashboard.
  2. Click on Create Admin User:

    The MySQL interface is described below, but the interface is similar for the other database types.

    • A form will appear prompting you to enter the following details:
    • Username: Specify the username for the new admin.
    • Password: Enter a strong password for the admin user.
    • Database Name: Select or specify the database this user will be associated with.
    • Authentication Plugin: Choose the authentication method for the user. Available options:
      • caching_sha2_password (default)
      • mysql_native_password (for MySQL compatibility)

    Create Admin User

Deleting a database user

Delete User: To delete a user, click on the red delete icon beside the user entry. A confirmation dialog will appear before the user is removed.

Delete User Confirmation

Connection assistant

CCX provides a Connection Assistant to help configure connection strings for your database clients.

Connection assistant

  1. Configure Database User and Database Name:

    • Select the database user and the database name.
    • Choose the Endpoint type (Primary or Replica).
  2. Connection String Generation:

    • Based on the selected options, a connection string is generated for various technologies, including:
      • JDBC
      • ODBC
      • Python (psycopg2)
      • Node.js (pg)
  3. Example:

    String url = "jdbc:postgresql://<host>:<port>/<dbname>?verifyServerCertificate=true&useSSL=true&requireSSL=true";
    myDbConn = DriverManager.getConnection(url, "<username>", "<password>");
    

8.7 - Datastore Settings

In the Settings section of CCX, there are two primary configuration options: General and DB Parameters.

The General settings section allows you to configure high-level settings for your datastore. This may include basic configurations such as system name, storage options, and general system behavior.

The DB Parameters section is used for fine-tuning your database. Here, you can adjust specific database settings such as memory allocation, query behavior, or performance-related parameters. These settings allow for a deeper level of control and optimization of the datastore for your specific workload.

Database Parameters

Please see Configuration management.

Changing the Datastore Name in CCX

The Datastore Name in CCX is an identifier for your datastore instance, and it is important for proper organization and management of multiple datastores. The name can be set when creating a datastore or changed later to better reflect its purpose or environment.

img

Notifications in CCX

Introduced in v.1.50.

The Notifications feature in CCX allows you to configure email alerts for important system events. These notifications help ensure that you are aware of critical events happening within your environment, such as when the disk space usage exceeds a certain threshold or when important jobs are started on the datastore.

img

To configure recipients of notification emails, simply enter the email addresses in the provided field. Multiple recipients can be added by separating each email with a semicolon (;).

If no email addresses are added, notifications will be disabled.

Key Notifications:

  • Disk Space Alerts: When disk usage exceeds 85%, a notification is sent to the configured recipients.
  • Job Alerts: Notifications are sent when significant jobs (such as data processing or backups) are initiated on the datastore.

This feature ensures that system administrators and key stakeholders are always up-to-date with the health and operations of the system, reducing the risk of unexpected issues.

Auto Scaling Storage Size in CCX

Introduced in v.1.50.

CCX provides a convenient Auto Scaling Storage Size feature that ensures your system never runs out of storage capacity unexpectedly. By enabling this feature, users can automatically scale storage based on usage, optimizing space management.

img

When Auto Scale is turned ON, the system will automatically increase the storage size by 20% when the used space exceeds 85% of the allocated storage (for example, a 100 GB volume grows to 120 GB once more than 85 GB is used). This proactive scaling ensures that your system maintains sufficient space for operations, preventing service interruptions due to storage constraints.

Key Benefits:

  • Automatic scaling by 20% when usage exceeds 85%.
  • Ensures consistent performance and reliability.
  • Eliminates the need for manual storage interventions.

This feature is especially useful for dynamic environments where storage usage can rapidly change, allowing for seamless growth as your data expands.

Authentication in CCX

Introduced in v.1.49.

The Authentication section in CCX allows users to download credentials and CA certificates, which are essential for securing communication between the system and external services or applications.

Credentials

The Credentials download provides the necessary authentication details, such as API keys, tokens, or certificates, that are used to authenticate your system when connecting to external services or accessing certain system resources. These credentials should be securely stored and used only by authorized personnel.

img

To download the credentials, simply click the Download button.

CA Certificate

The CA Certificate ensures secure communication by verifying the identity of external systems or services through a trusted Certificate Authority (CA). This certificate is critical when establishing secure connections like HTTPS or mutual TLS (mTLS).

To download the CA Certificate, click the Download button next to the CA Certificate section.

Security Considerations:

  • Keep credentials secure: After downloading, ensure the credentials and certificates are stored in a secure location and only accessible by authorized personnel.
  • Use encryption: Where possible, encrypt your credentials and certificates both at rest and in transit.
  • Regularly rotate credentials: To maintain security, periodically rotate your credentials and update any related system configurations.

This Authentication section is vital for maintaining a secure and trustworthy communication environment in your CCX setup.

8.8 - DBaaS with Terraform

Overview and examples of managing datastores in Elastx DBaaS using Terraform

Overview

This guide will help you getting started with managing datastores in Elastx DBaaS using Terraform.
For this we will be using OAuth2 for authentication and the CCX Terraform provider. You can find more information about the latest CCX provider here.

Good To Know

  • Creating and destroying datastores is supported.
  • Setting firewall rules is supported.
  • Setting database parameter values is supported.
  • Scaling nodes out and in is supported.
  • Creating users and databases is currently not supported.

DBaaS OAuth2 credentials

Before we get started with terraform, we need to create a new set of OAuth2 credentials.
In the DBaaS UI, go to your Account settings, select Authorization and choose Create credentials.

In the Create Credentials window, you can add a description and set an expiration date for your new OAuth2 credential.
The expiration date is based on the number of hours from when the credential was created. If left empty, the credential will not have an expiration date. You can, however, revoke and/or remove your credentials at any time.
When you’re done select Create.

Create credential


Copy Client ID and Client Secret. We will be using them to authenticate to DBaaS with Terraform.
Make sure you’ve copied and saved the client secret before closing the popup window. The client secret cannot be obtained later and you will have to create a new one.

Copy credential


Terraform configuration

We’ll start by creating a new, empty file and adding the Client ID and Secret as variables, which will be exported and used for authentication later when we apply our Terraform configuration.
Add your Client ID and Client Secret.

#!/usr/bin/env bash

export CCX_BASE_URL="https://dbaas.elastx.cloud"
export CCX_CLIENT_ID="<client-id>"
export CCX_CLIENT_SECRET="<client-secret>"

Source your newly created credentials file.

source /path/to/myfile.sh

Terraform provider

Create a new terraform configuration file. In this example we create provider.tf and add the CCX provider.

terraform {
  required_providers {
    ccx = {
      source = "severalnines/ccx"
      version = "0.3.1"
    }
  }
}

Create your first datastore with Terraform

Create an additional Terraform configuration file and add your preferred datastore settings. In this example we create a configuration file named main.tf and specify that this is a single-node datastore with MariaDB.

resource "ccx_datastore" "elastx-dbaas" {
  name           = "my-terraform-datastore"
  db_vendor      = "mariadb"
  size           = "1"
  instance_size  = "v2-c2-m8-d80"
  volume_type    = "v2-1k"
  volume_size    = "80"
  cloud_provider = "elastx"
  cloud_region   = "se-sto"
  tags           = ["terraform", "elastx", "mariadb"]
}

Create primary/replica datastores with added firewall rules and database parameter values

This example is built upon the previous MariaDB example. Here we added a second node to create a primary/replica datastore. We’re also adding firewall rules and setting database parameter values. To see all available database parameters for your specific database type, log into the DBaaS UI, go to your specific datastore > Settings > DB Parameters.

resource "ccx_datastore" "elastx-dbaas" {
  name           = "my-terraform-datastore"
  db_vendor      = "mariadb"
  size           = "2"
  instance_size  = "v2-c2-m8-d80"
  volume_type    = "v2-1k"
  volume_size    = "80"
  cloud_provider = "elastx"
  cloud_region   = "se-sto"
  tags           = ["terraform", "elastx", "mariadb"]

# You can add multiple firewall rules here
  firewall {
    source       = "x.x.x.x/32"
    description  = "My Application"
  }

  firewall {
    source      = "x.x.x.x/32"
    description = "My database client"
  }

# Set your specific database parameter values here. Values should be comma-separated without spaces.
  db_params = {
    sql_mode = "STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER"
  }
 
}

Available options

Below you will find a table with available options you can choose from.

Resource Description
name Required - Sets the name for your new datastore.
db_vendor Required - Selects which database vendor you want to use. Available options: mysql, mariadb, redis and postgres. For specific Postgres version see the db_version option.
instance_size Required - Here you select which flavor you want to use.
cloud_provider Required - Should be set to elastx.
cloud_region Required - Should be set to se-sto.
volume_type Recommended - This will create a volume as the default storage instead of the ephemeral disk that is included with the flavor. Select the volume type name for the type of volume you want to use. You can find the full list of available volume types here: ECP/OpenStack Block Storage.
volume_size Recommended - Required if volume_type is used. Minimum volume size requirement is 80GB.
db_version Optional - Only applicable to PostgreSQL. Selects the version of PostgreSQL you want to use. You can choose between 14 and 15. Defaults to 15 if not set.
firewall Optional - Inline block for adding firewall rules. Can be set multiple times.
db_params Optional - Inline block for setting specific database parameter values using: parameter = "value". Multiple values should be comma-separated without spaces.
tags Optional - Add additional tags.
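
As an illustration of the db_version option above, a single-node PostgreSQL 14 datastore could be declared like this (the resource and datastore names are placeholders; the instance and volume values are those used earlier in this guide):

```terraform
resource "ccx_datastore" "elastx-postgres" {
  name           = "my-postgres-datastore"
  db_vendor      = "postgres"
  db_version     = "14"
  size           = "1"
  instance_size  = "v2-c2-m8-d80"
  volume_type    = "v2-1k"
  volume_size    = "80"
  cloud_provider = "elastx"
  cloud_region   = "se-sto"
  tags           = ["terraform", "postgres"]
}
```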

8.9 - Deploy A Datastore

img

MySQL or MariaDB

MySQL 8.4 is recommended if you are migrating from an existing MySQL system. MariaDB 11.4 is recommended if you are migrating from an existing MariaDB system. MySQL 8.4 also offers a more sophisticated privilege system, which makes database administration easier.

High-availability

MySQL and MariaDB offer two configurations for high-availability.

  • Primary/replica (asynchronous replication)
  • Multi-primary (Galera replication)

Primary/replica is recommended for general-purpose use.

Scaling

MySQL and MariaDB can be created with one node (no high-availability) and can later be scaled out with read-replicas or primaries (in the case of a multi-primary configuration).

PostgreSQL

PostgreSQL 15 and later supports the following extensions by default:

  • PostGIS
  • pgvector

High-availability

High-availability is facilitated with PostgreSQL streaming replication.

Scaling

PostgreSQL can be created with one node (no high-availability) but can later be scaled with read-replicas.

Cache22 (aka Redis)

deprecated

Cache22 is an in-memory data structure store.

High-availability

High-availability is facilitated with Redis replication and Redis Sentinels.

Scaling

Redis can be created with one node (no high-availability) but can later be scaled with read-replicas.

Valkey

Valkey is an in-memory data structure store.

High-availability

High-availability is facilitated with Valkey replication and Valkey Sentinels.

Scaling

Valkey can be created with one node (no high-availability) but can later be scaled with read-replicas.

MSSQL Server

Microsoft SQL Server 2022. Special license restrictions apply, and this option may not be available in all CCX implementations.

8.10 - Event Viewer

The Event Viewer provides a detailed history of actions performed on the datastore. It tracks when changes were made, their status, who initiated the action, and a brief description of the action itself.

  • When: Timestamp indicating when the event occurred.
  • Status: The current status of the event (e.g., Finished for successfully completed tasks).
  • Initiated by: The user or process that initiated the action.
  • Description: A summary of the action performed.

Example Events:

Event viewer

The Event Viewer is essential for tracking the progress of tasks such as node scaling, promotions, and configuration updates. Each event is clearly labeled, providing users with transparency and insight into the state of their datastore operations.

8.11 - Firewall

This guide explains how to manage trusted sources and open ports within the firewall settings of the CCX platform. Only trusted sources are allowed to connect to the datastore.

A number of ports are open for each trusted source. One port is opened for the database service, but other ports are open for metrics. This makes it possible to connect and scrape the database nodes for metrics from a trusted source. The metrics are served using Prometheus exporters.

List Trusted Sources

Trusted sources can be managed from the Firewall tab. Only trusted sources are allowed to connect to the datastore. Here you can see:

  • Source: View the allowed IP addresses or ranges.
  • Description: Review the description of the source for identification.
  • Actions: Delete the source by clicking on the red trash icon.

Trusted Source List

Adding a Trusted Source

To allow connections from a specific IP address or range, you need to create a trusted source.

Click on Create Trusted Source:

  • A form will appear prompting you to enter the following details:
    • Source IP: Specify the IP address or CIDR range to allow. It is possible to specify a semicolon-separated list of CIDRs. If no CIDR is specified, then /32 is automatically added to the IP address.
    • Description: Add a description to identify the source (e.g., “My office”, “Data Center”).

After filling out the details, click Create to add the trusted source.

Create Trusted Source

Viewing and Managing Trusted Sources

Managing Open Ports for Each Trusted Source

TLS access to exporter metrics is described in this section.

Each trusted source can have specific ports opened for services. To manage the ports:

  1. Expand a Trusted Source:

    • Click the down arrow beside the source IP to view the open ports.
  2. Port Configuration:

    • Port Number: The number of the open port (e.g., 9100, 5432).
    • Port Name: The name of the service associated with the port (e.g., node_exporter, postgres_exporter, service).

    The service entry indicates the listening port of the database server. The node_exporter and db_exporter ports allow you to tap into observability metrics for the database nodes.

  3. Actions:

    • Delete a Port: Remove a port by clicking the red trash icon next to the port number.

Example Ports:

  • Port 9100: node_exporter

  • Port 9187: postgres_exporter

  • Port 5432: service

    Trusted Source List


Deleting Trusted Sources and Ports

Deleting a Trusted Source:

To remove a trusted source entirely, click on the red trash icon next to the source IP. This will remove the source and all associated ports.

Deleting an Individual Port:

To delete a specific port for a trusted source, click on the red trash icon next to the port number. This action will only remove the specific port.


This documentation covers the basic operations for managing firewall trusted sources and ports within the CCX platform. For further details, refer to the CCX platform’s official user manual or support.

8.12 - Logs Viewer

The Logs Viewer provides a comprehensive view of database logs. The generated logs can be accessed for troubleshooting. It provides real-time access to essential logs, such as error logs and slow query logs, through the UI.

  • Name: The file path or identifier of the log file.
  • When: The timestamp indicating the most recent update or entry in the log file.
  • Actions: Options to view or download the log file for further analysis.

Example Logs

Logs viewer


The Logs Viewer is a critical tool for system administrators, enabling real-time monitoring and investigation of log files. With clear timestamps and actionable options, it ensures efficient identification and resolution of issues to maintain the stability of datastore operations.

8.13 - Observability

Monitor DBaaS datastore metrics via either UI or remotely

Overview

DBaaS offers metrics monitoring via the UI and remotely.

In the UI, various metrics for both the databases and the nodes are presented under the datastore's Monitor tab.

Remotely, it is possible to monitor by using Prometheus and the different exporters. The monitoring data is exposed through the exporters on each node in the datastore. Access to the exporters is controlled under the Firewall tab in the DBaaS UI.

The ports available for the specific datastore configuration can be seen in the UI under the Firewall tab and the specific IP address entry (expand the arrow to the left of the IP address).


Exporter ports

Each exporter has its own port used by Prometheus to scrape metrics.

Exporter TCP port
Node 9100
MySQL 9104
Postgres 9187
Redis 9121
MSSQL 9399
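
Assuming direct scraping of the nodes from a trusted source, a minimal Prometheus scrape configuration using the ports above might look like this (the node addresses are placeholders, and the scraping host must be added as a trusted source under the Firewall tab):

```yaml
scrape_configs:
  - job_name: 'dbaas-node'
    static_configs:
      - targets: ['203.0.113.10:9100', '203.0.113.11:9100']   # node exporter
  - job_name: 'dbaas-postgres'
    static_configs:
      - targets: ['203.0.113.10:9187', '203.0.113.11:9187']   # postgres exporter
```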

Sample visible metrics

The following tables are excerpts of metrics for the different exporters to quickly get started.

System - Hardware level metrics

Statistic Description
Load Average The overall load on your Datastore within the preset period
CPU Usage The breakdown of CPU utilisation for your Datastore, including both System and User processes
RAM Usage The amount of RAM (in Gigabytes) used and available within the preset period
Network Usage The amount of data (in Kilobits or Megabits per second) received and sent within the preset period
Disk Usage The total amount of storage used (in Gigabytes) and what is available within the preset period
Disk IO The input and output utilisation for your disk within the preset period
Disk IOPS The number of read and write operations within the preset period
Disk Throughput The amount of data (in Megabytes per second) that is being read from, or written to, the disk within the preset period

MySQL / MariaDB

MySQL metrics reference

  • Handler Stats
    Statistic Description
    Read Rnd Count of requests to read a row based on a fixed position
    Read Rnd Next Count of requests to read a subsequent row in a data file
    Read Next Count of requests to read the next row in key order
    Read Last Count of requests to read the last key in an index
    Read Prev Count of requests to read the previous row in key order
    Read First Count of requests to read the first entry in an index
    Read Key Count of requests to read a row based on an index key value
    Update Count of requests to update a row
    Write Count of requests to insert to a table
  • Database Connections
    Metric Description
    Thread Connected Count of clients connected to the database
    Max Connections Count of max connections allowed to the database
    Max Used Connections Maximum number of connections in use
    Aborted Clients Number of connections aborted due to client not closing
    Aborted Connects Number of failed connection attempts
    Connections Number of connection attempts
  • Queries
    • Count of queries executed
  • Scan Operations
    • Count of operations for the operations: SELECT, UPDATE and DELETE
  • Table Locking
    Metric Description
    Table locks immediate Count of table locks that could be granted immediately
    Table locks waited Count of locks that had to be waited due to existing locks or another reason
  • Temporary Tables
    Metric Description
    Temporary tables Count of temporary tables created
    Temporary tables on Disk Count of temporary tables created on disk rather than in memory
  • Aborted Connections
    Metric Description
    Aborted Clients Number of connections aborted due to client not closing
    Aborted Connects Number of failed connection attempts
    Access Denied Errors Count of unsuccessful authentication attempts
  • Rows, Sessions and Checkpoints
    Metric Description
    SELECT (fetched) Count of rows fetched by queries to the database
    SELECT (returned) Count of rows returned by queries to the database
    INSERT Count of rows inserted to the database
    UPDATE Count of rows updated in the database
    DELETE Count of rows deleted in the database
    Active Sessions Count of currently running queries
    Idle Sessions Count of connections to the database that are not currently in use
    Idle Sessions in transaction Count of connections that have begun a transaction but not yet completed while not actively doing work
    Idle Sessions in transaction (aborted) Count of connections that have begun a transaction but did not complete and were forcefully aborted before they could complete
    Lock tables Active locks on the database
    Checkpoints requested and timed Count of checkpoints requested and scheduled
    Checkpoint sync time Time synchronising checkpoint files to disk
    Checkpoint write time Time to write checkpoints to disk

Redis

Redis metrics reference

Metric Description
Blocked Clients Clients blocked while waiting on a command to execute
Memory Used Amount of memory used by Redis (in bytes)
Connected Clients Count of clients connected to Redis
Redis commands per second Count of commands processed per second
Total keys The total count of all keys stored by Redis
Replica Lag The lag (in seconds) between the primary and the replica(s)

8.14 - Parameter Group

Introduced in v.1.51

Parameter Groups is a powerful new feature introduced in version 1.51 of CCX. It enables users to manage and fine-tune database parameters within a group, simplifying configuration and ensuring consistency across datastores.

Overview

With Parameter Groups, users can:

  • Create new parameter groups with customized settings.
  • Assign parameter groups to specific datastores.
  • Edit and update parameters within a group.
  • Delete unused parameter groups.
  • Automatically synchronize parameter changes with associated datastores.

note:

A datastore can only be associated with one parameter group at a time. Changes to parameters are automatically propagated to all associated datastores.


Features

1. Creating a Parameter Group

Users can create a new parameter group to define custom configurations for their databases.

Steps to Create a New Parameter Group:

  1. Navigate to the DB Parameters section.
  2. Click on the + Create new group button.
  3. Fill in the required details:
    • Group Name: A unique name for the parameter group.
    • Description: A brief description of the group.
    • Vendor: Select the database type (e.g., MySQL, PostgreSQL, Redis).
    • Version: Specify the database version.
    • Configuration: Choose the type of configuration (e.g., Primary/Replica).
  4. Customize the parameter values as needed.
  5. Click Create to save the new group.

Create a parameter group


2. Assigning a Parameter Group to a Datastore

Once created, parameter groups can be assigned to datastores to apply the defined settings. The parameter group can be assigned to an existing datastore or when a datastore is created.

Steps to Assign a Parameter Group in the Deployment wizard:

  1. Open the Create datastore wizard
  2. In the Configuration step, press Advanced, and select the parameter group under DB Settings.

Assign a parameter group to the datastore

note: At least one parameter group matching the vendor, version and configuration must exist.

Steps to Assign a Parameter Group to an existing datastore:

  1. Navigate to the datastore you want to configure.
  2. Go to the DB Parameters tab.
  3. Click Change group or Assign group.
  4. Select the desired parameter group from the dropdown.
  5. Click Save to apply the group to the datastore.

The system will display the synchronization status (e.g., Pending or Synced) after assigning the group.

Assign a parameter group to datastore


3. Viewing and Managing Parameter Groups

Users can view all parameter groups in the DB Parameters section. For each group, the following details are displayed:

  • Group Name
  • Vendor and Version
  • Datastores: Associated datastores.
  • Descriptions

View parameter groups

From this view, users can:

  • Edit: Modify the group’s parameters.
  • Duplicate: Create a copy of the group.
  • Delete: Remove the group.

Parameter group actions


4. Editing a Parameter Group

Parameter groups can be updated to reflect new configurations. Any changes are automatically synchronized with associated datastores.

Steps to Edit a Parameter Group:

  1. Navigate to the DB Parameters section.
  2. Click on the three-dot menu next to the group you want to edit.
  3. Select Edit.
  4. Update the parameter values as needed.
  5. Click Save.

5. Deleting a Parameter Group

Unused parameter groups can be deleted to maintain a clean configuration environment.

Steps to Delete a Parameter Group:

  1. Navigate to the DB Parameters section.
  2. Click on the three-dot menu next to the group you want to delete.
  3. Select Delete.
  4. Confirm the deletion.

note A parameter group cannot be deleted if it is assigned to a datastore.


6. Synchronization

Once a parameter group is assigned to a datastore, the parameters are automatically synchronized. The status of synchronization (e.g., Pending or Synced) is visible in the DB Parameters tab of the datastore, and also in the Event Viewer.

sync parameter groups


Best Practices

  • Use Descriptive Names: Give parameter groups clear, descriptive names to make them easily identifiable.
  • Regular Updates: Regularly review and update parameter groups to optimize database performance.
  • Monitor Sync Status: Always verify that parameter changes are properly synced to the datastores.

Conclusion

Parameter Groups in CCX provide a centralized and efficient way to manage database configurations. By grouping parameters and syncing them to datastores, users can ensure consistency, reduce manual errors, and improve overall system performance.

8.15 - Promote A Replica

You may want to promote a replica to become the new primary. For instance, if you’ve scaled up with a larger instance, you might prefer to designate it as the primary. Alternatively, if you’re scaling down, you may want to switch to a smaller configuration for the primary node.

In the Nodes view, select the Promote Replica action from the action menu next to the replica you wish to promote:

Promote replica

In this example, the replica with an instance size of ‘medium’ will be promoted to the new primary.

A final confirmation screen will appear, detailing the steps that will be performed:

Promotion confirmation

8.16 - Reboot A Node

Introduced in v.1.51

The reboot command is found under the action menu of a database node, on the Nodes page. Reboot node

Selecting “Reboot” triggers a restart of the chosen replica. Use this option when:

  • the replica needs to be refreshed due to performance issues
  • maintenance is required
  • a changed parameter value in a parameter group only takes effect after a reboot (this applies to some parameters)

danger:

  • Ensure all tasks linked with the node are concluded before initiating a reboot to prevent data loss.
  • Only authorized personnel should perform actions within the administration panel to maintain system integrity.

note:

  • Please note that rebooting may cause temporary unavailability.
  • In Valkey, the primary may failover to a secondary if the reboot takes more than 30 seconds.

8.17 - Restore Backup

The Backup and Restore feature provides users with the ability to create, view, and restore backups for their databases. This ensures data safety and allows recovery to previous states if necessary.

Backup List View

In the Backup tab, users can view all the backups that have been created. The table provides essential information about each backup, such as:

  • Method: The tool or service used to perform the backup (e.g., mariabackup).
  • Type: The type of backup (e.g., full backup).
  • Status: The current state of the backup (e.g., Completed).
  • Started: The start time of the backup process.
  • Duration: How long the backup process took.
  • Size: The total size of the backup file.
  • Actions: Options to manage or restore backups.

Example Backup Table

Backup table

Users can manage their backups using the “Actions” menu, where options such as restoring a backup are available.

Backup Schedules View

The Backup Schedules view allows users to manage scheduled backups for their datastore. Users can configure automatic backup schedules to ensure data is periodically saved without manual intervention.

Backup Schedule Table

The schedule table shows the details of each scheduled backup, including:

  • Method: The tool or service used to perform the backup (e.g., mariabackup).
  • Type: The type of backup, such as incremental or full.
  • Status: The current state of the scheduled backup (e.g., Active).
  • Created: The date and time when the backup schedule was created.
  • Recurrence: The schedule’s frequency, showing the cron expression used for the schedule (e.g., TZ=UTC 5 * * *).
  • Action: Options to manage the schedule, such as Pause or Edit.
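
The Recurrence column uses cron syntax prefixed with a time zone. Assuming standard five-field cron (minute, hour, day of month, month, day of week), a full backup every day at 05:00 UTC would be expressed as:

```
TZ=UTC 0 5 * * *
```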

Example Backup Schedule Table:

Backup Schedule Options

Managing Backup Schedules

The Action menu next to each schedule allows users to:

  • Pause: Temporarily stop the backup schedule.
  • Edit: Adjust the backup schedule settings, such as its frequency or time.

Editing a Backup Schedule

When editing a backup schedule, users can specify:

  • Frequency: Choose between Hourly or Daily backups.
  • Time: Set the exact time when the backup will start (e.g., 05:00 UTC).

For example, in the Edit Full Backup Schedule dialog, you can configure a full backup to run every day at a specified time. Adjust the settings as needed and click Save to apply the changes.

Example Backup Schedule Edit Dialog:

Edit Full Backup Schedule

This dialog allows you to easily adjust backup intervals, ensuring that backups align with your operational needs.

note:

Editing or pausing a schedule will not affect the current backups already created. The changes will only apply to future backups.

Restore Backup

To restore a backup, navigate to the Backup tab, find the desired backup, and select the Restore action from the Actions menu. This opens the restore dialog, where the following information is displayed:

  • Backup ID: The unique identifier of the backup.
  • Type: The type of backup (e.g., full backup).
  • Size: The total size of the backup file.

Restore Settings

  • Use Point in Time Recovery: Option to enable point-in-time recovery for finer control over the restore process. PITR is only supported by PostgreSQL, MySQL/MariaDB, and MS SQL Server.

By default, this option is turned off, allowing a full restoration from the selected backup.

Confirmation

Before initiating the restore, users are presented with a confirmation dialog:

You are going to restore a backup
You are about to restore a backup created on 03/10/2024 05:00 UTC.
This process will completely overwrite your current data, and all changes since your last backup will be lost.

Users can then choose to either Cancel or proceed with the Restore.

Example Restore Dialog:

Restore dialog

This ensures that users are fully aware of the potential data loss before proceeding with the restore operation.

8.18 - Scale A Datastore

This section explains how to scale a datastore, including:

  • Scaling volumes
  • Scaling nodes (out, in, up and down)

A datastore can be scaled out to meet growing demands. Scaling out involves adding:

  • One or more replica nodes (for primary/replica configurations). This is also how you scale up: add a replica with more resources, such as additional CPU cores and RAM, and later promote it to become the new primary.
  • One or more primary nodes (for multi-primary configurations). In multi-primary setups, scaling out or in must maintain an odd number of nodes to preserve quorum and the consensus protocol required by the database.

The instance type of the new nodes may differ from the current ones.

To scale a datastore, navigate to the Nodes page and select Nodes Configuration.

Scaling nodes

Scaling Up or Down, In or Out

Use the slider to adjust the datastore’s new size. In this example, we have two nodes (one primary and one replica), and we want to scale up to four nodes. You can also specify the availability zones and instance sizes for the new nodes. Later, you might choose to promote one of the replicas to be the new primary. To proceed with scaling, click Save and wait for the scaling job to complete.

Scaling from 2 to 4 nodes

Scaling Down

You can also scale down by removing replicas or primary nodes (in a multi-primary configuration). In the Nodes Configuration view, select the nodes you wish to remove, then click Save to begin the scaling process. This allows you to reduce the size of the datastore or remove nodes with unwanted instance sizes.

Scaling down to 2 nodes

Scaling Volumes

To scale storage, go to the Nodes tab and select Scale Storage. You can extend the storage size, but it cannot be reduced. All nodes in the datastore will have their storage scaled to the new size.

8.19 - Terraform Provider

The CCX Terraform provider allows you to create datastores on all supported clouds. The CCX Terraform provider project is hosted on GitHub.

OAuth2 credentials

OAuth2 credentials are used to authenticate the CCX Terraform provider with CCX. You can generate these credentials on the Account page under the Authorization tab.

Create creds

And then you will see:

Created creds

Requirement

  • Terraform 0.13.x or later

Quick Start

  1. Create OAuth2 credentials.
  2. Create a terraform.tf file.
  3. Set client_id and client_secret. Below is an example terraform.tf file:

```terraform
terraform {
  required_providers {
    ccx = {
      source  = "severalnines/ccx"
      version = "~> 0.4.7"
    }
  }
}

provider "ccx" {
    client_id     = "<client_id>"
    client_secret = "<client_secret>"
}
```

Now you can create a datastore using the following Terraform code.
First, here is an example of a parameter group:

```terraform
resource "ccx_parameter_group" "asteroid" {
    name = "asteroid"
    database_vendor = "mariadb"
    database_version = "10.11"
    database_type = "galera"

    parameters = {
      table_open_cache = 8000
      sql_mode = "STRICT_TRANS_TABLES,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
    }
}
```

This group can then be associated with a datastore as follows:

```terraform
resource "ccx_datastore" "luna_mysql" {
	name           = "luna_mysql"
	size           = 3
	type           = "replication"
	db_vendor      = "mysql"
	tags           = ["new", "test"]
	cloud_provider = "CCX_CLOUD"
	cloud_region   = "CCX-REGION-1"
	instance_size  = "MEGALARGE"
	volume_size    = 80
	volume_type    = "MEGAFAST"
	parameter_group = ccx_parameter_group.asteroid.id
}
```

Replace CCX_CLOUD, CCX-REGION-1, MEGALARGE, and MEGAFAST with actual values depending on the cloud infrastructure available.
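
As a hypothetical Elastx example, substituting the values used in the DBaaS with Terraform section earlier in this document:

```terraform
resource "ccx_datastore" "luna_mysql" {
  name           = "luna_mysql"
  size           = 3
  type           = "replication"
  db_vendor      = "mysql"
  tags           = ["new", "test"]
  cloud_provider = "elastx"
  cloud_region   = "se-sto"
  instance_size  = "v2-c2-m8-d80"
  volume_size    = 80
  volume_type    = "v2-1k"
}
```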

For more information and examples, visit the [terraform-provider-ccx](https://github.com/severalnines/terraform-provider-ccx) github page.

## More on parameter groups
Only one parameter group can be used at any given time by a datastore.
Also, you cannot modify an existing parameter group from Terraform.
If you want to change the parameters, you need to create a new parameter group:
```terraform
resource "ccx_parameter_group" "asteroid2" {
    name = "asteroid2"
    database_vendor = "mariadb"
    database_version = "10.11"
    database_type = "galera"

    parameters = {
      table_open_cache = 7000
      sql_mode = "NO_ENGINE_SUBSTITUTION"
    }
}
```
And then reference it in:
```terraform
resource "ccx_datastore" "luna_mysql" {
	name           = "luna_mysql"
  ... <same as before>
	parameter_group = ccx_parameter_group.asteroid2.id
}
```
Now you can apply this with Terraform. Always test configuration changes on a test system first to make sure they work as expected.

## Features
The following settings can be updated:

- Add and remove nodes
- Volume type
- Volume size
- Notifications
- Maintenance time
- Modify firewall (add/remove) entries. Multiple entries can be specified with a comma-separated list.

### Limitations

- Changing an existing parameter group is not possible after initial creation; however, you can create a new parameter group and reference it instead.
- It is not possible to change the instance type.
- Changing the availability zone is not possible.

8.20 - TLS For Metrics

Overview

To enhance security, using TLS for accessing metrics is recommended. This document outlines how metrics are served securely using TLS for each exporter. Each node typically has a Node Exporter and a corresponding database-specific exporter providing detailed metrics. Access to these metrics is limited to the sources specified in Firewall Management.

## Service discovery

A service discovery endpoint is created for each datastore (available from CCX v1.53 onwards).

It is available at `https://<ccxFQDN>/metrics/<storeID>/targets` and implements the Prometheus HTTP SD endpoint.

Note: `<ccxFQDN>` is the domain you see in your browser's address bar with the CCX UI open, not a datastore URL or a connection string. We will use `ccx.example.com` hereafter.

Here is an example of a scrape config for Prometheus:

```yaml
scrape_configs:
  - job_name: 'my datastore'
    http_sd_configs:
      - url: 'https://ccx.example.com/metrics/50e4db2a-85cd-4190-b312-e9e263045b5b/targets'
```
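Beyond Prometheus itself, any tooling can consume the discovery endpoint, since the HTTP SD format is a plain JSON array of target groups with `targets` and optional `labels`. A minimal sketch of turning such a payload into concrete scrape URLs — the sample payload below is illustrative (the node names and labels are invented; CCX's actual response may carry different labels):

```python
import json

# Illustrative HTTP SD payload; in practice you would fetch it from
# https://<ccxFQDN>/metrics/<storeID>/targets (requires firewall access).
sample_response = '''
[
  {
    "targets": ["ccx.example.com:443"],
    "labels": {
      "__metrics_path__": "/metrics/50e4db2a-85cd-4190-b312-e9e263045b5b/db-node-1/node_exporter",
      "node": "db-node-1"
    }
  },
  {
    "targets": ["ccx.example.com:443"],
    "labels": {
      "__metrics_path__": "/metrics/50e4db2a-85cd-4190-b312-e9e263045b5b/db-node-1/mysqld_exporter",
      "node": "db-node-1"
    }
  }
]
'''

def scrape_urls(http_sd_json: str) -> list[str]:
    """Turn a Prometheus HTTP SD payload into concrete scrape URLs."""
    groups = json.loads(http_sd_json)
    urls = []
    for group in groups:
        # Prometheus uses __metrics_path__ to override the default /metrics path.
        path = group.get("labels", {}).get("__metrics_path__", "/metrics")
        for target in group["targets"]:
            urls.append(f"https://{target}{path}")
    return urls

for url in scrape_urls(sample_response):
    print(url)
```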

## Individual Metrics Endpoints Format

Metrics for each exporter are served at:

```
https://ccx.example.com/metrics/<storeID>/<nodeName>/<exporterType>
```

where `<nodeName>` is the node's short name, not its full FQDN.

Exporter type examples:

- MSSQL: `https://ccx.example.com/metrics/<storeID>/<nodeName>/mssql_exporter`
- Redis: `https://ccx.example.com/metrics/<storeID>/<nodeName>/redis_exporter`
- PostgreSQL: `https://ccx.example.com/metrics/<storeID>/<nodeName>/postgres_exporter`
- MySQL: `https://ccx.example.com/metrics/<storeID>/<nodeName>/mysqld_exporter`
- MariaDB: `https://ccx.example.com/metrics/<storeID>/<nodeName>/mysqld_exporter`
- Node Exporter: `https://ccx.example.com/metrics/<storeID>/<nodeName>/node_exporter`

By serving metrics over HTTPS with TLS, we ensure secure monitoring access for customers.

# 8.21 - Upgrade Lifecycle Mgmt

CCX will keep your system updated with the latest security patches for both the operating system and the database software.

You will be informed when there is a pending update, and you have two options:

- Apply the update now
- Schedule a time for the update

The update will be performed using a roll-forward upgrade algorithm:

1. The oldest replica (or the primary, if no replica exists) is selected first.
2. A new node with the same specification as the oldest node is added and joins the datastore.
3. The oldest node is removed.
4. Steps 1-3 are repeated until all replicas (or all primaries, in a multi-primary setup) are updated.
5. In a primary-replica configuration, the primary is updated last: a new node is added, promoted to become the new primary, and the old primary is removed.
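The steps above can be sketched as a small simulation. This is only an illustration of the node-replacement order, with invented node names, not the actual CCX implementation:

```python
def roll_forward(primary: str, replicas: list[str]) -> tuple[str, list[str]]:
    """Simulate a roll-forward upgrade of a primary-replica datastore.

    `replicas` is ordered oldest first. Replacement nodes are named by
    suffixing "-new" to show that every node is recreated, never patched
    in place.
    """
    upgraded = []
    for old in replicas:            # steps 1-4: replace replicas, oldest first
        new_node = old + "-new"     # step 2: add a node with the same spec
        upgraded.append(new_node)   # the new node joins the datastore
        # step 3: `old` is removed (simply not carried over)
    # step 5: primary last - add a node, promote it, remove the old primary
    new_primary = primary + "-new"
    return new_primary, upgraded

new_primary, new_replicas = roll_forward("db-0", ["db-1", "db-2"])
print(new_primary, new_replicas)  # db-0-new ['db-1-new', 'db-2-new']
```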


## Upgrade now

This option starts the upgrade immediately.

## Scheduled upgrade

The upgrade will start at a time (in UTC) and on a weekday that suits your application. Please note that for primary-replica configurations, the update will cause the current primary to change.

## Upgrade database major version

To upgrade the database major version, e.g. from MariaDB 10.6 to 10.11, you need to create a new datastore from a backup. Alternatively, take a dump with `mysqldump` or `pg_dump` and apply it to your new datastore.

# 8.22 - Connect Kubernetes with DBaaS

Overview of what is needed to connect Kubernetes with DBaaS

## Overview

To connect your Kubernetes cluster with DBaaS, you need to allow the external IP addresses of your worker nodes, including reserved IP addresses, in the DBaaS UI's firewall. You can find the reserved IPs in your cluster's OpenStack project, or ask support for help.

Get your worker nodes' external IPs with the CLI tool kubectl:

```shell
kubectl get nodes -o wide
```

```
NAME                                            STATUS   ROLES           AGE    VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
company-stage1-control-plane-1701435699-7s27c   Ready    control-plane   153d   v1.28.4   10.128.0.40    <none>        Ubuntu 22.04.3 LTS   5.15.0-88-generic   containerd://1.7.6
company-stage1-control-plane-1701435699-9spjg   Ready    control-plane   153d   v1.28.4   10.128.1.160   <none>        Ubuntu 22.04.3 LTS   5.15.0-88-generic   containerd://1.7.6
company-stage1-control-plane-1701435699-wm8pd   Ready    control-plane   153d   v1.28.4   10.128.3.13    <none>        Ubuntu 22.04.3 LTS   5.15.0-88-generic   containerd://1.7.6
company-stage1-worker-sto1-1701436487-dwr5f     Ready    <none>          153d   v1.28.4   10.128.3.227   1.2.3.5       Ubuntu 22.04.3 LTS   5.15.0-88-generic   containerd://1.7.6
company-stage1-worker-sto2-1701436613-d2wgw     Ready    <none>          153d   v1.28.4   10.128.2.180   1.2.3.6       Ubuntu 22.04.3 LTS   5.15.0-88-generic   containerd://1.7.6
company-stage1-worker-sto3-1701437761-4d9bl     Ready    <none>          153d   v1.28.4   10.128.0.134   1.2.3.7       Ubuntu 22.04.3 LTS   5.15.0-88-generic   containerd://1.7.6
```

Copy the external IP of each worker node; in this example, the three nodes with ROLES `<none>`.

In the DBaaS UI, go to Datastores -> Firewall -> Create trusted source, and add each external IP in CIDR notation with a /32 suffix (e.g. 1.2.3.5/32).
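If you have many worker nodes, extracting the /32 entries by hand is error-prone. A minimal sketch that parses the JSON output of `kubectl get nodes -o json` (note the standard `.items[].status.addresses[]` shape) and prints one trusted-source entry per external IP; the inlined sample JSON is a trimmed, made-up stand-in for real kubectl output:

```python
import json

# Trimmed sample of `kubectl get nodes -o json`: one worker with an
# ExternalIP and one control-plane node without (control-plane nodes
# in this setup have no external address).
sample_nodes_json = '''
{"items": [
  {"status": {"addresses": [
    {"type": "InternalIP", "address": "10.128.3.227"},
    {"type": "ExternalIP", "address": "1.2.3.5"}]}},
  {"status": {"addresses": [
    {"type": "InternalIP", "address": "10.128.0.40"}]}}
]}
'''

def trusted_sources(nodes_json: str) -> list[str]:
    """Return worker-node external IPs in /32 CIDR notation."""
    nodes = json.loads(nodes_json)
    return [
        addr["address"] + "/32"
        for node in nodes["items"]
        for addr in node["status"]["addresses"]
        if addr["type"] == "ExternalIP"
    ]

print(trusted_sources(sample_nodes_json))  # ['1.2.3.5/32']
```

In practice you would pipe the live output in, e.g. `kubectl get nodes -o json | python3 this_script.py`, and paste each printed entry into Create trusted source.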