Reference
- 1: Datastore Statuses
- 2: Glossary
- 3: Notifications
- 4: Observability
- 4.1: Metrics
- 4.1.1: Introduction
- 4.1.2: MySQL And MariaDB
- 4.1.3: PostgreSQL
- 4.1.4: Redis
- 4.1.5: System
- 4.1.6: Valkey
- 5: Products
- 5.1: MariaDB
- 5.1.1: Backup
- 5.1.2: Configuration
- 5.1.3: Importing Data
- 5.1.4: Limitations
- 5.1.5: Overview
- 5.1.6: Restore
- 5.1.7: TLS Connection
- 5.2: MSSQLServer
- 5.2.1: Configurations
- 5.2.2: Limitations
- 5.2.3: Overview
- 5.2.4: User Management
- 5.3: MySQL
- 5.3.1: Backup
- 5.3.2: Configuration
- 5.3.3: Importing Data
- 5.3.4: Importing Data From AWS RDS
- 5.3.5: Importing Data From GCP
- 5.3.6: Limitations
- 5.3.7: Overview
- 5.3.8: Restore
- 5.3.9: TLS Connection
- 5.3.10: User Management
- 5.4: PostgreSQL
- 5.4.1: Backup
- 5.4.2: Configuration
- 5.4.3: Extensions
- 5.4.4: Importing Data
- 5.4.5: Limitations
- 5.4.6: Restore
- 5.5: Redis
- 5.5.1: Backup
- 5.5.2: Configuration
- 5.5.3: User Management
- 5.6: Valkey
- 5.6.1: Backup
- 5.6.2: Configuration
- 5.6.3: User Management
- 6: Supported Databases
1 - Datastore Statuses
When you deploy a Datastore, you will see a Status reported in the CCX Dashboard. This article outlines the statuses and what they mean.
| Status | Description | Action Required? |
|---|---|---|
| Deploying | Your Datastore is being configured and deployed into the Cloud you specified | No |
| Available | Your Datastore is up and running with no reported issues | No |
| Unreachable | Your Datastore might be running but CCX is not able to communicate directly with one or more Node(s) | Verify you can access the Datastore and contact Support |
| Maintenance | Your Datastore is applying critical security updates during the specified maintenance window | No |
| Deleting | You have requested the deletion of your Datastore and it is currently being processed | No, unless this deletion was not requested by you or the Datastore has been in this state for more than 2 hours |
| Deleted | Your Datastore has been deleted | No |
| Failed | Your Datastore has failed; this can be a hardware or software fault | Contact Support |
2 - Glossary
| Term | Definition | AKA | Area |
|---|---|---|---|
| Datastore | A deployment of a Database on CCX. A Datastore has a unique ID; it is essential to include this when contacting Support with issues or queries. | Service | Deployment |
| Node | A Virtual Machine (VM) in a Cloud that makes up a Datastore. A Node consists of: CPU (the number of cores), RAM (the amount of memory, in GB), and Storage (the amount of persistent storage, in GB/TB). | Virtual Machine (VM), Server, Instance | Compute |
| Storage | The amount of persistent data for your Datastore. Storage comes in multiple different formats and not all are supported by all Clouds. There are cost and performance considerations when choosing the storage. | | Storage |
| Volumes | The types of Storage available. Typically, this is measured in IOPS; higher IOPS gives increased performance at an increased cost per GB. | | |
| Database | The engine deployed and configured for your Datastore. To see these options, check Supported Databases. | Database Management System (DBMS) | General |
| Virtual Private Cloud (VPC) | A private network configured that is unique to your account and ensures that any traffic between your Datastore's Nodes does not go over the Public Internet. | Private Network | Networking |
| Cloud | An infrastructure provider where Datastores can be deployed. | | Deployment |
| Region | A geographic region with one or more Datacentres owned or operated by a Cloud. A Datastore is deployed into a single Region. | | Deployment |
| Availability Zone (AZ) | A Region can have one or more Availability Zones. More than one Availability Zone allows one Datacentre to go down without bringing down all of the Nodes in your Datastore. CCX will automatically attempt to deploy each Node in a Datastore into a different AZ (if the Region supports it). | | Deployment |
| Replication | A method of exchanging data between two Nodes that ensures they stay in sync and allows one Node to fail without bringing your Datastore down. | | Operations |
| Primary / Replica | The recommended deployment for a Production Datastore, with 2 or more Nodes: one acting as the Primary and the other(s) acting as Replicas. | Highly Available, High Availability | Operations |
| Multi-Primary | Multiple Nodes deployed with the same role, all of them acting as the Primary. This topology is not supported by all Databases. | Clustered | Operations |
| Status | The last known status of your Datastore. For details of the possible statuses, see Datastore Statuses. | State | Operations |
| Maintenance | The application of critical security updates to your Datastore. These are applied in your Maintenance Window, which can be configured per Datastore. | | Operations |
| Monitoring | The metrics of the hardware and software for your Datastore. These can be accessed in the CCX Dashboard and can be shown per Node. For details of the metrics available, see Metrics. | | Observability |
3 - Notifications
CCX notifies users by email in case of certain events. Recipients can be configured on the Datastore Settings page or in the Datastore wizard.
| Alert | Description | Action Required? |
|---|---|---|
| Cluster Upgrade | Cluster is being upgraded | No |
| Cluster Storage Resized | Cluster storage has been automatically resized from size to new_size. | No |
| HostAutoScaleDiskSpaceReached | The cluster is running out of storage and will be automatically scaled. | No |
4 - Observability
4.1 - Metrics
4.1.1 - Introduction
CCX uses Prometheus and exporters for monitoring. The monitoring data is exposed through the exporters on each node. Access to this data is controlled under the Firewall tab in the CCX UI.
4.1.2 - MySQL And MariaDB
- MySQL / MariaDB
  - Handler Stats: statistics for the handler. Shown as:
    - Read Rnd: count of requests to read a row based on a fixed position
    - Read Rnd Next: count of requests to read a subsequent row in a data file
    - Read Next: count of requests to read the next row in key order
    - Read Last: count of requests to read the last key in an index
    - Read Prev: count of requests to read the previous row in key order
    - Read First: count of requests to read the first entry in an index
    - Read Key: count of requests to read a row based on an index key value
    - Update: count of requests to update a row
    - Write: count of requests to insert a row into a table
  - Handler Transaction Stats
  - Database Connections: count of connections to the database. Shown as:
    - Threads Connected: count of clients connected to the database
    - Max Connections: count of max connections allowed to the database
    - Max Used Connections: maximum number of connections in use
    - Aborted Clients: number of connections aborted because the client did not close the connection properly
    - Aborted Connects: number of failed connection attempts
    - Connections: number of connection attempts
  - Queries: count of queries executed
  - Scan Operations: count of scan operations for SELECT, UPDATE, and DELETE statements
  - Table Locking: count of table locks. Shown as:
    - Table Locks Immediate: count of table locks that could be granted immediately
    - Table Locks Waited: count of locks that had to wait due to existing locks or another reason
  - Temporary Tables: count of temporary tables created. Shown as:
    - Temporary Tables: count of temporary tables created
    - Temporary Tables on Disk: count of temporary tables created on disk rather than in memory
  - Sorting
  - Aborted Connections: count of failed or aborted connections to the database. Shown as:
    - Aborted Clients: number of connections aborted because the client did not close the connection properly
    - Aborted Connects: number of failed connection attempts
    - Access Denied Errors: count of unsuccessful authentication attempts
  - Memory Utilisation
4.1.3 - PostgreSQL
- PostgreSQL
  - SELECT (fetched): count of rows fetched by queries to the database
  - SELECT (returned): count of rows returned by queries to the database
  - INSERT: count of rows inserted to the database
  - UPDATE: count of rows updated in the database
  - DELETE: count of rows deleted in the database
  - Active Sessions: count of currently running queries
  - Idle Sessions: count of connections to the database that are not currently in use
  - Idle Sessions in transaction: count of connections that have begun a transaction but not yet completed it, while not actively doing work
  - Idle Sessions in transaction (aborted): count of connections that began a transaction that was forcefully aborted before it could complete
  - Lock tables: active locks on the database
  - Checkpoints requested and timed: count of checkpoints requested and scheduled
  - Checkpoint sync time: time spent synchronising checkpoint files to disk
  - Checkpoint write time: time spent writing checkpoints to disk
4.1.4 - Redis
- Redis
  - Blocked Clients: clients blocked while waiting on a command to execute
  - Memory Used: amount of memory used by Redis (in bytes)
  - Connected Clients: count of clients connected to Redis
  - Redis commands per second: count of commands processed per second
  - Total keys: the total count of all keys stored by Redis
  - Replica Lag: the lag (in seconds) between the primary and the replica(s)
4.1.5 - System
- System: hardware-level metrics for your Datastore
  - Load Average: the overall load on your Datastore within the preset period
  - CPU Usage: the breakdown of CPU utilisation for your Datastore, including both System and User processes
  - RAM Usage: the amount of RAM (in Gigabytes) used and available within the preset period
  - Network Usage: the amount of data (in Kilobits or Megabits per second) received and sent within the preset period
  - Disk Usage: the total amount of storage used (in Gigabytes) and what is available within the preset period
  - Disk IO: the input and output utilisation for your disk within the preset period
  - Disk IOPS: the number of read and write operations within the preset period
  - Disk Throughput: the amount of data (in Megabytes per second) that is being read from, or written to, the disk within the preset period
4.1.6 - Valkey
- Valkey
  - Blocked Clients: clients blocked while waiting on a command to execute
  - Memory Used: amount of memory used by Valkey (in bytes)
  - Connected Clients: count of clients connected to Valkey
  - Valkey commands per second: count of commands processed per second
  - Total keys: the total count of all keys stored by Valkey
  - Replica Lag: the lag (in seconds) between the primary and the replica(s)
5 - Products
5.1 - MariaDB
5.1.1 - Backup
Mariabackup is used to create backups.
CCX backs up the Primary server. In multi-primary setups, the node with the highest wsrep_local_index is elected.
Backups are streamed directly to S3 storage.
Mariabackup blocks DDL operations during the backup using the --lock-ddl flag.
Any attempt to CREATE, ALTER, DROP, or TRUNCATE a table during backup creation will be blocked with the status Waiting for backup lock (see SHOW FULL PROCESSLIST).
In this case, wait for the backup to finish and perform the operation later.
Also see the section ‘Schedule’.
Schedule
The backup schedule can be tuned and backups can be paused.
5.1.2 - Configuration
max_connections
- 75 connections / GB of RAM.
- Example: 4GB of RAM yields 300 connections.
- This setting cannot be changed as it affects system stability.
InnoDB settings
- These settings cannot be changed as they affect system stability.
innodb_buffer_pool_size
- 50% of RAM if total RAM is > 4GB
- 25% of RAM if total RAM is <= 4GB
innodb_log_file_size
- 1024 MB if innodb_buffer_pool_size >= 8192MB
- 512 MB if innodb_buffer_pool_size < 8192MB
innodb_buffer_pool_instances
- 8
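Taken together, the rules above can be sketched as a small helper. This is purely illustrative: CCX derives these values itself, and the function name is hypothetical.

```shell
# Illustrative only: reproduces the CCX sizing rules for a node, RAM given in GB.
ccx_mysql_sizing() {
  ram_gb=$1
  max_connections=$((75 * ram_gb))            # 75 connections / GB of RAM
  if [ "$ram_gb" -gt 4 ]; then
    buffer_pool_mb=$((ram_gb * 1024 / 2))     # 50% of RAM if total RAM > 4GB
  else
    buffer_pool_mb=$((ram_gb * 1024 / 4))     # 25% of RAM if total RAM <= 4GB
  fi
  if [ "$buffer_pool_mb" -ge 8192 ]; then     # 1024 MB log file for large buffer pools
    log_file_mb=1024
  else
    log_file_mb=512
  fi
  echo "max_connections=$max_connections buffer_pool=${buffer_pool_mb}M log_file=${log_file_mb}M"
}

ccx_mysql_sizing 4   # max_connections=300 buffer_pool=1024M log_file=512M
```

A 4GB node yields 300 connections, a 1024M buffer pool, and a 512M log file, matching the values above.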
InnoDB options
| variable_name | variable_value |
|---|---|
| innodb_buffer_pool_size | Depends on instance size |
| innodb_flush_log_at_trx_commit | 2 |
| innodb_file_per_table | 1 |
| innodb_data_file_path | Depends on instance size |
| innodb_read_io_threads | 4 |
| innodb_write_io_threads | 4 |
| innodb_doublewrite | 1 |
| innodb_buffer_pool_instances | Depends on instance size |
| innodb_redo_log_capacity | 8G |
| innodb_thread_concurrency | 0 |
| innodb_flush_method | O_DIRECT |
| innodb_autoinc_lock_mode | 2 |
| innodb_stats_on_metadata | 0 |
| default_storage_engine | innodb |
General options
| variable_name | variable_value |
|---|---|
| tmp_table_size | 64M |
| max_heap_table_size | 64M |
| max_allowed_packet | 1G |
| sort_buffer_size | 256K |
| read_buffer_size | 256K |
| read_rnd_buffer_size | 512K |
| memlock | 0 |
| sysdate_is_now | 1 |
| max_connections | Depends on instance size |
| thread_cache_size | 512 |
| table_open_cache | 4000 |
| table_open_cache_instances | 16 |
| lower_case_table_names | 0 |
Storage
Recommended storage size
- We recommend a maximum of 100GB storage per GB of RAM.
- Example: 4GB of RAM yields 400GB of storage.
- The recommendation is not enforced by the CCX platform.
5.1.3 - Importing Data
This procedure describes how to import data to a MariaDB datastore located in CCX.
- The MariaDB Datastore on CCX is denoted as the ‘replica’
- The source of the data is denoted as the ‘source’
note:
If you do not want to set up replication, then you can choose to apply only the sections:
- Create a database dump file
- Apply the dumpfile on the replica
Limitations of MariaDB
MariaDB does not offer as fine-grained control over privileges as MySQL, nor does it have the same level of replication features.
The following properties must be respected in order to comply with the SLA:
- There must be no user management happening on the source while the data is imported and the replication link is active. This is to avoid corruption of the mysql database and possibly other system databases.
- It is recommended to set `binlog-ignore-db` on the source to `mysql`, `performance_schema`, and `sys` during the data import/sync process.
Preparations
Ensure that the source is configured to act as a replication source.
- Binary logging is enabled.
- `server_id` is set to a non-zero value.
Also, prepare the replica with the databases you wish to replicate from the source:
- Using the CCX UI, go to Databases, and issue a Create Database for each database that will be replicated.
Ensure the CCX Firewall is updated:
- Add the replication source as a Trusted Source in the Firewall section of the CCX UI.
Create a replication user on the source
Create a replication user with sufficient privileges on the source:
CREATE USER 'repluser'@'%' IDENTIFIED BY '<SECRET>';
GRANT REPLICATION SLAVE ON *.* TO 'repluser'@'%';
Prepare the replica to replicate from the source
The replica must be instructed to replicate from the source.
Make sure to change <SOURCE_IP>, <SOURCE_PORT>, and <SECRET>.
Run the following on the replica:
CHANGE MASTER TO MASTER_HOST='<SOURCE_IP>', MASTER_PORT=<SOURCE_PORT>, MASTER_USER='repluser', MASTER_PASSWORD='<SECRET>', MASTER_SSL=1;
Create a database dump file of the source
The database dump contains the data that you wish to import into the replica. Only partial dumps are supported: the dump must not contain any mysql or other system databases.
danger: The dump must not contain any mysql or other system databases.
On the source, issue the following command. Change ADMIN, SECRET and DATABASES:
mysqldump -uADMIN -p<SECRET> --master-data --single-transaction --triggers --routines --events --databases DATABASES > dump.sql
If your database dump contains SPROCs, triggers or events, then you must replace DEFINER. This may take a while:
sed 's/\sDEFINER=`[^`]*`@`[^`]*`//g' -i dump.sql
Apply the dumpfile on the replica
cat dump.sql | mysql -uccxadmin -p -h<REPLICA_PRIMARY>
Start the replica
On the replica do:
START SLAVE;
followed by
SHOW SLAVE STATUS;
And verify that:
Slave_IO_State: Waiting for source to send event
..
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
When the migration is complete:
STOP SLAVE;
RESET SLAVE ALL;
Troubleshooting
If replication fails to start, verify the following:
- All the steps above have been followed.
- Ensure that the replication source is added as a Trusted Source in the Firewall section of the CCX UI.
- Ensure that you have the correct IP/FQDN of the replication source.
- Ensure that users are created correctly and using the correct password.
- Ensure that the dump is fresh.
5.1.4 - Limitations
Every product has limitations. Here is a list of MariaDB limitations:
Permissions
The privilege system is not as flexible as in MySQL.
The ‘ccxadmin’ user has the following privileges:
Global / all databases (`*.*`):
- CREATE USER, REPLICATION SLAVE, REPLICATION SLAVE ADMIN, SLAVE MONITOR
On databases created from CCX, the admin user can create new users and grant privileges:
- ALL PRIVILEGES WITH GRANT OPTION
This means that users can only create databases from the CCX UI. Once the database has been created from the CCX UI, then the ccxadmin user can create users and grant user privileges on the database using MariaDB CLI.
5.1.5 - Overview
CCX supports two types of MariaDB clustering:
- MariaDB Replication (Primary-replica configuration)
- MariaDB Cluster (Multi-primary configuration)
For general-purpose applications we recommend MariaDB Replication; we only recommend MariaDB Cluster if you are migrating from an existing application that already uses it.
If you are new to MariaDB Cluster, we strongly recommend reading about the MariaDB Cluster 10.x limitations and the MariaDB Cluster Overview to understand whether your application can benefit from MariaDB Cluster.
MariaDB Replication uses the standard asynchronous replication based on GTIDs.
Scaling
Storage and nodes can be scaled online.
Nodes (horizontal)
- The maximum number of database nodes in a datastore is 5.
- A multi-primary configuration must contain an odd number of nodes (1, 3, or 5).
Nodes (vertical)
A node cannot currently be scaled vertically. To scale to a larger instance type, add larger nodes and then remove the unwanted smaller ones.
Storage
- Maximum size depends on the service provider and instance size
- Volume type cannot currently be changed
Further Reading
5.1.6 - Restore
There are two options to restore a backup:
- Restore a backup on the existing datastore
- Restore a backup on a new datastore
Please note that restoring a backup may be a long-running process.
Restoring on the existing datastore supports point-in-time recovery (PITR): the binary logs are replayed until the desired point in time.
Warning! Running several restores may change the timelines.
Restoring on a new datastore does not currently support PITR.
5.1.7 - TLS Connection
SSL Modes
CCX currently supports connections to MariaDB in two SSL modes:
- REQUIRED: This mode requires an SSL connection. If a client attempts to connect without SSL, the server rejects the connection.
- VERIFY_CA: This mode requires an SSL connection, and the client verifies the server's certificate against the CA certificate it has.
CA Certificate
The Certificate Authority (CA) certificate required for VERIFY_CA mode can be downloaded from your datastore on CCX using an API call or through the user interface on page https://{your_ccx_domain}/projects/default/data-stores/{datastore_id}/settings.
This certificate is used for the VERIFY_CA SSL mode.
Example Commands
Here are example commands for connecting with the MySQL client using the two supported SSL modes:
- REQUIRED mode: `mysql --ssl-mode=REQUIRED -u username -p -h hostname`
- VERIFY_CA mode: `mysql --ssl-mode=VERIFY_CA --ssl-ca=ca.pem -u username -p -h hostname`
require_secure_transport
This is a MariaDB setting that governs whether connections to the datastore are required to use SSL. You can change this setting in CCX under Settings -> DB Parameters.
| Scenario | Server Parameter Settings | Description |
|---|---|---|
| Disable SSL enforcement | require_secure_transport = OFF | This is the default, to support legacy applications. If your legacy application doesn't support encrypted connections, you can disable enforcement of encrypted connections by setting require_secure_transport=OFF. However, connections are encrypted unless SSL is disabled on the client. See the examples below. |
| Enforce SSL | require_secure_transport = ON | This is the recommended configuration. |
Examples
ssl-mode=DISABLED and require_secure_transport=OFF
mysql -uccxadmin -p -h... -P3306 ccxdb --ssl-mode=disabled
...
mysql> \s
--------------
...
Connection id: 52
Current database: ccxdb
Current user: ccxadmin@...
SSL: Not in use
Current pager: stdout
...
ssl-mode=PREFERRED and require_secure_transport=OFF
mysql -uccxadmin -p -h... -P3306 ccxdb --ssl-mode=preferred
...
mysql> \s
--------------
...
Connection id: 52
Current database: ccxdb
Current user: ccxadmin@...
SSL: Cipher in use is TLS_AES_256_GCM_SHA384
Current pager: stdout
...
ssl-mode=DISABLED and require_secure_transport=ON
mysql -uccxadmin -p -h... -P3306 ccxdb --ssl-mode=disabled
mysql: [Warning] Using a password on the command line interface can be insecure.
ERROR 3159 (08004): Connections using insecure transport are prohibited while --require_secure_transport=ON.
ssl-mode=PREFERRED|REQUIRED and require_secure_transport=ON
mysql -uccxadmin -p -h... -P3306 ccxdb --ssl-mode=preferred|required
mysql> \s
--------------
...
Connection id: 52
Current database: ccxdb
Current user: ccxadmin@...
SSL: Cipher in use is TLS_AES_256_GCM_SHA384
Current pager: stdout
...
tls_version
The tls_version is set to the following by default:
| Variable_name | Value |
|---|---|
| tls_version | TLSv1.2,TLSv1.3 |
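The effective value can be confirmed from a client session; the table above is exactly what this statement returns:

```sql
SHOW GLOBAL VARIABLES LIKE 'tls_version';
```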
5.2 - MSSQLServer
5.2.1 - Configurations
Important default values
max_connections
- SQL Server has no direct "max connections per GB of RAM" rule. The actual number of user connections allowed depends on the version of SQL Server that you are using, as well as the limits of your application(s) and hardware.
- SQL Server allows a maximum of 32,767 user connections.
- User connections is a dynamic (self-configuring) option; SQL Server adjusts the maximum number of user connections automatically as needed, up to the maximum value allowable.
- In most cases, you do not have to change the value for this option. The default is 0, which means that the maximum (32,767) user connections are allowed.
- To determine the maximum number of user connections that your system allows, you can execute `sp_configure` or query the `sys.configurations` catalog view.
- For more info: https://learn.microsoft.com/en-us/sql/database-engine/configure-windows/configure-the-user-connections-server-configuration-option?view=sql-server-ver16&viewFallbackFrom=sql-server-ver16
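As mentioned above, the configured and effective values can be inspected with `sp_configure` or the `sys.configurations` catalog view, for example:

```sql
-- Show the 'user connections' option (default 0 = up to the 32,767 maximum)
EXEC sp_configure 'user connections';

-- Or read it from the catalog view
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name = 'user connections';
```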
5.2.2 - Limitations
Every product has limitations. Below is a list of Microsoft SQL Server limitations:
License
- The standard license is applied.
Configurations
- Single node (no High Availability)
- Always On (2 nodes, asynchronous commit mode, High Availability)
Always On-specific limitations
- Refer to the Microsoft standard license for a complete list of limitations.
- Only asynchronous commit mode is currently supported.
- The `ccxdb` database is currently the only supported Always On enabled database.
- Scaling is not supported, as the standard license does not permit more than two nodes.
User-created databases (not Always On) are not transferred to the replica
- In the Always On configuration, only the `ccxdb` database is replicated.
- Data loss may occur for other user-created databases, as they are not transferred to the replica during the add node process. Therefore, they may be lost if a failover, automatic repair, or any other life-cycle management event occurs.
5.2.3 - Overview
CCX supports two Microsoft SQL Server 2022 configurations:
- Single-node (no high availability)
- Always On, two nodes, asynchronous-commit mode (high availability), in a primary-replica configuration
The ‘standard’ license is applied.
Scaling
Scaling is not supported in SQL Server under the standard license.
Storage
- Maximum size depends on the service provider and instance size
- Volume type cannot currently be changed
Further Reading
5.2.4 - User Management
CCX supports creating database users from the web interface.
The database user is created as follows:
CREATE LOGIN username WITH PASSWORD = 'SECRET', DEFAULT_DATABASE=[master], CHECK_EXPIRATION=OFF, CHECK_POLICY=OFF
ALTER SERVER ROLE [sysadmin] ADD MEMBER [username]
5.3 - MySQL
5.3.1 - Backup
Percona XtraBackup is used to create backups.
CCX backs up the Primary server. In multi-primary setups, the node with the highest wsrep_local_index is elected.
Backups are streamed directly to S3 storage.
Percona XtraBackup blocks DDL operations during the backup using the --lock-ddl flag.
Any attempt to CREATE, ALTER, DROP, or TRUNCATE a table during backup creation will be blocked with the status Waiting for backup lock (see SHOW FULL PROCESSLIST).
In this case, wait for the backup to finish and perform the operation later.
Also see the section ‘Schedule’.
Schedule
The backup schedule can be tuned and backups can be paused.
5.3.2 - Configuration
max_connections
- 75 connections / GB of RAM.
- Example: 4GB of RAM yields 300 connections.
- This setting cannot be changed as it affects system stability.
InnoDB settings
- These settings cannot be changed as they affect system stability.
innodb_buffer_pool_size
- 50% of RAM if total RAM is > 4GB
- 25% of RAM if total RAM is <= 4GB
innodb_log_file_size
- 1024 MB if innodb_buffer_pool_size >= 8192MB
- 512 MB if innodb_buffer_pool_size < 8192MB
innodb_buffer_pool_instances
- 8
InnoDB options
| variable_name | variable_value |
|---|---|
| innodb_buffer_pool_size | Depends on instance size |
| innodb_flush_log_at_trx_commit | 2 |
| innodb_file_per_table | 1 |
| innodb_data_file_path | Depends on instance size |
| innodb_read_io_threads | 4 |
| innodb_write_io_threads | 4 |
| innodb_doublewrite | 1 |
| innodb_buffer_pool_instances | Depends on instance size |
| innodb_redo_log_capacity | 8G |
| innodb_thread_concurrency | 0 |
| innodb_flush_method | O_DIRECT |
| innodb_autoinc_lock_mode | 2 |
| innodb_stats_on_metadata | 0 |
| default_storage_engine | innodb |
General options
| variable_name | variable_value |
|---|---|
| tmp_table_size | 64M |
| max_heap_table_size | 64M |
| max_allowed_packet | 1G |
| sort_buffer_size | 256K |
| read_buffer_size | 256K |
| read_rnd_buffer_size | 512K |
| memlock | 0 |
| sysdate_is_now | 1 |
| max_connections | Depends on instance size |
| thread_cache_size | 512 |
| table_open_cache | 4000 |
| table_open_cache_instances | 16 |
| lower_case_table_names | 0 |
Storage
Recommended storage size
- We recommend a maximum of 100GB storage per GB of RAM.
- Example: 4GB of RAM yields 400GB of storage.
- The recommendation is not enforced by the CCX platform.
5.3.3 - Importing Data
This procedure describes how to import data to a MySQL datastore located in CCX.
- The MySQL Datastore on CCX is denoted as the ‘replica’
- The source of the data is denoted as the ‘source’
note:
If you do not want to set up replication, then you can choose to apply only the sections:
- Create a database dump file
- Apply the dumpfile on the replica
Preparations
Ensure that the source is configured to act as a replication source:
- Binary logging is enabled.
- `server_id` is set to a non-zero value.
Ensure the CCX Firewall is updated:
- Add the replication source as a Trusted Source in the Firewall section of the CCX UI.
Create a replication user on the source
Create a replication user with sufficient privileges on the source:
CREATE USER 'repluser'@'%' IDENTIFIED BY '<SECRET>';
GRANT REPLICATION SLAVE ON *.* TO 'repluser'@'%';
Prepare the replica to replicate from the source
The replica must be instructed to replicate from the source:
Make sure to change <SOURCE_IP>, <SOURCE_PORT>, and <SECRET>.
CHANGE REPLICATION SOURCE TO SOURCE_HOST='<SOURCE_IP>', SOURCE_PORT=<SOURCE_PORT>, SOURCE_USER='repluser', SOURCE_PASSWORD='<SECRET>', SOURCE_SSL=1;
Create a replication filter on the replica
The replica filter prevents corruption of the datastore.
If the datastore's system tables are corrupted through replication, the SLA is void and the datastore must be recreated.
CHANGE REPLICATION FILTER REPLICATE_IGNORE_DB=(mysql, sys, performance_schema);
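To confirm the filter is active on the replica, check the Replicate_Ignore_DB field in the replica status output:

```sql
SHOW REPLICA STATUS\G
-- Look for: Replicate_Ignore_DB: mysql,sys,performance_schema
```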
Create a database dump file
The database dump contains the data that you wish to import into the replica. Only partial dumps are supported: the dump must not contain any mysql or other system databases.
On the source, issue the following command. Change USER, SECRET and DATABASES:
mysqldump --set-gtid-purged=OFF -uUSER -pSECRET --master-data --single-transaction --triggers --routines --events --databases DATABASES > dump.sql
Important! If your database dump contains SPROCs, triggers or events, then you must replace DEFINER:
sed 's/\sDEFINER=`[^`]*`@`[^`]*`//g' -i dump.sql
Apply the dumpfile on the replica
cat dump.sql | mysql -uccxadmin -p -h<REPLICA_PRIMARY>
Start the replica
On the replica do:
START REPLICA;
followed by
SHOW REPLICA STATUS;
And verify that:
Replica_IO_State: Waiting for source to send event
..
Replica_IO_Running: Yes
Replica_SQL_Running: Yes
When the migration is complete:
STOP REPLICA;
RESET REPLICA ALL;
CHANGE REPLICATION FILTER REPLICATE_IGNORE_DB=();
Troubleshooting
If replication fails to start, verify the following:
- All the steps above have been followed.
- Ensure that the replication source is added as a Trusted Source in the Firewall section of the CCX UI.
- Ensure that you have the correct IP/FQDN of the replication source.
- Ensure that users are created correctly and using the correct password.
- Ensure that the dump is fresh.
5.3.4 - Importing Data From AWS RDS
This procedure describes how to import data from Amazon RDS to a MySQL datastore located in CCX.
- The MySQL Datastore on CCX is referred to as the ‘CCX Primary’
- The RDS Source of the data is referred to as the ‘RDS Writer’
Schematically, this is what we will set up:

warning:
AWS RDS makes it intentionally difficult to migrate away from. Many procedures on the internet, as well as AWS’s own procedures, will not work.
The migration we suggest here (and the only one we know works) requires that the RDS Writer instance be blocked for writes until a mysqldump has completed. However, AWS RDS blocks operations such as FLUSH TABLES WITH READ LOCK:
mysqldump: Couldn't execute 'FLUSH TABLES WITH READ LOCK': Access denied for user 'admin'@'%' (using password: YES) (1045)
Therefore, the actual applications must be blocked from writing.
Also, some procedures on the internet suggest creating a read-replica. This will not work either, as the AWS RDS Read-replica is crippled and lacks GTID support.
note:
If you don’t want to set up replication, you can choose to only apply the following sections:
- Create a database dump file of the RDS Writer
- Apply the dump file on the CCX replica
Also, practice this a few times before you actually do the migration.
Preparations
- Create a datastore on CCX. Note that you can also replicate from MySQL 8.0 to MySQL 8.4.
- Get the endpoint of the CCX Primary (under the Nodes section). The endpoint in our case is db-9bq15.471ed518-8524-4f37-a3b2-136c68ed3aa6.user-ccx.mydbservice.net.
- Get the endpoint of the RDS Writer. In this example, the endpoint is database-1.cluster-cqc4xehkpymd.eu-north-1.rds.amazonaws.com.
- Update the Security group on AWS RDS to allow the IP address of the CCX Primary to connect. To get the IP address of the CCX Primary, run: dig db-9bq15.471ed518-8524-4f37-a3b2-136c68ed3aa6.user-ccx.mydbservice.net
- Ensure you can connect a MySQL client to both the CCX Primary and the RDS Writer.
Create a Replication User On the RDS Writer Instance
Create a replication user with sufficient privileges on the RDS Writer.
In the steps below, we will use repl and replpassword as the credentials when setting up the replica on CCX.
CREATE USER 'repl'@'%' IDENTIFIED BY 'replpassword';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';
GRANT REPLICATION_SLAVE_ADMIN ON *.* TO 'repl'@'%'; # MySQL 8.0 and later
Block Writes to the RDS Writer Instance
This is the most challenging part. You must ensure your applications cannot write to the Writer instance.
Unfortunately, AWS RDS blocks operations like FLUSH TABLES WITH READ LOCK.
Create a Consistent Dump
Assuming that writes are now blocked on the RDS Writer Instance, you must get the binary log file and the position of the RDS Writer instance.
Get the Replication Start Position
The start position (binary log file name and position) is used to tell the replica where to start replicating data from.
MySQL 8.0: SHOW MASTER STATUS\G
MySQL 8.4 and later: SHOW BINARY LOG STATUS\G
It will output:
*************************** 1. row ***************************
File: mysql-bin-changelog.000901
Position: 584
Binlog_Do_DB:
Binlog_Ignore_DB:
Executed_Gtid_Set: 796aacf3-24ed-11f0-949d-0605a27ab4b9:1-876
1 row in set (0.02 sec)
Record the File: mysql-bin-changelog.000901 and the Position: 584 as they will be used to set up replication.
Create the mysqldump
Be sure to specify the databases you wish to replicate, and omit any system databases. In this example, we will dump the databases prod and crm.
mysqldump -uadmin -p -hdatabase-1.cluster-cqc4xehkpymd.eu-north-1.rds.amazonaws.com --databases prod crm --triggers --routines --events --set-gtid-purged=OFF --single-transaction > dump.sql
Wait for it to complete.
Unblock Writes to the RDS Writer Instance
At this stage, it is safe to enable application writes again.
Load the Dump On the Replica
Create a Replication Filter On the Replica
The replication filter prevents corruption of the datastore, and we are not interested in the changes AWS RDS logs to its mysql.rds* tables anyway. Add any other databases that you do not wish to replicate to the filter as well.
note:
If the CCX datastore’s system tables are corrupted using replication, then the datastore must be recreated.
CHANGE REPLICATION FILTER REPLICATE_IGNORE_DB=(mysql, sys, performance_schema);
Important! If your database dump contains stored procedures, triggers, or events, then you must replace DEFINER:
sed 's/\sDEFINER=`[^`]*`@`[^`]*`//g' -i dump.sql
Apply the Dump File On the CCX Primary:
cat dump.sql | mysql -uccxadmin -p -hCCX_PRIMARY
Connect the CCX Primary to the RDS Writer Instance
The CCX Primary must be instructed to replicate from the RDS Writer. We have the binlog file and position from the earlier step:
- mysql-bin-changelog.000901
- 584
CHANGE REPLICATION SOURCE TO SOURCE_HOST='database-1.cluster-cqc4xehkpymd.eu-north-1.rds.amazonaws.com', SOURCE_PORT=3306, SOURCE_USER='repl', SOURCE_PASSWORD='replpassword', SOURCE_SSL=1, SOURCE_LOG_FILE='mysql-bin-changelog.000901', SOURCE_LOG_POS=584;
Start the Replica
On the replica, run:
START REPLICA;
followed by:
SHOW REPLICA STATUS\G
And verify that:
Replica_IO_State: Waiting for source to send event
...
Replica_IO_Running: Yes
Replica_SQL_Running: Yes
When the Migration is Ready
At some point, you will need to point your applications to the new datastore. Ensure:
- Prevent writes to the RDS Writer
- Make sure the CCX Primary has applied all data (use
SHOW REPLICA STATUS) - Connect the applications to the new datastore
Then you can clean up the replication link on the CCX Primary:
STOP REPLICA;
RESET REPLICA ALL;
CHANGE REPLICATION FILTER REPLICATE_IGNORE_DB=();
Troubleshooting
If the replication fails to start, verify:
- All the steps above have been followed
- Ensure that the IP address of the CCX Primary is added to the security group used by the RDS Writer instance
- Ensure that you have the correct IP/FQDN of the RDS Writer instance
- Ensure that users are created correctly and using the correct password
- Ensure that the dump is fresh
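When replication starts and then stops, the error fields in the replica status usually point at the cause. A quick check on the CCX Primary (the fields below are standard MySQL 8.x replica status fields):

```sql
SHOW REPLICA STATUS\G
-- Fields worth inspecting when replication fails:
--   Last_IO_Error:  connection problems (security group, wrong host, bad credentials)
--   Last_SQL_Error: problems applying events (stale dump, missing schema, filtered databases)
```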
5.3.5 - Importing Data From GCP
This procedure describes how to import data from Google Cloud SQL to a MySQL datastore located in CCX.
- The MySQL Datastore on CCX is referred to as the ‘CCX Primary’
- The GCP Source of the data is referred to as the ‘GCP Primary’
Schematically, this is what we will set up:

note:
If you don’t want to set up replication, you can choose to only apply the following sections:
- Create a database dump file of the GCP Primary
- Apply the dump file on the CCX replica
Also, practice this a few times before you actually do the migration.
Preparations
- Create a datastore on CCX. Note that you can also replicate from MySQL 8.0 to MySQL 8.4.
- Get the endpoint of the CCX Primary (under the Nodes section). The endpoint in our case is db-9bq15.471ed518-8524-4f37-a3b2-136c68ed3aa6.user-ccx.mydbservice.net.
- The GCP Primary must have a Public IP.
- Get the endpoint of the GCP Primary. In this example, the endpoint is 34.51.xxx.xxx.
- Update the security group on GCP to allow the IP address of the CCX Primary to connect. To get the IP address of the CCX Primary, run dig db-9bq15.471ed518-8524-4f37-a3b2-136c68ed3aa6.user-ccx.mydbservice.net.
- Ensure you can connect a MySQL client to both the CCX Primary and the GCP Primary.
Create a Replication User on the GCP Primary Instance
Create a replication user with sufficient privileges on the GCP Primary.
In the steps below, we will use repl and replpassword as the credentials when setting up the replica on CCX.
CREATE USER 'repl'@'%' IDENTIFIED BY 'replpassword';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';
GRANT REPLICATION_SLAVE_ADMIN ON *.* TO 'repl'@'%'; # MySQL 8.0 and later
Create the mysqldump
Be sure to specify the databases you wish to replicate, and omit any system databases. In this example, we will dump the databases prod and crm.
mysqldump -uroot -p -h34.51.xxx.xxx --databases prod crm --triggers --routines --events --set-gtid-purged=OFF --source-data --single-transaction > dump.sql
Wait for it to complete.
Load the Dump on the Replica
Create a Replication Filter on the Replica
The replication filter prevents corruption of the datastore, and we are not interested in changes that Google Cloud SQL logs to its own system tables anyway. Add any other databases that you do not wish to replicate to the filter as well.
note:
If the CCX datastore’s system tables are corrupted using replication, then the datastore must be recreated.
CHANGE REPLICATION FILTER REPLICATE_IGNORE_DB=(mysql, sys, performance_schema);
Important! If your database dump contains stored procedures, triggers, or events, then you must replace DEFINER:
sed 's/\sDEFINER=`[^`]*`@`[^`]*`//g' -i dump.sql
Apply the Dump File on the CCX Primary:
cat dump.sql | mysql -uccxadmin -p -hCCX_PRIMARY
Connect the CCX Primary to the GCP Primary
Issue the following commands on the CCX Primary:
CHANGE REPLICATION SOURCE TO SOURCE_HOST='34.51.xxx.xxx', SOURCE_PORT=3306, SOURCE_USER='repl', SOURCE_PASSWORD='replpassword', SOURCE_SSL=1;
Start the Replica
On the CCX Primary, run:
START REPLICA;
followed by:
SHOW REPLICA STATUS\G
And verify that:
Replica_IO_State: Waiting for source to send event
..
Replica_IO_Running: Yes
Replica_SQL_Running: Yes
When the Migration is Ready
At some point, you will need to point your applications to the new datastore. Ensure:
- There are no application writes to the GCP Primary
- The CCX Primary has applied all data (use
SHOW REPLICA STATUS\G, check Seconds_Behind_Source) - Connect the applications to the new datastore
Then you can clean up the replication link on the CCX Primary:
STOP REPLICA;
RESET REPLICA ALL;
CHANGE REPLICATION FILTER REPLICATE_IGNORE_DB=();
Troubleshooting
If the replication fails to start, verify:
- All the steps above have been followed
- Ensure that the IP address of the CCX Primary is added to the security group used by the GCP Primary instance
- Ensure that you have the correct IP/FQDN of the GCP Primary instance
- Ensure that users are created correctly and using the correct password
- Ensure that the dump is fresh
5.3.6 - Limitations
Every product has limitations. Here is a list of MySQL limitations:
Permissions
The privilege system in MySQL offers more capabilities than MariaDB’s. Hence, the ‘ccxadmin’ user has more privileges in MySQL than in MariaDB.
The ‘ccxadmin’ user has the following privileges:
- Global / all databases (
*.*):- SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, PROCESS, REFERENCES, INDEX, ALTER, SHOW DATABASES, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, REPLICATION SLAVE, REPLICATION CLIENT, CREATE VIEW, REPLICATION_SLAVE_ADMIN, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, CREATE USER, EVENT, TRIGGER, GRANT
This means that the ‘ccxadmin’ may assign privileges to users on all databases.
Restrictions:
‘ccxadmin’ is not allowed to modify the following databases: mysql.* and sys.*.
For those databases, the following privileges have been revoked from ‘ccxadmin’:
- INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, INDEX, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, EVENT, TRIGGER
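As an illustration of the GRANT privilege above, ‘ccxadmin’ can create an application user and delegate privileges on an application database. A minimal sketch (the user and database names are hypothetical):

```sql
-- Hypothetical application user created by 'ccxadmin'
CREATE USER 'app_rw'@'%' IDENTIFIED BY 'a-strong-password';
GRANT SELECT, INSERT, UPDATE, DELETE ON prod.* TO 'app_rw'@'%';
```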
5.3.7 - Overview
CCX supports two types of MySQL clustering:
- MySQL Replication (Primary-replica configuration)
- Percona XtraDB Cluster (Multi-primary configuration)
For general-purpose applications we recommend MySQL Replication; we only recommend Percona XtraDB Cluster if you are migrating an existing application that already uses it.
If you are new to Percona XtraDB Cluster, we strongly recommend reading about the Percona XtraDB Cluster limitations and the Percona XtraDB Cluster Overview to understand whether your application can benefit from it.
MySQL Replication uses the standard asynchronous replication based on GTIDs.
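Since replication is GTID-based, you can confirm GTID mode with a standard MySQL query; on a replication datastore it should report ON:

```sql
SHOW GLOBAL VARIABLES LIKE 'gtid_mode';
```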
Scaling
Storage and nodes can be scaled online.
Nodes (horizontal)
- The maximum number of database nodes in a datastore is 5.
- A multi-primary configuration must contain an odd number of nodes (1, 3, or 5).
Nodes (vertical)
Nodes cannot currently be scaled vertically. To move to a larger instance type, add a node with the larger instance size and then remove the unwanted smaller nodes.
Storage
- Maximum size depends on the service provider and instance size
- Volume type cannot currently be changed
Further Reading
5.3.8 - Restore
There are two options to restore a backup:
- Restore a backup on the existing datastore
- Restore a backup on a new datastore
Please note that restoring a backup may be a long-running process.
Restoring a backup on the existing datastore supports point-in-time recovery (PITR): the binary logs are replayed until the desired point in time. Warning! Running several restores may change the timelines.
Restoring a backup on a new datastore does not currently support PITR.
5.3.9 - TLS Connection
SSL Modes
CCX currently supports connections to MySQL in two SSL modes:
-
REQUIRED: This mode requires an SSL connection. If a client attempts to connect without SSL, the server rejects the connection. -
VERIFY_CA: This mode requires an SSL connection and the server must verify the client’s certificate against the CA certificates that it has.
CA Certificate
The Certificate Authority (CA) certificate required for VERIFY_CA mode can be downloaded from your datastore on CCX using an API call or through the user interface at https://{your_ccx_domain}/projects/default/data-stores/{datastore_id}/settings.
This certificate is used for the VERIFY_CA SSL mode.
Example Commands
Here are example commands for connecting to the MySQL server using the two supported SSL modes:
-
REQUIREDmode:mysql --ssl-mode=REQUIRED -u username -p -h hostname -
VERIFY_CAmode:mysql --ssl-mode=VERIFY_CA --ssl-ca=ca.pem -u username -p -h hostname
require_secure_transport
This MySQL setting governs whether connections to the datastore are required to use SSL. You can change it in CCX under Settings -> DB Parameters:
| Scenario | Server Parameter Settings | Description |
|---|---|---|
| Disable SSL enforcement | require_secure_transport = OFF |
This is the default to support legacy applications. If your legacy application doesn’t support encrypted connections, you can disable enforcement of encrypted connections by setting require_secure_transport=OFF. However, connections are encrypted unless SSL is disabled on the client. See examples |
| Enforce SSL | require_secure_transport = ON |
This is the recommended configuration. |
Examples
ssl-mode=DISABLED and require_secure_transport=OFF
mysql -uccxadmin -p -h... -P3306 ccxdb --ssl-mode=disabled
...
mysql> \s
--------------
...
Connection id: 52
Current database: ccxdb
Current user: ccxadmin@...
*SSL: Not in use*
Current pager: stdout
...
ssl-mode=PREFERRED and require_secure_transport=OFF
mysql -uccxadmin -p -h... -P3306 ccxdb --ssl-mode=preferred
...
mysql> \s
--------------
...
Connection id: 52
Current database: ccxdb
Current user: ccxadmin@...
SSL: Cipher in use is TLS_AES_256_GCM_SHA384
Current pager: stdout
...
ssl-mode=DISABLED and require_secure_transport=ON
mysql -uccxadmin -p -h... -P3306 ccxdb --ssl-mode=disabled
mysql: [Warning] Using a password on the command line interface can be insecure.
ERROR 3159 (08004): Connections using insecure transport are prohibited while --require_secure_transport=ON.
ssl-mode=PREFERRED|REQUIRED and require_secure_transport=ON
mysql -uccxadmin -p -h... -P3306 ccxdb --ssl-mode=preferred|required
mysql> \s
--------------
...
Connection id: 52
Current database: ccxdb
Current user: ccxadmin@...
SSL: Cipher in use is TLS_AES_256_GCM_SHA384
Current pager: stdout
...
tls_version
The tls_version is set to the following by default:
| Variable_name | Value |
|---|---|
| tls_version | TLSv1.2,TLSv1.3 |
5.3.10 - User Management
CCX supports creating database users from the web interface. The database user has the following privileges:
- Global / all databases (
*.*):- SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, PROCESS, REFERENCES, INDEX, ALTER, SHOW DATABASES, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, REPLICATION SLAVE, REPLICATION CLIENT, CREATE VIEW, REPLICATION_SLAVE_ADMIN, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, CREATE USER, EVENT, TRIGGER, GRANT
This means that the database user may assign privileges to users on all databases.
Restrictions:
The database user is not allowed to modify the following databases: mysql.* and sys.*.
For those databases, the following privileges have been revoked from the database user:
- INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, INDEX, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, EVENT, TRIGGER
5.4 - PostgreSQL
5.4.1 - Backup
pg_basebackup is used to create backups. Also see the section ‘Schedule’.
CCX backs up the secondary server.
Backups are streamed directly to S3 storage.
Schedule
The backup schedule can be tuned and backups can be paused
5.4.2 - Configuration
These settings cannot be changed as they affect system stability.
Important default values
| Parameter | Default value |
|---|---|
| wal_keep_size | 1024 (v.1.50+) / 512 |
| max_wal_senders | min 16, max 4 x Db Node count |
| wal_level | replica |
| hot_standby | ON |
| max_connections | see below |
| shared_buffers | instance_memory x 0.25 |
| effective_cache_size | instance_memory x 0.75 |
| work_mem | instance_memory / max_connections |
| maintenance_work_mem | instance_memory/16 |
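To make the derived defaults concrete, here is a small sketch of the arithmetic for a hypothetical 16 GiB instance (the actual values are computed by CCX; max_connections is taken from the table in the next section):

```shell
# Hypothetical 16 GiB instance (values in MiB); max_connections=400 for 16 GiB RAM
instance_memory=16384
max_connections=400
echo "shared_buffers       = $((instance_memory / 4)) MiB"               # instance_memory x 0.25
echo "effective_cache_size = $((instance_memory * 3 / 4)) MiB"           # instance_memory x 0.75
echo "work_mem             = $((instance_memory / max_connections)) MiB" # instance_memory / max_connections
echo "maintenance_work_mem = $((instance_memory / 16)) MiB"              # instance_memory / 16
```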
Max connections
The maximum number of connections depends on the instance size. The number of connections can be scaled by adding a new database secondary with a larger instance size. The new replica can then be promoted to the new primary. See Promoting a replica for more information.
| Instance size (GiB RAM) | Max connections |
|---|---|
| < 4 | 100 |
| 8 | 200 |
| 16 | 400 |
| 32 | 800 |
| 64+ | 1000 |
Archive mode
All nodes are configured with archive_mode=always.
Auto-vacuum
Auto-vacuum settings are set to their defaults. Please read more about automatic vacuuming here
5.4.3 - Extensions
Supported extensions
| Extension | Postgres version |
|---|---|
| vector (pgvector) | 15 and later |
| postgis | 15 and later |
Creating an extension
Connect to PostgreSQL using an admin account (e.g. ccxadmin).
CREATE EXTENSION vector;
CREATE EXTENSION
See Postgres documentation for more information.
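After CREATE EXTENSION vector succeeds, the vector type and distance operators become available. A minimal sketch with a hypothetical table:

```sql
CREATE TABLE items (id bigserial PRIMARY KEY, embedding vector(3));
INSERT INTO items (embedding) VALUES ('[1,2,3]'), ('[4,5,6]');
-- Nearest neighbour by Euclidean distance (the <-> operator is provided by pgvector)
SELECT id FROM items ORDER BY embedding <-> '[2,2,2]' LIMIT 1;
```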
5.4.4 - Importing Data
This procedure describes how to import data to a PostgreSQL datastore located in CCX.
- The PostgreSQL Datastore on CCX is denoted as the ‘replica’
- The source of the data is denoted as the ‘source’
Create a database dump file
Dump the schema and data of the <DATABASE> you wish to import:
pg_dump --no-owner -d<DATABASE> > /tmp/DATABASE.sql
Apply the dump file on the replica
postgres=# CREATE DATABASE <DATABASE>;
Copy the DSN from Nodes, Connection Information in the CCX UI.
Change ‘ccxdb’ to <DATABASE>:
psql postgres://ccxadmin:.../<DATABASE> < /tmp/DATABASE.sql
5.4.5 - Limitations
Every product has limitations. Here is a list of PostgreSQL limitations:
Permissions
PostgreSQL users are created with the following permissions:
- NOSUPERUSER, CREATEROLE, LOGIN, CREATEDB
5.4.6 - Restore
Postgres is configured with archive_command and archive_mode=always.
Moreover, during a restore the restore_command is set.
There are two options to restore a backup:
- Restore a backup on the existing datastore
- Restore a backup on a new datastore
Please note that restoring a backup may be a long-running process.
Restoring a backup on the existing datastore supports point-in-time recovery (PITR): the WAL is replayed until the desired point in time. Warning! Running several restores may change the timelines.
Restoring a backup on a new datastore does not currently support PITR.
5.5 - Redis
5.5.1 - Backup
A backup of Redis consists of both RDB and AOF.
Schedule
The backup schedule can be tuned and backups can be paused
5.5.2 - Configuration
Volume size
Since Redis is an in-memory database, the storage size is fixed at twice the amount of RAM. Thus, it is not possible to:
- specify the storage size in the deployment wizard.
- scale the storage.
Persistence
Redis is configured to use both AOF and RDB for persistence. The following configuration parameters are set:
- appendonly yes
- default values for AOF
- default values for RDB
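In redis.conf terms, the persistence settings above correspond to something like the following (the RDB and AOF values shown are the Redis 7 defaults, for reference only; CCX manages the actual configuration):

```
appendonly yes
# Default RDB snapshot schedule
save 3600 1 300 100 60 10000
# Default AOF fsync policy
appendfsync everysec
```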
5.5.3 - User Management
CCX simplifies Redis user management by providing a clear and intuitive user interface for managing privileges and accounts. Below are detailed instructions and explanations for managing Redis users within the CCX environment.
Viewing Existing Users
To view existing Redis users:
- Navigate to the Users section in your CCX Redis cluster.
- Here you’ll see a list of existing user accounts along with their associated privileges.

User Information Displayed:
- Account: Username of the Redis user.
- Privileges: Specific privileges granted or filtered out.
- Actions: Options to manage (modify/delete) the user.
Note: By default, the
-@adminand-@dangerousprivileges are filtered out for security purposes.
Creating a New Redis Admin User

To create a new Redis admin user:
-
Click on the Create Admin user button.
-
Fill in the required fields:
- Username: Enter the desired username.
- Password: Enter a secure password for the user.
- Categories: Enter the privilege categories. By default, using
+@allwill grant all privileges except those explicitly filtered (like-@adminand-@dangerous).
-
Optionally, you can define more granular restrictions:
-
Commands: Enter commands to explicitly allow (
+) or disallow (-). For example:- Allow command:
+get - Disallow command:
-get
- Allow command:
-
Channels: Specify Redis Pub/Sub channels. You can allow (
&channel) or disallow (-&channel). -
Keys: Specify key access patterns. Use the syntax
~keyto allow or~-keyto disallow access to specific keys or patterns.
-
-
After completing the form, click on the Create button to save the new user.
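Under the hood, these form fields map onto Redis ACL rules. A hypothetical equivalent expressed via redis-cli (the user name, password, key pattern, and channel are examples only):

```
ACL SETUSER reporting on >strong-password +@all -@admin -@dangerous ~reports:* &notifications
```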
Default Privilege Filtering
CCX ensures the security of your Redis instance by automatically filtering potentially harmful privileges:
-@admin: Restricts administrative commands.-@dangerous: Restricts commands that could compromise the cluster’s stability.
These privileges cannot be granted through CCX’s standard user interface for security reasons.
Firewall and Access Control
User accounts in CCX Redis clusters are protected by built-in firewall rules:
- Accounts are only allowed to connect from trusted sources defined in the firewall settings.
Ensure your firewall rules are properly configured to maintain secure access control to your Redis users.
5.6 - Valkey
5.6.1 - Backup
A backup of Valkey consists of both RDB and AOF.
Schedule
The backup schedule can be tuned and backups can be paused
5.6.2 - Configuration
Volume size
Since Valkey is an in-memory database, the storage size is fixed at twice the amount of RAM. Thus, it is not possible to:
- specify the storage size in the deployment wizard.
- scale the storage.
Persistence
Valkey is configured to use both AOF and RDB for persistence. The following configuration parameters are set:
- appendonly yes
- default values for AOF
- default values for RDB
5.6.3 - User Management
CCX simplifies Valkey user management by providing a clear and intuitive user interface for managing privileges and accounts. Below are detailed instructions and explanations for managing Valkey users within the CCX environment.
Viewing Existing Users
To view existing Valkey users:
- Navigate to the Users section in your CCX Valkey cluster.
- Here you’ll see a list of existing user accounts along with their associated privileges.

User Information Displayed:
- Account: Username of the Valkey user.
- Privileges: Specific privileges granted or filtered out.
- Actions: Options to manage (modify/delete) the user.
Note: By default, the
-@adminand-@dangerousprivileges are filtered out for security purposes.
Creating a New Valkey Admin User

To create a new Valkey admin user:
-
Click on the Create Admin user button.
-
Fill in the required fields:
- Username: Enter the desired username.
- Password: Enter a secure password for the user.
- Categories: Enter the privilege categories. By default, using
+@allwill grant all privileges except those explicitly filtered (like-@adminand-@dangerous).
-
Optionally, you can define more granular restrictions:
-
Commands: Enter commands to explicitly allow (
+) or disallow (-). For example:- Allow command:
+get - Disallow command:
-get
- Allow command:
-
Channels: Specify Valkey Pub/Sub channels. You can allow (
&channel) or disallow (-&channel). -
Keys: Specify key access patterns. Use the syntax
~keyto allow or~-keyto disallow access to specific keys or patterns.
-
-
After completing the form, click on the Create button to save the new user.
Default Privilege Filtering
CCX ensures the security of your Valkey instance by automatically filtering potentially harmful privileges:
-@admin: Restricts administrative commands.-@dangerous: Restricts commands that could compromise the cluster’s stability.
These privileges cannot be granted through CCX’s standard user interface for security reasons.
Firewall and Access Control
User accounts in CCX Valkey clusters are protected by built-in firewall rules:
- Accounts are only allowed to connect from trusted sources defined in the firewall settings.
Ensure your firewall rules are properly configured to maintain secure access control to your Valkey users.
6 - Supported Databases
| Database | Topology | CCX Supported | EOL | Notes |
|---|---|---|---|---|
| MariaDB | Primary/Replica, Multi-Primary | 10.11 | 16 Feb 2028 | |
| | Primary/Replica, Multi-Primary | 11.4 | 29 May 2029 | |
| MySQL | Primary/Replica, Multi-Primary | 8.0 | April 2026 | |
| | Primary/Replica, Multi-Primary | 8.4 | 30 Apr 2029 | |
| PostgreSQL | Primary/Replica | 14 | 12 Nov 2026 | |
| | Primary/Replica | 15 | 11 Nov 2027 | |
| | Primary/Replica | 16 | 8 Nov 2028 | |
| Redis | Sentinel | 7.2 | deprecated | |
| Valkey | Sentinel | 8 | tbd | |
| Microsoft SQL Server for Linux | Single Instance | 2022 | 2027? | |
| | Primary/Replica (Always On) | 2022 | 2027? | |