1 - Backup and Restore via CLI

Backup and Restore databases with the help of CLI tools

Overview

This guide will help you get started with creating and restoring your own backups using various database CLI tools.
For the built-in backup functionality, please see here.

PostgreSQL

Backup

The client we are using in this guide is pg_dump which is included in the PostgreSQL client package. It is recommended to use the same client version as your server version.

The basic syntax and an example for dumping a PostgreSQL database with the official tool pg_dump are shown below. To connect and authenticate with a remote host, you can specify this information with options, environment variables, or a password file.

Usage & example

pg_dump [OPTION]... [DBNAME]
  -h, --host=HOSTNAME      database server host or socket directory (default: "local socket")
  -p, --port=PORT          database server port (default: "5432")
  -U, --username=USERNAME  database user name (default: "$USER")
  -f, --file=FILENAME      output file or directory name
  -d, --dbname=DBNAME      database to dump
pg_dump -h mydatabaseserver -U mydatabaseuser -f dump.sql -d mydatabase

Environment variables

As mentioned, we can also specify the connection and authentication information via environment variables. By default, the client checks whether the environment variables below are set.

For a full list, check out the documentation under PostgreSQL Documentation.

PGDATABASE
PGHOST
PGOPTIONS
PGPORT
PGUSER
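
For example, the connection options from the pg_dump command above can be moved into environment variables (the hostname, user, and database names are the same placeholder values as in the earlier example):

```shell
export PGHOST=mydatabaseserver
export PGPORT=5432
export PGUSER=mydatabaseuser
export PGDATABASE=mydatabase
# With these set, the earlier command shrinks to:
# pg_dump -f dump.sql
```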

It is not recommended to specify the password via the above methods, which is why it is not listed here. For the password it is better to use a so-called password file. By default, the client checks the user’s home directory for a file named .pgpass. Read more about the password file in the official documentation linked under PostgreSQL Documentation.
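
As a sketch, a .pgpass entry matching the example connection above could look like this (the password is a placeholder, and the file must have 0600 permissions or the client will ignore it):

```text
# hostname:port:database:username:password
mydatabaseserver:5432:mydatabase:mydatabaseuser:mysecretpassword
```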

Restore

To restore a database we will use the client psql which is also included in the PostgreSQL client package. It is recommended to use the same client version as your server version.

Usage & example

psql [OPTION]... [DBNAME [USERNAME]]
  -h, --host=HOSTNAME      database server host or socket directory (default: "local socket")
  -p, --port=PORT          database server port (default: "5432")
  -U, --username=USERNAME  database user name (default: "$USER")
  -f, --file=FILENAME      execute commands from file, then exit
  -d, --dbname=DBNAME      database name to connect to
psql -h mydatabaseserver -U mydatabaseuser -f dump.sql -d mydatabase

PostgreSQL Documentation

  • PostgreSQL 11/14 - pg_dump
  • PostgreSQL 11/14 - The Password file
  • PostgreSQL 11/14 - Environment variables
  • PostgreSQL 11/14 - SQL Dump

MariaDB

Backup

The client we are using in this guide is mariadb-dump which is included in the MariaDB client package.

The basic syntax and an example to dump a MariaDB database with the official tool mariadb-dump is shown below together with some of the options we will use.

Usage & example

mariadb-dump [OPTIONS] database [tables]
OR     mariadb-dump [OPTIONS] --databases DB1 [DB2 DB3...]
-h, --host=name       Connect to host.
-B, --databases       Dump several databases...
-q, --quick           Don't buffer query, dump directly to stdout.
--single-transaction  Creates a consistent snapshot by dumping all tables
                      in a single transaction...
--skip-lock-tables    Disable the default setting to lock tables

For a full list of options, check out the documentation under MariaDB Documentation.

Depending on your specific needs and the scope of the backup, you might need to use the pre-created database user. This is because any subsequent users created in the portal are set up with permissions to a specific database, while the pre-existing admin user has more global permissions that are needed for some of the dump options.

mariadb-dump -h mydatabaseserver -B mydatabase --quick --single-transaction --skip-lock-tables > dump.sql

It is not recommended to specify the password via the command line. Consider using an option file instead; by default, the client checks the user’s home directory for a file named .my.cnf. You can read more about option files in the official documentation linked under MariaDB Documentation.
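
As a sketch, a minimal ~/.my.cnf holding the credentials for the example above (the password is a placeholder; restrict the file to 0600 permissions):

```ini
[client]
user = mydatabaseuser
password = mysecretpassword
```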

Restore

To restore the database from the dump file we will use the tool mariadb that is also included in the MariaDB client package.

Usage & example

mariadb [OPTIONS] [database]
-h, --host=name     Connect to host
mariadb -h mydatabaseserver mydatabase < dump.sql

MariaDB Documentation

MySQL

Backup

The client we are using in this guide is mysqldump which is included in the MySQL client package.

The basic syntax and an example to dump a MySQL database with the official tool mysqldump is shown below together with some of the options we will use.

Usage & example

mysqldump [OPTIONS] database [tables]
OR     mysqldump [OPTIONS] --databases DB1 [DB2 DB3...]
-h, --host=name       Connect to host.
-B, --databases       Dump several databases...
-q, --quick           Don't buffer query, dump directly to stdout.
--single-transaction  Creates a consistent snapshot by dumping all tables
                      in a single transaction...
--skip-lock-tables    Disable the default setting to lock tables
--no-tablespaces      Do not write any CREATE LOGFILE GROUP or 
                      CREATE TABLESPACE statements in output 

For a full list of options, check out the documentation under MySQL Documentation.

Depending on your specific needs and the scope of the backup, you might need to use the pre-created database user. This is because any subsequent users created in the portal are set up with permissions to a specific database, while the pre-existing admin user has more global permissions that are needed for some of the dump options.

mysqldump -h mydatabaseserver -B mydatabase --quick --single-transaction --skip-lock-tables --no-tablespaces > dump.sql

It is not recommended to specify the password via the command line. Consider using an option file instead; by default, the client checks the user’s home directory for a file named .my.cnf. You can read more about option files in the official documentation linked under MySQL Documentation.

Restore

To restore the database from the dump file we will use the tool mysql that is also included in the MySQL client package.

Usage & example

mysql [OPTIONS] [database]
-h, --host=name     Connect to host
mysql -h mydatabaseserver mydatabase < dump.sql

MySQL Documentation

2 - Backup and Restore via DBaaS UI

Overview and examples of Elastx DBaaS built-in backup functionality

Overview

All our supported database types come with built-in backup functionality, which is enabled by default. Backups are stored in our object storage, which is encrypted at rest and utilizes all of our availability zones for the highest availability. You can easily set the number of backups per day, the preferred time of day, and the retention period in our DBaaS UI. For MySQL, MariaDB and PostgreSQL we also support creating new datastores from backup, making it easy to create a new database cluster using another cluster as a base.
For backup pricing, you can use our DBaaS price calculator found here: ECP-DBaaS

Good to know

Be aware: Please note that if you delete a datastore, all backups for that datastore will also be deleted. This action cannot be reverted.

  • Backups are taken for the whole datastore.
  • Maximum backup retention period is 90 days. Default value is 7 days.
  • There’s no storage quota for backups.
  • Incremental backups are supported and enabled by default on MySQL and MariaDB.
  • Backups cannot be downloaded locally. To create an offsite backup, you can use one of the CLI-tools. See here for some examples.
  • Creating new datastores from previously taken backups is supported for MySQL, MariaDB and PostgreSQL.

Manage backups

Begin by logging into your Elastx DBaaS account, choose your datastore and go to Backups.
Under this tab you will see all the previously taken backups for the chosen datastore. If you just created this datastore, it might be empty.

Retention Period

To change the retention period, click on Backup settings in the top right corner, set your preferred retention period and click Save.

Backup schedules

For datastores running MySQL and MariaDB you have the ability to set schedules for both full and incremental backups.
To change how often and when your backups should run, click on Backup Schedules in the left corner.
Select the backup type you want to change and choose edit:

  • Incremental backups can be set to run every 15, 30, or 60 minutes.
  • Full backups can be set to run hourly or daily. Set your preferred time in UTC.

Restore backup on your running datastore

Be aware: Please note that this process will completely overwrite your current data and all changes since your last backup will be lost.

Go to the Backups tab for the datastore you want to restore. Select the preferred backup, click on the three dots under Actions and choose restore.

Create a new datastore from backup

For MySQL, MariaDB and PostgreSQL you have the ability to use a backup as a base for a new datastore.
Go to backups and click on the three dots under actions for the backup you want to use as a base and select Create Datastore.
A new datastore will be created with the same specification and name (with extension _Copy) as the base datastore.
When it’s finished, you can rename your new datastore by going to Settings > Datastore name.

Disable backups

Be aware: Not recommended. Please note that if you disable full backups, no backups will be taken after this point until you manually enable them again.

Go to the Backups tab for the datastore you want to pause backups for. Select Backup Schedules, click on the three dots for the type of backup you want to disable and choose pause. To re-enable backups, take the same steps and choose enable.

3 - Config Management

note: Deprecated in v1.51 in favor of parameter groups. Please see Parameter Groups.

In CCX, you have the ability to fine-tune your database performance by adjusting various DB Parameters. These parameters control the behavior of the database server and can impact performance, resource usage, and compatibility.

img

Available DB Parameters

This is an example; it is subject to change and depends on the configuration of CCX.

  1. group_concat_max_len

    • Description: Specifies the maximum allowed result length of the GROUP_CONCAT() function.
    • Max: 104857600 | Min: 1024 | Default: 1024
  2. interactive_timeout

    • Description: Sets the number of seconds the server waits for activity on an interactive connection before closing it.
    • Max: 28800 | Min: 3000 | Default: 28800
  3. max_allowed_packet

    • Description: Specifies the maximum size of a packet or a generated/intermediate string.
    • Max: 1073741824 | Min: 536870912 | Default: 536870912
  4. sql_mode

    • Description: Defines the SQL mode for MySQL, which affects behaviors such as handling of invalid dates and zero values.
    • Default: ONLY_FULL_GROUP_BY, STRICT_TRANS_TABLES, NO_ZERO_IN_DATE, NO_ZERO_DATE, ERROR_FOR_DIVISION_BY_ZERO, NO_ENGINE_SUBSTITUTION
  5. table_open_cache

    • Description: Sets the number of open tables for all threads.
    • Max: 10000 | Min: 4000 | Default: 4000
  6. wait_timeout

    • Description: Defines the number of seconds the server waits for activity on a non-interactive connection before closing it.
    • Max: 28800 | Min: 3000 | Default: 28800
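
Current values can also be inspected from a database client. A quick sketch for MySQL, using variable names from the list above:

```sql
-- Show the current value of a single parameter
SHOW VARIABLES LIKE 'max_allowed_packet';
-- Or several related ones at once
SHOW VARIABLES LIKE '%timeout%';
```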

How to Change Parameters

  1. Navigate to the DB Parameters tab within the Settings section.
  2. Review the list of available parameters and their current values.
  3. Click on the Edit Parameters button in the upper-right corner.
  4. Adjust the values as necessary within the defined minimum and maximum limits.
  5. Once you’ve made the required changes, save the new configuration.

note: The latest saved settings are applied when adding a node (either as part of Scaling, during Lifecycle management, or during automatic repair).

Best Practices

  • Understand the impact: Changing certain parameters can significantly impact the performance and stability of your database. Make sure to test changes in a staging environment if possible.
  • Stay within limits: Ensure that your values respect the maximum and minimum bounds defined for each parameter.
  • Monitor after changes: After adjusting any parameter, monitor your database performance to ensure the changes have the desired effect.

By properly configuring these parameters, you can optimize your database for your specific workload and operational requirements.

4 - Create Datastore From Backup

In CCX, it is possible to create a new datastore from a backup. Supported databases: MySQL, MariaDB, PostgreSQL.

Select the backup you wish to restore in the Backup tab and select “Create datastore” from the action menu next to the backup. This process may take some time depending on the size of the backup. The new datastore will have the same name as the parent datastore, suffixed with _Copy.

This allows you to:

  • Create a datastore from a backup for development and testing purposes.
  • Investigate and analyze data without interfering with the production environment.

Limitations

Point-in-time recovery (PITR) is not supported yet.

5 - Database Db Management

This guide explains how to create, list, and manage databases within the CCX platform for both PostgreSQL and MySQL systems. Databases are not a concept in Redis, and creating databases is not supported in Microsoft SQL Server.

Listing Existing Databases

Once databases are created, you can view the list of databases in the Databases tab.

  • The Database Name column shows the names of the databases.
  • The Size column displays the size of the database.
  • The Tables column indicates the number of tables within each database.

List Databases

  • For MySQL, the database list will appear similar, with columns for database name, size, and tables.

List MySQL Databases

Creating a New Database

To create a new database in the CCX platform:

note:

  • PostgreSQL Database Owner: When creating a database in PostgreSQL, ensure that a valid user is selected as the owner of the database.
  • MySQL Database Management: MySQL database creation does not require specifying an owner, but all other functions (listing, deleting) remain similar.
  1. Navigate to the Databases Tab:

    • Click on the Databases section from the main dashboard.
  2. Click on Create New Database:

    • A form will appear asking for the following details:
      • Database Name: The name of the new database.
      • DB Owner: The user who will own the database (applicable to PostgreSQL).

    Create Database

  3. Submit the Form:

    • After filling in the necessary information, click Create to create the new database.
  4. MySQL Database Creation:

    • For MySQL, the owner field is not required. You only need to specify the database name.

    MySQL Create Database

Dropping a Database

note:

  • MySQL/MariaDB database locks / metadata locks: The DROP DATABASE will hang if there is a metadata lock on the database or on a table/resource in the database. Use SHOW PROCESSLIST in the mysql client to identify the lock. Either release the lock, KILL the connection, or wait for the lock to be released.

To delete or drop a database:

  1. Locate the Database:

    • In the Databases tab, find the database you want to delete.
  2. Click the Delete Icon:

    • Click on the red delete icon next to the database entry.
    • A confirmation dialog will appear asking if you are sure about dropping the database.

    Drop Database

  3. Confirm Deletion:

    • Click OK to proceed. WARNING: All data in the database will be lost.

Troubleshooting

Drop database hangs; the icon keeps spinning in the frontend.

Check if there are locks preventing the database from being deleted.

  • In MySQL, the DROP DATABASE will hang if there is a metadata lock on the database or a table/resource in the database. Use SHOW PROCESSLIST in the mysql/mariadb client to identify the lock. Either release the lock, KILL the connection, or wait for the lock to be released.
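
A minimal sketch of that procedure in the mysql client (the process id 42 is a hypothetical value read from the SHOW PROCESSLIST output):

```sql
-- Find the connection holding the metadata lock
SHOW PROCESSLIST;
-- Terminate the blocking connection (42 is the id from the output above)
KILL 42;
-- The pending DROP DATABASE can now proceed
```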

6 - Database User Management

CCX allows you to create admin users. These users can in turn be used to create database users with lesser privileges. Privileges and implementation are specific to the type of database. Admin users can be created for the following databases:

  • PostgreSQL
  • MySQL
  • MariaDb
  • Valkey
  • Cache22
  • Microsoft SQL Server

List database users

To list database users, navigate to the Users tab:

List Database User

Creating an Admin User

To create a new admin user, follow these steps:

  1. Navigate to Users Tab:

    • Go to the Users section from the main dashboard.
  2. Click on Create Admin User:

    The MySQL interface is described below, but the interface is similar for the other database types.

    • A form will appear prompting you to enter the following details:
    • Username: Specify the username for the new admin.
    • Password: Enter a strong password for the admin user.
    • Database Name: Select or specify the database this user will be associated with.
    • Authentication Plugin: Choose the authentication method for the user. Available options:
      • caching_sha2_password (default)
      • mysql_native_password (for MySQL compatibility)

    Create Admin User

Deleting a database user

Delete User: To delete a user, click on the red delete icon beside the user entry. A confirmation dialog will appear before the user is removed.

Delete User Confirmation

Connection assistant

CCX provides a Connection Assistant to help configure connection strings for your database clients.

Connection assistant

  1. Configure Database User and Database Name:

    • Select the database user and the database name.
    • Choose the Endpoint type (Primary or Replica).
  2. Connection String Generation:

    • Based on the selected options, a connection string is generated for various technologies, including:
      • JDBC
      • ODBC
      • Python (psycopg2)
      • Node.js (pg)
  3. Example:

    String url = "jdbc:postgresql://<host>:<port>/<dbname>?verifyServerCertificate=true&useSSL=true&requireSSL=true";
    myDbConn = DriverManager.getConnection(url, "<username>", "<password>");
    

7 - Datastore Settings

In the Settings section of CCX, there are two primary configuration options: General and DB Parameters.

The General settings section allows you to configure high-level settings for your datastore. This may include basic configurations such as system name, storage options, and general system behavior.

The DB Parameters section is used for fine-tuning your database. Here, you can adjust specific database settings such as memory allocation, query behavior, or performance-related parameters. These settings allow for a deeper level of control and optimization of the datastore for your specific workload.

Database Parameters

Please see Configuration management.

Changing the Datastore Name in CCX

The Datastore Name in CCX is an identifier for your datastore instance, and it is important for proper organization and management of multiple datastores. The name can be set when creating a datastore or changed later to better reflect its purpose or environment.

img

Notifications in CCX

Introduced in v.1.50.

The Notifications feature in CCX allows you to configure email alerts for important system events. These notifications help ensure that you are aware of critical events happening within your environment, such as when the disk space usage exceeds a certain threshold or when important jobs are started on the datastore.

img

To configure recipients of notification emails, simply enter the email addresses in the provided field. Multiple recipients can be added by separating each email with a semicolon (;).

If no email addresses are added, notifications will be disabled.

Key Notifications:

  • Disk Space Alerts: When disk usage exceeds 85%, a notification is sent to the configured recipients.
  • Job Alerts: Notifications are sent when significant jobs (such as data processing or backups) are initiated on the datastore.

This feature ensures that system administrators and key stakeholders are always up-to-date with the health and operations of the system, reducing the risk of unexpected issues.

Auto Scaling Storage Size in CCX

Introduced in v.1.50.

CCX provides a convenient Auto Scaling Storage Size feature that ensures your system never runs out of storage capacity unexpectedly. By enabling this feature, users can automatically scale storage based on usage, optimizing space management.

img

When Auto Scale is turned ON, the system will automatically increase the storage size by 20% when the used space exceeds 85% of the allocated storage. This proactive scaling ensures that your system maintains sufficient space for operations, preventing service interruptions due to storage constraints.
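
As a worked example of this policy (the numbers are hypothetical; integer arithmetic mirrors the documented thresholds):

```shell
allocated=80   # GB currently allocated
used=70        # GB currently used
pct=$((used * 100 / allocated))   # 87% used, above the 85% threshold
if [ "$pct" -gt 85 ]; then
  allocated=$((allocated * 120 / 100))   # grow by 20%: 80 GB -> 96 GB
fi
echo "$allocated"
```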

Key Benefits:

  • Automatic scaling by 20% when usage exceeds 85%.
  • Ensures consistent performance and reliability.
  • Eliminates the need for manual storage interventions.

This feature is especially useful for dynamic environments where storage usage can rapidly change, allowing for seamless growth as your data expands.

Authentication in CCX

Introduced in v.1.49.

The Authentication section in CCX allows users to download credentials and CA certificates, which are essential for securing communication between the system and external services or applications.

Credentials

The Credentials download provides the necessary authentication details, such as API keys, tokens, or certificates, that are used to authenticate your system when connecting to external services or accessing certain system resources. These credentials should be securely stored and used only by authorized personnel.

img

To download the credentials, simply click the Download button.

CA Certificate

The CA Certificate ensures secure communication by verifying the identity of external systems or services through a trusted Certificate Authority (CA). This certificate is critical when establishing secure connections like HTTPS or mutual TLS (mTLS).

To download the CA Certificate, click the Download button next to the CA Certificate section.

Security Considerations:

  • Keep credentials secure: After downloading, ensure the credentials and certificates are stored in a secure location and only accessible by authorized personnel.
  • Use encryption: Where possible, encrypt your credentials and certificates both at rest and in transit.
  • Regularly rotate credentials: To maintain security, periodically rotate your credentials and update any related system configurations.

This Authentication section is vital for maintaining a secure and trustworthy communication environment in your CCX setup.

8 - DBaaS with Terraform

Overview and examples of managing datastores in Elastx DBaaS using Terraform

Overview

This guide will help you get started with managing datastores in Elastx DBaaS using Terraform.
For this we will be using OAuth2 for authentication and the CCX Terraform provider. You can find more information about the latest CCX provider here.

Good To Know

  • Create/Destroy datastores supported.
  • Setting firewall rules supported.
  • Setting database parameter values supported.
  • Scale out/in nodes supported.
  • Create users and databases currently not supported.

DBaaS OAuth2 credentials

Before we get started with terraform, we need to create a new set of OAuth2 credentials.
In the DBaaS UI, go to your Account settings, select Authorization and choose Create credentials.

In the Create Credentials window, you can add a description and set an expiration date for your new OAuth2 credential.
The expiration date is based on the number of hours from when the credential was created. If left empty, the credential will not have an expiration date. You can, however, revoke and/or remove your credentials at any time.
When you’re done, select Create.

Create credential


Copy Client ID and Client Secret. We will be using them to authenticate to DBaaS with Terraform.
Make sure you’ve copied and saved the client secret before closing the popup window. The client secret cannot be obtained later and you will have to create a new one.

Copy credential


Terraform configuration

We’ll start by creating a new, empty file and adding the Client ID and Secret as variables, which will be exported and used for authentication later when we apply our Terraform configuration.
Add your Client ID and Client Secret.

#!/usr/bin/env bash

export CCX_BASE_URL="https://dbaas.elastx.cloud"
export CCX_CLIENT_ID="<client-id>"
export CCX_CLIENT_SECRET="<client-secret>"

Source your newly created credentials file.

source /path/to/myfile.sh

Terraform provider

Create a new terraform configuration file. In this example we create provider.tf and add the CCX provider.

terraform {
  required_providers {
    ccx = {
      source = "severalnines/ccx"
      version = "0.3.1"
    }
  }
}

Create your first datastore with Terraform

Create an additional Terraform configuration file and add your preferred datastore settings. In this example we create a configuration file named main.tf and specify that this is a single-node datastore with MariaDB.

resource "ccx_datastore" "elastx-dbaas" {
  name           = "my-terraform-datastore"
  db_vendor      = "mariadb"
  size           = "1"
  instance_size  = "v2-c2-m8-d80"
  volume_type    = "v2-1k"
  volume_size    = "80"
  cloud_provider = "elastx"
  cloud_region   = "se-sto"
  tags           = ["terraform", "elastx", "mariadb"]
}

Create primary/replica datastores with added firewall rules and database parameter values

This example builds upon the previous MariaDB example. Here we add a second node to create a primary/replica datastore. We also add firewall rules and set database parameter values. To see all available database parameters for your specific database type, log into the DBaaS UI and go to your specific datastore > Settings > DB Parameters.

resource "ccx_datastore" "elastx-dbaas" {
  name           = "my-terraform-datastore"
  db_vendor      = "mariadb"
  size           = "2"
  instance_size  = "v2-c2-m8-d80"
  volume_type    = "v2-1k"
  volume_size    = "80"
  cloud_provider = "elastx"
  cloud_region   = "se-sto"
  tags           = ["terraform", "elastx", "mariadb"]

# You can add multiple firewall rules here
  firewall {
    source       = "x.x.x.x/32"
    description  = "My Application"
  }

  firewall {
    source      = "x.x.x.x/32"
    description = "My database client"
  }

# Set your specific database parameter values here. Values should be comma-separated without spaces.
  db_params = {
    sql_mode = "STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER"
  }
 
}

Available options

Below you will find a table with available options you can choose from.

Resource Description
name Required - Sets the name for your new datastore.
db_vendor Required - Selects which database vendor you want to use. Available options: mysql, mariadb, redis and postgres. For a specific PostgreSQL version, see the db_version option.
instance_size Required - Here you select which flavor you want to use.
cloud_provider Required - Should be set to elastx.
cloud_region Required - Should be set to se-sto.
volume_type Recommended - This will create a volume as the default storage instead of the ephemeral disk that is included with the flavor. Select the volume type name for the type of volume you want to use. You can find the full list of available volume types here: ECP/OpenStack Block Storage.
volume_size Recommended - Required if volume_type is used. Minimum volume size requirement is 80GB.
db_version Optional - Only applicable to PostgreSQL. Selects the version of PostgreSQL you want to use. You can choose between 14 and 15. Defaults to 15 if not set.
firewall Optional - Inline block for adding firewall rules. Can be set multiple times.
db_params Optional - Inline block for setting specific database parameter values using: parameter=“values”. Values should be comma-separated.
tags Optional - Add additional tags.

9 - Deploy A Datastore

img

MySQL or MariaDB

MySQL 8.4 is recommended if you are migrating from an existing MySQL system. MariaDB 11.4 is recommended if you are migrating from an existing MariaDB system. MySQL 8.4 offers a more sophisticated privilege system, which makes database administration easier.

High-availability

MySQL and MariaDB offer two configurations for high availability.

  • Primary/replica (asynchronous replication)
  • Multi-primary (Galera replication)

Primary/replica is recommended for general-purpose use.

Scaling

MySQL and MariaDB can be created with one node (no high-availability) and can later be scaled with read-replicas or primaries (in the case of a multi-primary configuration).

PostgreSQL

PostgreSQL 15 and later support the following extensions by default:

  • PostGIS
  • pgvector

High-availability

High-availability is facilitated with PostgreSQL streaming replication.

Scaling

PostgreSQL can be created with one node (no high-availability) but can later be scaled with read-replicas.

Cache22 (aka Redis)

deprecated

Cache22 is an in-memory data structure store.

High-availability

High-availability is facilitated with Redis replication and Redis Sentinels.

Scaling

Redis can be created with one node (no high-availability) but can later be scaled with read-replicas.

Valkey

Valkey is an in-memory data structure store.

High-availability

High-availability is facilitated with Valkey replication and Valkey Sentinels.

Scaling

Valkey can be created with one node (no high-availability) but can later be scaled with read-replicas.

MSSQL Server

Microsoft SQL Server 2022. Special license restrictions apply, and this option may not be available in all CCX implementations.

10 - Event Viewer

The Event Viewer provides a detailed history of actions performed on the datastore. It tracks when changes were made, their status, who initiated the action, and a brief description of the action itself.

  • When: Timestamp indicating when the event occurred.
  • Status: The current status of the event (e.g., Finished for successfully completed tasks).
  • Initiated by: The user or process that initiated the action.
  • Description: A summary of the action performed.

Example Events:

Event viewer

The Event Viewer is essential for tracking the progress of tasks such as node scaling, promotions, and configuration updates. Each event is clearly labeled, providing users with transparency and insight into the state of their datastore operations.

11 - Firewall

This guide explains how to manage trusted sources and open ports within the firewall settings of the CCX platform. Only trusted sources are allowed to connect to the datastore.

A number of ports are open for each trusted source. One port is opened for the database service, and other ports are opened for metrics. This makes it possible to connect to and scrape the database nodes for metrics from a trusted source. The metrics are served using Prometheus exporters.

List Trusted Sources

Trusted sources can be managed from the Firewall tab. Only trusted sources are allowed to connect to the datastore. Here you can see:

  • Source: View the allowed IP addresses or ranges.
  • Description: Review the description of the source for identification.
  • Actions: Delete the source by clicking on the red trash icon.

Trusted Source List

Adding a Trusted Source

To allow connections from a specific IP address or range, you need to create a trusted source.

Click on Create Trusted Source:

  • A form will appear prompting you to enter the following details:
    • Source IP: Specify the IP address or CIDR range to allow. It is possible to specify a semicolon-separated list of CIDRs. If no CIDR is specified, then /32 is automatically added to the IP address.
    • Description: Add a description to identify the source (e.g., “My office”, “Data Center”).

After filling out the details, click Create to add the trusted source.
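As an illustration of how these inputs are interpreted (this is a sketch, not the CCX API), the following normalizes a source entry by splitting a semicolon-separated list and adding /32 to bare IP addresses, as described above:

```python
# Sketch only: mimics the described firewall input handling, not CCX code.
import ipaddress

def normalize_sources(entry: str) -> list[str]:
    """Split a semicolon-separated list and default bare IPs to /32."""
    result = []
    for item in entry.split(";"):
        item = item.strip()
        if "/" not in item:
            item += "/32"  # no CIDR given: treat as a single host
        # validate that the value is a well-formed network
        result.append(str(ipaddress.ip_network(item, strict=False)))
    return result

print(normalize_sources("10.0.0.1;192.168.0.0/24"))
# ['10.0.0.1/32', '192.168.0.0/24']
```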

Create Trusted Source

Viewing and Managing Trusted Sources

Managing Open Ports for Each Trusted Source

TLS access to exporter metrics is described in the TLS For Metrics section.

Each trusted source can have specific ports opened for services. To manage the ports:

  1. Expand a Trusted Source:

    • Click the down arrow beside the source IP to view the open ports.
  2. Port Configuration:

    • Port Number: The number of the open port (e.g., 9100, 5432).
    • Port Name: The name of the service associated with the port (e.g., node_exporter, postgres_exporter, service).

    The service port is the listening port of the database server. The node_exporter and database exporter (e.g., postgres_exporter) ports allow you to tap into observability metrics for the database nodes.

  3. Actions:

    • Delete a Port: Remove a port by clicking the red trash icon next to the port number.

Example Ports:

  • Port 9100: node_exporter

  • Port 9187: postgres_exporter

  • Port 5432: service

    Trusted Source List


Deleting Trusted Sources and Ports

Deleting a Trusted Source:

To remove a trusted source entirely, click on the red trash icon next to the source IP. This will remove the source and all associated ports.

Deleting an Individual Port:

To delete a specific port for a trusted source, click on the red trash icon next to the port number. This action will only remove the specific port.


This documentation covers the basic operations for managing firewall trusted sources and ports within the CCX platform. For further details, refer to the CCX platform’s official user manual or support.

12 - Logs Viewer

The Logs Viewer provides a comprehensive view of database logs for troubleshooting. It provides real-time access through the UI to essential logs, such as error logs and slow query logs.

  • Name: The file path or identifier of the log file.
  • When: The timestamp indicating the most recent update or entry in the log file.
  • Actions: Options to view or download the log file for further analysis.

Example Logs

Logs viewer


The Logs Viewer is a critical tool for system administrators, enabling real-time monitoring and investigation of log files. With clear timestamps and actionable options, it ensures efficient identification and resolution of issues to maintain the stability of datastore operations.

13 - Observability

Monitor DBaaS datastore metrics via either UI or remotely

Overview

DBaaS offers metrics monitoring via the UI and remotely.

In the UI, various metrics for both the databases and the nodes are presented under the datastore Monitor tab.

Remotely, it is possible to monitor using Prometheus and different exporters. The monitoring data is exposed through the exporters on each node in the datastore. Access is controlled under the Firewall tab in the DBaaS UI.

The ports available for the specific datastore configuration can be seen in the UI under the Firewall tab, in the entry for the specific IP address (unfold the arrow to the left of the IP address).


Exporter ports

Each exporter has its own port, used by Prometheus to scrape metrics.

| Exporter | TCP port |
| --- | --- |
| Node | 9100 |
| MySQL | 9104 |
| Postgres | 9187 |
| Redis | 9121 |
| MSSQL | 9399 |
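As an illustration, a minimal Prometheus scrape configuration targeting these exporter ports directly could look like the following sketch. The hostname db-node-1.example.com is a placeholder; your actual nodes and open ports are shown under the Firewall tab.

```yaml
scrape_configs:
  - job_name: "ccx-node"
    static_configs:
      - targets: ["db-node-1.example.com:9100"]   # node_exporter
  - job_name: "ccx-postgres"
    static_configs:
      - targets: ["db-node-1.example.com:9187"]   # postgres_exporter
```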

Sample visible metrics

The following tables are excerpts of metrics for the different exporters to quickly get started.

System - Hardware level metrics

| Statistic | Description |
| --- | --- |
| Load Average | The overall load on your Datastore within the preset period |
| CPU Usage | The breakdown of CPU utilisation for your Datastore, including both System and User processes |
| RAM Usage | The amount of RAM (in Gigabytes) used and available within the preset period |
| Network Usage | The amount of data (in Kilobits or Megabits per second) received and sent within the preset period |
| Disk Usage | The total amount of storage used (in Gigabytes) and what is available within the preset period |
| Disk IO | The input and output utilisation for your disk within the preset period |
| Disk IOPS | The number of read and write operations within the preset period |
| Disk Throughput | The amount of data (in Megabytes per second) that is being read from, or written to, the disk within the preset period |

MySQL / MariaDB

MySQL metrics reference

  • Handler Stats

    | Statistic | Description |
    | --- | --- |
    | Read Rnd | Count of requests to read a row based on a fixed position |
    | Read Rnd Next | Count of requests to read the next row in the data file |
    | Read Next | Count of requests to read the next row in key order |
    | Read Last | Count of requests to read the last key in an index |
    | Read Prev | Count of requests to read the previous row in key order |
    | Read First | Count of requests to read the first entry in an index |
    | Read Key | Count of requests to read a row based on an index key value |
    | Update | Count of requests to update a row |
    | Write | Count of requests to insert a row into a table |

  • Database Connections

    | Metric | Description |
    | --- | --- |
    | Threads Connected | Count of clients connected to the database |
    | Max Connections | Maximum number of connections allowed to the database |
    | Max Used Connections | Maximum number of connections in use |
    | Aborted Clients | Number of connections aborted because the client did not close the connection properly |
    | Aborted Connects | Number of failed connection attempts |
    | Connections | Number of connection attempts |

  • Queries
    • Count of queries executed
  • Scan Operations
    • Count of scan operations for SELECT, UPDATE and DELETE
  • Table Locking

    | Metric | Description |
    | --- | --- |
    | Table locks immediate | Count of table locks that could be granted immediately |
    | Table locks waited | Count of table lock requests that had to wait because of existing locks |

  • Temporary Tables

    | Metric | Description |
    | --- | --- |
    | Temporary tables | Count of temporary tables created |
    | Temporary tables on disk | Count of temporary tables created on disk rather than in memory |

  • Aborted Connections

    | Metric | Description |
    | --- | --- |
    | Aborted Clients | Number of connections aborted because the client did not close the connection properly |
    | Aborted Connects | Number of failed connection attempts |
    | Access Denied Errors | Count of unsuccessful authentication attempts |

PostgreSQL

PostgreSQL metrics reference

| Metric | Description |
| --- | --- |
| SELECT (fetched) | Count of rows fetched by queries to the database |
| SELECT (returned) | Count of rows returned by queries to the database |
| INSERT | Count of rows inserted into the database |
| UPDATE | Count of rows updated in the database |
| DELETE | Count of rows deleted from the database |
| Active Sessions | Count of currently running queries |
| Idle Sessions | Count of connections to the database that are not currently in use |
| Idle Sessions in transaction | Count of connections that have begun a transaction but are not actively doing work |
| Idle Sessions in transaction (aborted) | Count of connections whose transaction was forcefully aborted before it could complete |
| Lock tables | Active locks on the database |
| Checkpoints requested and timed | Count of checkpoints requested and scheduled |
| Checkpoint sync time | Time spent synchronising checkpoint files to disk |
| Checkpoint write time | Time spent writing checkpoints to disk |

Redis

Redis metrics reference

| Metric | Description |
| --- | --- |
| Blocked Clients | Clients blocked while waiting on a command to execute |
| Memory Used | Amount of memory used by Redis (in bytes) |
| Connected Clients | Count of clients connected to Redis |
| Redis commands per second | Count of commands processed per second |
| Total keys | The total count of all keys stored by Redis |
| Replica Lag | The lag (in seconds) between the primary and the replica(s) |
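To get started with the scraped data, here is a minimal sketch of parsing Prometheus text-format output from an exporter. The sample metric names are illustrative, and label handling is omitted for brevity:

```python
# Sketch: parse Prometheus text-format exporter output into a dict.
# The sample lines are illustrative; actual metric names come from the
# exporter itself (e.g. redis_exporter).
SAMPLE = """\
# HELP redis_connected_clients Number of client connections
# TYPE redis_connected_clients gauge
redis_connected_clients 7
redis_memory_used_bytes 1048576
"""

def parse_metrics(text: str) -> dict[str, float]:
    metrics = {}
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue  # skip blank lines and HELP/TYPE metadata
        name, value = line.rsplit(" ", 1)
        metrics[name] = float(value)
    return metrics

print(parse_metrics(SAMPLE)["redis_connected_clients"])  # 7.0
```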

14 - Parameter Group

Introduced in v.1.51

Parameter Groups is a powerful new feature introduced in version 1.51 of CCX. It enables users to manage and fine-tune database parameters within a group, simplifying configuration and ensuring consistency across datastores.

Overview

With Parameter Groups, users can:

  • Create new parameter groups with customized settings.
  • Assign parameter groups to specific datastores.
  • Edit and update parameters within a group.
  • Delete unused parameter groups.
  • Automatically synchronize parameter changes with associated datastores.

note:

A datastore can only be associated with one parameter group at a time. Changes to parameters are automatically propagated to all associated datastores.


Features

1. Creating a Parameter Group

Users can create a new parameter group to define custom configurations for their databases.

Steps to Create a New Parameter Group:

  1. Navigate to the DB Parameters section.
  2. Click on the + Create new group button.
  3. Fill in the required details:
    • Group Name: A unique name for the parameter group.
    • Description: A brief description of the group.
    • Vendor: Select the database type (e.g., MySQL, PostgreSQL, Redis).
    • Version: Specify the database version.
    • Configuration: Choose the type of configuration (e.g., Primary/Replica).
  4. Customize the parameter values as needed.
  5. Click Create to save the new group.

Create a parameter group


2. Assigning a Parameter Group to a Datastore

Once created, parameter groups can be assigned to datastores to apply the defined settings. A parameter group can be assigned to an existing datastore, or when a datastore is created.

Steps to Assign a Parameter Group in the Deployment wizard:

  1. Open the Create datastore wizard
  2. In the Configuration step, press Advanced and select the parameter group under DB Settings.

Assign a parameter group to the datastore

note:

At least one parameter group matching the vendor, version, and configuration must exist.

Steps to Assign a Parameter Group to an existing datastore:

  1. Navigate to the datastore you want to configure.
  2. Go to the DB Parameters tab.
  3. Click Change group or Assign group.
  4. Select the desired parameter group from the dropdown.
  5. Click Save to apply the group to the datastore.

The system will display the synchronization status (e.g., Pending or Synced) after assigning the group.

Assign a parameter group to datastore


3. Viewing and Managing Parameter Groups

Users can view all parameter groups in the DB Parameters section. For each group, the following details are displayed:

  • Group Name
  • Vendor and Version
  • Datastores: Associated datastores.
  • Description

View parameter groups

From this view, users can:

  • Edit: Modify the group’s parameters.
  • Duplicate: Create a copy of the group.
  • Delete: Remove the group.

Parameter group actions


4. Editing a Parameter Group

Parameter groups can be updated to reflect new configurations. Any changes are automatically synchronized with associated datastores.

Steps to Edit a Parameter Group:

  1. Navigate to the DB Parameters section.
  2. Click on the three-dot menu next to the group you want to edit.
  3. Select Edit.
  4. Update the parameter values as needed.
  5. Click Save.

5. Deleting a Parameter Group

Unused parameter groups can be deleted to maintain a clean configuration environment.

Steps to Delete a Parameter Group:

  1. Navigate to the DB Parameters section.
  2. Click on the three-dot menu next to the group you want to delete.
  3. Select Delete.
  4. Confirm the deletion.

note:

A parameter group cannot be deleted if it is assigned to a datastore.


6. Synchronization

Once a parameter group is assigned to a datastore, the parameters are automatically synchronized. The status of synchronization (e.g., Pending or Synced) is visible in the DB Parameters tab of the datastore, and also in the Event Viewer.

sync parameter groups


Best Practices

  • Use Descriptive Names: Give parameter groups clear, descriptive names to make them easily identifiable.
  • Regular Updates: Regularly review and update parameter groups to optimize database performance.
  • Monitor Sync Status: Always verify that parameter changes are properly synced to the datastores.

Conclusion

Parameter Groups in CCX provide a centralized and efficient way to manage database configurations. By grouping parameters and syncing them to datastores, users can ensure consistency, reduce manual errors, and improve overall system performance.

15 - Promote A Replica

You may want to promote a replica to become the new primary. For instance, if you’ve scaled up with a larger instance, you might prefer to designate it as the primary. Alternatively, if you’re scaling down, you may want to switch to a smaller configuration for the primary node.

In the Nodes view, select the Promote Replica action from the action menu next to the replica you wish to promote:

Promote replica

In this example, the replica with an instance size of ‘medium’ will be promoted to the new primary.

A final confirmation screen will appear, detailing the steps that will be performed:

Promotion confirmation

16 - Reboot A Node

Introduced in v.1.51

The reboot command is found under the action menu of a database node, on the Nodes page.

Reboot node

Selecting “Reboot” triggers a restart of the chosen node. Use this option when:

  • the node needs to be refreshed due to performance issues
  • maintenance is required
  • a changed parameter value in a parameter group only takes effect after a reboot

danger:

  • Ensure all tasks linked with the node are concluded before initiating a reboot to prevent data loss.
  • Only authorized personnel should perform actions within the administration panel to maintain system integrity.

note:

  • Rebooting may cause temporary unavailability.
  • In Valkey, the primary may fail over to a secondary if the reboot takes more than 30 seconds.

17 - Restore Backup

The Backup and Restore feature provides users with the ability to create, view, and restore backups for their databases. This ensures data safety and allows recovery to previous states if necessary.

Backup List View

In the Backup tab, users can view all the backups that have been created. The table provides essential information about each backup, such as:

  • Method: The tool or service used to perform the backup (e.g., mariabackup).
  • Type: The type of backup (e.g., full backup).
  • Status: The current state of the backup (e.g., Completed).
  • Started: The start time of the backup process.
  • Duration: How long the backup process took.
  • Size: The total size of the backup file.
  • Actions: Options to manage or restore backups.

Example Backup Table

Backup table

Users can manage their backups using the “Actions” menu, where options such as restoring a backup are available.

Backup Schedules View

The Backup Schedules view allows users to manage scheduled backups for their datastore. Users can configure automatic backup schedules to ensure data is periodically saved without manual intervention.

Backup Schedule Table

The schedule table shows the details of each scheduled backup, including:

  • Method: The tool or service used to perform the backup (e.g., mariabackup).
  • Type: The type of backup, such as incremental or full.
  • Status: The current state of the scheduled backup (e.g., Active).
  • Created: The date and time when the backup schedule was created.
  • Recurrence: The schedule’s frequency, showing the cron expression used for the schedule (e.g., TZ=UTC 5 * * *).
  • Action: Options to manage the schedule, such as Pause or Edit.

Example Backup Schedule Table:

Backup Schedule Options

Managing Backup Schedules

The Action menu next to each schedule allows users to:

  • Pause: Temporarily stop the backup schedule.
  • Edit: Adjust the backup schedule settings, such as its frequency or time.

Editing a Backup Schedule

When editing a backup schedule, users can specify:

  • Frequency: Choose between Hourly or Daily backups.
  • Time: Set the exact time when the backup will start (e.g., 05:00 UTC).

For example, in the Edit Full Backup Schedule dialog, you can configure a full backup to run every day at a specified time. Adjust the settings as needed and click Save to apply the changes.
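For illustration, the next run of a daily schedule can be computed as in this sketch (times are in UTC, matching the schedules above; the function is hypothetical, not part of CCX):

```python
# Sketch: compute the next run of a daily backup scheduled at 05:00 UTC.
from datetime import datetime, timedelta, timezone

def next_daily_run(now: datetime, hour: int = 5) -> datetime:
    """Next occurrence of hour:00 UTC at or after `now`."""
    candidate = now.replace(hour=hour, minute=0, second=0, microsecond=0)
    if candidate <= now:
        candidate += timedelta(days=1)  # today's slot already passed
    return candidate

now = datetime(2024, 10, 3, 6, 30, tzinfo=timezone.utc)
print(next_daily_run(now))  # 2024-10-04 05:00:00+00:00
```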

Example Backup Schedule Edit Dialog:

Edit Full Backup Schedule

This dialog allows you to easily adjust backup intervals, ensuring that backups align with your operational needs.

note:

Editing or pausing a schedule will not affect the current backups already created. The changes will only apply to future backups.

Restore Backup

To restore a backup, navigate to the Backup tab, find the desired backup, and select the Restore action from the Actions menu. This opens the restore dialog, where the following information is displayed:

  • Backup ID: The unique identifier of the backup.
  • Type: The type of backup (e.g., full backup).
  • Size: The total size of the backup file.

Restore Settings

  • Use Point in Time Recovery: Option to enable point-in-time recovery for finer control over the restore process. PITR is only supported by PostgreSQL, MySQL/MariaDB, and SQL Server.

By default, this option is turned off, allowing a full restoration from the selected backup.

Confirmation

Before initiating the restore, users are presented with a confirmation dialog:

You are going to restore a backup
You are about to restore a backup created on 03/10/2024 05:00 UTC.
This process will completely overwrite your current data, and all changes since your last backup will be lost.

Users can then choose to either Cancel or proceed with the Restore.

Example Restore Dialog:

Restore dialog

This ensures that users are fully aware of the potential data loss before proceeding with the restore operation.

18 - Scale A Datastore

This section explains how to scale a datastore, including:

  • Scaling volumes
  • Scaling nodes (out, in, up and down)

A datastore can be scaled out to meet growing demands. Scaling out involves adding:

  • One or more replica nodes (for primary/replica configurations). This is also useful when you want the primary node to have more resources, such as additional CPU cores and RAM: add a larger replica and later promote it to primary.
  • One or more primary nodes (for multi-primary configurations). In multi-primary setups, scaling up or down must maintain an odd number of nodes to preserve quorum and the consensus protocol required by the database.

The instance type of the new nodes may differ from the current ones.

To scale a datastore, navigate to the Nodes page and select Nodes Configuration.

Scaling nodes

Scaling Up or Down, In or Out

Use the slider to adjust the datastore’s new size. In this example, we have two nodes (one primary and one replica), and we want to scale up to four nodes. You can also specify the availability zones and instance sizes for the new nodes. Later, you might choose to promote one of the replicas to be the new primary. To proceed with scaling, click Save and wait for the scaling job to complete.

Scaling from 2 to 4 nodes

Scaling Down

You can also scale down by removing replicas or primary nodes (in a multi-primary configuration). In the Nodes Configuration view, select the nodes you wish to remove, then click Save to begin the scaling process. This allows you to reduce the size of the datastore or remove nodes with unwanted instance sizes.

Scaling down to 2 nodes

Scaling Volumes

To scale storage, go to the Nodes tab and select Scale Storage. You can extend the storage size, but it cannot be reduced. All nodes in the datastore will have their storage scaled to the new size.

19 - Terraform Provider

The CCX Terraform provider allows you to create datastores on all supported clouds. The CCX Terraform provider project is hosted on GitHub.

Oauth2 credentials

Oauth2 credentials are used to authenticate the CCX Terraform provider with CCX. You can generate these credentials on the Account page, under the Authorization tab.

Create creds

And then you will see:

Created creds

Requirement

  • Terraform 0.13.x or later

Quick Start

  1. Create Oauth2 credentials.
  2. Create a terraform.tf file.
  3. Set client_id and client_secret. Below is an example terraform.tf file:
```terraform
terraform {
  required_providers {
    ccx = {
      source  = "severalnines/ccx"
      version = "~> 0.4.7"
    }
  }
}

provider "ccx" {
  client_id     = "your_client_id"
  client_secret = "your_client_secret"
}
```

Now you can create resources using the following Terraform code.
Here is an example of a parameter group:

```terraform
resource "ccx_parameter_group" "asteroid" {
    name = "asteroid"
    database_vendor = "mariadb"
    database_version = "10.11"
    database_type = "galera"

    parameters = {
      table_open_cache = 8000
      sql_mode = "STRICT_TRANS_TABLES,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
    }
}
```

This group can then be associated with a datastore as follows:

```terraform
resource "ccx_datastore" "luna_mysql" {
	name           = "luna_mysql"
	size           = 3
	type           = "replication"
	db_vendor      = "mysql"
	tags           = ["new", "test"]
	cloud_provider = "CCX_CLOUD"
	cloud_region   = "CCX-REGION-1"
	instance_size  = "MEGALARGE"
	volume_size    = 80
	volume_type    = "MEGAFAST"
	parameter_group = ccx_parameter_group.asteroid.id
}
```

Replace CCX_CLOUD, CCX-REGION-1, MEGALARGE, and MEGAFAST with actual values depending on the cloud infrastructure available.

For more information and examples, visit the [terraform-provider-ccx](https://github.com/severalnines/terraform-provider-ccx) github page.

## More on parameter groups
Only one parameter group can be used at any given time by a datastore.
Also, you cannot change an existing parameter group from Terraform.
If you want to change an existing parameter group, you need to create a new parameter group:
```terraform
resource "ccx_parameter_group" "asteroid2" {
    name = "asteroid2"
    database_vendor = "mariadb"
    database_version = "10.11"
    database_type = "galera"

    parameters = {
      table_open_cache = 7000
      sql_mode = "NO_ENGINE_SUBSTITUTION"
    }
}
```
And then reference it in:
```terraform
resource "ccx_datastore" "luna_mysql" {
	name           = "luna_mysql"
  ... <same as before>
	parameter_group = ccx_parameter_group.asteroid2.id
}
```
Now you can apply this with Terraform. Always test configuration changes on a test system first to be sure they work as expected.

## Features
The following settings can be updated:

- Add and remove nodes
- Volume type
- Volume size
- Notifications
- Maintenance time
- Modify firewall (add/remove) entries. Multiple entries can be specified with a comma-separated list.

### Limitations

- Changing an existing parameter group is not possible after initial creation; however, you can create a new parameter group and reference that.
- It is not possible to change instance type.
- Changing availability zone is not possible.

20 - TLS For Metrics

Overview

To enhance security, using TLS for accessing metrics is recommended. This document outlines how metrics are served securely over TLS for each exporter. Each node typically has a Node Exporter and a corresponding database-specific exporter to provide detailed metrics. Access to these metrics is limited to the sources specified in Firewall Management.

Service discovery

A service discovery endpoint is created for each datastore (available from CCX v1.53 onwards).

It is available at https://<ccxFQDN>/metrics/<storeID>/targets and implements the Prometheus HTTP SD endpoint.

note:

<ccxFQDN> is the domain you see in your address bar with CCX UI open, not a datastore URL or a connection string. We’ll use ccx.example.com hereafter.

Here is an example of a scrape config for Prometheus:

scrape_configs:
  - job_name: 'my datastore'
    http_sd_configs:
      - url: 'https://ccx.example.com/metrics/50e4db2a-85cd-4190-b312-e9e263045b5b/targets'
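The endpoint's response follows the Prometheus HTTP SD format: a JSON list of target groups. Here is a sketch of consuming such a response; the sample payload is illustrative, and the real targets and labels come from your datastore's endpoint:

```python
# Sketch: parse a Prometheus HTTP SD response (a JSON list of target
# groups, each with "targets" and optional "labels").
import json

SAMPLE = """
[
  {"targets": ["db-node-1:9100", "db-node-1:9187"],
   "labels": {"datastore": "my datastore"}}
]
"""

groups = json.loads(SAMPLE)
targets = [t for group in groups for t in group["targets"]]
print(targets)  # ['db-node-1:9100', 'db-node-1:9187']
```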

Individual Metrics Endpoints Format

Metrics for each exporter are served at:

https://ccx.example.com/metrics/<storeID>/<nodeName>/<exporterType>

Where <nodeName> is the node's short host name, not the full FQDN.

Exporter Type Examples:

  1. MSSQL:

    • URL: https://ccx.example.com/metrics/<storeID>/<nodeName>/mssql_exporter
  2. Redis:

    • URL: https://ccx.example.com/metrics/<storeID>/<nodeName>/redis_exporter
  3. PostgreSQL:

    • URL: https://ccx.example.com/metrics/<storeID>/<nodeName>/postgres_exporter
  4. MySQL:

    • URL: https://ccx.example.com/metrics/<storeID>/<nodeName>/mysqld_exporter
  5. MariaDB:

    • URL: https://ccx.example.com/metrics/<storeID>/<nodeName>/mysqld_exporter
  6. NodeExporter:

    • URL: https://ccx.example.com/metrics/<storeID>/<nodeName>/node_exporter

By serving metrics over HTTPS with TLS, we ensure secure monitoring access for customers.

21 - Upgrade Lifecycle Mgmt

CCX will keep your system updated with the latest security patches for both the operating system and the database software.

You will be informed when there is a pending update and you have two options:

  • Apply the update now
  • Schedule a time for the update

The update will be performed using a roll-forward upgrade algorithm:

  1. The oldest replica (or the primary if no replica exists) is selected first.
  2. A new node with the same specification as the oldest node is added and joins the datastore.
  3. The oldest node is removed.
  4. Steps 1-3 are repeated until all replicas (or primaries in the case of a multi-primary setup) are updated.
  5. In a primary-replica configuration, the primary is updated last: a new node is added, promoted to become the new primary, and the old primary is removed.
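The roll-forward order can be sketched as a simple simulation (node names are hypothetical; the promotion of the new primary at the end is omitted for brevity):

```python
# Illustrative simulation of the roll-forward upgrade order: each node,
# oldest first, is replaced by an upgraded counterpart.
def roll_forward(nodes: list[str]) -> list[str]:
    steps = []
    for old in nodes:                         # nodes ordered oldest -> newest
        steps.append(f"add {old}-upgraded")   # new node joins the datastore
        steps.append(f"remove {old}")         # old node is removed
    return steps

print(roll_forward(["replica-1", "replica-2"]))
# ['add replica-1-upgraded', 'remove replica-1',
#  'add replica-2-upgraded', 'remove replica-2']
```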

upgrade

Upgrade now

This option will start the upgrade now.

Scheduled upgrade

The upgrade will start at a time (in UTC) and on a weekday that suits the application. Please note that for primary-replica configurations, the update will cause the current primary to change.

Upgrade database major version

To upgrade the database major version, e.g. from MariaDB 10.6 to 10.11, you need to create a new datastore from a backup; alternatively, take a mysqldump or pg_dump and apply it to your new datastore.

22 - Connect Kubernetes with DBaaS

Overview on what is needed to connect Kubernetes with DBaaS

Overview

To connect your Kubernetes cluster with DBaaS, you need to allow the external IP addresses of your worker nodes, including reserved IP addresses, in the DBaaS UI's firewall. You can find the reserved IPs in your cluster's OpenStack project, or ask support for help.

Get your worker nodes' external IPs with the CLI tool kubectl:

kubectl get nodes -o wide

NAME                                            STATUS   ROLES           AGE    VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
company-stage1-control-plane-1701435699-7s27c   Ready    control-plane   153d   v1.28.4   10.128.0.40    <none>        Ubuntu 22.04.3 LTS   5.15.0-88-generic   containerd://1.7.6
company-stage1-control-plane-1701435699-9spjg   Ready    control-plane   153d   v1.28.4   10.128.1.160   <none>        Ubuntu 22.04.3 LTS   5.15.0-88-generic   containerd://1.7.6
company-stage1-control-plane-1701435699-wm8pd   Ready    control-plane   153d   v1.28.4   10.128.3.13    <none>        Ubuntu 22.04.3 LTS   5.15.0-88-generic   containerd://1.7.6
company-stage1-worker-sto1-1701436487-dwr5f     Ready    <none>          153d   v1.28.4   10.128.3.227   1.2.3.5       Ubuntu 22.04.3 LTS   5.15.0-88-generic   containerd://1.7.6
company-stage1-worker-sto2-1701436613-d2wgw     Ready    <none>          153d   v1.28.4   10.128.2.180   1.2.3.6       Ubuntu 22.04.3 LTS   5.15.0-88-generic   containerd://1.7.6
company-stage1-worker-sto3-1701437761-4d9bl     Ready    <none>          153d   v1.28.4   10.128.0.134   1.2.3.7       Ubuntu 22.04.3 LTS   5.15.0-88-generic   containerd://1.7.6

Copy the external IP of each worker node; in this case, the three nodes with ROLES <none>.

In the DBaaS UI, go to Datastores -> Firewall -> Create trusted source and add the external IP with CIDR notation /32 for each IP address (e.g. 1.2.3.5/32).
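As a convenience sketch (not an official tool), the worker external IPs can be pulled out of the kubectl output and formatted as /32 trusted-source entries; the sample output below is abbreviated:

```python
# Sketch: extract worker external IPs from `kubectl get nodes -o wide`
# output and format them as /32 trusted-source entries.
SAMPLE = """\
NAME      STATUS ROLES         AGE  VERSION INTERNAL-IP  EXTERNAL-IP
cp-1      Ready  control-plane 153d v1.28.4 10.128.0.40  <none>
worker-1  Ready  <none>        153d v1.28.4 10.128.3.227 1.2.3.5
worker-2  Ready  <none>        153d v1.28.4 10.128.2.180 1.2.3.6
"""

def worker_cidrs(kubectl_output: str) -> list[str]:
    cidrs = []
    for line in kubectl_output.splitlines()[1:]:  # skip the header row
        fields = line.split()
        external_ip = fields[6]                   # EXTERNAL-IP column
        if external_ip != "<none>":
            cidrs.append(external_ip + "/32")
    return cidrs

print(worker_cidrs(SAMPLE))  # ['1.2.3.5/32', '1.2.3.6/32']
```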