Setup High Availability For ProxySQL via KeepAlived In AWS

Usually applications don’t connect directly to Percona XtraDB Cluster, but go through a proxy – ProxySQL, for instance. However, if only one proxy node is used, it becomes a single point of failure.

Not long ago Marco Tusa wrote about how to configure two ProxySQL nodes in front of an XtraDB Cluster. If deployed on EC2 instances, that setup doesn’t work as-is, because Amazon doesn’t allow assigning a secondary IP address to an interface in the same way.

This post describes how to configure highly available ProxySQL with keepalived, proxysql-tools and AWS Elastic Network Interface (ENI).
The application connects to a single virtual IP (VIP). The VIP is assigned to an ENI, which is managed by keepalived. proxysql-tools moves the ENI between ProxySQL instances and monitors the health of the XtraDB Cluster nodes.
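
In this setup the application would connect through the VIP to ProxySQL’s MySQL listener, for example like this (the ENI address and the proxy_user account are set up later in this post; 6033 is ProxySQL’s default MySQL port, so adjust it if your mysql-interfaces setting differs):

    mysql -h 172.31.26.237 -P 6033 -u proxy_user -p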

Setup Percona XtraDB Cluster

    Node #1: 172.31.8.51
    Node #2: 172.31.12.128
    Node #3: 172.31.3.159

The XtraDB Cluster setup is pretty straightforward and not much different from any other case.

    wget https://repo.percona.com/apt/percona-release_0.1-4.$(lsb_release -sc)_all.deb
    sudo dpkg -i percona-release_0.1-4.$(lsb_release -sc)_all.deb
    sudo apt-get update
    sudo apt-get install percona-xtradb-cluster-57
    sudo service mysql stop

The MySQL configuration file is the same on all nodes except for wsrep_node_address; here it is for the first node:

    cat /etc/mysql/my.cnf
    !includedir /etc/mysql/conf.d/
    [mysqld]
    datadir=/var/lib/mysql
    user=mysql
    wsrep_provider=/usr/lib/libgalera_smm.so
    wsrep_cluster_address=gcomm://172.31.8.51,172.31.12.128,172.31.3.159
    binlog_format=ROW
    default_storage_engine=InnoDB
    innodb_autoinc_lock_mode=2
    wsrep_node_address=172.31.8.51
    wsrep_sst_method=xtrabackup-v2
    wsrep_cluster_name=cluster_1
    wsrep_sst_auth="sstuser:s3cretPass"

After this step, we can bootstrap our cluster. I did this with the following command:

    /etc/init.d/mysql bootstrap-pxc
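
The wsrep_sst_auth option above references an SST user, which has to exist on the bootstrapped node before the other nodes try to join. A minimal sketch with the privileges the xtrabackup-v2 SST method typically needs:

    mysql@node1> CREATE USER 'sstuser'@'localhost' IDENTIFIED BY 's3cretPass';
    mysql@node1> GRANT RELOAD, LOCK TABLES, PROCESS, REPLICATION CLIENT ON *.* TO 'sstuser'@'localhost';
    mysql@node1> FLUSH PRIVILEGES;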

When the first node has been started, the cluster status can be checked with:

    mysql> show status like 'wsrep%';
    +----------------------------+--------------------------------------+
    |          Variable_name     |                                Value |
    +----------------------------+--------------------------------------+
    |     wsrep_local_state_uuid | cc00ee27-3433-11e7-84e5-4a9beaabe9c9 |
    ...
    |          wsrep_local_state |                                    4 |
    |  wsrep_local_state_comment |                               Synced |
    ...
    |         wsrep_cluster_size |                                    1 |
    |       wsrep_cluster_status |                              Primary |
    |            wsrep_connected |                                   ON |
    ...
    |                wsrep_ready |                                   ON |
    +----------------------------+--------------------------------------+

When all nodes are running, wsrep_cluster_size will be equal to the number of nodes in the cluster; a sketch of starting the remaining nodes follows below. After that, cluster setup is complete.
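
A minimal sketch, assuming the remaining nodes use the configuration shown earlier (each joining node performs an SST from the running cluster):

    # on node #2 and node #3
    sudo service mysql start

    # on any node, verify that all three nodes have joined
    mysql> show status like 'wsrep_cluster_size';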

Setup ProxySQL and KeepAlived

    ProxySQL active: 172.31.24.212
    ProxySQL passive: 172.31.19.155
    ENI address: 172.31.26.237

It’s worth noting that the ProxySQL instances and the ENI must be in the same subnet.

First, we must configure our ProxySQL instances to proxy requests to the cluster nodes.

    apt install proxysql
    apt-get install percona-xtradb-cluster-client-5.7

Now, we’ll set up ProxySQL to work with our Galera nodes. I will use the default hostgroup (0).

    service proxysql start
    mysql -u admin -p -h 127.0.0.1 -P 6032
    mysql@proxysql1> INSERT INTO mysql_servers(hostgroup_id, hostname, port) VALUES (0,'172.31.8.51',3306);
    mysql@proxysql1> INSERT INTO mysql_servers(hostgroup_id, hostname, port) VALUES (0,'172.31.12.128',3306);
    mysql@proxysql1> INSERT INTO mysql_servers(hostgroup_id, hostname, port) VALUES (0,'172.31.3.159',3306);
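
A quick sanity check that the entries made it into ProxySQL’s configuration database (run on the same admin interface):

    mysql@proxysql1> SELECT hostgroup_id, hostname, port, status FROM mysql_servers;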

Next, we must create a user for monitoring Percona XtraDB Cluster nodes in ProxySQL. You can do it on any cluster node.

    mysql@node1> CREATE USER 'monitor'@'%' IDENTIFIED BY '*****';
    mysql@node1> GRANT USAGE ON *.* TO 'monitor'@'%';

Update the monitoring settings in ProxySQL:

    mysql@proxysql1> UPDATE global_variables SET variable_value='monitor' WHERE variable_name='mysql-monitor_username';
    mysql@proxysql1> UPDATE global_variables SET variable_value='*****' WHERE variable_name='mysql-monitor_password';
    mysql@proxysql1> LOAD MYSQL VARIABLES TO RUNTIME;
    mysql@proxysql1> SAVE MYSQL VARIABLES TO DISK;

After these changes, let’s make sure ProxySQL is able to ping the nodes:

    mysql@proxysql1> SELECT * FROM monitor.mysql_server_ping_log ORDER BY time_start_us DESC LIMIT 6;
    +---------------+------+------------------+----------------------+------------+
    |      hostname | port |    time_start_us | ping_success_time_us | ping_error |
    +---------------+------+------------------+----------------------+------------+
    | 172.31.12.128 | 3306 | 1494492166885382 |                  627 |       NULL |
    |  172.31.3.159 | 3306 | 1494492166887154 |                  586 |       NULL |
    |   172.31.8.51 | 3306 | 1494492166888947 |                  501 |       NULL |
    | 172.31.12.128 | 3306 | 1494492176885541 |                  596 |       NULL |
    |  172.31.3.159 | 3306 | 1494492176887442 |                  599 |       NULL |
    |   172.31.8.51 | 3306 | 1494492176889317 |                  527 |       NULL |
    +---------------+------+------------------+----------------------+------------+
    6 rows in set (0.00 sec)

The output above shows that ProxySQL is able to connect to and ping the nodes you added.
To enable monitoring of these nodes, load them at runtime:

    mysql@proxysql1> LOAD MYSQL SERVERS TO RUNTIME;
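
If you want the server list to survive a ProxySQL restart, you can also persist it to disk:

    mysql@proxysql1> SAVE MYSQL SERVERS TO DISK;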

Now, we must create a user that will work with our XtraDB Cluster nodes.
To add a user, insert credentials into the mysql_users table:

    mysql@proxysql1> INSERT INTO mysql_users (username,password) VALUES ('proxy_user','*****');
    Query OK, 1 row affected (0.00 sec)
    mysql@proxysql1> LOAD MYSQL USERS TO RUNTIME;
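
As with the servers, the user can be persisted to disk so it survives a restart:

    mysql@proxysql1> SAVE MYSQL USERS TO DISK;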

To provide reading/writing access to the cluster for ProxySQL, add this user on one of the Percona XtraDB Cluster nodes:

    mysql@node2> CREATE USER 'proxy_user'@'%' IDENTIFIED BY '*****';
    Query OK, 0 rows affected (0.01 sec)

    mysql@node2> GRANT ALL ON *.* TO 'proxy_user'@'%';
    Query OK, 0 rows affected (0.00 sec)
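
At this point you can verify the whole chain by connecting through ProxySQL’s MySQL port rather than the admin port (6033 is the default; use whatever mysql-interfaces is set to). A sketch; the returned hostname should be one of the cluster nodes:

    mysql -u proxy_user -p -h 127.0.0.1 -P 6033 -e "SELECT @@hostname;"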

proxysql-tools can monitor the health of the XtraDB Cluster nodes and take failed nodes out of rotation.

Install it on your ProxySQL instances using pip install proxysql-tools and add it to the ProxySQL scheduler:

    mysql@proxysql1> INSERT INTO scheduler (id,interval_ms,filename,arg1,arg2) VALUES (1, '15000', '/usr/local/bin/proxysql_tools', 'galera', 'register');
    mysql@proxysql1> LOAD SCHEDULER TO RUNTIME;

To make sure that the script has been loaded, check the runtime_scheduler table:

    mysql@proxysql1> SELECT * FROM runtime_scheduler\G
    *************************** 1. row ***************************
             id: 1
         active: 1
    interval_ms: 15000
       filename: /usr/local/bin/proxysql_tools
           arg1: galera
           arg2: register
           arg3: NULL
           arg4: NULL
           arg5: NULL
        comment:
    1 row in set (0.00 sec)
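
To persist the scheduler entry across ProxySQL restarts, you can also save it to disk:

    mysql@proxysql1> SAVE SCHEDULER TO DISK;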

Repeat the steps above on the other ProxySQL instance.
Now both ProxySQL instances are configured, but we haven’t added high availability yet.

Adding High Availability

keepalived on each ProxySQL node will monitor the other node. When one of the nodes becomes active, we will attach the ENI to the active node and configure the VIP on it. The application will use the VIP to connect to MySQL. No configuration changes are needed when the active ProxySQL node changes.

Install keepalived on the ProxySQL instances and create a network interface in AWS in the same subnet as the ProxySQL instances.
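
If you prefer the AWS CLI to the console, the interface can be created roughly like this (a sketch; the subnet and security group IDs are placeholders for your own values):

    aws ec2 create-network-interface \
        --subnet-id subnet-xxxxxxxx \
        --groups sg-xxxxxxxx \
        --private-ip-address 172.31.26.237 \
        --description "ProxySQL VIP"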

proxysql-tools attaches the network interface to the active node. Let’s configure it: create the config file /etc/twindb/proxysql-tools.cfg with the content below:

    cat /etc/twindb/proxysql-tools.cfg
    [proxysql]
    # ProxySQL admin interface connectivity information
    admin_host=127.0.0.1
    admin_port=6032
    admin_username=admin
    admin_password=***
    
    # MySQL user used by ProxySQL monitoring module to monitor MySQL servers
    monitor_username=monitor
    monitor_password=***
    
    # Virtual IP for HA configuration
    virtual_ip=172.31.26.237
    virtual_netmask=255.255.240.0
    
    [aws]
    aws_access_key_id=***
    aws_secret_access_key=***
    aws_default_region=***

The virtual IP is the address of your ENI. After this, we must configure keepalived.

    cat /etc/keepalived/keepalived.conf
    global_defs {
        notification_email {
            root@localhost
        }
        notification_email_from keepalived@localhost
        smtp_server localhost
        smtp_connect_timeout 30
    }
    vrrp_script chk_proxysql {
        script "/usr/bin/mysqladmin ping -h 127.0.0.1 -P 3306 > /dev/null 2>&1"
        interval 1
        timeout 1
        fall 3
        rise 3
        user root
    }
    vrrp_instance proxysql_instance {
        notify_master "/usr/local/bin/proxysql-tool aws notify_master"
        virtual_router_id 41
        state BACKUP
        interface eth0
        dont_track_primary
        unicast_peer {
            172.31.19.155
        }
        priority 200
        authentication {
            auth_type PASS
            auth_pass 1066
        }
        track_script {
            chk_proxysql
        }
        nopreempt
        debug
    }

unicast_peer is the IP on eth0 of the other ProxySQL node. proxysql-tool with the arguments aws notify_master attaches the ENI and configures the virtual IP on the instance. On the passive ProxySQL node, the config is the same except for unicast_peer:

    cat /etc/keepalived/keepalived.conf
    global_defs {
        notification_email {
            root@localhost
        }
        notification_email_from keepalived@localhost
        smtp_server localhost
        smtp_connect_timeout 30
    }
    vrrp_script chk_proxysql {
        script "/usr/bin/mysqladmin ping -h 127.0.0.1 -P 3306 > /dev/null 2>&1"
        interval 1
        timeout 1
        fall 3
        rise 3
        user root
    }
    vrrp_instance proxysql_instance {
        notify_master "/usr/local/bin/proxysql-tool aws notify_master"
        virtual_router_id 41
        state BACKUP
        interface eth0
        dont_track_primary
        unicast_peer {
            172.31.24.212
        }
        priority 200
        authentication {
            auth_type PASS
            auth_pass 1066
        }
        track_script {
            chk_proxysql
        }
        nopreempt
        debug
    }

Finally, you can start keepalived as a service. One of the ProxySQL nodes will become active, and /usr/local/bin/proxysql-tool aws notify_master will move the ENI to the active node and assign the VIP to it.
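
A sketch of starting it and checking where the VIP ended up (Ubuntu commands matching the setup above):

    sudo service keepalived start

    # on the node that became MASTER, the ENI and the VIP should now be visible
    ip addr show | grep 172.31.26.237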
