Zabbix database backup and table partitioning

  • 2020-05-14 05:36:43
  • OfStack

Because the zabbix server in our test environment runs on a low-spec machine and frequently hits performance bottlenecks (mainly the database and disk I/O), I was forced to find ways to alleviate these problems.

The main culprit was the backup script used previously: it did a full backup of the zabbix database with mysqldump, MySQL's stock dump tool. With a large data volume a full backup takes a long time and holds read locks on the tables, during which Zabbix cannot write, so the Zabbix service concludes that MySQL has died and fires off a whole pile of alerts.
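As an aside: if your Zabbix tables are InnoDB, mysqldump's --single-transaction option avoids those table locks by dumping inside a consistent-read transaction. A minimal sketch, assuming the same credentials and backup path used later in this article:

# Dump without taking table-level read locks (InnoDB only;
# any MyISAM tables would still be locked)
mysqldump --single-transaction -uzabbix -pzabbix -hlocalhost zabbix > /opt/backup/zabbix_full.sql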

It turned out that a few very large tables in the zabbix database accounted for the huge growth in data volume. When backing up, you can simply skip those tables, which shrinks the backup time dramatically (PS: a backup used to take about 10 minutes; now it finishes in roughly 1s).
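If you don't want a per-table script, the same effect can be had with mysqldump's --ignore-table option. A sketch, again assuming the credentials used throughout this article:

# Back up everything except the big history/trends data tables;
# --ignore-table must be repeated, schema-qualified, once per table
mysqldump --single-transaction -uzabbix -pzabbix zabbix \
  --ignore-table=zabbix.history \
  --ignore-table=zabbix.history_log \
  --ignore-table=zabbix.history_str \
  --ignore-table=zabbix.history_text \
  --ignore-table=zabbix.history_uint \
  --ignore-table=zabbix.trends \
  --ignore-table=zabbix.trends_uint > /opt/backup/zabbix_config.sql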

Here is a script (originally written by itnihao) to back up and restore the zabbix database:


#!/bin/bash
#author: itnihao
red='\e[0;31m' #  red   
RED='\e[1;31m' 
green='\e[0;32m' #  green   
GREEN='\e[1;32m' 
blue='\e[0;34m' #  blue   
BLUE='\e[1;34m' 
purple='\e[0;35m' #  purple   
PURPLE='\e[1;35m' 
NC='\e[0m' #  No color   
source /etc/bashrc
source /etc/profile
MySQL_USER=zabbix
MySQL_PASSWORD=zabbix
MySQL_HOST=localhost
MySQL_PORT=3306
MySQL_DUMP_PATH=/opt/backup
MYSQL_BIN_PATH=/opt/software/mysql/bin/mysql
MYSQL_DUMP_BIN_PATH=/opt/software/mysql/bin/mysqldump
MySQL_DATABASE_NAME=zabbix
DATE=$(date '+%Y%m%d')
MySQLDUMP () {
    [ -d ${MySQL_DUMP_PATH} ] || mkdir ${MySQL_DUMP_PATH}
    cd ${MySQL_DUMP_PATH}
    [ -d logs    ] || mkdir logs
    [ -d ${DATE} ] || mkdir ${DATE}
    cd ${DATE}
    
    #TABLE_NAME_ALL=$(${MYSQL_BIN_PATH} -u${MySQL_USER} -p${MySQL_PASSWORD}  -h${MySQL_HOST} ${MySQL_DATABASE_NAME} -e "show tables"|egrep -v "(Tables_in_zabbix)")
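    # Skip the big history*/trends* data tables and other log-type tables: only configuration data is dumped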
    TABLE_NAME_ALL=$(${MYSQL_BIN_PATH} -u${MySQL_USER} -p${MySQL_PASSWORD}  -h${MySQL_HOST} ${MySQL_DATABASE_NAME} -e "show tables"|egrep -v "(Tables_in_zabbix|history*|trends*|acknowledges|alerts|auditlog|events|service_alarms)")
    for TABLE_NAME in ${TABLE_NAME_ALL}
    do
        ${MYSQL_DUMP_BIN_PATH} --opt -u${MySQL_USER} -p${MySQL_PASSWORD} -P${MySQL_PORT} -h${MySQL_HOST} ${MySQL_DATABASE_NAME} ${TABLE_NAME} >${TABLE_NAME}.sql
        sleep 0.01
    done
    [ "$?" == 0 ] && echo "${DATE}: Backup zabbix succeed"     >> ${MySQL_DUMP_PATH}/logs/ZabbixMysqlDump.log
    [ "$?" != 0 ] && echo "${DATE}: Backup zabbix not succeed" >> ${MySQL_DUMP_PATH}/logs/ZabbixMysqlDump.log
    
    cd ${MySQL_DUMP_PATH}/
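    # Remove the backup directory from 5 days ago, i.e. keep roughly 5 days of backups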
    rm -rf $(date +%Y%m%d --date='5 days ago')
    exit 0
}
MySQLImport () {
    cd ${MySQL_DUMP_PATH}
    DATE=$(ls ${MySQL_DUMP_PATH} | egrep "^[0-9]+$")
    echo -e "${green}${DATE}"
    echo -e "${blue}what DATE do you want to import,please input date:${NC}"
    read SELECT_DATE
    if [ -d "${SELECT_DATE}" ];then
        echo -e "you select is ${green}${SELECT_DATE}${NC}, do you want to contine,if,input ${red}(yes|y|Y)${NC},else then exit"
        read Input
        [[ 'yes|y|Y' =~ "${Input}" ]]
        status="$?"
        if [ "${status}" == "0"  ];then
            echo "now import SQL....... Please wait......."
        else
            exit 1
        fi
        cd ${SELECT_DATE}
        for PER_TABEL_SQL in $(ls *.sql)
        do
           ${MYSQL_BIN_PATH} -u${MySQL_USER} -p${MySQL_PASSWORD}  -h${MySQL_HOST} ${MySQL_DATABASE_NAME} < ${PER_TABEL_SQL}
           echo -e "import ${PER_TABEL_SQL} ${PURPLE}........................${NC}"
        done 
        echo "Finish import SQL,Please check Zabbix database"
    else 
        echo "Don't exist ${SELECT_DATE} DIR" 
    fi
}
case "$1" in
MySQLDUMP|mysqldump)
    MySQLDUMP
    ;;
MySQLImport|mysqlimport)
    MySQLImport
    ;;
*)
    echo "Usage: $0 {(MySQLDUMP|mysqldump) (MySQLImport|mysqlimport)}"
    ;;
esac

The source of the script: https://github.com/itnihao/zabbix-book/blob/master/03-chapter/Zabbix_MySQLdump_per_table_v2.sh
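A quick usage sketch (the save path is illustrative; use wherever you keep the script):

chmod +x /opt/scripts/Zabbix_MySQLdump_per_table_v2.sh
/opt/scripts/Zabbix_MySQLdump_per_table_v2.sh mysqldump    # dump the configuration tables, one .sql file per table
/opt/scripts/Zabbix_MySQLdump_per_table_v2.sh mysqlimport  # interactively restore a dated backup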

I adapted that author's script into my own backup script, and you can likewise modify it to suit yourself. The effect was described above: a full backup used to run to almost 4 GB of data, while a backup of just the configuration tables is now under 10 MB, which saves a great deal of time and space.

The trade-off is that the monitoring data itself is no longer backed up. I am considering Percona XtraBackup for incremental backups, but I have not set that up yet; I will leave it for the next couple of days.
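For reference, the incremental workflow I have in mind would look roughly like this; a sketch only, with illustrative paths, not a tested setup:

# Full (base) backup
xtrabackup --backup --user=zabbix --password=zabbix --target-dir=/opt/backup/base
# Incremental backup holding only the changes made since the base
xtrabackup --backup --user=zabbix --password=zabbix --target-dir=/opt/backup/inc1 \
    --incremental-basedir=/opt/backup/base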

OK, that's it for database backup. Next, the tables with large data volumes need to be partitioned. There is a reference article on the Zabbix wiki, https://www.zabbix.org/wiki/Docs/howto/mysql_partition; have a look if you are interested. I will summarize it here, which is a bit more convenient.

Table partitioning physically splits a table with a large data volume into multiple files, while logically it remains a single table, transparent to the application. Splitting a large table into many smaller partitions also makes queries faster, and old partitions can be dropped at any time to discard expired data. This approach suits tables with large volumes but relatively few queries; for tables that are both large and heavily queried, sharding across databases and tables is recommended instead.
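To make that concrete, here is a toy illustration of RANGE partitioning on Zabbix's clock column and of dropping an expired partition. The dates are made up, and the stored procedures below do all of this automatically:

# Partition a table by time ranges over the clock column (illustrative dates)
mysql -uzabbix -pzabbix zabbix -e "ALTER TABLE history PARTITION BY RANGE (clock) (
    PARTITION p20140514 VALUES LESS THAN (UNIX_TIMESTAMP('2014-05-15 00:00:00')),
    PARTITION p20140515 VALUES LESS THAN (UNIX_TIMESTAMP('2014-05-16 00:00:00')));"

# Dropping a partition discards its rows instantly, with no slow DELETE scan
mysql -uzabbix -pzabbix zabbix -e "ALTER TABLE history DROP PARTITION p20140514;"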

Okay, enough theory. Let's get to work.

First, log into the database (PS: I won't show you that)

Then switch to the zabbix database and modify the structure of two tables. (MySQL does not allow a unique key on a partitioned table unless it includes the partitioning column, so the primary keys on history_text and history_log have to give way to plain indexes first.)


use zabbix;
Alter table history_text drop primary key, add index (id), drop index history_text_2, add index history_text_2 (itemid, id);
Alter table history_log drop primary key, add index (id), drop index history_log_2, add index history_log_2 (itemid, id);
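You can sanity-check the new index layout afterwards, for example:

mysql -uzabbix -pzabbix zabbix -e "SHOW INDEX FROM history_text;"
mysql -uzabbix -pzabbix zabbix -e "SHOW INDEX FROM history_log;"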

After the modification, create 4 stored procedures according to the procedures on the official website:


DELIMITER $$
CREATE PROCEDURE `partition_create`(SCHEMANAME VARCHAR(64), TABLENAME VARCHAR(64), PARTITIONNAME VARCHAR(64), CLOCK INT)
BEGIN
        /*
           SCHEMANAME = The DB schema in which to make changes
           TABLENAME = The table to add the partition to
           PARTITIONNAME = The name of the partition to create
           CLOCK = The new partition will hold values less than this unix timestamp
        */
        /*
           Verify that the partition does not already exist
        */
 
        DECLARE RETROWS INT;
        SELECT COUNT(1) INTO RETROWS
        FROM information_schema.partitions
        WHERE table_schema = SCHEMANAME AND TABLE_NAME = TABLENAME AND partition_description >= CLOCK;
 
        IF RETROWS = 0 THEN
                /*
                   1. Print a message indicating that a partition was created.
                   2. Create the SQL to create the partition.
                   3. Execute the SQL from #2.
                */
                SELECT CONCAT( "partition_create(", SCHEMANAME, ",", TABLENAME, ",", PARTITIONNAME, ",", CLOCK, ")" ) AS msg;
                SET @SQL = CONCAT( 'ALTER TABLE ', SCHEMANAME, '.', TABLENAME, ' ADD PARTITION (PARTITION ', PARTITIONNAME, ' VALUES LESS THAN (', CLOCK, '));' );
                PREPARE STMT FROM @SQL;
                EXECUTE STMT;
                DEALLOCATE PREPARE STMT;
        END IF;
END$$
DELIMITER ;

DELIMITER $$
CREATE PROCEDURE `partition_drop`(SCHEMANAME VARCHAR(64), TABLENAME VARCHAR(64), DELETE_BELOW_PARTITION_DATE BIGINT)
BEGIN
        /*
           SCHEMANAME = The DB schema in which to make changes
           TABLENAME = The table with partitions to potentially delete
           DELETE_BELOW_PARTITION_DATE = Delete any partitions with names that are dates older than this one (yyyy-mm-dd)
        */
        DECLARE done INT DEFAULT FALSE;
        DECLARE drop_part_name VARCHAR(16);
 
        /*
           Get a list of all the partitions that are older than the date
           in DELETE_BELOW_PARTITION_DATE.  All partitions are prefixed with
           a "p", so use SUBSTRING TO get rid of that character.
        */
        DECLARE myCursor CURSOR FOR
                SELECT partition_name
                FROM information_schema.partitions
                WHERE table_schema = SCHEMANAME AND TABLE_NAME = TABLENAME AND CAST(SUBSTRING(partition_name FROM 2) AS UNSIGNED) < DELETE_BELOW_PARTITION_DATE;
        DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;
 
        /*
           Create the basics for when we need to drop the partition.  Also, create
           @drop_partitions to hold a comma-delimited list of all partitions that
           should be deleted.
        */
        SET @alter_header = CONCAT("ALTER TABLE ", SCHEMANAME, ".", TABLENAME, " DROP PARTITION ");
        SET @drop_partitions = "";
 
        /*
           Start looping through all the partitions that are too old.
        */
        OPEN myCursor;
        read_loop: LOOP
                FETCH myCursor INTO drop_part_name;
                IF done THEN
                        LEAVE read_loop;
                END IF;
                SET @drop_partitions = IF(@drop_partitions = "", drop_part_name, CONCAT(@drop_partitions, ",", drop_part_name));
        END LOOP;
        IF @drop_partitions != "" THEN
                /*
                   1. Build the SQL to drop all the necessary partitions.
                   2. Run the SQL to drop the partitions.
                   3. Print out the table partitions that were deleted.
                */
                SET @full_sql = CONCAT(@alter_header, @drop_partitions, ";");
                PREPARE STMT FROM @full_sql;
                EXECUTE STMT;
                DEALLOCATE PREPARE STMT;
 
                SELECT CONCAT(SCHEMANAME, ".", TABLENAME) AS `table`, @drop_partitions AS `partitions_deleted`;
        ELSE
                /*
                   No partitions are being deleted, so print out "N/A" (Not applicable) to indicate
                   that no changes were made.
                */
                SELECT CONCAT(SCHEMANAME, ".", TABLENAME) AS `table`, "N/A" AS `partitions_deleted`;
        END IF;
END$$
DELIMITER ;

DELIMITER $$
CREATE PROCEDURE `partition_maintenance`(SCHEMA_NAME VARCHAR(32), TABLE_NAME VARCHAR(32), KEEP_DATA_DAYS INT, HOURLY_INTERVAL INT, CREATE_NEXT_INTERVALS INT)
BEGIN
        DECLARE OLDER_THAN_PARTITION_DATE VARCHAR(16);
        DECLARE PARTITION_NAME VARCHAR(16);
        DECLARE LESS_THAN_TIMESTAMP INT;
        DECLARE CUR_TIME INT;
 
        CALL partition_verify(SCHEMA_NAME, TABLE_NAME, HOURLY_INTERVAL);
        SET CUR_TIME = UNIX_TIMESTAMP(DATE_FORMAT(NOW(), '%Y-%m-%d 00:00:00'));
 
        SET @__interval = 1;
        create_loop: LOOP
                IF @__interval > CREATE_NEXT_INTERVALS THEN
                        LEAVE create_loop;
                END IF;
 
                SET LESS_THAN_TIMESTAMP = CUR_TIME + (HOURLY_INTERVAL * @__interval * 3600);
                SET PARTITION_NAME = FROM_UNIXTIME(CUR_TIME + HOURLY_INTERVAL * (@__interval - 1) * 3600, 'p%Y%m%d%H00');
                CALL partition_create(SCHEMA_NAME, TABLE_NAME, PARTITION_NAME, LESS_THAN_TIMESTAMP);
                SET @__interval=@__interval+1;
        END LOOP;
 
        SET OLDER_THAN_PARTITION_DATE=DATE_FORMAT(DATE_SUB(NOW(), INTERVAL KEEP_DATA_DAYS DAY), '%Y%m%d0000');
        CALL partition_drop(SCHEMA_NAME, TABLE_NAME, OLDER_THAN_PARTITION_DATE);
 
END$$
DELIMITER ;

DELIMITER $$
CREATE PROCEDURE `partition_verify`(SCHEMANAME VARCHAR(64), TABLENAME VARCHAR(64), HOURLYINTERVAL INT(11))
BEGIN
        DECLARE PARTITION_NAME VARCHAR(16);
        DECLARE RETROWS INT(11);
        DECLARE FUTURE_TIMESTAMP TIMESTAMP;
 
        /*
         * Check if any partitions exist for the given SCHEMANAME.TABLENAME.
         */
        SELECT COUNT(1) INTO RETROWS
        FROM information_schema.partitions
        WHERE table_schema = SCHEMANAME AND TABLE_NAME = TABLENAME AND partition_name IS NULL;
 
        /*
         * If partitions do not exist, go ahead and partition the table
         */
        IF RETROWS = 1 THEN
                /*
                 * Take the current date at 00:00:00 and add HOURLYINTERVAL to it.  This is the timestamp below which we will store values.
                 * We begin partitioning based on the beginning of a day.  This is because we don't want to generate a random partition
                 * that won't necessarily fall in line with the desired partition naming (ie: if the hour interval is 24 hours, we could
                 * end up creating a partition now named "p201403270600" when all other partitions will be like "p201403280000").
                 */
                SET FUTURE_TIMESTAMP = TIMESTAMPADD(HOUR, HOURLYINTERVAL, CONCAT(CURDATE(), " ", '00:00:00'));
                SET PARTITION_NAME = DATE_FORMAT(CURDATE(), 'p%Y%m%d%H00');
 
                -- Create the partitioning query
                SET @__PARTITION_SQL = CONCAT("ALTER TABLE ", SCHEMANAME, ".", TABLENAME, " PARTITION BY RANGE(`clock`)");
                SET @__PARTITION_SQL = CONCAT(@__PARTITION_SQL, "(PARTITION ", PARTITION_NAME, " VALUES LESS THAN (", UNIX_TIMESTAMP(FUTURE_TIMESTAMP), "));");
 
                -- Run the partitioning query
                PREPARE STMT FROM @__PARTITION_SQL;
                EXECUTE STMT;
                DEALLOCATE PREPARE STMT;
        END IF;
END$$
DELIMITER ;

Once the SQL above has been executed, the four stored procedures are in place and can be invoked with:


CALL partition_maintenance('<zabbix_db_name>', '<table_name>', <days_to_keep_data>, <hourly_interval>, <num_future_intervals_to_create>)

That call partitions the table you name. Let me explain the parameters.

Here's an example:


CALL partition_maintenance('zabbix', 'history_uint', 31, 24, 14);

zabbix_db_name: database name

table_name: name of the table

days_to_keep_data: how many days of data to keep

hourly_interval: how many hours of data each partition holds

num_future_intervals_to_create: how many future partitions to create in this run

In this example, the history_uint table keeps at most 31 days of data, a new partition is created for every 24 hours, and this run creates 14 partitions in advance.

The four stored procedures above can be saved in a single file and imported into the database (I will put the file in an attachment later). The command used here is: mysql -uzabbix -pzabbix zabbix < partition_call.sql

You can then wrap the whole series of CALL statements in a stored procedure of its own and save it to a file as well. Its content is as follows:


DELIMITER $$
CREATE PROCEDURE `partition_maintenance_all`(SCHEMA_NAME VARCHAR(32))
BEGIN
       CALL partition_maintenance(SCHEMA_NAME, 'history', 31, 24, 14);
       CALL partition_maintenance(SCHEMA_NAME, 'history_log', 31, 24, 14);
       CALL partition_maintenance(SCHEMA_NAME, 'history_str', 31, 24, 14);
       CALL partition_maintenance(SCHEMA_NAME, 'history_text', 31, 24, 14);
       CALL partition_maintenance(SCHEMA_NAME, 'history_uint', 31, 24, 14);
       CALL partition_maintenance(SCHEMA_NAME, 'trends', 180, 24, 14);
       CALL partition_maintenance(SCHEMA_NAME, 'trends_uint', 180, 24, 14);
END$$
DELIMITER ;

Import this file into the database the same way: mysql -uzabbix -pzabbix zabbix < partition_all.sql

With that in place, you can partition the tables using the following command:


mysql -uzabbix -pzabbix zabbix -e "CALL partition_maintenance_all('zabbix');"
+----------------+--------------------+
| table          | partitions_deleted |
+----------------+--------------------+
| zabbix.history | N/A                |
+----------------+--------------------+
+--------------------+--------------------+
| table              | partitions_deleted |
+--------------------+--------------------+
| zabbix.history_log | N/A                |
+--------------------+--------------------+
+--------------------+--------------------+
| table              | partitions_deleted |
+--------------------+--------------------+
| zabbix.history_str | N/A                |
+--------------------+--------------------+
+---------------------+--------------------+
| table               | partitions_deleted |
+---------------------+--------------------+
| zabbix.history_text | N/A                |
+---------------------+--------------------+
+---------------------+--------------------+
| table               | partitions_deleted |
+---------------------+--------------------+
| zabbix.history_uint | N/A                |
+---------------------+--------------------+
+---------------+--------------------+
| table         | partitions_deleted |
+---------------+--------------------+
| zabbix.trends | N/A                |
+---------------+--------------------+
+--------------------+--------------------+
| table              | partitions_deleted |
+--------------------+--------------------+
| zabbix.trends_uint | N/A                |
+--------------------+--------------------+

You can also see the newly generated partition files in the MySQL data directory. (PS: it is best to empty the history_uint table before executing the command above; otherwise the conversion takes a very long time because of the amount of data in that table. The command to empty it, discarding the existing history, is: truncate table history_uint;)
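To inspect the result (the data directory path is an assumption; use whatever datadir your MySQL is configured with):

# List the partitions MySQL now tracks for one of the tables
mysql -uzabbix -pzabbix -e "SELECT partition_name, partition_description
    FROM information_schema.partitions
    WHERE table_schema='zabbix' AND table_name='history_uint';"

# With innodb_file_per_table enabled, each partition is a separate .ibd file
ls /var/lib/mysql/zabbix/ | grep history_uint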

Ok, so you can partition the table.

Put the command above into a scheduled task (crontab), as follows:


01 01 * * * mysql -uzabbix -pzabbix zabbix -e "CALL partition_maintenance_all('zabbix');" > /dev/null 2>&1

This runs once every night at 01:01. In addition, the backup script written earlier needs its own scheduled task, for example (the time and script path are illustrative; adjust them to wherever you saved the script):


01 02 * * * /bin/bash /opt/scripts/Zabbix_MySQLdump_per_table_v2.sh mysqldump > /dev/null 2>&1

And that's it. Now go back to the Zabbix web pages and see whether they load faster than before.

