mysqldump database backup commands

1. Basic backup command

Format: mysqldump -h hostname -P port -u username -p password --databases dbname > filename.sql

Example: mysqldump -h 192.168.1.100 -P 3306 -uroot -ppassword --databases cmdb > /data/backup/cmdb.sql

2. Compressed backup

The dump may be quite large and awkward to copy to a remote host, so compress it on the fly:

Format: mysqldump -h hostname -P port -u username -p password --databases dbname | gzip > filename.sql.gz

Example: mysqldump -h192.168.1.100 -P 3306 -uroot -ppassword --databases cmdb | gzip > /data/backup/cmdb.sql.gz
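
To restore from a compressed dump, the archive can be streamed straight into the mysql client (a minimal sketch reusing the example paths above):

gunzip < /data/backup/cmdb.sql.gz | mysql -h192.168.1.100 -P3306 -uroot -ppassword cmdb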

3. Backing up several tables from one database

Format: mysqldump -h hostname -P port -u username -p password dbname table1 table2 ... > filename.sql (do not add --databases here, or the table names would be treated as database names)

Example: mysqldump -h192.168.1.100 -P3306 -uroot -ppassword cmdb t1 t2 > /data/backup/cmdb_t1_t2.sql

4. Backing up several databases at once

Format: mysqldump -h hostname -P port -u username -p password --databases db1 db2 db3 > filename.sql

Example: mysqldump -h192.168.1.100 -uroot -ppassword --databases cmdb bbs blog > /data/backup/multi_db.sql

5. Backing up every database on the instance

Format: mysqldump -h hostname -P port -u username -p password --all-databases > filename.sql

Example: mysqldump -h192.168.1.100 -P3306 -uroot -ppassword --all-databases > /data/backup/all_db.sql

6. Backup that includes DROP DATABASE / DROP TABLE statements

Format: mysqldump -h hostname -P port -u username -p password --add-drop-table --add-drop-database --databases dbname > filename.sql (--add-drop-database only takes effect together with --databases or --all-databases)

Example: mysqldump -uroot -ppassword --add-drop-table --add-drop-database --databases cmdb > /data/backup/cmdb_with_drop.sql

7. Backing up structure only, without data

Format: mysqldump -h hostname -P port -u username -p password --no-data --databases db1 db2 db3 > filename.sql

Example: mysqldump --no-data --databases db1 db2 cmdb > /data/backup/structure.sql

8. A complete example

Backup:

mysqldump -uroot -p -hhost db > back.sql
mysqldump -uroot -p -hhost --databases db --tables t1 t2 > back_0.sql
mysqldump -uroot -p -hhost --databases db --tables t3 t4 t5 t6 > back_1.sql
mysqldump -uroot -p -hhost --databases db --tables t7 t8 t9 t10 > back_2.sql

Restore:
mysql -uroot -p dbname < back.sql
mysql -uroot -p dbname < back_0.sql
mysql -uroot -p dbname < back_1.sql
mysql -uroot -p dbname < back_2.sql

Recommended approach when deleting large amounts of data in MySQL

A single table can hold a billion rows or more, especially logs reported by hardware devices, much of which is of little use. On some versions (I tested MySQL 5.1), deleting data in large batches drags the whole database down, slowing even queries that have nothing to do with the table being purged; it may also have been an issue with that particular server.

The recommended procedure:

  1. Create a backup table.
  2. If the rows to delete outnumber the rows to keep, copy the rows to keep into the backup table.
  3. Use TRUNCATE TABLE to empty the original table instantly.
  4. Copy the retained rows from the backup table back into the original table.
  5. Use TRUNCATE TABLE to empty the backup table.

-- 1. Create an empty backup table with the same columns (no rows copied)
CREATE TABLE tb_device_report_log_bak SELECT * FROM tb_device_report_log_2 WHERE 1=0;
-- 2. Copy only the rows to keep (everything except msgId = 10009)
INSERT INTO tb_device_report_log_bak SELECT * FROM tb_device_report_log_2 a WHERE a.msgId != 10009;
-- 3. Empty the original table instantly
TRUNCATE tb_device_report_log_2;
-- Comparison query: row counts per msgId (run before and after restoring, see below)
SELECT count(1) AS zs, msgId FROM `tb_device_report_log_2` GROUP BY msgId ORDER BY zs DESC;
-- 4. Copy the retained rows back into the original table
INSERT INTO tb_device_report_log_2 SELECT * FROM tb_device_report_log_bak;
-- 5. Empty the backup table
TRUNCATE tb_device_report_log_bak;

SELECT count(1) AS zs, msgId FROM `tb_device_report_log_2` GROUP BY msgId ORDER BY zs DESC;

I used this query to compare the per-msgId row counts; only after they matched did I TRUNCATE the backup table.

New features in MySQL 8.0

1. Default character set changed from latin1 to utf8mb4

Before 8.0 the default character set was latin1, with utf8 acting as an alias for utf8mb3; in 8.0 the default character set is utf8mb4.
Note: when tested on Percona Server 8.0.15, utf8 still resolves to utf8mb3; as the warning below shows, utf8 is only planned to become an alias for utf8mb4 in a future release.

Warning | 3719 | 'utf8' is currently an alias for the character set UTF8MB3, but will be an alias for UTF8MB4 in a future release. Please consider using UTF8MB4 in order to be unambiguous. |
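
To check how utf8 resolves on a particular build, a quick check (not from the original post) is:

mysql> show character set like 'utf8%';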

2. MyISAM system tables replaced by InnoDB

All system tables now use the transactional InnoDB engine; a default MySQL instance no longer contains any MyISAM tables unless you create them yourself.

# MySQL 5.7
mysql> select distinct(ENGINE) from information_schema.tables;
+--------------------+
| ENGINE             |
+--------------------+
| MEMORY             |
| InnoDB             |
| MyISAM             |
| CSV                |
| PERFORMANCE_SCHEMA |
| NULL               |
+--------------------+
6 rows in set (0.00 sec)
 
# MySQL 8.0
mysql> select distinct(ENGINE) from information_schema.tables;
+--------------------+
| ENGINE             |
+--------------------+
| NULL               |
| InnoDB             |
| CSV                |
| PERFORMANCE_SCHEMA |
+--------------------+
4 rows in set (0.00 sec)

3. Auto-increment values are persisted

In versions before 8.0, if a table's AUTO_INCREMENT counter was larger than max(primary key)+1, a restart of MySQL would reset it to max(primary key)+1, which in some scenarios led to duplicate-key conflicts on business keys or other hard-to-diagnose problems. The bug was reported very early (https://bugs.mysql.com/bug.php?id=199) but only fixed in 8.0: the AUTO_INCREMENT value is now persisted and no longer changes after a restart.
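
A minimal sketch of the difference in behaviour (table name and values invented for illustration):

mysql> create table t_inc (id int auto_increment primary key) engine=innodb;
mysql> insert into t_inc values (1), (2), (3);
mysql> delete from t_inc where id = 3;   -- max(id) is now 2, the in-memory counter is 4
-- restart mysqld here
mysql> insert into t_inc values (null);
-- 5.7 and earlier: the new row gets id = 3 (counter reset to max(id)+1 at startup)
-- 8.0: the new row gets id = 4 (counter persisted across the restart)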

4. Atomic DDL

DDL on InnoDB tables is now transactional: it either succeeds completely or is rolled back. The rollback information for DDL operations is written to the data dictionary table mysql.innodb_ddl_log, a hidden table that does not show up in SHOW TABLES. With the settings below, the DDL log entries can also be printed to the MySQL error log.

mysql> set global log_error_verbosity=3;
mysql> set global innodb_print_ddl_logs=1;
mysql> create table t1(c int) engine=innodb;
 
# MySQL error log:
2018-06-26T11:25:25.817245+08:00 44 [Note] [MY-012473] [InnoDB] InnoDB: DDL log insert : [DDL record: DELETE SPACE, id=41, thread_id=44, space_id=6, old_file_path=./db/t1.ibd]
2018-06-26T11:25:25.817369+08:00 44 [Note] [MY-012478] [InnoDB] InnoDB: DDL log delete : by id 41
2018-06-26T11:25:25.819753+08:00 44 [Note] [MY-012477] [InnoDB] InnoDB: DDL log insert : [DDL record: REMOVE CACHE, id=42, thread_id=44, table_id=1063, new_file_path=db/t1]
2018-06-26T11:25:25.819796+08:00 44 [Note] [MY-012478] [InnoDB] InnoDB: DDL log delete : by id 42
2018-06-26T11:25:25.820556+08:00 44 [Note] [MY-012472] [InnoDB] InnoDB: DDL log insert : [DDL record: FREE, id=43, thread_id=44, space_id=6, index_id=140, page_no=4]
2018-06-26T11:25:25.820594+08:00 44 [Note] [MY-012478] [InnoDB] InnoDB: DDL log delete : by id 43
2018-06-26T11:25:25.825743+08:00 44 [Note] [MY-012485] [InnoDB] InnoDB: DDL log post ddl : begin for thread id : 44
2018-06-26T11:25:25.825784+08:00 44 [Note] [MY-012486] [InnoDB] InnoDB: DDL log post ddl : end for thread id : 44

Another example: the schema contains only table t1, and drop table t1, t2; tries to drop both t1 and t2. In 5.7 the statement fails but t1 is dropped anyway; in 8.0 the statement fails and t1 is left untouched, which demonstrates that DDL in 8.0 is atomic: it either fully succeeds or is fully rolled back.

# MySQL 5.7
mysql> show tables;
+---------------+
| Tables_in_db |
+---------------+
| t1            |
+---------------+
1 row in set (0.00 sec)
mysql> drop table t1, t2;
ERROR 1051 (42S02): Unknown table 'db.t2'
mysql> show tables;
Empty set (0.00 sec)
 
# MySQL 8.0
mysql> show tables;
+---------------+
| Tables_in_db |
+---------------+
| t1            |
+---------------+
1 row in set (0.00 sec)
mysql> drop table t1, t2;
ERROR 1051 (42S02): Unknown table 'db.t2'
mysql> show tables;
+---------------+
| Tables_in_db |
+---------------+
| t1            |
+---------------+
1 row in set (0.00 sec)

5. Persisted configuration changes

MySQL 8.0 can modify global parameters online and persist them: adding the PERSIST keyword writes the change to a new configuration file (mysqld-auto.cnf), and on restart MySQL reads the latest values from that file.
For example:
set PERSIST expire_logs_days=10;
This creates a JSON file named mysqld-auto.cnf in the data directory; formatted, it looks like the snippet below. When my.cnf and mysqld-auto.cnf both set a parameter, the latter takes precedence.

{
    "Version": 1,
    "mysql_server": {
        "expire_logs_days": {
            "Value": "10",
            "Metadata": {
                "Timestamp": 1529657078851627,
                "User": "root",
                "Host": "localhost"
            }
        }
    }
}

6. Descending indexes

MySQL has long accepted DESC in index definitions, but until now it actually built an ascending index anyway. In the MySQL 5.7 output below, c2 is declared descending yet SHOW CREATE TABLE still shows it as ascending; in 8.0, c2 really is indexed in descending order.

# MySQL 5.7
mysql> create table t1(c1 int,c2 int,index idx_c1_c2(c1,c2 desc));
Query OK, 0 rows affected (0.03 sec)
mysql> show create table t1\G
*************************** 1. row ***************************
       Table: t1
Create Table: CREATE TABLE `t1` (
  `c1` int(11) DEFAULT NULL,
  `c2` int(11) DEFAULT NULL,
  KEY `idx_c1_c2` (`c1`,`c2`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4
1 row in set (0.00 sec)
 
# MySQL 8.0
mysql> create table t1(c1 int,c2 int,index idx_c1_c2(c1,c2 desc));
Query OK, 0 rows affected (0.06 sec)
mysql> show create table t1\G
*************************** 1. row ***************************
       Table: t1
Create Table: CREATE TABLE `t1` (
  `c1` int(11) DEFAULT NULL,
  `c2` int(11) DEFAULT NULL,
  KEY `idx_c1_c2` (`c1`,`c2` DESC)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci ROW_FORMAT=DYNAMIC
1 row in set (0.00 sec)

Now look at how the descending index affects the execution plan. Insert 100,000 random rows into t1 and examine the plan for select * from t1 order by c1, c2 desc. The 5.7 plan scans 100113 rows and uses filesort, while the 8.0 plan reads only 5 rows from the index.

DELIMITER ;;
CREATE PROCEDURE test_insert ()
BEGIN
DECLARE i INT DEFAULT 1;
WHILE i<100000
DO
insert into t1 select rand()*100000, rand()*100000;
SET i=i+1;
END WHILE ;
commit;
END;;
DELIMITER ;
CALL test_insert();
 
# MySQL 5.7
mysql> explain select * from t1 order by c1 , c2 desc limit 5;
+----+-------------+-------+------------+-------+---------------+-----------+---------+------+--------+----------+-----------------------------+
| id | select_type | table | partitions | type  | possible_keys | key       | key_len | ref  | rows   | filtered | Extra                       |
+----+-------------+-------+------------+-------+---------------+-----------+---------+------+--------+----------+-----------------------------+
|  1 | SIMPLE      | t1    | NULL       | index | NULL          | idx_c1_c2 | 10      | NULL | 100113 |   100.00 | Using index; Using filesort |
+----+-------------+-------+------------+-------+---------------+-----------+---------+------+--------+----------+-----------------------------+
1 row in set, 1 warning (0.00 sec)
 
# MySQL 8.0
mysql> explain select * from t1 order by c1 , c2 desc limit 5;
+----+-------------+-------+------------+-------+---------------+-----------+---------+------+------+----------+-------------+
| id | select_type | table | partitions | type  | possible_keys | key       | key_len | ref  | rows | filtered | Extra       |
+----+-------------+-------+------------+-------+---------------+-----------+---------+------+------+----------+-------------+
|  1 | SIMPLE      | t1    | NULL       | index | NULL          | idx_c1_c2 | 10      | NULL |    5 |   100.00 | Using index |
+----+-------------+-------+------------+-------+---------------+-----------+---------+------+------+----------+-------------+
1 row in set, 1 warning (0.00 sec)

A descending index only helps the particular sort order it matches; used with the wrong order it can make a query slower. If the sort is changed to order by c1 desc, c2 desc, the 5.7 plan is clearly better than the 8.0 plan:

# MySQL 5.7
mysql> explain select * from t1  order by c1 desc , c2 desc limit 5;
+----+-------------+-------+------------+-------+---------------+-----------+---------+------+------+----------+-------------+
| id | select_type | table | partitions | type  | possible_keys | key       | key_len | ref  | rows | filtered | Extra       |
+----+-------------+-------+------------+-------+---------------+-----------+---------+------+------+----------+-------------+
|  1 | SIMPLE      | t1    | NULL       | index | NULL          | idx_c1_c2 | 10      | NULL |    5 |   100.00 | Using index |
+----+-------------+-------+------------+-------+---------------+-----------+---------+------+------+----------+-------------+
1 row in set, 1 warning (0.01 sec)
 
# MySQL 8.0
mysql> explain select * from t1 order by c1 desc , c2 desc limit 5;
+----+-------------+-------+------------+-------+---------------+-----------+---------+------+--------+----------+-----------------------------+
| id | select_type | table | partitions | type  | possible_keys | key       | key_len | ref  | rows   | filtered | Extra                       |
+----+-------------+-------+------------+-------+---------------+-----------+---------+------+--------+----------+-----------------------------+
|  1 | SIMPLE      | t1    | NULL       | index | NULL          | idx_c1_c2 | 10      | NULL | 100429 |   100.00 | Using index; Using filesort |
+----+-------------+-------+------------+-------+---------------+-----------+---------+------+--------+----------+-----------------------------+
1 row in set, 1 warning (0.01 sec)

7. GROUP BY no longer sorts implicitly

In MySQL 8.0, GROUP BY no longer implicitly sorts by the grouping columns; if you need ordered output, add an explicit ORDER BY clause.

# Table structure
mysql> show create table tb1\G
*************************** 1. row ***************************
       Table: tb1
Create Table: CREATE TABLE `tb1` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `name` varchar(50) DEFAULT NULL,
  `group_own` int(11) DEFAULT '0',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=11 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci ROW_FORMAT=DYNAMIC
1 row in set (0.00 sec)
 
# Table data
mysql> select * from tb1;
+----+------+-----------+
| id | name | group_own |
+----+------+-----------+
|  1 | 1    |         0 |
|  2 | 2    |         0 |
|  3 | 3    |         0 |
|  4 | 4    |         0 |
|  5 | 5    |         5 |
|  8 | 8    |         1 |
| 10 | 10   |         5 |
+----+------+-----------+
7 rows in set (0.00 sec)
 
# MySQL 5.7
mysql> select count(id), group_own from tb1 group by group_own;
+-----------+-----------+
| count(id) | group_own |
+-----------+-----------+
|         4 |         0 |
|         1 |         1 |
|         2 |         5 |
+-----------+-----------+
3 rows in set (0.00 sec)
 
# MySQL 8.0.11
mysql> select count(id), group_own from tb1 group by group_own;
+-----------+-----------+
| count(id) | group_own |
+-----------+-----------+
|         4 |         0 |
|         2 |         5 |
|         1 |         1 |
+-----------+-----------+
3 rows in set (0.00 sec)
 
# MySQL 8.0.11 with an explicit ORDER BY
mysql> select count(id), group_own from tb1 group by group_own order by group_own;
+-----------+-----------+
| count(id) | group_own |
+-----------+-----------+
|         4 |         0 |
|         1 |         1 |
|         2 |         5 |
+-----------+-----------+
3 rows in set (0.00 sec)

8. Improved JSON support

MySQL 8 significantly improves JSON support, with functions such as JSON_EXTRACT() for extracting values from a JSON column by path, and the new JSON_ARRAYAGG() and JSON_OBJECTAGG() aggregate functions for combining rows into JSON arrays and objects.

For replication, the new binlog_row_value_options parameter controls how JSON changes are transmitted. It enables partial updates of JSON values, so only the modified portion is written to the binlog, which greatly reduces the overhead when a large JSON document receives only a small change.
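
A minimal sketch of the new aggregate functions (table and data invented for illustration):

mysql> create table t_json_demo (id int, k varchar(10), v int);
mysql> insert into t_json_demo values (1, 'a', 10), (1, 'b', 20), (2, 'a', 30);
mysql> select id,
              json_arrayagg(v)     as vals,  -- e.g. [10, 20] for id = 1
              json_objectagg(k, v) as kv     -- e.g. {"a": 10, "b": 20} for id = 1
       from t_json_demo group by id;
mysql> select json_extract('{"a": {"b": 42}}', '$.a.b');  -- 42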

9. redo & undo 日志加密

增加以下两个参数,用于控制redo、undo日志的加密。
innodb_undo_log_encrypt
innodb_undo_log_encrypt
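
A minimal sketch of turning them on (this assumes a keyring plugin is already configured, which the encryption features require; only newly written log pages are encrypted):

mysql> set global innodb_redo_log_encrypt = ON;
mysql> set global innodb_undo_log_encrypt = ON;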

10. SELECT ... FOR UPDATE can skip lock waits

select ... for update and select ... for share (new syntax in 8.0) now accept the NOWAIT and SKIP LOCKED modifiers to skip waiting for locks.
In 5.7 and earlier, a select ... for update that cannot acquire its locks waits until innodb_lock_wait_timeout expires.

In 8.0, nowait and skip locked return immediately: if the requested rows are already locked, nowait fails with an error right away, while skip locked also returns right away but simply leaves the locked rows out of the result.

# session1:
mysql> begin;
mysql> select * from t1 where c1 = 2 for update;
+------+-------+
| c1   | c2    |
+------+-------+
|    2 | 60530 |
|    2 | 24678 |
+------+-------+
2 rows in set (0.00 sec)
 
# session2:
mysql> select * from t1 where c1 = 2 for update nowait;
ERROR 3572 (HY000): Statement aborted because lock(s) could not be acquired  immediately and NOWAIT is set.
mysql> select * from t1 where c1 = 2 for update skip locked;
Empty set (0.00 sec)

11. The SET_VAR hint

SQL statements can now carry a SET_VAR hint, which adjusts a system variable for the duration of a single statement and can help tune individual statements.

  • select /*+ SET_VAR(sort_buffer_size = 16M) */ id from test order by id;
  • insert /*+ SET_VAR(foreign_key_checks=OFF) */ into test(name) values(1);

12. Invisible indexes

The INVISIBLE keyword can be used when creating or altering a table to control whether an index is visible. An invisible index is simply ignored by the optimizer when planning queries, but it still physically exists and is kept up to date, so it can be made visible again instantly when needed.

# Create an invisible index
create table t2(c1 int,c2 int,index idx_c1_c2(c1,c2 desc) invisible );
# Make the index visible
alter table t2 alter index idx_c1_c2 visible;
# Make the index invisible
alter table t2 alter index idx_c1_c2 invisible;

13. Histograms

The optimizer uses the data in column_statistics to judge how a column's values are distributed and to produce more accurate execution plans.

Histogram statistics are collected or dropped with ANALYZE TABLE table_name [UPDATE HISTOGRAM ON col_name WITH N BUCKETS | DROP HISTOGRAM ON col_name].

A histogram records the distribution of values in selected columns and gives the optimizer a reference for choosing an efficient plan. It is fundamentally different from an index: an index has a maintenance cost, because every INSERT, UPDATE and DELETE has to update it, which affects performance, whereas a histogram is built once and never updated unless you explicitly refresh it, so it has no impact on INSERT, UPDATE or DELETE performance.

# Create / update histograms
mysql> analyze table t1 update histogram on c1, c2 with 32 buckets;
+--------+-----------+----------+-----------------------------------------------+
| Table  | Op        | Msg_type | Msg_text                                      |
+--------+-----------+----------+-----------------------------------------------+
| db.t1 | histogram | status   | Histogram statistics created for column 'c1'. |
| db.t1 | histogram | status   | Histogram statistics created for column 'c2'. |
+--------+-----------+----------+-----------------------------------------------+
2 rows in set (2.57 sec)
 
# Drop histograms
mysql> analyze table t1 drop histogram on c1, c2;
+--------+-----------+----------+-----------------------------------------------+
| Table  | Op        | Msg_type | Msg_text                                      |
+--------+-----------+----------+-----------------------------------------------+
| db.t1 | histogram | status   | Histogram statistics removed for column 'c1'. |
| db.t1 | histogram | status   | Histogram statistics removed for column 'c2'. |
+--------+-----------+----------+-----------------------------------------------+
2 rows in set (0.13 sec)

14. The innodb_dedicated_server parameter

When enabled, this lets InnoDB automatically size innodb_buffer_pool_size, innodb_log_file_size and innodb_flush_method based on the amount of memory detected on the server.
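
The variable is not dynamic, so it goes into the configuration file; a minimal sketch (leave the three auto-sized variables unset so the automatic values are not overridden):

[mysqld]
innodb_dedicated_server = ON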

15. More detailed log classification

Error-log messages now carry an error number ([MY-010311]) and the subsystem that produced them ([Server]).

# MySQL 5.7
2018-06-08T09:07:20.114585+08:00 0 [Warning] 'proxies_priv' entry '@ root@localhost' ignored in --skip-name-resolve mode.
2018-06-08T09:07:20.117848+08:00 0 [Warning] 'tables_priv' entry 'user mysql.session@localhost' ignored in --skip-name-resolve mode.
2018-06-08T09:07:20.117868+08:00 0 [Warning] 'tables_priv' entry 'sys_config mysql.sys@localhost' ignored in --skip-name-resolve mode.
 
 
# MySQL 8.0
2018-06-21T17:53:13.040295+08:00 28 [Warning] [MY-010311] [Server] 'proxies_priv'  entry '@ root@localhost' ignored in --skip-name-resolve mode.
2018-06-21T17:53:13.040520+08:00 28 [Warning] [MY-010330] [Server] 'tables_priv'  entry 'user mysql.session@localhost' ignored in --skip-name-resolve mode.
2018-06-21T17:53:13.040542+08:00 28 [Warning] [MY-010330] [Server] 'tables_priv'  entry 'sys_config mysql.sys@localhost' ignored in --skip-name-resolve mode.

16. Automatic undo space reclamation

  • The default of innodb_undo_log_truncate changed from OFF to ON in 8.0.2, so undo log tablespaces are now truncated automatically by default.
  • innodb_undo_tablespaces defaults to 2 as of 8.0.2, so while one undo tablespace is being truncated the other keeps serving normally.
  • innodb_max_undo_log_size defines the truncation threshold: once an undo tablespace grows beyond this size it is marked as reclaimable (the current values can be checked as shown below).
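
A quick way to inspect the current values (a minimal sketch):

mysql> show variables where Variable_name in
       ('innodb_undo_log_truncate', 'innodb_undo_tablespaces', 'innodb_max_undo_log_size');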

17. Resource groups

MySQL 8.0 adds resource groups, which are used to control thread priority and bind threads to CPU cores.
Creating, altering and dropping resource groups requires the RESOURCE_GROUP_ADMIN privilege.
On Linux, the mysqld process needs the CAP_SYS_NICE capability for the full resource group functionality:

[root@localhost~]# sudo setcap cap_sys_nice+ep /usr/local/mysql8.0/bin/mysqld
[root@localhost~]# getcap /usr/local/mysql8.0/bin/mysqld
/usr/local/mysql8.0/bin/mysqld = cap_sys_nice+ep

Two resource groups exist by default: USR_default and SYS_default.

Create a resource group:
create resource group test_resouce_group type=USER vcpu=0,1 thread_priority=5;
Assign the current thread to the resource group:
SET RESOURCE GROUP test_resouce_group;
Assign a specific thread to the resource group:
SET RESOURCE GROUP test_resouce_group FOR thread_id;
List the threads in the resource group:
select * from Performance_Schema.threads where RESOURCE_GROUP='test_resouce_group';
Alter the resource group:
alter resource group test_resouce_group vcpu = 2,3 THREAD_PRIORITY = 8;
Drop the resource group:
drop resource group test_resouce_group;

# Create a resource group
mysql>create resource group test_resouce_group type=USER vcpu=0,1 thread_priority=5;
Query OK, 0 rows affected (0.03 sec)

mysql> select * from RESOURCE_GROUPS;
+---------------------+---------------------+------------------------+----------+-----------------+
| RESOURCE_GROUP_NAME | RESOURCE_GROUP_TYPE | RESOURCE_GROUP_ENABLED | VCPU_IDS |  THREAD_PRIORITY |
+---------------------+---------------------+------------------------+----------+-----------------+
| USR_default         | USER                |                      1 | 0-3      |                0 |
| SYS_default         | SYSTEM              |                      1 | 0-3      |                0 |
| test_resouce_group  | USER                |                      1 | 0-1      |                5 |
+---------------------+---------------------+------------------------+----------+-----------------+
3 rows in set (0.00 sec)

# Move thread 60 into resource group test_resouce_group; thread ids can be found in Performance_Schema.threads
mysql> SET RESOURCE GROUP test_resouce_group FOR 60;
Query OK, 0 rows affected (0.00 sec)

# Dropping a resource group that still contains threads fails
mysql> drop resource group test_resouce_group;
ERROR 3656 (HY000): Resource group test_resouce_group is busy.

# Alter the resource group
mysql> alter resource group test_resouce_group vcpu = 2,3 THREAD_PRIORITY = 8;
Query OK, 0 rows affected (0.10 sec)
mysql> select * from RESOURCE_GROUPS;
+---------------------+---------------------+------------------------+----------+-----------------+
| RESOURCE_GROUP_NAME | RESOURCE_GROUP_TYPE | RESOURCE_GROUP_ENABLED | VCPU_IDS | THREAD_PRIORITY |
+---------------------+---------------------+------------------------+----------+-----------------+
| USR_default         | USER                |                      1 | 0-3      |               0 |
| SYS_default         | SYSTEM              |                      1 | 0-3      |               0 |
| test_resouce_group  | USER                |                      1 | 2-3      |               8 |
+---------------------+---------------------+------------------------+----------+-----------------+
3 rows in set (0.00 sec)

# Move the thread back to the default resource group USR_default
mysql> SET RESOURCE GROUP USR_default FOR 60;
Query OK, 0 rows affected (0.00 sec)

# Drop the resource group
mysql> drop resource group test_resouce_group;
Query OK, 0 rows affected (0.04 sec)

18. Roles

A role is a collection of privileges. Users are granted a role, and privilege changes are then made on the role itself, so there is no need to grant every user individually.

# Create a role
mysql> create role role_test;
Query OK, 0 rows affected (0.03 sec)
 
# Grant privileges to the role
mysql> grant select on db.* to 'role_test';
Query OK, 0 rows affected (0.10 sec)
 
# Create a user
mysql> create user 'read_user'@'%' identified by '123456';
Query OK, 0 rows affected (0.09 sec)
 
# Grant the role to the user
mysql> grant 'role_test' to 'read_user'@'%';
Query OK, 0 rows affected (0.02 sec)
 
# Add the INSERT privilege to role_test
mysql> grant insert on db.* to 'role_test';
Query OK, 0 rows affected (0.08 sec)
 
# Revoke the INSERT privilege from role_test
mysql> revoke insert on db.* from 'role_test';
Query OK, 0 rows affected (0.10 sec)
 
# View default role assignments
mysql> select * from mysql.default_roles;
+------+-----------+-------------------+-------------------+
| HOST | USER      | DEFAULT_ROLE_HOST | DEFAULT_ROLE_USER |
+------+-----------+-------------------+-------------------+
| %    | read_user | %                 | role_test         |
+------+-----------+-------------------+-------------------+
1 row in set (0.00 sec)
 
# View role-to-user relationships
mysql> select * from mysql.role_edges;
+-----------+-----------+---------+-----------+-------------------+
| FROM_HOST | FROM_USER | TO_HOST | TO_USER   | WITH_ADMIN_OPTION |
+-----------+-----------+---------+-----------+-------------------+
| %         | role_test | %       | read_user | N                 |
+-----------+-----------+---------+-----------+-------------------+
1 row in set (0.00 sec)
 
# Drop the role
mysql> drop role role_test;
Query OK, 0 rows affected (0.06 sec)

The difference between utf8_unicode_ci and utf8_general_ci in MySQL

For Chinese and English text there is no practical difference between utf8_unicode_ci and utf8_general_ci.
utf8_general_ci compares faster but is slightly less accurate.
utf8_unicode_ci is more accurate but compares slightly more slowly.

If your application handles German, French or Russian, be sure to use utf8_unicode_ci; otherwise utf8_general_ci is usually enough.
Notes:

ci means case insensitive: 'a' and 'A' compare as equal.
bin means binary: 'a' and 'A' are treated as different values.

For example, if you run:
SELECT * FROM table WHERE txt = 'a'
then with utf8_bin you will not find the row where txt = 'A', whereas with utf8_general_ci you will.

utf8_general_ci is case-insensitive, which is what you want for things like user names and email addresses.
utf8_general_cs is case-sensitive; using it for user names or email addresses can cause problems.
utf8_bin stores every character of the string as binary data; it is case-sensitive and can hold binary content.
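
A minimal sketch of the case-sensitivity difference (table and column names invented for illustration):

mysql> create table t_collation_demo (
           txt_ci  varchar(20) collate utf8_general_ci,
           txt_bin varchar(20) collate utf8_bin
       );
mysql> insert into t_collation_demo values ('A', 'A');
mysql> select count(*) from t_collation_demo where txt_ci = 'a';   -- 1: case-insensitive match
mysql> select count(*) from t_collation_demo where txt_bin = 'a';  -- 0: binary comparison, no match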

Read/write splitting and sharding with sharding-jdbc in Spring

A logic table maps onto a set of actual tables that are evenly distributed across the databases.

Whether you want read/write splitting, database sharding or table sharding, sharding-jdbc makes all of them simple to use.

For example, the t_order logic table can be split into two actual tables in each of two databases:

//Use the default sharding configuration
TableRuleConfiguration orderTableRuleConfig = new TableRuleConfiguration();
orderTableRuleConfig.setLogicTable("t_order");
orderTableRuleConfig.setActualDataNodes("db0.t_order_0, db0.t_order_1, db1.t_order_0, db1.t_order_1");

LogicTable and ActualTable

The point of sharding is to spread the data of one original table across different tables in different databases while still being able to query it without changing the original SQL.

This mapping is described with the LogicTable and ActualTable concepts. Suppose the database is accessed through a PreparedStatement with the following SQL:

select * from t_order where user_id = ? and order_id = ?;

When the conditions are user_id = 0 and order_id = 0, Sharding-JDBC rewrites this SQL into the following target SQL:

select * from db0.t_order_0 where user_id = ? and order_id = ?;

In the first statement, t_order is the LogicTable; in the second, db0.t_order_0 is the ActualTable.

Rule configuration

The behaviour above is achieved through rule configuration; this part describes the configuration in detail:

ShardingRuleConfiguration shardingRuleConfig = new ShardingRuleConfiguration();
shardingRuleConfig.getTableRuleConfigs().add(orderTableRule);
shardingRuleConfig.getTableRuleConfigs().add(orderItemTableRule);
shardingRuleConfig.setDefaultDatabaseShardingStrategyConfig(new ComplexShardingStrategyConfiguration("user_id", "xxx.ModuloDatabaseShardingAlgorithm"));
shardingRuleConfig.setDefaultTableShardingStrategyConfig(new ComplexShardingStrategyConfiguration("order_id", "xxx.ModuloTableShardingAlgorithm"));

Data source configuration

We need at least one mapping that associates a data source name with a DataSource object. If database sharding is used, two BasicDataSource objects are required:

private BasicDataSource dataSource1() {
    BasicDataSource dataSource = new BasicDataSource();
    dataSource.setDriverClassName("com.mysql.cj.jdbc.Driver");
    dataSource.setUrl("jdbc:mysql://127.0.0.1:3306/db0?serverTimezone=GMT&characterEncoding=utf8");
    dataSource.setUsername("root");
    dataSource.setPassword("root");

    dataSource.setInitialSize(0);
    dataSource.setMaxIdle(5);
    dataSource.setMinIdle(100);
    dataSource.setMaxOpenPreparedStatements(100);
    dataSource.setTestWhileIdle(true);
    dataSource.setValidationQuery("SELECT 1");
    dataSource.setTimeBetweenEvictionRunsMillis(3600000);
    dataSource.setMinEvictableIdleTimeMillis(18000000);
    dataSource.setTestOnBorrow(true);
    dataSource.setMaxWaitMillis(300000);

    return dataSource;
}

private BasicDataSource dataSource2() {
    BasicDataSource dataSource = new BasicDataSource();
    dataSource.setDriverClassName("com.mysql.cj.jdbc.Driver");
    dataSource.setUrl("jdbc:mysql://127.0.0.1:3306/db1?serverTimezone=GMT&characterEncoding=utf8");
    dataSource.setUsername("root");
    dataSource.setPassword("root");

    dataSource.setInitialSize(0);
    dataSource.setMaxIdle(5);
    dataSource.setMinIdle(100);
    dataSource.setMaxOpenPreparedStatements(100);
    dataSource.setTestWhileIdle(true);
    dataSource.setValidationQuery("SELECT 1");
    dataSource.setTimeBetweenEvictionRunsMillis(3600000);
    dataSource.setMinEvictableIdleTimeMillis(18000000);
    dataSource.setTestOnBorrow(true);
    dataSource.setMaxWaitMillis(300000);

    return dataSource;
}

 

The data sources are then collected into a map:

Map<String, DataSource> dataSourceMap = new HashMap<>();
dataSourceMap.put("ds_0", dataSource1());
dataSourceMap.put("ds_1", dataSource2());

Note: if you only split tables, a single data source is enough; read/write splitting or database sharding requires at least two data sources.

Strategy configuration

There are two kinds of strategies, one for the databases and one for the tables.

sharding-jdbc provides two strategy types for sharding:

  • DatabaseShardingStrategy
  • TableShardingStrategy

DatabaseShardingStrategy decides which data source (database) rows are routed to.

TableShardingStrategy decides which actual table rows are routed to.

The two strategies expose the same API, so a detailed look at one of them covers both.

Per-table strategies and the global default strategy

Strategies are closely tied to table rules, because a strategy applies to a specific table rule (here the rule for t_order).

TableRuleConfiguration orderTableRuleConfig = new TableRuleConfiguration();
orderTableRuleConfig.setLogicTable("t_order");
orderTableRuleConfig.setActualDataNodes("ds_0.t_order_0, ds_0.t_order_1, ds_1.t_order_0, ds_1.t_order_1");
orderTableRuleConfig.setDatabaseShardingStrategyConfig(new ComplexShardingStrategyConfiguration("user_id", "xxx.ModuloDatabaseShardingAlgorithm"));
orderTableRuleConfig.setTableShardingStrategyConfig(new ComplexShardingStrategyConfiguration("order_id", "xxx.ModuloTableShardingAlgorithm"));

The code above configures two strategies: the first distributes rows across databases by user_id, and the second distributes rows across tables by order_id. The actual routing logic lives in xxx.ModuloDatabaseShardingAlgorithm and xxx.ModuloTableShardingAlgorithm.

If all or most tables share the same sharding strategy, the default strategies can be used to simplify the configuration:

TableRuleConfiguration orderTableRuleConfig = new TableRuleConfiguration(); // rule for the first table
orderTableRuleConfig.setLogicTable("t_order");
orderTableRuleConfig.setActualDataNodes("ds_0.t_order_0, ds_0.t_order_1, ds_1.t_order_0, ds_1.t_order_1");

TableRuleConfiguration orderItemTableRuleConfig = new TableRuleConfiguration(); // rule for the second table
orderItemTableRuleConfig.setLogicTable("t_order_item");
orderItemTableRuleConfig.setActualDataNodes("ds_0.t_order_item_0,ds_0.t_order_item_1,ds_1.t_order_item_0,ds_1.t_order_item_1");

ShardingRuleConfiguration shardingRuleConfig = new ShardingRuleConfiguration();
shardingRuleConfig.getTableRuleConfigs().add(orderTableRuleConfig);
shardingRuleConfig.getTableRuleConfigs().add(orderItemTableRuleConfig);
shardingRuleConfig.setDefaultDatabaseShardingStrategyConfig(new ComplexShardingStrategyConfiguration("user_id", "xxx.ModuloDatabaseShardingAlgorithm"));
shardingRuleConfig.setDefaultTableShardingStrategyConfig(new ComplexShardingStrategyConfiguration("order_id", "xxx.ModuloTableShardingAlgorithm"));

This is equivalent to the previous snippet, except that two TableRuleConfiguration objects are created, one per table, and each is registered through shardingRuleConfig.getTableRuleConfigs().add(); the sharding strategies are then set once as the defaults.

Sharding columns

The first argument of a sharding strategy is the sharding column (user_id, order_id), which is the column expected in the WHERE clause of your SQL. If some of your statements may not contain these columns, it is best to handle that case explicitly in the xxx.ModuloDatabaseShardingAlgorithm / xxx.ModuloTableShardingAlgorithm logic; you can also configure multiple sharding columns.

Sharding algorithms (the following is taken from the official documentation)

Sharding-JDBC provides 5 kinds of sharding strategies. Because specific sharding algorithms are closely tied to specific business logic, Sharding-JDBC does not ship concrete sharding algorithms; instead, after a higher level of abstraction, it provides an API that lets developers implement the sharding algorithms they need.

  • StandardShardingStrategy

Supports =, IN and BETWEEN AND in SQL for sharding. StandardShardingStrategy supports a single sharding column only and provides two sharding algorithms: PreciseShardingAlgorithm and RangeShardingAlgorithm. PreciseShardingAlgorithm is required and handles the sharding of = and IN; RangeShardingAlgorithm is optional and handles BETWEEN AND. If RangeShardingAlgorithm is not configured, BETWEEN-AND queries are executed against all tables.

  • ComplexShardingStrategy

Supports =, IN and BETWEEN AND in SQL for sharding. ComplexShardingStrategy supports multiple sharding columns. Because the relationships among multiple sharding columns can be complex, Sharding-JDBC only provides an algorithm API and lets developers combine the sharding columns and implement the specific algorithm themselves.

  • InlineShardingStrategy

Provides sharding support for = and IN in SQL by means of Groovy inline expressions. InlineShardingStrategy supports a single sharding column only. Simple sharding algorithms can be configured directly, e.g. t_user_${user_id % 8} means the t_user table is split into 8 tables by user_id modulo 8, with child tables t_user_0 through t_user_7.

  • HintShardingStrategy

Supports sharding by means of the Hint API instead of parsing the SQL.

  • NoneShardingStrategy

Note: this strategy does not shard databases or tables at all.

Binding tables

A binding table group is a set of tables whose mapping from logic table to actual tables follows the same rules. For example, if the order table is sharded by order ID and the order_item table is also sharded by order ID, the two tables can be configured as binding tables of each other.

In that case, a join between them looks like this:

SELECT i.* FROM t_order o JOIN t_order_item i ON o.order_id=i.order_id WHERE o.user_id=? AND o.order_id=?

Because t_order is on the left-hand side of the FROM, Sharding-JDBC treats it as the driving table of the binding group. All routing calculations use only the driving table's configured strategy, so the routing of t_order_item reuses t_order's conditions. The key to this optimization is that the tables share the same sharding column.

Finally, here is my experimental code:

@Configuration
@EnableTransactionManagement
@MapperScan(value = "com.lanxinbase.repository.mapper")
public class MybatisConfig implements TransactionManagementConfigurer {

    @Resource
    private ShardingDataSource shardingDataSource;

    public MybatisConfig(){
    }


    @Bean(value = "sessionFactoryBean")
    public SqlSessionFactoryBean sessionFactoryBean() throws Exception {
        SqlSessionFactoryBean factoryBean = new SqlSessionFactoryBean();
        factoryBean.setDataSource(shardingDataSource);
        factoryBean.setConfiguration(this.getConfiguration());

        String locationPattern = ResourcePatternResolver.CLASSPATH_URL_PREFIX + "com/lanxinbase/repository/resource/**.xml";
        PathMatchingResourcePatternResolver resolver = new PathMatchingResourcePatternResolver();
        factoryBean.setMapperLocations(resolver.getResources(locationPattern));
        return factoryBean;
    }

    @Bean
    public SqlSessionTemplate sqlSessionTemplate(SqlSessionFactory sqlSessionFactory) throws Exception {
        return new SqlSessionTemplate(sqlSessionFactory);
    }


    @Override
    public PlatformTransactionManager annotationDrivenTransactionManager() {
        DataSourceTransactionManager manager = new DataSourceTransactionManager();
        manager.setDataSource(shardingDataSource);
        manager.setDefaultTimeout(40);
        manager.setRollbackOnCommitFailure(true);
        return manager;
    }

    public org.apache.ibatis.session.Configuration getConfiguration() {
        org.apache.ibatis.session.Configuration configuration = new org.apache.ibatis.session.Configuration();
        configuration.setLogImpl(StdOutImpl.class);
        configuration.setLocalCacheScope(LocalCacheScope.SESSION);
        configuration.setCacheEnabled(true);
        return configuration;
    }

    @Bean
    public ShardingDataSource shardingDataSource() throws SQLException {
        ShardingRuleConfiguration conf = new ShardingRuleConfiguration();

        /**
         * Add the table-sharding rule
         */
        conf.getTableRuleConfigs().add(tableRuleConfiguration());
        conf.getBindingTableGroups().add(Constant.SHARDING_TABLE_GPS);

        // Read/write splitting could be configured here via conf.getMasterSlaveRuleConfigs().add(...);
        // see http://shardingsphere.apache.org/document/legacy/2.x/en/02-guide/master-slave/

        // Database sharding strategy (not used in this experiment):
        //conf.setDefaultDatabaseShardingStrategyConfig(new StandardShardingStrategyConfiguration("age", ShardingPreciseShardingAlgorithm.class.getName()));

        /**
         * The class that implements the table-sharding logic
         */
        conf.setDefaultTableShardingStrategyConfig(new StandardShardingStrategyConfiguration("imei", ShardingPreciseShardingAlgorithm.class.getName()));
        ShardingDataSource dataSource = new ShardingDataSource(conf.build(this.getDataSourceMap()));
        return dataSource;
    }

    /**
     * Table rule configuration; only table sharding is done here.
     * For more tables, create additional tableRuleConfiguration() style methods
     * and register each one via getTableRuleConfigs().add(tableRuleConfiguration()).
     *
     * @return
     */
    @Bean
    public TableRuleConfiguration tableRuleConfiguration() {
        TableRuleConfiguration ruleConfiguration = new TableRuleConfiguration();

        /**
         * The logic table
         */
        ruleConfiguration.setLogicTable(Constant.SHARDING_TABLE_GPS);

        /**
         * The actual data nodes the logic table maps to
         */
        ruleConfiguration.setActualDataNodes("dataSource2.lx_dev_gps_${0..9}");

        /**
         * The key generator column (imei, the same column used for sharding)
         */
        ruleConfiguration.setKeyGeneratorColumnName("imei");

        return ruleConfiguration;
    }

    /**
     * Several data sources (e.g. dataSource1, dataSource2) can be configured here,
     * which is how read/write splitting would be separated.
     * @return
     */
    private Map<String, DataSource> getDataSourceMap() {
        Map<String, DataSource> result = new HashMap<>();
        result.put("dataSource2", this.dataSource2());
        return result;
    }

    /**
     * Create a BasicDataSource for sharding-jdbc
     * @return
     */
    private BasicDataSource dataSource2() {
        BasicDataSource dataSource = new BasicDataSource();
        dataSource.setDriverClassName("com.mysql.cj.jdbc.Driver");
        dataSource.setUrl("jdbc:mysql://127.0.0.1:3306/test?serverTimezone=GMT&characterEncoding=utf8");
        dataSource.setUsername("root");
        dataSource.setPassword("root");

        dataSource.setInitialSize(0);
        dataSource.setMaxIdle(5);
        dataSource.setMinIdle(100);
        dataSource.setMaxOpenPreparedStatements(100);
        dataSource.setTestWhileIdle(true);
        dataSource.setValidationQuery("SELECT 1");
        dataSource.setTimeBetweenEvictionRunsMillis(3600000);
        dataSource.setMinEvictableIdleTimeMillis(18000000);
        dataSource.setTestOnBorrow(true);
        dataSource.setMaxWaitMillis(300000);

        return dataSource;
    }
}
// The class below implements the actual table-routing logic
public class ShardingPreciseShardingAlgorithm implements PreciseShardingAlgorithm<Integer> {

    private static final Logger logger = Logger.getLogger(ShardingPreciseShardingAlgorithm.class.getName());

    @Override
    public String doSharding(Collection<String> availableTargetNames, PreciseShardingValue<Integer> shardingValue) {
        int flag = shardingValue.getValue() % 10;

        // Only intercept the specific sharded table
        if (shardingValue.getLogicTableName().contains(Constant.SHARDING_TABLE_GPS)){
            for (String tableName : availableTargetNames) {
                if (tableName.endsWith(flag + "")) {
                    logger.info(">>>>tableName:" + tableName);
                    if (flag == 0) {
                        return tableName.substring(0, tableName.lastIndexOf("_"));
                    }
                    return tableName;
                }
            }
            throw new IllegalArgumentException("No match to the table.");
        }

        return shardingValue.getLogicTableName();
    }
}

 

Note: I only implemented the table-sharding strategy, not database sharding; the tables store GPS data reported by hardware devices, 10 tables in total.

Further reading: http://shardingsphere.apache.org/document/legacy/2.x/en/02-guide/sharding/

 
