Hi,
My Zabbix MySQL database has become very, very large: ibdata1 is about 36 GB.
(I've had to extend the volume group several times to avoid taking Zabbix down.)
My problems:
-> The database uses the default my-huge.cnf (how do I optimize it? see the sketch after this list)
-> I have two large tables, history (~150 million records) and trends (~90 million records), where a "select count(*)" takes about 30 minutes (see the approximate-count sketch after this list)
-> I've tried reducing the history and trends retention on each item, but with little effect
-> I've tried OPTIMIZE TABLE (to rebuild the indexes and defragment the tables), but
it locks the table and Zabbix hangs :-) Do you know a workaround, something
like Oracle's way of deferring writes (ALTER TABLESPACE ... BEGIN BACKUP)?
-> Afterwards InnoDB showed 10 GB of free space: is it really impossible to shrink a MySQL "tablespace"?
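On the my.cnf question, here is a minimal sketch of the InnoDB settings I would start from, assuming a dedicated database host with roughly 4 GB of RAM (all values are illustrative and must be sized to your hardware):
Code:
[mysqld]
# Give InnoDB most of the RAM on a dedicated box (rule of thumb: 50-80%).
innodb_buffer_pool_size = 2G
# Bigger log files reduce checkpoint I/O on a write-heavy Zabbix workload.
innodb_log_file_size = 256M
innodb_log_buffer_size = 8M
# Flush the log once per second instead of per commit: a small durability
# trade-off for much better insert throughput.
innodb_flush_log_at_trx_commit = 2
# Put each table in its own .ibd file (see the question further down).
innodb_file_per_table = 1
Note that changing innodb_log_file_size needs a clean shutdown and removal of the old ib_logfile* files before restarting.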
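On the slow "select count(*)": with InnoDB an exact count is always a full table scan. If an estimate is enough, the table statistics answer instantly; a sketch (the schema name 'zabbix' is an assumption):
Code:
-- Approximate row count from InnoDB statistics (instant, not exact):
SHOW TABLE STATUS LIKE 'history';
-- Or for several tables at once:
SELECT table_name, table_rows
FROM information_schema.tables
WHERE table_schema = 'zabbix';  -- adjust if your schema has another name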
In general: what are the best practices for this case?
-> Is a per-table data file (innodb_file_per_table) a good solution? (see the sketch after this list)

-> Or is creating multiple fixed-size InnoDB tablespace files better?
-> I want to prepare the 1.6 migration, but without this big problem!
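On innodb_file_per_table: it only applies to tables created (or rebuilt) after the option is set, and the shared ibdata1 file never shrinks on its own. A sketch of reclaiming space once the option is on, assuming you can afford the rebuild window (it locks each table while copying):
Code:
-- With innodb_file_per_table = 1, a rebuild moves the table into its own .ibd:
ALTER TABLE history ENGINE=InnoDB;
ALTER TABLE trends  ENGINE=InnoDB;
-- Pages freed inside ibdata1 are reused by MySQL but never given back to
-- the OS; shrinking ibdata1 itself means dump, remove the file, reload.
Multiple fixed-size shared tablespace files only cap growth per file; they do not give space back either, so file-per-table is usually the more flexible option.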
Zabbix report:
Code:
Number of hosts     346
Number of items     5570
Number of triggers  2338
Number of events    265912
Number of alerts    27905
MySQL info (18 GB of data & 8 GB of index):
Code:
mysql> select sum(data_length), sum(index_length) from information_schema.tables;
+------------------+-------------------+
| sum(data_length) | sum(index_length) |
+------------------+-------------------+
|      18029174062 |        8558508032 |
+------------------+-------------------+
1 row in set (1.26 sec)
Biggest tables:
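A query along these lines produces the per-table breakdown (a sketch; the schema name 'zabbix' is an assumption):
Code:
SELECT table_name,
       ROUND(data_length /1024/1024) AS data_mb,
       ROUND(index_length/1024/1024) AS index_mb
FROM information_schema.tables
WHERE table_schema = 'zabbix'
ORDER BY data_length + index_length DESC
LIMIT 10;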



Regards Pierre.
Thanks: lots of good ideas.
I'm cleaning up my items' retention times ...
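Deleting the old rows in one statement would lock the table just like OPTIMIZE did, so a batched cleanup is safer; a sketch assuming a 90-day cutoff (history.clock holds a Unix timestamp in the Zabbix schema):
Code:
-- Delete old history in small batches so Zabbix keeps running;
-- rerun (e.g. from a shell loop or cron) until 0 rows are affected.
DELETE FROM history
WHERE clock < UNIX_TIMESTAMP(NOW() - INTERVAL 90 DAY)
LIMIT 10000;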

