To keep the database size down and speed up I/O, I would like to propose the following breadcrumbs:
1 - MySQL and PostgreSQL both support methods for on-the-fly compression/decompression of data tables. From what I have read about MySQL, implementing this change seems fairly simple.
This could be applied to the trend tables and potentially a host of other tables. Some time series are boolean, or contain data that compresses very well (text, blobs).
Excerpt from InnoDB doc:
Many workloads are I/O-bound. The idea of data compression is to pay a small cost in increased CPU utilization for the benefit of smaller databases and reduced I/O to improve throughput, potentially significantly.
The ability to compress user data is an important new capability of the InnoDB Plugin. Compressed tables reduce the size of the database on disk, resulting in fewer reads and writes needed to access the user data. For many InnoDB workloads and many typical user tables (especially with read-intensive applications where sufficient memory is available to keep frequently-used data in memory), compression not only significantly reduces the storage required for the database, but also improves throughput by reducing the I/O workload, at a modest cost in processing overhead. The storage cost savings can be important, but the reduction in I/O costs can be even more valuable.
...
To create a compressed table, you might use a statement like this:
CREATE TABLE name
(column1 INT PRIMARY KEY)
ENGINE=InnoDB
ROW_FORMAT=COMPRESSED
KEY_BLOCK_SIZE=4;
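For an existing installation, the same row format can be applied with ALTER TABLE. As a sketch (assuming a MySQL/InnoDB schema and a trend table named trends; the KEY_BLOCK_SIZE of 4 is illustrative, and the Barracuda file format setting applies to the InnoDB Plugin / MySQL 5.1-5.6 era the excerpt describes):

-- Compression requires the Barracuda file format and per-table tablespaces
SET GLOBAL innodb_file_format = Barracuda;
SET GLOBAL innodb_file_per_table = ON;

-- Convert an existing trend table to the compressed row format
ALTER TABLE trends
  ROW_FORMAT = COMPRESSED
  KEY_BLOCK_SIZE = 4;

-- Compare on-disk size before and after
SELECT table_name, data_length, index_length
  FROM information_schema.tables
 WHERE table_name = 'trends';

Note the ALTER rebuilds the table, which can take a long time and lock writes on a large trend table, so it would likely need to be done during a maintenance window.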
fmrapid
+1 SNMP
+1 Scalability
