Data Migration from Non-Partitioned to Partitioned MySQL

  • Smoke
    Junior Member
    • Jul 2014
    • 29

    #1

    Data Migration from Non-Partitioned to Partitioned MySQL

    I currently have a huge environment running on a MySQL server that does not have any table partitioning set up whatsoever.

    That server is running slowly, as we started with a virtual machine, and we are now migrating this old DB to a more robust/powerful physical server with more memory, more CPU and a newer version of MySQL as well.

    I am going to use table partitioning this time for the history and trends tables, as is commonly advised, and I would like to get some feedback on how to do the data load.

    I'm taking a full backup of the database using the mydumper tool, which is a multi-threaded MySQL dumper, and doing the import with myloader, its multi-threaded counterpart, which uses LOAD DATA INFILE.
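
    For reference, this is roughly the kind of statement such a bulk load boils down to (a minimal sketch only; the file path and column list are assumptions, and the exact statements the tools generate may differ):

      -- Sketch of a bulk load into the history table; path and columns are assumptions.
      LOAD DATA INFILE '/backups/zabbix/history.txt'
        INTO TABLE history
        FIELDS TERMINATED BY '\t'
        LINES TERMINATED BY '\n'
        (itemid, clock, value, ns);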

    My question is whether it is preferable to set up the database partitions PRIOR to doing the data load (I would have to alter the table creation in the schema, of course), or to just load all the data into a single unpartitioned table and then do the partitioning manually after the import is complete.

    The reason I am asking is that I'm still unsure whether doing the import into already-partitioned tables is possible, and whether it will end up with valid data stored in each of the different partitions. Of course, I'm aware that I will need to create enough partitions to hold all the data, based on the min/max values of the clock field.
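
    Working out those boundaries should just be a matter of something along these lines (a minimal sketch, assuming the standard Zabbix history table; trends would be checked the same way):

      -- Find the time range the partitions must cover.
      SELECT FROM_UNIXTIME(MIN(clock)) AS oldest,
             FROM_UNIXTIME(MAX(clock)) AS newest
      FROM history;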

    In terms of benefits, I'm afraid that if I do the table partitioning afterwards it will take a huge amount of time to move the data into the corresponding partitions, and I will end up with a fragmented table whose rows have been moved all over the place.

    Any feedback on this matter would be greatly appreciated.
  • kloczek
    Senior Member
    • Jun 2006
    • 1771

    #2
    The best way to migrate to partitioned tables is to add a slave and do all the partitioning on the slave. Once the tables are partitioned, all that must be done is:
    - stop the zabbix server and web frontend,
    - make sure the master and slave are at the same binary log position (see the sketch below),
    - stop the master and start using the slave as the new master.
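
    Checking the positions is just a matter of running something like this on each server (a sketch; the idea is to compare the slave's executed coordinates against the master's current ones and make sure the slave has caught up):

      -- On the master: current binary log file and position.
      SHOW MASTER STATUS;

      -- On the slave: compare Master_Log_File / Exec_Master_Log_Pos with the master's
      -- output and check that Seconds_Behind_Master is 0 before switching over.
      SHOW SLAVE STATUS\G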

    Having a slave is useful not only for this kind of operation. You can use it to take off-site backups, to optimize tables on the slave from time to time and then promote it as the new master, or to test possible upgrades or any other changes which require the full production database content.
    Additionally, a slave gives you resilience, so IMO every serious zabbix installation should have at least one slave.
    http://uk.linkedin.com/pub/tomasz-k%...zko/6/940/430/
    https://kloczek.wordpress.com/
    zapish - Zabbix API SHell binding https://github.com/kloczek/zapish
    My zabbix templates https://github.com/kloczek/zabbix-templates

    • Smoke
      Junior Member
      • Jul 2014
      • 29

      #3
      I may have forgotten to mention that we already have slave replication in the current slow VM setup, and we will be using replication on the new servers as well.

      • kloczek
        Senior Member
        • Jun 2006
        • 1771

        #4
        Reading about what is new in MySQL 5.6 (http://dev.mysql.com/doc/refman/5.6/...-nutshell.html) I found:

        "InnoDB enhancements. These InnoDB enhancements were added:
        [..]
        Several ALTER TABLE operations can be performed without copying the table, without blocking inserts, updates, and deletes to the table, or both. These enhancements are known collectively as online DDL. See Section 14.10, “InnoDB and Online DDL” for details."

        Seems like with 5.6 it may be possible to partition the table straight on the master DB.
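
        The statement itself would look roughly like this (a sketch with made-up partition boundaries; worth trying on a test copy first, since a PARTITION BY alter may still end up rebuilding the whole table even on 5.6):

          -- Sketch: repartition the existing history table by month on the clock column.
          ALTER TABLE history
            PARTITION BY RANGE (clock) (
              PARTITION p2014_07 VALUES LESS THAN (UNIX_TIMESTAMP('2014-08-01')),
              PARTITION p2014_08 VALUES LESS THAN (UNIX_TIMESTAMP('2014-09-01')),
              PARTITION p_max    VALUES LESS THAN (MAXVALUE)
            );
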
        http://uk.linkedin.com/pub/tomasz-k%...zko/6/940/430/
        https://kloczek.wordpress.com/
        zapish - Zabbix API SHell binding https://github.com/kloczek/zapish
        My zabbix templates https://github.com/kloczek/zabbix-templates

        • Smoke
          Junior Member
          • Jul 2014
          • 29

          #5
          Looks like you are indeed right.

          I ended up modifying the schema after the export but before the import.
          So what I did was take the export as-is (without partitions), then change the schema.sql to create the tables with partitions, and load the data.

          This way I don't have to wait a few hours for the current DB to get partitioned for nothing, and when the data is loaded it goes straight into its partitions.
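
          For anyone following the same route, the change to schema.sql basically amounts to appending a partition clause to the table definition, roughly like this (a sketch only; the columns are modelled on the Zabbix history table and the partition names and boundaries are just examples, so check against your own schema.sql):

            -- Sketch of a partitioned history definition for schema.sql:
            -- one partition per month, plus a final catch-all.
            CREATE TABLE history (
              itemid  BIGINT UNSIGNED NOT NULL,
              clock   INT             NOT NULL DEFAULT 0,
              value   DOUBLE(16,4)    NOT NULL DEFAULT 0.0000,
              ns      INT             NOT NULL DEFAULT 0
            ) ENGINE=InnoDB
              PARTITION BY RANGE (clock) (
                PARTITION p2014_06 VALUES LESS THAN (UNIX_TIMESTAMP('2014-07-01')),
                PARTITION p2014_07 VALUES LESS THAN (UNIX_TIMESTAMP('2014-08-01')),
                PARTITION p_max    VALUES LESS THAN (MAXVALUE)
              );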

          Thanks for the help.
