1.4.1 upgrade & convert to node=1 with large database

  • dmz
    Junior Member
    • Jun 2005
    • 26

    #1

    1.4.1 upgrade & convert to node=1 with large database

    Hello,
    I have a fairly old system (already upgraded to 1.4.1) that has lots of events in the database (1.7 GB). Running zabbix_server -n 1 doesn't seem to work for me.

    Looking at the queries, update %s set %s=%s+" ZBX_FS_UI64 " where %s>0, for example, does an update that is killing my system, with 10,000,000+ rows in trends and history (each).
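    (With the new node id 1, that template presumably expands to one statement per ID column per table, e.g. update history set itemid=itemid+100000000000000 where itemid>0, and each statement touches every row at once.)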

    I let it run for 36 hours once; it went through some of the tables but never actually completed the update (I saw the UPDATE process running, but nothing was actually being updated).

    Has there been any thought to using a temp table or something similar?

    Are the various operations that the node upgrade performs documented somewhere, so that I could follow along and run them manually?

    While I'm asking, any reason the max node is set to 1000? Any chance of adding another 0?

    Thanks!
    David
    Last edited by dmz; 25-07-2007, 21:27.
  • dmz
    Junior Member
    • Jun 2005
    • 26

    #2
    Also, while I'm asking: I was looking at the Latest data screen and noticed that my older system doesn't have the items categorized by applications (new-system example below):
    Availability (51 Items)
    CPU (7 Items)
    Filesystem (44 Items)
    General (7 Items)
    Integrity (5 Items)
    Log files (2 Items)
    Memory (5 Items)
    Network (6 Items)
    OS (8 Items)
    Performance (13 Items)
    Processes (10 Items)
    Services (2 Items)

    Did the upgrade not do that? What do I need to do to get it there? This is also preventing me from setting up web monitors, I think.


    • dmz
      Junior Member
      • Jun 2005
      • 26

      #3
      Some more digging:
      if (tables[i].fields[j].type == ZBX_TYPE_ID)
      {
              DBexecute("update %s set %s=%s+" ZBX_FS_UI64 " where %s>0\n",
                              tables[i].table,
                              tables[i].fields[j].name,
                              tables[i].fields[j].name,
                              (zbx_uint64_t)__UINT64_C(100000000000000) * (zbx_uint64_t)new_id,
                              tables[i].fields[j].name);
      }

      There are five arguments but only four %s placeholders. Is it meant to be that way?

      Also, I just started reading the code: is tables[] a global variable? Is it just all of the tables in the database?

      And ZBX_TYPE_ID, I see, is defined as 6, but now I just need to figure out which fields are classified as type 6.
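
      (On the five-vs-four question above: if ZBX_FS_UI64 is a printf-style format macro for 64-bit values, e.g. "%llu" — I assume the real definition varies by platform — then adjacent string literals concatenate at compile time and the macro carries a fifth conversion, which would line the five arguments up. A standalone sketch of that:)

      #include <stdio.h>

      /* Assumption: ZBX_FS_UI64 is Zabbix's printf format macro for unsigned
       * 64-bit integers; "%llu" here, though the real definition may differ. */
      #define ZBX_FS_UI64 "%llu"

      int main(void)
      {
              /* The format string holds five conversions: four %s plus the
               * one hidden inside ZBX_FS_UI64, matching the five arguments. */
              printf("update %s set %s=%s+" ZBX_FS_UI64 " where %s>0\n",
                              "history",                              /* tables[i].table */
                              "itemid",                               /* field being offset */
                              "itemid",
                              (unsigned long long)100000000000000ULL, /* offset * new_id, with new_id == 1 */
                              "itemid");                              /* field in the WHERE clause */
              return 0;
      }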
      Last edited by dmz; 25-07-2007, 21:57.


      • dmz
        Junior Member
        • Jun 2005
        • 26

        #4
        I just changed all the DBexecute calls in nodechange.c to printf and will be trying to run each line manually: 1110 SQL statements...
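
        For the call quoted in #3, the change amounts to something like this (a fragment reusing that loop's variables; I also swapped the trailing \n for a ; so the output can be piped straight into mysql):

        /* was: DBexecute("update %s set %s=%s+" ZBX_FS_UI64 " where %s>0\n", ...); */
        printf("update %s set %s=%s+" ZBX_FS_UI64 " where %s>0;\n",
                        tables[i].table,
                        tables[i].fields[j].name,
                        tables[i].fields[j].name,
                        (zbx_uint64_t)__UINT64_C(100000000000000) * (zbx_uint64_t)new_id,
                        tables[i].fields[j].name);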


        • dmz
          Junior Member
          • Jun 2005
          • 26

          #5
          Ahhhh, I think I know the issue.

          Reading the MySQL UPDATE manual, it appears that UPDATE finds all matching records _first_, then updates them. With this many records, that single unbounded update is what's killing it.

          I'm going to try multiple updates, each limited to 1,000,000 rows (I have another table with 1MM rows and it updated fine), and then run through it 10x.

          I'm changing the history & trends updates to be:
          update history set itemid=itemid+100000000000000 where itemid>0 and itemid<100000000000000 limit 1000000;

          Update:

          1,000,000 was too much: 10,000 took 32 seconds and 100,000 took 5 minutes. I may settle on 10,000 and loop it, since that works out to a little over 300 records/second. I know I could do a larger batch, but I'd rather this succeed than get it all at once.
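
          The loop I have in mind looks roughly like this, sketched against the MySQL C API (connection setup and most error handling omitted; batch size and table from the timings above):

          #include <stdio.h>
          #include <mysql.h>

          /* Apply the node offset to history in 10,000-row batches, repeating
           * until no unconverted rows remain below the offset. */
          static void offset_history(MYSQL *conn)
          {
                  const char *sql =
                          "update history set itemid=itemid+100000000000000 "
                          "where itemid>0 and itemid<100000000000000 limit 10000";

                  for (;;)
                  {
                          if (0 != mysql_query(conn, sql))
                          {
                                  fprintf(stderr, "update failed: %s\n", mysql_error(conn));
                                  break;
                          }
                          /* affected rows drops to 0 once every itemid carries the offset */
                          if (0 == mysql_affected_rows(conn))
                                  break;
                  }
          }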

          Update:

          OK, this is weird. The history update takes the time mentioned above, but I ran a similar update on trends for 1,000,000 records and it only took 3 minutes to complete. No idea why history takes so long to update... anyone??
          Last edited by dmz; 25-07-2007, 23:23.
