Migrating from Zabbix 6.2 (MySQL) to Zabbix 7.4 (PostgreSQL + TimescaleDB) Issues

  • WirelessGuru
    Junior Member
    • Feb 2022
    • 12

    #1

    Migrating from Zabbix 6.2 (MySQL) to Zabbix 7.4 (PostgreSQL + TimescaleDB) Issues

    Hi Guys,

    I'm running into a whole host of import issues migrating from Zabbix 6.2 (MySQL) to Zabbix 7.4 (PostgreSQL + TimescaleDB). Is there any guidance on export/import procedures for this upgrade? I'm attempting to use API scripting to make it happen.

    The export files I produced were:

    templates.json
    hosts.json
    maps.json
    images.json
    template_groups.json
    host_groups.json
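The exports were driven through the JSON-RPC API. A minimal sketch of the request body for configuration.export (the template IDs below are placeholders; on 7.x the token belongs in an Authorization: Bearer header rather than the legacy auth body field):

```python
import json

def export_payload(options, request_id=1):
    """Build the JSON-RPC body for Zabbix configuration.export.

    options selects what to export, e.g. {"templates": ["10001"]};
    the IDs used here are placeholders, not real template IDs.
    """
    return {
        "jsonrpc": "2.0",
        "method": "configuration.export",
        "params": {"format": "json", "options": options},
        "id": request_id,
    }

body = export_payload({"templates": ["10001", "10002"]})
print(json.dumps(body))
# POST this to http(s)://<server>/api_jsonrpc.php with
# Content-Type: application/json and an Authorization: Bearer <token> header
```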

    I changed the Apache/PHP settings to allow for the larger input:

    memory_limit=4096M
    post_max_size=1024M
    upload_max_filesize=1024M
    max_execution_time=1800
    max_input_time=1800
    max_input_vars=300000

    Here is the summary of the issues faced:

    1. Image import/compare issues

    images_all_from_62.json was very large. Images were eventually handled by skipping already-existing target image names. This was acceptable because the target already had 683 matching image names.
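The skip logic itself is trivial once the target's existing image names are known (they would come from image.get on the 7.4 side; here the name set is stubbed in):

```python
def images_to_import(source_images, existing_names):
    """Drop source images whose name already exists on the target.

    existing_names would come from image.get on the 7.4 server;
    here it is just a set of name strings.
    """
    skip = set(existing_names)
    return [img for img in source_images if img["name"] not in skip]

src = [{"name": "router"}, {"name": "switch"}]
print(images_to_import(src, {"router"}))  # only the "switch" entry remains
```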

    2. Map import issue

    Some map elements, especially static/icon-style elements, had missing or empty element references that the 7.4 importer rejected. We normalized the map selement structure rather than treating those as broken functional host links.

    3. LLD item prototype validation issue

    Zabbix 7.4 rejected item prototypes whose keys did not include an LLD macro. The problem examples were:

    net.if.speed[nic1]
    mimoPowerLevelHorizontal.0
    mimoPowerLevelVertical.0

    We determined these were not simply disposable. For example, net.if.speed[nic1] belonged to a discovery rule filtered to {#IFNAME}=nic1, so the better remediation was:

    net.if.speed[{#IFNAME}]

    For Cambium PMP 450 Mimo items, we remediated to keys/OIDs using {#CUMBSITENAME}, {#CUMBMAC}, and {#SNMPINDEX}.
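The rewrite can be mechanized. A simplified sketch of the key remediation (choosing the right macro per discovery rule stays manual, and the real fix also adjusted the SNMP OIDs, which this does not touch):

```python
import re

def remediate_key(key, macro):
    """Replace a literal parameter or literal OID index in an item-prototype
    key with an LLD macro, so 7.4 prototype validation accepts it.
    """
    m = re.match(r"^([^\[]+)\[.*\]$", key)  # bracketed key: swap the parameter
    if m:
        return f"{m.group(1)}[{macro}]"
    m = re.match(r"^(.*)\.\d+$", key)       # OID-style literal index, e.g. ".0"
    if m:
        return f"{m.group(1)}[{macro}]"
    return f"{key}[{macro}]"                # bare key: append the macro

print(remediate_key("net.if.speed[nic1]", "{#IFNAME}"))
# net.if.speed[{#IFNAME}]
print(remediate_key("mimoPowerLevelHorizontal.0", "{#SNMPINDEX}"))
# mimoPowerLevelHorizontal[{#SNMPINDEX}]
```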

    4. LLD graph prototype dangling references

    After initially removing the bad item prototypes, graph prototypes failed because they referenced the removed item keys. That showed we should not remove items in isolation without also handling the dependent graph and trigger prototypes.

    5. Morningstar graph prototype issue

    Three Morningstar graph prototypes referenced:

    load.voltage[loadVoltage.0]

    but that item prototype did not exist in the discovery rule. We treated this as a dangling graph item reference, not a reason to invent a new item.
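Dangling graph-item references like those in points 4 and 5 can be flagged before import with a small scan over each exported discovery rule (field names follow the JSON export shape; this is a sketch of the idea, not the exact preflight we ran):

```python
def dangling_graph_items(discovery_rule):
    """Return (graph_name, item_key) pairs where a graph prototype references
    an item key that no item prototype in the same rule defines.

    Assumed JSON-export shape: rule["item_prototypes"][i]["key"] and
    rule["graph_prototypes"][i]["graph_items"][j]["item"]["key"].
    """
    keys = {p["key"] for p in discovery_rule.get("item_prototypes", [])}
    bad = []
    for g in discovery_rule.get("graph_prototypes", []):
        for gi in g.get("graph_items", []):
            k = gi["item"]["key"]
            if k not in keys:
                bad.append((g["name"], k))
    return bad

rule = {
    "item_prototypes": [{"key": "load.current[loadCurrent.0]"}],
    "graph_prototypes": [{
        "name": "Load",
        "graph_items": [{"item": {"key": "load.voltage[loadVoltage.0]"}}],
    }],
}
print(dangling_graph_items(rule))  # [('Load', 'load.voltage[loadVoltage.0]')]
```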

    6. Dashboard graph reference issue caused by batching

    The dashboard error was:

    Cannot find graph "Wireless MAC Speed Total" used in dashboard "C5C MAC"

    This turned out not to be a missing graph export. The graph existed as a top-level zabbix_export.graphs[] object, while the dashboard was nested under the template. Batching templates without carrying related top-level graphs split the relationship.

    Conclusion: full template import is better than template batching when possible, because templates.json contains:

    templates[]
    graphs[]
    triggers[]

    as related top-level sections.
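If batching is unavoidable, each batch has to carry the related top-level sections along. A simplified closure sketch (matching by template name in graph items and trigger expressions; real dependency closure is more involved than a substring check):

```python
def batch_with_related(export, template_names):
    """Given a parsed templates.json (export = data["zabbix_export"]), keep
    the selected templates plus any top-level graphs/triggers that reference
    them, so dashboards nested under those templates still resolve.

    Matching is by template name in graph_items[..]["item"]["host"] and by
    substring in trigger expressions; a simplification of real closure.
    """
    names = set(template_names)
    return {
        "templates": [t for t in export.get("templates", [])
                      if t["template"] in names],
        "graphs": [g for g in export.get("graphs", [])
                   if any(gi["item"]["host"] in names
                          for gi in g.get("graph_items", []))],
        "triggers": [tr for tr in export.get("triggers", [])
                     if any(n in tr.get("expression", "") for n in names)],
    }
```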

    7. Diagnostic batching found trigger dependency failure

    The diagnostic v3 batching exposed the specific issue:

    Trigger "Unavailable by ICMP ping" depends on trigger "Unavailable by ICMP ping", which does not exist.

    The export contains many trigger dependencies by name/expression. Examples show ICMP triggers depending on other triggers with similar or identical names, often across host/template references.

    8. Preflight result

    Our local preflight found:

    blockers: 0
    warnings: 6274
    trigger_dependency warnings: 6166
    trigger_prototype warnings: 108

    So the package is internally clean with respect to the obvious missing graph/item/LLD problems, but trigger dependency resolution remains the main risk.
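The trigger-dependency warnings come from a check along these lines: dependencies in the export carry a name and an expression, so a dependency whose (name, expression) pair is not defined by any trigger in the same package is suspect, even when a trigger with the same bare name exists. A sketch of the idea, over top-level triggers only:

```python
def dependency_warnings(export):
    """Flag trigger dependencies whose (name, expression) pair is not defined
    by any trigger in the same export.

    Assumed JSON-export shape: each trigger may carry
    "dependencies": [{"name": ..., "expression": ...}, ...].
    """
    defined = {(t["name"], t["expression"]) for t in export.get("triggers", [])}
    warnings = []
    for t in export.get("triggers", []):
        for dep in t.get("dependencies", []):
            if (dep["name"], dep["expression"]) not in defined:
                warnings.append((t["name"], dep["name"]))
    return warnings
```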


    Please advise,

    Dan
  • WirelessGuru
    Junior Member
    • Feb 2022
    • 12

    #2
    Here is the AI summary if it helps:

    We are testing a migration from Zabbix 6.2.1 on MySQL to a new Zabbix 7.4.9 server intended to use PostgreSQL/TimescaleDB.

    Environment:
    - Source: Zabbix 6.2.1, MySQL
    - Target: Zabbix 7.4.9, fresh install
    - Target hardware: 20 vCPU, 48 GB RAM, 250 GB SSD
    - Target frontend: Apache prefork + mod_php PHP 8.3
    - Approximate exported objects:
      - 3,866 templates
      - 32,071 hosts
      - 999 top-level graphs in templates.json
      - 10,665 top-level triggers in templates.json
      - 541 maps
      - 683 images

    Goal:
    We want to recreate the full environment on the new 7.4 server for testing, keep all hosts disabled, validate the result, and only cut over later.

    Approach tried:
    - Used Zabbix API configuration.export from 6.2.
    - Imported to 7.4 using configuration.import / configuration.importcompare.
    - Hosts were transformed to DISABLED before import.
    - Full template importcompare initially failed because Apache/PHP defaults were too low.
    - We temporarily raised frontend PHP limits:
    memory_limit=4096M
    post_max_size=1024M
    upload_max_filesize=1024M
    max_execution_time=1800
    max_input_time=1800
    max_input_vars=300000
    - After that, full templates importcompare passed.
    - Full templates import still failed with:
    "No permissions to referred object or it does not exist!"
    - Diagnostic batching later exposed:
    Trigger "Unavailable by ICMP ping" depends on trigger "Unavailable by ICMP ping", which does not exist.

    Issues found and remediated during testing:
    1. LLD item prototype keys without LLD macros:
    - net.if.speed[nic1]
    - mimoPowerLevelHorizontal.0
    - mimoPowerLevelVertical.0

    These were remediated to macro-based keys, not deleted.

    2. Dashboard graph reference:
    - Dashboard "C5C MAC" referenced graph "Wireless MAC Speed Total".
    - The graph existed in top-level zabbix_export.graphs[], while the dashboard was nested under the template.
    - This means batching templates without related top-level graphs/triggers can split valid export relationships.

    3. Trigger dependencies:
    - Many triggers depend on other triggers with same/similar names, especially ICMP-derived templates.
    - importcompare can pass but import fails on a dependency resolution issue.

    Questions:
    1. Is API configuration.export/configuration.import a recommended/supported approach for full environment migration from 6.2 to 7.4 at this scale, or should we instead perform a database-backed staged upgrade?

    2. For a source system on MySQL and a desired target on PostgreSQL/TimescaleDB, what is the Zabbix-recommended migration path if we need to preserve templates, hosts, trigger dependencies, maps, dashboards, graphs, actions, media types, and value maps?

    3. Does configuration.importcompare guarantee that configuration.import will succeed? If not, what classes of references are only validated during import?

    4. For top-level graphs/triggers in an exported templates.json:
    - Is it expected that graphs[] and triggers[] are top-level while dashboards are nested under templates?
    - If importing in batches, is there an official method to compute the dependency closure so related graphs/triggers/dashboard references stay together?

    5. For trigger dependency import failures such as:
    Trigger "Unavailable by ICMP ping" depends on trigger "Unavailable by ICMP ping", which does not exist.
    What is the recommended remediation?
    - Import triggers without dependencies first, then add dependencies later?
    - Preserve dependencies by importing all templates/triggers in one payload?
    - Remove dependency blocks before import?
    - Use another API endpoint after import to recreate dependencies?

    6. How does Zabbix resolve trigger dependencies during configuration.import?
    - By UUID?
    - By name + expression?
    - By template/host context?
    - Are duplicate trigger names across many templates/hosts expected to cause ambiguity?

    7. We have many generated ICMP templates where triggers share names like:
    - Unavailable by ICMP ping
    - High ICMP ping loss
    - High ICMP ping response time
    but expressions are host/template-specific.
    Is this pattern expected to import reliably across major versions?

    8. Is there an official way to increase logging/debug detail for configuration.import so the exact unresolved referred object is shown instead of only:
    "No permissions to referred object or it does not exist!"

    9. If the target server has already had partial imports from failed attempts, should we reset the target database before continuing, or should updateExisting=true safely handle retries?

    10. Given the size of the environment, should we continue API import testing, or is the correct engineering answer to clone/upgrade the 6.2 database through supported upgrade paths and then address MySQL-to-PostgreSQL migration separately?


    • cyber
      Senior Member
      Zabbix Certified Specialist, Zabbix Certified Professional
      • Dec 2006
      • 4908

      #3
      I am quite sure you cannot do an upgrade like this, exporting from 6.x and importing into 7.x. I think it would be better to create a side-by-side environment (with the different DB flavor) on the same version, export-import things there, and only then do the version upgrade.
      Or even simpler: as the tables and contents are the same in MySQL and PG, and a "version upgrade" is just DB modifications, create that 7.x env but give it the 6.x DB schema, do not start your server, and copy the DB contents over: whatever the MySQL equivalent of "copy data to CSV" is, piped to "psql postgresql://your.psql.server/zabbix -c "COPY ${table} FROM STDIN CSV;"" (this might take some experimenting and testing, as the tables have to come in the correct order...).
      At that moment you have the new server but the old DB version. Now when you start your new server, in an ideal world, it should do the version upgrade to 7.x (all DB modifications), and you have your new PG+TS env on the new version.
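A dry-run sketch of the per-table copy described above (table names, ordering, and connection details are placeholders; the real list must follow foreign-key order from the Zabbix 6.x schema, and NULL representation between mysql batch output and COPY's text format needs testing, as noted):

```shell
#!/bin/sh
# Dry-run sketch: stream each table from MySQL into PostgreSQL via COPY.
# DRY_RUN=1 only prints the pipelines instead of executing them.
DRY_RUN=1
TABLES="users hosts items"   # placeholder ordering, NOT the real schema order
for t in $TABLES; do
  CMD="mysql --batch --raw -N -e 'SELECT * FROM ${t}' zabbix | psql postgresql://your.psql.server/zabbix -c 'COPY ${t} FROM STDIN;'"
  if [ "$DRY_RUN" = 1 ]; then
    echo "$CMD"              # print the pipeline for inspection
  else
    eval "$CMD"
  fi
done
```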
