So, my boss wants to see graphs of I/O rates on our database servers. No sweat, I think, I'll just whip together a few vfs.dev.{read,write} items and build the graphs. All the different databases use space on filesystems, so I figure this will be easy... Not so! What I soon realized is that even the few machines that share the same mount point (/data01) have different device names underneath it (/dev/cciss/c2d0p1, /dev/sdd1, /dev/mapper/blah-blah for multipath devices, and so on). Most of the other machines use what are, for all intents and purposes, random mount points for their data.
The only thing I can think of to get around this is to have a program run on the client machine to generate an XML file with a template to create the proper items, triggers, and graphs.
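Here's roughly what I had in mind, as a rough Python sketch: it pulls the device names out of /proc/partitions and writes per-device vfs.dev.read / vfs.dev.write items into an XML file. The element names (zabbix_export, templates, item, key, etc.) and the template/file names are just placeholders I made up -- the real import format depends on the Zabbix version, so it would need to be checked against a template exported from an actual server.

```python
#!/usr/bin/env python
"""Rough sketch: list local block devices and write a Zabbix-style XML
template with vfs.dev.read / vfs.dev.write items for each one.

The element names below (zabbix_export, templates, item, key, ...) are
placeholders -- the real import format depends on the Zabbix version, so
compare against a template exported from a real server before trusting it.
"""
import xml.etree.ElementTree as ET


def list_block_devices():
    """Pull device names out of /proc/partitions, skipping the header line."""
    devices = []
    with open("/proc/partitions") as f:
        for line in f:
            fields = line.split()
            # Data lines look like: major minor #blocks name
            if len(fields) == 4 and fields[0].isdigit():
                devices.append(fields[3])
    return devices


def build_template(devices, template_name="Template_DB_Disk_IO"):
    """Build an XML tree with one read item and one write item per device."""
    root = ET.Element("zabbix_export")
    template = ET.SubElement(ET.SubElement(root, "templates"), "template")
    ET.SubElement(template, "name").text = template_name
    items = ET.SubElement(template, "items")
    for dev in devices:
        for mode in ("read", "write"):
            item = ET.SubElement(items, "item")
            ET.SubElement(item, "key").text = "vfs.dev.%s[%s]" % (mode, dev)
            ET.SubElement(item, "description").text = "Disk %s rate on %s" % (mode, dev)
    return ET.ElementTree(root)


if __name__ == "__main__":
    tree = build_template(list_block_devices())
    tree.write("disk_io_template.xml", encoding="UTF-8", xml_declaration=True)
    print("wrote disk_io_template.xml")
```

In practice it would probably also want to read /proc/mounts so it only emits items for the devices that actually back the database filesystems, rather than every partition on the box, but that's the general idea.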
Does anyone have any better ideas?
