I'm using Python as part of a custom low-level discovery (LLD) script that queries a REST API endpoint. When the polling is on, CPU utilization goes through the roof, and all of it is caused by setroubleshootd, as shown in top:
Code:
top - 13:51:56 up 15:33,  1 user,  load average: 1.52, 1.43, 1.37
Tasks: 127 total,   3 running, 124 sleeping,   0 stopped,   0 zombie
%Cpu(s): 35.8 us,  6.7 sy,  0.0 ni, 57.3 id,  0.1 wa,  0.0 hi,  0.2 si,  0.0 st
KiB Mem :  8010508 total,  6211020 free,   397104 used,  1402384 buff/cache
KiB Swap:  1679356 total,  1679356 free,        0 used.  6852016 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 7986 setroub+  20   0  424072 130856  11548 R  77.4  1.6   7:12.16 setroubleshootd
The script Zabbix runs is a bash file that calls my Python script, and the call looks like this:
Code:
#!/usr/bin/env bash
/usr/bin/python /etc/zabbix/externalscripts/discovery.py $1 $2 $3 $4 $5
When Zabbix calls the script, it passes the unique filters, like a server ID or network card ID, as arguments. The Python script opens an HTTPS session using requests, using a bearer token if the token file exists; if the token file doesn't exist, the script creates it.
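In outline, the Python side does roughly this (the endpoint, token path, and auth call below are placeholders, not my real ones):

Code:
#!/usr/bin/python
# Rough sketch of discovery.py; the endpoint, paths, and auth details
# here are placeholders, not the real ones.
import os
import sys
import requests

TOKEN_FILE = '/etc/zabbix/externalscripts/.token'  # placeholder path
API_BASE = 'https://api.example.com'               # placeholder endpoint

def get_token(session):
    # Reuse the cached bearer token if the token file exists...
    if os.path.isfile(TOKEN_FILE):
        with open(TOKEN_FILE) as f:
            return f.read().strip()
    # ...otherwise authenticate and cache a new token.
    resp = session.post(API_BASE + '/auth',
                        json={'user': 'monitor', 'password': '...'})
    resp.raise_for_status()
    token = resp.json()['token']
    with open(TOKEN_FILE, 'w') as f:
        f.write(token)
    return token

def main():
    # Zabbix passes the unique filter (server ID, NIC ID, ...) as arguments.
    object_id = sys.argv[1]
    session = requests.Session()
    session.headers['Authorization'] = 'Bearer ' + get_token(session)
    resp = session.get(API_BASE + '/discovery/' + object_id)
    resp.raise_for_status()
    print(resp.text)

if __name__ == '__main__':
    main()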
The script works fine and does everything it is supposed to, but setroubleshoot is reporting a slew of issues, specifically around file/folder access. The huge number of setroubleshootd alerts is what's driving the CPU usage. Here is an example of the error:
Code:
python: SELinux is preventing /usr/bin/python2.7 from create access on the file 7WMXFl.
The file name is random and changes with every execution. I've tried adding an exception using the SELinux tools, such as:
Code:
ausearch -c 'python' --raw | audit2allow -M my-python
But since the file name is random, the errors persist. I've tried uninstalling setroubleshoot, but it just gets reinstalled. Unfortunately, I need to run in enforcing mode, so dropping to permissive or disabling SELinux are not options.
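For completeness, after the audit2allow step above I am loading the generated module the usual way:

Code:
semodule -i my-python.pp

It loads without complaint, but the next poll just generates fresh denials.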
I've also tried dropping the bash wrapper so that Zabbix calls the Python script directly (declaring a #!/usr/bin/python shebang), but passing arguments doesn't seem to work properly: I get an error stating that $1, $2, ... are unknown arguments.
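For context, the item is an external check, so the working key looks something like this (assuming the wrapper is called discovery.sh; the macro names are just examples):

Code:
discovery.sh["{$SERVER_ID}","{$NIC_ID}"]

Swapping discovery.sh for discovery.py in that key is the variant that throws the unknown-argument errors.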
I'm at a loss at this point. It all runs, but I'd really like to get the CPU usage down; 60% of 4 cores is unreasonable for 30-40 HTTPS calls.
When I run the script as a regular user, there are no setroubleshoot errors, so it has to be related to running as the zabbix user. And to be clear, all the scripts work despite the setroubleshoot issue; I'd just like to get the CPU usage down and stop flooding the logs with setroubleshoot errors.
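For what it's worth, the flood is easy to watch while polling is active, using nothing more than the standard CentOS 7 tools:

Code:
# raw AVC denials from the audit log (last 10 minutes)
ausearch -m avc -ts recent

# the setroubleshoot messages flooding the logs
grep setroubleshoot /var/log/messages | tail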
EDIT:
Running this on Zabbix 3.4 with CentOS 7.