python.d.plugin update (#4936)
##### Summary

Fixes #4756

`python.d.plugin` updates:

* remove the `retries` option
* make `penalty` configurable (enabled by default, maximum penalty is 10 minutes)

> penalty indicates whether to apply penalty to update_every in case of failures.
> Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
> penalty: yes

##### Component Name

`python.d.plugin`

##### Additional Information
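For reference, the penalty behavior described above can be sketched as follows. This is a minimal illustration only; the function name and the exact growth rule are assumptions for the sketch, not the plugin's actual internals:

```python
MAX_PENALTY = 600  # cap of 10 minutes, in seconds


def effective_update_every(update_every, consecutive_failures, penalty=True):
    """Return the collection interval after applying the penalty.

    Hypothetical sketch: the penalty kicks in after 5 failed updates
    in a row, grows with every further block of 5 failures, and is
    capped at 10 minutes, matching the documented behavior.
    """
    if not penalty or consecutive_failures < 5:
        return update_every
    blocks = consecutive_failures // 5  # completed runs of 5 failures
    return min(update_every + update_every * blocks * 5, MAX_PENALTY)
```

With `penalty: no` the interval stays fixed at `update_every` no matter how many updates fail.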
parent e096424786
commit 5286dae8eb
@@ -21,7 +21,6 @@ Every configuration file must have one of two formats:
 ```yaml
 update_every : 2 # update frequency
-retries      : 1 # how many failures in update() is tolerated
 priority     : 20000 # where it is shown on dashboard

 other_var1   : bla # variables passed to module

@@ -33,7 +32,6 @@ other_var2 : alb
 ```yaml
 # module defaults:
 update_every : 2
-retries      : 1
 priority     : 20000

 local:  # job name
@@ -42,11 +40,10 @@ local:  # job name

 other_job:
  priority   : 5   # job position on dashboard
- retries    : 20  # job retries
  other_var2 : val # module specific variable
 ```

-`update_every`, `retries`, and `priority` are always optional.
+`update_every` and `priority` are always optional.

 ## How to debug a python module

@@ -19,11 +19,9 @@
 # If unset, the default for python.d.plugin is used.
 # priority: 60000

-# retries sets the number of retries to be made in case of failures.
-# If unset, the default for python.d.plugin is used.
-# Attempts to restore the service are made once every update_every
-# and only if the module has collected values in the past.
-# retries: 60
+# penalty indicates whether to apply penalty to update_every in case of failures.
+# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
+# penalty: yes

 # autodetection_retry sets the job re-check interval in seconds.
 # The job is not deleted if check fails.
@@ -50,6 +48,6 @@
 # # JOBs sharing a name are mutually exclusive
 # update_every: 1 # the JOB's data collection frequency
 # priority: 60000 # the JOB's order on the dashboard
-# retries: 60 # the JOB's number of restoration attempts
+# penalty: yes # the JOB's penalty
 # autodetection_retry: 0 # the JOB's re-check interval in seconds
 # ----------------------------------------------------------------------

@@ -46,12 +46,10 @@ priority : 90100

 local:
   url          : 'http://localhost/server-status?auto'
-  retries      : 20

 remote:
   url          : 'http://www.apache.org/server-status?auto'
   update_every : 5
-  retries      : 4
 ```

 Without configuration, module attempts to connect to `http://localhost/server-status?auto`

@@ -8,7 +8,6 @@ from bases.FrameworkServices.UrlService import UrlService
 # default module values (can be overridden per job in `config`)
 # update_every = 2
 priority = 60000
-retries = 60

 # default job configuration (overridden by python.d.plugin)
 # config = {'local': {

@@ -27,11 +27,9 @@
 # If unset, the default for python.d.plugin is used.
 # priority: 60000

-# retries sets the number of retries to be made in case of failures.
-# If unset, the default for python.d.plugin is used.
-# Attempts to restore the service are made once every update_every
-# and only if the module has collected values in the past.
-# retries: 60
+# penalty indicates whether to apply penalty to update_every in case of failures.
+# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
+# penalty: yes

 # autodetection_retry sets the job re-check interval in seconds.
 # The job is not deleted if check fails.
@@ -58,7 +56,7 @@
 # # JOBs sharing a name are mutually exclusive
 # update_every: 1 # the JOB's data collection frequency
 # priority: 60000 # the JOB's order on the dashboard
-# retries: 60 # the JOB's number of restoration attempts
+# penalty: yes # the JOB's penalty
 # autodetection_retry: 0 # the JOB's re-check interval in seconds
 #
 # Additionally to the above, apache also supports the following:

@@ -15,7 +15,6 @@ from bases.loaders import safe_load
 # default module values (can be overridden per job in `config`)
 # update_every = 2
 priority = 60000
-retries = 60

 ORDER = ['cpu_usage', 'jobs_rate', 'connections_rate', 'commands_rate', 'current_tubes', 'current_jobs',
          'current_connections', 'binlog', 'uptime']

@@ -27,11 +27,9 @@
 # If unset, the default for python.d.plugin is used.
 # priority: 60000

-# retries sets the number of retries to be made in case of failures.
-# If unset, the default for python.d.plugin is used.
-# Attempts to restore the service are made once every update_every
-# and only if the module has collected values in the past.
-# retries: 60
+# penalty indicates whether to apply penalty to update_every in case of failures.
+# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
+# penalty: yes

 # autodetection_retry sets the job re-check interval in seconds.
 # The job is not deleted if check fails.
@@ -68,7 +66,7 @@
 # # JOBs sharing a name are mutually exclusive
 # update_every: 1 # the JOB's data collection frequency
 # priority: 60000 # the JOB's order on the dashboard
-# retries: 60 # the JOB's number of restoration attempts
+# penalty: yes # the JOB's penalty
 # autodetection_retry: 0 # the JOB's re-check interval in seconds
 # chart_cleanup: 10 # the JOB's chart cleanup interval in iterations
 #

@@ -12,7 +12,6 @@ from bases.collection import find_binary
 from bases.FrameworkServices.SimpleService import SimpleService

 priority = 60000
-retries = 60
 update_every = 30

 ORDER = ['name_server_statistics', 'incoming_queries', 'outgoing_queries', 'named_stats_size']

@@ -27,11 +27,9 @@
 # If unset, the default for python.d.plugin is used.
 # priority: 60000

-# retries sets the number of retries to be made in case of failures.
-# If unset, the default for python.d.plugin is used.
-# Attempts to restore the service are made once every update_every
-# and only if the module has collected values in the past.
-# retries: 60
+# penalty indicates whether to apply penalty to update_every in case of failures.
+# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
+# penalty: yes

 # autodetection_retry sets the job re-check interval in seconds.
 # The job is not deleted if check fails.
@@ -58,7 +56,7 @@
 # # JOBs sharing a name are mutually exclusive
 # update_every: 1 # the JOB's data collection frequency
 # priority: 60000 # the JOB's order on the dashboard
-# retries: 60 # the JOB's number of restoration attempts
+# penalty: yes # the JOB's penalty
 # autodetection_retry: 0 # the JOB's re-check interval in seconds
 #
 # Additionally to the above, bind_rndc also supports the following:

@@ -27,11 +27,9 @@
 # If unset, the default for python.d.plugin is used.
 # priority: 60000

-# retries sets the number of retries to be made in case of failures.
-# If unset, the default for python.d.plugin is used.
-# Attempts to restore the service are made once every update_every
-# and only if the module has collected values in the past.
-# retries: 60
+# penalty indicates whether to apply penalty to update_every in case of failures.
+# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
+# penalty: yes

 # autodetection_retry sets the job re-check interval in seconds.
 # The job is not deleted if check fails.
@@ -58,7 +56,7 @@
 # # JOBs sharing a name are mutually exclusive
 # update_every: 1 # the JOB's data collection frequency
 # priority: 60000 # the JOB's order on the dashboard
-# retries: 60 # the JOB's number of restoration attempts
+# penalty: yes # the JOB's penalty
 # autodetection_retry: 0 # the JOB's re-check interval in seconds
 #
 # Additionally to the above, boinc also supports the following:

@@ -16,7 +16,6 @@ from bases.FrameworkServices.SimpleService import SimpleService
 # default module values (can be overridden per job in `config`)
 update_every = 10
 priority = 60000
-retries = 60

 ORDER = [
     'general_usage',

@@ -27,11 +27,9 @@
 # If unset, the default for python.d.plugin is used.
 # priority: 60000

-# retries sets the number of retries to be made in case of failures.
-# If unset, the default for python.d.plugin is used.
-# Attempts to restore the service are made once every update_every
-# and only if the module has collected values in the past.
-# retries: 60
+# penalty indicates whether to apply penalty to update_every in case of failures.
+# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
+# penalty: yes

 # autodetection_retry sets the job re-check interval in seconds.
 # The job is not deleted if check fails.
@@ -58,7 +56,7 @@
 # # JOBs sharing a name are mutually exclusive
 # update_every: 10 # the JOB's data collection frequency
 # priority: 60000 # the JOB's order on the dashboard
-# retries: 60 # the JOB's number of restoration attempts
+# penalty: yes # the JOB's penalty
 # autodetection_retry: 0 # the JOB's re-check interval in seconds
 #
 # Additionally to the above, ceph plugin also supports the following:

@@ -8,7 +8,6 @@ from bases.FrameworkServices.ExecutableService import ExecutableService
 # default module values (can be overridden per job in `config`)
 update_every = 5
 priority = 60000
-retries = 10

 # charts order (can be overridden if you want less charts, or different order)
 ORDER = ['system', 'offsets', 'stratum', 'root', 'frequency', 'residualfreq', 'skew']

@@ -27,11 +27,9 @@ update_every: 5
 # If unset, the default for python.d.plugin is used.
 # priority: 60000

-# retries sets the number of retries to be made in case of failures.
-# If unset, the default for python.d.plugin is used.
-# Attempts to restore the service are made once every update_every
-# and only if the module has collected values in the past.
-# retries: 60
+# penalty indicates whether to apply penalty to update_every in case of failures.
+# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
+# penalty: yes

 # autodetection_retry sets the job re-check interval in seconds.
 # The job is not deleted if check fails.
@@ -58,7 +56,7 @@ update_every: 5
 # # JOBs sharing a name are mutually exclusive
 # update_every: 1 # the JOB's data collection frequency
 # priority: 60000 # the JOB's order on the dashboard
-# retries: 60 # the JOB's number of restoration attempts
+# penalty: yes # the JOB's penalty
 # autodetection_retry: 0 # the JOB's re-check interval in seconds
 #
 # Additionally to the above, chrony also supports the following:

@@ -18,7 +18,6 @@ from bases.FrameworkServices.UrlService import UrlService
 # default module values (can be overridden per job in `config`)
 update_every = 1
 priority = 60000
-retries = 60

 METHODS = namedtuple('METHODS', ['get_data', 'url', 'stats'])

@@ -28,11 +28,9 @@ update_every: 10
 # If unset, the default for python.d.plugin is used.
 # priority: 60000

-# retries sets the number of retries to be made in case of failures.
-# If unset, the default for python.d.plugin is used.
-# Attempts to restore the service are made once every update_every
-# and only if the module has collected values in the past.
-# retries: 60
+# penalty indicates whether to apply penalty to update_every in case of failures.
+# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
+# penalty: yes

 # autodetection_retry sets the job re-check interval in seconds.
 # The job is not deleted if check fails.
@@ -59,7 +57,7 @@ update_every: 10
 # # JOBs sharing a name are mutually exclusive
 # update_every: 1 # the JOB's data collection frequency
 # priority: 60000 # the JOB's order on the dashboard
-# retries: 60 # the JOB's number of restoration attempts
+# penalty: yes # the JOB's penalty
 # autodetection_retry: 0 # the JOB's re-check interval in seconds
 #
 # Additionally to the above, the couchdb plugin also supports the following:

@@ -27,11 +27,9 @@
 # If unset, the default for python.d.plugin is used.
 # priority: 60000

-# retries sets the number of retries to be made in case of failures.
-# If unset, the default for python.d.plugin is used.
-# Attempts to restore the service are made once every update_every
-# and only if the module has collected values in the past.
-# retries: 60
+# penalty indicates whether to apply penalty to update_every in case of failures.
+# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
+# penalty: yes

 # autodetection_retry sets the job re-check interval in seconds.
 # The job is not deleted if check fails.

@@ -27,11 +27,9 @@
 # If unset, the default for python.d.plugin is used.
 # priority: 60000

-# retries sets the number of retries to be made in case of failures.
-# If unset, the default for python.d.plugin is used.
-# Attempts to restore the service are made once every update_every
-# and only if the module has collected values in the past.
-# retries: 60
+# penalty indicates whether to apply penalty to update_every in case of failures.
+# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
+# penalty: yes

 # autodetection_retry sets the job re-check interval in seconds.
 # The job is not deleted if check fails.

@@ -27,11 +27,9 @@
 # If unset, the default for python.d.plugin is used.
 # priority: 60000

-# retries sets the number of retries to be made in case of failures.
-# If unset, the default for python.d.plugin is used.
-# Attempts to restore the service are made once every update_every
-# and only if the module has collected values in the past.
-# retries: 60
+# penalty indicates whether to apply penalty to update_every in case of failures.
+# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
+# penalty: yes

 # autodetection_retry sets the job re-check interval in seconds.
 # The job is not deleted if check fails.
@@ -58,7 +56,7 @@
 # # JOBs sharing a name are mutually exclusive
 # update_every: 1 # the JOB's data collection frequency
 # priority: 60000 # the JOB's order on the dashboard
-# retries: 60 # the JOB's number of restoration attempts
+# penalty: yes # the JOB's penalty
 # autodetection_retry: 0 # the JOB's re-check interval in seconds
 #
 # Additionally to the above, dns_query_time also supports the following:

@@ -27,11 +27,9 @@
 # If unset, the default for python.d.plugin is used.
 # priority: 60000

-# retries sets the number of retries to be made in case of failures.
-# If unset, the default for python.d.plugin is used.
-# Attempts to restore the service are made once every update_every
-# and only if the module has collected values in the past.
-#retries: 600000
+# penalty indicates whether to apply penalty to update_every in case of failures.
+# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
+# penalty: yes

 # autodetection_retry sets the job re-check interval in seconds.
 # The job is not deleted if check fails.
@@ -58,7 +56,7 @@
 # # JOBs sharing a name are mutually exclusive
 # update_every: 1 # the JOB's data collection frequency
 # priority: 60000 # the JOB's order on the dashboard
-# retries: 60 # the JOB's number of restoration attempts
+# penalty: yes # the JOB's penalty
 # autodetection_retry: 0 # the JOB's re-check interval in seconds
 #
 #

@@ -13,7 +13,6 @@ from bases.FrameworkServices.SimpleService import SimpleService
 # default module values (can be overridden per job in `config`)
 # update_every = 1
 priority = 60000
-retries = 60

 # charts order (can be overridden if you want less charts, or different order)
 ORDER = [

@@ -27,11 +27,9 @@
 # If unset, the default for python.d.plugin is used.
 # priority: 60000

-# retries sets the number of retries to be made in case of failures.
-# If unset, the default for python.d.plugin is used.
-# Attempts to restore the service are made once every update_every
-# and only if the module has collected values in the past.
-# retries: 60
+# penalty indicates whether to apply penalty to update_every in case of failures.
+# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
+# penalty: yes

 # autodetection_retry sets the job re-check interval in seconds.
 # The job is not deleted if check fails.
@@ -58,7 +56,7 @@
 # # JOBs sharing a name are mutually exclusive
 # update_every: 1 # the JOB's data collection frequency
 # priority: 60000 # the JOB's order on the dashboard
-# retries: 10 # the JOB's number of restoration attempts
+# penalty: yes # the JOB's penalty
 # autodetection_retry: 0 # the JOB's re-check interval in seconds
 #
 # Additionally to the above, dockerd plugin also supports the following:

@@ -8,7 +8,6 @@ from bases.FrameworkServices.SocketService import SocketService
 # default module values (can be overridden per job in `config`)
 # update_every = 2
 priority = 60000
-retries = 60

 # charts order (can be overridden if you want less charts, or different order)
 ORDER = [

@@ -27,11 +27,9 @@
 # If unset, the default for python.d.plugin is used.
 # priority: 60000

-# retries sets the number of retries to be made in case of failures.
-# If unset, the default for python.d.plugin is used.
-# Attempts to restore the service are made once every update_every
-# and only if the module has collected values in the past.
-# retries: 60
+# penalty indicates whether to apply penalty to update_every in case of failures.
+# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
+# penalty: yes

 # autodetection_retry sets the job re-check interval in seconds.
 # The job is not deleted if check fails.
@@ -58,7 +56,7 @@
 # # JOBs sharing a name are mutually exclusive
 # update_every: 1 # the JOB's data collection frequency
 # priority: 60000 # the JOB's order on the dashboard
-# retries: 60 # the JOB's number of restoration attempts
+# penalty: yes # the JOB's penalty
 # autodetection_retry: 0 # the JOB's re-check interval in seconds
 #
 # Additionally to the above, dovecot also supports the following:

@@ -27,11 +27,9 @@
 # If unset, the default for python.d.plugin is used.
 # priority: 60000

-# retries sets the number of retries to be made in case of failures.
-# If unset, the default for python.d.plugin is used.
-# Attempts to restore the service are made once every update_every
-# and only if the module has collected values in the past.
-# retries: 60
+# penalty indicates whether to apply penalty to update_every in case of failures.
+# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
+# penalty: yes

 # autodetection_retry sets the job re-check interval in seconds.
 # The job is not deleted if check fails.
@@ -58,7 +56,7 @@
 # # JOBs sharing a name are mutually exclusive
 # update_every: 1 # the JOB's data collection frequency
 # priority: 60000 # the JOB's order on the dashboard
-# retries: 60 # the JOB's number of restoration attempts
+# penalty: yes # the JOB's penalty
 # autodetection_retry: 0 # the JOB's re-check interval in seconds
 #
 # Additionally to the above, elasticsearch plugin also supports the following:

@@ -10,7 +10,6 @@ from bases.FrameworkServices.SimpleService import SimpleService
 # default module values
 # update_every = 4
 priority = 90000
-retries = 60

 ORDER = ['random']
 CHARTS = {

@@ -27,11 +27,9 @@
 # If unset, the default for python.d.plugin is used.
 # priority: 60000

-# retries sets the number of retries to be made in case of failures.
-# If unset, the default for python.d.plugin is used.
-# Attempts to restore the service are made once every update_every
-# and only if the module has collected values in the past.
-# retries: 60
+# penalty indicates whether to apply penalty to update_every in case of failures.
+# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
+# penalty: yes

 # autodetection_retry sets the job re-check interval in seconds.
 # The job is not deleted if check fails.
@@ -58,7 +56,7 @@
 # # JOBs sharing a name are mutually exclusive
 # update_every: 1 # the JOB's data collection frequency
 # priority: 60000 # the JOB's order on the dashboard
-# retries: 60 # the JOB's number of restoration attempts
+# penalty: yes # the JOB's penalty
 # autodetection_retry: 0 # the JOB's re-check interval in seconds
 #
 # Additionally to the above, example also supports the following:

@@ -8,7 +8,6 @@ from bases.FrameworkServices.ExecutableService import ExecutableService
 # default module values (can be overridden per job in `config`)
 # update_every = 2
 priority = 60000
-retries = 60

 # charts order (can be overridden if you want less charts, or different order)
 ORDER = ['qemails']

@@ -28,11 +28,9 @@ update_every: 10
 # If unset, the default for python.d.plugin is used.
 # priority: 60000

-# retries sets the number of retries to be made in case of failures.
-# If unset, the default for python.d.plugin is used.
-# Attempts to restore the service are made once every update_every
-# and only if the module has collected values in the past.
-# retries: 60
+# penalty indicates whether to apply penalty to update_every in case of failures.
+# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
+# penalty: yes

 # autodetection_retry sets the job re-check interval in seconds.
 # The job is not deleted if check fails.
@@ -59,7 +57,7 @@ update_every: 10
 # # JOBs sharing a name are mutually exclusive
 # update_every: 1 # the JOB's data collection frequency
 # priority: 60000 # the JOB's order on the dashboard
-# retries: 60 # the JOB's number of restoration attempts
+# penalty: yes # the JOB's penalty
 # autodetection_retry: 0 # the JOB's re-check interval in seconds
 #
 # Additionally to the above, exim also supports the following:

@@ -27,11 +27,9 @@
 # If unset, the default for python.d.plugin is used.
 # priority: 60000

-# retries sets the number of retries to be made in case of failures.
-# If unset, the default for python.d.plugin is used.
-# Attempts to restore the service are made once every update_every
-# and only if the module has collected values in the past.
-# retries: 60
+# penalty indicates whether to apply penalty to update_every in case of failures.
+# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
+# penalty: yes

 # autodetection_retry sets the job re-check interval in seconds.
 # The job is not deleted if check fails.
@@ -58,7 +56,7 @@
 # # JOBs sharing a name are mutually exclusive
 # update_every: 1 # the JOB's data collection frequency
 # priority: 60000 # the JOB's order on the dashboard
-# retries: 60 # the JOB's number of restoration attempts
+# penalty: yes # the JOB's penalty
 # autodetection_retry: 0 # the JOB's re-check interval in seconds
 #
 # Additionally to the above, fail2ban also supports the following:

@@ -11,7 +11,6 @@ from bases.FrameworkServices.SimpleService import SimpleService

 # default module values (can be overridden per job in `config`)
 priority = 60000
-retries = 60
 update_every = 15

 RADIUS_MSG = 'Message-Authenticator = 0x00, FreeRADIUS-Statistics-Type = 15, Response-Packet-Type = Access-Accept'

@@ -27,11 +27,9 @@
 # If unset, the default for python.d.plugin is used.
 # priority: 60000

-# retries sets the number of retries to be made in case of failures.
-# If unset, the default for python.d.plugin is used.
-# Attempts to restore the service are made once every update_every
-# and only if the module has collected values in the past.
-# retries: 60
+# penalty indicates whether to apply penalty to update_every in case of failures.
+# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
+# penalty: yes

 # autodetection_retry sets the job re-check interval in seconds.
 # The job is not deleted if check fails.
@@ -58,7 +56,7 @@
 # # JOBs sharing a name are mutually exclusive
 # update_every: 1 # the JOB's data collection frequency
 # priority: 60000 # the JOB's order on the dashboard
-# retries: 60 # the JOB's number of restoration attempts
+# penalty: yes # the JOB's penalty
 # autodetection_retry: 0 # the JOB's re-check interval in seconds
 #
 # Additionally to the above, freeradius also supports the following:

@@ -169,7 +169,6 @@ and its base `UrlService` class. These are:

 update_every: 1    # the job's data collection frequency
 priority: 60000    # the job's order on the dashboard
-retries: 60        # the job's number of restoration attempts
 user: admin        # use when the expvar endpoint is protected by HTTP Basic Auth
 password: sekret   # use when the expvar endpoint is protected by HTTP Basic Auth

@@ -11,8 +11,6 @@ from bases.FrameworkServices.UrlService import UrlService
 # default module values (can be overridden per job in `config`)
 # update_every = 2
 priority = 60000
-retries = 60
-

 MEMSTATS_CHARTS = {
     'memstats_heap': {

@@ -27,11 +27,9 @@
 # If unset, the default for python.d.plugin is used.
 # priority: 60000

-# retries sets the number of retries to be made in case of failures.
-# If unset, the default for python.d.plugin is used.
-# Attempts to restore the service are made once every update_every
-# and only if the module has collected values in the past.
-# retries: 60
+# penalty indicates whether to apply penalty to update_every in case of failures.
+# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
+# penalty: yes

 # autodetection_retry sets the job re-check interval in seconds.
 # The job is not deleted if check fails.
@@ -53,7 +51,7 @@
 # # JOBs sharing a name are mutually exclusive
 # update_every: 1 # the JOB's data collection frequency
 # priority: 60000 # the JOB's order on the dashboard
-# retries: 60 # the JOB's number of restoration attempts
+# penalty: yes # the JOB's penalty
 # autodetection_retry: 0 # the JOB's re-check interval in seconds
 #
 # Additionally to the above, this plugin also supports the following:

@@ -18,7 +18,6 @@ from bases.FrameworkServices.UrlService import UrlService
 # default module values (can be overridden per job in `config`)
 # update_every = 2
 priority = 60000
-retries = 60

 # charts order (can be overridden if you want less charts, or different order)
 ORDER = [

@@ -27,11 +27,9 @@
 # If unset, the default for python.d.plugin is used.
 # priority: 60000

-# retries sets the number of retries to be made in case of failures.
-# If unset, the default for python.d.plugin is used.
-# Attempts to restore the service are made once every update_every
-# and only if the module has collected values in the past.
-# retries: 60
+# penalty indicates whether to apply penalty to update_every in case of failures.
+# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
+# penalty: yes

 # autodetection_retry sets the job re-check interval in seconds.
 # The job is not deleted if check fails.
@@ -58,7 +56,7 @@
 # # JOBs sharing a name are mutually exclusive
 # update_every: 1 # the JOB's data collection frequency
 # priority: 60000 # the JOB's order on the dashboard
-# retries: 60 # the JOB's number of restoration attempts
+# penalty: yes # the JOB's penalty
 # autodetection_retry: 0 # the JOB's re-check interval in seconds
 #
 # Additionally to the above, haproxy also supports the following:

@@ -27,11 +27,9 @@
 # If unset, the default for python.d.plugin is used.
 # priority: 60000

-# retries sets the number of retries to be made in case of failures.
-# If unset, the default for python.d.plugin is used.
-# Attempts to restore the service are made once every update_every
-# and only if the module has collected values in the past.
-# retries: 60
+# penalty indicates whether to apply penalty to update_every in case of failures.
+# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
+# penalty: yes

 # autodetection_retry sets the job re-check interval in seconds.
 # The job is not deleted if check fails.
@@ -58,7 +56,7 @@
 # # JOBs sharing a name are mutually exclusive
 # update_every: 1 # the JOB's data collection frequency
 # priority: 60000 # the JOB's order on the dashboard
-# retries: 60 # the JOB's number of restoration attempts
+# penalty: yes # the JOB's penalty
 # autodetection_retry: 0 # the JOB's re-check interval in seconds
 #
 # Additionally to the above, hddtemp also supports the following:

@@ -16,7 +16,6 @@ from bases.FrameworkServices.UrlService import UrlService
 # default module values (can be overridden per job in `config`)
 update_every = 3
 priority = 60000
-retries = 60

 # Response
 HTTP_RESPONSE_TIME = 'time'

@@ -27,6 +27,10 @@
 # If unset, the default for python.d.plugin is used.
 # priority: 60000

+# penalty indicates whether to apply penalty to update_every in case of failures.
+# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
+# penalty: yes
+
 # chart_cleanup sets the default chart cleanup interval in iterations.
 # A chart is marked as obsolete if it has not been updated
 # 'chart_cleanup' iterations in a row.
@@ -61,7 +65,7 @@ chart_cleanup: 0
 # # JOBs sharing a name are mutually exclusive
 # update_every: 3             # [optional] the JOB's data collection frequency
 # priority: 60000             # [optional] the JOB's order on the dashboard
-# retries: 60                 # [optional] the JOB's number of restoration attempts
+# penalty: yes                # the JOB's penalty
 # timeout: 1                  # [optional] the timeout when connecting, supports decimals (e.g. 0.5s)
 # url: 'http[s]://host-ip-or-dns[:port][path]'
 #             # [required] the remote host url to connect to. If [:port] is missing, it defaults to 80

@@ -9,7 +9,6 @@ from bases.FrameworkServices.UrlService import UrlService


 priority = 60000
-retries = 60

 # charts order (can be overridden if you want less charts, or different order)
 ORDER = ['listeners']

@@ -27,11 +27,9 @@
 # If unset, the default for python.d.plugin is used.
 # priority: 60000

-# retries sets the number of retries to be made in case of failures.
-# If unset, the default for python.d.plugin is used.
-# Attempts to restore the service are made once every update_every
-# and only if the module has collected values in the past.
-# retries: 60
+# penalty indicates whether to apply penalty to update_every in case of failures.
+# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
+# penalty: yes

 # autodetection_retry sets the job re-check interval in seconds.
 # The job is not deleted if check fails.
@@ -58,7 +56,7 @@
 # # JOBs sharing a name are mutually exclusive
 # update_every: 1             # the JOB's data collection frequency
 # priority: 60000             # the JOB's order on the dashboard
-# retries: 60                 # the JOB's number of restoration attempts
+# penalty: yes                # the JOB's penalty
 # autodetection_retry: 0      # the JOB's re-check interval in seconds
 #
 # Additionally to the above, icecast also supports the following:

@@ -10,7 +10,6 @@ from bases.FrameworkServices.UrlService import UrlService
 # default module values (can be overridden per job in `config`)
 # update_every = 2
 priority = 60000
-retries = 60

 # default job configuration (overridden by python.d.plugin)
 # config = {'local': {

@@ -27,11 +27,9 @@
 # If unset, the default for python.d.plugin is used.
 # priority: 60000

-# retries sets the number of retries to be made in case of failures.
-# If unset, the default for python.d.plugin is used.
-# Attempts to restore the service are made once every update_every
-# and only if the module has collected values in the past.
-# retries: 60
+# penalty indicates whether to apply penalty to update_every in case of failures.
+# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
+# penalty: yes

 # autodetection_retry sets the job re-check interval in seconds.
 # The job is not deleted if check fails.
@@ -58,7 +56,7 @@
 # # JOBs sharing a name are mutually exclusive
 # update_every: 1             # the JOB's data collection frequency
 # priority: 60000             # the JOB's order on the dashboard
-# retries: 60                 # the JOB's number of restoration attempts
+# penalty: yes                # the JOB's penalty
 # autodetection_retry: 0      # the JOB's re-check interval in seconds
 #
 # Additionally to the above, ipfs also supports the following:

@@ -27,11 +27,9 @@
 # If unset, the default for python.d.plugin is used.
 # priority: 60000

-# retries sets the number of retries to be made in case of failures.
-# If unset, the default for python.d.plugin is used.
-# Attempts to restore the service are made once every update_every
-# and only if the module has collected values in the past.
-# retries: 60
+# penalty indicates whether to apply penalty to update_every in case of failures.
+# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
+# penalty: yes

 # autodetection_retry sets the job re-check interval in seconds.
 # The job is not deleted if check fails.
@@ -58,7 +56,7 @@
 # # JOBs sharing a name are mutually exclusive
 # update_every: 1             # the JOB's data collection frequency
 # priority: 60000             # the JOB's order on the dashboard
-# retries: 60                 # the JOB's number of restoration attempts
+# penalty: yes                # the JOB's penalty
 # autodetection_retry: 0      # the JOB's re-check interval in seconds
 #
 # Additionally to the above, isc_dhcpd supports the following:

@@ -27,11 +27,9 @@
 # If unset, the default for python.d.plugin is used.
 # priority: 60000

-# retries sets the number of retries to be made in case of failures.
-# If unset, the default for python.d.plugin is used.
-# Attempts to restore the service are made once every update_every
-# and only if the module has collected values in the past.
-# retries: 60
+# penalty indicates whether to apply penalty to update_every in case of failures.
+# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
+# penalty: yes

 # autodetection_retry sets the job re-check interval in seconds.
 # The job is not deleted if check fails.
@@ -58,7 +56,7 @@
 # # JOBs sharing a name are mutually exclusive
 # update_every: 1             # the JOB's data collection frequency
 # priority: 60000             # the JOB's order on the dashboard
-# retries: 60                 # the JOB's number of restoration attempts
+# penalty: yes                # the JOB's penalty
 # autodetection_retry: 0      # the JOB's re-check interval in seconds
 #
 # In addition to the above parameters, linux_power_supply also supports

@@ -27,11 +27,9 @@
 # If unset, the default for python.d.plugin is used.
 # priority: 60000

-# retries sets the number of retries to be made in case of failures.
-# If unset, the default for python.d.plugin is used.
-# Attempts to restore the service are made once every update_every
-# and only if the module has collected values in the past.
-# retries: 60
+# penalty indicates whether to apply penalty to update_every in case of failures.
+# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
+# penalty: yes

 # autodetection_retry sets the job re-check interval in seconds.
 # The job is not deleted if check fails.
@@ -58,7 +56,7 @@
 # # JOBs sharing a name are mutually exclusive
 # update_every: 1             # the JOB's data collection frequency
 # priority: 60000             # the JOB's order on the dashboard
-# retries: 60                 # the JOB's number of restoration attempts
+# penalty: yes                # the JOB's penalty
 # autodetection_retry: 0      # the JOB's re-check interval in seconds
 #
 # Additionally to the above, litespeed also supports the following:

@@ -27,11 +27,9 @@
 # If unset, the default for python.d.plugin is used.
 # priority: 60000

-# retries sets the number of retries to be made in case of failures.
-# If unset, the default for python.d.plugin is used.
-# Attempts to restore the service are made once every update_every
-# and only if the module has collected values in the past.
-# retries: 60
+# penalty indicates whether to apply penalty to update_every in case of failures.
+# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
+# penalty: yes

 # autodetection_retry sets the job re-check interval in seconds.
 # The job is not deleted if check fails.
@@ -58,5 +56,5 @@
 # # JOBs sharing a name are mutually exclusive
 # update_every: 1             # the JOB's data collection frequency
 # priority: 60000             # the JOB's order on the dashboard
-# retries: 60                 # the JOB's number of restoration attempts
+# penalty: yes                # the JOB's penalty
 # autodetection_retry: 0      # the JOB's re-check interval in seconds

@@ -19,11 +19,9 @@
 # If unset, the default for python.d.plugin is used.
 # priority: 60000

-# retries sets the number of retries to be made in case of failures.
-# If unset, the default for python.d.plugin is used.
-# Attempts to restore the service are made once every update_every
-# and only if the module has collected values in the past.
-# retries: 60
+# penalty indicates whether to apply penalty to update_every in case of failures.
+# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
+# penalty: yes

 # autodetection_retry sets the job re-check interval in seconds.
 # The job is not deleted if check fails.

@@ -19,11 +19,9 @@
 # If unset, the default for python.d.plugin is used.
 # priority: 60000

-# retries sets the number of retries to be made in case of failures.
-# If unset, the default for python.d.plugin is used.
-# Attempts to restore the service are made once every update_every
-# and only if the module has collected values in the past.
-# retries: 60
+# penalty indicates whether to apply penalty to update_every in case of failures.
+# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
+# penalty: yes

 # autodetection_retry sets the job re-check interval in seconds.
 # The job is not deleted if check fails.
@@ -50,7 +48,7 @@
 # # JOBs sharing a name are mutually exclusive
 # update_every: 1             # the JOB's data collection frequency
 # priority: 60000             # the JOB's order on the dashboard
-# retries: 60                 # the JOB's number of restoration attempts
+# penalty: yes                # the JOB's penalty
 # autodetection_retry: 0      # the JOB's re-check interval in seconds
 #
 # Additionally to the above, megacli also supports the following:

@@ -8,7 +8,6 @@ from bases.FrameworkServices.SocketService import SocketService
 # default module values (can be overridden per job in `config`)
 # update_every = 2
 priority = 60000
-retries = 60

 # default job configuration (overridden by python.d.plugin)
 # config = {'local': {

@@ -27,11 +27,9 @@
 # If unset, the default for python.d.plugin is used.
 # priority: 60000

-# retries sets the number of retries to be made in case of failures.
-# If unset, the default for python.d.plugin is used.
-# Attempts to restore the service are made once every update_every
-# and only if the module has collected values in the past.
-# retries: 60
+# penalty indicates whether to apply penalty to update_every in case of failures.
+# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
+# penalty: yes

 # autodetection_retry sets the job re-check interval in seconds.
 # The job is not deleted if check fails.
@@ -58,7 +56,7 @@
 # # JOBs sharing a name are mutually exclusive
 # update_every: 1             # the JOB's data collection frequency
 # priority: 60000             # the JOB's order on the dashboard
-# retries: 60                 # the JOB's number of restoration attempts
+# penalty: yes                # the JOB's penalty
 # autodetection_retry: 0      # the JOB's re-check interval in seconds
 #
 # Additionally to the above, memcached also supports the following:

@@ -27,11 +27,9 @@
 # If unset, the default for python.d.plugin is used.
 # priority: 60000

-# retries sets the number of retries to be made in case of failures.
-# If unset, the default for python.d.plugin is used.
-# Attempts to restore the service are made once every update_every
-# and only if the module has collected values in the past.
-# retries: 60
+# penalty indicates whether to apply penalty to update_every in case of failures.
+# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
+# penalty: yes

 # autodetection_retry sets the job re-check interval in seconds.
 # The job is not deleted if check fails.
@@ -58,7 +56,7 @@
 # # JOBs sharing a name are mutually exclusive
 # update_every: 1             # the JOB's data collection frequency
 # priority: 60000             # the JOB's order on the dashboard
-# retries: 60                 # the JOB's number of restoration attempts
+# penalty: yes                # the JOB's penalty
 # autodetection_retry: 0      # the JOB's re-check interval in seconds
 #
 # Additionally to the above, mongodb also supports the following:

@@ -9,7 +9,6 @@ from bases.FrameworkServices.UrlService import UrlService
 # default module values (can be overridden per job in `config`)
 # update_every = 2
 priority = 60000
-retries = 60

 # see enum State_Type from monit.h (https://bitbucket.org/tildeslash/monit/src/master/src/monit.h)
 MONIT_SERVICE_NAMES = ['Filesystem', 'Directory', 'File', 'Process', 'Host', 'System', 'Fifo', 'Program', 'Net']

@@ -27,11 +27,9 @@
 # If unset, the default for python.d.plugin is used.
 # priority: 60000

-# retries sets the number of retries to be made in case of failures.
-# If unset, the default for python.d.plugin is used.
-# Attempts to restore the service are made once every update_every
-# and only if the module has collected values in the past.
-# retries: 60
+# penalty indicates whether to apply penalty to update_every in case of failures.
+# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
+# penalty: yes

 # autodetection_retry sets the job re-check interval in seconds.
 # The job is not deleted if check fails.
@@ -58,7 +56,7 @@
 # # JOBs sharing a name are mutually exclusive
 # update_every: 1             # the JOB's data collection frequency
 # priority: 60000             # the JOB's order on the dashboard
-# retries: 60                 # the JOB's number of restoration attempts
+# penalty: yes                # the JOB's penalty
 # autodetection_retry: 0      # the JOB's re-check interval in seconds
 #
 # Additionally to the above, this plugin also supports the following:

@@ -65,7 +65,6 @@ Here is an example for 3 servers:
 ```yaml
 update_every : 10
 priority     : 90100
-retries      : 5

 local:
     'my.cnf' : '/etc/mysql/my.cnf'
@@ -82,7 +81,6 @@ remote:
     pass     : 'bla'
     host     : 'example.org'
     port     : 9000
-    retries  : 20
 ```

 If no configuration is given, module will attempt to connect to mysql server via unix socket at `/var/run/mysqld/mysqld.sock` without password and with username `root`

@@ -9,7 +9,6 @@ from bases.FrameworkServices.MySQLService import MySQLService
 # default module values (can be overridden per job in `config`)
 # update_every = 3
 priority = 60000
-retries = 60

 # query executed on MySQL server
 QUERY_GLOBAL = 'SHOW GLOBAL STATUS;'

@@ -27,11 +27,10 @@
 # If unset, the default for python.d.plugin is used.
 # priority: 60000

-# retries sets the number of retries to be made in case of failures.
-# If unset, the default for python.d.plugin is used.
-# Attempts to restore the service are made once every update_every
-# and only if the module has collected values in the past.
-# retries: 60
+# penalty indicates whether to apply penalty to update_every in case of failures.
+# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
+# penalty: yes
+

 # autodetection_retry sets the job re-check interval in seconds.
 # The job is not deleted if check fails.
@@ -58,7 +57,7 @@
 # # JOBs sharing a name are mutually exclusive
 # update_every: 1             # the JOB's data collection frequency
 # priority: 60000             # the JOB's order on the dashboard
-# retries: 60                 # the JOB's number of restoration attempts
+# penalty: yes                # the JOB's penalty
 # autodetection_retry: 0      # the JOB's re-check interval in seconds
 #
 # Additionally to the above, mysql also supports the following:

@@ -37,7 +37,6 @@ priority : 90100

 local:
   url     : 'http://localhost/stub_status'
-  retries : 10
 ```

 Without configuration, module attempts to connect to `http://localhost/stub_status`

@@ -8,7 +8,6 @@ from bases.FrameworkServices.UrlService import UrlService
 # default module values (can be overridden per job in `config`)
 # update_every = 2
 priority = 60000
-retries = 60

 # default job configuration (overridden by python.d.plugin)
 # config = {'local': {

@@ -39,11 +39,9 @@
 # If unset, the default for python.d.plugin is used.
 # priority: 60000

-# retries sets the number of retries to be made in case of failures.
-# If unset, the default for python.d.plugin is used.
-# Attempts to restore the service are made once every update_every
-# and only if the module has collected values in the past.
-# retries: 60
+# penalty indicates whether to apply penalty to update_every in case of failures.
+# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
+# penalty: yes

 # autodetection_retry sets the job re-check interval in seconds.
 # The job is not deleted if check fails.
@@ -70,7 +68,7 @@
 # # JOBs sharing a name are mutually exclusive
 # update_every: 1             # the JOB's data collection frequency
 # priority: 60000             # the JOB's order on the dashboard
-# retries: 60                 # the JOB's number of restoration attempts
+# penalty: yes                # the JOB's penalty
 # autodetection_retry: 0      # the JOB's re-check interval in seconds
 #
 # Additionally to the above, this plugin also supports the following:

@@ -19,7 +19,6 @@ from bases.FrameworkServices.UrlService import UrlService
 # default module values (can be overridden per job in `config`)
 update_every = 1
 priority = 60000
-retries = 60

 # charts order (can be overridden if you want less charts, or different order)
 ORDER = [

@@ -27,11 +27,9 @@
 # If unset, the default for python.d.plugin is used.
 # priority: 60000

-# retries sets the number of retries to be made in case of failures.
-# If unset, the default for python.d.plugin is used.
-# Attempts to restore the service are made once every update_every
-# and only if the module has collected values in the past.
-# retries: 60
+# penalty indicates whether to apply penalty to update_every in case of failures.
+# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
+# penalty: yes

 # autodetection_retry sets the job re-check interval in seconds.
 # The job is not deleted if check fails.
@@ -58,7 +56,7 @@
 # # JOBs sharing a name are mutually exclusive
 # update_every: 1             # the JOB's data collection frequency
 # priority: 60000             # the JOB's order on the dashboard
-# retries: 60                 # the JOB's number of restoration attempts
+# penalty: yes                # the JOB's penalty
 # autodetection_retry: 0      # the JOB's re-check interval in seconds
 #
 # Additionally to the above, nginx_plus also supports the following:

@@ -9,7 +9,6 @@ from bases.FrameworkServices.ExecutableService import ExecutableService

 # default module values (can be overridden per job in `config`)
 priority = 60000
-retries = 5
 update_every = 30

 # charts order (can be overridden if you want less charts, or different order)

@@ -28,11 +28,9 @@
 # If unset, the default for python.d.plugin is used.
 # priority: 60000

-# retries sets the number of retries to be made in case of failures.
-# If unset, the default for python.d.plugin is used.
-# Attempts to restore the service are made once every update_every
-# and only if the module has collected values in the past.
-# retries: 60
+# penalty indicates whether to apply penalty to update_every in case of failures.
+# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
+# penalty: yes

 # autodetection_retry sets the job re-check interval in seconds.
 # The job is not deleted if check fails.
@@ -59,7 +57,7 @@
 # # JOBs sharing a name are mutually exclusive
 # update_every: 1             # the JOB's data collection frequency
 # priority: 60000             # the JOB's order on the dashboard
-# retries: 60                 # the JOB's number of restoration attempts
+# penalty: yes                # the JOB's penalty
 # autodetection_retry: 0      # the JOB's re-check interval in seconds
 #
 # Additionally to the above, nsd also supports the following:

@@ -12,7 +12,6 @@ from bases.FrameworkServices.SocketService import SocketService
 # default module values
 update_every = 1
 priority = 60000
-retries = 60

 # NTP Control Message Protocol constants
 MODE = 6

@@ -27,11 +27,9 @@
 # If unset, the default for python.d.plugin is used.
 # priority: 60000

-# retries sets the number of retries to be made in case of failures.
-# If unset, the default for python.d.plugin is used.
-# Attempts to restore the service are made once every update_every
-# and only if the module has collected values in the past.
-# retries: 60
+# penalty indicates whether to apply penalty to update_every in case of failures.
+# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
+# penalty: yes

 # ----------------------------------------------------------------------
 # JOBS (data collection sources)
@@ -52,7 +50,7 @@
 # # JOBs sharing a name are mutually exclusive
 # update_every: 1             # the JOB's data collection frequency
 # priority: 60000             # the JOB's order on the dashboard
-# retries: 60                 # the JOB's number of restoration attempts
+# penalty: yes                # the JOB's penalty
 #
 # Additionally to the above, ntp also supports the following:
 #

@@ -27,11 +27,9 @@
 # If unset, the default for python.d.plugin is used.
 # priority: 60000

-# retries sets the number of retries to be made in case of failures.
-# If unset, the default for python.d.plugin is used.
-# Attempts to restore the service are made once every update_every
-# and only if the module has collected values in the past.
-# retries: 60
+# penalty indicates whether to apply penalty to update_every in case of failures.
+# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
+# penalty: yes

 # autodetection_retry sets the job re-check interval in seconds.
 # The job is not deleted if check fails.
@@ -58,7 +56,7 @@
 # # JOBs sharing a name are mutually exclusive
 # update_every: 1             # the JOB's data collection frequency
 # priority: 60000             # the JOB's order on the dashboard
-# retries: 60                 # the JOB's number of restoration attempts
+# penalty: yes                # the JOB's penalty
 # autodetection_retry: 0      # the JOB's re-check interval in seconds
 #
 # Additionally to the above, example also supports the following:

@@ -28,11 +28,9 @@ update_every: 10
 # If unset, the default for python.d.plugin is used.
 # priority: 60000

-# retries sets the number of retries to be made in case of failures.
-# If unset, the default for python.d.plugin is used.
-# Attempts to restore the service are made once every update_every
-# and only if the module has collected values in the past.
-# retries: 60
+# penalty indicates whether to apply penalty to update_every in case of failures.
+# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
+# penalty: yes

 # autodetection_retry sets the job re-check interval in seconds.
 # The job is not deleted if check fails.
@@ -59,7 +57,7 @@ update_every: 10
 # # JOBs sharing a name are mutually exclusive
 # update_every: 1             # the JOB's data collection frequency
 # priority: 60000             # the JOB's order on the dashboard
-# retries: 60                 # the JOB's number of restoration attempts
+# penalty: yes                # the JOB's penalty
 # autodetection_retry: 0      # the JOB's re-check interval in seconds
 #
 # ----------------------------------------------------------------------

@@ -8,7 +8,6 @@ from re import compile as r_compile
 from bases.FrameworkServices.SimpleService import SimpleService

 priority = 60000
-retries = 60
 update_every = 10

 ORDER = ['users', 'traffic']

@@ -27,11 +27,9 @@
 # If unset, the default for python.d.plugin is used.
 # priority: 60000

-# retries sets the number of retries to be made in case of failures.
-# If unset, the default for python.d.plugin is used.
-# Attempts to restore the service are made once every update_every
-# and only if the module has collected values in the past.
-# retries: 60
+# penalty indicates whether to apply penalty to update_every in case of failures.
+# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
+# penalty: yes

 # autodetection_retry sets the job re-check interval in seconds.
 # The job is not deleted if check fails.
@@ -58,7 +56,7 @@
 # # JOBs sharing a name are mutually exclusive
 # update_every: 1             # the JOB's data collection frequency
 # priority: 60000             # the JOB's order on the dashboard
-# retries: 60                 # the JOB's number of restoration attempts
+# penalty: yes                # the JOB's penalty
 # autodetection_retry: 0      # the JOB's re-check interval in seconds
 #
 # Additionally to the above, openvpn status log also supports the following:

@@ -32,7 +32,6 @@ priority : 90100

 local:
   url     : 'http://localhost/status'
-  retries : 10
 ```

 Without configuration, module attempts to connect to `http://localhost/status`

@@ -12,7 +12,6 @@ from bases.FrameworkServices.UrlService import UrlService
 # default module values (can be overridden per job in `config`)
 # update_every = 2
 priority = 60000
-retries = 60

 # default job configuration (overridden by python.d.plugin)
 # config = {'local': {

@@ -27,11 +27,9 @@
 # If unset, the default for python.d.plugin is used.
 # priority: 60000

-# retries sets the number of retries to be made in case of failures.
-# If unset, the default for python.d.plugin is used.
-# Attempts to restore the service are made once every update_every
-# and only if the module has collected values in the past.
-# retries: 60
+# penalty indicates whether to apply penalty to update_every in case of failures.
+# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
+# penalty: yes

 # autodetection_retry sets the job re-check interval in seconds.
 # The job is not deleted if check fails.
@@ -58,7 +56,7 @@
 # # JOBs sharing a name are mutually exclusive
 # update_every: 1             # the JOB's data collection frequency
 # priority: 60000             # the JOB's order on the dashboard
-# retries: 60                 # the JOB's number of restoration attempts
+# penalty: yes                # the JOB's penalty
 # autodetection_retry: 0      # the JOB's re-check interval in seconds
 #
 # Additionally to the above, PHP-FPM also supports the following:

@@ -14,7 +14,6 @@ from bases.FrameworkServices.SimpleService import SimpleService

 # default module values (can be overridden per job in `config`)
 priority = 60000
-retries = 60

 PORT_LATENCY = 'connect'

@@ -27,6 +27,10 @@
 # If unset, the default for python.d.plugin is used.
 # priority: 60000

+# penalty indicates whether to apply penalty to update_every in case of failures.
+# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
+# penalty: yes
+
 # chart_cleanup sets the default chart cleanup interval in iterations.
 # A chart is marked as obsolete if it has not been updated
 # 'chart_cleanup' iterations in a row.
@@ -60,7 +64,7 @@ chart_cleanup: 0
 # # JOBs sharing a name are mutually exclusive
 # update_every: 1             # [optional] the JOB's data collection frequency
 # priority: 60000             # [optional] the JOB's order on the dashboard
-# retries: 60                 # [optional] the JOB's number of restoration attempts
+# penalty: yes                # the JOB's penalty
 # timeout: 1                  # [optional] the socket timeout when connecting
 # host: 'dns or ip'           # [required] the remote host address in either IPv4, IPv6 or as DNS name.
 # port: 22                    # [required] the port number to check. Specify an integer, not service name.

@@ -8,7 +8,6 @@ from bases.FrameworkServices.ExecutableService import ExecutableService
 # default module values (can be overridden per job in `config`)
 # update_every = 2
 priority = 60000
-retries = 60

 # charts order (can be overridden if you want less charts, or different order)
 ORDER = ['qemails', 'qsize']
@@ -28,11 +28,9 @@ update_every: 10
 # If unset, the default for python.d.plugin is used.
 # priority: 60000

-# retries sets the number of retries to be made in case of failures.
-# If unset, the default for python.d.plugin is used.
-# Attempts to restore the service are made once every update_every
-# and only if the module has collected values in the past.
-# retries: 60
+# penalty indicates whether to apply penalty to update_every in case of failures.
+# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
+# penalty: yes

 # autodetection_retry sets the job re-check interval in seconds.
 # The job is not deleted if check fails.
@@ -59,7 +57,7 @@ update_every: 10
 # # JOBs sharing a name are mutually exclusive
 # update_every: 1 # the JOB's data collection frequency
 # priority: 60000 # the JOB's order on the dashboard
-# retries: 60 # the JOB's number of restoration attempts
+# penalty: yes # the JOB's penalty
 # autodetection_retry: 0 # the JOB's re-check interval in seconds
 #
 # Additionally to the above, postfix also supports the following:
@@ -19,7 +19,6 @@ from bases.FrameworkServices.SimpleService import SimpleService
 # default module values
 update_every = 1
 priority = 60000
-retries = 60

 METRICS = {
     'DATABASE': [
@@ -27,11 +27,9 @@
 # If unset, the default for python.d.plugin is used.
 # priority: 60000

-# retries sets the number of retries to be made in case of failures.
-# If unset, the default for python.d.plugin is used.
-# Attempts to restore the service are made once every update_every
-# and only if the module has collected values in the past.
-# retries: 60
+# penalty indicates whether to apply penalty to update_every in case of failures.
+# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
+# penalty: yes

 # autodetection_retry sets the job re-check interval in seconds.
 # The job is not deleted if check fails.
@@ -58,7 +56,7 @@
 # # JOBs sharing a name are mutually exclusive
 # update_every: 1 # the JOB's data collection frequency
 # priority: 60000 # the JOB's order on the dashboard
-# retries: 60 # the JOB's number of restoration attempts
+# penalty: yes # the JOB's penalty
 # autodetection_retry: 0 # the JOB's re-check interval in seconds
 #
 # A single connection is required in order to pull statistics.
@@ -9,8 +9,6 @@ from json import loads
 from bases.FrameworkServices.UrlService import UrlService

 priority = 60000
-retries = 60
 # update_every = 3

 ORDER = ['questions', 'cache_usage', 'cache_size', 'latency']
 CHARTS = {
@@ -27,11 +27,9 @@
 # If unset, the default for python.d.plugin is used.
 # priority: 60000

-# retries sets the number of retries to be made in case of failures.
-# If unset, the default for python.d.plugin is used.
-# Attempts to restore the service are made once every update_every
-# and only if the module has collected values in the past.
-# retries: 60
+# penalty indicates whether to apply penalty to update_every in case of failures.
+# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
+# penalty: yes

 # autodetection_retry sets the job re-check interval in seconds.
 # The job is not deleted if check fails.
@@ -58,7 +56,7 @@
 # # JOBs sharing a name are mutually exclusive
 # update_every: 1 # the JOB's data collection frequency
 # priority: 60000 # the JOB's order on the dashboard
-# retries: 60 # the JOB's number of restoration attempts
+# penalty: yes # the JOB's penalty
 # autodetection_retry: 0 # the JOB's re-check interval in seconds
 #
 # Additionally to the above, apache also supports the following:
@@ -8,7 +8,6 @@ from bases.FrameworkServices.MySQLService import MySQLService
 # default module values (can be overridden per job in `config`)
 # update_every = 3
 priority = 60000
-retries = 60


 def query(table, *params):
@@ -27,11 +27,9 @@
 # If unset, the default for python.d.plugin is used.
 # priority: 60000

-# retries sets the number of retries to be made in case of failures.
-# If unset, the default for python.d.plugin is used.
-# Attempts to restore the service are made once every update_every
-# and only if the module has collected values in the past.
-# retries: 60
+# penalty indicates whether to apply penalty to update_every in case of failures.
+# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
+# penalty: yes

 # autodetection_retry sets the job re-check interval in seconds.
 # The job is not deleted if check fails.
@@ -58,7 +56,7 @@
 # # JOBs sharing a name are mutually exclusive
 # update_every: 1 # the JOB's data collection frequency
 # priority: 60000 # the JOB's order on the dashboard
-# retries: 60 # the JOB's number of restoration attempts
+# penalty: yes # the JOB's penalty
 # autodetection_retry: 0 # the JOB's re-check interval in seconds
 #
 # Additionally to the above, proxysql also supports the following:
@@ -26,16 +26,13 @@ puppetdb:
   tls_cert_file: /path/to/client.crt
   tls_key_file: /path/to/client.key
   autodetection_retry: 1
-  retries: 3600

 puppetserver:
   url: 'https://fqdn.example.com:8140'
   autodetection_retry: 1
-  retries: 3600
 ```

-When no configuration is given then `https://fqdn.example.com:8140` is
-tried without any retries.
+When no configuration is given, module uses `https://fqdn.example.com:8140`.

 ### notes
@@ -17,8 +17,7 @@ import socket

 update_every = 5
 priority = 60000
-# very long clojure-based service startup time
-retries = 180

 MB = 1048576
 CPU_SCALE = 1000
@@ -27,11 +27,9 @@
 # If unset, the default for python.d.plugin is used.
 # priority: 60000

-# retries sets the number of retries to be made in case of failures.
-# If unset, the default for python.d.plugin is used.
-# Attempts to restore the service are made once every update_every
-# and only if the module has collected values in the past.
-# retries: 60
+# penalty indicates whether to apply penalty to update_every in case of failures.
+# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
+# penalty: yes

 # autodetection_retry sets the job re-check interval in seconds.
 # The job is not deleted if check fails.
@@ -58,7 +56,7 @@
 # # JOBs sharing a name are mutually exclusive
 # update_every: 1 # the JOB's data collection frequency
 # priority: 60000 # the JOB's order on the dashboard
-# retries: 60 # the JOB's number of restoration attempts
+# penalty: yes # the JOB's penalty
 # autodetection_retry: 0 # the JOB's re-check interval in seconds
 #
 # These configuration comes from UrlService base:
@@ -89,10 +87,8 @@
 # tls_cert_file: /path/to/client.crt
 # tls_key_file: /path/to/client.key
 # autodetection_retry: 1
-# retries: 3600
 #
 # puppetserver:
 # url: 'https://fqdn.example.com:8140'
 # autodetection_retry: 1
-# retries: 3600
 #
@@ -48,10 +48,10 @@ except ImportError:
     from third_party.ordereddict import OrderedDict

 BASE_CONFIG = {'update_every': os.getenv('NETDATA_UPDATE_EVERY', 1),
-               'retries': 60,
                'priority': 60000,
                'autodetection_retry': 0,
                'chart_cleanup': 10,
+               'penalty': True,
                'name': str()}
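The new `BASE_CONFIG` above drops `retries` and gains `penalty: True`. A minimal sketch of how such plugin-level defaults layer under module and job settings (the `job_config` helper and the override values are illustrative, not the plugin's actual code; the `NETDATA_UPDATE_EVERY` environment override is elided for brevity):

```python
# Plugin-wide defaults, mirroring the BASE_CONFIG keys from the diff.
BASE_CONFIG = {'update_every': 1,
               'priority': 60000,
               'autodetection_retry': 0,
               'chart_cleanup': 10,
               'penalty': True,
               'name': str()}


def job_config(module_overrides, job_overrides):
    """Hypothetical merge helper: job settings win over module settings,
    which in turn win over the plugin-wide defaults."""
    conf = dict(BASE_CONFIG)
    conf.update(module_overrides)
    conf.update(job_overrides)
    return conf


# A module collecting every 5s, with one job opting out of the penalty:
conf = job_config({'update_every': 5}, {'penalty': False})
```

Because `penalty` sits in `BASE_CONFIG`, every job gets it enabled unless a module or job explicitly turns it off.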
@@ -18,6 +18,7 @@ RUNTIME_CHART_UPDATE = 'BEGIN netdata.runtime_{job_name} {since_last}\n' \
                        'END\n'

 PENALTY_EVERY = 5
+MAX_PENALTY = 10 * 60  # 10 minutes


 class RuntimeCounters:
@@ -26,7 +27,7 @@ class RuntimeCounters:
         :param configuration: <dict>
         """
         self.update_every = int(configuration.pop('update_every'))
-        self.max_retries = int(configuration.pop('retries'))
+        self.do_penalty = configuration.pop('penalty')

         self.start_mono = 0
         self.start_real = 0
@@ -34,6 +35,7 @@ class RuntimeCounters:
         self.penalty = 0
         self.elapsed = 0
         self.prev_update = 0

+        self.runs = 1

     def calc_next(self):
@@ -49,10 +51,8 @@ class RuntimeCounters:
     def handle_retries(self):
         self.retries += 1
-        if self.retries % PENALTY_EVERY:
-            return True
-        self.penalty = self.retries * self.update_every / 2
-        return self.retries < self.max_retries
+        if self.do_penalty and self.retries % PENALTY_EVERY == 0:
+            self.penalty = round(min(self.retries * self.update_every / 2, MAX_PENALTY))


 class SimpleService(Thread, PythonDLimitedLogger, OldVersionCompatibility, object):
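The effect of the rewritten `handle_retries` can be sketched in isolation. This is a simplified model of the schedule only (the `penalty_schedule` generator is illustrative, not part of the plugin): the penalty is recomputed on every `PENALTY_EVERY`-th consecutive failure, kept unchanged in between, and capped at `MAX_PENALTY`.

```python
PENALTY_EVERY = 5      # penalty recomputed on every 5th consecutive failure
MAX_PENALTY = 10 * 60  # cap: 10 minutes, as in the diff


def penalty_schedule(update_every, failures):
    """Yield the penalty (seconds) after each consecutive failed update,
    mirroring the new handle_retries() logic."""
    penalty = 0
    for retries in range(1, failures + 1):
        if retries % PENALTY_EVERY == 0:
            penalty = round(min(retries * update_every / 2, MAX_PENALTY))
        yield penalty


# With update_every=2: failures 1-4 add no penalty, the 5th sets it to 5s,
# the 10th to 10s, and very long outages stay capped at 600s.
```

Note the behavioural change this encodes: a failing job is no longer abandoned after `max_retries`; it just collects more and more slowly until data comes back.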
@@ -180,10 +180,7 @@ class SimpleService(Thread, PythonDLimitedLogger, OldVersionCompatibility, objec
         :return: None
         """
         job = self._runtime_counters
-        self.debug('started, update frequency: {freq}, retries: {retries}'.format(
-            freq=job.update_every,
-            retries=job.max_retries - job.retries),
-        )
+        self.debug('started, update frequency: {freq}'.format(freq=job.update_every))

         while True:
             job.sleep_until_next()
@@ -201,8 +198,7 @@ class SimpleService(Thread, PythonDLimitedLogger, OldVersionCompatibility, objec
                 job.runs += 1

             if not updated:
-                if not job.handle_retries():
-                    return
+                job.handle_retries()
             else:
                 job.elapsed = int((monotonic() - job.start_mono) * 1e3)
                 job.prev_update = job.start_real
@@ -210,10 +206,10 @@ class SimpleService(Thread, PythonDLimitedLogger, OldVersionCompatibility, objec
             safe_print(RUNTIME_CHART_UPDATE.format(job_name=self.name,
                                                    since_last=since,
                                                    elapsed=job.elapsed))
-            self.debug('update => [{status}] (elapsed time: {elapsed}, '
-                       'retries left: {retries})'.format(status='OK' if updated else 'FAILED',
-                                                         elapsed=job.elapsed if updated else '-',
-                                                         retries=job.max_retries - job.retries))
+            self.debug('update => [{status}] (elapsed time: {elapsed}, failed retries in a row: {retries})'.format(
+                status='OK' if updated else 'FAILED',
+                elapsed=job.elapsed if updated else '-',
+                retries=job.retries))

     def update(self, interval):
         """
@@ -17,7 +17,6 @@ from bases.FrameworkServices.UrlService import UrlService
 # default module values (can be overridden per job in `config`)
 update_every = 1
 priority = 60000
-retries = 60

 METHODS = namedtuple('METHODS', ['get_data', 'url', 'stats'])
@@ -27,11 +27,9 @@
 # If unset, the default for python.d.plugin is used.
 # priority: 60000

-# retries sets the number of retries to be made in case of failures.
-# If unset, the default for python.d.plugin is used.
-# Attempts to restore the service are made once every update_every
-# and only if the module has collected values in the past.
-# retries: 60
+# penalty indicates whether to apply penalty to update_every in case of failures.
+# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
+# penalty: yes

 # autodetection_retry sets the job re-check interval in seconds.
 # The job is not deleted if check fails.
@@ -58,7 +56,7 @@
 # # JOBs sharing a name are mutually exclusive
 # update_every: 1 # the JOB's data collection frequency
 # priority: 60000 # the JOB's order on the dashboard
-# retries: 60 # the JOB's number of restoration attempts
+# penalty: yes # the JOB's penalty
 # autodetection_retry: 0 # the JOB's re-check interval in seconds
 #
 # Additionally to the above, rabbitmq plugin also supports the following:
@@ -27,11 +27,9 @@
 # If unset, the default for python.d.plugin is used.
 # priority: 60000

-# retries sets the number of retries to be made in case of failures.
-# If unset, the default for python.d.plugin is used.
-# Attempts to restore the service are made once every update_every
-# and only if the module has collected values in the past.
-# retries: 60
+# penalty indicates whether to apply penalty to update_every in case of failures.
+# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
+# penalty: yes

 # autodetection_retry sets the job re-check interval in seconds.
 # The job is not deleted if check fails.
@@ -58,7 +56,7 @@
 # # JOBs sharing a name are mutually exclusive
 # update_every: 1 # the JOB's data collection frequency
 # priority: 60000 # the JOB's order on the dashboard
-# retries: 60 # the JOB's number of restoration attempts
+# penalty: yes # the JOB's penalty
 # autodetection_retry: 0 # the JOB's re-check interval in seconds
 #
 # Additionally to the above, redis also supports the following:
@@ -27,11 +27,9 @@
 # If unset, the default for python.d.plugin is used.
 # priority: 60000

-# retries sets the number of retries to be made in case of failures.
-# If unset, the default for python.d.plugin is used.
-# Attempts to restore the service are made once every update_every
-# and only if the module has collected values in the past.
-# retries: 60
+# penalty indicates whether to apply penalty to update_every in case of failures.
+# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
+# penalty: yes

 # autodetection_retry sets the job re-check interval in seconds.
 # The job is not deleted if check fails.
@@ -58,7 +56,7 @@
 # # JOBs sharing a name are mutually exclusive
 # update_every: 1 # the JOB's data collection frequency
 # priority: 60000 # the JOB's order on the dashboard
-# retries: 60 # the JOB's number of restoration attempts
+# penalty: yes # the JOB's penalty
 # autodetection_retry: 0 # the JOB's re-check interval in seconds
 #
 # Additionally to the above, rethinkdb also supports the following:
@@ -10,7 +10,6 @@ from bases.FrameworkServices.UrlService import UrlService
 # default module values (can be overridden per job in `config`)
 # update_every = 2
 priority = 60000
-retries = 60

 # charts order (can be overridden if you want less charts, or different order)
 ORDER = ['bandwidth', 'peers', 'dht']
@@ -27,11 +27,9 @@
 # If unset, the default for python.d.plugin is used.
 # priority: 60000

-# retries sets the number of retries to be made in case of failures.
-# If unset, the default for python.d.plugin is used.
-# Attempts to restore the service are made once every update_every
-# and only if the module has collected values in the past.
-# retries: 60
+# penalty indicates whether to apply penalty to update_every in case of failures.
+# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
+# penalty: yes

 # autodetection_retry sets the job re-check interval in seconds.
 # The job is not deleted if check fails.
@@ -58,7 +56,7 @@
 # # JOBs sharing a name are mutually exclusive
 # update_every: 1 # the JOB's data collection frequency
 # priority: 60000 # the JOB's order on the dashboard
-# retries: 60 # the JOB's number of restoration attempts
+# penalty: yes # the JOB's penalty
 # autodetection_retry: 0 # the JOB's re-check interval in seconds
 #
 # Additionally to the above, RetroShare also supports the following:
@@ -27,7 +27,6 @@ disabled_by_default = True
 # default module values (can be overridden per job in `config`)
 update_every = 5
 priority = 60000
-retries = 60

 ORDER = [
     'syscall_rw',
@@ -27,11 +27,9 @@ update_every: 5
 # If unset, the default for python.d.plugin is used.
 # priority: 60000

-# retries sets the number of retries to be made in case of failures.
-# If unset, the default for python.d.plugin is used.
-# Attempts to restore the service are made once every update_every
-# and only if the module has collected values in the past.
-# retries: 60
+# penalty indicates whether to apply penalty to update_every in case of failures.
+# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
+# penalty: yes

 # autodetection_retry sets the job re-check interval in seconds.
 # The job is not deleted if check fails.
@@ -58,5 +56,5 @@ update_every: 5
 # # JOBs sharing a name are mutually exclusive
 # update_every: 1 # the JOB's data collection frequency
 # priority: 60000 # the JOB's order on the dashboard
-# retries: 60 # the JOB's number of restoration attempts
+# penalty: yes # the JOB's penalty
 # autodetection_retry: 0 # the JOB's re-check interval in seconds
@@ -19,11 +19,9 @@
 # If unset, the default for python.d.plugin is used.
 # priority: 60000

-# retries sets the number of retries to be made in case of failures.
-# If unset, the default for python.d.plugin is used.
-# Attempts to restore the service are made once every update_every
-# and only if the module has collected values in the past.
-# retries: 60
+# penalty indicates whether to apply penalty to update_every in case of failures.
+# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
+# penalty: yes

 # autodetection_retry sets the job re-check interval in seconds.
 # The job is not deleted if check fails.
@@ -27,11 +27,9 @@
 # If unset, the default for python.d.plugin is used.
 # priority: 60000

-# retries sets the number of retries to be made in case of failures.
-# If unset, the default for python.d.plugin is used.
-# Attempts to restore the service are made once every update_every
-# and only if the module has collected values in the past.
-# retries: 60
+# penalty indicates whether to apply penalty to update_every in case of failures.
+# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
+# penalty: yes

 # autodetection_retry sets the job re-check interval in seconds.
 # The job is not deleted if check fails.
@@ -58,7 +56,7 @@
 # # JOBs sharing a name are mutually exclusive
 # update_every: 1 # the JOB's data collection frequency
 # priority: 60000 # the JOB's order on the dashboard
-# retries: 60 # the JOB's number of restoration attempts
+# penalty: yes # the JOB's penalty
 # autodetection_retry: 0 # the JOB's re-check interval in seconds
 #
 # Additionally to the above, smartd_log also supports the following:
@@ -27,11 +27,9 @@
 # If unset, the default for python.d.plugin is used.
 # priority: 60000

-# retries sets the number of retries to be made in case of failures.
-# If unset, the default for python.d.plugin is used.
-# Attempts to restore the service are made once every update_every
-# and only if the module has collected values in the past.
-# retries: 60
+# penalty indicates whether to apply penalty to update_every in case of failures.
+# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
+# penalty: yes

 # autodetection_retry sets the job re-check interval in seconds.
 # The job is not deleted if check fails.
@@ -58,7 +56,7 @@
 # # JOBs sharing a name are mutually exclusive
 # update_every: 1 # the JOB's data collection frequency
 # priority: 60000 # the JOB's order on the dashboard
-# retries: 60 # the JOB's number of restoration attempts
+# penalty: yes # the JOB's penalty
 # autodetection_retry: 0 # the JOB's re-check interval in seconds
 #
 # In addition to the above, spigotmc supports the following: