modularized all source code (#4391)

* modularized all external plugins

* added README.md in plugins

* fixed title

* fixed typo

* relative link to external plugins

* external plugins configuration README

* added plugins link

* remove plugins link

* plugin names are links

* added links to external plugins

* removed unnecessary spacing

* list to table

* added language

* fixed typo

* list to table on internal plugins

* added more documentation to internal plugins

* moved python, node, and bash code and configs into the external plugins

* added statsd README

* fix bug with corrupting config.h every 2nd compilation

* moved all config files together with their code

* more documentation

* diskspace info

* fixed broken links in apps.plugin

* added backends docs

* updated plugins readme

* move nc-backend.sh to backends

* created daemon directory

* moved all code outside src/

* fixed readme indentation

* renamed plugins.d.plugin to plugins.d

* updated readme

* removed linux- from linux plugins

* updated readme

* updated readme

* updated readme

* updated readme

* updated readme

* updated readme

* fixed README.md links

* fixed netdata tree links

* updated codacy, codeclimate and lgtm excluded paths

* update CMakeLists.txt

* updated automake options at top directory

* libnetdata split into directories

* updated READMEs

* updated READMEs

* updated ARL docs

* updated ARL docs

* moved /plugins to /collectors

* moved all external plugins outside plugins.d

* updated codacy, codeclimate, lgtm

* updated README

* updated url

* updated readme

* updated readme

* updated readme

* updated readme

* moved api and web into webserver

* web/api web/gui web/server

* modularized webserver

* removed web/gui/version.txt
Costa Tsaousis 2018-10-15 23:16:42 +03:00 committed by GitHub
parent 1ad4f1bcfc
commit 8fbf817ef8
835 changed files with 9851 additions and 6415 deletions


@@ -1,15 +1,15 @@
---
exclude_paths:
- python.d/python_modules/pyyaml2/**
- python.d/python_modules/pyyaml3/**
- python.d/python_modules/urllib3/**
- python.d/python_modules/lm_sensors.py
- collectors/python.d.plugin/python_modules/pyyaml2/**
- collectors/python.d.plugin/python_modules/pyyaml3/**
- collectors/python.d.plugin/python_modules/urllib3/**
- collectors/python.d.plugin/python_modules/lm_sensors.py
- web/css/**
- web/lib/**
- web/old/**
- node.d/node_modules/lib/**
- node.d/node_modules/asn1-ber.js
- node.d/node_modules/net-snmp.js
- node.d/node_modules/pixl-xml.js
- node.d/node_modules/extend.js
- collectors/node.d.plugin/node_modules/lib/**
- collectors/node.d.plugin/node_modules/asn1-ber.js
- collectors/node.d.plugin/node_modules/net-snmp.js
- collectors/node.d.plugin/node_modules/pixl-xml.js
- collectors/node.d.plugin/node_modules/extend.js
- tests/**


@@ -81,7 +81,6 @@ plugins:
enabled: false
exclude_patterns:
- ".gitignore"
- "conf.d/"
- ".githooks/"
- "tests/"
- "m4/"
@@ -89,12 +88,12 @@ exclude_patterns:
- "web/lib/"
- "web/fonts/"
- "web/old/"
- "python.d/python_modules/pyyaml2/"
- "python.d/python_modules/pyyaml3/"
- "python.d/python_modules/urllib3/"
- "node.d/node_modules/lib/"
- "node.d/node_modules/asn1-ber.js"
- "node.d/node_modules/extend.js"
- "node.d/node_modules/pixl-xml.js"
- "node.d/node_modules/net-snmp.js"
- "collectors/python.d.plugin/python_modules/pyyaml2/"
- "collectors/python.d.plugin/python_modules/pyyaml3/"
- "collectors/python.d.plugin/python_modules/urllib3/"
- "collectors/node.d.plugin/node_modules/lib/"
- "collectors/node.d.plugin/node_modules/asn1-ber.js"
- "collectors/node.d.plugin/node_modules/extend.js"
- "collectors/node.d.plugin/node_modules/pixl-xml.js"
- "collectors/node.d.plugin/node_modules/net-snmp.js"

.gitignore

@@ -1,6 +1,8 @@
.deps
.libs
.dirstamp
.project
.pydevproject
*.o
*.a
@@ -62,15 +64,15 @@ netdata-coverity-analysis.tgz
.settings/
README
TODO.md
conf.d/netdata.conf
src/TODO.txt
netdata.conf
TODO.txt
web/chart-info/
web/control.html
web/datasource.css
web/gadget.xml
web/index_new.html
web/version.txt
web/gui/chart-info/
web/gui/control.html
web/gui/datasource.css
web/gui/gadget.xml
web/gui/index_new.html
web/gui/version.txt
# related to karma/javascript/node
/node_modules/
@@ -83,15 +85,15 @@ system/netdata.logrotate
system/netdata.service
system/netdata.plist
system/netdata-freebsd
system/edit-config
conf.d/edit-config
plugins.d/alarm-notify.sh
src/plugins/linux-cgroups.plugin/cgroup-name.sh
plugins.d/charts.d.plugin
plugins.d/fping.plugin
plugins.d/node.d.plugin
plugins.d/python.d.plugin
plugins.d/tc-qos-helper.sh
health/alarm-notify.sh
collectors/cgroups.plugin/cgroup-name.sh
collectors/tc.plugin/tc-qos-helper.sh
collectors/charts.d.plugin/charts.d.plugin
collectors/node.d.plugin/node.d.plugin
collectors/python.d.plugin/python.d.plugin
collectors/fping.plugin/fping.plugin
# installer generated files
netdata-uninstaller.sh
@@ -117,7 +119,9 @@ diagrams/*.atxt
diagrams/plantuml.jar
# cppcheck
src/cppcheck-build/
cppcheck-build/
venv/
# debugging / profiling
makeself/debug/


@@ -8,15 +8,15 @@
# https://lgtm.com/help/lgtm/lgtm.yml-configuration-file
path_classifiers:
library:
- python.d/python_modules/third_party/
- python.d/python_modules/urllib3/
- python.d/python_modules/pyyaml2/
- python.d/python_modules/pyyaml3/
- node.d/node_modules/lib/
- node.d/node_modules/asn1-ber.js
- node.d/node_modules/extend.js
- node.d/node_modules/net-snmp.js
- node.d/node_modules/pixl-xml.js
- collectors/python.d.plugin/python_modules/third_party/
- collectors/python.d.plugin/python_modules/urllib3/
- collectors/python.d.plugin/python_modules/pyyaml2/
- collectors/python.d.plugin/python_modules/pyyaml3/
- collectors/node.d.plugin/node_modules/lib/
- collectors/node.d.plugin/node_modules/asn1-ber.js
- collectors/node.d.plugin/node_modules/extend.js
- collectors/node.d.plugin/node_modules/net-snmp.js
- collectors/node.d.plugin/node_modules/pixl-xml.js
- web/lib/
- web/css/
test:


@@ -139,250 +139,254 @@ ENDIF(LINUX)
# netdata files
set(LIBNETDATA_FILES
src/libnetdata/adaptive_resortable_list.c
src/libnetdata/adaptive_resortable_list.h
src/libnetdata/appconfig.c
src/libnetdata/appconfig.h
src/libnetdata/avl.c
src/libnetdata/avl.h
src/libnetdata/clocks.c
src/libnetdata/clocks.h
src/libnetdata/common.c
src/libnetdata/dictionary.c
src/libnetdata/dictionary.h
src/libnetdata/eval.c
src/libnetdata/eval.h
src/libnetdata/inlined.h
src/libnetdata/libnetdata.h
src/libnetdata/locks.c
src/libnetdata/locks.h
src/libnetdata/log.c
src/libnetdata/log.h
src/libnetdata/os.c
src/libnetdata/os.h
src/libnetdata/popen.c
src/libnetdata/popen.h
src/libnetdata/procfile.c
src/libnetdata/procfile.h
src/libnetdata/simple_pattern.c
src/libnetdata/simple_pattern.h
src/libnetdata/socket.c
src/libnetdata/socket.h
src/libnetdata/statistical.c
src/libnetdata/statistical.h
src/libnetdata/storage_number.c
src/libnetdata/storage_number.h
src/libnetdata/threads.c
src/libnetdata/threads.h
src/libnetdata/web_buffer.c
src/libnetdata/web_buffer.h
src/libnetdata/url.c
src/libnetdata/url.h
libnetdata/adaptive_resortable_list/adaptive_resortable_list.c
libnetdata/adaptive_resortable_list/adaptive_resortable_list.h
libnetdata/config/appconfig.c
libnetdata/config/appconfig.h
libnetdata/avl/avl.c
libnetdata/avl/avl.h
libnetdata/buffer/buffer.c
libnetdata/buffer/buffer.h
libnetdata/clocks/clocks.c
libnetdata/clocks/clocks.h
libnetdata/dictionary/dictionary.c
libnetdata/dictionary/dictionary.h
libnetdata/eval/eval.c
libnetdata/eval/eval.h
libnetdata/inlined.h
libnetdata/libnetdata.c
libnetdata/libnetdata.h
libnetdata/locks/locks.c
libnetdata/locks/locks.h
libnetdata/log/log.c
libnetdata/log/log.h
libnetdata/os.c
libnetdata/os.h
libnetdata/popen/popen.c
libnetdata/popen/popen.h
libnetdata/procfile/procfile.c
libnetdata/procfile/procfile.h
libnetdata/simple_pattern/simple_pattern.c
libnetdata/simple_pattern/simple_pattern.h
libnetdata/socket/socket.c
libnetdata/socket/socket.h
libnetdata/statistical/statistical.c
libnetdata/statistical/statistical.h
libnetdata/storage_number/storage_number.c
libnetdata/storage_number/storage_number.h
libnetdata/threads/threads.c
libnetdata/threads/threads.h
libnetdata/url/url.c
libnetdata/url/url.h
)
add_library(libnetdata OBJECT ${LIBNETDATA_FILES})
set(APPS_PLUGIN_FILES
src/plugins/apps.plugin/apps_plugin.c
collectors/apps.plugin/apps_plugin.c
)
set(CHECKS_PLUGIN_FILES
src/plugins/checks.plugin/plugin_checks.c
src/plugins/checks.plugin/plugin_checks.h
collectors/checks.plugin/plugin_checks.c
collectors/checks.plugin/plugin_checks.h
)
set(FREEBSD_PLUGIN_FILES
src/plugins/freebsd.plugin/plugin_freebsd.c
src/plugins/freebsd.plugin/plugin_freebsd.h
src/plugins/freebsd.plugin/freebsd_sysctl.c
src/plugins/freebsd.plugin/freebsd_getmntinfo.c
src/plugins/freebsd.plugin/freebsd_getifaddrs.c
src/plugins/freebsd.plugin/freebsd_devstat.c
src/plugins/freebsd.plugin/freebsd_kstat_zfs.c
src/plugins/freebsd.plugin/freebsd_ipfw.c
src/plugins/linux-proc.plugin/zfs_common.c
src/plugins/linux-proc.plugin/zfs_common.h
collectors/freebsd.plugin/plugin_freebsd.c
collectors/freebsd.plugin/plugin_freebsd.h
collectors/freebsd.plugin/freebsd_sysctl.c
collectors/freebsd.plugin/freebsd_getmntinfo.c
collectors/freebsd.plugin/freebsd_getifaddrs.c
collectors/freebsd.plugin/freebsd_devstat.c
collectors/freebsd.plugin/freebsd_kstat_zfs.c
collectors/freebsd.plugin/freebsd_ipfw.c
collectors/proc.plugin/zfs_common.c
collectors/proc.plugin/zfs_common.h
)
set(HEALTH_PLUGIN_FILES
src/health/health.c
src/health/health.h
src/health/health_config.c
src/health/health_json.c
src/health/health_log.c
health/health.c
health/health.h
health/health_config.c
health/health_json.c
health/health_log.c
)
set(IDLEJITTER_PLUGIN_FILES
src/plugins/idlejitter.plugin/plugin_idlejitter.c
src/plugins/idlejitter.plugin/plugin_idlejitter.h
collectors/idlejitter.plugin/plugin_idlejitter.c
collectors/idlejitter.plugin/plugin_idlejitter.h
)
set(CGROUPS_PLUGIN_FILES
src/plugins/linux-cgroups.plugin/sys_fs_cgroup.c
src/plugins/linux-cgroups.plugin/sys_fs_cgroup.h
collectors/cgroups.plugin/sys_fs_cgroup.c
collectors/cgroups.plugin/sys_fs_cgroup.h
)
set(CGROUP_NETWORK_FILES
src/plugins/linux-cgroups.plugin/cgroup-network.c
collectors/cgroups.plugin/cgroup-network.c
)
set(DISKSPACE_PLUGIN_FILES
src/plugins/linux-diskspace.plugin/plugin_diskspace.h
src/plugins/linux-diskspace.plugin/plugin_diskspace.c
collectors/diskspace.plugin/plugin_diskspace.h
collectors/diskspace.plugin/plugin_diskspace.c
)
set(FREEIPMI_PLUGIN_FILES
src/plugins/linux-freeipmi.plugin/freeipmi_plugin.c
collectors/freeipmi.plugin/freeipmi_plugin.c
)
set(NFACCT_PLUGIN_FILES
src/plugins/linux-nfacct.plugin/plugin_nfacct.c
src/plugins/linux-nfacct.plugin/plugin_nfacct.h
collectors/nfacct.plugin/plugin_nfacct.c
collectors/nfacct.plugin/plugin_nfacct.h
)
set(PROC_PLUGIN_FILES
src/plugins/linux-proc.plugin/ipc.c
src/plugins/linux-proc.plugin/plugin_proc.c
src/plugins/linux-proc.plugin/plugin_proc.h
src/plugins/linux-proc.plugin/proc_diskstats.c
src/plugins/linux-proc.plugin/proc_interrupts.c
src/plugins/linux-proc.plugin/proc_softirqs.c
src/plugins/linux-proc.plugin/proc_loadavg.c
src/plugins/linux-proc.plugin/proc_meminfo.c
src/plugins/linux-proc.plugin/proc_net_dev.c
src/plugins/linux-proc.plugin/proc_net_ip_vs_stats.c
src/plugins/linux-proc.plugin/proc_net_netstat.c
src/plugins/linux-proc.plugin/proc_net_rpc_nfs.c
src/plugins/linux-proc.plugin/proc_net_rpc_nfsd.c
src/plugins/linux-proc.plugin/proc_net_snmp.c
src/plugins/linux-proc.plugin/proc_net_snmp6.c
src/plugins/linux-proc.plugin/proc_net_sctp_snmp.c
src/plugins/linux-proc.plugin/proc_net_sockstat.c
src/plugins/linux-proc.plugin/proc_net_sockstat6.c
src/plugins/linux-proc.plugin/proc_net_softnet_stat.c
src/plugins/linux-proc.plugin/proc_net_stat_conntrack.c
src/plugins/linux-proc.plugin/proc_net_stat_synproxy.c
src/plugins/linux-proc.plugin/proc_self_mountinfo.c
src/plugins/linux-proc.plugin/proc_self_mountinfo.h
src/plugins/linux-proc.plugin/zfs_common.c
src/plugins/linux-proc.plugin/zfs_common.h
src/plugins/linux-proc.plugin/proc_spl_kstat_zfs.c
src/plugins/linux-proc.plugin/proc_stat.c
src/plugins/linux-proc.plugin/proc_sys_kernel_random_entropy_avail.c
src/plugins/linux-proc.plugin/proc_vmstat.c
src/plugins/linux-proc.plugin/proc_uptime.c
src/plugins/linux-proc.plugin/sys_kernel_mm_ksm.c
src/plugins/linux-proc.plugin/sys_devices_system_edac_mc.c
src/plugins/linux-proc.plugin/sys_devices_system_node.c
src/plugins/linux-proc.plugin/sys_fs_btrfs.c
collectors/proc.plugin/ipc.c
collectors/proc.plugin/plugin_proc.c
collectors/proc.plugin/plugin_proc.h
collectors/proc.plugin/proc_diskstats.c
collectors/proc.plugin/proc_interrupts.c
collectors/proc.plugin/proc_softirqs.c
collectors/proc.plugin/proc_loadavg.c
collectors/proc.plugin/proc_meminfo.c
collectors/proc.plugin/proc_net_dev.c
collectors/proc.plugin/proc_net_ip_vs_stats.c
collectors/proc.plugin/proc_net_netstat.c
collectors/proc.plugin/proc_net_rpc_nfs.c
collectors/proc.plugin/proc_net_rpc_nfsd.c
collectors/proc.plugin/proc_net_snmp.c
collectors/proc.plugin/proc_net_snmp6.c
collectors/proc.plugin/proc_net_sctp_snmp.c
collectors/proc.plugin/proc_net_sockstat.c
collectors/proc.plugin/proc_net_sockstat6.c
collectors/proc.plugin/proc_net_softnet_stat.c
collectors/proc.plugin/proc_net_stat_conntrack.c
collectors/proc.plugin/proc_net_stat_synproxy.c
collectors/proc.plugin/proc_self_mountinfo.c
collectors/proc.plugin/proc_self_mountinfo.h
collectors/proc.plugin/zfs_common.c
collectors/proc.plugin/zfs_common.h
collectors/proc.plugin/proc_spl_kstat_zfs.c
collectors/proc.plugin/proc_stat.c
collectors/proc.plugin/proc_sys_kernel_random_entropy_avail.c
collectors/proc.plugin/proc_vmstat.c
collectors/proc.plugin/proc_uptime.c
collectors/proc.plugin/sys_kernel_mm_ksm.c
collectors/proc.plugin/sys_devices_system_edac_mc.c
collectors/proc.plugin/sys_devices_system_node.c
collectors/proc.plugin/sys_fs_btrfs.c
)
set(TC_PLUGIN_FILES
src/plugins/linux-tc.plugin/plugin_tc.c
src/plugins/linux-tc.plugin/plugin_tc.h
collectors/tc.plugin/plugin_tc.c
collectors/tc.plugin/plugin_tc.h
)
set(MACOS_PLUGIN_FILES
src/plugins/macos.plugin/plugin_macos.c
src/plugins/macos.plugin/plugin_macos.h
src/plugins/macos.plugin/macos_sysctl.c
src/plugins/macos.plugin/macos_mach_smi.c
src/plugins/macos.plugin/macos_fw.c
collectors/macos.plugin/plugin_macos.c
collectors/macos.plugin/plugin_macos.h
collectors/macos.plugin/macos_sysctl.c
collectors/macos.plugin/macos_mach_smi.c
collectors/macos.plugin/macos_fw.c
)
set(PLUGINSD_PLUGIN_FILES
src/plugins/plugins.d.plugin/plugins_d.c
src/plugins/plugins.d.plugin/plugins_d.h
collectors/plugins.d/plugins_d.c
collectors/plugins.d/plugins_d.h
)
set(REGISTRY_PLUGIN_FILES
src/registry/registry.c
src/registry/registry.h
src/registry/registry_db.c
src/registry/registry_init.c
src/registry/registry_internals.c
src/registry/registry_internals.h
src/registry/registry_log.c
src/registry/registry_machine.c
src/registry/registry_machine.h
src/registry/registry_person.c
src/registry/registry_person.h
src/registry/registry_url.c
src/registry/registry_url.h
registry/registry.c
registry/registry.h
registry/registry_db.c
registry/registry_init.c
registry/registry_internals.c
registry/registry_internals.h
registry/registry_log.c
registry/registry_machine.c
registry/registry_machine.h
registry/registry_person.c
registry/registry_person.h
registry/registry_url.c
registry/registry_url.h
)
set(STATSD_PLUGIN_FILES
src/plugins/statsd.plugin/statsd.c
src/plugins/statsd.plugin/statsd.h
collectors/statsd.plugin/statsd.c
collectors/statsd.plugin/statsd.h
)
set(RRD_PLUGIN_FILES
src/database/rrdcalc.c
src/database/rrdcalc.h
src/database/rrdcalctemplate.c
src/database/rrdcalctemplate.h
src/database/rrddim.c
src/database/rrddimvar.c
src/database/rrddimvar.h
src/database/rrdfamily.c
src/database/rrdhost.c
src/database/rrd.c
src/database/rrd.h
src/database/rrdset.c
src/database/rrdsetvar.c
src/database/rrdsetvar.h
src/database/rrdvar.c
src/database/rrdvar.h
database/rrdcalc.c
database/rrdcalc.h
database/rrdcalctemplate.c
database/rrdcalctemplate.h
database/rrddim.c
database/rrddimvar.c
database/rrddimvar.h
database/rrdfamily.c
database/rrdhost.c
database/rrd.c
database/rrd.h
database/rrdset.c
database/rrdsetvar.c
database/rrdsetvar.h
database/rrdvar.c
database/rrdvar.h
)
set(WEB_PLUGIN_FILES
src/webserver/web_client.c
src/webserver/web_client.h
src/webserver/web_server.c
src/webserver/web_server.h
)
web/server/web_client.c
web/server/web_client.h
web/server/web_server.c
web/server/web_server.h
web/server/single/single-threaded.c
web/server/single/single-threaded.h
web/server/multi/multi-threaded.c
web/server/multi/multi-threaded.h
web/server/static/static-threaded.c
web/server/static/static-threaded.h
web/server/web_client_cache.c
web/server/web_client_cache.h
)
set(API_PLUGIN_FILES
src/api/rrd2json.c
src/api/rrd2json.h
src/api/web_api_v1.c
src/api/web_api_v1.h
src/api/web_buffer_svg.c
src/api/web_buffer_svg.h
web/api/rrd2json.c
web/api/rrd2json.h
web/api/web_api_v1.c
web/api/web_api_v1.h
web/api/web_buffer_svg.c
web/api/web_buffer_svg.h
)
set(STREAMING_PLUGIN_FILES
src/streaming/rrdpush.c
src/streaming/rrdpush.h
streaming/rrdpush.c
streaming/rrdpush.h
)
set(BACKENDS_PLUGIN_FILES
src/backends/backends.c
src/backends/backends.h
src/backends/graphite/graphite.c
src/backends/graphite/graphite.h
src/backends/json/json.c
src/backends/json/json.h
src/backends/opentsdb/opentsdb.c
src/backends/opentsdb/opentsdb.h
src/backends/prometheus/backend_prometheus.c
src/backends/prometheus/backend_prometheus.h
backends/backends.c
backends/backends.h
backends/graphite/graphite.c
backends/graphite/graphite.h
backends/json/json.c
backends/json/json.h
backends/opentsdb/opentsdb.c
backends/opentsdb/opentsdb.h
backends/prometheus/backend_prometheus.c
backends/prometheus/backend_prometheus.h
)
set(DAEMON_FILES
daemon/common.c
daemon/common.h
daemon/daemon.c
daemon/daemon.h
daemon/global_statistics.c
daemon/global_statistics.h
daemon/main.c
daemon/main.h
daemon/signals.c
daemon/signals.h
daemon/unit_test.c
daemon/unit_test.h
)
set(NETDATA_FILES
src/plugins/all.h
src/common.c
src/common.h
src/daemon.c
src/daemon.h
src/global_statistics.c
src/global_statistics.h
src/main.c
src/main.h
src/signals.c
src/signals.h
src/unit_test.c
src/unit_test.h
collectors/all.h
${DAEMON_FILES}
${API_PLUGIN_FILES}
${BACKENDS_PLUGIN_FILES}
${CHECKS_PLUGIN_FILES}


@@ -1,8 +1,6 @@
#
# Copyright (C) 2015 Alon Bar-Lev <alon.barlev@gmail.com>
# SPDX-License-Identifier: GPL-3.0-or-later
#
AUTOMAKE_OPTIONS=foreign 1.10
AUTOMAKE_OPTIONS=foreign subdir-objects 1.10
ACLOCAL_AMFLAGS = -I build/m4
MAINTAINERCLEANFILES= \
@@ -47,16 +45,9 @@ EXTRA_DIST = \
$(NULL)
SUBDIRS = \
charts.d \
conf.d \
diagrams \
makeself \
node.d \
plugins.d \
python.d \
src \
system \
web \
contrib \
tests \
$(NULL)
@@ -79,3 +70,369 @@ dist_noinst_SCRIPTS= \
netdata-installer.sh \
installer/functions.sh \
$(NULL)
# -----------------------------------------------------------------------------
# Compile netdata binaries
SUBDIRS += \
backends \
collectors \
database \
health \
libnetdata \
registry \
streaming \
web \
$(NULL)
AM_CFLAGS = \
$(OPTIONAL_MATH_CFLAGS) \
$(OPTIONAL_NFACCT_CLFAGS) \
$(OPTIONAL_ZLIB_CFLAGS) \
$(OPTIONAL_UUID_CFLAGS) \
$(OPTIONAL_LIBCAP_LIBS) \
$(OPTIONAL_IPMIMONITORING_CFLAGS) \
$(NULL)
sbin_PROGRAMS =
dist_cache_DATA = installer/.keep
dist_varlib_DATA = installer/.keep
dist_registry_DATA = installer/.keep
dist_log_DATA = installer/.keep
plugins_PROGRAMS =
LIBNETDATA_FILES = \
libnetdata/adaptive_resortable_list/adaptive_resortable_list.c \
libnetdata/adaptive_resortable_list/adaptive_resortable_list.h \
libnetdata/config/appconfig.c \
libnetdata/config/appconfig.h \
libnetdata/avl/avl.c \
libnetdata/avl/avl.h \
libnetdata/buffer/buffer.c \
libnetdata/buffer/buffer.h \
libnetdata/clocks/clocks.c \
libnetdata/clocks/clocks.h \
libnetdata/dictionary/dictionary.c \
libnetdata/dictionary/dictionary.h \
libnetdata/eval/eval.c \
libnetdata/eval/eval.h \
libnetdata/inlined.h \
libnetdata/libnetdata.c \
libnetdata/libnetdata.h \
libnetdata/locks/locks.c \
libnetdata/locks/locks.h \
libnetdata/log/log.c \
libnetdata/log/log.h \
libnetdata/popen/popen.c \
libnetdata/popen/popen.h \
libnetdata/procfile/procfile.c \
libnetdata/procfile/procfile.h \
libnetdata/os.c \
libnetdata/os.h \
libnetdata/simple_pattern/simple_pattern.c \
libnetdata/simple_pattern/simple_pattern.h \
libnetdata/socket/socket.c \
libnetdata/socket/socket.h \
libnetdata/statistical/statistical.c \
libnetdata/statistical/statistical.h \
libnetdata/storage_number/storage_number.c \
libnetdata/storage_number/storage_number.h \
libnetdata/threads/threads.c \
libnetdata/threads/threads.h \
libnetdata/url/url.c \
libnetdata/url/url.h \
$(NULL)
APPS_PLUGIN_FILES = \
collectors/apps.plugin/apps_plugin.c \
$(LIBNETDATA_FILES) \
$(NULL)
CHECKS_PLUGIN_FILES = \
collectors/checks.plugin/plugin_checks.c \
collectors/checks.plugin/plugin_checks.h \
$(NULL)
FREEBSD_PLUGIN_FILES = \
collectors/freebsd.plugin/plugin_freebsd.c \
collectors/freebsd.plugin/plugin_freebsd.h \
collectors/freebsd.plugin/freebsd_sysctl.c \
collectors/freebsd.plugin/freebsd_getmntinfo.c \
collectors/freebsd.plugin/freebsd_getifaddrs.c \
collectors/freebsd.plugin/freebsd_devstat.c \
collectors/freebsd.plugin/freebsd_kstat_zfs.c \
collectors/freebsd.plugin/freebsd_ipfw.c \
collectors/proc.plugin/zfs_common.c \
collectors/proc.plugin/zfs_common.h \
$(NULL)
HEALTH_PLUGIN_FILES = \
health/health.c \
health/health.h \
health/health_config.c \
health/health_json.c \
health/health_log.c \
$(NULL)
IDLEJITTER_PLUGIN_FILES = \
collectors/idlejitter.plugin/plugin_idlejitter.c \
collectors/idlejitter.plugin/plugin_idlejitter.h \
$(NULL)
CGROUPS_PLUGIN_FILES = \
collectors/cgroups.plugin/sys_fs_cgroup.c \
collectors/cgroups.plugin/sys_fs_cgroup.h \
$(NULL)
CGROUP_NETWORK_FILES = \
collectors/cgroups.plugin/cgroup-network.c \
$(LIBNETDATA_FILES) \
$(NULL)
DISKSPACE_PLUGIN_FILES = \
collectors/diskspace.plugin/plugin_diskspace.h \
collectors/diskspace.plugin/plugin_diskspace.c \
$(NULL)
FREEIPMI_PLUGIN_FILES = \
collectors/freeipmi.plugin/freeipmi_plugin.c \
$(LIBNETDATA_FILES) \
$(NULL)
NFACCT_PLUGIN_FILES = \
collectors/nfacct.plugin/plugin_nfacct.c \
collectors/nfacct.plugin/plugin_nfacct.h \
$(NULL)
PROC_PLUGIN_FILES = \
collectors/proc.plugin/ipc.c \
collectors/proc.plugin/plugin_proc.c \
collectors/proc.plugin/plugin_proc.h \
collectors/proc.plugin/proc_diskstats.c \
collectors/proc.plugin/proc_interrupts.c \
collectors/proc.plugin/proc_softirqs.c \
collectors/proc.plugin/proc_loadavg.c \
collectors/proc.plugin/proc_meminfo.c \
collectors/proc.plugin/proc_net_dev.c \
collectors/proc.plugin/proc_net_ip_vs_stats.c \
collectors/proc.plugin/proc_net_netstat.c \
collectors/proc.plugin/proc_net_rpc_nfs.c \
collectors/proc.plugin/proc_net_rpc_nfsd.c \
collectors/proc.plugin/proc_net_snmp.c \
collectors/proc.plugin/proc_net_snmp6.c \
collectors/proc.plugin/proc_net_sctp_snmp.c \
collectors/proc.plugin/proc_net_sockstat.c \
collectors/proc.plugin/proc_net_sockstat6.c \
collectors/proc.plugin/proc_net_softnet_stat.c \
collectors/proc.plugin/proc_net_stat_conntrack.c \
collectors/proc.plugin/proc_net_stat_synproxy.c \
collectors/proc.plugin/proc_self_mountinfo.c \
collectors/proc.plugin/proc_self_mountinfo.h \
collectors/proc.plugin/zfs_common.c \
collectors/proc.plugin/zfs_common.h \
collectors/proc.plugin/proc_spl_kstat_zfs.c \
collectors/proc.plugin/proc_stat.c \
collectors/proc.plugin/proc_sys_kernel_random_entropy_avail.c \
collectors/proc.plugin/proc_vmstat.c \
collectors/proc.plugin/proc_uptime.c \
collectors/proc.plugin/sys_kernel_mm_ksm.c \
collectors/proc.plugin/sys_devices_system_edac_mc.c \
collectors/proc.plugin/sys_devices_system_node.c \
collectors/proc.plugin/sys_fs_btrfs.c \
$(NULL)
TC_PLUGIN_FILES = \
collectors/tc.plugin/plugin_tc.c \
collectors/tc.plugin/plugin_tc.h \
$(NULL)
MACOS_PLUGIN_FILES = \
collectors/macos.plugin/plugin_macos.c \
collectors/macos.plugin/plugin_macos.h \
collectors/macos.plugin/macos_sysctl.c \
collectors/macos.plugin/macos_mach_smi.c \
collectors/macos.plugin/macos_fw.c \
$(NULL)
PLUGINSD_PLUGIN_FILES = \
collectors/plugins.d/plugins_d.c \
collectors/plugins.d/plugins_d.h \
$(NULL)
RRD_PLUGIN_FILES = \
database/rrdcalc.c \
database/rrdcalc.h \
database/rrdcalctemplate.c \
database/rrdcalctemplate.h \
database/rrddim.c \
database/rrddimvar.c \
database/rrddimvar.h \
database/rrdfamily.c \
database/rrdhost.c \
database/rrd.c \
database/rrd.h \
database/rrdset.c \
database/rrdsetvar.c \
database/rrdsetvar.h \
database/rrdvar.c \
database/rrdvar.h \
$(NULL)
API_PLUGIN_FILES = \
web/api/rrd2json.c \
web/api/rrd2json.h \
web/api/web_api_v1.c \
web/api/web_api_v1.h \
web/api/web_buffer_svg.c \
web/api/web_buffer_svg.h \
$(NULL)
STREAMING_PLUGIN_FILES = \
streaming/rrdpush.c \
streaming/rrdpush.h \
$(NULL)
REGISTRY_PLUGIN_FILES = \
registry/registry.c \
registry/registry.h \
registry/registry_db.c \
registry/registry_init.c \
registry/registry_internals.c \
registry/registry_internals.h \
registry/registry_log.c \
registry/registry_machine.c \
registry/registry_machine.h \
registry/registry_person.c \
registry/registry_person.h \
registry/registry_url.c \
registry/registry_url.h \
$(NULL)
STATSD_PLUGIN_FILES = \
collectors/statsd.plugin/statsd.c \
collectors/statsd.plugin/statsd.h \
$(NULL)
WEB_PLUGIN_FILES = \
web/server/web_client.c \
web/server/web_client.h \
web/server/web_server.c \
web/server/web_server.h \
web/server/web_client_cache.c \
web/server/web_client_cache.h \
web/server/single/single-threaded.c \
web/server/single/single-threaded.h \
web/server/multi/multi-threaded.c \
web/server/multi/multi-threaded.h \
web/server/static/static-threaded.c \
web/server/static/static-threaded.h \
$(NULL)
BACKENDS_PLUGIN_FILES = \
backends/backends.c \
backends/backends.h \
backends/graphite/graphite.c \
backends/graphite/graphite.h \
backends/json/json.c \
backends/json/json.h \
backends/opentsdb/opentsdb.c \
backends/opentsdb/opentsdb.h \
backends/prometheus/backend_prometheus.c \
backends/prometheus/backend_prometheus.h \
$(NULL)
DAEMON_FILES = \
daemon/common.c \
daemon/common.h \
daemon/daemon.c \
daemon/daemon.h \
daemon/global_statistics.c \
daemon/global_statistics.h \
daemon/main.c \
daemon/main.h \
daemon/signals.c \
daemon/signals.h \
daemon/unit_test.c \
daemon/unit_test.h \
$(NULL)
NETDATA_FILES = \
collectors/all.h \
$(DAEMON_FILES) \
$(LIBNETDATA_FILES) \
$(API_PLUGIN_FILES) \
$(BACKENDS_PLUGIN_FILES) \
$(CHECKS_PLUGIN_FILES) \
$(HEALTH_PLUGIN_FILES) \
$(IDLEJITTER_PLUGIN_FILES) \
$(PLUGINSD_PLUGIN_FILES) \
$(REGISTRY_PLUGIN_FILES) \
$(RRD_PLUGIN_FILES) \
$(STREAMING_PLUGIN_FILES) \
$(STATSD_PLUGIN_FILES) \
$(WEB_PLUGIN_FILES) \
$(NULL)
if FREEBSD
NETDATA_FILES += \
$(FREEBSD_PLUGIN_FILES) \
$(NULL)
endif
if MACOS
NETDATA_FILES += \
$(MACOS_PLUGIN_FILES) \
$(NULL)
endif
if LINUX
NETDATA_FILES += \
$(CGROUPS_PLUGIN_FILES) \
$(DISKSPACE_PLUGIN_FILES) \
$(NFACCT_PLUGIN_FILES) \
$(PROC_PLUGIN_FILES) \
$(TC_PLUGIN_FILES) \
$(NULL)
endif
NETDATA_COMMON_LIBS = \
$(OPTIONAL_MATH_LIBS) \
$(OPTIONAL_ZLIB_LIBS) \
$(OPTIONAL_UUID_LIBS) \
$(NULL)
sbin_PROGRAMS += netdata
netdata_SOURCES = ../config.h $(NETDATA_FILES)
netdata_LDADD = \
$(NETDATA_COMMON_LIBS) \
$(OPTIONAL_NFACCT_LIBS) \
$(NULL)
if ENABLE_PLUGIN_APPS
plugins_PROGRAMS += apps.plugin
apps_plugin_SOURCES = ../config.h $(APPS_PLUGIN_FILES)
apps_plugin_LDADD = \
$(NETDATA_COMMON_LIBS) \
$(OPTIONAL_LIBCAP_LIBS) \
$(NULL)
endif
if ENABLE_PLUGIN_CGROUP_NETWORK
plugins_PROGRAMS += cgroup-network
cgroup_network_SOURCES = ../config.h $(CGROUP_NETWORK_FILES)
cgroup_network_LDADD = \
$(NETDATA_COMMON_LIBS) \
$(NULL)
endif
if ENABLE_PLUGIN_FREEIPMI
plugins_PROGRAMS += freeipmi.plugin
freeipmi_plugin_SOURCES = ../config.h $(FREEIPMI_PLUGIN_FILES)
freeipmi_plugin_LDADD = \
$(NETDATA_COMMON_LIBS) \
$(OPTIONAL_IPMIMONITORING_LIBS) \
$(NULL)
endif


@@ -1,7 +1,7 @@
# SPDX-License-Identifier: GPL-3.0-or-later
AUTOMAKE_OPTIONS = subdir-objects
MAINTAINERCLEANFILES = Makefile.in
MAINTAINERCLEANFILES = $(srcdir)/Makefile.in
SUBDIRS = \
graphite \
@@ -9,3 +9,11 @@ SUBDIRS = \
opentsdb \
prometheus \
$(NULL)
dist_noinst_DATA = \
README.md \
$(NULL)
dist_noinst_SCRIPTS = \
nc-backend.sh \
$(NULL)

backends/README.md

@@ -0,0 +1,137 @@
netdata supports backends for archiving the metrics, or providing long term dashboards, using grafana or other tools, like this:
![image](https://cloud.githubusercontent.com/assets/2662304/20649711/29f182ba-b4ce-11e6-97c8-ab2c0ab59833.png)
Since netdata collects thousands of metrics per server per second, which would easily congest any backend server when several netdata servers are sending data to it, netdata allows sending metrics at a lower frequency. So, although netdata collects metrics every second, it can send to the backend servers averages or sums every X seconds (though, it can send them per second if you need it to).
## features
1. Supported backends

   1. **graphite** (`plaintext interface`, used by **Graphite**, **InfluxDB**, **KairosDB**, **Blueflood**, **ElasticSearch** via logstash tcp input and the graphite codec, etc)

      metrics are sent to the backend server as `prefix.hostname.chart.dimension`. `prefix` is configured below, `hostname` is the hostname of the machine (can also be configured).

   2. **opentsdb** (`telnet interface`, used by **OpenTSDB**, **InfluxDB**, **KairosDB**, etc)

      metrics are sent to opentsdb as `prefix.chart.dimension` with tag `host=hostname`.

   3. **json** document DBs

      metrics are sent to a document db, `JSON` formatted.

   4. **prometheus** is described at the [prometheus page](prometheus/), since it pulls data from netdata.

2. Only one backend may be active at a time.

3. All metrics are transferred to the backend - netdata does not implement any metric filtering.

4. Three modes of operation (for all backends; see the worked example after this list):

   1. `as collected`: the latest collected value is sent to the backend. This means that if netdata is configured to send data to the backend every 10 seconds, only 1 out of 10 values will appear at the backend server. The values are sent exactly as collected, before any multipliers or dividers are applied and before any interpolation. This mode emulates other data collectors, such as `collectd`.

   2. `average`: the average of the interpolated values shown on the netdata graphs is sent to the backend. So, if netdata is configured to send data to the backend server every 10 seconds, the average of the 10 values shown on the netdata charts will be used. **If you can't decide which mode to use, use `average`.**

   3. `sum` or `volume`: the sum of the interpolated values shown on the netdata graphs is sent to the backend. So, if netdata is configured to send data to the backend every 10 seconds, the sum of the 10 values shown on the netdata charts will be used.

5. The backend code is smart enough not to slow down netdata, regardless of the speed of the backend server.
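As a quick worked example of the three modes, assume `update every = 10` and these hypothetical per-second collected values:

```
collected values : 1 2 3 4 5 6 7 8 9 10

as collected -> 10     (the latest raw value, sent as-is)
average      -> 5.5    (55 / 10)
sum          -> 55
```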
## configuration
In `/etc/netdata/netdata.conf` you should have something like this (if not, download the latest version of `netdata.conf` from your netdata):
```
[backend]
    enabled = yes | no
    type = graphite | opentsdb | json
    host tags = list of TAG=VALUE
    destination = space separated list of [PROTOCOL:]HOST[:PORT] - the first working will be used
    data source = average | sum | as collected
    prefix = netdata
    hostname = my-name
    update every = 10
    buffer on failures = 10
    timeout ms = 20000
    send charts matching = *
    send hosts matching = localhost *
    send names instead of ids = yes
```
- `enabled = yes | no`, enables or disables sending data to a backend
- `type = graphite | opentsdb | json`, selects the backend type
- `destination = host1 host2 host3 ...`, accepts **a space separated list** of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the **first available** to send the metrics.

  The format of each item in this list is: `[PROTOCOL:]IP[:PORT]`.

  `PROTOCOL` can be `udp` or `tcp`. `tcp` is the default and the only one supported by the current backends.

  `IP` can be `XX.XX.XX.XX` (IPv4), or `[XX:XX...XX:XX]` (IPv6). For IPv6 you have to enclose the IP in `[]` to separate it from the port.

  `PORT` can be a number or a service name. If omitted, the default port for the backend will be used (graphite = 2003, opentsdb = 4242).

  Example IPv4:

  ```
  destination = 10.11.14.2:4242 10.11.14.3:4242 10.11.14.4:4242
  ```

  Example IPv6 and IPv4 together:

  ```
  destination = [ffff:...:0001]:2003 10.11.12.1:2003
  ```

  When multiple servers are defined, netdata will try the next one when the first one fails. This allows you to load-balance different servers: list your backend servers in a different order on each netdata.

  netdata also ships [`nc-backend.sh`](https://github.com/netdata/netdata/blob/master/contrib/nc-backend.sh), a script that can be used as a fallback backend to save the metrics to disk and push them to the time-series database when it becomes available again. It can also be used to monitor / trace / debug the metrics netdata generates.
- `data source = as collected`, or `data source = average`, or `data source = sum`, selects the kind of data that will be sent to the backend.
- `hostname = my-name`, is the hostname to be used for sending data to the backend server. By default this is `[global].hostname`.
- `prefix = netdata`, is the prefix to add to all metrics.
- `update every = 10`, is the number of seconds between sending data to the backend. netdata will add some randomness to this number, to prevent stressing the backend server when many netdata servers send data to the same backend. This randomness does not affect the quality of the data, only the time they are sent.
- `buffer on failures = 10`, is the number of iterations (each iteration is `[backend].update every` seconds) to buffer data, when the backend is not available. If the backend fails to receive the data after that many failures, data loss on the backend is expected (netdata will also log it).
- `timeout ms = 20000`, is the timeout in milliseconds to wait for the backend server to process the data. By default this is `2 * update_every * 1000`.
- `send hosts matching = localhost *` includes one or more space separated patterns, using ` * ` as wildcard (any number of times within each pattern). The patterns are checked against the hostname (the localhost is always checked as `localhost`), allowing us to filter which hosts will be sent to the backend when this netdata is a central netdata aggregating multiple hosts. A pattern starting with ` ! ` gives a negative match. So to match all hosts named `*db*` except hosts containing `*slave*`, use `!*slave* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive or negative).
- `send charts matching = *` includes one or more space separated patterns, using ` * ` as wildcard (any number of times within each pattern). The patterns are checked against both chart id and chart name. A pattern starting with ` ! ` gives a negative match. So to match all charts named `apps.*` except charts ending in `*reads`, use `!*reads apps.*` (so, the order is important: the first pattern matching the chart id or the chart name will be used - positive or negative).
- `send names instead of ids = yes | no` controls the metric names netdata should send to backend. netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system and names are human friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
- `host tags = list of TAG=VALUE` defines tags that should be appended on all metrics for the given host. These are currently only sent to opentsdb and prometheus. Please use the appropriate format for each time-series db. For example opentsdb likes them like `TAG1=VALUE1 TAG2=VALUE2`, but prometheus likes `tag1="value1",tag2="value2"`. Host tags are mirrored with database replication (streaming of metrics between netdata servers).
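Putting it together, a minimal configuration for sending averages to a graphite server every 10 seconds could look like this (the destination address here is just an illustration):

```
[backend]
    enabled = yes
    type = graphite
    destination = 10.11.14.2:2003
    data source = average
    update every = 10
```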
## monitoring operation
netdata provides 5 charts:
1. **Buffered metrics**, the number of metrics netdata added to the buffer for dispatching them to the backend server.
2. **Buffered data size**, the amount of data (in KB) netdata added to the buffer.
3. ~~**Backend latency**, the time the backend server needed to process the data netdata sent. If there was a re-connection involved, this includes the connection time.~~ (this chart has been removed, because it only measures the time netdata needs to give the data to the O/S - since the backend servers do not ack the reception, netdata does not have any means to measure this properly)
4. **Backend operations**, the number of operations performed by netdata.
5. **Backend thread CPU usage**, the CPU resources consumed by the netdata thread, that is responsible for sending the metrics to the backend server.
![image](https://cloud.githubusercontent.com/assets/2662304/20463536/eb196084-af3d-11e6-8ee5-ddbd3b4d8449.png)
## alarms
The latest version of the alarms configuration for monitoring the backend is here: https://github.com/netdata/netdata/blob/master/conf.d/health.d/backend.conf
netdata adds 4 alarms:
1. `backend_last_buffering`, number of seconds since the last successful buffering of backend data
2. `backend_metrics_sent`, percentage of metrics sent to the backend server
3. `backend_metrics_lost`, number of metrics lost due to repeating failures to contact the backend server
4. ~~`backend_slow`, the percentage of time between iterations needed by the backend time to process the data sent by netdata~~ (this was misleading and has been removed).
![image](https://cloud.githubusercontent.com/assets/2662304/20463779/a46ed1c2-af43-11e6-91a5-07ca4533cac3.png)
## InfluxDB setup as netdata backend (example)
You can find a blog post with an example of how to use InfluxDB with netdata [here](https://blog.hda.me/2017/01/09/using-netdata-with-influxdb-backend.html)


@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-3.0-or-later
#include "../common.h"
#include "backends.h"
// ----------------------------------------------------------------------------
// How backends work in netdata:


@@ -3,7 +3,7 @@
#ifndef NETDATA_BACKENDS_H
#define NETDATA_BACKENDS_H 1
#include "../common.h"
#include "daemon/common.h"
typedef enum backend_options {
BACKEND_OPTION_NONE = 0,
@@ -42,9 +42,9 @@ extern size_t backend_name_copy(char *d, const char *s, size_t usable);
extern int discard_response(BUFFER *b, const char *backend);
#endif // BACKENDS_INTERNALS
#include "prometheus/backend_prometheus.h"
#include "graphite/graphite.h"
#include "json/json.h"
#include "opentsdb/opentsdb.h"
#include "backends/prometheus/backend_prometheus.h"
#include "backends/graphite/graphite.h"
#include "backends/json/json.h"
#include "backends/opentsdb/opentsdb.h"
#endif /* NETDATA_BACKENDS_H */


@@ -1,4 +1,4 @@
# SPDX-License-Identifier: GPL-3.0-or-later
AUTOMAKE_OPTIONS = subdir-objects
MAINTAINERCLEANFILES = Makefile.in
MAINTAINERCLEANFILES = $(srcdir)/Makefile.in


@@ -4,7 +4,7 @@
#ifndef NETDATA_BACKEND_GRAPHITE_H
#define NETDATA_BACKEND_GRAPHITE_H
#include "../backends.h"
#include "backends/backends.h"
extern int format_dimension_collected_graphite_plaintext(
BUFFER *b // the buffer to write data to


@@ -1,4 +1,4 @@
# SPDX-License-Identifier: GPL-3.0-or-later
AUTOMAKE_OPTIONS = subdir-objects
MAINTAINERCLEANFILES = Makefile.in
MAINTAINERCLEANFILES = $(srcdir)/Makefile.in


@@ -3,7 +3,7 @@
#ifndef NETDATA_BACKEND_JSON_H
#define NETDATA_BACKEND_JSON_H
#include "../backends.h"
#include "backends/backends.h"
extern int format_dimension_collected_json_plaintext(
BUFFER *b // the buffer to write data to


@@ -1,6 +1,10 @@
#!/usr/bin/env bash
# SPDX-License-Identifier: GPL-3.0-or-later
# This is a simple backend database proxy, written in BASH, using the nc command.
# Run the script without any parameters for help.
MODE="${1}"
MY_PORT="${2}"
BACKEND_HOST="${3}"


@@ -1,4 +1,4 @@
# SPDX-License-Identifier: GPL-3.0-or-later
AUTOMAKE_OPTIONS = subdir-objects
MAINTAINERCLEANFILES = Makefile.in
MAINTAINERCLEANFILES = $(srcdir)/Makefile.in


@@ -3,7 +3,7 @@
#ifndef NETDATA_BACKEND_OPENTSDB_H
#define NETDATA_BACKEND_OPENTSDB_H
#include "../backends.h"
#include "backends/backends.h"
extern int format_dimension_collected_opentsdb_telnet(
BUFFER *b // the buffer to write data to


@@ -0,0 +1,8 @@
# SPDX-License-Identifier: GPL-3.0-or-later
AUTOMAKE_OPTIONS = subdir-objects
MAINTAINERCLEANFILES = $(srcdir)/Makefile.in
dist_noinst_DATA = \
README.md \
$(NULL)


@@ -0,0 +1,376 @@
> IMPORTANT: the format netdata sends metrics to prometheus has changed since netdata v1.7. The new prometheus backend for netdata supports a lot more features and is aligned to the development of the rest of the netdata backends.
# Using netdata with Prometheus
Prometheus is a distributed monitoring system which offers a very simple setup along with a robust data model. Recently netdata added support for Prometheus. I'm going to quickly show you how to install both netdata and prometheus on the same server. We can then use grafana pointed at Prometheus to obtain the long-term metrics netdata offers. I'm assuming we are starting on a fresh ubuntu shell (whether you'd like to follow along in a VM or a cloud instance is up to you).
## Installing netdata and prometheus
### Installing netdata
There are a number of ways to install netdata according to [Installation](https://github.com/netdata/netdata/wiki/Installation).
The suggested way is to install the latest netdata and keep it upgraded automatically, using the one-line installation:
```
bash <(curl -Ss https://my-netdata.io/kickstart.sh)
```
At this point we should have netdata listening on port 19999. Point your browser here:
```
http://your.netdata.ip:19999
```
*(replace `your.netdata.ip` with the IP or hostname of the server running netdata)*
### Installing Prometheus
In order to install prometheus we are going to introduce our own systemd startup script along with an example of prometheus.yaml configuration. Prometheus needs to be pointed to your server at a specific target URL for it to scrape netdata's API. Prometheus always uses a pull model, meaning netdata is the passive client within this architecture. Prometheus always initiates the connection with netdata.
#### Download Prometheus
```sh
wget -O /tmp/prometheus-2.3.2.linux-amd64.tar.gz https://github.com/prometheus/prometheus/releases/download/v2.3.2/prometheus-2.3.2.linux-amd64.tar.gz
```
#### Create prometheus system user
```sh
sudo useradd -r prometheus
```
#### Create prometheus directory
```sh
sudo mkdir /opt/prometheus
sudo chown prometheus:prometheus /opt/prometheus
```
#### Untar prometheus directory
```sh
sudo tar -xvf /tmp/prometheus-2.3.2.linux-amd64.tar.gz -C /opt/prometheus --strip=1
```
#### Install prometheus.yml
We will use the following `prometheus.yml` file. Save it at `/opt/prometheus/prometheus.yml`.
Make sure to replace `your.netdata.ip` with the IP or hostname of the host running netdata.
```yaml
# my global config
global:
  scrape_interval: 5s     # Set the scrape interval to every 5 seconds. Default is every 1 minute.
  evaluation_interval: 5s # Evaluate rules every 5 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
    monitor: 'codelab-monitor'

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first.rules"
  # - "second.rules"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      - targets: ['0.0.0.0:9090']

  - job_name: 'netdata-scrape'

    metrics_path: '/api/v1/allmetrics'
    params:
      # format: prometheus | prometheus_all_hosts
      # You can use `prometheus_all_hosts` if you want Prometheus to set the `instance` to your hostname instead of IP
      format: [prometheus]
      #
      # sources: as-collected | raw | average | sum | volume
      # default is: average
      #source: [as-collected]
      #
      # server name for this prometheus - the default is the client IP
      # for netdata to uniquely identify it
      #server: ['prometheus1']

    honor_labels: true

    static_configs:
      - targets: ['{your.netdata.ip}:19999']
```
#### Install nodes.yml
The following is completely optional; it will enable Prometheus to generate alerts from some netdata sources. Tweak the values to your own needs. We will use the `nodes.yml` file below. Save it at `/opt/prometheus/nodes.yml`, and add a *- "nodes.yml"* entry under the *rule_files:* section in the example prometheus.yml file above.
```
groups:
- name: nodes

  rules:
  - alert: node_high_cpu_usage_70
    expr: avg(rate(netdata_cpu_cpu_percentage_average{dimension="idle"}[1m])) by (job) > 70
    for: 1m
    annotations:
      description: '{{ $labels.job }} on ''{{ $labels.job }}'' CPU usage is at {{ humanize $value }}%.'
      summary: CPU alert for container node '{{ $labels.job }}'

  - alert: node_high_memory_usage_70
    expr: 100 / sum(netdata_system_ram_MB_average) by (job)
      * sum(netdata_system_ram_MB_average{dimension=~"free|cached"}) by (job) < 30
    for: 1m
    annotations:
      description: '{{ $labels.job }} memory usage is {{ humanize $value}}%.'
      summary: Memory alert for container node '{{ $labels.job }}'

  - alert: node_low_root_filesystem_space_20
    expr: 100 / sum(netdata_disk_space_GB_average{family="/"}) by (job)
      * sum(netdata_disk_space_GB_average{family="/",dimension=~"avail|cached"}) by (job) < 20
    for: 1m
    annotations:
      description: '{{ $labels.job }} root filesystem space is {{ humanize $value}}%.'
      summary: Root filesystem alert for container node '{{ $labels.job }}'

  - alert: node_root_filesystem_fill_rate_6h
    expr: predict_linear(netdata_disk_space_GB_average{family="/",dimension=~"avail|cached"}[1h], 6 * 3600) < 0
    for: 1h
    labels:
      severity: critical
    annotations:
      description: Container node {{ $labels.job }} root filesystem is going to fill up in 6h.
      summary: Disk fill alert for Swarm node '{{ $labels.job }}'
```
#### Install prometheus.service
Save this service file as `/etc/systemd/system/prometheus.service`:
```
[Unit]
Description=Prometheus Server
AssertPathExists=/opt/prometheus
[Service]
Type=simple
WorkingDirectory=/opt/prometheus
User=prometheus
Group=prometheus
ExecStart=/opt/prometheus/prometheus --config.file=/opt/prometheus/prometheus.yml --log.level=info
ExecReload=/bin/kill -SIGHUP $MAINPID
ExecStop=/bin/kill -SIGINT $MAINPID
[Install]
WantedBy=multi-user.target
```
#### Start Prometheus
```
sudo systemctl start prometheus
sudo systemctl enable prometheus
```
Prometheus should now start and listen on port 9090. Attempt to head there with your browser.
If everything is working correctly when you fetch `http://your.prometheus.ip:9090` you will see a 'Status' tab. Click this, then click on 'targets'. We should see the netdata host as a scraped target.
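You can also verify the scrape target from the shell, assuming default ports (the hostnames below are placeholders, as above):

```sh
# list the configured scrape targets and their health
curl http://your.prometheus.ip:9090/api/v1/targets

# confirm netdata answers on the endpoint prometheus will scrape
curl 'http://your.netdata.ip:19999/api/v1/allmetrics?format=prometheus' | head
```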
---
## netdata support for prometheus
> IMPORTANT: the format netdata sends metrics to prometheus has changed since netdata v1.6. The new format allows easier queries for metrics and supports both `as collected` and normalized metrics.
Before explaining the changes, we have to understand the key differences between netdata and prometheus.
### understanding netdata metrics
##### charts
Each chart in netdata has several properties (common to all its metrics):
- `chart_id` - uniquely identifies a chart.
- `chart_name` - a more human friendly name for `chart_id`, also unique.
- `context` - this is the template of the chart. All disk I/O charts have the same context, all mysql requests charts have the same context, etc. This is used for alarm templates to match all the charts they should be attached to.
- `family` groups a set of charts together. It is used as the submenu of the dashboard.
- `units` is the units for all the metrics attached to the chart.
##### dimensions
Then each netdata chart contains metrics called `dimensions`. All the dimensions of a chart have the same units of measurement, and are contextually in the same category (i.e. the metrics for disk bandwidth are `read` and `write` and they are both in the same chart).
### netdata data source
netdata can send metrics to prometheus from 3 data sources:
- `as collected` or `raw` - this data source sends the metrics to prometheus as they are collected. No conversion is done by netdata. The latest value for each metric is just given to prometheus. This is the method most preferred by prometheus, but it is also the hardest to work with. To work with this data source, you will need to understand how to get meaningful values out of the metrics.
The format of the metrics is: `CONTEXT{chart="CHART",family="FAMILY",dimension="DIMENSION"}`.
If the metric is a counter (`incremental` in netdata lingo), `_total` is appended to the context.
Unlike prometheus, netdata allows each dimension of a chart to have a different algorithm and conversion constants (`multiplier` and `divisor`). In cases where the dimensions of a chart are heterogeneous, netdata will use this format: `CONTEXT_DIMENSION{chart="CHART",family="FAMILY"}`
- `average` - this data source uses the netdata database to send the metrics to prometheus as they are presented on the netdata dashboard. So, all the metrics are sent as gauges, at the units they are presented in the netdata dashboard charts. This is the easiest to work with.
The format of the metrics is: `CONTEXT_UNITS_average{chart="CHART",family="FAMILY",dimension="DIMENSION"}`.
When this source is used, netdata keeps track of the last access time for each prometheus server fetching the metrics. This last access time is used at the subsequent queries of the same prometheus server to identify the time-frame the `average` will be calculated. So, no matter how frequently prometheus scrapes netdata, it will get all the database data. To identify each prometheus server, netdata uses by default the IP of the client fetching the metrics. If there are multiple prometheus servers fetching data from the same netdata, using the same IP, each prometheus server can append `server=NAME` to the URL. Netdata will use this `NAME` to uniquely identify the prometheus server.
- `sum` or `volume`, is like `average` but instead of averaging the values, it sums them.
The format of the metrics is: `CONTEXT_UNITS_sum{chart="CHART",family="FAMILY",dimension="DIMENSION"}`.
All the other operations are the same with `average`.
Keep in mind that early versions of netdata were sending the metrics as: `CHART_DIMENSION{}`.
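To try a specific data source by hand, the corresponding parameters can be appended to the `allmetrics` URL (the `server` name below is only an example):

```
http://your.netdata.ip:19999/api/v1/allmetrics?format=prometheus&source=average&server=prometheus1
http://your.netdata.ip:19999/api/v1/allmetrics?format=prometheus&source=as-collected
```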
### Querying Metrics
Fetch with your web browser this URL:
`http://your.netdata.ip:19999/api/v1/allmetrics?format=prometheus&help=yes`
*(replace `your.netdata.ip` with the ip or hostname of your netdata server)*
netdata will respond with all the metrics it sends to prometheus.
If you search that page for `"system.cpu"` you will find all the metrics netdata is exporting to prometheus for this chart. `system.cpu` is the chart name on the netdata dashboard (on the netdata dashboard all charts have a text heading such as: `Total CPU utilization (system.cpu)`. What we are interested in here is the chart name: `system.cpu`).
Searching for `"system.cpu"` reveals:
```sh
# COMMENT homogeneus chart "system.cpu", context "system.cpu", family "cpu", units "percentage"
# COMMENT netdata_system_cpu_percentage_average: dimension "guest_nice", value is percentage, gauge, dt 1500066653 to 1500066662 inclusive
netdata_system_cpu_percentage_average{chart="system.cpu",family="cpu",dimension="guest_nice"} 0.0000000 1500066662000
# COMMENT netdata_system_cpu_percentage_average: dimension "guest", value is percentage, gauge, dt 1500066653 to 1500066662 inclusive
netdata_system_cpu_percentage_average{chart="system.cpu",family="cpu",dimension="guest"} 1.7837326 1500066662000
# COMMENT netdata_system_cpu_percentage_average: dimension "steal", value is percentage, gauge, dt 1500066653 to 1500066662 inclusive
netdata_system_cpu_percentage_average{chart="system.cpu",family="cpu",dimension="steal"} 0.0000000 1500066662000
# COMMENT netdata_system_cpu_percentage_average: dimension "softirq", value is percentage, gauge, dt 1500066653 to 1500066662 inclusive
netdata_system_cpu_percentage_average{chart="system.cpu",family="cpu",dimension="softirq"} 0.5275442 1500066662000
# COMMENT netdata_system_cpu_percentage_average: dimension "irq", value is percentage, gauge, dt 1500066653 to 1500066662 inclusive
netdata_system_cpu_percentage_average{chart="system.cpu",family="cpu",dimension="irq"} 0.2260836 1500066662000
# COMMENT netdata_system_cpu_percentage_average: dimension "user", value is percentage, gauge, dt 1500066653 to 1500066662 inclusive
netdata_system_cpu_percentage_average{chart="system.cpu",family="cpu",dimension="user"} 2.3362762 1500066662000
# COMMENT netdata_system_cpu_percentage_average: dimension "system", value is percentage, gauge, dt 1500066653 to 1500066662 inclusive
netdata_system_cpu_percentage_average{chart="system.cpu",family="cpu",dimension="system"} 1.7961062 1500066662000
# COMMENT netdata_system_cpu_percentage_average: dimension "nice", value is percentage, gauge, dt 1500066653 to 1500066662 inclusive
netdata_system_cpu_percentage_average{chart="system.cpu",family="cpu",dimension="nice"} 0.0000000 1500066662000
# COMMENT netdata_system_cpu_percentage_average: dimension "iowait", value is percentage, gauge, dt 1500066653 to 1500066662 inclusive
netdata_system_cpu_percentage_average{chart="system.cpu",family="cpu",dimension="iowait"} 0.9671802 1500066662000
# COMMENT netdata_system_cpu_percentage_average: dimension "idle", value is percentage, gauge, dt 1500066653 to 1500066662 inclusive
netdata_system_cpu_percentage_average{chart="system.cpu",family="cpu",dimension="idle"} 92.3630770 1500066662000
```
*(netdata response for `system.cpu` with source=`average`)*
In `average` or `sum` data sources, all values are normalized and are reported to prometheus as gauges. Now, use the 'expression' text form in prometheus. Begin to type the metrics we are looking for: `netdata_system_cpu`. You should see that the text form begins to auto-fill as prometheus knows about this metric.
If the data source was `as collected`, the response would be:
```sh
# COMMENT homogeneus chart "system.cpu", context "system.cpu", family "cpu", units "percentage"
# COMMENT netdata_system_cpu_total: chart "system.cpu", context "system.cpu", family "cpu", dimension "guest_nice", value * 1 / 1 delta gives percentage (counter)
netdata_system_cpu_total{chart="system.cpu",family="cpu",dimension="guest_nice"} 0 1500066716438
# COMMENT netdata_system_cpu_total: chart "system.cpu", context "system.cpu", family "cpu", dimension "guest", value * 1 / 1 delta gives percentage (counter)
netdata_system_cpu_total{chart="system.cpu",family="cpu",dimension="guest"} 63945 1500066716438
# COMMENT netdata_system_cpu_total: chart "system.cpu", context "system.cpu", family "cpu", dimension "steal", value * 1 / 1 delta gives percentage (counter)
netdata_system_cpu_total{chart="system.cpu",family="cpu",dimension="steal"} 0 1500066716438
# COMMENT netdata_system_cpu_total: chart "system.cpu", context "system.cpu", family "cpu", dimension "softirq", value * 1 / 1 delta gives percentage (counter)
netdata_system_cpu_total{chart="system.cpu",family="cpu",dimension="softirq"} 8295 1500066716438
# COMMENT netdata_system_cpu_total: chart "system.cpu", context "system.cpu", family "cpu", dimension "irq", value * 1 / 1 delta gives percentage (counter)
netdata_system_cpu_total{chart="system.cpu",family="cpu",dimension="irq"} 4079 1500066716438
# COMMENT netdata_system_cpu_total: chart "system.cpu", context "system.cpu", family "cpu", dimension "user", value * 1 / 1 delta gives percentage (counter)
netdata_system_cpu_total{chart="system.cpu",family="cpu",dimension="user"} 116488 1500066716438
# COMMENT netdata_system_cpu_total: chart "system.cpu", context "system.cpu", family "cpu", dimension "system", value * 1 / 1 delta gives percentage (counter)
netdata_system_cpu_total{chart="system.cpu",family="cpu",dimension="system"} 35084 1500066716438
# COMMENT netdata_system_cpu_total: chart "system.cpu", context "system.cpu", family "cpu", dimension "nice", value * 1 / 1 delta gives percentage (counter)
netdata_system_cpu_total{chart="system.cpu",family="cpu",dimension="nice"} 505 1500066716438
# COMMENT netdata_system_cpu_total: chart "system.cpu", context "system.cpu", family "cpu", dimension "iowait", value * 1 / 1 delta gives percentage (counter)
netdata_system_cpu_total{chart="system.cpu",family="cpu",dimension="iowait"} 23314 1500066716438
# COMMENT netdata_system_cpu_total: chart "system.cpu", context "system.cpu", family "cpu", dimension "idle", value * 1 / 1 delta gives percentage (counter)
netdata_system_cpu_total{chart="system.cpu",family="cpu",dimension="idle"} 918470 1500066716438
```
*(netdata response for `system.cpu` with source=`as-collected`)*
For more information, check the prometheus documentation.
### Streaming data from upstream hosts
The `format=prometheus` parameter only exports the host's netdata metrics. If you are using the master/slave functionality of netdata, this ignores any upstream hosts, so you should consider using the following in your **prometheus.yml**:
```
metrics_path: '/api/v1/allmetrics'
params:
format: [prometheus_all_hosts]
honor_labels: true
```
This will report all upstream host data, and `honor_labels` will make Prometheus take note of the instance names provided.
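A complete scrape job using these settings might look like the following sketch (the job name and target are assumptions; point the target at your netdata master):
```
scrape_configs:
  - job_name: 'netdata'
    metrics_path: '/api/v1/allmetrics'
    params:
      format: [prometheus_all_hosts]
    honor_labels: true
    static_configs:
      - targets: ['netdata-master.example.com:19999']
```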
### Timestamps
To pass the metrics through prometheus pushgateway, netdata supports the option `&timestamps=no` to send the metrics without timestamps.
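As a sketch, the metrics could be fetched and pushed through the pushgateway like this (the host names and the job name are assumptions):
```sh
# fetch metrics without timestamps and push them to a pushgateway
curl -s 'http://localhost:19999/api/v1/allmetrics?format=prometheus&timestamps=no' | \
    curl --data-binary @- 'http://pushgateway.example.com:9091/metrics/job/netdata'
```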
## netdata host variables
netdata collects various system configuration metrics, like the max number of TCP sockets supported, the max number of files allowed system-wide, various IPC sizes, etc. These metrics are not exposed to prometheus by default.
To expose them, append `variables=yes` to the netdata URL.
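For example (assuming netdata listens on the default port 19999):
```sh
curl 'http://localhost:19999/api/v1/allmetrics?format=prometheus&variables=yes'
```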
### TYPE and HELP
To save bandwidth, and because prometheus does not use them anyway, `# TYPE` and `# HELP` lines are suppressed. If needed, they can be re-enabled via `types=yes` and `help=yes`, e.g. `/api/v1/allmetrics?format=prometheus&types=yes&help=yes`.
### Names and IDs
netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system and names are human friendly labels (also unique).
Most charts and metrics have the same ID and name, but in several cases they are different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
The default is controlled in `netdata.conf`:
```
[backend]
send names instead of ids = yes | no
```
You can override it from prometheus, by appending to the URL:
* `&names=no` to get IDs (the old behaviour)
* `&names=yes` to get names
### Filtering metrics sent to prometheus
netdata can filter the metrics it sends to prometheus with this setting:
```
[backend]
send charts matching = *
```
This setting accepts a space-separated list of patterns to match the **charts** to be sent to prometheus. Each pattern can use `*` as a wildcard, any number of times (e.g. `*a*b*c*` is valid). Patterns starting with `!` give a negative match (e.g. `!*.bad users.* groups.*` will send all the users and groups except the `bad` user and the `bad` group). The order is important: the first match (positive or negative), left to right, is used.
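For example, a hypothetical setting that sends all `system.*` charts except `system.uptime`:
```
[backend]
    send charts matching = !system.uptime system.*
```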
### Changing the prefix of netdata metrics
netdata sends all metrics prefixed with `netdata_`. You can change this in `netdata.conf`, like this:
```
[backend]
prefix = netdata
```
It can also be changed from the URL, by appending `&prefix=netdata`.
### Accuracy of `average` and `sum` data sources
When the data source is set to `average` or `sum`, netdata remembers the last access of each client accessing prometheus metrics and uses this last access time to respond with the `average` or `sum` of all the entries in the database since that access. This means that prometheus servers do not lose data when they access netdata with data source = `average` or `sum`.
To uniquely identify each prometheus server, netdata uses the IP of the client accessing the metrics. If, however, the IP is not sufficient to identify a single prometheus server (e.g. when prometheus servers are accessing netdata through a web proxy, or when multiple prometheus servers are NATed to a single IP), each prometheus server may append `&server=NAME` to the URL. This `NAME` is used by netdata to uniquely identify each prometheus server and keep track of its last access time.
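In prometheus, the `server` parameter can be added to the scrape job parameters, for example (the server name is an assumption):
```
params:
  format: [prometheus]
  server: ['prometheus1']
```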


@ -3,7 +3,7 @@
#ifndef NETDATA_BACKEND_PROMETHEUS_H
#define NETDATA_BACKEND_PROMETHEUS_H 1
#include "../backends.h"
#include "backends/backends.h"
typedef enum prometheus_output_flags {
PROMETHEUS_OUTPUT_NONE = 0,


@ -1,32 +0,0 @@
#
# Copyright (C) 2015 Alon Bar-Lev <alon.barlev@gmail.com>
# SPDX-License-Identifier: GPL-3.0-or-later
#
MAINTAINERCLEANFILES= $(srcdir)/Makefile.in
dist_charts_SCRIPTS = \
$(NULL)
dist_charts_DATA = \
README.md \
ap.chart.sh \
apcupsd.chart.sh \
apache.chart.sh \
cpu_apps.chart.sh \
cpufreq.chart.sh \
example.chart.sh \
exim.chart.sh \
hddtemp.chart.sh \
libreswan.chart.sh \
load_average.chart.sh \
mem_apps.chart.sh \
mysql.chart.sh \
nginx.chart.sh \
nut.chart.sh \
opensips.chart.sh \
phpfpm.chart.sh \
postfix.chart.sh \
sensors.chart.sh \
squid.chart.sh \
tomcat.chart.sh \
$(NULL)


@ -1,344 +0,0 @@
The following charts.d plugins are supported:
---
# hddtemp
The plugin will collect temperatures from disks. It will create one chart with all active disks:
1. **temperature in Celsius**
### configuration
hddtemp needs to be running in daemonized mode
```sh
# host with daemonized hddtemp
hddtemp_host="localhost"
# port on which hddtemp is showing data
hddtemp_port="7634"
# array of included disks
# the default is to include all
hddtemp_disks=()
```
---
# libreswan
The plugin collects bytes-in, bytes-out and uptime for all established libreswan IPSEC tunnels.
The following charts are created, **per tunnel**:
1. **Uptime**
* the uptime of the tunnel
2. **Traffic**
* bytes in
* bytes out
### configuration
Its config file is `/etc/netdata/charts.d/libreswan.conf`.
The plugin executes 2 commands to collect all the information it needs:
```sh
ipsec whack --status
ipsec whack --trafficstatus
```
The first command is used to extract the currently established tunnels, their IDs and their names.
The second command is used to extract the current uptime and traffic.
Most probably user `netdata` will not be able to query libreswan, so the `ipsec` commands will be denied.
The plugin attempts to run `ipsec` as `sudo ipsec ...`, to get access to libreswan statistics.
To allow user `netdata` to execute `sudo ipsec ...`, create the file `/etc/sudoers.d/netdata` with this content:
```
netdata ALL = (root) NOPASSWD: /sbin/ipsec whack --status
netdata ALL = (root) NOPASSWD: /sbin/ipsec whack --trafficstatus
```
Make sure the path `/sbin/ipsec` matches your setup (execute `which ipsec` to find the right path).
---
# mysql
The plugin will monitor one or more mysql servers
It will produce the following charts:
1. **Bandwidth** in kbps
* in
* out
2. **Queries** in queries/sec
* queries
* questions
* slow queries
3. **Operations** in operations/sec
* opened tables
* flush
* commit
* delete
* prepare
* read first
* read key
* read next
* read prev
* read random
* read random next
* rollback
* save point
* update
* write
4. **Table Locks** in locks/sec
* immediate
* waited
5. **Select Issues** in issues/sec
* full join
* full range join
* range
* range check
* scan
6. **Sort Issues** in issues/sec
* merge passes
* range
* scan
### configuration
You can configure multiple database servers. Per server, you can provide the following:
1. a name, anything you like, but keep it short
2. the mysql command to connect to the server
3. the mysql command line options to be used for connecting to the server
Here is an example for 2 servers:
```sh
mysql_opts[server1]="-h server1.example.com"
mysql_opts[server2]="-h server2.example.com --connect_timeout 2"
```
The above will use the `mysql` command found in the system path.
You can also provide a custom mysql command per server, like this:
```sh
mysql_cmds[server2]="/opt/mysql/bin/mysql"
```
The above sets the mysql command only for server2. server1 will use the system default.
If no configuration is given, the plugin will attempt to connect to a mysql server at localhost.
---
# nut
The plugin will collect UPS data for all UPSes configured in the system.
The following charts will be created:
1. **UPS Charge**
* percentage charged
2. **UPS Battery Voltage**
* current voltage
* high voltage
* low voltage
* nominal voltage
3. **UPS Input Voltage**
* current voltage
* fault voltage
* nominal voltage
4. **UPS Input Current**
* nominal current
5. **UPS Input Frequency**
* current frequency
* nominal frequency
6. **UPS Output Voltage**
* current voltage
7. **UPS Load**
* current load
8. **UPS Temperature**
* current temperature
### configuration
This is the internal default for `/etc/netdata/nut.conf`
```sh
# a space separated list of UPS names
# if empty, the list returned by 'upsc -l' will be used
nut_ups=
# how frequently to collect UPS data
nut_update_every=2
```
---
# postfix
The plugin will collect the postfix queue size.
It will create two charts:
1. **queue size in emails**
2. **queue size in KB**
### configuration
This is the internal default for `/etc/netdata/postfix.conf`
```sh
# the postqueue command
# if empty, it will use the one found in the system path
postfix_postqueue=
# how frequently to collect queue size
postfix_update_every=15
```
---
# sensors
The plugin will provide charts for all configured system sensors
> This plugin reads sensors directly from the kernel.
> The `lm-sensors` package is able to perform calculations on the
> kernel-provided values; this plugin does not.
> So, the values graphed are the raw hardware values of the sensors.
The plugin will create netdata charts for:
1. **Temperature**
2. **Voltage**
3. **Current**
4. **Power**
5. **Fans Speed**
6. **Energy**
7. **Humidity**
One chart will be created for each of the above, for every sensor chip found.
### configuration
This is the internal default for `/etc/netdata/sensors.conf`
```sh
# the directory the kernel keeps sensor data
sensors_sys_dir="${NETDATA_HOST_PREFIX}/sys/devices"
# how deep in the tree to check for sensor data
sensors_sys_depth=10
# if set to 1, the script will overwrite internal
# script functions with code generated ones
# leave to 1, is faster
sensors_source_update=1
# how frequently to collect sensor data
# the default is to collect it at every iteration of charts.d
sensors_update_every=
# array of sensors which are excluded
# the default is to include all
sensors_excluded=()
```
---
# squid
The plugin will monitor a squid server.
It will produce 4 charts:
1. **Squid Client Bandwidth** in kbps
* in
* out
* hits
2. **Squid Client Requests** in requests/sec
* requests
* hits
* errors
3. **Squid Server Bandwidth** in kbps
* in
* out
4. **Squid Server Requests** in requests/sec
* requests
* errors
### autoconfig
The plugin will automatically detect squid servers running on
localhost, on ports 3128 or 8080.
It will attempt to download URLs in the form:
- `cache_object://HOST:PORT/counters`
- `/squid-internal-mgr/counters`
If any of them succeeds, it will use it.
### configuration
If you need to configure it by hand, create the file
`/etc/netdata/squid.conf` with the following variables:
- `squid_host=IP` the IP of the squid host
- `squid_port=PORT` the port squid is listening on
- `squid_url="URL"` the URL with the statistics to be fetched from squid
- `squid_timeout=SECONDS` how much time we should wait for squid to respond
- `squid_update_every=SECONDS` the frequency of the data collection
Example `/etc/netdata/squid.conf`:
```sh
squid_host=127.0.0.1
squid_port=3128
squid_url="cache_object://127.0.0.1:3128/counters"
squid_timeout=2
squid_update_every=5
```

collectors/Makefile.am

@ -0,0 +1,28 @@
# SPDX-License-Identifier: GPL-3.0-or-later
MAINTAINERCLEANFILES = $(srcdir)/Makefile.in
SUBDIRS = \
plugins.d \
apps.plugin \
cgroups.plugin \
charts.d.plugin \
checks.plugin \
diskspace.plugin \
fping.plugin \
freebsd.plugin \
freeipmi.plugin \
idlejitter.plugin \
macos.plugin \
nfacct.plugin \
node.d.plugin \
proc.plugin \
python.d.plugin \
statsd.plugin \
tc.plugin \
$(NULL)
dist_noinst_DATA = \
README.md \
$(NULL)

collectors/README.md

@ -0,0 +1,118 @@
# Data Collection Plugins
netdata supports **internal** and **external** data collection plugins:
- **internal** plugins are written in `C` and run as threads inside the netdata daemon.
- **external** plugins may be written in any computer language and are spawned as independent, long-running processes by the netdata daemon.
They communicate with the netdata daemon via `pipes` (`stdout` communication).
To minimize the number of processes spawned for data collection, netdata also supports **plugin orchestrators**.
- **plugin orchestrators** are external plugins that do not collect any data by themselves.
Instead they support data collection **modules** written in the language of the orchestrator.
Usually the orchestrator provides a higher level abstraction, making it ideal for writing new
data collection modules with the minimum of code.
Currently netdata provides the following plugin orchestrators:
BASH v4+ [charts.d.plugin](charts.d.plugin),
node.js [node.d.plugin](node.d.plugin) and
python v2+ (including v3) [python.d.plugin](python.d.plugin).
## Netdata Plugins
plugin|lang|O/S|runs as|modular|description
:---:|:---:|:---:|:---:|:---:|:---
[apps.plugin](apps.plugin/)|`C`|linux, freebsd|external|-|monitors the whole process tree on Linux and FreeBSD and breaks down system resource usage by **process**, **user** and **user group**.
[cgroups.plugin](cgroups.plugin/)|`C`|linux|internal|-|collects resource usage of **Containers**, libvirt **VMs** and **systemd services**, on Linux systems
[charts.d.plugin](charts.d.plugin/)|`BASH` v4+|any|external|yes|a **plugin orchestrator** for data collection modules written in `BASH` v4+.
[checks.plugin](checks.plugin/)|`C`|any|internal|-|a debugging plugin (by default it is disabled)
[diskspace.plugin](diskspace.plugin/)|`C`|linux|internal|-|collects disk space usage metrics on Linux mount points
[fping.plugin](fping.plugin/)|`C`|any|external|-|measures network latency, jitter and packet loss between the monitored node and any number of remote network end points.
[freebsd.plugin](freebsd.plugin/)|`C`|freebsd|internal|yes|collects resource usage and performance data on FreeBSD systems
[freeipmi.plugin](freeipmi.plugin/)|`C`|linux|external|-|collects metrics from enterprise hardware sensors, on Linux servers.
[idlejitter.plugin](idlejitter.plugin/)|`C`|any|internal|-|measures CPU latency and jitter on all operating systems
[macos.plugin](macos.plugin/)|`C`|macos|internal|yes|collects resource usage and performance data on MacOS systems
[nfacct.plugin](nfacct.plugin/)|`C`|linux|internal|-|collects netfilter firewall, connection tracker and accounting metrics using `libmnl` and `libnetfilter_acct`
[node.d.plugin](node.d.plugin/)|`node.js`|any|external|yes|a **plugin orchestrator** for data collection modules written in `node.js`.
[plugins.d](plugins.d/)|`C`|any|internal|-|implements the **external plugins** API and serves external plugins
[proc.plugin](proc.plugin/)|`C`|linux|internal|yes|collects resource usage and performance data on Linux systems
[python.d.plugin](python.d.plugin/)|`python` v2+|any|external|yes|a **plugin orchestrator** for data collection modules written in `python` v2 or v3 (both are supported).
[statsd.plugin](statsd.plugin/)|`C`|any|internal|-|implements a high performance **statsd** server for netdata
[tc.plugin](tc.plugin/)|`C`|linux|internal|-|collects traffic QoS metrics (`tc`) of Linux network interfaces
## Enabling and Disabling plugins
Each plugin can be enabled or disabled via `netdata.conf`, section `[plugins]`.
This section contains a list of all the plugins, each with a boolean setting to enable or disable it.
The exception is `statsd.plugin`, which has its own `[statsd]` section.
Once a plugin is enabled, consult the page of each plugin for additional configuration options.
All **external plugins** are managed by [plugins.d](plugins.d/), which provides additional management options.
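For example, a hypothetical `netdata.conf` fragment that keeps one plugin enabled and disables another might look like this (the plugin names are just examples):
```
[plugins]
    proc = yes
    node.d = no
```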
### Internal Plugins
Each of the internal plugins runs as a thread inside the netdata daemon.
Once this thread has started, the plugin may spawn additional threads according to its design.
#### Internal Plugins API
The internal data collection API consists of the following calls:
```c
collect_data() {
    // collect data here (one iteration)
    collected_number collected_value = collect_a_value();

    // give the metrics to netdata
    static RRDSET *st = NULL; // the chart
    static RRDDIM *rd = NULL; // a dimension attached to this chart

    if(unlikely(!st)) {
        // we haven't created this chart before
        // create it now
        st = rrdset_create_localhost(
                  "type"
                , "id"
                , "name"
                , "family"
                , "context"
                , "Chart Title"
                , "units"
                , "plugin-name"
                , "module-name"
                , priority
                , update_every
                , chart_type
        );

        // attach a metric to it
        rd = rrddim_add(st, "id", "name", multiplier, divider, algorithm);
    }
    else {
        // this chart is already created
        // let netdata know we start a new iteration on it
        rrdset_next(st);
    }

    // give the collected value(s) to the chart
    rrddim_set_by_pointer(st, rd, collected_value);

    // signal netdata we are done with this iteration
    rrdset_done(st);
}
```
Of course, netdata has many libraries to help you with collecting the metrics.
The best way to find your way through them is to examine what other similar plugins do.
### External Plugins
**External plugins** use the API and are managed by [plugins.d](plugins.d/).


@ -3,22 +3,23 @@
#ifndef NETDATA_ALL_H
#define NETDATA_ALL_H 1
#include "../common.h"
#include "../daemon/common.h"
// netdata internal data collection plugins
#include "checks.plugin/plugin_checks.h"
#include "freebsd.plugin/plugin_freebsd.h"
#include "idlejitter.plugin/plugin_idlejitter.h"
#include "linux-cgroups.plugin/sys_fs_cgroup.h"
#include "linux-diskspace.plugin/plugin_diskspace.h"
#include "linux-nfacct.plugin/plugin_nfacct.h"
#include "linux-proc.plugin/plugin_proc.h"
#include "linux-tc.plugin/plugin_tc.h"
#include "cgroups.plugin/sys_fs_cgroup.h"
#include "diskspace.plugin/plugin_diskspace.h"
#include "nfacct.plugin/plugin_nfacct.h"
#include "proc.plugin/plugin_proc.h"
#include "tc.plugin/plugin_tc.h"
#include "macos.plugin/plugin_macos.h"
#include "plugins.d.plugin/plugins_d.h"
#include "statsd.plugin/statsd.h"
#include "plugins.d/plugins_d.h"
// ----------------------------------------------------------------------------
// netdata chart priorities


@ -0,0 +1,13 @@
# SPDX-License-Identifier: GPL-3.0-or-later
AUTOMAKE_OPTIONS = subdir-objects
MAINTAINERCLEANFILES = $(srcdir)/Makefile.in
dist_noinst_DATA = \
README.md \
$(NULL)
dist_libconfig_DATA = \
apps_groups.conf \
$(NULL)


@ -0,0 +1,103 @@
# apps.plugin
This plugin provides charts for 3 sections of the default dashboard:
1. Per application charts
2. Per user charts
3. Per user group charts
## Per application charts
This plugin walks through the entire `/proc` filesystem and aggregates statistics for applications of interest, defined in `/etc/netdata/apps_groups.conf` (the default is [here](apps_groups.conf); to edit it on your system, run `/etc/netdata/edit-config apps_groups.conf`).
The plugin internally builds a process tree (much like `ps fax` does), and groups processes together (evaluating both child and parent processes) so that the result is always a chart with a predefined set of dimensions (of course, only application groups found running are reported).
Using this information, it provides the following charts (one per application group defined in `/etc/netdata/apps_groups.conf`):
1. Total CPU usage
2. Total User CPU usage
3. Total System CPU usage
4. Total Disk Physical Reads
5. Total Disk Physical Writes
6. Total Disk Logical Reads
7. Total Disk Logical Writes
8. Total Open Files (unique files - if a file is found open multiple times, it is counted just once)
9. Total Dedicated Memory (non shared)
10. Total Minor Page Faults
11. Total Number of Processes
12. Total Number of Threads
13. Total Number of Pipes
14. Total Swap Activity (Major Page Faults)
15. Total Open Sockets
## Per User Charts
All the above charts are also grouped by username, using the effective uid of each process.
## Per Group Charts
All the above charts are also grouped by group name, using the effective gid of each process.
## CPU Usage
`apps.plugin` is a complex piece of software and has a lot of work to do (this plugin actually requires more CPU resources than the netdata daemon itself). For each running process, `apps.plugin` reads several `/proc` files to get CPU usage, memory allocated, I/O usage, open file descriptors, etc. Doing this work every second, especially on hosts with several thousands of processes, may increase the CPU resources consumed by the plugin.
In such cases, you may need to lower its data collection frequency. To do this, edit `/etc/netdata/netdata.conf` and find this section:
```
[plugin:apps]
# update every = 1
# command options =
```
Uncomment the line `update every` and set it to a higher number. If you set it to `2`, its CPU resources will be cut in half, and data collection will happen once every 2 seconds.
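For example, to collect the metrics once every 2 seconds:
```
[plugin:apps]
    update every = 2
```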
## Configuration
The configuration file is `/etc/netdata/apps_groups.conf` (the default is [here](apps_groups.conf)).
To edit it on your system run `/etc/netdata/edit-config apps_groups.conf`.
The configuration file accepts multiple lines, each having this format:
```txt
group: process1 process2 ...
```
Process names should be given as they appear when running `ps -e`. The program will actually match the process names in the `/proc/PID/status` file. So, to be sure the name is right for a process running with PID `X`, do this:
```sh
cat /proc/X/status
```
The first line on the output is `Name: xxxxx`. This is the process name `apps.plugin` sees.
The order of the lines in the file is important only if you include the same process name in multiple groups.
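For example, two hypothetical groups for database and web server processes:
```txt
sql: mysqld postgres
web: nginx apache2
```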
## Apps plugin is missing information
`apps.plugin` requires additional privileges to collect all the information it needs. The problem is described in issue #157.
When netdata is installed, `apps.plugin` is given the capabilities `cap_dac_read_search,cap_sys_ptrace+ep`. If that is not possible (i.e. `setcap` fails), `apps.plugin` is setuid to `root`.
## linux capabilities in containers
There are a few cases, like `docker` and `virtuozzo` containers, where `setcap` succeeds, but the capabilities are silently ignored (in `lxc` containers `setcap` fails).
In the cases where `setcap` succeeds but the capabilities do not work, you will have to setuid `apps.plugin` to root, by running these commands:
```sh
chown root:netdata /usr/libexec/netdata/plugins.d/apps.plugin
chmod 4750 /usr/libexec/netdata/plugins.d/apps.plugin
```
You will have to run these every time you update netdata.
### Is it safe to give `apps.plugin` these privileges?
`apps.plugin` performs a hard-coded function of building the process tree in memory, iterating forever, collecting metrics for each running process and sending them to netdata. This is a one-way communication, from `apps.plugin` to netdata.
So, since `apps.plugin` cannot be instructed by netdata about the actions it performs, we think it is pretty safe to allow it to have these increased privileges.
Keep in mind that `apps.plugin` will still run without these permissions, but it will not be able to collect all the data for every process.


@ -0,0 +1,20 @@
# SPDX-License-Identifier: GPL-3.0-or-later
AUTOMAKE_OPTIONS = subdir-objects
MAINTAINERCLEANFILES = $(srcdir)/Makefile.in
CLEANFILES = \
cgroup-name.sh \
$(NULL)
include $(top_srcdir)/build/subst.inc
SUFFIXES = .in
dist_plugins_SCRIPTS = \
cgroup-name.sh \
cgroup-network-helper.sh \
$(NULL)
dist_noinst_DATA = \
cgroup-name.sh.in \
$(NULL)


@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-3.0-or-later
#include "../../common.h"
#include "../../daemon/common.h"
#ifdef HAVE_SETNS
#ifndef _GNU_SOURCE


@ -3,7 +3,7 @@
#ifndef NETDATA_SYS_FS_CGROUP_H
#define NETDATA_SYS_FS_CGROUP_H 1
#include "../../common.h"
#include "../../daemon/common.h"
#if (TARGET_OS == OS_LINUX)
@ -20,7 +20,7 @@
extern void *cgroups_main(void *ptr);
#include "../linux-proc.plugin/plugin_proc.h"
#include "../proc.plugin/plugin_proc.h"
#else // (TARGET_OS == OS_LINUX)


@ -0,0 +1,94 @@
# SPDX-License-Identifier: GPL-3.0-or-later
MAINTAINERCLEANFILES = $(srcdir)/Makefile.in
CLEANFILES = \
charts.d.plugin \
$(NULL)
include $(top_srcdir)/build/subst.inc
SUFFIXES = .in
dist_libconfig_DATA = \
charts.d.conf \
$(NULL)
dist_plugins_SCRIPTS = \
charts.d.dryrun-helper.sh \
charts.d.plugin \
loopsleepms.sh.inc \
$(NULL)
dist_noinst_DATA = \
charts.d.plugin.in \
ap/README.md \
apache/README.md \
apcupsd/README.md \
cpu_apps/README.md \
cpufreq/README.md \
example/README.md \
exim/README.md \
hddtemp/README.md \
libreswan/README.md \
load_average/README.md \
mem_apps/README.md \
mysql/README.md \
nginx/README.md \
nut/README.md \
opensips/README.md \
phpfpm/README.md \
postfix/README.md \
sensors/README.md \
squid/README.md \
tomcat/README.md \
$(NULL)
dist_charts_SCRIPTS = \
$(NULL)
dist_charts_DATA = \
ap/ap.chart.sh \
apcupsd/apcupsd.chart.sh \
apache/apache.chart.sh \
cpu_apps/cpu_apps.chart.sh \
cpufreq/cpufreq.chart.sh \
example/example.chart.sh \
exim/exim.chart.sh \
hddtemp/hddtemp.chart.sh \
libreswan/libreswan.chart.sh \
load_average/load_average.chart.sh \
mem_apps/mem_apps.chart.sh \
mysql/mysql.chart.sh \
nginx/nginx.chart.sh \
nut/nut.chart.sh \
opensips/opensips.chart.sh \
phpfpm/phpfpm.chart.sh \
postfix/postfix.chart.sh \
sensors/sensors.chart.sh \
squid/squid.chart.sh \
tomcat/tomcat.chart.sh \
$(NULL)
chartsconfigdir=$(libconfigdir)/charts.d
dist_chartsconfig_DATA = \
ap/ap.conf \
apache/apache.conf \
apcupsd/apcupsd.conf \
cpu_apps/cpu_apps.conf \
cpufreq/cpufreq.conf \
example/example.conf \
exim/exim.conf \
hddtemp/hddtemp.conf \
libreswan/libreswan.conf \
load_average/load_average.conf \
mem_apps/mem_apps.conf \
mysql/mysql.conf \
nginx/nginx.conf \
nut/nut.conf \
opensips/opensips.conf \
phpfpm/phpfpm.conf \
postfix/postfix.conf \
sensors/sensors.conf \
squid/squid.conf \
tomcat/tomcat.conf \
$(NULL)


@ -0,0 +1,193 @@
# charts.d.plugin
`charts.d.plugin` is a netdata external plugin. It is an **orchestrator** for data collection modules written in `BASH` v4+.
1. It runs as an independent process (`ps fax` shows it)
2. It is started and stopped automatically by netdata
3. It communicates with netdata via a unidirectional pipe (sending data to the netdata daemon)
4. Supports any number of data collection **modules**
`charts.d.plugin` has been designed so that the actual script that does the data collection is permanently in
memory, collecting data with as little overhead as possible
(i.e. initialize once, repeatedly collect values with minimal overhead).
`charts.d.plugin` looks for scripts in `/usr/lib/netdata/charts.d`.
The scripts should have the filename suffix: `.chart.sh`.
## Configuration
`charts.d.plugin` itself can be configured using the configuration file `/etc/netdata/charts.d.conf`
(to edit it on your system run `/etc/netdata/edit-config charts.d.conf`). This file is also a BASH script.
In this file, you can place statements like this:
```
enable_all_charts="yes"
X="yes"
Y="no"
```
where `X` and `Y` are the names of individual charts.d collector scripts.
When set to `yes`, charts.d will evaluate the collector script (see below).
When set to `no`, charts.d will ignore the collector script.
The variable `enable_all_charts` sets the default enable/disable state for all charts.
## A charts.d module
A `charts.d.plugin` module is a BASH script defining a few functions.
For a module called `X`, the following criteria must be met:
1. The module script must be called `X.chart.sh` and placed in `/usr/libexec/netdata/charts.d`.
2. If the module needs a configuration, it should be called `X.conf` and placed in `/etc/netdata/charts.d`.
The configuration file `X.conf` is also a BASH script itself.
To edit the default files supplied by netdata run `/etc/netdata/edit-config charts.d/X.conf`,
where `X` is the name of the module.
3. All functions and global variables defined in the script and its configuration, must begin with `X_`.
4. The following functions must be defined:
- `X_check()` - returns 0 or 1 depending on whether the module is able to run or not
(following the standard Linux command line return codes: 0 = OK, the collector can operate and 1 = FAILED,
the collector cannot be used).
- `X_create()` - creates the netdata charts, following the standard netdata plugin guides as described in
**[External Plugins](../plugins.d/)** (commands `CHART` and `DIMENSION`).
The return value does matter: 0 = OK, 1 = FAILED.
- `X_update()` - collects the values for the defined charts, following the standard netdata plugin guides
as described in **[External Plugins](../plugins.d/)** (commands `BEGIN`, `SET`, `END`).
The return value also matters: 0 = OK, 1 = FAILED.
5. The following global variables are available to be set:
- `X_update_every` - is the data collection frequency for the module script, in seconds.
The module script may use more functions or variables, but all of them must begin with `X_`.
The standard netdata plugin variables are also available (check **[External Plugins](../plugins.d/)**).
### X_check()
The purpose of the BASH function `X_check()` is to check if the module can collect data (or check its config).
For example, if the module is about monitoring a local mysql database, the `X_check()` function may attempt to
connect to a local mysql database to find out if it can read the values it needs.
`X_check()` is run only once for the lifetime of the module.
### X_create()
The purpose of the BASH function `X_create()` is to create the charts and dimensions using the standard netdata
plugin guides (**[External Plugins](../plugins.d/)**).
`X_create()` will be called just once and only after `X_check()` was successful.
You can, however, call it yourself when there is a need for it (for example, to add a new dimension to an existing chart).
A non-zero return value will disable the collector.
### X_update()
`X_update()` will be called repeatedly every `X_update_every` seconds, to collect new values and send them to netdata,
following the netdata plugin guides (**[External Plugins](../plugins.d/)**).
The function will be called with one parameter: microseconds since the last time it was run. This value should be
appended to the `BEGIN` statement of every chart updated by the collector script.
A non-zero return value will disable the collector.
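Putting the above together, a minimal hypothetical module called `example` might look like the following sketch (the chart definition values are illustrative only):
```sh
# example.chart.sh - a minimal charts.d module sketch

# data collection frequency (empty = use the charts.d default)
example_update_every=

example_check() {
	# this module can always run
	return 0
}

example_create() {
	# define one chart with one dimension,
	# using the CHART and DIMENSION commands of the external plugins API
	cat << EOF
CHART example.random '' "A random number" "value" random random line 90000 $example_update_every
DIMENSION value '' absolute 1 1
EOF
	return 0
}

example_update() {
	# $1 is the number of microseconds since the last call
	echo "BEGIN example.random $1"
	echo "SET value = $RANDOM"
	echo "END"
	return 0
}
```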
### Useful functions charts.d provides
Module scripts can use the following charts.d functions:
#### require_cmd command
`require_cmd()` will check if a command is available in the running system.
For example, your `X_check()` function may use it like this:
```sh
mysql_check() {
	require_cmd mysql || return 1
	return 0
}
```
Using the above, if the command `mysql` is not available in the system, the `mysql` module will be disabled.
#### fixid "string"
`fixid()` will get a string and return a properly formatted id for a chart or dimension.
This is an expensive function that should not be used in `X_update()`.
You can keep the generated id in a BASH associative array to have the values available in `X_update()`, like this:
```sh
declare -A X_ids=()

X_create() {
	local name="a very bad name for id"

	X_ids[$name]="$(fixid "$name")"
}

X_update() {
	local microseconds="$1"

	...
	local name="a very bad name for id"
	...

	echo "BEGIN ${X_ids[$name]} $microseconds"
	...
}
```
### Debugging your collectors
You can run `charts.d.plugin` by hand with something like this:
```sh
# become user netdata
sudo su -s /bin/sh netdata
# run the plugin in debug mode
/usr/libexec/netdata/plugins.d/charts.d.plugin debug 1 X Y Z
```
Charts.d will run in `debug` mode, with an update frequency of `1`, evaluating only the collector scripts
`X`, `Y` and `Z`. You can define zero or more module scripts. If none is defined, charts.d will evaluate all
module scripts available.
Keep in mind that if your configs are not in `/etc/netdata`, you should do the following before running
`charts.d.plugin`:
```sh
export NETDATA_USER_CONFIG_DIR="/path/to/etc/netdata"
```
Also, remember that netdata runs `charts.d.plugin` as user `netdata` (or any other user netdata is configured to run as).
## Running multiple instances of charts.d.plugin
`charts.d.plugin` will call the `X_update()` function one after another. This means that a delay in collector `X`
will also delay the collection of `Y` and `Z`.
You can have multiple `charts.d.plugin` running to overcome this problem.
This is what you need to do:
1. Decide a new name for the new charts.d instance, for example `charts2.d`.
2. Create/edit the files `/etc/netdata/charts.d.conf` and `/etc/netdata/charts2.d.conf` and enable/disable the
modules you want each instance to run. Remember to set `enable_all_charts="no"` in both of them, and enable the
individual modules for each.
3. Link `/usr/libexec/netdata/plugins.d/charts.d.plugin` to `/usr/libexec/netdata/plugins.d/charts2.d.plugin` (see the command below).
Netdata will spawn a new charts.d process.
Execute the above in this order, since netdata will (by default) attempt to start new plugins soon after they are
created in `/usr/libexec/netdata/plugins.d/`.
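For step 3, a symbolic link is enough (the paths assume the default netdata installation):
```sh
cd /usr/libexec/netdata/plugins.d
ln -s charts.d.plugin charts2.d.plugin
```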


@ -0,0 +1,86 @@
# Access Point Plugin (ap)
The `ap` collector visualizes data related to access points.
The source code is [here](https://github.com/netdata/netdata/blob/master/charts.d/ap.chart.sh).
## Example netdata charts
![image](https://cloud.githubusercontent.com/assets/2662304/12377654/9f566e88-bd2d-11e5-855a-e0ba96b8fd98.png)
## How it works
It does the following:
1. Runs `iw dev` searching for interfaces that have `type AP`.
From the same output it collects the SSIDs each AP supports by looking for lines `ssid NAME`.
Example:
```sh
# iw dev
phy#0
	Interface wlan0
		ifindex 3
		wdev 0x1
		addr 7c:dd:90:77:34:2a
		ssid TSAOUSIS
		type AP
		channel 7 (2442 MHz), width: 20 MHz, center1: 2442 MHz
```
2. For each interface found, it runs `iw INTERFACE station dump`.
From the output it collects:
- rx/tx bytes
- rx/tx packets
- tx retries
- tx failed
- signal strength
- rx/tx bitrate
- expected throughput
Example:
```sh
# iw wlan0 station dump
Station 40:b8:37:5a:ed:5e (on wlan0)
	inactive time:  910 ms
	rx bytes:       15588897
	rx packets:     127772
	tx bytes:       52257763
	tx packets:     95802
	tx retries:     2162
	tx failed:      28
	signal:         -43 dBm
	signal avg:     -43 dBm
	tx bitrate:     65.0 MBit/s MCS 7
	rx bitrate:     1.0 MBit/s
	expected throughput:    32.125Mbps
	authorized:     yes
	authenticated:  yes
	preamble:       long
	WMM/WME:        yes
	MFP:            no
	TDLS peer:      no
```
3. For each interface found, it creates 6 charts:
- Number of Connected clients
- Bandwidth for all clients
- Packets for all clients
- Transmit Issues for all clients
- Average Signal among all clients
- Average Bitrate (including average expected throughput) among all clients
## Configuration
You can only set `ap_update_every=NUMBER` in `/etc/netdata/charts.d/ap.conf`, to set the data collection frequency.
To edit this file on your system, run `/etc/netdata/edit-config charts.d/ap.conf`.
## Auto-detection
The plugin is able to auto-detect if you are running access points on your linux box.


@ -0,0 +1,2 @@
> THIS MODULE IS OBSOLETE.
> USE THE PYTHON ONE - IT SUPPORTS MULTIPLE JOBS AND IT IS MORE EFFICIENT


@ -0,0 +1,2 @@
> THIS MODULE IS OBSOLETE.
> USE APPS.PLUGIN.


@ -0,0 +1,2 @@
> THIS MODULE IS OBSOLETE.
> USE THE PYTHON ONE - IT SUPPORTS MULTIPLE JOBS AND IT IS MORE EFFICIENT


@ -0,0 +1,2 @@
This is just an example charts.d data collector.


@ -0,0 +1,2 @@
> THIS MODULE IS OBSOLETE.
> USE THE PYTHON ONE - IT SUPPORTS MULTIPLE JOBS AND IT IS MORE EFFICIENT


@ -0,0 +1,28 @@
> THIS MODULE IS OBSOLETE.
> USE THE PYTHON ONE - IT SUPPORTS MULTIPLE JOBS AND IT IS MORE EFFICIENT
# hddtemp
The plugin will collect temperatures from disks. It will create one chart with all active disks:
1. **temperature in Celsius**
### configuration
hddtemp needs to be running in daemonized mode
```sh
# host with daemonized hddtemp
hddtemp_host="localhost"
# port on which hddtemp is showing data
hddtemp_port="7634"
# array of included disks
# the default is to include all
hddtemp_disks=()
```
---


@ -0,0 +1,42 @@
# libreswan
The plugin collects bytes-in, bytes-out and uptime for all established libreswan IPSEC tunnels.
The following charts are created, **per tunnel**:
1. **Uptime**
* the uptime of the tunnel
2. **Traffic**
* bytes in
* bytes out
### configuration
Its config file is `/etc/netdata/charts.d/libreswan.conf`.
The plugin executes 2 commands to collect all the information it needs:
```sh
ipsec whack --status
ipsec whack --trafficstatus
```
The first command is used to extract the currently established tunnels, their IDs and their names.
The second command is used to extract the current uptime and traffic.
Most probably user `netdata` will not be able to query libreswan, so the `ipsec` commands will be denied.
The plugin attempts to run `ipsec` as `sudo ipsec ...`, to get access to libreswan statistics.
To allow user `netdata` to execute `sudo ipsec ...`, create the file `/etc/sudoers.d/netdata` with this content:
```
netdata ALL = (root) NOPASSWD: /sbin/ipsec whack --status
netdata ALL = (root) NOPASSWD: /sbin/ipsec whack --trafficstatus
```
Make sure the path `/sbin/ipsec` matches your setup (execute `which ipsec` to find the right path).
---


@ -0,0 +1,2 @@
> THIS MODULE IS OBSOLETE.
> THE NETDATA DAEMON COLLECTS LOAD AVERAGE BY ITSELF


@ -0,0 +1,2 @@
> THIS MODULE IS OBSOLETE.
> USE APPS.PLUGIN.


@ -0,0 +1,81 @@
> THIS MODULE IS OBSOLETE.
> USE THE PYTHON ONE - IT SUPPORTS MULTIPLE JOBS AND IT IS MORE EFFICIENT
# mysql
The plugin will monitor one or more mysql servers
It will produce the following charts:
1. **Bandwidth** in kbps
* in
* out
2. **Queries** in queries/sec
* queries
* questions
* slow queries
3. **Operations** in operations/sec
* opened tables
* flush
* commit
* delete
* prepare
* read first
* read key
* read next
* read prev
* read random
* read random next
* rollback
* save point
* update
* write
4. **Table Locks** in locks/sec
* immediate
* waited
5. **Select Issues** in issues/sec
* full join
* full range join
* range
* range check
* scan
6. **Sort Issues** in issues/sec
* merge passes
* range
* scan
### configuration
You can configure multiple database servers. Per server, you can provide the following:
1. a name, anything you like, but keep it short
2. the mysql command to connect to the server
3. the mysql command line options to be used for connecting to the server
Here is an example for 2 servers:
```sh
mysql_opts[server1]="-h server1.example.com"
mysql_opts[server2]="-h server2.example.com --connect_timeout 2"
```
The above will use the `mysql` command found in the system path.
You can also provide a custom mysql command per server, like this:
```sh
mysql_cmds[server2]="/opt/mysql/bin/mysql"
```
The above sets the mysql command only for server2. server1 will use the system default.
If no configuration is given, the plugin will attempt to connect to a mysql server at localhost.
---


@ -0,0 +1,2 @@
> THIS MODULE IS OBSOLETE.
> USE THE PYTHON ONE - IT SUPPORTS MULTIPLE JOBS AND IT IS MORE EFFICIENT


@ -0,0 +1,59 @@
# nut
The plugin will collect UPS data for all UPSes configured in the system.
The following charts will be created:
1. **UPS Charge**
* percentage charged
2. **UPS Battery Voltage**
* current voltage
* high voltage
* low voltage
* nominal voltage
3. **UPS Input Voltage**
* current voltage
* fault voltage
* nominal voltage
4. **UPS Input Current**
* nominal current
5. **UPS Input Frequency**
* current frequency
* nominal frequency
6. **UPS Output Voltage**
* current voltage
7. **UPS Load**
* current load
8. **UPS Temperature**
* current temperature
### configuration
This is the internal default for `/etc/netdata/nut.conf`
```sh
# a space separated list of UPS names
# if empty, the list returned by 'upsc -l' will be used
nut_ups=
# how frequently to collect UPS data
nut_update_every=2
```
---


@ -0,0 +1,2 @@
> THIS MODULE IS OBSOLETE.
> USE THE PYTHON ONE - IT SUPPORTS MULTIPLE JOBS AND IT IS MORE EFFICIENT


@ -0,0 +1,26 @@
> THIS MODULE IS OBSOLETE.
> USE THE PYTHON ONE - IT SUPPORTS MULTIPLE JOBS AND IT IS MORE EFFICIENT
# postfix
The plugin will collect the postfix queue size.
It will create two charts:
1. **queue size in emails**
2. **queue size in KB**
### configuration
This is the internal default for `/etc/netdata/postfix.conf`
```sh
# the postqueue command
# if empty, it will use the one found in the system path
postfix_postqueue=
# how frequently to collect queue size
postfix_update_every=15
```
---


@ -0,0 +1,52 @@
> THIS MODULE IS OBSOLETE.
> USE THE PYTHON ONE - IT SUPPORTS MULTIPLE JOBS AND IT IS MORE EFFICIENT
> Unlike the python one, this module can collect temperature on RPi.
# sensors
The plugin will provide charts for all configured system sensors
> This plugin reads sensors directly from the kernel.
> The `lm-sensors` package is able to perform calculations on the
> kernel-provided values; this plugin does not.
> So, the values graphed are the raw hardware values of the sensors.
The plugin will create netdata charts for:
1. **Temperature**
2. **Voltage**
3. **Current**
4. **Power**
5. **Fans Speed**
6. **Energy**
7. **Humidity**
One chart will be created for each of the above, for every sensor chip found.
### configuration
This is the internal default for `/etc/netdata/sensors.conf`
```sh
# the directory the kernel keeps sensor data
sensors_sys_dir="${NETDATA_HOST_PREFIX}/sys/devices"
# how deep in the tree to check for sensor data
sensors_sys_depth=10
# if set to 1, the script will overwrite internal
# script functions with code generated ones
# leave to 1, is faster
sensors_source_update=1
# how frequently to collect sensor data
# the default is to collect it at every iteration of charts.d
sensors_update_every=
# array of sensors which are excluded
# the default is to include all
sensors_excluded=()
```
---


@ -0,0 +1,66 @@
> THIS MODULE IS OBSOLETE.
> USE THE PYTHON ONE - IT SUPPORTS MULTIPLE JOBS AND IT IS MORE EFFICIENT
# squid
The plugin will monitor a squid server.
It will produce 4 charts:
1. **Squid Client Bandwidth** in kbps
* in
* out
* hits
2. **Squid Client Requests** in requests/sec
* requests
* hits
* errors
3. **Squid Server Bandwidth** in kbps
* in
* out
4. **Squid Server Requests** in requests/sec
* requests
* errors
### autoconfig
The plugin will automatically detect squid servers running on
localhost, on ports 3128 or 8080.
It will attempt to download URLs in the form:
- `cache_object://HOST:PORT/counters`
- `/squid-internal-mgr/counters`
If any of them succeeds, it will use it.
### configuration
If you need to configure it by hand, create the file
`/etc/netdata/squid.conf` with the following variables:
- `squid_host=IP` the IP of the squid host
- `squid_port=PORT` the port squid is listening on
- `squid_url="URL"` the URL with the statistics to be fetched from squid
- `squid_timeout=SECONDS` how much time we should wait for squid to respond
- `squid_update_every=SECONDS` the frequency of the data collection
Example `/etc/netdata/squid.conf`:
```sh
squid_host=127.0.0.1
squid_port=3128
squid_url="cache_object://127.0.0.1:3128/counters"
squid_timeout=2
squid_update_every=5
```
---

Some files were not shown because too many files have changed in this diff.